Introduction
Cisco Cable Modem (CM) cards allow you to connect CMs on the Hybrid Fiber Coaxial (HFC) network to a Cisco uBR7200 series in a Cable Television (CATV) headend facility. The CM cards provide the interface between the Cisco uBR7200 series Peripheral Component Interconnect (PCI) bus and the Radio Frequency (RF) signal on the HFC network.
Before You Begin

Conventions
For more information on document conventions, see the Cisco Technical Tips Conventions.
Prerequisites
Readers of this document should have knowledge of the following:
The following fields are expected in the DHCP response returned to the CM. The CM MUST configure itself based on the DHCP response.
Configuration File Settings
The following configuration settings MUST be included in the configuration file and MUST be supported by all CMs.
In order for CPE devices connected to the CM to be granted network connectivity, the Network Access value must be set to 1. Also, the CM needs a profile for Class of Service depending on the service level agreement with the customer.
Cisco supplies sample DOCSIS 1.0 configuration files in the 'Downloadable DOCSIS configuration Files' section of the document Building DOCSIS 1.0 Configuration Files Using Cisco DOCSIS Configurator.
Lastly, the configuration file MUST have an 'End of File' marker. This is done by a data marker whose value MUST be 0xFF (255).
The following configuration settings MAY be included in the configuration file and if present MUST be supported by all CMs.
The Telephone Settings Option configuration MAY be included in the configuration file and if present, and applicable to this type of modem, MUST be supported.
The Vendor-Specific Configuration Settings MAY be included in the configuration file, and if present, MAY be supported by a CM.
Depending on the RF design and the services provided by the Multiple Service Operator (MSO), additional fields are used in the CM configuration file.
If you have further questions or want full details on this specification, refer to CableLabs.
Related Information
Before you attempt to measure the performance of a cable network, there are some limiting factors that you should take into consideration. To design and deploy a highly available and reliable network, you must establish an understanding of basic principles and measurement parameters of cable network performance.
This document presents some of those limiting factors and then discusses how to optimize and qualify throughput and availability on your deployed system.

Readers of this document should have knowledge of these topics:
- Data-over-Cable Service Interface Specification (DOCSIS)
- Radio Frequency (RF) technologies
- Cisco IOS® software command-line interface (CLI)

This document is not restricted to specific software or hardware versions. The information in this document was created from the devices in a specific lab environment. All of the devices used in this document started with a cleared (default) configuration. If your network is live, make sure that you understand the potential impact of any command. For more information on document conventions, refer to the Cisco Technical Tips Conventions.

This section explains the differences between bits, bytes, and baud. The word bit is a contraction of BInary digiT, and it is usually symbolized by a lowercase b. A binary digit indicates two electronic states: an "on" state or an "off" state, sometimes referred to as "1s" or "0s." A byte is symbolized by an uppercase B, and it is usually 8 bits in length. A byte could be more than 8 bits, so an 8-bit word is more precisely called an octet. Also, there are two nibbles in a byte.
A nibble is defined as a 4-bit word, which is half of a byte.

Bit rate, or throughput, is measured in bits per second (bps), and it is associated with the speed of a signal through a given medium. For example, this signal could be a baseband digital signal or, perhaps, a modulated analog signal that is conditioned to represent a digital signal. One type of modulated analog signal is Quadrature Phase Shift Keying (QPSK). This is a modulation technique that manipulates the phase of the signal by 90 degrees to create four different signatures, as shown in Figure 1.
These signatures are called symbols, and their rate is referred to as baud. Baud equates to symbols per second.

Figure 1 – QPSK Diagram

QPSK signals have four different symbols; four is equal to 2². The exponent gives the theoretical number of bits per symbol that can be represented, which equals 2 in this case. The four symbols represent the binary numbers 00, 01, 10, and 11. Therefore, if a symbol rate of 2.56 Msymbols/s is used to transport a QPSK carrier, then it would be referred to as 2.56 Mbaud, and the theoretical bit rate would be 2.56 Msymbols/s × 2 bits/symbol = 5.12 Mbps. This is further explained later in this document. You might also be familiar with the term packets per second (PPS).
This is a way to qualify the throughput of a device based on packets, regardless of whether each packet contains a 64-byte or a 1518-byte Ethernet frame. Sometimes the "bottleneck" of the network is the power of the CPU to process a certain number of PPS, not necessarily the total bps.

Data throughput begins with a calculation of a theoretical maximum throughput, then concludes with effective throughput. Effective throughput available to subscribers of a service will always be less than the theoretical maximum, and it is what you should try to calculate. Throughput is based on many factors:
- total number of users
- bottleneck speed
- type of services accessed
- cache and proxy server usage
- MAC layer efficiency
- noise and errors on the cable plant
- many other factors

The goal of this document is to explain how to optimize throughput and availability in a DOCSIS environment and to explain the inherent protocol limitations that affect performance.
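The baud-versus-bit-rate arithmetic above can be sketched in a short Python snippet. The function names are illustrative, not from any DOCSIS tooling:

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    # A constellation of 2**n points carries n bits per symbol
    # (QPSK has 4 points, so 2 bits per symbol).
    return int(math.log2(constellation_points))

def theoretical_bit_rate(symbol_rate_baud: float, constellation_points: int) -> float:
    # Theoretical bit rate = symbol rate (baud) x bits per symbol.
    return symbol_rate_baud * bits_per_symbol(constellation_points)

# A 2.56-Mbaud QPSK carrier, as in the text:
print(theoretical_bit_rate(2.56e6, 4) / 1e6)  # 5.12 (Mbps)
```

The same helper gives 64-QAM (2⁶ points, 6 bits/symbol) and 256-QAM (8 bits/symbol) rates used later in this document.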
If you want to test or troubleshoot performance issues, or need guidelines on the maximum number of recommended users on an upstream (US) or downstream (DS) port, refer to the related Cisco documentation.

Legacy cable networks rely on polling, or on carrier sense multiple access collision detect (CSMA/CD), as the MAC protocol.
Today’s DOCSIS modems rely on a reservation scheme where the modems request a time to transmit and the CMTS grants time slots based on availability. Cable modems are assigned a Service ID (SID) that is mapped to class of service (CoS) or quality of service (QoS) parameters.

In a bursty, time division multiple access (TDMA) network, you must limit the number of total cable modems (CMs) that can simultaneously transmit, if you want to guarantee a certain amount of access speed to all requesting users. The total number of simultaneous users is based on a Poisson distribution, which is a statistical probability algorithm. Traffic engineering, as a statistic used in telephony-based networks, signifies about 10 percent peak usage. This calculation is beyond the scope of this document. Data traffic, on the other hand, is different from voice traffic, and it will change as users become more computer savvy or as Voice over IP (VoIP) and Video on Demand (VoD) services become more available. For simplicity, assume 50 percent peak users × 20 percent of those users actually downloading at the same time. This also equals 10 percent peak usage.

All simultaneous users contend for the US and DS access.
Many modems can be active for the initial polling, but only one modem can be active in the US at any given instant in time. This is good in terms of noise contribution, because only one modem at a time adds its noise complement to the overall effect.

An inherent limitation of the current standard is that some throughput is necessary for maintenance and provisioning, when many modems are tied to a single cable modem termination system (CMTS). This is taken away from the actual payload for active customers. This is known as keepalive polling, which usually occurs once every 20 seconds for DOCSIS but could occur more often. Also, per-modem US speeds can be limited by the Request-and-Grant mechanisms, as explained later in this document.

Note: Remember that references to file size are in bytes made up of 8 bits. Thus, 128 kbps equals 16 KBps.
Likewise, 1 MB is actually equal to 1,048,576 bytes, not 1 million bytes, because binary numbers always yield a power of 2. A 5 MB file is actually 5 × 8 × 1,048,576 = 41.94 Mb and could take longer to download than anticipated.

Assume that a CMTS card that has one DS and six US ports is in use. The one DS port is split to feed about 12 nodes. Half of this network is shown in Figure 2.

Figure 2 – Network Layout

- 500 homes per node × 80 percent cable take-rate × 20 percent modem take-rate = 80 modems per node
- 12 nodes × 80 modems per node = 960 modems per DS port

Note: Many multiple service operators (MSOs) now quantify their systems as Households Passed (HHP) per node.
This is the only constant in today’s architectures, where you might have direct broadcast satellite (DBS) subscribers buying high speed data (HSD) service or only telephony without video service.

Note: The US signal from each one of those nodes will probably be combined on a 2:1 ratio, so that two nodes feed one US port.

- 6 US ports × 2 nodes per US = 12 nodes
- 80 modems per node × 2 nodes per US = 160 modems per US port
- DS symbol rate = 5.057 Msymbols/s, or Mbaud. A filter roll-off (alpha) of about 18 percent gives 5.057 × (1 + 0.18) = a 6 MHz wide "haystack," as shown in Figure 3.

Figure 3 – Digital "Haystack"

If 64-QAM is used, then 64 = 2⁶. The exponent of 6 means 6 bits per symbol for 64-QAM; this gives 5.057 × 6 = 30.3 Mbps. After the entire forward error correction (FEC) and Motion Picture Experts Group (MPEG) overhead is calculated, this leaves about 28 Mbps for payload. This payload is further reduced, because it is also shared with DOCSIS signaling.

Note: ITU-T J.83 Annex B indicates Reed-Solomon FEC with a 128/122 code, which means 6 symbols of overhead for every 128 symbols, hence 6 / 128 = 4.7 percent. Trellis coding is 1 byte for every 15 bytes for 64-QAM, and 1 byte per 20 bytes for 256-QAM.
This is 6.7 percent and 5 percent, respectively. MPEG-2 is made up of 188-byte packets with 4 bytes of overhead (sometimes 5 bytes), which gives 4.5 / 188 = 2.4 percent. This is why you will see the speed listed as 27 Mbps for 64-QAM and as 38 Mbps for 256-QAM. Remember that Ethernet packets also have 18 bytes of overhead, whether for a 1500-byte packet or a 46-byte packet. There are 6 bytes of DOCSIS overhead and IP overhead also, which could total about 1.1 to 2.8 percent extra overhead, plus another possible 2 percent for DOCSIS MAP traffic. Actual tested speeds for 64-QAM have been closer to 26 Mbps.

In the very unlikely event that all 960 modems download data at precisely the same time, they will each get only about 28 kbps.
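A rough Python check of the DS arithmetic above. The overhead percentages are the approximations quoted in the text, not exact chip-level figures, and compounding them this way is itself an approximation:

```python
symbol_rate = 5.057e6        # DS symbol rate in symbols/s
raw = symbol_rate * 6        # 64-QAM carries 6 bits/symbol -> ~30.3 Mbps

# Approximate overheads quoted above for 64-QAM:
rs_fec  = 6 / 128            # Reed-Solomon 128/122 code, ~4.7 percent
trellis = 1 / 15             # trellis coding for 64-QAM,  ~6.7 percent
mpeg    = 4.5 / 188          # MPEG-2 framing,             ~2.4 percent
payload = raw * (1 - rs_fec) * (1 - trellis) * (1 - mpeg)

print(round(raw / 1e6, 1))      # 30.3 -- raw Mbps
print(round(payload / 1e6, 1))  # ~26.3 -- close to the tested 26-27 Mbps
print(round(payload / 960))     # roughly 27 kbps per modem if all 960 download at once
```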
If you look at a more realistic scenario and assume a 10 percent peak usage, you get a theoretical throughput of 280 kbps as a worst-case scenario during the busiest time. If only one customer is online, the customer would theoretically get 26 Mbps; but the US acknowledgments that must be transmitted for TCP limit the DS throughput, and other bottlenecks become apparent (such as the PC or the Network Interface Card, NIC). In reality, the cable company will rate-limit this down to 1 or 2 Mbps, so as not to create a perception of available throughput that will never be achievable when more subscribers sign up.

The DOCSIS US modulation of QPSK at 2 bits/symbol gives about 2.56 Mbps. This is calculated from the symbol rate of 1.28 Msymbols/s × 2 bits/symbol. The filter alpha is 25 percent, which gives a bandwidth (BW) of 1.28 × (1 + 0.25) = 1.6 MHz. Subtract about 8 percent for FEC, if it is used.
There is also approximately 5 to 10 percent of overhead for maintenance, reserved time slots for contention, and acknowledgments ("acks"). Thus, there is about 2.2 Mbps, which is shared amongst 160 potential customers per US port.

Note:
- DOCSIS layer overhead = 6 bytes per 64-byte to 1518-byte Ethernet frame (could be 1522 bytes, if VLAN tagging is used). This also depends on the maximum burst size and whether concatenation or fragmentation is used.
- US FEC is variable: 128 / 1518 or 12 / 64 = 8 or 18 percent.
- Approximately 10 percent is used for maintenance, reserved time slots for contention, and acks.
- BPI security or Extended Headers = 0 to 240 bytes (usually 3 to 7).
- Preamble = 9 to 20 bytes.
- Guardtime = 5 symbols ≈ 2 bytes.

Assuming 10 percent peak usage, this gives 2.2 Mbps / (160 × 0.1) = 137.5 kbps as the worst-case payload per subscriber. For typical residential data usage (for example, web browsing), you probably do not need as much US throughput as DS. This speed might be sufficient for residential usage, but it is not sufficient for commercial service deployments.

There is a plethora of limiting factors that affect "real" data throughput. These range from the Request-and-Grant cycle to DS interleaving.
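As a sanity check, this sketch reproduces the US figures from the arithmetic above; the 8 percent FEC and ~6 percent maintenance overheads are the rough mid-range values from the text:

```python
raw = 1.28e6 * 2                         # QPSK at 1.28 Msymbols/s -> 2.56 Mbps
usable = raw * (1 - 0.08) * (1 - 0.06)   # subtract ~8% FEC and ~6% maintenance
modems_per_us_port = 160
peak_fraction = 0.10                     # assume 10 percent simultaneous peak usage

per_subscriber = usable / (modems_per_us_port * peak_fraction)
print(round(usable / 1e6, 2))        # ~2.21 Mbps shared per US port
print(round(per_subscriber / 1e3))   # ~138 kbps worst-case per subscriber
```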
Understanding these limitations aids in setting expectations and in optimization. The transmission of MAP messages sent to modems reduces DS throughput. A MAP of time is sent on the DS, to allow modems to request time for US transmission.
If a MAP is sent every 2 ms, it adds up to 1 / 0.002s = 500 MAPs/s. If the MAP takes up 64 bytes, that equals 64 bytes × 8 bits per byte × 500 MAPs/s = 256 kbps. If you have six US ports and one DS port on a single blade in the CMTS chassis, this is 6 × 256000 bps = 1.5 Mbps of DS throughput used to support all of the modems’ MAP messages.
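The MAP overhead arithmetic can be sketched as follows; the 64-byte size and 2 ms interval are the assumptions stated in the text:

```python
map_interval = 0.002                 # one MAP every 2 ms
map_bytes = 64                       # assumed MAP message size
maps_per_second = 1 / map_interval   # 500 MAPs/s per US channel
bps_per_us = map_bytes * 8 * maps_per_second   # 256 kbps of DS per US port

us_ports = 6                         # one DS blade shared by six US ports
print(bps_per_us)                    # 256000.0
print(bps_per_us * us_ports)         # 1536000.0 -> ~1.5 Mbps of DS spent on MAPs
```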
This assumes that the MAP is 64 bytes and that it is actually sent every 2 ms. In reality, MAP sizes could be slightly larger, depending on the modulation scheme and the amount of US bandwidth that is used. This could easily be 3 to 10 percent overhead. Further, there are other system maintenance messages that are transmitted in the DS channel. These also increase overhead; however, the effect is typically negligible. MAP messages can place a burden on the Central Processing Unit (CPU), as well as on DS throughput performance, because the CPU needs to keep track of all of the MAPs.

When you place any TDMA and synchronous code division multiple access (S-CDMA) channel on the same US, the CMTS must send "double maps" for each physical port.
Thus, DS MAP bandwidth consumption is doubled. This is part of the DOCSIS 2.0 specification, and it is required for interoperability. Furthermore, US channel descriptors and other US control messages are also doubled.

In the US path, the Request-and-Grant cycle between the CMTS and the CM can only take advantage of every other MAP at most, depending upon the Round Trip Time (RTT), the length of the MAP, and the MAP advance time. This is due to the RTT that is affected by DS interleaving and the fact that DOCSIS only allows a modem to have a single Request outstanding at any given time, as well as a "Request-to-Grant latency" that is associated with it. This latency is attributed to the communication between the CMs and the CMTS, which is protocol-dependent.
In brief, CMs must first ask permission from the CMTS to send data. The CMTS must service these Requests, check the availability of the MAP scheduler, and queue it up for the next unicast transmit opportunity. This back-and-forth communication, which is mandated by the DOCSIS protocol, produces such latency. The modem might miss every other MAP, because it is waiting for a Grant to come back in the DS from its last Request.A MAP interval of 2 ms results in 500 MAPs per second / 2 = 250 MAP opportunities per second, thus 250 PPS.
The 500 MAPs is divided by 2 because, in a “real” plant, the RTT between the Request and the Grant will be much longer than 2 ms. It could be more than 4 ms, which will be every other MAP opportunity. If typical packets made up of 1518-byte Ethernet frames are sent at 250 PPS, that would equal about 3 Mbps because there are 8 bits in a byte. So this is a practical limit for US throughput for a single modem.
If there is a limit of about 250 PPS, what if the packets are small (64 bytes)? That is only 128 kbps. This is where concatenation helps; see the section on concatenation later in this document.

Depending on the symbol rate and modulation scheme used for the US channel, it could take over 5 ms to send a 1518-byte packet. If it takes over 5 ms to send a packet US to the CMTS, the CM just missed about three MAP opportunities on the DS. Now the PPS is only 165 or so.
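Because the modem is PPS-bound rather than bps-bound, the achievable US rate scales directly with frame size. A small sketch of the cases above:

```python
def pps_limited_rate(pps: float, frame_bytes: int) -> float:
    # US throughput when the Request-and-Grant cycle caps the modem at `pps`
    return pps * frame_bytes * 8

print(pps_limited_rate(250, 1518) / 1e6)  # ~3.04 Mbps with full-size frames
print(pps_limited_rate(250, 64) / 1e3)    # 128.0 kbps with 64-byte frames
print(pps_limited_rate(165, 1518) / 1e6)  # ~2.0 Mbps when three MAPs are missed
```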
If you decrease the MAP interval, there will be more MAP messages at the expense of more DS overhead. More MAP messages give more opportunities for US transmission; but in a real hybrid fiber-coaxial (HFC) plant, you just miss more of those opportunities anyway.

Fortunately, DOCSIS 1.1 adds Unsolicited Grant Service (UGS), which allows voice traffic to avoid this Request-and-Grant cycle. Instead, the voice packets are scheduled every 10 or 20 ms until the call ends.

Note: When a CM transmits a large block of data US (for example, a 20 MB file), it will piggyback bandwidth Requests in data packets rather than use discrete Requests, but the modem still has to complete the Request-and-Grant cycle. Piggybacking allows Requests to be sent with data in dedicated time slots, instead of in contention slots, to eliminate collisions and corrupted Requests.

A point that is often overlooked when someone tests for throughput performance is the actual protocol that is in use. Is it a connection-oriented protocol, like TCP, or connection-less, like User Datagram Protocol (UDP)?
UDP sends information with no regard to received quality. This is often referred to as "best-effort" delivery: if some bits are received in error, you make do and move on to the next bits. UDP is a typical protocol for real-time audio or streaming video; TFTP is another example of a best-effort protocol. TCP, on the other hand, requires an acknowledgment to prove that the sent packet was correctly received. FTP is an example of this. If the network is well maintained, the protocol might be dynamic enough to send more packets consecutively before an acknowledgment is requested.
This is referred to as "increasing the window size," which is a standard part of the Transmission Control Protocol.

Note: One thing to note about TFTP is that, even though it uses less overhead because it uses UDP, it usually uses a step-ack approach, which is terrible for throughput. This means that there will never be more than one outstanding data packet. Thus, it would never be a good test for true throughput.

The point here is that DS traffic will generate US traffic in the form of more acknowledgments. Also, if a brief interruption of the US results in the drop of a TCP acknowledgment, then the TCP flow will slow down.
This would not happen with UDP. If the US path is severed, the CM will eventually fail the keepalive polling, after about 30 seconds, and it will start to scan DS again.
Both TCP and UDP will survive brief interruptions: TCP packets will be queued (or lost and retransmitted), and DS UDP traffic will be maintained.

The US throughput can limit the DS throughput as well. For example, if the DS traffic travels through coaxial cable or over satellite, and the US traffic travels through telephone lines, then the 28.8 kbps US throughput can limit the DS throughput to less than 1.5 Mbps, even though it might have been advertised as 10 Mbps maximum.
This is because the low-speed link adds latency to the US acknowledgment flow, which then causes TCP to slow down the DS flow. To help alleviate this bottleneck problem, Telco Return takes advantage of Point-to-Point Protocol (PPP) and makes the acknowledgments much smaller.

MAP generation on the DS affects the Request-and-Grant cycle on the US. When TCP traffic is handled, the acknowledgments must also go through the Request-and-Grant cycle. The DS can be severely hampered if the acknowledgments are not concatenated on the US. For example, "gamers" might be receiving DS traffic in 512-byte packets. If the US is limited to 234 PPS and the DS sends 2 packets per acknowledgment, that would equal 512 × 8 × 2 × 234 = 1.9 Mbps.

Typical Windows rates are 2.1 to 3 Mbps download. UNIX or Linux devices often perform better, because they have an improved TCP/IP stack and do not need to send an ack for every other DS packet that is received.
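The gamer example works out as follows; 234 PPS and two DS packets per ack are the figures assumed in the text:

```python
ds_packet_bytes = 512      # DS packet size in the example
ack_pps = 234              # US limited to 234 acks per second
packets_per_ack = 2        # TCP typically acknowledges every other segment

ds_ceiling = ds_packet_bytes * 8 * packets_per_ack * ack_pps
print(ds_ceiling)          # 1916928 -> ~1.9 Mbps DS ceiling
```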
You can verify whether the performance limitation is inside the Windows TCP/IP driver, which often behaves poorly with limited ack performance. You can use a protocol analyzer from the Internet. This is a program that is designed to display your Internet connection parameters, which are extracted directly from TCP packets that you send to the server. A protocol analyzer works as a specialized web server. It does not, however, serve different web pages; rather, it responds to all requests with the same page. The values are modified based on the TCP settings of your requesting client. It then transfers control to a CGI script that does the actual analysis and displays the results.

A protocol analyzer can help you to check that downloaded packets are 1518 bytes long (the DOCSIS Maximum Transmission Unit, or MTU) and that US acknowledgments run near 160 to 175 PPS. If the packets are below these rates, update your Windows drivers and adjust your UNIX or Windows NT host. You can change settings in the Registry, to adjust your Windows host.
First, you can increase your MTU. The packet size, referred to as the MTU, is the greatest amount of data that can be transferred in one physical frame on the network. For Ethernet, the MTU is 1518 bytes; for PPPoE, it is 1492; and for dial-up connections, it is often 576. With larger packets, the overhead is smaller: there are fewer routing decisions, and clients have less protocol processing and fewer device interrupts.

Each transmission unit consists of a header and the actual data. The actual data is referred to as the Maximum Segment Size (MSS), which defines the largest segment of TCP data that can be transmitted. Essentially, MTU = MSS + TCP/IP headers.
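Assuming plain 20-byte IP and 20-byte TCP headers (no options), the MTU/MSS relationship can be sketched as below. The text's suggested MSS of 1380 deliberately leaves extra headroom beyond this simple subtraction, for example for TCP/IP options:

```python
def mss_for_mtu(mtu: int, ip_header: int = 20, tcp_header: int = 20) -> int:
    # MTU = MSS + TCP/IP headers, so MSS = MTU - headers.
    return mtu - ip_header - tcp_header

print(mss_for_mtu(1500))   # 1460 -- plain Ethernet IP payload
print(mss_for_mtu(1492))   # 1452 -- PPPoE
```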
Therefore, you might want to adjust your MSS to 1380, to reflect the maximum useful data in each packet. Also, you can optimize your Default Receive Window (RWIN) after you adjust your current MTU and MSS settings; a protocol analyzer will suggest the best value. A protocol analyzer can also help you ensure these settings:
- MTU Discovery = ON
- Selective Acknowledgement = ON
- Timestamps = OFF
- TTL (Time to Live) = OK

Different network protocols benefit from different network settings in the Windows Registry. The optimal TCP settings for cable modems seem to be different than the default settings in Windows.
Therefore, each operating system has specific information on how to optimize the Registry. For example, Windows 98 and later versions have some improvements in the TCP/IP stack. These include:
- large window support
- Selective Acknowledgments (SACK) support
- Fast Retransmission and Fast Recovery support

The WinSock 2 update for Windows 95 supports TCP large windows and time stamps, which means you could use the Windows 98 recommendations if you update the original Windows Socket to version 2.
Windows NT is slightly different from Windows 9x in how it handles TCP/IP. Remember that, if you apply the Windows NT tweaks, you will see less of a performance increase than in Windows 9x, simply because NT is better optimized for networking.

To change the Windows Registry requires some proficiency with Windows customization. If you do not feel comfortable with editing the Registry, you can download a "ready-to-use" patch from the Internet that automatically sets the optimal values in the Registry.
To edit the Registry, you must use an editor, such as Regedit (choose Start > Run and type Regedit in the Open field).

There are many factors that can affect data throughput:
- total number of users
- bottleneck speed
- type of services accessed
- cache server usage
- MAC layer efficiency
- noise and errors on the cable plant
- many other factors, such as limitations inside the Windows TCP/IP driver

The more users that share the "pipe," the more the service slows down. Further, the bottleneck might be the web site that you are accessing, not your network.
When you take into consideration the service in use, regular e-mail and web surfing is very inefficient, as far as time goes. If video streaming is used, many more time slots are needed for this type of service. You can use a proxy server to cache some frequently downloaded sites to a computer that is in your local area network, to help alleviate traffic on the entire Internet.

While "reservation and grant" is the preferred scheme for DOCSIS modems, there are limitations on per-modem speeds. This scheme is much more efficient for residential usage than polling or pure CSMA/CD.

Many systems are decreasing the homes per node ratio from 1000 to 500 to 250 to passive optical network (PON) or fiber-to-the-home (FTTH). PON, if designed correctly, could pass up to 60 people per node with no actives attached. FTTH is being tested in some regions, but it is still very cost prohibitive for most users. It could actually be worse, if you decrease the homes per node but still combine the receivers in the headend. Two fiber receivers are worse than one; but the fewer homes per fiber, the less likely you will experience laser clipping from ingress.

The most obvious segmentation technique is to add more fiber optic equipment.
Some newer designs decrease the number of homes per node down to 50 to 150 HHP. It does no good to decrease the homes per node if you just combine them again in the headend (HE) anyway. If two optical links of 500 homes per node are combined in the HE and share the same CMTS US port, this could realistically be worse than if one optical link of 1000 homes per node were used. Many times, the optical link is the limiting noise contributor, even with the multitude of actives funneling back.
You must segment the service, not just the number of homes per node. It will cost more money to decrease the number of homes per CMTS port or service, but it will alleviate that bottleneck in particular. The nice thing about fewer homes per node is that there is less noise and ingress, which can cause laser clipping, and it is easier to segment to fewer US ports later.

DOCSIS has specified two modulation schemes for the DS and US and five different bandwidths to use in the US path. The different symbol rates are 0.16, 0.32, 0.64, 1.28, and 2.56 Msymbols/s, with different modulation schemes, such as QPSK or 16-QAM. This allows flexibility to select the throughput required versus the robustness that is needed for the return system in use. DOCSIS 2.0 has added even more flexibility, which will be expanded upon later in this document.

There is also the possibility of frequency hopping, which allows a "non-communicator" to switch (hop) to a different frequency.
The compromise here is that more bandwidth redundancy must be assigned and, hopefully, the "other" frequency is clean before the hop is made. Some manufacturers set up their modems to "look before you leap."

As technology becomes more advanced, ways will be found to compress more efficiently or to send information with a more advanced protocol that either is more robust or is less bandwidth intensive. This could entail the use of DOCSIS 1.1 QoS provisioning, payload header suppression (PHS), or DOCSIS 2.0 features. There is always a give-and-take relationship between robustness and throughput. The speed that you get out of a network is usually related to the bandwidth that is used, the resources allocated, the robustness against interference, or the cost.

It would appear that the US throughput is limited to around 3 Mbps, due to the previously explained DOCSIS latency. It would also appear that it does not matter if you increase the US bandwidth to 3.2 MHz or the modulation to 16-QAM, which would give a theoretical throughput of 10.24 Mbps.
An increase of the channel BW and modulation does not significantly increase per-modem transfer rates, but it does allow more modems to transmit on the channel. Remember that the US is a TDMA-based, slotted contention medium where time slots are granted by the CMTS.
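The 3.2 MHz and 10.24 Mbps figures quoted above follow from the same symbol-rate arithmetic used earlier; a quick check (the 25 percent filter alpha is the US roll-off given earlier in this document):

```python
def channel_width(symbol_rate: float, alpha: float = 0.25) -> float:
    # Occupied bandwidth = symbol rate x (1 + filter roll-off)
    return symbol_rate * (1 + alpha)

def raw_rate(symbol_rate: float, bits_per_symbol: int) -> float:
    return symbol_rate * bits_per_symbol

print(channel_width(2.56e6) / 1e6)   # 3.2 -- MHz of spectrum used
print(raw_rate(2.56e6, 4) / 1e6)     # 10.24 -- Mbps with 16-QAM (4 bits/symbol)
```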
More channel BW means more US bps, which means more modems can be supported. Therefore, it does matter if you increase the US channel bandwidth. Also, recall that a 1518-byte packet then only takes up about 1.2 ms of wire time on the US, which helps the RTT latency.

You can also change the DS modulation to 256-QAM, which increases the total throughput on the DS by 40 percent and decreases the interleave delay, which helps US performance. Keep in mind, however, that you will temporarily disconnect all modems on the system when you make this change.

Caution: Use extreme caution before you change the DS modulation. Make a thorough analysis of the DS spectrum, to verify whether your system can support a 256-QAM signal.
Failure to do so can severely degrade your cable network performance. To change the DS modulation to 256-QAM, issue these commands:

VXR(config)# interface cable 3/0
VXR(config-if)# cable downstream modulation 256qam

For more information on US modulation profiles and return path optimization, refer to the related Cisco documentation. Change uw8 to uw16 for the Short and Long Interval Usage Codes (IUC) in the default mix profile.

Caution: Use extreme caution before you increase the channel width or change the US modulation. Make a thorough analysis of the US spectrum with a spectrum analyzer, to find a wide enough band that has an adequate carrier-to-noise ratio (CNR) to support 16-QAM.
Failure to do so can severely degrade your cable network performance or lead to a total US outage. To increase the US channel width, issue this command:

VXR(config-if)# cable upstream 0 channel-width 3200000

Electrical burst noises from amplifier power supplies and from utility powering on the DS path can cause errors in blocks. This can cause worse problems with throughput quality than errors that are spread out from thermal noises. In an attempt to minimize the effect of burst errors, a technique known as interleaving is used, which spreads data over time. Because the symbols on the transmit end are intermixed and then reassembled on the receive end, the errors appear spread apart. FEC is very effective against errors that are spread apart. The errors caused by a relatively long burst of interference can still be corrected by FEC, when you use interleaving.
Because most errors occur in bursts, this is an efficient way to improve the error rate.

Note: If you increase the FEC interleave value, then you add latency to the network.

DOCSIS specifies five different levels of interleaving (EuroDOCSIS only has one). 128:1 is the highest amount of interleaving and 8:16 is the lowest. 128:1 indicates that 128 codewords made up of 128 symbols each will be intermixed on a 1-for-1 basis. 8:16 indicates that 16 symbols are kept in a row per codeword and are intermixed with 16 symbols from 7 other codewords.

The possible values for Downstream Interleaver Delay are tabulated, in microseconds (µs), by I (number of taps) and J (increment) for 64-QAM and 256-QAM.

Interleaving does not add overhead bits like FEC, but it does add latency, which could affect voice and real-time video. It also increases the Request-and-Grant RTT, which might cause you to go from every other MAP opportunity to every third or fourth MAP. That is a secondary effect, and it is that effect which can cause a decrease in peak US data throughput. Therefore, you can slightly increase the US throughput (in a PPS-per-modem way) when the value is set to a number lower than the typical default of 32.

As a workaround for the impulse noise issue, the interleaving value can be increased to 64 or 128. However, when you increase this value, performance (throughput) might degrade, but noise stability will be increased in the DS.
In other words, either the plant must be maintained properly, or more uncorrectable errors (lost packets) will be seen in the DS, to a point where modems start to lose connectivity and there is more retransmission.

When you increase the interleave depth to compensate for a noisy DS path, you must factor in a decrease in peak CM US throughput. In most residential cases, that is not an issue, but it is good to understand the trade-off. If you go to the maximum interleaver depth of 128:1 at about 4 ms, this will have a significant, negative impact on US throughput.

Note: The delay is different for 64-QAM versus 256-QAM.

To change the interleave depth, issue the cable downstream interleave-depth command. This example reduces the interleave depth to 8:

VXR(config-if)# cable downstream interleave-depth 8

Caution: This command will disconnect all modems on the system, when it is implemented.

For US robustness to noise, DOCSIS modems allow variable or no FEC.
When you turn off US FEC, you will get rid of some overhead and allow more packets to be passed, but at the expense of robustness to noise. It is also advantageous to have different amounts of FEC associated with the type of burst. Is the burst for actual data or for station maintenance?
Is the data packet made up of 64 bytes or 1518 bytes? You might want more protection for larger packets. There is also a point of diminishing returns; for example, a change from 7 percent to 14 percent FEC might only give 0.5 dB more robustness.There is no interleaving in the US currently, because the transmission is in bursts and there is not enough latency within a burst to support interleaving. Some chip manufacturers are adding this feature for DOCSIS 2.0 support, which could have a huge impact, if you consider all of the impulse noise from home appliances.
US interleaving will allow FEC to work more effectively.Dynamic Map Advance uses a dynamic look-ahead time, in MAPs, that can significantly improve the per-modem US throughput.