Medianet Reference Guide - Cisco
Bandwidth Over Subscription<br />
Chapter 2<br />
<strong>Medianet</strong> Bandwidth and Scalability<br />
on the wire. The next device upstream sees an incoming microburst twice as large as normal. If the<br />
RxRing saturates, packets can be dropped even at very modest 1-second average loads. As<br />
more video is added, the probability that multiple frames will converge increases. Converged frames can<br />
also load Tx queues, especially when multiple high-speed source interfaces funnel into a single low-speed<br />
WAN link.<br />
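The gap between average load and instantaneous burst can be made concrete with a quick calculation. The sketch below uses illustrative numbers (a 4 Mbps, 30 fps stream on a GigE uplink; none of these values come from the guide) to show how two converged frames produce a substantial back-to-back packet train while the 1-second average utilization stays under one percent.

```python
# Hedged sketch: a 1-second average hides the instantaneous microburst.
# All numbers below are illustrative assumptions, not measured values.
STREAM_RATE_BPS = 4_000_000   # assumed 4 Mbps video stream
FPS = 30                      # assumed frame rate
MTU_BYTES = 1500
LINK_BPS = 1_000_000_000      # GigE uplink

frame_bytes = STREAM_RATE_BPS / 8 / FPS              # ~16,667 bytes per frame
pkts_per_frame = -(-int(frame_bytes) // MTU_BYTES)   # ceiling division -> 12 packets

# Two streams whose frames converge place twice the packets back-to-back
# on the wire, which is what the upstream RxRing actually sees:
burst_pkts = 2 * pkts_per_frame                      # 24 packets at line rate
avg_util = 2 * STREAM_RATE_BPS / LINK_BPS            # only 0.8% average load

print(f"{burst_pkts} back-to-back packets at {avg_util:.1%} average utilization")
```

The point of the arithmetic is that a counter showing 0.8% utilization says nothing about whether a small RxRing can absorb a 24-packet train arriving at line rate.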
Another concern arises when service provider policers cannot absorb large or back-to-back bursts. Video<br />
traffic that may naturally synchronize frame transmission is of particular concern and is likely to<br />
experience drops well below 90 percent circuit utilization. Multipoint TelePresence is a good example<br />
of this type of traffic. The <strong>Cisco</strong> TelePresence Multipoint Switch replicates the video stream to each<br />
participant by swapping IP headers. Multicast interfaces with a large fanout are another example. These<br />
types of interfaces are often virtual WAN links such as Dynamic Multipoint Virtual Private Network<br />
(DMVPN), or virtual interfaces such as Frame Relay. In both cases, multipoint flows fan out at the<br />
bandwidth bottleneck. The same large packet is replicated many times and packed on the wire close to<br />
the previous packet.<br />
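A rough sketch of why replication trips a policer: when the multipoint switch copies one frame to each participant, the copies land nearly back-to-back, and their combined size is compared against the policer's committed burst (Bc), not the circuit's average rate. Every parameter below is an illustrative assumption.

```python
# Hedged sketch: replicated frames from a multipoint switch can exceed a
# service provider policer's committed burst (Bc) even at low average
# utilization. All parameters are illustrative assumptions.
PARTICIPANTS = 3            # streams replicated by the multipoint switch
PKTS_PER_FRAME = 12         # assumed packets per video frame
PACKET_BYTES = 1100         # assumed average video packet size
POLICER_BC_BYTES = 16_000   # assumed committed burst on the SP policer

# Replication packs copies of the same frame close together on the wire:
burst_bytes = PARTICIPANTS * PKTS_PER_FRAME * PACKET_BYTES   # 39,600 bytes
over_bc = burst_bytes > POLICER_BC_BYTES                     # True -> policer drops

print(f"fanout burst {burst_bytes} B; exceeds Bc: {over_bc}")
```

With these assumed values the fanout burst is more than double the committed burst, so the policer drops packets even though the circuit is far from 90 percent utilization.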
Buffer and queue depths of the Tx interface can be overrun. Knowing the queue buffer depth and<br />
maximum expected serialization delay is a good method to determine how much video an interface can<br />
handle before drops. When multiple video streams are on a single path, consider the probability that one<br />
frame will overlap or closely align with another frame. Some switches give the user some granularity<br />
when allocating shared buffer space. In this case, it is wise to ensure that queues expected to<br />
process long trains of real-time packets have an adequate pool of buffer memory. This can mean<br />
reallocating memory away from queues whose packets are very periodic and whose packet groups are<br />
generally small.<br />
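The sizing method described above can be sketched as a short calculation: bound the number of streams by the Tx queue depth, and bound the worst-case serialization delay by the time a full queue takes to drain. The queue depth, frame size, and link rate below are assumed values chosen only to illustrate the method.

```python
# Hedged sketch of the sizing method above: bound the number of video
# streams an interface can absorb from its Tx queue depth, and compute the
# worst-case serialization delay of a full queue. Numbers are assumptions.
QUEUE_DEPTH_PKTS = 144      # assumed Tx queue depth in packets
PKTS_PER_FRAME = 12         # assumed packets per video frame
PACKET_BYTES = 1500
LINE_RATE_BPS = 10_000_000  # assumed 10 Mbps WAN link

# Conservative bound: if every stream's frame converges on the same instant,
# the queue must hold streams * pkts_per_frame packets (drain ignored).
max_streams = QUEUE_DEPTH_PKTS // PKTS_PER_FRAME    # 12 streams

# Worst-case delay for a packet entering behind a full queue:
max_delay_ms = QUEUE_DEPTH_PKTS * PACKET_BYTES * 8 / LINE_RATE_BPS * 1000

print(f"max {max_streams} streams; {max_delay_ms:.1f} ms worst-case delay")
```

The delay figure matters as much as the stream count: 170-plus milliseconds of queuing delay would be unacceptable for interactive video, so in practice the usable stream count is lower than the drop-based bound.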
For now, some general guidelines are presented as the result of lab verification of multipoint<br />
TelePresence. Figure 2-8 shows both the defaults and tuned buffer allocation on a <strong>Cisco</strong> Catalyst 3750G<br />
Switch. Additional queue memory has been allocated to queues where tightly spaced packets are<br />
expected. By setting the buffer allocation to reflect the anticipated packet distribution, the interface can<br />
reach a higher utilization as a percent of line speed.<br />
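On Catalyst 3750 platforms, this kind of reallocation is performed with the `mls qos queue-set` commands. The snippet below is an illustrative sketch only; the buffer percentages and thresholds are placeholder values, not the lab-verified allocation, which is shown in Figure 2-8.

```
! Illustrative only -- substitute the verified values from Figure 2-8.
! Shift shared buffer space toward the queue carrying bursty video
! (four values = percent of buffers for queues 1-4 of the queue-set).
mls qos queue-set output 1 buffers 10 10 26 54
mls qos queue-set output 1 threshold 3 100 100 100 400
!
interface GigabitEthernet1/0/1
 queue-set 1
```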
OL-22201-01