Core IP Routing and Switching Technology - Cvt-dallas.org
Core IP Routing and<br />
Switching Technology<br />
February 18, 2003<br />
Tom McDermott<br />
Chiaro Networks<br />
Richardson, Texas
Outline<br />
• Changes in the Carrier Market<br />
• Evolution leading to cost overhead<br />
• Approaches to Network Cost Reduction<br />
– Combine multiple services onto one<br />
network<br />
– Collapse multiple layers of the network<br />
• An IP approach to consolidation<br />
• Technologies involved in building a large IP<br />
Router / MPLS Switch<br />
– Switch Fabric<br />
– Reliability and Redundancy<br />
– General-Purpose (Soft) Packet Processing<br />
2
Market Change: Total Implosion<br />
• Data revenue per bit still going down<br />
• Data traffic still growing<br />
• Voice revenue stabilizing<br />
• It’s not best-effort traffic that foots the bill.<br />
• Need Revenue-per-bit > Cost-per-bit<br />
• Must simultaneously:<br />
– Drive equipment cost down:<br />
less expensive equipment or<br />
more efficient network design<br />
– Drive operational costs down:<br />
do more with less complexity<br />
– Provide services with a better revenue<br />
profile than best effort: VPN, VoIP, etc.<br />
3
Where We Are Today<br />
• A lot of legacy equipment – designed first to<br />
support TDM then adapted to fit packets.<br />
• Must cross multiple layers to transit the core:<br />
– Outside plant fiber, DWDM transmission<br />
– SONET/SDH multiplex equipment<br />
– Crossconnect equipment (protection, grooming)<br />
– Routing and Switching equipment<br />
• Multiple service devices<br />
– Frame Relay<br />
– ATM<br />
– Ethernet MAN<br />
– Packet over SONET (POS)<br />
– Everything over MPLS (soon…)<br />
• Trend: IP over Everything, Everything over IP ¹<br />
– over MPLS?<br />
¹ Vint Cerf, Worldcom<br />
4
Backbone Architecture: Old & New<br />
[Diagram: two side-by-side layer stacks, services at the top]<br />
Legacy Network:<br />
– Transport Layer: SDH/SONET, circuit protection<br />
– Crossconnect (EXC): circuit-based grooming & protection,<br />
circuit-based VPN<br />
– Core Router: packet-based grooming, packet-based VPN,<br />
packet aggregation (fan-out)<br />
– Services: circuit-switched voice, data services<br />
Next Generation Network:<br />
– Optical mesh (& link) based (OXC): GMPLS protection,<br />
wavelength-based VPN<br />
– MPLS Switching: MPLS grooming, GMPLS protection?<br />
– IP / MPLS Core Router: packet-based grooming,<br />
packet-based VPN, packet aggregation<br />
– Services: VoIP, VPN, etc.<br />
5
Next Generation Backbone<br />
Network Trends<br />
• Conversion of the backbone network from<br />
circuits to packets is in progress<br />
– Fine-grained grooming, aggregation, compression,<br />
switching, and protection are less expensive using<br />
packet technology.<br />
– Data traffic is already IP (although it may be carried by<br />
ATM or FR). VPNs desire good TCP performance.<br />
– Voice traffic is slowly starting to migrate to IP<br />
– New long-haul transmission spans are going regenerator-less<br />
and protection-less to reduce cost.<br />
– Debate: switching λ or routing packets?<br />
– Routing packets provides more value-add for the carrier<br />
6
Next Generation Backbone<br />
Network Issues - 1<br />
• Truly scalable, reliable packet routers are not available<br />
– Clustered routers used for scale today<br />
– Duplicated routers used for redundancy today<br />
→ Scalable, reliable router needed<br />
• Protection of simplified spans requires a network-wide<br />
solution<br />
– Several approaches:<br />
– Link:<br />
– Protection in the network elements (works also<br />
for legacy spans), ~MPLS-FRR<br />
– Mesh:<br />
– Optical Crossconnect<br />
– Electrical Crossconnect<br />
– MPLS, GMPLS<br />
7
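The link-protection idea above (precompute a local detour so the node next to the failure reroutes immediately, in the spirit of MPLS-FRR, rather than waiting for network-wide reconvergence) can be sketched as a forwarding-table operation. The prefixes and link names here are illustrative, not from the slides:

```python
# Per-destination forwarding entries carrying a precomputed backup next
# hop: on link failure, the point of local repair switches to the
# detour immediately, without waiting for routing reconvergence.
fib = {
    "10.0.0.0/8":     {"primary": "linkA", "backup": "linkB"},
    "172.16.0.0/12":  {"primary": "linkA", "backup": "linkC"},
}

def next_hop(prefix, failed_links):
    """Return the outgoing link for a prefix, detouring around failures."""
    entry = fib[prefix]
    if entry["primary"] in failed_links:
        return entry["backup"]   # local repair: take the detour
    return entry["primary"]

print(next_hop("10.0.0.0/8", set()))       # linkA
print(next_hop("10.0.0.8/8".replace(".8/", ".0/"), {"linkA"}))  # linkB
```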
Next Generation Backbone<br />
Network Issues - 2<br />
• Multimedia traffic requires delay and jitter<br />
control (deterministic)<br />
– Approaches:<br />
– Over-provision links<br />
– Hope TCP does not overload; avoid downstream<br />
TCP merge<br />
– Groom traffic (MPLS)<br />
– Keep TCP out of real-time links<br />
– QoS for real-time traffic<br />
– Restrict TCP’s share of bandwidth on a link<br />
→ Need router with traffic management and/or large size<br />
8
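The last approach above, restricting TCP's share of a link, is typically enforced with a policer. A minimal token-bucket sketch, with illustrative rates and names (the slides do not specify an implementation):

```python
class TokenBucket:
    """Token-bucket policer: caps one traffic class (e.g. best-effort
    TCP) at rate_bps, while allowing bursts up to burst_bytes."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate, bytes per second
        self.burst = burst_bytes     # bucket depth
        self.tokens = burst_bytes    # bucket starts full
        self.last = 0.0              # timestamp of last update

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # conforming: forward
        return False      # non-conforming: drop or mark

# 1 kB/s with a 1500-byte burst allowance:
tb = TokenBucket(rate_bps=8000, burst_bytes=1500)
print(tb.allow(1500, now=0.0))   # True: burst credit covers the packet
print(tb.allow(1500, now=0.0))   # False: bucket is now empty
print(tb.allow(1000, now=1.0))   # True: one second refills 1000 bytes
```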
Next Generation Backbone<br />
Network Issues - 3<br />
• VPN requires cost/performance tradeoff knobs<br />
(gold, silver, bronze)<br />
– Approaches<br />
– Supply an entire λ (very expensive)<br />
– Electrical multiplexing (expensive, inflexible)<br />
– Packet multiplexing (efficient, flexible, but large<br />
management effort)<br />
– Use packet QoS markers for drop probability and<br />
packet rate-limiting to assure service<br />
– Mark at the edge, tunnel through the core<br />
– Tandem switching architectures scale better<br />
for large networks than a flat core<br />
9
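One common way to turn gold/silver/bronze markers into drop behaviour is a WRED-style drop curve per class: below a minimum queue threshold nothing is dropped, then drop probability ramps linearly to a maximum. The thresholds below are hypothetical; the slides name the classes but give no parameters:

```python
def wred_drop_prob(avg_queue, min_th, max_th, max_p):
    """RED-style curve: 0 below min_th, linear ramp to max_p at max_th,
    forced drop (1.0) above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

# Hypothetical per-class profiles: gold is protected the longest and
# dropped most gently; bronze is sacrificed first under congestion.
PROFILES = {
    "gold":   (40, 60, 0.02),   # (min_th, max_th, max_p)
    "silver": (25, 60, 0.05),
    "bronze": (10, 60, 0.10),
}

def drop_prob(service_class, avg_queue):
    return wred_drop_prob(avg_queue, *PROFILES[service_class])

print(drop_prob("gold", 30))     # gold untouched at this depth
print(drop_prob("bronze", 30))   # bronze already being thinned
```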
Proposed Network Solutions<br />
• Peer (integrated) transport & packet<br />
networks<br />
– GMPLS provides an integrated control plane<br />
– Issue: all switchable NEs require huge routing tables,<br />
intelligent control processor<br />
• Overlay (separated) transport & packet<br />
networks<br />
– Transport layer provides UNI 1.0 API to packet layer<br />
– Issue: NE auto-discovery, <strong>IP</strong>-to-physical binding<br />
• GMPLS: envisioned for λ switching<br />
– Cost / Benefit tradeoff: another layer vs. lower cost<br />
transparent switch points<br />
– Small-medium networks: poor<br />
– Huge networks: may work<br />
10
Technologies - 1<br />
• Large Routers:<br />
– Require large, fast, non-blocking switch fabric<br />
– Traditional approaches (electronic crossbar, shared<br />
memory) have not scaled well past 300 Gb/s<br />
– Per-trace bandwidth, memory bandwidth<br />
limitations<br />
– Distributed Switches are being tried<br />
– Concerns about fault tolerance, rerouting, delay,<br />
reconfiguration difficulty<br />
– New approaches<br />
– High-speed signal I/O (3.125 → 10 Gb/s per pin)<br />
– All-optical Fabrics – need to switch fast<br />
– Difficult to schedule efficiently<br />
– Need massively parallel scheduling engine<br />
11
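A widely used family of crossbar schedulers is iterative round-robin matching in the style of iSLIP: each iteration runs a request, grant, and accept phase, with rotating pointers that desynchronize the ports. A single-iteration sketch under simplifying assumptions (one cell per match, no weighting), not the scheduler of any particular product:

```python
def islip_iteration(requests, grant_ptr, accept_ptr):
    """One request-grant-accept iteration of an iSLIP-style scheduler.

    requests[i] is the set of outputs for which input i has queued
    cells (its non-empty VOQs).  Returns {input: output} matches and
    advances the pointers of matched ports.
    """
    n = len(requests)
    # Grant phase: each output grants the requesting input nearest
    # (round-robin) to its grant pointer.
    grants = {}  # output -> granted input
    for out in range(n):
        reqs = [i for i in range(n) if out in requests[i]]
        if reqs:
            grants[out] = min(reqs, key=lambda i: (i - grant_ptr[out]) % n)
    # Accept phase: each input accepts the granting output nearest its
    # accept pointer; pointers move one past the match.
    matches = {}
    for inp in range(n):
        offers = [out for out, i in grants.items() if i == inp]
        if offers:
            out = min(offers, key=lambda o: (o - accept_ptr[inp]) % n)
            matches[inp] = out
            grant_ptr[out] = (inp + 1) % n
            accept_ptr[inp] = (out + 1) % n
    return matches
```

In hardware this phase runs once per cell time across all ports in parallel, which is why the slide calls for a massively parallel scheduling engine.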
Crossbar Switching Architecture<br />
[Diagram: multiple line cards, each with a network processor, attached<br />
to a central crossbar switch under global arbitration; optical and<br />
electrical paths distinguished]<br />
• Large crossbar switch<br />
can provide scale and<br />
performance<br />
• Redundancy needed<br />
between internal<br />
components<br />
12
Optical Phased Array –<br />
Multiple Parallel Optical Waveguides<br />
[Diagram: input optical fiber feeding waveguides WG #1 … WG #128,<br />
which launch across an air gap to the output fibers]<br />
13
Optical Switch Die<br />
• 18 beam deflectors per die<br />
• 128 waveguides per beam deflector<br />
• Die size: 0.5 × 1.2 inches, flip-chip mounted<br />
[Photo of the die with magnified view]<br />
14
64 x 64 Optical Switch<br />
• 72 beam deflectors<br />
• 30 nanosecond switching speed<br />
• Optically transparent<br />
• Lab tested at 160 Gb/s per fiber (AT&T Labs)<br />
15
Technologies - 2<br />
• Reliable Routers:<br />
– Must be highly reliable<br />
– Avoid duplication of routers, transmission and<br />
operations costs<br />
– Avoid clustering interconnect cost and extra faults<br />
– One approach is an architecture that resembles a<br />
Class 4 tandem switch.<br />
– Cost-effective due to internal 1:N redundancy<br />
– Careful design to avoid single-point faults<br />
– Software outages and protocol state loss can be well<br />
addressed with stateful protection and recovery<br />
(backwards compatible with existing routers).<br />
16
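The cost argument for 1:N sparing can be made concrete with a simple availability calculation. This sketch assumes independent card failures and a hypothetical 99.9% per-card availability; the figures are illustrative, not Chiaro's:

```python
def availability_1_to_n(n, a):
    """Availability of a 1:N-protected group: n working cards plus one
    spare, independent failures, per-card availability a.  The group is
    up when at most one of the n + 1 cards is down."""
    all_up = a ** (n + 1)
    one_down = (n + 1) * (1 - a) * a ** n
    return all_up + one_down

# One spare covering eight workers already beats a single unprotected
# card (0.999), at roughly 1/8th the hardware overhead of duplicating
# every card:
print(availability_1_to_n(8, 0.999))   # ~0.99996
print(availability_1_to_n(1, 0.999))   # 1:1 sparing, ~0.999999
```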
Technologies - 3<br />
• High-Performance <strong>and</strong> Flexibility<br />
– Must run at wire-speed<br />
– Packet forwarding <strong>and</strong> MPLS switching at line rate<br />
– Able to alter function of line card<br />
– Changes in protocols<br />
– Traffic Management and Classification<br />
– Migrate to different service profiles<br />
– MPLS label h<strong>and</strong>ler (push, pop, swap)<br />
– Router control must not bog down as the system<br />
scales<br />
– Delegate some autonomy to packet processing<br />
17
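The push/pop/swap operations named above act on the packet's MPLS label stack. A behavioural sketch of the three operations (a real line card performs these in hardware at line rate; the label values and hop roles below are illustrative):

```python
class LabelStack:
    """MPLS label stack; the top of stack is the end of the list."""

    def __init__(self):
        self.labels = []

    def push(self, label):      # ingress LER: impose a new top label
        self.labels.append(label)

    def swap(self, label):      # transit LSR: replace the top label
        old, self.labels[-1] = self.labels[-1], label
        return old

    def pop(self):              # penultimate or egress hop: remove top label
        return self.labels.pop()

# One packet crossing a hypothetical label-switched path:
pkt = LabelStack()
pkt.push(100)   # ingress LER imposes label 100
pkt.swap(200)   # transit LSR rewrites 100 -> 200
pkt.swap(300)   # next LSR rewrites 200 -> 300
pkt.pop()       # penultimate hop pops; egress does a plain IP lookup
```

Stacking a second label (push on top of an existing one) is what makes MPLS VPN tunneling work: an inner service label rides beneath the outer transport label.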
Technology: Soft Packet Processing<br />
• 2.5 Gb/s and 10 Gb/s Packet Processor Engines<br />
• ‘Soft’ processing of packets<br />
• IPv4 (→ IPv6), FR, ATM, MPLS<br />
• Programmable Traffic Management<br />
• Classification processor: look-aside<br />
• VOQ in hardware<br />
18
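VOQ (virtual output queueing), the last bullet above, keeps one queue per output at each input port, so a cell waiting for a busy output never blocks cells bound for idle outputs, which is the head-of-line blocking that limits a single shared input FIFO. A software sketch of the structure (the hardware version is per-port SRAM queues):

```python
from collections import deque

class VOQInputPort:
    """One input port's virtual output queues: a FIFO per output."""

    def __init__(self, num_outputs):
        self.queues = [deque() for _ in range(num_outputs)]

    def enqueue(self, cell, output):
        self.queues[output].append(cell)

    def dequeue(self, output):
        # Called when the scheduler matches this input to that output.
        q = self.queues[output]
        return q.popleft() if q else None

    def backlogged(self):
        # The set of outputs this input would request from the scheduler.
        return {o for o, q in enumerate(self.queues) if q}

port = VOQInputPort(4)
port.enqueue("a", 2)
port.enqueue("b", 2)
port.enqueue("c", 0)
print(port.backlogged())   # outputs 0 and 2 have cells waiting
```

The `backlogged()` set is exactly the per-input request vector an iSLIP-style fabric scheduler consumes each cell time.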
Summary<br />
• Cost reduction and revenue<br />
enhancement critical to operators<br />
• Network model has to change:<br />
– Eliminate layers in the network<br />
– Converge parallel networks<br />
– Reduce NE duplication via High Availability<br />
• Emphasize services with a better revenue profile<br />
– Provide an appropriate service profile for each<br />
class of service<br />
19