also decides the order in which these packets should be sent out on the associated output link, presumably according to a certain quality-of-service (QoS) policy. Finally, before the packet is forwarded to the next-hop router, the network processor modifies the packet's header or even payload according to standard network protocols or application-specific semantics. For IP packets, at least the TTL (time-to-live) field in the header must be decremented at each hop, and as a result the IP packet header checksum needs to be recomputed. Other IP header fields such as TOS (type-of-service) may also need to be modified for QoS reasons. In some cases, even the packet body needs to be manipulated, e.g., transcoding of a video packet in the presence of congestion. In summary, given an input packet, the network processor needs to identify its output interface, schedule its transmission on the associated output link, and make the necessary modifications to its header or payload to satisfy general protocol or application-specific requirements.
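
As a concrete illustration of this per-hop header update, the following sketch (Python, operating on a raw IPv4 header held in a bytearray; the field offsets follow the standard IPv4 header layout, and the function names are illustrative) decrements the TTL and recomputes the header checksum from scratch. A real forwarding path would typically apply an incremental checksum update instead of a full recomputation.

```python
def ipv4_header_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, with the checksum field treated as zero."""
    total = 0
    for i in range(0, len(header), 2):
        word = 0 if i == 10 else (header[i] << 8) | header[i + 1]  # bytes 10-11: checksum field
        total += word
        total = (total & 0xFFFF) + (total >> 16)                   # fold the carry back in
    return ~total & 0xFFFF

def decrement_ttl(header: bytearray) -> bool:
    """Per-hop IPv4 update: return False if the TTL has expired, otherwise decrement
    it and rewrite the header checksum (byte 8 is TTL, bytes 10-11 are the checksum)."""
    if header[8] <= 1:
        return False                                 # drop (ICMP Time Exceeded not shown)
    header[8] -= 1
    csum = ipv4_header_checksum(bytes(header))
    header[10], header[11] = csum >> 8, csum & 0xFF
    return True
```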

Fundamentally, a network processor performs three types of tasks: packet classification, packet scheduling, and packet forwarding. Given an input IP packet, the packet classification module in the network processor decides how to process this packet, based on the packet's header and sometimes even payload fields. In the simplest case, the result of packet classification is the output interface through which the input packet should be forwarded. To support differentiated QoS, the result of packet classification becomes a specific output connection queue into which the input packet should be buffered. In the most general case, the result of packet classification points to the software routine that is to be invoked to process the input packet; possible processing ranges from forwarding the input packet to an output interface and a buffer queue, to arbitrarily complex packet manipulation. The design challenge of packet classification is that the number of bits used in packet classification is increasing, due to IPv6 and/or multiple header fields, and varying, because of application-level fields such as the URL in the HTTP protocol.
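
The range of classification results described above, from an output interface to a per-connection queue to an arbitrary handler routine, can be pictured as a rule table searched in priority order. The sketch below is only illustrative: the field names, the drop default, and the simple prefix and exact matching are assumptions, whereas real classifiers rely on longest-prefix matching and specialized multi-field classification algorithms.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    """A classification rule over hypothetical header fields; None means wildcard."""
    dst_prefix: str           # e.g. "10.1." (a string prefix stands in for longest-prefix match)
    proto: Optional[int]      # IP protocol number, or None for any
    dst_port: Optional[int]   # transport destination port, or None for any
    action: Callable          # what to do with matching packets

def forward_to(interface):
    return lambda pkt: ("forward", interface)

def enqueue_in(queue_id):
    return lambda pkt: ("enqueue", queue_id)

def drop(pkt):
    return ("drop", None)

class Classifier:
    """First matching rule wins; unmatched packets fall back to drop."""
    def __init__(self, rules):
        self.rules = rules

    def classify(self, pkt):
        for r in self.rules:
            if (pkt["dst"].startswith(r.dst_prefix)
                    and r.proto in (None, pkt["proto"])
                    and r.dst_port in (None, pkt["dst_port"])):
                return r.action
        return drop

# Port-80 traffic to 10.1.* is steered into a dedicated QoS queue;
# other 10.* traffic is plainly forwarded on interface 2.
clf = Classifier([
    Rule("10.1.", 6, 80, enqueue_in(7)),
    Rule("10.", None, None, forward_to(2)),
])
pkt = {"dst": "10.1.2.3", "proto": 6, "dst_port": 80}
print(clf.classify(pkt)(pkt))    # ('enqueue', 7)
```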

The packet forwarding module of a network processor physically moves an input packet from an incoming interface to its corresponding outgoing interface. The key design issues in packet forwarding are the topology of the switch fabric and the switch scheduling policy used to resolve output contention, i.e., when multiple incoming packets need to be forwarded to the same output interface. State-of-the-art network devices are based on crossbar fabrics, which are more expensive but greatly reduce the implementation complexity of the switch scheduler. Given a crossbar fabric, the switch scheduler finds a match between the incoming packets and the output interfaces so that the switch fabric is utilized with maximum efficiency and the resulting matching is consistent with the output link's scheduling policy, which in turn depends on the QoS requirement. Algorithmically, this is a constrained bipartite graph matching problem, which is known to be NP-complete. The design challenge of switch scheduling is to find a solution that approximates the optimal solution as closely as possible and that is simple enough for efficient hardware implementation. One such algorithm is iterative random matching [1] and its optimized variant [2].
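
The request-grant-accept structure that underlies such iterative matching algorithms can be sketched in software as follows. This is a simplified illustration of the general idea (each free output grants one randomly chosen requesting input, each input accepts one of its grants at random, and matched ports are removed before the next iteration), not the exact algorithm of [1] or [2], and certainly not a hardware design.

```python
import random

def iterative_random_match(requests, iterations=3):
    """requests maps each input port to the set of output ports it has packets for.
    Returns a partial matching as a dict: matched input -> output."""
    match = {}
    free_inputs = set(requests)
    free_outputs = {out for outs in requests.values() for out in outs}

    for _ in range(iterations):
        # Grant phase: every free output grants one randomly chosen requesting free input.
        grants = {}                                   # input -> outputs that granted it
        for out in free_outputs:
            requesters = [i for i in free_inputs if out in requests[i]]
            if requesters:
                grants.setdefault(random.choice(requesters), []).append(out)
        if not grants:
            break                                     # no more matchable pairs
        # Accept phase: every granted input accepts one grant at random; the pair is fixed.
        for inp, outs in grants.items():
            out = random.choice(outs)
            match[inp] = out
            free_inputs.discard(inp)
            free_outputs.discard(out)
    return match

# 3x3 crossbar example: all three inputs contend for output 0.
print(iterative_random_match({0: {0, 1}, 1: {0}, 2: {0, 2}}))
```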

Traditionally, a FIFO queue is associated with each output link of a network device to buffer all outgoing packets through that link. To support fine-grained QoS, such as a per-network-connection bandwidth guarantee, one buffer queue is required for each network connection whose QoS is to be protected from the rest of the traffic. After classification, packets that belong to a specific connection are buffered in the connection's corresponding queue. A link scheduler then schedules the packets in the per-connection queues that share the same output link in an order that is consistent with each connection's QoS requirement. A general framework for link scheduling is packetized fair queuing (PFQ) [3], which performs the following two operations for each incoming packet: virtual finish time computation, which is an O(N) computation, and priority queue sorting, which is an O(log N) computation, where N is the number of active connections associated with an output interface. Intuitively, a packet's virtual finish time corresponds to the logical time at which that packet should be sent if the output link were scheduled according to the fluid fair queuing model. After the virtual finish time for each packet is computed, packets are sent out in ascending order of their virtual finish time. A nice property of the virtual finish time is that an earlier packet's virtual finish time is unaffected by the arrival of subsequent packets. With per-connection queuing and output link scheduling, traffic shaping is automatic if packets are dropped when they reach a queue that is full. As the complexity of both operations in link scheduling depends

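The virtual finish time bookkeeping can be sketched as follows. The sketch is a simplified variant, not exact PFQ: it stamps each arriving packet with max(current virtual time, the connection's previous finish time) plus the packet length divided by the connection's weight, approximates virtual time by the finish tag of the packet most recently put into service instead of tracking the fluid-model virtual time exactly, and always transmits the pending packet with the smallest stamp.

```python
import heapq

class PFQLink:
    """Simplified packetized fair queuing over one output link.

    Virtual time is approximated by the finish tag of the packet in service, a
    shortcut; exact PFQ tracks the fluid-model virtual time, which is costlier."""
    def __init__(self, weights):
        self.weights = weights                      # connection id -> share of the link
        self.last_finish = {c: 0.0 for c in weights}
        self.virtual_time = 0.0
        self.heap = []                              # (finish_tag, seq, connection, length)
        self.seq = 0                                # tie-breaker for equal tags

    def enqueue(self, conn, length):
        """Stamp the arriving packet with its virtual finish time and buffer it."""
        start = max(self.virtual_time, self.last_finish[conn])
        finish = start + length / self.weights[conn]
        self.last_finish[conn] = finish
        heapq.heappush(self.heap, (finish, self.seq, conn, length))
        self.seq += 1

    def dequeue(self):
        """Send the pending packet with the smallest virtual finish time."""
        if not self.heap:
            return None
        finish, _, conn, length = heapq.heappop(self.heap)
        self.virtual_time = max(self.virtual_time, finish)
        return conn, length

# Two connections share the link 2:1; with equal-sized packets queued,
# connection A receives roughly two transmission slots for every one of B's.
link = PFQLink({"A": 2.0, "B": 1.0})
for _ in range(4):
    link.enqueue("A", 1000)
for _ in range(2):
    link.enqueue("B", 1000)
print([link.dequeue()[0] for _ in range(6)])   # ['A', 'A', 'B', 'A', 'A', 'B']
```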