
Digital Subscriber Line Access Multiplexer (DSLAM)
Example Design

Application Note

May 2002

Order Number: 251070-001


Information in this document is provided in connection with Intel® products. No license, express or implied, by estoppel or otherwise, to any intellectual property rights is granted by this document. Except as provided in Intel's Terms and Conditions of Sale for such products, Intel assumes no liability whatsoever, and Intel disclaims any express or implied warranty, relating to sale and/or use of Intel products including liability or warranties relating to fitness for a particular purpose, merchantability, or infringement of any patent, copyright or other intellectual property right. Intel products are not intended for use in medical, life saving, or life sustaining applications.

Intel may make changes to specifications and product descriptions at any time, without notice.

Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them.

The Digital Subscriber Line Access Multiplexer (DSLAM) Example Design may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

This document and the software described in it are furnished under license and may only be used or copied in accordance with the terms of the license. The information in this document is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation.

Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.

Copies of documents which have an ordering number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725 or by visiting Intel's website at http://www.intel.com.

Copyright © Intel Corporation, 2002

*Other names and brands may be claimed as the property of others.


Contents

1.0   Introduction ... 5
      1.1   Purpose of DSLAM Example Design ... 5
      1.2   Scope of Example Design ... 5
      1.3   Execution Environment ... 6
            1.3.1   Software ... 6
            1.3.2   Hardware ... 6
2.0   System Overview ... 7
      2.1   Software Partitioning ... 8
      2.2   Dual Chip Data Flow ... 10
      2.3   StrongARM Core Initialization ... 13
      2.4   Microengine Initialization ... 13
3.0   Microengine Functional Blocks ... 14
      3.1   Receive Microblock Group ... 14
            3.1.1   Initialization Microblock ... 15
            3.1.2   Ingress Microblock ... 15
            3.1.3   Transform Microblock ... 16
            3.1.4   Egress Microblock ... 17
      3.2   Receive Processor Transmit Microblock Group ... 19
      3.3   Transmit Processor Receive Microblock Group ... 20
            3.3.1   Initialization Microblock ... 20
            3.3.2   Ingress Microblock ... 20
            3.3.3   Transform Microblocks ... 21
            3.3.4   Egress Microblock ... 21
      3.4   Transmit Processor Traffic Scheduler ... 21
      3.5   Transmit Processor Traffic Shaper ... 22
      3.6   Transmit Processor Traffic Transmit ... 22
4.0   Data and Tables ... 22
      4.1   Receive Processor ... 22
            4.1.1   Input Data ... 22
            4.1.2   Output Data ... 26
      4.2   Transmit Processor ... 28
            4.2.1   Input Data ... 28
            4.2.2   Output Data ... 29
      4.3   Connection, Routing, and Hash Tables ... 29
            4.3.1   Rate Manager ... 30
                  4.3.1.1   Port Table Entry ... 30
                  4.3.1.2   VC Table Entry ... 31
5.0   Project Configuration / Modifying the Example Design ... 32
6.0   Testing Environment ... 32
7.0   Simulation Support (Scripts, etc.) ... 32
      7.1   Simulation for the Receive Processor Project ... 32
            7.1.1   Initialization Scripts ... 32
            7.1.2   Test Case Scripts ... 33
      7.2   Simulation for the Transmit Processor Project ... 34
            7.2.1   Initialization Scripts ... 34
            7.2.2   Test Case Scripts ... 34
      7.3   Simulation for the Receive/Transmit Processor Project ... 36
            7.3.1   Initialization Script ... 36
            7.3.2   Test Case Scripts ... 36
8.0   Performance ... 37
9.0   Limitations ... 37
10.0  Extending the Example Design ... 38
11.0  Acronyms & Definitions ... 39
12.0  Reference Documents ... 40

Figures

1   System Overview ... 7
2   Receive Processor System Partitioning ... 8
3   Transmit Processor System Partitioning ... 9
4   Receive Processor Data Flow for ATM AAL5 PDUs ... 10
5   Receive Processor Data Flow for Flow Control Packets ... 11
6   Transmit Processor Data Flow for PDUs and Flow Control ... 12
7   Receive Processor Receive Microblock Group Data Flow ... 14
8   Receive Processor Transmit Microblock Group ... 19
9   Transmit Processor Receive Microblock Group ... 20
10  Receive Processor Input - Routed IP over ATM with DSLAM Header ... 24
11  Receive Processor Input - Bridged Ethernet over ATM with DSLAM Headers ... 25
12  Receive Processor Input - Flow Control Packet with DSLAM Header ... 26
13  Receive Processor Output to IXB3208 ... 27
14  Inter-IXP Header Format ... 27
15  Transmit Processor Input from IXB3208 ... 28
16  Transmit Processor Output ... 29

Tables

1   L3 Routing Table - IP Route Example ... 30
2   Test Case Descriptions and their Files ... 33
3   Test Case Descriptions and their Network Traffic dlls ... 35
4   Test Case Descriptions and their Stream Files ... 35
5   Test Case Descriptions and their Files ... 37


1.0 Introduction

Intel develops software example designs to demonstrate the capabilities of the IXP Network Processor Family. This document describes the implementation of example software demonstrating the IXP1240 in a Digital Subscriber Line Access Multiplexer (DSLAM) environment. In particular, this example design uses the IXP1200 to route IP packets between ATM and a DSL Aggregator.

This document serves as a companion to the comments in the source code and is intended to clarify the structure and general workings of the design. It covers the scope of the design, code structure and flow, data structures, inter-process signaling, initialization and startup, and the software and hardware execution environments. A list of acronyms and definitions, as well as a list of related documents, is provided at the end of this document.

Consult the README.TXT bundled with the software for instructions on running the project, building the code, and the layout of the source files.

1.1 Purpose of DSLAM Example Design

This example design is intended as a starting point for customers designing similar applications. It also gives customers an idea of what the IXP1200 Network Processor is capable of and what kind of performance can be expected. Users may modify the code, add modules that are proprietary or more specific to their needs, and estimate performance.

This Example Design demonstrates just one software architecture in which the IXP1200 can be used in DSLAM-related designs. It is a starting point only, and is not intended to be production ready.

1.2 Scope of Example Design

This design addresses the following:

• The completed IXP1200 microcode and micro-C run on the simulator in the IXP1200 Developer WorkBench (DWB) using the IXP1240 chip setting.
• Two-IXP1240-processor design (using an IXB3208 as an interconnection device)
• One OC-12 port input, one OC-12 port output
• Segmentation and Reassembly (SAR)
• ATM Adaptation Layer 5 (AAL5)
• IP over ATM LLC/SNAP encapsulation (RFC 1483)
• Routing between ATM and DSL based on IP, supporting both Longest Prefix Match and Five-Tuple lookup methods
• Constant Bit Rate (CBR), Variable Bit Rate (VBR), and Unspecified Bit Rate (UBR)
• Support for up to 4096 Virtual Channels (VCs)
• Flow control
• Use of "microblock" architecture (Receive Processor)
• Use of the Micro-C Compiler (Transmit Processor only)


• Static Route Table Manager
• Rate Manager .dll

The following are not addressed:

• Ethernet and Frame Relay processing
• ATM ARP
• ATM signaling
• TM4.1 compliance (Transmit Processor)

Receive Processor software is written in microcode using the microblock architecture, while Transmit Processor software is written in microengine C using the microblock architecture.
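The Longest Prefix Match lookup named in the scope list can be illustrated with a short sketch. This is a generic, linear-scan illustration of LPM semantics in Python, not the example design's actual route table layout or microcode:

```python
import ipaddress

def longest_prefix_match(route_table, dst_ip):
    """Return the next hop for the most specific matching prefix.

    route_table: list of (cidr_string, next_hop) pairs.
    A linear scan keeps the semantics obvious; the real design
    uses table structures suited to the IXP1200 memory system.
    """
    dst = ipaddress.ip_address(dst_ip)
    best_len, best_hop = -1, None
    for cidr, next_hop in route_table:
        net = ipaddress.ip_network(cidr)
        if dst in net and net.prefixlen > best_len:
            best_len, best_hop = net.prefixlen, next_hop
    return best_hop
```

A Five-Tuple lookup differs only in the key: source/destination IP, source/destination port, and protocol are matched exactly (typically via a hash) rather than by prefix.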

1.3 Execution Environment

1.3.1 Software

The software execution environment supported by the Developer's Workbench is described in the Processor_2ixp_README.txt file that accompanies the source code files for the project (DSLAM directory). This includes descriptions of the directory and file structure, and also how the project can be re-configured.

The software simulation of the example design is fed test data streams via the NetworkTraffic DLL and the NetSim DLL.

In the simulation environment, the IP and ATM VC table management software is emulated with a combination of Transactor (simulator) foreign models and interpreted Transactor scripts.

1.3.2 Hardware

The example design was not supported on hardware at the time of the release of this document.


2.0 System Overview

Figure 1 illustrates the steps for moving an IP packet from the ATM/DSL Aggregator to the output of the Transmit Processor.

Figure 1. System Overview

[Figure: Incoming "DSLAM ATM" traffic from a customer-provided ATM/DSL Aggregator (ATM on one side, DSL on the other) enters an Intel IXP1200 "Receive Processor" (reassembly, validation, IP lookup, protocol translation, classification), crosses the Intel IX Bus through an Intel IXB3208 Network Processor, and enters a second Intel IXP1200 "Transmit Processor" (traffic management, segmentation, transmission) before leaving as outgoing "DSLAM ATM" traffic. An Intel Pentium processor is attached via the PCI bus. (A9550-01)]

On the data path in the Receive Processor, IP over ATM traffic (coming from the ATM/DSL aggregator over the ATM port) is processed and transmitted over the IX bus as a pre-classified AAL5 PDU segmented for IXB3208 consumption. The receiving threads receive ATM cells with prepended DSLAM headers, perform reassembly of the ATM PDU, and extract the IP packet according to the AAL5 and LLC/SNAP encapsulation specified in a VC connection table.
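The per-cell VC connection table lookup can be modeled in a few lines. In this sketch the key packing and entry fields are hypothetical, and a Python dict stands in for the hash-based lookup the design performs:

```python
def vc_key(port, vpi, vci):
    """Pack physical port, VPI, and VCI into one lookup key.
    The 12/16-bit VPI/VCI widths follow the ATM cell header; the
    packing itself is an illustrative choice, not the design's."""
    return (port << 28) | ((vpi & 0xFFF) << 16) | (vci & 0xFFFF)

class ConnectionTable:
    def __init__(self):
        self._entries = {}   # dict stands in for the hashed table

    def add(self, port, vpi, vci, entry):
        self._entries[vc_key(port, vpi, vci)] = entry

    def lookup(self, port, vpi, vci):
        # Returns None for cells on unconfigured VCs.
        return self._entries.get(vc_key(port, vpi, vci))
```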

Upon reception of a complete AAL5 PDU, the receiving threads (if so configured) use the hardware CRC capability of the IXP1240 to perform the CRC-32 checks necessary for AAL5. The receiving threads perform an IP lookup and any required IP header transformation, prepend an inter-IXP header containing classification data, and then queue the PDU to the transmit thread.
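The AAL5 trailer validation (length and CRC-32) that the hardware CRC unit accelerates can be sketched in software. The bitwise CRC below uses the AAL5 generator polynomial 0x04C11DB7; the helper names and byte-at-a-time structure are illustrative, not the design's implementation:

```python
import struct

def crc32_aal5(data: bytes) -> int:
    """Bitwise CRC-32 (poly 0x04C11DB7, MSB-first, all-ones init,
    final complement), as used by the AAL5 CPCS trailer."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            if crc & 0x80000000:
                crc = ((crc << 1) ^ 0x04C11DB7) & 0xFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFF
    return crc ^ 0xFFFFFFFF

def build_cpcs_pdu(payload: bytes, uu=0, cpi=0) -> bytes:
    """Pad so the CPCS-PDU fills a whole number of 48-byte cell
    payloads, then append the 8-byte trailer (UU, CPI, length, CRC)."""
    pad_len = (48 - (len(payload) + 8) % 48) % 48
    body = payload + b"\x00" * pad_len + struct.pack("!BBH", uu, cpi, len(payload))
    return body + struct.pack("!I", crc32_aal5(body))

def check_cpcs_pdu(pdu: bytes):
    """Return the payload if length and CRC-32 check out, else None."""
    uu, cpi, length = struct.unpack("!BBH", pdu[-8:-4])
    (crc,) = struct.unpack("!I", pdu[-4:])
    if crc != crc32_aal5(pdu[:-4]) or length > len(pdu) - 8:
        return None
    return pdu[:length]
```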


On the data path in the Transmit Processor, 64-byte segmented chunks of data are received from the IXB3208. They are reassembled into full PDUs, which already include padding and an AAL5 trailer. The PDUs are then scheduled and shaped according to their contract, segmented, and transmitted back out to the DSLAM aggregator.

2.1 Software Partitioning

Figure 2 shows how the design is partitioned between IXP1200 microengines in the Receive Processor. There are four dedicated microengines receiving incoming DSLAM traffic, one microengine transmitting to the IXB3208, and one spare microengine (not shown).

One ATM port, shown on the left side of the diagram, supplies DSLAM cells to the receiving microengines. The cells are assembled into PDUs or flow control packets; validation, IP lookups, and header transformations are performed; and the resulting PDUs or flow control packets are queued to the transmitting microengine. The transmitting microengine then segments the frames into 64-byte chunks and transmits them to the IXB3208.
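The 64-byte segmentation step can be sketched as follows; the SOP/EOP flags are an assumption modeled on the start-of-packet/end-of-packet indications described for the Transmit Processor in Section 2.2:

```python
def segment_frame(frame: bytes, seg_size: int = 64):
    """Split a frame into seg_size chunks, marking start-of-packet
    (SOP) and end-of-packet (EOP) so the far side can reassemble."""
    segments = []
    for off in range(0, len(frame), seg_size):
        segments.append({
            "sop": off == 0,
            "eop": off + seg_size >= len(frame),
            "payload": frame[off:off + seg_size],
        })
    return segments

def reassemble(segments):
    """Mirror of segment_frame, as the receiving side would do."""
    assert segments[0]["sop"] and segments[-1]["eop"]
    return b"".join(s["payload"] for s in segments)
```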

Figure 2. Receive Processor System Partitioning

[Figure: 16 receiving threads run on Microengines 0-3, pulling cells from the RFIFO into SDRAM buffers; 4 transmitting threads run on Microengine 5, moving data from the SDRAM buffers to the TFIFO. (A9548-01)]


Figure 3 shows how the design is partitioned between IXP1240 microengines in the Transmit Processor. There are two microengines dedicated to receiving incoming pre-classified ATM traffic from the Receive Processor via the IXB3208, and four microengines dedicated to shaping, scheduling, and transmitting.

Figure 3. Transmit Processor System Partitioning

[Figure: 8 receiving threads run on Microengines 0 and 1, pulling IXB3208 segments from the RFIFO into SDRAM buffers; 8 scheduler threads and 8 shaping/transmitting threads run on Microengines 2-5 (contexts 0 and 2 are schedulers; contexts 1 and 3 are shaper/transmit threads), moving data from the SDRAM buffers to the TFIFO. (A9549-02)]

One IX bus port, shown on the left side of the diagram, supplies DSLAM IXB3208 segments to the receiving microengines. The segments are assembled into PDUs or flow control packets, and the Inter-IXP header for each is extracted and saved. PDUs are sent to the appropriate scheduler/shaper microengines (determined by the wheel number assigned to each microengine) via message queues. Flow control packets are used by the receiving threads to start or to stop transmission to particular ports. Based on the bit-rate contract for each particular VC and destination port, PDUs are scheduled, segmented, prepended with DSLAM headers, and transmitted.
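The start/stop behavior driven by flow control packets amounts to a per-port transmit gate. The sketch below is a minimal model of that idea; the class and method names are hypothetical, not taken from the design's sources:

```python
class PortGate:
    """Per-port transmit enable driven by flow control packets,
    modeling how the receiving threads start or stop transmission
    to particular ports."""
    def __init__(self, num_ports):
        self.enabled = [True] * num_ports

    def on_flow_control(self, port, xon):
        # xon=True resumes transmission; xon=False pauses it.
        self.enabled[port] = xon

    def may_transmit(self, port):
        # The scheduler consults this before dequeuing for a port.
        return self.enabled[port]
```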


2.2 Dual Chip Data Flow

Figure 4 illustrates the steps necessary to move an IP packet from the ATM/DSL Aggregator to the output of the Receive Processor, which goes to the IXB3208.

Figure 4. Receive Processor Data Flow for ATM AAL5 PDUs

[Figure: A DSLAM/ATM PDU arrives on the Rx port (and subsequently the RFIFO). Steps: (1) receive a DSLAM ATM cell (DSLAM header + ATM header; the first cell of a PDU also carries the LLC, IP, and TCP headers and payload, and the last cell carries the AAL5 trailer of pad, CLP, UU, length, and CRC); (2) look up the VC (using a hash) in the connection table (VP/VC/physical port number); (3) locate the SDRAM packet buffer and offset; (4) move the cell payload to the buffer, with 32 bytes reserved for computing a new header followed by 48-byte cell slots (only steps 1 through 4 are performed for middle cells in the PDU); (5) validate the AAL5 and IP headers; (6) look up the route and header on the first cell in the route table (destination IP/IP port, connection status, bit rate, network model, queue ID, header info, destination port/VC); (7) create the new header in the buffer; (8) update the buffer descriptor (buffer base, AAL5 header, buffer offset) with the new header info; (9) check the AAL5 trailer (length and checksum); (10) transmit in 64-byte segments with prepended Intel IXB3208 Network Processor headers. (A9559-01)]



Figure 5 illustrates the steps necessary to move a flow control packet from the ATM/DSL Aggregator to the output of the Receive Processor, which goes to the IXB3208.

Figure 5. Receive Processor Data Flow for Flow Control Packets

[Figure: A DSLAM flow control packet (DSLAM header + payload) arrives on the Rx port (and subsequently the RFIFO). Steps: (1) receive the DSLAM flow control packet; (2) allocate a new buffer in SDRAM; (3) move the payload to the buffer; (4) transmit in 64-byte segments with prepended Intel IXB3208 Network Processor and Inter-IXP headers. (A9560-01)]

On the data path in the Receive Processor, flow control packets coming from the ATM/DSL Aggregator over the ATM port are transmitted over the IX bus with an appropriate Inter-IXP header and segmented for IXB3208 consumption. No transformation is performed on flow control packets in the Receive Processor.


Figure 6 illustrates the steps necessary to move IP packets and flow control packets from the IXB3208 input of the Transmit Processor to the ATM port.

Figure 6. Transmit Processor Data Flow for PDUs and Flow Control

[Figure: A pre-classified DSLAM/ATM PDU arrives on the Rx port (and subsequently the RFIFO). Steps: (1) receive an Intel IXB3208 segment; (2) if it is the first 64-byte segment in the PDU (SOP is true), it carries the Inter-IXP header, which is stored in the thread mailbox; (3) move the segment payload to the SDRAM packet buffer in 64-byte slots (only steps 1 and 3 are performed for middle segments in the PDU); (4) if it is the last 64-byte or partial segment in the PDU (EOP is true), move the payload to the buffer; (5) retrieve the Inter-IXP header from the thread mailbox and queue the packet for traffic management; then either (6) if it is a PDU, traffic management schedules, shapes, and transmits the AAL5 PDU as ATM cells with DSLAM headers per the contract specified in the Inter-IXP header, or (7) if it is flow control, the receive code blocks or unblocks transmission based on the port and flow ID in the Inter-IXP header. (A9562-01)]

In the Transmit Processor, data is received from the IXB3208 in 64-byte (or smaller) segments. The last segment may be less than 64 bytes, depending on the length of the PDU or flow control packet. The first segment contains the Inter-IXP header attached by the Receive Processor. This header contains information that classifies the packet:

• Data or flow control
• Protocol (ATM or other)
• Unicast or multicast
• The bit rate at which the data should be sent (CBR/VBR/UBR)
• The UBR priority
• Time spent in the Receive Processor
• Packet size
• Queue ID, output port, and VC
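As a rough model of how such a classification header might be packed and unpacked, the sketch below uses a hypothetical 16-byte layout covering the fields listed above; the actual Inter-IXP header format is shown in Figure 14, and these field widths and positions are invented for illustration:

```python
import struct

# Hypothetical 16-byte layout (1+1+1+1+4+2+2+2+2 bytes), network
# byte order.  Illustration only; not the design's actual format.
_FMT = struct.Struct("!BBBBIHHHH")
_KEYS = ("flags", "protocol", "rate_class", "ubr_priority",
         "rx_time", "size", "queue_id", "port", "vc")

def pack_inter_ixp(h):
    """h: dict with the keys in _KEYS (flags would encode the
    data/flow-control and unicast/multicast bits)."""
    return _FMT.pack(*(h[k] for k in _KEYS))

def unpack_inter_ixp(blob):
    return dict(zip(_KEYS, _FMT.unpack(blob)))
```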

The packet is stored in SDRAM. The Inter-IXP header is saved to a mailbox in scratch memory until the entire packet is received, and then the packet is queued to the traffic management software. The traffic management software uses information from the Inter-IXP header, VC table, and port table for scheduling, shaping, and transmission of the packet stored in SDRAM. Information regarding the traffic management software design may be found on page 42 of the DSLAM Router/Traffic Shaper Software Design Specification White Paper.
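Shaping a VC to its bit-rate contract is conventionally expressed with the Generic Cell Rate Algorithm (GCRA). The sketch below is the textbook virtual-scheduling form, included for orientation only; it is not the example design's wheel-based scheduler:

```python
class Gcra:
    """Generic Cell Rate Algorithm, virtual-scheduling form.

    increment: ideal time between conforming cells (1/rate).
    limit: tolerance (how early a cell may arrive and still conform).
    """
    def __init__(self, increment, limit):
        self.increment = increment
        self.limit = limit
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforms(self, now):
        if now < self.tat - self.limit:
            return False              # too early: non-conforming
        self.tat = max(now, self.tat) + self.increment
        return True                   # conforming; advance the TAT
```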

2.3 StrongARM Core Initialization

Before starting the microengines, the Receive Processor StrongARM Core initializes the IP Lookup (Routing) Table and the VP/VC Lookup (Connection) Table. It also initializes the Buffer Descriptor Freelist. On the Transmit Processor, the StrongARM Core initializes the Rate Manager (see Section 4.3.1).

In the simulation environment, this is done by code invoked from the Transactor startup scripts dslam_proc_a.ind and dslam_proc_b.ind. The IP and VC table management software is emulated with a combination of Transactor foreign models and interpreted Transactor scripts.

2.4 Microengine Initialization

In the Receive Processor, the initialization code of the transmit microblock group on Microengine 5, thread 0, performs most of the initialization for the Receive Processor in general. It also signals the appropriate threads on Microengines 0-3, which start the receive microblock group. In the Transmit Processor, dslam_uc_init and dslam_system_init initialize the entire application.


3.0 Microengine Functional Blocks<br />

3.1 Receive Microblock Group<br />

Figure 7 shows the data flow for the receive microblock group in the Receive Processor. This flow<br />

also represents the dispatch loop used to control the microblock group.<br />

Figure 7. Receive Processor Receive Microblock Group Data Flow<br />

[Figure: data flow through the Initialization, Receive Cell, Process Data Cell, and Process<br />
Control Packet microblocks. The Receive Cell block reads from the IXPA RFIFO and passes state<br />
(mpkt_descriptor, incoming_cell_payload, fifo_addr, connection_index, connection_entry,<br />
bd_address, net_model, canned_rcv_request, next_thread_id, etc.) to the data and control<br />
paths; outgoing packets (outgoing_pkt, pdu buffer) and control packets (outgoing_control_pkt,<br />
IP header, LLC_1483_Type) flow toward the TFIFO and the Intel® SA core. Artwork A9551-01.]<br />


3.1.1 Initialization Microblock<br />

The following macro header describes the initialization microblock for the Receive Processor<br />

microblock group and is contained in the file<br />

$Gen_<strong>DSLAM</strong>\hyannis\microblocks\dslam_rx_ublock_group_a.uc.<br />

// <strong>DSLAM</strong>_Initialize_Receive_uBlock_Group_A<br />

//<br />

// Description:<br />

// This macro performs initialization for the Receive uBlock group<br />

// in the Receive Processor of the Generic <strong>DSLAM</strong> application.<br />

//<br />

// Inputs:<br />

// next_thread_id next thread to signal when complete<br />

// canned_receive_request fixed receive request for FIFO<br />

// rfifo_address RFIFO location from which to read<br />

// @request_inflight critical section for accessing FIFO<br />

//<br />

// Macros defined:<br />

// DESC_SIZE<br />

// PKBUF_SIZE<br />

// PORT_NUMS<br />

// NEXT_CTX0_TID<br />

// NEXT_CTX1_TID<br />

// NEXT_CTX2_TID<br />

// NEXT_CTX3_TID<br />

// CORE_STACK_INTF1<br />

//<br />

// Implementation:<br />

// Initialize all the input parameters, and send appropriate signals<br />

// to prepare for packet processing.<br />

//<br />

// Size:<br />

// 48 instructions<br />

//<br />

// <strong>Example</strong> usage:<br />

// .local next_tid, rcv_req, fifo_addr<br />

// immed[@req_inflight,0]<br />

// <strong>DSLAM</strong>_Initialize_Receive_uBlock_Group_A(next_tid, rcv_req, fifo_addr,<br />

// @req_inflight)<br />

//<br />

3.1.2 Ingress Microblock<br />

The following macro header describes the ingress microblock for receiving incoming PDUs and<br />

flow control packets. It is contained in the file<br />

$Gen_<strong>DSLAM</strong>\hyannis\microblocks\dslam_rx_ublock_group_a.uc.<br />

// <strong>DSLAM</strong>_Receive<br />

//<br />

// Description:<br />

// This is the ingress ublock for <strong>DSLAM</strong> Receive Processing uBlock<br />

// for IXP1200 Receive Processor.<br />

// Issues a receive request to the RFIFO, parses and stores various<br />

// portions of the packet descriptor, looks up the connection entry<br />

// in the connection table based on the physical port number, vpi,<br />

// and vci, and reads packet data from RFIFO and stores it in SDRAM.<br />

// CRC may be computed when reading from the RFIFO if enabled.<br />

//<br />

// Inputs:<br />

// return_status success or failure<br />

// next_thread_id used to signal next thread to issue receive<br />


// request (canned)<br />

// receive_request receive request to be issued (canned)<br />

// @request_inflight critical section for accessing FIFO<br />

// buffer_descriptor_address address for buffer descriptor in SRAM<br />

// sdram_buffer_address address for buffer in SDRAM<br />

// sdram_buffer_offset offset into buffer in SDRAM<br />

// pdu_completion indication that last part of PDU has been<br />

// received<br />

// rfifo_address address of data packet in RFIFO<br />

// physical_port_type ATM, Ethernet, Frame Relay, etc.<br />

// connection_entry_address index of connection entry<br />

// network_model type of transformation to perform<br />

// crc_residue initial crc residue to be updated<br />

// packet_type packet type, if flow control packets<br />

// supported<br />

//<br />

// Implementation:<br />

// If physical port type is ATM,<br />

// Read from the RFIFO into SDRAM<br />

// Determine if this is the last part of a PDU<br />

// Update connection table entry with new buffer offset<br />

// Else<br />

// Fail for all unsupported physical port types<br />

//<br />

// Size:<br />

// 247 instructions (w/ CRC, w/o other protocol, w/o flow control support)<br />

// without CRC, -16 instructions<br />

// with other protocol support, +38 instructions<br />

// with flow control support, +16 instructions<br />

//<br />

// <strong>Example</strong> usage:<br />

// <strong>DSLAM</strong>_Receive(status, next_tid, rec_req, @req_inflight, bd_address,<br />

// buff_sdram, sdram_buff_offset, pdu_complete, fifo_addr,<br />

// phy_port_type, \<br />

// cnx_entry_address, net_model, temp_residue, <strong>DSLAM</strong>_packet_type)<br />

//<br />

3.1.3 Transform Microblock<br />

The following macro header describes the transform microblock for incoming PDUs. (Flow control<br />

packets are passed through the Receive Processor and not transformed in any way.) It is contained<br />

in the file $Gen_<strong>DSLAM</strong>\hyannis\microblocks\dslam_rx_ublock_group_a.uc.<br />

// ATM_IP_Route_Lookup_And_Transform<br />

//<br />

// Description:<br />

// This is a transform block that performs an IP route lookup based<br />

// on the network model, and an IP transform based on the routing<br />

// information from the lookup operation.<br />

//<br />

// Inputs:<br />

// return_status success or failure<br />

// network_model type of transformation to perform<br />

// ip_header_xfer_registers SRAM xfer registers holding IP header<br />

// llc_1483_encapsulation_type type of 1483 encapsulation in PDU<br />

// rfifo_address address of data packet in RFIFO<br />

// connection_entry_address index of connection entry<br />

// sdram_buffer_address address for buffer in SDRAM<br />

//<br />

// Implementation:<br />

// if network_model is IP_TRIE_LKUP<br />

// perform a longest-prefix-match IP lookup to get routing pointer<br />

// else if network model is IP_5TUPLE_LKUP<br />

// perform an IP 5-tuple lookup to get routing pointer<br />


// transform the IP header based on the routing information<br />

//<br />

// Size:<br />

// 221 instructions<br />

//<br />

// <strong>Example</strong> usage:<br />

// ATM_IP_Route_Lookup_And_Transform(status, net_model, $ip_header, \<br />

// llc_encap_type, fifo_addr, cnx_entry_address, sdram_buff_addr)<br />

3.1.4 Egress Microblock<br />

Two egress microblocks are available for the receive microblock group in the Receive Processor:<br />

• <strong>DSLAM</strong>_Enqueue (enqueues packets to the transmit microblock group)<br />

• <strong>DSLAM</strong>_SA_Enqueue (enqueues exception packets to the StrongARM core)<br />

The following macro headers describe the egress microblocks for outgoing PDUs and flow control<br />

packets. They are contained in the file $Gen_<strong>DSLAM</strong>\hyannis\<br />

microblocks\dslam_rx_ublock_group_a.uc.<br />

// <strong>DSLAM</strong>_Enqueue<br />

//<br />

// Description:<br />

// This is an egress block that queues either a flow control<br />

// packet or a PDU to a ring buffer for ingress into the<br />

// transmit microblock group.<br />

//<br />

// Inputs:<br />

// packet_type type of packet (control or data) (if<br />

// supporting control packets)<br />

// pdu_completion whether the PDU is complete, if packet type<br />

// is data<br />

// buffer_descriptor_address address for buffer descriptor<br />

// physical_port_type ATM, Ethernet, Frame Relay, etc.<br />

// connection_entry_address index of connection entry<br />

// pdu_length length of PDU in buffer<br />

//<br />

// Implementation:<br />

// if packet_type is control<br />

// enqueue the flow control packet<br />

// else<br />

// enqueue the PDU<br />

//<br />

// Size:<br />

// 61 instructions (without flow control packet support and without other<br />

// protocol support)<br />

// with other protocol support, +18 instructions<br />

// with flow control packet support, +25 instructions<br />

//<br />

// <strong>Example</strong> usage:<br />

// <strong>DSLAM</strong>_Enqueue(pkt_type, pdu_complete, bd_address, phy_port_type, \<br />

// cnx_entry_address, length)<br />

//<br />

// <strong>DSLAM</strong>_SA_Enqueue<br />

//<br />

// Description:<br />

// This is an egress block that queues an ATM PDU to the<br />

// StrongARM core for exception processing.<br />

//<br />

// Inputs:<br />


// buffer_descriptor_address address for buffer descriptor<br />

// pdu_length length of PDU in buffer<br />

//<br />

// Implementation:<br />

// Prepare SRAM xfer registers used for queueing<br />

// Get the next index for the packet queue<br />

// Enqueue the PDU using packetq_send<br />

//<br />

// Size:<br />

// 17 instructions<br />

//<br />

// <strong>Example</strong> usage:<br />

// <strong>DSLAM</strong>_SA_Enqueue(bd_address, length)<br />


3.2 Receive Processor Transmit Microblock Group<br />

Figure 8 shows the data flow for the transmit microblock group in the Receive Processor. This flow<br />

also represents the dispatch loop used to control the microblock group.<br />

Figure 8. Receive Processor Transmit Microblock Group<br />

[Figure: <strong>DSLAM</strong>_Dequeue pulls PDUs from the pkt queue and hands them to either<br />
<strong>DSLAM</strong>_Copy_Enqueue or <strong>DSLAM</strong>_Copy_Enqueue_IXB3208, which write the outgoing packet to the<br />
TFIFO (with a prepended Intel® IXB3208 header in the IXB3208 case). The initialization blocks<br />
<strong>DSLAM</strong>_Initialize_Transmit_uBlock_Group_A and <strong>DSLAM</strong>_Initialize_Transmit_uBlock_Group_A_IXB3208<br />
supply constants, sequences, and initializations; the IXB3208 variants are conditionally<br />
compiled depending upon the Intel IXB3208 Network Processor support definition. Artwork<br />
A9552-01.]<br />


3.3 Transmit Processor Receive Microblock Group<br />

Figure 9 shows the data flow for the receive microblock group in the Transmit Processor. This flow<br />

also represents the dispatch loop used to control the microblock group.<br />

Figure 9. Transmit Processor Receive Microblock Group<br />

[Figure: Issue Receive Request pulls 64-byte (or less) segments from the RFIFO. First-segment<br />
processing extracts the Inter-IXP header; midsegment and last-segment processing use the<br />
thread mailbox to track state. A valid PDU is enqueued to the shaper/scheduler. Artwork<br />
A9563-01.]<br />

3.3.1 Initialization Microblock<br />

Initialization is performed just above the beginning of the dispatch loop. Because of inefficiencies<br />

in passing references in micro-C, this initialization is not encapsulated into its own microblock<br />

function.<br />

3.3.2 Ingress Microblock<br />

The following function header describes the ingress microblock for the receive microblock group<br />

in the Transmit Processor. This function is contained in the files<br />

$Gen_<strong>DSLAM</strong>\hyannis\workbench_projects\dslam_proc_b_uC\main0.c and<br />

$Gen_<strong>DSLAM</strong>\hyannis\workbench_projects\dslam_proc_b_uC\main1.c.<br />

// ************************************************************************<br />

// <strong>DSLAM</strong>_CopyReceive<br />

// ************************************************************************<br />

//<br />


// Description:<br />
// Move mpacket to packet buffer and update packet buffer address for<br />
// next thread<br />
//<br />
// Outputs:<br />
// Now a global : out_exception     if re-assembly failed, exception will be > 0<br />
//<br />
// Inputs/Outputs:<br />
// Now a global : rec_state         mpacket status information<br />
// Now a global : packet_buf_addr   sdram packet buffer address<br />
// Now a global : buffer_allocation_vector   vector variable to keep track of<br />
//                                  which threads have a current buffer popped<br />
//<br />
// Inputs:<br />
// my_mb          this thread's mailbox address for updated segment address<br />
// next_mb        next thread's mailbox address for updated segment address<br />
// next_tid       next thread's id for signaling<br />
// rec_req        canned receive request for IX Bus<br />
// rfifo_address  rfifo address this thread will use<br />
//<br />

3.3.3 Transform Microblocks<br />

No transformations are done in the receive microblock group in the Transmit Processor.<br />

3.3.4 Egress Microblock<br />

One egress microblock is available for the receive microblock group in the Transmit Processor:<br />

• <strong>DSLAM</strong>_CopyEnqueue (enqueue to the traffic management system)<br />

The following function header describes the egress microblock. This function is contained in the<br />

files $Gen_<strong>DSLAM</strong>\hyannis\workbench_projects\dslam_proc_b_uC\main0.c and<br />

$Gen_<strong>DSLAM</strong>\hyannis\workbench_projects\dslam_proc_b_uC\main1.c.<br />

// ************************************************************************<br />

// <strong>DSLAM</strong>_CopyEnqueue<br />

// ************************************************************************<br />

//<br />

// Description:<br />

// Get the inter-IXP header either from memory, or from the RFIFO if SOP.<br />
// Determine if the packet is flow control or data.<br />
// If flow control, then update the flow control table and set the bit<br />
// for the VC either on or off.<br />
// If data, then create a message for the tm41_receive code, and send it.<br />
//<br />
// Inputs:<br />
// Now a global : rec_state         mpacket status information<br />
// rfifo_address                    rfifo address this thread will use<br />
// Now a global : packet_buf_addr   packet buffer address<br />

//<br />

3.4 Transmit Processor Traffic Scheduler<br />

The TM4.1_Scheduler function schedules cells for transmission based on the traffic contract<br />

associated with a VC. It schedules shaped VCs, which have strict time constraints, and unshaped<br />

VCs, which are scheduled using the bandwidth left over after the shaped VCs have been processed.<br />

The TM4.1_Scheduler function looks for work either in the message queue originating from the<br />

TM4.1_Receive function or in a message queue originating from the TM4.1_Shaper function. The<br />

schedule (comprising the VP/VCs to be processed) for the shaped and unshaped VCs is stored in a<br />

calendar queue.<br />


3.5 Transmit Processor Traffic Shaper<br />

The TM4.1_Shaper function reads a single entry in the calendar queue. Each entry has room to<br />

contain a must-send VC index (associated with a VC configured as CBR @ PCR or VBR-rt @<br />

SCR), a could-send VC index (associated with a VC configured as VBR-rt @ PCR or VBR-nrt<br />

@SCR) and an unshaped VC index (associated with a VC configured as VBR-nrt @ PCR or<br />

UBR). The VC indices to be processed at any given time are prioritized in the order in which they<br />

appear: must-send, could-send, unshaped. If multiple VC indices are present in an entry, the VC<br />

index with the highest priority gets processed. All other VC indices are re-scheduled without<br />

transmitting a cell on such VCs. After transmitting a cell, the next cell in that VC’s queue will be<br />

requested to be scheduled. If a VP/VC index is not found in an entry, the shaper will wait for the<br />

duration (inter-cell gap + time it takes to process a cell for a given bandwidth).<br />
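The priority rule above reduces to a simple selection. The following sketch is illustrative only, assuming an invalid index is encoded as -1; the structure and field names are hypothetical, not the example design's actual entry layout.

```c
#include <assert.h>

/* Hypothetical calendar entry holding the three possible VC indices,
 * with -1 meaning "not present". */
typedef struct {
    int must_send;   /* CBR @ PCR or VBR-rt @ SCR */
    int could_send;  /* VBR-rt @ PCR or VBR-nrt @ SCR */
    int unshaped;    /* VBR-nrt @ PCR or UBR */
} calendar_entry;

/* Returns the VC index to service now, or -1 if the entry is empty.
 * Lower-priority indices present in the entry would be re-scheduled by
 * the caller without transmitting a cell, as described in the text. */
int shaper_select(const calendar_entry *e) {
    if (e->must_send  >= 0) return e->must_send;
    if (e->could_send >= 0) return e->could_send;
    if (e->unshaped   >= 0) return e->unshaped;
    return -1;
}
```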

3.6 Transmit Processor Traffic Transmit<br />

The ATM transmit function is passed a VC index and port number. If the VC index indicates to do<br />

a port skip (no data for the port is available to be transmitted) then a port skip is done and the<br />

transmit buffer element is updated for the next cell. If flow control is asserted for the port, then a<br />

port skip is done and the transmit buffer element is updated for the next cell. A packet is de-queued<br />

from the VC queue. The packet descriptor is then read. The fabric and cell header is generated and<br />

put into the transmit buffer. The CRC is run on the packet data for this cell and put into the transmit<br />

buffer. If this is the last cell, then the packet trailer is generated and also put into the transmit buffer.<br />

The packet is then transmitted and the transmit buffer element is updated for the next cell.<br />

Next, the number of cells enqueued for the VC is compared with the lower queue threshold for that<br />

VC. If the number of cells enqueued is less than or equal to that threshold, a flow control<br />

message is sent on the ready bus indicating to de-assert flow control. The number of cells<br />

enqueued is then decremented. If this is the last cell of a packet, the packet buffer is put back onto<br />

the free list of packet buffers.<br />
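The threshold check and decrement above can be sketched as below. This is a minimal illustration under our own assumptions (names and the per-VC state layout are hypothetical); it only captures the ordering the text describes: compare against the lower threshold first, then account for the transmitted cell.

```c
#include <assert.h>

/* Hypothetical per-VC queue state. */
typedef struct {
    unsigned cells_enqueued;  /* cells currently queued for this VC */
    unsigned low_threshold;   /* lower queue threshold for this VC */
} vc_queue_state;

/* Bookkeeping after transmitting one cell for a VC. Returns 1 if a
 * "de-assert flow control" message should be sent on the ready bus. */
int after_cell_transmit(vc_queue_state *vc) {
    /* Check against the lower threshold first, as the text describes... */
    int deassert = (vc->cells_enqueued <= vc->low_threshold);
    /* ...then decrement the count for the cell just transmitted. */
    if (vc->cells_enqueued > 0)
        vc->cells_enqueued--;
    return deassert;
}
```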

4.0 Data and Tables<br />

4.1 Receive Processor<br />

4.1.1 Input Data<br />

This section describes data input to and output from the Receive Processor in the <strong>DSLAM</strong><br />

<strong>Example</strong> <strong>Design</strong>.<br />

Two types of packets may be input to the Receive Processor:<br />

• Data Packets<br />

• Flow Control Packets<br />

Data packets consist of RFC1483-compliant AAL5 PDUs which have been segmented into ATM<br />

cells and prepended with a custom header.<br />


A flow control packet consists of a single unsegmented payload (two octets of flow control<br />

data, padded to 48 bytes) prepended with the same custom header that was prepended to the ATM<br />

cells comprising the data packets.<br />

This custom header contains three pieces of information:<br />

• Whether the packet is data or flow control<br />

• The type of physical port from which the packet was received (e.g., ATM, Frame Relay, or<br />

Ethernet).<br />

• The port number of the physical port from which the packet was received (0-2047).<br />
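Unpacking the three fields can be sketched as below. The field order (Packet Type 1 bit, Unused 18 bits, Physical Port Number 11 bits, Physical Port Type 2 bits) comes from Figure 10; the assumption that Packet Type occupies the most significant bit of the 32-bit word is ours, and the names are illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical decoding of the 4-byte custom DSLAM header, assuming the
 * figure's fields run MSB-to-LSB: type(1) | unused(18) | port(11) | ptype(2). */
typedef struct {
    unsigned packet_type;  /* 1 = data, 0 = flow control */
    unsigned port_number;  /* 0-2047 */
    unsigned port_type;    /* 1 = ATM */
} dslam_header;

dslam_header dslam_header_unpack(uint32_t w) {
    dslam_header h;
    h.packet_type = (w >> 31) & 0x1;    /* 1 bit  */
    h.port_number = (w >> 2)  & 0x7FF;  /* 11 bits */
    h.port_type   =  w        & 0x3;    /* 2 bits  */
    return h;
}
```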


Figure 10 shows how Routed IP over ATM packets are encapsulated with LLC/SNAP and AAL5<br />

to be carried by ATM cells. These ATM cells are then prepended with the custom <strong>DSLAM</strong> header<br />

by the DSL Aggregator. The ATM cells with the prepended <strong>DSLAM</strong> headers are then input to the<br />

Receive Processor.<br />

Figure 10. Receive Processor Input - Routed IP over ATM with <strong>DSLAM</strong> Header<br />

[Figure: encapsulation layers, top to bottom:<br />
• Ethernet data: Ethernet header, IP header, TCP header, payload, Ethernet trailer<br />
• IP data: IP header, TCP header, payload<br />
• LLC/SNAP encapsulation: LLC (3 bytes), OUI (3 bytes), PID (2 bytes), then IP header, TCP header, payload<br />
• AAL5 CS: CS-SDU info field, padding (0-47 bytes), UU (1 byte), CPI (1 byte), Length (2 bytes), CRC (4 bytes)<br />
• SAR sub-layer: ATM cells, each a header plus a 48-byte payload<br />
• <strong>DSLAM</strong> (input to Processor A): each 53-byte ATM cell prepended with a 4-byte <strong>DSLAM</strong> header<br />
The <strong>DSLAM</strong> header fields are: Packet Type (1 bit, 1=data), Unused (18 bits), Physical Port<br />
Number (11 bits, 0-2047), and Physical Port Type (2 bits, 1=ATM). Artwork A9553-01.]<br />


Figure 11 shows how bridged Ethernet packets are encapsulated with LLC/SNAP and AAL5 to be<br />

carried by ATM cells. These ATM cells are then prepended with the custom <strong>DSLAM</strong> header by the<br />

DSL Aggregator. The ATM cells with the prepended <strong>DSLAM</strong> headers are then input to the Receive<br />

Processor.<br />

Figure 11. Receive Processor Input - Bridged Ethernet over ATM with <strong>DSLAM</strong> Headers<br />

[Figure: same layering as Figure 10, except that the LLC/SNAP CS-SDU carries the bridged<br />
Ethernet frame: LLC (3 bytes), OUI (3 bytes), PID (2 bytes), partial Ethernet header, IP<br />
header, TCP header, payload, and Ethernet trailer. The AAL5 CS, SAR sub-layer, and 4-byte<br />
<strong>DSLAM</strong> header (Packet Type 1 bit, 1=data; Unused 18 bits; Physical Port Number 11 bits,<br />
0-2047; Physical Port Type 2 bits, 1=ATM) are identical. Artwork A9554-01.]<br />


Figure 12 shows how flow control packets are formatted. These packets are then prepended with<br />

the custom <strong>DSLAM</strong> header by the DSL Aggregator. The flow control packets with the prepended<br />

<strong>DSLAM</strong> headers are the input to the Receive Processor.<br />

Figure 12. Receive Processor Input - Flow Control Packet with <strong>DSLAM</strong> Header<br />

[Figure: a 48-byte flow control packet - Flow Control Valid (1 bit), Flow Control Message<br />
(2 bits), Flow ID (13 bits), Padding (46 bytes) - prepended with the 4-byte <strong>DSLAM</strong> header<br />
(Packet Type 1 bit, 0=flow control; Unused 18 bits; Physical Port Number 11 bits, 0-2047;<br />
Physical Port Type 2 bits, 1=ATM) as input to Processor A.<br />
Note: Flow Control Valid: 0=flow control invalid, 1=flow control valid. Flow Control Message:<br />
go/no go. Flow ID: 0-8191. Artwork A9555-01.]<br />

4.1.2 Output Data<br />

Data packets and flow control packets are output from the Receive Processor. This output is<br />

formatted so that the packets may be received by an IXB3208 chip and forwarded on to the<br />

Transmit Processor. Data and flow control packets are prepended with an “inter-IXP” header<br />

containing packet classification data for use by the Transmit Processor. Then the packets, with their<br />

prepended inter-IXP header, are segmented per IXB3208 requirements. A special header is<br />

prepended to each of these segments for consumption by the IXB3208.<br />


Figure 13 shows how PDUs and flow control packets output from the Receive Processor to the<br />

IXB3208 are formatted. These PDUs and flow control packets are prepended with an inter-IXP<br />

header (containing classification information for traffic management), segmented, and then prepended<br />

with IXB3208 headers. (IXB3208 headers follow a standard format as specified in the<br />

IXB3208 documentation.) The input to the Transmit Processor is the same data with the IXB3208<br />

headers stripped off by the IXB3208.<br />

Figure 13. Receive Processor Output to IXB3208<br />

PDU or<br />

Flow Control<br />

Reassembled/modified PDU<br />

or Flow Control<br />

multiple of 48 bytes<br />

Classification Inter-IXP Header Reassembled/modified PDU<br />

or Flow Control<br />

SAR Sub-Layer<br />

(output from<br />

Processor A)<br />

Intel ®<br />

IXB3208<br />

Header<br />

Payload<br />

64 bytes<br />

Intel<br />

IXB3208<br />

Header<br />

Payload<br />

64 bytes<br />

Intel<br />

IXB3208<br />

Header<br />

Payload<br />

64 bytes<br />

A9556-01<br />

Figure 14 shows the format of the Inter-IXP header, which is used to communicate packet<br />

classification data from the Receive Processor to the Transmit Processor.<br />

Figure 14. Inter-IXP Header Format<br />

The header is two 32-bit words (bits 31-0, quadword offsets 0 and 1):<br />

• Word 0 (offset 0): C, A, M, Type, CLP, Priority, Time, Last Quadword in Packet, Last Byte in Packet<br />

• Word 1 (offset 1): Source VC or Vport, Destination VC or Multicast Group<br />

Legend:<br />

• C - Control Indicator: 0 = command packet, 1 = data packet<br />

• A - ATM Indicator: 0 = unicast packet is not ATM, 1 = unicast packet is ATM<br />

• M - Multicast Indicator: 0 = unicast packet, 1 = multicast packet<br />

• CLP - Cell Loss Priority Indicator<br />

• Type - Connection Type: 1 = UBR, 2 = VBR, 4 = CBR<br />

• Priority - the UBR priority<br />

• Time - time IXP1200 A spent on the packet, in 1-microsecond increments<br />

• Last Quadword in Packet - the quadword offset from the start of the packet to the last quadword of data for the packet<br />

• Last Byte in Packet - the number of valid bytes in the last quadword<br />

• Source VC or Vport / Destination VC or Multicast Group - these appear to be unique numbers per IP address<br />


4.2 Transmit Processor<br />

4.2.1 Input Data<br />

This section describes data input to and output from the Transmit Processor in the <strong>DSLAM</strong><br />

<strong>Example</strong> <strong>Design</strong>.<br />

Data packets and flow control packets are input to the Transmit Processor from the Receive<br />

Processor through the IXB3208 chip. The only difference between the output of the Receive<br />

Processor and the input to the Transmit Processor is that the IXB3208 headers, which were<br />

prepended to each of the packet segments, have been removed by the IXB3208.<br />

Figure 15 shows how PDUs and flow control packets output from the Receive Processor to the<br />

IXB3208 are stripped of the IXB3208 headers and then input to the Transmit Processor.<br />

Figure 15. Transmit Processor Input from IXB3208<br />

[Figure: the Intel® IXB3208 input (Processor A output) consists of 64-byte payloads, each<br />
prepended with an IXB3208 header; the IXB3208 output (Processor B input) is the same 64-byte<br />
payloads with the IXB3208 headers removed.]<br />


4.2.2 Output Data<br />

Data packets output from the Transmit Processor consist of RFC1483-compliant AAL5 PDUs<br />

which have been segmented into ATM cells and prepended with a custom header.<br />
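The AAL5 segmentation implied here follows standard AAL5 arithmetic: the CS-SDU is padded so that, together with the 8-byte trailer (UU, CPI, Length, CRC), it fills an integral number of 48-byte cell payloads. The sketch below shows that arithmetic; it is generic AAL5 math, not code from the example design.

```c
#include <assert.h>

/* Padding (0-47 bytes) so that the CS-SDU plus the 8-byte AAL5 trailer
 * is a multiple of the 48-byte ATM cell payload. */
unsigned aal5_pad_bytes(unsigned sdu_len) {
    unsigned rem = (sdu_len + 8) % 48;
    return rem ? 48 - rem : 0;
}

/* Number of ATM cells needed to carry the padded CS-PDU. */
unsigned aal5_cell_count(unsigned sdu_len) {
    return (sdu_len + aal5_pad_bytes(sdu_len) + 8) / 48;
}
```

For example, a 41-byte SDU needs 47 bytes of padding because the trailer pushes it just past one cell payload.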

Figure 16 shows how PDUs are completed, segmented, and output from the Transmit Processor to<br />

the DSL Aggregator with the prepended custom <strong>DSLAM</strong> header, in the same format in which the<br />

data was input to the Receive Processor.<br />

Figure 16. Transmit Processor Output<br />

[Figure: processing layers, top to bottom:<br />
• Processor B input: 64-byte payloads<br />
• Reassembly: Inter-IXP header plus AAL5 PDU<br />
• AAL5 CS: CS-SDU info field, padding (0-47 bytes), UU (1 byte), CPI (1 byte), Length (2 bytes), CRC (4 bytes)<br />
• SAR sub-layer: ATM cells, each a header plus a 48-byte payload<br />
• <strong>DSLAM</strong> (output from Processor B): each 53-byte ATM cell prepended with a 4-byte <strong>DSLAM</strong><br />
header (Packet Type 1 bit, Unused 18 bits, Physical Port Number 11 bits, Physical Port Type 2 bits)<br />
Artwork A9558-01.]<br />

4.3 Connection, Routing, and Hash Tables

The Receive Processor contains two primary tables:

• Connection Table
• Routing Table

The connection table is accessed by using a pre-computed hash table and algorithm. The ATM VP (virtual path), VC (virtual channel), and physical port numbers contained in the input custom header are used to create a hash key that is then used to look up the connection entry in the table.

Each entry in the connection table contains the following:

• Status of the ATM virtual circuit (VP/VC)


• Network model (IP Longest Prefix Match (LPM) or IP 5-Tuple)
• Buffer index, which indicates where to store incoming packet segments

The routing table is used to determine the IP destination of incoming packets and the format to which they should be converted. This table is accessed by one of two methods, as determined from the network model in the connection table:

• IP Longest Prefix Match (LPM)
• IP 5-Tuple

Each entry in the routing table contains the following:

• Destination IP Address
• Netmask
• Gateway
• Physical Output Port Number
• VP
• VC
• LLC/SNAP Header

Table 1. L3 Routing Table - IP Route Example

4.3.1 Rate Manager

The Rate Manager creates both the VC and Port tables. It allows easy customization of the entries and is called from the initialization script TM41.ind.

4.3.1.1 Port Table Entry

//////////////////////////////////////////////////////////////////////////////
// Port table entry
struct Unshaped_VC_ {
    UINT unshaped_unused_0: 8; // For alignment
    UINT WRR_count: 8;         // WRR weight associated with unshaped 0-15 VC (cells)
    UINT vc_index: 16;         // Base address of queue associated with an unshaped 0-15 VC
};
typedef struct Unshaped_VC_ Unshaped_VC;

// 32 lw per entry * 256 ports * 4 = 32 Kbytes
struct Port_Entry_ {
    // LongWord - 0
    UINT entry_valid: 1;                // Message queue base address
    UINT port_unused_0: 9;              // For alignment
    UINT current_unshaped_queue: 5;     // Queue # being processed (0-15)
    UINT q_with_pkts_vector: 17;        // Indicator for 16 VC queues and 1 First_Chance_Q
    // LongWord - 1
    UINT inter_cell_gap: 16;            // Shaped wheel slots, indicates port bandwidth
    UINT last_cell_sent: 16;            // Shaped wheel slot
    // LongWord - 2
    UINT current_WRR_count: 8;          // Decrementing WRR count
    UINT WRR_count: 8;                  // WRR weight assigned to this VC
    UINT current_unshaped_VC_index: 16; // Unshaped VC index currently being processed
    // LongWord - 3:18
    Unshaped_VC unshaped_VC[16];        // 16 unshaped queues and their associated weights
    // LongWord - 19:31
    UINT port_unused_1[13];             // For alignment (LongWord 19: Intercellgap and Lastcellsent)
};

4.3.1.2 VC Table Entry

//////////////////////////////////////////////////////////////////////////////
// VC table entry
// 8 lw per entry * 4096 VCs * 4 = 131 Kbytes
struct VC_Entry_ {
    // LongWord - 0
    UINT CRC_Residue: 32;        // CRC-32 residue
    // LongWord - 1
    UINT vc_unused_0: 16;        // For alignment
    UINT queue_threshold_lo: 8;  // Cells
    UINT num_cells_enqueued: 8;  // Cells
    // LongWord - 2
    UINT entry_valid: 1;         // Set to 0 if VC is not configured
    UINT vc_unused_1: 7;         // For alignment
    UINT unshaped_q_position: 4; // Offset in the port table's unshaped_VC[] where this VC is located
    UINT wheel_num: 2;           // Totally 4 wheel-pairs
    UINT type: 2;                // Shaped CBR, shaped VBR-rt, shaped VBR-nrt, unshaped classes
    UINT port_num: 8;            // Physical port #
    UINT queue_threshold_hi: 8;  // Cells
    // LongWord - 3
    UINT last_SCR_cell_sent: 16; // Slot location in wheel
    UINT current_MBS: 16;        // Unused bits padding for alignment
    // LongWord - 4
    UINT last_PCR_cell_sent: 16; // Slot location in wheel
    UINT PCR: 16;                // Shaped wheel slots
    // LongWord - 5
    UINT MBS: 16;                // Cells
    UINT SCR: 16;                // Shaped wheel slots
    // LongWord - 6
    UINT atm_header;             // AAL5 header
    // LongWord - 7
    UINT vc_unused_2;            // For alignment (being used for AAL5 header)
};

5.0 Project Configuration / Modifying the Example Design

This example design can be assembled with a variety of options, all of which are configurable in the header files project_config.h and system_config.h.

6.0 Testing Environment

The testing environment is the same as the execution environment; for details, refer to Section 1.3 of this document.

7.0 Simulation Support (Scripts, etc.)

The three Developer WorkBench projects in this design can be run using the simulator within the DWB. Test streams, foreign model DLLs, and the simulator scripts that support them can be used to exercise the projects by providing various network traffic inputs to the WorkBench applications.

7.1 Simulation for the Receive Processor Project

7.1.1 Initialization Scripts

The file dslam_proc_a.ind is the primary initialization script for the Receive Processor project. It invokes all other simulator scripts, including the dslam_perf_proc_a.ind script and the test case scripts described below. The primary functions performed by this script are:

• Memory Map Initialization
• Queue Descriptor Table Initialization
• Scratch Memory Initialization
• SRAM Task Message Area Initialization
• Scratch Task Message Area Initialization
• Fast Port Index Initialization in the Queue Descriptor Table
• Route Table Manager Initialization
• Hash Table Manager Initialization
• Route Additions to the Route Table
• Invocation of a Selected Test Case Script
• Start of Packet Reception


7.1.2 Test Case Scripts

Eight of the ten test cases for the Receive Processor project can be executed using either NetworkTraffic.dll or NetSim.dll. Test cases 008 (IP 5-Tuple lookup) and 010 (which simulates pass-through of flow control packets) must be executed using NetworkTraffic.dll. (At the time of this writing, NetSim.dll did not support simulation of flow control packets or packets for 5-Tuple lookup.) To switch between NetworkTraffic.dll and NetSim.dll, the .dll file must be specified in the WorkBench under Simulation/IX Bus Device Simulator/Port I/O. The following table shows the test case descriptions and the associated script files, which must be specified through the initialization script file, dslam_proc_a.ind.

Table 2. Test Case Descriptions and their Files (all file names start with dslam_proc_a_)

Test Case | Description        | NetworkTraffic.dll | NetSim.dll               | NetSim TCS File
001       | Vary Physical Port | test_case_001.ind  | test_case_001_netsim.ind | test_case_001.tcs
002       | Vary VP/VC         | test_case_002.ind  | test_case_002_netsim.ind | test_case_002.tcs
003       | Vary IP            | test_case_003.ind  | test_case_003_netsim.ind | test_case_003.tcs
004       | Minimum Packet     | test_case_004.ind  | test_case_004_netsim.ind | test_case_004.tcs
005       | Maximum Packet     | test_case_005.ind  | test_case_005_netsim.ind | test_case_005.tcs
006       | Medium Packet      | test_case_006.ind  | test_case_006_netsim.ind | test_case_006.tcs
007       | IP LPM Lookup      | test_case_007.ind  | test_case_007_netsim.ind | test_case_007.tcs
008       | IP 5-Tuple Lookup  | test_case_008.ind  | unsupported              | unsupported
009       | MPoA Encapsulation | test_case_009.ind  | test_case_009_netsim.ind | test_case_009.tcs
010       | Flow Control       | test_case_010.ind  | unsupported              | unsupported

Each of the above scripts performs the following functions:

• Initialization of Either NetworkTraffic.dll or NetSim.dll
• Addition of Entries to the Connection Table, Including VPI, VCI, and Network Model
• Generation of ATM Frames for Simulated Input (for use with NetworkTraffic.dll)
• Generation of ATM Frames for Simulated Input from a NetSim Traffic Configuration Specification (.tcs) File (for use with NetSim.dll)
• Setup of 5-Tuple Rules for IP Route Lookup Using the 5-Tuple Lookup Algorithm


7.2 Simulation for the Transmit Processor Project

7.2.1 Initialization Scripts

The file dslam_proc_b.ind is the primary initialization script for the Transmit Processor project. It invokes all other simulator scripts, including the tm41.ind script and the test case scripts described below. The primary functions performed by this script are:

• Memory Map Initialization
• Scratch Memory Initialization
• VC Table Initialization via Rate Manager .dll
• Port Table Initialization via Rate Manager .dll
• VPort Ready Vector Initialization
• VC Queue Heads Initialization
• Traffic Params Initialization
• Flow Control Vector Initialization
• Invocation of a Selected Test Case Script
• Start of Packet Reception
• Packet Buffer Address Mailboxes Initialization
• Descriptor Mailboxes Initialization
• Rate Manager Initialization (including population of the VC table)

7.2.2 Test Case Scripts

Test input for the Transmit Processor project is assigned through the NetworkTraffic.dll. The following table shows the test case descriptions and the associated script files that must be specified through the initialization script file, dslam_proc_b.ind:


Table 3. Test Case Descriptions and their Network Traffic dlls

Test Case | Description                                        | NetworkTraffic.dll
011       | Unspecified Bit Rate (single 64-byte cell input)   | dslam_proc_b_test_case_011.ind
012       | Variable Bit Rate (single 64-byte cell input)      | dslam_proc_b_test_case_012.ind
013       | Constant Bit Rate (single 64-byte cell input)      | dslam_proc_b_test_case_013.ind
014       | All bit rates (single 64-byte cell input)          | dslam_proc_b_test_case_014.ind
015       | Unspecified Bit Rate (multiple 64-byte cell input) | dslam_proc_b_test_case_015.ind
016       | Variable Bit Rate (multiple 64-byte cell input)    | dslam_proc_b_test_case_016.ind
017       | Constant Bit Rate (multiple 64-byte cell input)    | dslam_proc_b_test_case_017.ind
018       | All bit rates (multiple 64-byte cell input)        | dslam_proc_b_test_case_018.ind
019       | Flow-control (single 64-byte cell input)           | dslam_proc_b_test_case_019.ind

Each of the above scripts performs the following functions:

• Initialization of the NetworkTraffic.dll
• Generation of Pre-Classified ATM Frames Segmented into 64-Byte Segments for Simulated Input (for use with NetworkTraffic.dll)

Currently, test input for the Transmit Processor project is assigned through streams built into the Developer WorkBench. These streams may be assigned through Simulation/IX Bus Device Simulator/Port I/O. The following table shows the test case descriptions and the associated stream files:

Table 4. Test Case Descriptions and their Stream Files

Test Case | Description          | WorkBench Stream File
011       | Unspecified Bit Rate | dslam_proc_b_test_case_0011.strm
012       | Variable Bit Rate    | dslam_proc_b_test_case_0012.strm
013       | Constant Bit Rate    | dslam_proc_b_test_case_0013.strm
014       | All Bit Rates        | dslam_proc_b_test_case_0014.strm


7.3 Simulation for the Receive/Transmit Processor Project

7.3.1 Initialization Script

The file dslam_proc_2ixp.ind is the primary initialization script for the Receive/Transmit Processor project. It invokes all other simulator scripts, including the dslam_perf_proc_a.ind script and the test case scripts described below. The primary functions performed by this script are a combination of the dslam_proc_a.ind and dslam_proc_b.ind scripts:

• Memory map initialization
• Queue descriptor table initialization
• Scratch memory initialization
• SRAM task message area initialization
• Scratch task message area initialization
• Fast port index initialization in the queue descriptor table
• Route table manager initialization
• Hash table manager initialization
• Rate manager initialization
• Route additions to the route table
• Invocation of a selected test case script
• Start of packet reception

7.3.2 Test Case Scripts

Four of the ten test cases for the 2-processor project can be executed using either the NetworkTraffic.dll or the NetSim.dll. The remainder must be executed using the NetworkTraffic.dll. (At the time of this writing, the NetSim.dll did not support simulation of flow control packets or packets for 5-Tuple lookup.) To switch between the NetworkTraffic.dll and the NetSim.dll, the .dll file must be specified in the WorkBench under Simulation/IX Bus Device Simulator/Port I/O.

The following table shows the test case descriptions and the associated script files that must be specified through the initialization script file, dslam_proc_2ixp.ind.


Table 5. Test Case Descriptions and their Files (all file names start with dslam_proc_2ixp_)

Test Case | Description                           | NetworkTraffic.dll | NetSim.dll               | NetSim TCS File
021       | IP LPM with UBR                       | test_case_021.ind  | test_case_021_netsim.ind | test_case_021.tcs
022       | IP LPM with VBR                       | test_case_022.ind  | test_case_022_netsim.ind | test_case_022.tcs
023       | IP LPM with CBR                       | test_case_023.ind  | test_case_023_netsim.ind | test_case_023.tcs
024       | IP 5TUPLE with UBR                    | test_case_024.ind  | test_case_024_netsim.ind | test_case_024.tcs
025       | IP 5TUPLE with VBR                    | test_case_025.ind  | test_case_025_netsim.ind | test_case_025.tcs
026       | IP 5TUPLE with CBR                    | test_case_026.ind  | test_case_026_netsim.ind | test_case_026.tcs
027       | IP LPM with CBR/VBR/UBR               | test_case_027.ind  | test_case_027_netsim.ind | test_case_027.tcs
028       | IP 5TUPLE with CBR/VBR/UBR            | test_case_028.ind  | unsupported              | unsupported
029       | IP LPM and IP 5TUPLE with CBR/VBR/UBR | test_case_029.ind  | test_case_029_netsim.ind | test_case_029.tcs
030       | Flow Control                          | test_case_030.ind  | unsupported              | unsupported

8.0 Performance

The overall performance of the system is currently limited by the traffic shaping/scheduling and transmit portion of the application, and has been measured at speeds just over 400 Mb/s.

9.0 Limitations

The current implementation of the example design only allows traffic to pass from the DSL aggregator to the WAN, and not vice versa. The design could be extended relatively easily to cover traffic in the other direction, but the input from the WAN would need to deliver a custom header, similar to the one provided by the aggregator, with information on the physical ports to send to. Similarly, the design implements only the AAL5 segmented ATM transport protocol for the WAN. The code has hooks built in for handling both Ethernet and Frame Relay, which can be filled in to that end. This version also does not implement the StrongARM side of packet exception handling, nor does it implement any PCI functionality. The example design also uses a custom header for physical port information which, as of the initial release of this document, cannot be produced with our current hardware. Therefore, the design has had only limited software testing and has, as of this time, not been proven on hardware. The custom header is also just that: custom. It is fully dependent upon the customer's implementation, and the associated code would have to be modified to comply with that design.

10.0 Extending the Example Design

This implementation allows traffic to pass from the DSL aggregator to the WAN. The design could easily be extended to cover traffic in the reverse direction. To do so, data from the WAN would require a custom header, similar to the one provided by the aggregator, indicating the physical port to which the data should be sent.

This design implements the AAL5 segmented ATM transport protocol on the WAN interface. It could be extended to Ethernet and Frame Relay through the built-in hooks. PCI support has not been implemented.

This design uses a custom header for assumed physical port information and can easily be modified to reflect actual hardware configurations.

StrongARM processing of exception packets has not been implemented.

The following "hooks" or support exist in the design to make it easier to add more functionality:

• Ethernet
• Frame Relay
• 2nd Fast Port Use
• Single Chip


11.0 Acronyms & Definitions

Term             | Definition
AAL              | ATM Adaptation Layer
AAL5             | ATM Adaptation Layer 5 (data)
ABR              | Available Bit Rate
API              | Application Programming Interface
ARP (or ATM ARP) | Address Resolution Protocol
ATM              | Asynchronous Transfer Mode
CBR              | Constant Bit Rate
CRC              | Cyclic Redundancy Check
CS (or AAL5-CS)  | Convergence Sub-Layer
DSLAM            | Digital Subscriber Line Access Multiplexer
DWB              | Developer's Workbench - integrated development environment for the IXP1200 Network Processor
IP               | Internet Protocol
IXB3208          | IX Bus device used in multiprocessor IXP1200 systems where the intention is to scale for additional port count. Supports speeds up to 83 MHz.
MAC              | Media Access Controller
PDU              | Protocol Data Unit
Transactor       | IXP1200 Software Simulator
UBR              | Unspecified Bit Rate
VBR              | Variable Bit Rate
VC               | Virtual Channel
VP               | Virtual Path
WAN              | Wide Area Network


12.0 Reference Documents

Title | Description
RFC1483.htm | Multiprotocol Encapsulation over ATM Adaptation Layer 5 Request for Comments document - bundled with source code.
README.TXT | Release notes bundled with source code. Note that there are two README.TXT files. One is in the atm_ether project source directory and is a "Quick Start and Source Code Guide." The second README.TXT file can be found in the vxworks sub-directory and describes how to run the project on hardware.
DSLAM Router/Traffic Shaper SW Design Specification | White Paper
A Network Processor-Based Next Generation DSLAM | White Paper
