
Instrument type: Integrated Project
Proposal No. IST-511780

MUPPET
Multi-Partner European Test Beds for Research Networks

Priority name: Information Society Technologies

Deliverable D3.1
Test Bed Overview

Due date of deliverable: September 30, 2004
Actual submission date: October 01, 2004
Start date of project: July 1, 2004    Duration: 36 months

Lead contractor for this deliverable: T-Systems/Deutsche Telekom

Revision 1

Project co-funded by the European Commission within the Sixth Framework Programme (2002-2006)

Dissemination Level
PU  Public  X
PP  Restricted to other programme participants (including the Commission Services)
RE  Restricted to a group specified by the consortium (including the Commission Services)
CO  Confidential, only for members of the consortium (including the Commission Services)


Abstract

This deliverable gives a preliminary description of the overall design, the current status, the functions provided to the MUPPET project and the interfaces/functions for the inter-test bed connections of the Southern, Central, Northern, Western and Eastern European local test beds. Furthermore, a description of the network functions, interface functions and local PoPs of the involved NRENs is included. Last but not least, the MUPPET partners who will provide applications and are located remotely from the five local test beds have added their requirements on the transport network and a description of the local NREN PoPs to be used in MUPPET. With this information, covering the views of all networking participants, a first picture of the local test beds and their interconnections is complete.

In addition to these topics strictly related to the MUPPET project, this deliverable includes a summary of the participation of three MUPPET partners (Marconi ONDATA and Marconi SpA, TILAB, T-Systems/Deutsche Telekom) in the OIF World Interoperability Tests and Demonstrations at SuperComm 2004, a pre-project activity that was an excellent starting point for MUPPET.

In the following, a short summary of the five test bed descriptions and their interconnections is given.

Local test beds summary:

1. Southern Europe test bed: Composed of ASON/GMPLS transport network domains (optically transparent and SDH-based) and an IP/MPLS domain, interconnected via the control plane interfaces E-NNI and UNI, respectively. Additional E-NNI connections will be established to the TILAB/NOBEL network domain. The following application labs will be connected to these network domains: Grid Node, SAN Lab and Video Communication Lab.
2. Central Europe test bed: An ASON/GMPLS network domain, interconnected with the GSN+Demonstrator domains via UNI and E-NNI control plane interfaces. Broadband video/HDTV applications are used to visualise the network functions.
3. Northern Europe test bed: The Acreo National Broadband Test bed is built according to the GMPLS architecture, including control of multiple layers (IP/Ethernet/optical) by means of one common control plane.
4. Western Europe test bed: The TID test bed is an IP/MPLS network composed of Layer-2 switches and IP routers.
5. Eastern Europe test bed: The PSNC test bed is a broadband Ethernet network (GE and 10GE links) over a WDM transport infrastructure, following a low-overhead architecture for multi-gigabit networks.

Test bed interconnections summary:

1. Southern Europe test bed - GARR: A short-term interconnection via an ATM link is possible; the final goal is a dark fibre interconnection. Furthermore, a control plane E-NNI interconnection based on an IPSec tunnel between the Southern and Central Europe test beds has been up and running since April 2004, over which the OIF World Interoperability Tests and Demonstrations were performed.
2. Central Europe test bed - DFN: Short-term interconnection via an Ethernet link (fixed data volume); a dark fibre interconnection is currently under investigation. Today the E-NNI control plane connection to the Southern Europe test bed is the only test bed interconnection in use.
3. Northern Europe test bed - NORDUnet: The connectivity through the Nordic NRENs (SUNET/NORDUnet) to GEANT is still an open issue.
4. Western Europe test bed - RedIRIS: Already interconnected via an ATM link (PVC, 34 Mbit/s).
5. Eastern Europe test bed - PIONIER: This interconnection will be set up using a dedicated Ethernet link or a VLAN with guaranteed bandwidth.



6. NRENs - GEANT: All five NRENs hosting the links to the MUPPET test beds, i.e. RedIRIS in Spain, GARR in Italy, DFN in Germany, PSNC in Poland and NORDUnet for the Nordic countries, are connected at 10 Gbit/s to GÉANT, ensuring that testing at high speed can be planned in the near future. The successor of GÉANT will offer even higher link speeds and richer services, in particular in terms of provisioning time and degree of automation.

This first summary clearly shows that integrating these different kinds of test beds, over different NREN networks and GEANT, into an overall European test bed will require considerable effort, close cooperation and pragmatic solutions.


Document Information

Status and Version: Version 1.00, final
Date of Issue: 01.10.2004
Dissemination level: Public

Author(s):
Main author: Hans-Martin Foisel (T-Systems)
Labs: Loa Andersson (ACREO); Javier Hurtado, Isidro Cabello Medina, Jesus Felipe Lobo Poyo (TID); Giovanni Carones, Carlo Cavazzoni, Marco Vitale, Pino Castrogiovanni, Marzio Alessi, Roberto Morro (TILAB); Michal Przybylski (PSNC); Christoph Gerlach, Fritz Joachim Westphal (T-Systems)
NRENs: Juergen Rauschenbach (DFN); Mauro Campanella (GARR); Miguel Angel Sotos (RedIRIS)
Vendors: Jan Spaeth (ONDATA); Piergiorgio Sessarego (MCSPA)
Application providers: Lars Dittmann, Henrik Wessing (DTU); Rodolfo Baroso, Valentina Bruno (CSP); Peter Holleczek, Susanne Naegele-Jackson (FAU)

Approved by: Jan Spaeth, Project co-ordinator


Table of Contents

Abstract
Document Information
Table of Contents
1 Introduction
1.1 Purpose and Scope
1.2 Reference Material
1.2.1 Reference Documents
1.2.2 Acronyms and Abbreviations
1.3 Document History
1.4 Document overview
2 Optical Internetworking Forum: World Interoperability Tests and Demonstration at SuperComm 2004 - Participation of MUPPET partners
2.1 Introduction
2.2 Overall test description
2.3 Ethernet over SDH/SONET adaptation interoperability tests
2.4 Example of local lab tests
2.5 Configuration of Test bed Environments at TILAB and DT's labs
3 MUPPET test bed descriptions
3.1 Southern Europe test bed
3.1.1 The TILAB Networking Test Bed
3.1.1.1 IP/MPLS layer
3.1.1.2 ASON/GMPLS layer
3.1.1.3 NOBEL Domain (IP/MPLS and ASON/GMPLS Layers)
3.1.2 The TILAB Grid Node
3.1.3 The TILAB Video Communication Lab
3.1.4 The TILAB SAN Lab
3.1.4.1 Physical Location and Connection to the Project's resources
3.1.4.2 Architecture
3.1.4.3 CDN Architecture
3.1.4.4 TILAB SAN Lab Components
3.1.4.5 SAN Lab Developments
3.1.5 CSP
3.2 Central Europe test bed
3.2.1 FAU
3.3 Northern Europe test bed
3.3.1 Background
3.3.2 Test bed description
3.3.3 Architecture
3.3.4 Networking technologies
3.3.5 Control plane
3.3.6 Data plane
3.3.7 Context



3.3.8 Interconnections
3.3.8.1 Local NRENs
3.3.8.2 Control Plane
3.3.8.3 Data plane
3.3.9 Possible tests/demos
3.3.10 Future
3.3.11 DTU
3.4 Western Europe test bed
3.5 Eastern Europe test bed
3.5.1 Eastern Europe test bed phase 1
3.5.2 Eastern Europe test bed phase 2
4 Test bed interconnections
4.1 Southern Europe test bed - GARR
4.1.1 Southern Europe test bed access points
4.1.2 GARR
4.2 Central Europe test bed - DFN
4.2.1 Central Europe test bed access points
4.2.2 DFN
4.3 Northern Europe test bed - NORDUnet
4.3.1 Northern Europe test bed access points
4.3.2 NORDUnet
4.4 Western Europe test bed - RedIRIS
4.4.1 Western Europe test bed access points
4.4.2 RedIRIS
4.4.2.1 Capabilities available for MUPPET
4.4.2.2 Description of the PoP to be used by the TID test bed
4.5 Eastern Europe test bed - PIONIER
4.6 GEANT
4.6.1 The GÉANT network
4.6.2 Capability of transport network functions available for MUPPET
5 Conclusion


1 Introduction

1.1 Purpose and Scope

This deliverable has two objectives:

• To give a first comprehensive description of the overall design, the current status, the functions provided to the MUPPET project and the interfaces/functions for the inter-test bed connections of the Southern, Central, Northern, Western and Eastern European local test beds
• To give a first description of the potential test bed interconnections (and the application sites related to these), including the network functions provided by the NRENs and GEANT

This deliverable therefore forms the basis for the next activities concerning the integration of the local test beds into the European-scale MUPPET test bed, for the architectural investigations and for the practical implementation of the test bed interconnections and integration.

1.2 Reference Material

1.2.1 Reference Documents

[1] ITU-T/ASON: www.itu.int/ITU-T
[2] IETF/GMPLS: www.ietf.org
[3] OIF: www.oiforum.com
[4] H.-M. Foisel, ECOC 2004, "Optical Internetworking Forum: World Interoperability Tests and Demonstrations", post-deadline paper Th4.5.2
[5] OIF UNI 1.0 Release 2, "OIF-UNI-01.0-R2-Common - User Network Interface (UNI) 1.0 Signalling Specification, Release 2: Common Part" and "OIF-UNI-01.0-R2-RSVP - RSVP Extensions for User Network Interface (UNI) 1.0 Signalling, Release 2"
[6] OIF E-NNI 1.0 Signalling, "OIF-E-NNI-Sig-01.0 - Intra-Carrier E-NNI Signalling Specification"
[7] ITU-T G.8080/Y.1304, "Architecture for the Automatically Switched Optical Network (ASON)"
[8] ITU-T G.7713, "Distributed Call and Connection Management (DCM)"; ITU-T G.7713.1, "Distributed Call and Connection Management (DCM) based on PNNI"; ITU-T G.7713.2, "Distributed Call and Connection Management: Signalling mechanism using GMPLS RSVP-TE"; ITU-T G.7713.3, "Distributed Call and Connection Management: Signalling mechanism using GMPLS CR-LDP"
[9] ITU-T G.7715, "Architecture and Requirements for Routing in the Automatically Switched Optical Networks"; ITU-T G.7715.1, "ASON Routing Architecture and Requirements for Link State Protocols"
[10] ITU-T G.7041, "Generic Framing Procedure (GFP)"
[11] ITU-T G.806, "Characteristics of Transport Equipment - Description Methodology and Generic Functionality"
[12] ITU-T G.707, "Network Node Interface for the Synchronous Digital Hierarchy (SDH)"
[13] ITU-T G.783, "Characteristics of Synchronous Digital Hierarchy (SDH) Equipment Functional Blocks"
[14] ITU-T G.7042, "Link Capacity Adjustment Scheme (LCAS)"
[15] ITU-T G.709, "Interfaces for the Optical Transport Network (OTN)"
[16] Clear Pond: www.clearpondtech.com



[17] VIOLA, a German national-scale test network: www.viola-testbed.de
[18] NorthernLight, a Nordic lambda network facility: www.nordu.net/development/northernlight.html
[19] DANTE network: http://www.dante.net

1.2.2 Acronyms and Abbreviations

ACL Access List
ASON Automatic Switched Optical Network
ASTN Automatic Switched Transport Network
ATM Asynchronous Transfer Mode
AToM Any Transport over MPLS
BGP Border Gateway Protocol
CDN Content Distribution Network
CP Control Plane
CPU Central Processing Unit
CSPF Constrained Shortest Path First
CWDM Coarse WDM
dB Decibel
DCN Data Communication Network
Demux Demultiplexer
DHCP Dynamic Host Configuration Protocol
DSCP Differentiated Services Code Point
DP Data Plane
DWDM Dense WDM
ELH Extended LH
EMS Element Management System
E-NNI External Network Node Interface
ETH Ethernet
FC Fibre Channel
FE Fast Ethernet
FEC Forward Error Correction
Gbps Gigabit per second
GE / GbE Gigabit Ethernet
GFP Generic Framing Procedure
GMPLS Generalized Multi-Protocol Label Switching
GUI Graphical User Interface
HD(D) Hard Disk (Drive)
HDTV High Definition Television
HW Hardware
IA Implementation Agreement
IDE Integrated Drive Electronics
IEEE Institute of Electrical and Electronics Engineers
IETF Internet Engineering Task Force
IGMP Internet Group Management Protocol
I-NNI Internal Network Node Interface
IP Internet Protocol
iSCSI SCSI over IP
ITU-T International Telecommunication Union - Telecommunication Standardization Sector
JBOD Just a Bunch Of Disks
KB/MB/GB/TB Kilo/Mega/Giga/Tera-Byte



LAN PHY Local Area Network Physical Layer (Interface)
LCAS Link Capacity Adjustment Scheme
LH Long Haul
LION Layers Interworking in Optical Networks (former IST project)
LMP Link Management Protocol
LSP Label Switched Path
MAC Media Access Control
MAN Metropolitan Area Network
MPEG Moving Pictures Experts Group
MPLS Multi-Protocol Label Switching
MUPPET Multi-Partner European Test Beds for Research Networking
NE Network Element
NMS Network Management System
NNI Network-Network Interface (also known as Network Node Interface)
NOBEL Next generation Optical network for Broadband European Leadership
NREN National Research and Education Network
O/E Optical/Electrical
OADM Optical Add-Drop Multiplexer
OIF Optical Internetworking Forum
OSPF Open Shortest Path First
OXC Optical Cross-Connect
PDU Protocol Data Unit
PoP Point of Presence
PVC Permanent VC
QoS Quality of Service
RAID Redundant Array of Independent Disks
RFC Request for Comments
RSVP-TE Resource Reservation Protocol - Traffic Engineering extension
RTP Real-time Transport Protocol
SAN Storage Area Network
SCSI Small Computer Systems Interface
SDH Synchronous Digital Hierarchy
SNMP Simple Network Management Protocol
SONET Synchronous Optical Network
STM Synchronous Transport Module
SW Software
TCP Transmission Control Protocol
TLS Transparent LAN Service
TOS Type Of Service
UDP User Datagram Protocol
ULH Ultra LH
UNI User Network Interface
URL Uniform Resource Locator
VC Virtual Container
VCAT Virtual Concatenation
VLAN Virtual LAN
VPLS Virtual Private LAN Services
VPN Virtual Private Network
W3C WWW Consortium
WDM Wavelength Division Multiplexing
WRED Weighted Random Early Detection
WWW World Wide Web
XC Cross-Connect



1.3 Document History

Version | Date | Authors | Comment
0.01 | 22/07/2004 | H.-M. Foisel | Initial document structure
0.02 | 09/08/2004 | H.-M. Foisel | Updated document structure based on comments by WP3 participants and the new MUPPET template
0.03 | 15/09/2004 | MUPPET partners | First draft document
0.04 | 27/09/2004 | MUPPET partners | Second draft document
1.00 | 01/10/2004 | MUPPET partners, Hans-Martin Foisel, Christoph Gerlach, Ronald Müller | Final version of the document

1.4 Document overview

This document is structured into three main parts:

Section 2 contains a short summary of the participation of three MUPPET partners (Marconi, TILAB, T-Systems/Deutsche Telekom) in the OIF World Interoperability Tests and Demonstrations at SuperComm 2004, which fosters and supports the ongoing activities within MUPPET.

Section 3 describes the overall design, the current status, the functions provided to the MUPPET project and the interfaces/functions for the inter-test bed connections of the Southern, Central, Northern, Western and Eastern European local test beds. Additionally, the MUPPET partners who will provide applications and are located remotely from the five local test beds have added their requirements on the transport network and a description of the local NREN PoPs to be used in MUPPET.

Section 4 includes the network functions, interface functions and local PoP descriptions of the involved NRENs and the local test beds, forming the basis for the next activities related to the practical implementation of the test bed interconnections.



2 Optical Internetworking Forum: World Interoperability Tests and Demonstration at SuperComm 2004 - Participation of MUPPET partners

2.1 Introduction

The evolution of high-bandwidth data applications and optical technologies has posed challenging deployment issues for today's carriers and their embedded legacy management and operations systems. As data and optical network convergence issues came to light, carriers began to evaluate and introduce intelligent control plane (CP) mechanisms and enhanced data stream mappings that deliver operational benefits in a multi-vendor environment. However, the lack of standardisation of previous implementations poses a different set of challenges for multi-vendor and multi-domain interoperability.

A number of standardisation bodies and forums [1-3], among them the ITU-T (Automatically Switched Optical Networks/ASON), the IETF (Generalized Multi-Protocol Label Switching/GMPLS) and the Optical Internetworking Forum/OIF (control plane interfaces: User-Network Interface/UNI and External Network-Network Interface/E-NNI), are aiming to make this interoperability happen in a standard-compliant way. The OIF additionally performs the next step, which is of paramount importance for future deployments: interoperability tests of new network functions. This chapter reports on the first worldwide distributed interoperability tests, a joint effort of vendors and carriers in the first half of 2004, and particularly on the role of the MUPPET partners Marconi ONDATA (Germany) and Marconi SpA (Italy), TILAB/Telecom Italia and T-Systems/Deutsche Telekom [4]. This participation was seen as a pre-MUPPET activity, well aligned with the planned work in MUPPET, and was taken as a great opportunity to implement preliminary interoperable solutions on a worldwide scale, which serve as a basis for and strongly support the ongoing work within MUPPET.

2.2 Overall test description

These interoperability tests, based on a multi-domain network scenario, covered two main areas:

• CP interfaces: UNI (client network - transport network domain) and E-NNI (between transport network domains), based on the OIF specifications [5-6], which are aligned with the corresponding ITU-T standards [7-9]
• Most efficient Ethernet over SDH/SONET adaptation, based on the ITU-T Recommendations related to GFP-F, VCAT and LCAS [10-15]

The tests were designed on a global stage with seven carrier labs across three continents, interworking through intelligent control plane mechanisms in a multi-vendor environment of fifteen vendor participants (Tables 2.2-1 and 2.2-2).

Table 2.2-1: Carrier participants

Asia: China Telecom, KDDI, NTT
Europe: Deutsche Telekom, Telecom Italia
USA: AT&T, Verizon



Table 2.2-2: Vendor participants

Alcatel, Marconi ONDATA and Marconi SpA, Fujitsu, ADVA Optical Networking, NEC, Lucent Technologies, Avici Systems, Nortel Networks, Mahi Networks, CIENA Corporation, Siemens AG, Tellabs, Cisco Systems, Sycamore Networks, Turin Networks

All seven carrier labs were interconnected by CP connections based on IPSec tunnels over the public Internet (Figure 2.2-1). Additionally, three labs (Verizon, AT&T and Deutsche Telekom) were interconnected by a VC-4/STS-3c data connection.

Figure 2.2-1: Topology overview of the worldwide test bed. CP: Control Plane; DP: Data Plane. (Carrier labs in Asia, the USA and Europe interconnected via CP links; CP & DP interconnections between AT&T, Verizon and Deutsche Telekom.)

At the carrier labs, complete CP and DP tests for UNI, E-NNI and Ethernet over SDH/SONET adaptation were performed; between the labs only CP tests could be accomplished (virtual connections), except for the inter-lab tests between Verizon, AT&T and Deutsche Telekom.

Since no central monitoring instance could provide this function, a decentralised approach based on topology display software from Clear Pond [16] was chosen for actively monitoring the connection configurations in this multi-domain environment, locally in the labs and even more so globally.


In this way the actual global topology could be monitored online and additionally used for presentations and demos in the carrier labs and finally at SuperComm 2004.

Control plane interface interoperability tests

These interoperability tests were based on OIF specifications (Implementation Agreements/IAs) [3, 5, 6], which are aligned with the corresponding ITU-T standards [7-9]:

• IA: UNI 1.0 Signalling Specification, Release 2
• IA: E-NNI-01.0 (Signalling)
• Draft: E-NNI-01.0 (Routing)

These CP interfaces allow connections to be requested, either from the client side (switched connections) or by the EMS/NMS of a domain (soft permanent connections), over multiple ASON/GMPLS domains using the CP functionality only (Figure 2.2-2). As such, multi-domain connections can be set up without involvement of the EMS/NMS of the intermediate domains (assuming this action is covered by Service Level Agreements). These new functions will significantly ease the connection configuration of such multi-domain networks, as the sketch below illustrates.
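To make this behaviour more concrete, the following minimal Python sketch models a switched connection that is requested at the edge of one domain and then signalled hop by hop across E-NNI-interconnected domains, without touching the EMS/NMS of the transit domains. It is an illustration only: the domain names, data structures and the signal() call are invented for the sketch and do not represent the OIF UNI/E-NNI message formats; in the real test bed this role is played by the RSVP-TE based UNI and E-NNI signalling of [5] and [6].

from dataclasses import dataclass, field


@dataclass
class Domain:
    """One ASON/GMPLS domain; 'neighbours' maps a peer domain name to its object (E-NNI peers)."""
    name: str
    neighbours: dict = field(default_factory=dict)

    def signal(self, request, segments):
        """Set up the local segment, then forward the request over E-NNI towards the destination."""
        segments.append(self.name)              # local cross-connections are configured here
        if request["dst_domain"] == self.name:  # destination domain reached: connection complete
            return segments
        next_hop = request["route"][len(segments)]
        return self.neighbours[next_hop].signal(request, segments)


# Hypothetical three-domain carrier network; UNI clients would attach at A and C.
a, b, c = Domain("A"), Domain("B"), Domain("C")
a.neighbours["B"] = b
b.neighbours["C"] = c

# A UNI-initiated request carries the destination and the E-NNI route to follow.
request = {"dst_domain": "C", "route": ["A", "B", "C"]}
print(a.signal(request, []))   # -> ['A', 'B', 'C']: set up without the transit domains' EMS/NMS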

Figure 2.2-2: Reference network for control plane interfaces (UNI, E-NNI) interoperability tests (client networks A to D attached via UNI to optical networks A and B within a carrier domain; I-NNI is used inside each optical network and E-NNI between them)

In Table 2.2-3 the performed interoperability tests are listed. The tests were carried out first locally in each carrier lab, then per continent and finally on a global scale.

Table 2.2-3: Control plane interface tests performed

Test | Test case | UNI-C | UNI-N | E-NNI
1 | Basic routing functionality | - | - | X
2 | Routing functionality for virtual links | - | - | X
3 | Connection initiated by UNI | X | X | -
4 | Dual-domain connection initiated by EMS | - | - | X
5 | Dual-domain connection initiated by UNI | X | X | X
6 | Multi-domain connection initiated by UNI | X | X | X



2.3 Ethernet over SDH/SONET adaptation interoperability tests

Figure 2.3-1: Reference network for Ethernet over SDH/SONET adaptation interoperability tests (the most efficient mapping of Ethernet over SDH/SONET at the edges of an ASON/GMPLS network based on SDH/SONET; only the payload is transmitted)

These tests (Table 2.3-1) were based on ITU-T Rec. G.7041 (GFP-F), G.707 (VCAT), G.7042 (LCAS), etc. [10-15] and were carried out locally at the labs of Telecom Italia, Verizon, AT&T and Deutsche Telekom, of which the latter three additionally accomplished inter-lab tests (Figure 2.3-1). They clearly show that current, standard-compliant implementations of this most efficient Ethernet over SDH/SONET adaptation are truly interoperable, even on a global scale; a short sizing cross-check follows the table.

Table 2.3-1: Ethernet over SDH/SONET tests performed

Test | Test case
1 | Partial bandwidth (B), FE over STS-1/VC-3
2 | Full B, FE over STS-1-2v/VC-3-2v
3 | Full B, GE over STS-1-21v/VC-3-21v
4 | Partial B, GE over STS-1-3v/VC-3-3v
5 | Partial B, FE over STS-1-Xv/VC-3-Xv, LCAS
6 | Partial B, GE over STS-1-Xv/VC-3-Xv, LCAS
7 | Full B, FE over STS-3c/VC-4
8 | Full B, GE over STS-3c-7v/VC-4-7v
9 | Partial B, GE over STS-3c-1v/VC-4-1v
10 | Partial B, GE over STS-3c-Xv/VC-4-Xv, LCAS
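The VCAT group sizes appearing in these test cases can be cross-checked with a short back-of-the-envelope calculation. The Python sketch below is a deliberate simplification: it uses only the nominal VC-3/VC-4 payload capacities and ignores GFP-F framing overhead, so it reproduces the full-bandwidth GE mappings (tests 3 and 8) but not the tighter FE over VC-3-2v case (test 2), which depends on framing details not modelled here.

import math

# Nominal VCAT member payload capacities in Mbit/s (VC-3 ~ STS-1 SPE, VC-4 ~ STS-3c SPE).
PAYLOAD = {"VC-3": 48.384, "VC-4": 149.76}
CLIENT = {"FE": 100.0, "GE": 1000.0}   # nominal Ethernet client rates

def vcat_members(client, container):
    """Smallest X such that container-Xv covers the nominal client rate (framing overhead ignored)."""
    return math.ceil(CLIENT[client] / PAYLOAD[container])

print("GE over VC-3 needs", vcat_members("GE", "VC-3"), "members")  # 21 -> VC-3-21v, test 3
print("GE over VC-4 needs", vcat_members("GE", "VC-4"), "members")  # 7  -> VC-4-7v,  test 8
print("FE over VC-4 needs", vcat_members("FE", "VC-4"), "member")   # 1  -> VC-4,     test 7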


2.4 Example of local lab tests

Based on previous ASON/GMPLS demonstrator activities, TILAB and Deutsche Telekom participated in these OIF tests, completing the carriers' involvement in the work of the OIF: contributions to the specifications and participation in the interoperability tests and demos. This shows the carriers' interest and support and sets the bar for future work in this field. Table 2.4-1 shows an example of the network functionalities available and tested at DT's lab.

Table 2.4-1: Network functions tested in DT's lab

Vendor | Network functions
Ciena Corporation | UNI(N) 1.0 R2, E-NNI, GFP-F/VCAT/LCAS
Marconi Corporation | UNI(N) 1.0 R2, E-NNI, GFP-F/VCAT/LCAS
Tellabs | UNI(N) 1.0 R2, E-NNI, GFP-F/VCAT/LCAS
Cisco Systems | UNI(C) 1.0 R2
Alcatel | GFP-F/VCAT
Lucent Technologies | GFP-F/VCAT/LCAS
ADVA Optical Networking | GFP-F/VCAT

2.5 Configuration of Test bed Environments at TILAB and DT's labs

Figure 2.5-1: Test network configuration at Deutsche Telekom / Berlin (ADVA FSP 1500, Alcatel 1660, Ciena CD, Cisco GSR 12406, Lucent LambdaUnite, Marconi MSH and Tellabs 6345/6350 equipment interconnected via UNI and E-NNI with GFP/VCAT/LCAS adaptation; E-NNI connections to AT&T, NTT and TILAB over the Internet)

This section describes the networks established in the respective labs during the OIF interoperability tests (Figure 2.5-1 and Figure 2.5-2).

Both local test networks are composed of a core of network elements (NEs) interconnected by E-NNI interfaces, thereby enabling control-plane-based connection configuration across multiple network domains. User network domains are attached to these core NEs via UNI interfaces, supporting the configuration of switched connections over multiple domains. Furthermore, NEs with Ethernet over SDH adaptation functions are also connected to the core NEs, providing the any-to-any interconnectivity needed for the interoperability tests.


UNI and E-NNI signalling was transmitted out of band over a separate Ethernet network. E-NNI information between distant nodes, for inter-lab multi-domain connections, was exchanged via IPSec tunnels over the public Internet. In this way, multi-domain interoperability could be tested per continent and on a global scale.

Figure 2.5-2: Test network configuration at TILAB / Torino


3 MUPPET test bed descriptions

This chapter contains the detailed description of the five local test beds, their architecture and the functions they provide to the MUPPET project.

Figure 2.5-1: Planned MUPPET test bed configuration (the Southern Europe test bed at TILAB, the Central Europe test bed at T-Systems and FAU, the Northern Europe test bed at ACREO and DTU, the Western Europe test bed at Telefonica I+D and the Eastern Europe test bed at PSNC, connected through the NRENs GARR, DFN, DARENET, SUNET/NORDUnet, RedIRIS and PIONIER to GEANT)

All five test beds will be connected to their respective national NRENs and via these to GEANT, ensuring interconnectivity on a European scale. These five test beds and their interconnections form the basis of the European-scale MUPPET ASON/GMPLS test bed, which will be used for network technology evaluations and investigations of high-end applications.


3.1 Southern Europe test bed

3.1.1 The TILAB Networking Test Bed

The TILAB test bed represents the advanced networking infrastructure of the Southern Europe MUPPET test bed. Located on the TILAB premises in Torino, the networking test bed leverages the resources of the "Optical Networking" Lab: space, cabling, measurement instruments, traffic generators, etc. The TILAB test bed is connected to the following TILAB application laboratories:

• TILAB Grid Node
• TILAB SAN Lab
• TILAB Video Communication Lab

It will be connected by means of IP links and leased lines to other partners' sites and to the other MUPPET test beds in Europe.

The networking test bed, whose structure is represented in Figure 3.1-1, is based on an IP/MPLS layer over an optical layer with ASON/GMPLS capabilities. The interworking between the two layers will be realised by means of the Optical UNI (User to Network Interface).

Figure 3.1-1: Structure of the TILAB test bed (an IP/MPLS network on top of an ASON/GMPLS layer consisting of a photonic domain of TILAB transparent OXCs, an optical/digital domain of Marconi optical/digital cross-connects and a NOBEL domain with equipment from the NOBEL project; the IP/MPLS layer attaches via UNI and the NOBEL domain via E-NNI; the TILAB Grid Node, SAN Lab and VideoCom labs are connected, with leased lines towards GARR (GEANT))

3.1.1.1 IP/MPLS layer

The IP/MPLS layer is realised with commercial routers provided by TILAB and will be enhanced by integrating IP equipment from the test bed of the IST project NOBEL.

The commercial routers to be used within the project are on order and have the following characteristics:

• Packet over SONET/SDH (POS) interfaces at 155 Mbit/s line speed, with UNI-C 1.0 R2 as the signalling interface
• Gigabit Ethernet (GbE) interfaces, suitable to house UNI 2.0 when available



3.1.1.2 ASON/GMPLS layer

The ASON/GMPLS layer contains three different domains:

• The "Photonic" domain is a network of transparent optical cross-connects controlled by a distributed GMPLS control plane developed by TILAB. This domain is the evolution of the IST project LION test bed; some of the future functionalities will be developed by the NOBEL project
• The "Optical/digital" domain will be deployed during the MUPPET project by integrating Marconi optical cross-connects with SDH VC-4 switching capability and a distributed control plane
• Finally, the test bed will be completed by integrating a third domain (the "NOBEL" domain) fully developed by the NOBEL project. This domain will be mainly based on optical cross-connects.

3.1.1.2.1 Photonic Domain

The photonic domain derives directly from the test bed developed during the IST LION project, even if some parts of it (like the Tellium equipment) are no longer available. The current configuration consists of six optical network elements (OXCs) provided by TILAB. The photonic domain has ASON capabilities supporting both soft-permanent and switched connections.

The transport plane of the photonic domain is composed of FSC (Fibre Switching Capable) devices; for the time being, no transponders, amplifiers or multiplexers/demultiplexers are available, but it is planned to enhance the test bed with some optical devices in order to simulate impairments on some links. The logical architecture of the transport plane is shown in Figure 3.1-2: six optical cross-connects (OXCs) are implemented using three different switching fabrics. Two of them are 16x16 electro-mechanical matrices and are used to implement OXC1 and OXC4, while the third is a 32x32 matrix (adopting MEMS technology) with power detectors on its input and output ports, subdivided into four 8x8 elements to implement the remaining OXCs.

Figure 3.1-2: Transport plane of the photonic domain (six interconnected optical cross-connects, OXC 1 to OXC 6)

The links interconnecting the OXCs (each consisting of a pair of optical fibres) are grouped in bundles: for example, the connection between OXC2 and OXC4 is realised by a bundle of three pairs of fibres, while the connection between OXC1 and OXC3 is realised by two bundles of two pairs each. This is the approach chosen to reduce the amount of information that must be flooded by the routing protocol, as the small sketch below illustrates.
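As a simple illustration of the effect of this bundling (an assumed data representation, not the actual OSPF-TE encoding), the following Python sketch counts how many TE-link advertisements the two examples above would require with and without bundling.

from collections import Counter

# Each entry is one fibre pair, tagged with the TE-link bundle it belongs to
# (per the two examples in the text; the bundle names are invented for this sketch).
fibre_pairs = (
    [("OXC2", "OXC4", "bundle-1")] * 3      # OXC2-OXC4: one bundle of three fibre pairs
    + [("OXC1", "OXC3", "bundle-1")] * 2    # OXC1-OXC3: two bundles of two pairs each
    + [("OXC1", "OXC3", "bundle-2")] * 2
)

adverts_per_fibre = len(fibre_pairs)            # flooding one advertisement per fibre pair
adverts_per_bundle = len(Counter(fibre_pairs))  # flooding one advertisement per bundle
print(adverts_per_fibre, "advertisements without bundling,", adverts_per_bundle, "with bundling")
# -> 7 advertisements without bundling, 3 with bundling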



All nodes (apart from OXC3) have some free I/O ports that can be allocated, by means of control plane configuration, to interconnect client devices or devices that are part of other domains, thereby realising different network scenarios.

The control plane of the photonic domain is implemented using a distributed approach in which every node has its own instance of the processes needed to manage the resources and the setup/tear-down of connections. In particular, the architecture of the control plane is in line with ITU-T Rec. G.8080 in terms of the components used to manipulate transport network resources in order to provide the requested functionalities. The implemented components (as shown in Figure 3.1-3) are the Connection Controller (CC), the Routing Controller (RC), the Link Resource Manager (LRM) and the related Protocol Controllers.

Figure 3.1-3: Control plane architecture (Connection Controller, Routing Controller and Link Resource Manager above the OSPF, RSVP, LMP and CCI protocol controllers, with Ethernet ports towards the neighbour nodes)

Every node runs its control plane processes on a PC with the Linux operating system. Each node has point-to-point Fast Ethernet control channels towards its neighbour nodes, in order to emulate a network scenario where Optical Supervisory Channels (OSCs) are used as control channels; in addition, all nodes have separate connections to a Data Communication Network (DCN) for management and configuration. The signalling protocol adopted within the domain (I-NNI) is RSVP-TE with GMPLS extensions, while the routing protocol is OSPF-TE, again with GMPLS extensions and some proprietary adaptations that re-use standard messages to advertise the parameters of an FSC network. The nodes also have neighbour discovery capabilities according to the control channel management and link property correlation procedures of the LMP protocol. By means of this protocol, each node exchanges its local configuration information for control channels and TE-links with its neighbours, in order to check its consistency before advertising it via the routing protocol. In this way the link state database maintained at every node by the routing protocol contains reliable information for path calculation. Moreover, the routing protocol advertises resource utilisation and availability, so that the path calculation can avoid busy TE-links by choosing an alternative path (compared to the minimum-cost one); a minimal sketch of this behaviour is given below.
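The following minimal Python sketch illustrates this constraint-based path calculation; the TE-link database, costs and free-fibre counts are invented for the example, and the code is not TILAB's implementation. TE-links without free resources are pruned before a shortest path is computed, so a busy minimum-cost link forces an alternative route.

import heapq

# Illustrative TE-link database: (node A, node B) -> administrative cost and free fibre pairs.
te_links = {
    ("OXC1", "OXC2"): {"cost": 1, "free": 0},   # minimum-cost link, but currently busy
    ("OXC1", "OXC3"): {"cost": 2, "free": 2},
    ("OXC3", "OXC2"): {"cost": 2, "free": 1},
}

def cspf(src, dst):
    """Dijkstra over the TE-links that still have free resources (constrained shortest path)."""
    adj = {}
    for (a, b), attrs in te_links.items():
        if attrs["free"] > 0:                   # constraint: prune TE-links with no free fibres
            adj.setdefault(a, []).append((b, attrs["cost"]))
            adj.setdefault(b, []).append((a, attrs["cost"]))
    queue, visited = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, link_cost in adj.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
    return None

print(cspf("OXC1", "OXC2"))   # -> (4, ['OXC1', 'OXC3', 'OXC2']): the busy direct link is avoided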

The signalling interface towards client devices (UNI) is compliant with OIF UNI 1.0 and adopts the RSVP-TE protocol; it is implemented through out-of-fibre Fast Ethernet links.


3.1.1.2.2 Optical/digital Domain

The optical/digital domain of the test bed, consisting of at least three optical/digital cross-connects provided by Marconi SpA, is not yet available and will be deployed during the project lifetime.

In order to reach the capability and flexibility foreseen by the project objectives, a minimum set of requirements has been defined:

• SDH VC-4 switching capability
• Distributed control plane with signalling provided by UNI-N 1.0 interfaces towards the client devices (UNI-N 2.0 for GbE interfaces when available) and I-NNI/E-NNI towards the other nodes of the network
• One STM-1 optical module with 16 ports for each network element
• One STM-16 module with 4 ports for each network element
• Initially, two GbE modules with GFP/VCAT/LCAS capabilities to allow the immediate realisation of bidirectional GbE connections between two network elements
• A third GbE card with UNI-N 2.0 (and the replacement or upgrade to UNI-N 2.0 of the other two GbE cards) as soon as UNI 2.0 support becomes available

A possible layout for the optical/digital domain is presented in Figure 3.1-4.

Figure 3.1-4: Layout of the optical/digital domain (three Marconi OXCs interconnected by the links a-a', b-b' and c-c', with 16 x STM-1, 4 x STM-16 and 4 x GbE interfaces)

In order to emulate the long-distance links a-a', b-b' and c-c' between the network nodes, the adoption of a WDM system (see Figure 3.1-5) has been planned as a future upgrade of the optical/digital domain.



Figure 3.1-5: WDM system to emulate the long-distance links a-a', b-b' and c-c' (100 km)

With reference to Figure 3.1-6, this WDM system will be implemented by integrating some components already existing on the TILAB premises (mainly the MUX/DEMUX, the line amplifiers and the G.655 optical fibre) with other components (mainly the transponders) to be provided within MUPPET.

Figure 3.1-6: WDM System Components

3.1.1.3 NOBEL Domain (IP/MPLS and ASON/GMPLS Layers)

This section contains a brief description of the equipment and functionalities that have been or will be implemented within the IST-NOBEL project activities and will also be exploited within the scope and objectives of the MUPPET project. For more details please refer to the related documentation (Deliverable D7) issued by the IST-NOBEL project.



The IP/MPLS layer will contain (when completed) a number of Gigabit Switch Routers equipped with Fast Ethernet, Gigabit Ethernet and POS interfaces at 155 Mbit/s and 2.5 Gbit/s. Some routers will support UNI-C 1.0 R2 on the POS interfaces at 155 Mbit/s and, when available, UNI 2.0 on the GbE interfaces.

The ASON/GMPLS layer will contain a 160 Gbit/s multi-service SDH network element, equipped with interfaces ranging from 2 Mbit/s to 10 Gbit/s and Ethernet mapping capabilities. The control plane will have advanced GMPLS functionalities such as E-NNI and UNI.

3.1.2 The TILAB Grid Node

The TILAB Grid Node will be based on the experimental Grid platform existing on the TILAB premises. The proposed scenario is described in Figure 3.1-7; it is composed of several PoP sites connected through an IP/MPLS backbone. Data transmission will then be realised using the ASON/GMPLS test bed. This infrastructure is equipped with generic PC hardware and Grid software such as the Globus Toolkit v3. The technology will be based on SOAP/XML to achieve high flexibility.

New service demonstrations will be based on the dynamic interaction of the Grid platform and the network. The goals will be achieved through interaction between Grid nodes and network control servers such as QoS and VPN servers; a hypothetical example of such an interaction is sketched below.
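To illustrate the kind of SOAP/XML interaction intended between a Grid node and a network control server, the following Python sketch builds and posts a hypothetical bandwidth/QoS request. The endpoint URL, operation name and parameters are invented for the illustration and do not correspond to a defined MUPPET or Globus interface.

import urllib.request

# Hypothetical SOAP request from a Grid node to a QoS control server (names invented).
SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <RequestQoSPath xmlns="urn:example:muppet:grid">
      <srcGridNode>site1.grid.example</srcGridNode>
      <dstGridNode>site3.grid.example</dstGridNode>
      <bandwidthMbps>622</bandwidthMbps>
      <durationMinutes>30</durationMinutes>
    </RequestQoSPath>
  </soap:Body>
</soap:Envelope>"""

def request_qos_path(endpoint="http://qos-server.example/soap"):
    """POST the SOAP envelope to the (hypothetical) QoS control server and return its reply."""
    req = urllib.request.Request(
        endpoint,
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "urn:example:muppet:grid#RequestQoSPath"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")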

Figure 3.1-7: TILAB Grid Node (final users reach a service web portal via HTTP; Grid nodes at sites 1 to 3 interact via SOAP/XML Grid services with a control framework of QoS, VPN and CDN servers on top of the MUPPET IP/MPLS and optical network, together forming a virtual organization)

Let us look in more detail at the different sites and the IP/MPLS backbone, as shown in Figure 3.1-8. The infrastructure uses a set of routers equipped with Ethernet, Fast Ethernet (FE) and serial interfaces. Such a network gives a generic client access, through a portal, to a set of applications (e.g. file zip, digital mark) installed on remote servers. Moreover, the network interconnects the servers so that they can share the available applications, CPU and memory resources.



Figure 3.1-8: TILAB test bed for the Grid Node (PoP sites with a web server and grid servers running the Globus Toolkit 3.2 container, basic grid services, custom grid services and an index service of the available grid services; the sites are interconnected by the Cisco routers Fiab1, Inv-2, Epicuro, Platone, Pitagora, Socrate and Aristotele acting as CE and PE nodes over Ethernet, Fast Ethernet and serial links, with client access via the Internet)

In a first phase, the test bed uses only a couple of servers located in two different PoP sites. These machines run Linux on generic PC hardware, with Grid software such as Globus Toolkit v3.2. Table 3.1-1 describes the operating system installed and the hardware and software technical features of the servers used.

Server features:
- Operating system: Linux Red Hat 9.0
- Hardware: Pentium IV 2.4 GHz, 904 MB RAM, 80 GB hard disk, 512 KB level-2 cache
- Software: Apache 2.0 (installed only on the web server)
Table 3.1-1: Server features.

The test bed integrates different models of routers from Cisco Systems. Detailed technical specifications of each device can be found in Table 3.1-2.

Routers: Aristotele (Cisco 7204VXR, NPE225 processor, revision A), Epicuro (Cisco 2621, MPC860 processor, revision 0x102), Platone (Cisco 3640, R4700, revision 0x00), Socrate (Cisco 7204VXR, NPE225, revision A), Inv-2 (Cisco 1721, MPC860P), Pitagora (Cisco 4500, R4K) and Fiab1 (Cisco 2500, 68030).
IOS software:
- Aristotele: 7200 Software (C7200-P-M), Version 12.3(1a), RELEASE SOFTWARE (fc1)
- Epicuro: C2600 Software (C2600-TELCO-M), Version 12.2(2)T, RELEASE SOFTWARE (fc1)
- Platone: 3600 Software (C3640-TELCO-M), Version 12.3(1a), RELEASE SOFTWARE (fc1)
- Socrate: 7200 Software (C7200-JS-M), Version 12.3(1a), RELEASE SOFTWARE (fc1)
- Inv-2: C1700 Software (C1700-SY-M), Version 12.2(11)T8, RELEASE SOFTWARE (fc1)
- Pitagora: 4500 Software (C4500-IS-M), Version 12.2(19a), RELEASE SOFTWARE (fc2)
- Fiab1: IOS (tm) 2500 Software (C2500-I-L), Version 12.1(20), RELEASE SOFTWARE (fc2)
System bootstrap versions (per device): 12.2(1r) [dchih 1r]; 11.3(2)XA4; 11.1(20)AA2 (Early Deployment); 12.2(1r) [dchih 1r]; 12.2(7r)XM1; 5.1(1) [daveu 1]; ROM 11.0(10c).
Boot loader images: 7200 Software (C7200-BOOT-M), Version 12.0(13)S (Early Deployment); C2600 Software (C2600-TELCO-M), Version 12.2(2)T; 7200 Software (C7200-BOOT-M), Version 12.0(13)S (Early Deployment); 4500-XBOOT Bootstrap Software, Version 10.1(1); 3000 Bootstrap Software (IGS-BOOT-R), Version 11.0(10c).
Uptime at capture (per device): 22 h 36 min; 21 h 53 min; 21 h 55 min; 21 h 57 min; 22 h 0 min; 22 h 0 min; 22 h 3 min.
Processor memory [bytes] (per device): 14336K/2048K; 32768K/16384K; 29492K/3276K; 114688K/16384K; 59392K/6144K; 27648K/5120K; 114688K/16384K.
Processor boards and CPUs: ID 25856777; ID JAB035101C4 (1042591346); ID 25860188; ID FOC07120AWP (633392346), hardware revision 0000; ID 01325035, R4600 CPU at 100 MHz (Implementation 32, Rev 2.0); ID 06018704, hardware revision 00000001; R527x CPU at 262 MHz (Implementation 40, Rev 10.0), 2048 KB L2 cache, 4-slot VXR midplane Version 2.5; M860 processor, part number 0, mask 49; ID 26429838, R4700 CPU at 100 MHz (Implementation 33, Rev 1.0); R527x CPU at 262 MHz (Implementation 40, Rev 10.0), 2048 KB L2 cache, 4-slot VXR midplane Version 2.5; MICA-6DM firmware, CP ver 2940 (7/24/2002), SP ver 2940 (7/24/2002); MPC860P processor, part number 5, mask 2.
Software features: bridging software and X.25 software Version 3.0.0 on all devices; SuperLAT software (copyright 1990 by Meridian Technology Corp) on three devices; G.703/E1 software Version 1.0; Basic Rate ISDN software Version 1.1 on two devices; TN3270 emulation software.
Interfaces (per device): Ethernet/IEEE 802.3: 1, 2, 2, 4, -, 4, 4; FastEthernet/IEEE 802.3: -, -, 1, 3, 3, 2, 1; serial network interfaces: 4, 2, 2, -, 4, 2, -, 1 (plus 2 serial sync/async network interfaces on one device); ISDN Basic Rate interfaces: 1, -, -, -, 4, -, -; ATM network interfaces: -, 1, -, -, -, -, 1.
Non-volatile configuration memory (per device): 32K, 128K, 32K, 125K, 125K, 32K and 125K bytes.
System flash (per device): 46976K bytes of ATA PCMCIA card at slot 0 (sector size 512 bytes); 16384K (Read/Write); 8192K (Read/Write); 46976K bytes of ATA PCMCIA card at slot 0 (sector size 512 bytes); 16384K bytes (Read/Write); 8192K bytes (Read/Write); 8192K bytes (Read only).
Boot flash: 4096K bytes of Flash internal SIMM (sector size 256K); 16384K bytes of processor board PCMCIA Slot 1 flash (Read/Write); 4096K bytes of Flash internal SIMM (sector size 256K); 4096K bytes (Read/Write).
Configuration register: 0x2102 on all devices.
Table 3.1-2: Router features.


3.1.3 The TILAB Video Communication Lab<br />

The activities related to the high-quality videoconference experiments will be carried out using the equipment and the infrastructure available at the TILAB Video Communication laboratory. This laboratory consists of generic PC hardware, audio/video acquisition peripherals and a videoconference software platform developed by TILAB. In the proposed scenario this hardware and software infrastructure will be used for point-to-point and multipoint video communication experiments in the ASON/GMPLS test bed.

The TILAB videoconference platform consists of multiple distributed client-server components; the current version of the client software is a Windows PC application that offers the following functionalities:
• call setup: the signalling component for point-to-point communication is based on the SIP protocol, while multipoint communication currently uses a proprietary signalling protocol transported over TCP
• audio/video stream acquisition with different configurable video formats (CIF, QCIF) and frame rates
• audio/video encoding with configurable bandwidth values; the currently supported codecs are:
o Audio: ITU-T G.723.1, AMR
o Video: ITU-T H.263, MPEG-4 ASP
• audio/video stream transmission using the RTP protocol (see the sketch after this list)
• reception and visualization of the other participants' audio/video streams
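To make the RTP transmission step concrete, the following minimal sketch builds the fixed 12-byte RTP header defined in RFC 3550; the payload type and SSRC values are arbitrary example choices and are not taken from the TILAB platform.

```python
# Minimal sketch of building an RTP packet header as defined in RFC 3550.
# Payload type 96 (dynamic) and the SSRC value are arbitrary examples.
import struct

def build_rtp_header(seq: int, timestamp: int, ssrc: int,
                     payload_type: int = 96, marker: bool = False) -> bytes:
    version = 2          # RTP version 2
    padding = 0
    extension = 0
    csrc_count = 0
    byte0 = (version << 6) | (padding << 5) | (extension << 4) | csrc_count
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)
    # 12-byte fixed header: flags, sequence number, timestamp, SSRC
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

header = build_rtp_header(seq=1, timestamp=0, ssrc=0x1234ABCD)
packet = header + b"\x00" * 160  # header followed by an example payload block
print(len(header), len(packet))
```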

The TILAB videoconference platform server environment consists of the following functional elements:
• A/V Cross Connector: this element is responsible for the optimized replication of all audio/video streams, so that each participant can see and hear everyone else; Figure 3.1-9 shows a single-server configuration, but multiple servers are supported for performance and scalability (a Resource Manager Server is responsible for RTP audio/video traffic load balancing between multiple Reflector Servers)
• Application Server: consists of a Presence and Signalling Server module, which includes presence handling and call setup functionalities (through a SIP proxy for point-to-point communication), and a Web Application module, which offers web-based conference management functions; all these modules can coexist on a single server, as illustrated in Figure 3.1-9, or can be distributed over multiple servers for performance and scalability

An internal 100 Mbps Ethernet communication infrastructure is available at the laboratory; Figure 3.1-9 shows the current laboratory configuration, with the network interconnections and the protocols used between the different modules of the TILAB videoconference platform. Peer-to-peer RTP media streams between clients are used for point-to-point communication, while in multipoint scenarios all RTP streams are sent to the Reflector Server(s) for replication towards the other conference participants.

In a first phase the test bed will use only two servers, whose hardware and software technical features are described in Table 3.1-3; the client workstations for the test bed experiments will be equipped with the following peripherals:
- Pan/tilt/zoom web-cams with auto-tracking capabilities
- Video acquisition boards and DV cameras
- 20” LCD or larger widescreen monitors
By the end of the year it is planned to have two such client workstations, tuned for high-quality point-to-point video communication: the target for the first experiments is a bandwidth usage of more than 1 Mbps for each audio/video stream.



Figure 3.1-9: TILAB Video Communication Lab.<br />

Server 1 (A/V Cross Connector):
- Operating system: Windows Server 2003
- Hardware: Pentium IV 2 GHz, 1024 MB RAM, 60 GB hard disk, 512 KB cache
- Software: TILAB videoconference platform modules (Reflector, Resource Manager)
Server 2 (Application Server):
- Operating system: Windows 2000 Server
- Hardware: dual Pentium III EB 933 MHz, 1024 MB RAM, 2 x 30 GB hard disks, 256 KB cache
- Software: Apache 2.0, Jakarta Tomcat 4.1, MySQL 4.0, TILAB videoconference platform modules (Presence & Signalling Server, Web Application)
Table 3.1-3: Server features.



3.1.4 The TILAB SAN Lab<br />


3.1.4.1 Physical Location and Connection to the Project’s resources<br />

The TILAB SAN Lab supports the project's experiments with content and storage applications and their networking solutions. The TILAB SAN Lab is physically located at the TILAB premises in Turin and is connected to the other MUPPET labs and network resources by means of the following connections:
Internal lab connections: Fibre Channel, Ethernet and Gigabit Ethernet. Cisco Systems Catalyst 1900 and 2900 Ethernet switches and a Multilayer Switch 3550 with Fast Ethernet and Gigabit Ethernet interfaces are used. The Cisco Catalyst Ethernet switch family provides intelligent network services (QoS, rate limiting, security filtering) and multicast management, besides its traditional LAN switching capabilities. Furthermore, a Cisco Catalyst 3550 12T 10/100/1000 multilayer switch and a Cisco Catalyst 3550-24 multilayer switch with 24 10/100 ports and 2 GbE ports provide GbE networking capabilities in order to connect the application servers to the storage infrastructure servers.
RESEAU network: via direct GbE access.
ASON/GMPLS test bed: via dark fibre.
Other TILAB labs (Grid Node, Tele-Presence Lab): via the FE connection to the TILAB LAN. The Cisco 3550-24 multilayer switch is also connected through its GbE fibre-optic interface towards the RESEAU network router.

3.1.4.2 Architecture<br />

The TILAB SAN Lab provides a test bed for both content networking and storage networking test scenarios and also includes a server application architecture.
The Storage Lab offers a complete multi-vendor storage architecture based on FC and IP connectivity, consisting of two independent FC SANs:
SAN1 (raw storage: 584 GB). The SAN storage is the Chaparral JBOD RIO RAID SRF116 with its external RIO RAID controller connected via Fibre Channel to the storage array. The controller's web-based graphical management software provides the user with the configuration tools. The internal SAN connections use FC technology: specifically, there is an FC link between the JBOD RIO RAID and the external controller and another one between the controller and the Cisco SN5428. In fact the storage router, besides its iSCSI gateway function, also works as an FC switch.
SAN2 (raw storage: 1.2 TB). The SAN storage is the Brownie Axus BR-1600 in Integrated Drive Electronics (IDE, or ATA) technology with its internal integrated RIO RAID controller. The controller's graphical RAIDCare Storage Manager software provides the user with the configuration and monitoring tools. The Brownie AXUS BR-1600 is connected via Fibre Channel to the Cisco SN5428 storage router.
There is also a third SAN (SAN3, raw storage: ~80 GB), which can be added and included (with its storage resources) in one of the existing SAN architectures. The storage array is the JBOD Compaq StorageWorks RAID Array 4RA100; some 18 GB hard disks are also available for the project tests. Its internal connections are also in FC technology: specifically, the Compaq storage can be connected via FC to the StorageWorks FC-AL Switch 8. The switch is also connected via FC towards the Compaq DL 380 server cluster, and the FC-AL switch can also be connected via FC to the Cisco SN5428 storage router.
The Cisco SN5428 is also connected via FE/GbE over optical fibre to the Catalyst 3550 12T multilayer Ethernet switch, which is itself connected to the Catalyst 3550-24, Catalyst 2900 and Catalyst 1900



network apparatuses. The terminal clients and the monitoring systems are connected to these apparatuses.
The application servers are connected to the storage servers (clustered servers) via Fibre Channel and are connected via FE/GbE to the Ethernet switch. The storage servers (IP-connected via FE/GbE among themselves) are connected to the SANs via FC. The network impairment server is connected via GbE to the storage servers through the Catalyst 3550 multilayer Ethernet switch.

The following figure describes a possible SAN LAB Architecture scheme.<br />

The figure shows the application servers, clustered servers, infrastructure servers, client workstations, monitoring station and network impairment emulator connected over an FE/GbE switched/routed network, with SAN 1 reached via an FC switch and SAN 2 via its controller through the FC switch + iSCSI gateway, and with external connections to the RESEAU network, the TILAB LAN and the ASON/GMPLS test bed; both fibre and copper links are used (FE: Fast Ethernet, GbE: Gigabit Ethernet, FC: Fibre Channel).
Figure 3.1-10: Potential SAN LAB architecture scheme.

3.1.4.3 CDN Architecture<br />

Two independent Content Distribution Networks are also available for the project's content networking application scenarios. The CDNs are:

• Cisco ICDN 2.1<br />

• Cisco ACNS 5.0 release 5.1.5.b.2 (March 2004)<br />

3.1.4.4 TILAB SAN Lab Components<br />

3.1.4.4.1 Hardware<br />

Storage array: Chaparral JBOD RIO RAID SRF116 with 8 dual-port 72 GB FC hard disks at 10,000 rpm, total storage size 584 GB. This storage array is designed to support 1.16 TB in a 3U rack and to be scalable to 8 TB.
Storage array: JBOD Compaq StorageWorks RAID Array 4RA100, 1 Gbit/s Fibre Channel for the FC SAN. Some 18 GB hard disks are also available for the project tests.
Storage array: integrated system Brownie Axus BR-1600 in Integrated Drive Electronics (IDE, also known as ATA) technology. This storage array provides a different technological solution from the SCSI RIO RAID one. The AXUS BR-1600 system is managed by an integrated chassis-internal controller. The storage unit is composed of six 200 GB IDE hard disks at 7,200 rpm, for a total storage



size of 1.2 TB. The system uses the RAIDCare Storage Manager GUI software for set-up and monitoring functions.
External controller: RIO RAID, connected via Fibre Channel to the storage array system, with web-based graphical management software for user configuration and set-up.
Fibre Channel switch: Compaq StorageWorks FC-AL Switch 8, connected to the RA4100 storage array. It supports eight 1 Gbit/s FC ports plus an external module with three more, and it is able to manage a cluster of two ProLiant DL 380 servers.

Storage router: the Cisco SN5428 provides the SAN with iSCSI (and thus also IP networking) capabilities. It is a multi-protocol platform for networking storage, which integrates the iSCSI/IP and Fibre Channel technologies; as an iSCSI gateway, it allows the SAN to use the whole set of IP networking functions towards the external network resources. The Cisco SN5428 also implements Fibre Channel switching capabilities, so the servers access the storage resources through the iSCSI ports. The Cisco SN5428 router also offers 2 Gigabit Ethernet ports to connect to TCP/IP networks and 8 standard 1 or 2 Gigabit Fibre Channel ports. These FC ports support fabric switch functions or can be used as simple direct interfaces to the connected storage devices. The Cisco storage router also provides management capabilities for storage networking with IP networking tools, such as SNMP management and IPsec, and offers services like VLANs and Quality of Service (QoS). Furthermore, it supports typical SAN security capabilities, such as LUN mapping, LUN masking and Fibre Channel zoning.

Application servers: 2 Compaq ProLiant DL 360 servers, OS Windows 2000 Server SP3, 1.13 GHz Intel Pentium III processor, 1.28 GB RAM and 18 GB HD. The applications implementing the capabilities the end users need in order to request storage resources are installed on these DL 360 servers. An Intel PRO/1000T Server Adapter network card is also installed on these two servers to connect them via GbE to the other network devices.
Storage software support (infrastructure servers): 2 ProLiant DL 360 servers. They have the same hardware configuration as the application servers, but run Linux Red Hat 8.0. These servers host a TCP Offload Engine (TOE) Alacritech 1x1000 Copper single-port card, which implements TCP directly in hardware, and a Host Bus Adapter (HBA) card for the FC link to the Cisco SN5428 and to the SAN SCSI resources.
Network impairment (emulation of IP network) support: 1 Compaq ProLiant DL 360 server, OS Windows 2000 Server SP3, 1.13 GHz Intel Pentium III processor, 1.28 GB RAM and 18 GB HD.
Client: the client devices accessing the servers are equipped with an Intel PRO/1000T Desktop network card. This card provides the bandwidth that high-performance desktop PCs need in order to connect at 10/100/1000 Mbit/s.
Monitoring support: 1 Compaq ProLiant DL 360 server, OS Windows 2000 Server SP3, 1.13 GHz Intel Pentium III processor, 1.28 GB RAM and 18 GB HD.

3.1.4.4.2 Software<br />

Storage software: the MUPPET project tests need storage software to be deployed; the choice of which software to deploy depends on both the application involved and the kind of experiments scheduled. This software typically provides a suite of storage services, including mirroring, replication and snapshot, backup and restore, business continuity and disaster recovery capabilities.

Monitoring software: the SNMP pollers are SNMPc and SNMP Traffic Grapher. SNMPc is a Windows application; its GUI is able to automate configuration file management, to provide users with additional data about the monitored devices and to plot several parameters on the same chart. SNMP Traffic Grapher is a freeware poller, simpler than SNMPc and with a few basic functionalities; it allows continuous polling of a large number of devices and automatically stores the data, creating daily text files. PerfMon (Performance Monitor) is native Windows 2000 and Windows NT software. It is able to monitor server/workstation parameters, both on its



own device and on externally connected servers. The monitored parameters are both hardware parameters (e.g. network interfaces, disks, RAM, cache, processor) and software parameters.
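As a minimal illustration of this kind of SNMP polling (using the open-source pysnmp library rather than the SNMPc or SNMP Traffic Grapher tools named above; the host address and OID are example values only):

```python
# Minimal SNMP polling sketch with the open-source pysnmp library; this is an
# illustration of SNMP polling in general, not the lab's actual tooling.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_if_in_octets(host: str, community: str = "public") -> None:
    # IF-MIB::ifInOctets.1 - byte counter of interface 1 (example OID)
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),             # SNMPv2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.10.1")),
    ))
    if error_indication or error_status:
        print("poll failed:", error_indication or error_status.prettyPrint())
    else:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))

poll_if_in_octets("192.0.2.1")   # example management address
```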

Network impairment software: DummyNet allows controlled traffic management, introducing bandwidth limitations and controlled packet loss between servers and workstations. DummyNet captures Ethernet packets and redirects them towards one or more pipes. These pipes simulate the impairment effects, applying them according to custom policies (protocol, addresses, ports, interfaces). By configuring the pipes (sequentially or separately) and by managing the queuing mechanism, it is possible to simulate even very complex real network conditions (asymmetrical traffic, classes of service, multiple links).
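A minimal configuration sketch of this approach is given below, assuming a FreeBSD host with ipfw/dummynet and root privileges; the bandwidth, delay, loss rate and addresses are illustrative values, not the settings used in the lab.

```python
# Illustrative sketch only (not the TILAB configuration): shaping traffic with
# dummynet pipes via the FreeBSD ipfw CLI. Requires root on a dummynet host.
import subprocess

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# Pipe 1: limit bandwidth, add latency and a 0.1% packet-loss rate.
run("ipfw pipe 1 config bw 10Mbit/s delay 20ms plr 0.001")
# Send all IP traffic between the server and the workstation through the pipe.
run("ipfw add 100 pipe 1 ip from 192.0.2.10 to 192.0.2.20")
run("ipfw add 110 pipe 1 ip from 192.0.2.20 to 192.0.2.10")
```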

Measurement software: IOMeter is Intel software, now available as freeware with C-language source code. IOMeter is benchmark software for measuring server I/O performance. Its configuration includes:
1) The agent (Dynamo), installed both on the tested server and on the client terminals. Every agent defines the disk quota on which to perform the I/O operations, creating at the same time the IOBW.TST file. For every test, Dynamo performs block I/O operations on the IOBW.TST file, both locally (from the agents installed on the servers) and through the network (from the agents installed on the client terminals). Every agent monitors system performance, collecting the relevant parameters and periodically sending them to the controller (IOMeter).
2) The controller (IOMeter), installed on the test server (it can also be installed on the file server), handles the Dynamo agent configuration and gathers, aggregates and processes the received data. The controller therefore reports the outcomes of the measurements.
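For illustration only, the following sketch shows the kind of timed block-I/O measurement a Dynamo agent performs against its test file; it is not IOMeter itself, and the block size and file name are arbitrary choices.

```python
# Not IOMeter - just a minimal sketch of a timed sequential block-write
# measurement against a scratch test file (analogous in spirit to IOBW.TST).
import os
import time

TEST_FILE = "iobw_test.tmp"      # scratch file used for the measurement
BLOCK_SIZE = 64 * 1024           # 64 KiB blocks
BLOCKS = 1024                    # 64 MiB written in total

def sequential_write_throughput() -> float:
    block = os.urandom(BLOCK_SIZE)
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as fh:
        for _ in range(BLOCKS):
            fh.write(block)
        fh.flush()
        os.fsync(fh.fileno())    # force data to disk before stopping the clock
    elapsed = time.perf_counter() - start
    os.remove(TEST_FILE)
    return (BLOCK_SIZE * BLOCKS) / elapsed / 1e6   # MB/s

print(f"Sequential write throughput: {sequential_write_throughput():.1f} MB/s")
```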

3.1.4.5 SAN LAB Developments<br />

Depending on the project's storage application scenarios, it would be possible to improve the existing SAN Lab architecture with additional storage software and more storage resources. This could easily be accomplished by scaling the existing, highly flexible SAN architecture.

3.1.5 CSP<br />

“CSP – Innovazione nelle ICT” is a non-profit Information-and-Communication-<br />

Technology Research Centre, recognised by the Italian Ministry of Education, University and<br />

Scientific Research. Shareholders include: local government (CSI-Piemonte, City of Turin),<br />

University (Turin Polytechnic, University of Turin) and industrial organisations (Unione Industriale,<br />

Federpiemonte). CSP is a World Wide Web Consortium (W3C) affiliate member.<br />

CSP's contribution will focus on the integration of grid computing middleware and storage technologies in order to demonstrate the usage of network and storage resources as elementary building blocks to assemble a new class of services, to exploit dynamically configurable networking and storage resources in grid computing environments and to develop a new paradigm for storage-on-demand service provisioning.

The CSP test bed will be composed of:<br />

- A grid enabled Linux cluster, providing applications and services accessible both interactively<br />

and through a portal;<br />

- Grid client nodes;<br />

- A storage area network (SAN) providing FC/iSCSI access to storage.<br />

This infrastructure is currently connected to and accessible through the Italian academic network GARR and Telecom Italia's RESEAU network.

The Small Computer Systems Interface (SCSI) enables host computer systems to perform block data<br />

input/output (I/O) operations to a variety of peripheral devices. Target devices may include disk and<br />

tape devices, optical storage devices, as well as printers and scanners. The traditional SCSI connection<br />

between a host system and peripheral devices is based on parallel cabling.<br />

Parallel SCSI cabling has inherent distance and device support limitations. For storage applications,<br />

these limitations have fostered the development of new technologies based on networking<br />

architectures such as Fibre Channel and Gigabit Ethernet. Storage area networks (SANs) based on



serial gigabit transports overcome the distance, performance, scalability and availability restrictions of<br />

parallel SCSI implementations. By leveraging SCSI protocols over networked infrastructures, storage<br />

networking enables flexible high-speed block data transfers for a variety of applications, including<br />

tape backup, server clustering, storage consolidation, and disaster recovery. The Internet SCSI<br />

(iSCSI) protocol defines a means to enable block storage applications over TCP/IP networks.<br />

The SCSI protocol demands stability, data integrity and, in current implementations, expects high<br />

bandwidth on demand. IP networks, by contrast, are inherently unreliable, may drop packets under<br />

congested conditions, and have highly variable bandwidth. The TCP layer is meant to deal with the<br />

instability and packet loss that may accompany IP transport, while higher speed wide area connections<br />

can alleviate bandwidth issues for block storage data. In addition, the internal mechanisms of the iSCSI protocol provide additional monitoring of TCP connections and means for recovering from lost or corrupted command and data PDUs.

For high performance storage networking applications, the iSCSI protocol is dependent on several<br />

other technologies to make it a viable partner to Fibre Channel SANs. Imposing TCP overhead on<br />

servers, for example, is unacceptable for storage applications where server CPU cycles are at a<br />

premium. For optimum performance, iSCSI adapters require TCP/IP off-load engines (TOEs) to<br />

minimize processing overhead. TCP off-load engines will greatly assist iSCSI’s ability to provide<br />

enterprise-class solutions that run at or near wire speeds.<br />

Storage applications using iSCSI will also benefit greatly from the introduction of 10 Gigabit Ethernet. Ten-gigabit and faster Ethernet enables scalable IP SANs that support larger populations of servers and storage devices and a variety of storage applications that can be run

concurrently over the same network infrastructure. With TCP off-load engines on servers and large<br />

data pipes in the network, iSCSI solutions can achieve an enterprise-ready status for IP-based SANs.



3.2 Central Europe test bed<br />


The Central European test bed is based on an ASON/GMPLS enabled transport network<br />

demonstrator, which was part of the OIF World Interoperability <strong>Test</strong>s and Demonstration at the<br />

SuperComm 2004 (see Chapter 2) and will be significantly extended/enhanced during the MUPPET<br />

project.<br />

The test bed network architecture is fully compliant with the ITU-T G.8080 [7] ASON architecture and with the architecture the OIF is following (Figure 3.2-1). It is composed of well-separated network domains, linked together at the control plane and data plane level by User-Network Interfaces (UNI) towards the different types of client networks and via External Network-Network Interfaces (E-NNI) towards other transport network domains. All control plane interfaces were tested during the OIF World Interoperability Tests and can therefore interoperate globally.

The figure shows clients (e.g. IP, ATM, Ethernet) attached via UNI user signalling to an ASON control plane of Optical Connection Controllers (OCC) linked by I-NNI and E-NNI interfaces, which drive the cross-connects (XC) of the transport plane through the Connection Control Interface (CCI); a Network Management System (NMS) is attached via NMI-A and NMI-T. Legend: OCC: Optical Connection Controller; CCI: Connection Control Interface; NNI: Network-Network Interface; UNI: User Network Interface; NMS: Network Management System; NMI-A: Network Management Interface for the ASON control plane; NMI-T: Network Management Interface for the transport plane.
Figure 3.2-1: ITU-T ASON architecture (G.8080) and OIF architecture, with which the Central Europe test bed is compliant.

Corresponding to this network architecture, Figure 3.2-2 gives an overview of the ASON/GMPLS test network at T-Systems/Deutsche Telekom, highlighting the Marconi network domain, which will be included in the European MUPPET test bed. Besides the local interconnections to other transport and client network domains via UNI and E-NNI control plane interfaces and the Ethernet-over-SDH adaptation functions towards Ethernet-based metro networks, a first attempt was made to depict the external interfaces to the other local test beds and application locations within the MUPPET consortium. Furthermore, the interconnection to the German national VIOLA project [17] is shown, which is planned for a future cooperation.
Figure 3.2-3 shows the detailed topology of the MUPPET test bed in Berlin. This ASON/GMPLS domain will be composed of three XCs, interconnected by I-NNI STM-64 interfaces. The distributed control plane enables the following network functions within the network domain:

• Fast connection provisioning, modification and tear down



• Multiple resilience functions: 1+1 protection, shared protection, restoration<br />

• Auto-discovery functions, e.g. neighbour discovery<br />

The figure shows the ASON/GMPLS transport network with the domains TN#1 and TN#2 linked by E-NNI interfaces, IP clients #1 and #2 attached via UNI, a metropolitan area network (MAN) and GE access links, and video applications acting as traffic sources.
Figure 3.2-2: ASON/GMPLS test network configuration at Deutsche Telekom.

The figure shows three Marconi OXCs (VC-4 cross-connects, all long-reach interfaces) interconnected by I-NNI STM-64 links, with E-NNI interfaces towards the ASON/GMPLS domains TN#1 and TN#2 and towards the remote partners ACREO, TILAB, VIOLA, PSNC and FAU, UNI interfaces towards IP clients #1 and #2, GE (SX/LX), STM-1 and STM-16 access links, a MAN, and video applications; the node ports are labelled S1, S10, S11 and S13 with their port numbers.
Figure 3.2-3: Detailed topology of the MUPPET Central Europe test bed.



Domain internal data plane functions are:<br />

• Interfaces: STM-16/STM-64 (LR) and Gigabit-Ethernet (GE) with SX/LX reach<br />

• VC-4 switching granularity<br />

• Ethernet mapping over SDH compliant to GFP-F/T, VCAT, LCAS<br />

o Optional: FC mapping over SDH (GFP-T, VCAT, LCAS)<br />

EMS connection and port configuration functions:<br />

• SDH ports: traditional, I-NNI, E-NNI, UNI-N<br />

• GE: GFP-F, GFP-T mapping configuration over VC-4 VCAT/LCAS enabled connections<br />

The MUPPET Central Europe test bed will have the following external interface functionalities:

Control plane functions:<br />

• UNI-N 1.0 R2 (future options: UNI2.0 Ethernet)<br />

• OIF E-NNI with external signalling/routing<br />

• Support of switched connection (initiated by UNI-C) configurations over multiple network<br />

domains<br />

• Support of soft permanent connection (initiated by EMS) configurations over multiple network<br />

domains<br />

Data plane functions:<br />

• Interfaces: STM-1/16 (LR) and GE (LX)<br />

• VC-4 switching granularity<br />

• Ethernet mapping over SDH compliant to GFP-F/T, VCAT, LCAS<br />

EMS connection and port configuration functions:<br />

• SDH ports: traditional, I-NNI, E-NNI, UNI-N<br />

• GE: GFP-F, GFP-T mapping configuration over VC-4 VCAT/LCAS enabled connections<br />

• Support of soft permanent connection (initiated by EMS) configurations over multiple network<br />

domains<br />

Furthermore, broadband video applications (video, TV, HDTV) are available to visualize the network functionalities and performance, but also to serve as data traffic sources besides the SDH and Ethernet traffic generators/test equipment. All these applications will be Ethernet based (GE interface).

3.2.1 FAU<br />

Although video transmissions over the Internet are becoming increasingly popular, there are still<br />

application areas that have not been able to use the Internet for their video and audio processing. As<br />

the Internet only offers best-effort transmissions and does not provide Quality-of-Service (QoS)<br />

guarantees, applications based on high-resolution video with interactive services that rely on short<br />

latencies have so far been forced to use Asynchronous Transfer Mode (ATM) networks.<br />

The contribution of the Friedrich-Alexander University of Erlangen-Nuremberg (FAU) to the<br />

MUPPET project focuses on such bi-directional high quality applications in the areas of tele-medicine



and professional broadcasting, and relies on the MUPPET network to provide very low latency transmissions with end-to-end delays well below 150 ms for high volumes of data.

The SDI-over-x technologies employed by the FAU typically map uncompressed Serial Digital<br />

Interface (SDI) signals to network transport units within a few microseconds and can be applied to<br />

both Ethernet (SDI-over-IP) and optical networks (SDI-over-lambda). The application is currently<br />

based on ATM technology. As far as the MUPPET application scenario is concerned, the<br />

interconnection could be provided over dark fibre, Gigabit Ethernet (end-to-end) or Synchronous<br />

Digital Hierarchy (SDH) technology (end-to-end). The test bed applications of the FAU involve 1-3<br />

SDI sources; each source requires a minimum of 300 Mbps of bandwidth. To ensure sufficient QoS,<br />

the bandwidth demands may have to be increased to 600 Mbps to allow for Forward Error Correction<br />

(FEC) mechanisms. The minimum network requirements specified for the interconnection to the<br />

MUPPET infrastructure will be refined and detailed during the project and will be investigated for<br />

different levels of QoS.<br />
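As a rough, purely illustrative check of these figures (the path length assumed below is an arbitrary example, not a MUPPET requirement), the following sketch relates the per-source bandwidth, the FEC overhead and the propagation component of the delay budget:

```python
# Back-of-the-envelope sketch of the FAU requirements quoted above; the
# distance assumed here is illustrative, not a value from the project.
SOURCES = 3                      # 1-3 SDI sources in the test bed applications
RATE_MBPS = 300                  # minimum per-source bandwidth
FEC_FACTOR = 2                   # FEC may roughly double the demand (300 -> 600 Mbps)
DISTANCE_KM = 1000               # assumed fibre path length between sites
PROPAGATION_KM_PER_MS = 200      # light in fibre travels roughly 200 km per millisecond

peak_demand = SOURCES * RATE_MBPS * FEC_FACTOR
propagation_delay_ms = DISTANCE_KM / PROPAGATION_KM_PER_MS

print(f"Aggregate demand with FEC: {peak_demand} Mbps")            # 1800 Mbps
print(f"One-way propagation delay: {propagation_delay_ms:.0f} ms "
      f"of the < 150 ms end-to-end budget")                        # ~5 ms
```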

The FAU could be interconnected to the MUPPET test bed in several different ways: One possibility<br />

would be to connect the FAU over IP (Figure 3.2-4); this could only be a temporary solution,<br />

however, as the application scenario of the FAU would have to scale down to a bandwidth of 50 Mbps<br />

per video source.<br />

Figure 3.2-4 Connecting the FAU to the MUPPET test bed over IP<br />

Another option could be to connect the FAU to the current G-WiN and GEANT networks via a GE-over-MPLS connection (Figure 3.2-5):

Figure 3.2-5 Connecting the FAU to the MUPPET test bed via GE over MPLS and G-WiN



Depending on the developments of the national and international networks, there could also be other options in the near future for connecting the FAU to the MUPPET test bed: one possibility could be a connection of the FAU via VIOLA and GEANT2 (GN2) using Gigabit Ethernet and SDH (Figure 3.2-6):

Figure 3.2-6 Connecting the FAU to the MUPPET test bed via VIOLA<br />

As the GEANT2 (GN2) and X-WiN implementations become available, there could also be an option<br />

to either connect the FAU based on a dedicated SDH link between GEANT and the FAU (Figure 3.2-<br />

7) or to link the FAU to MUPPET using an X-WiN lambda connection (Figure 3.2-8).<br />

Figure 3.2-7 Connecting the FAU to the MUPPET test bed via GEANT with a dedicated SDH<br />

link



Figure 3.2-8 Connecting the FAU to the MUPPET test bed via lambda and X-WiN



3.3 Northern Europe test bed<br />

3.3.1 Background<br />


The Acreo National Broadband Test bed is built primarily to support Swedish and international research and industry.
The test bed consists of three major parts:

• Advanced service testing in Hudiksvall 1 , including TV, telephony and Internet connectivity, and<br />

with a planned extension to HDTV.<br />

• Transmission tests on a link between Hudiksvall and Stockholm<br />

• GMPLS test in Stockholm<br />

3.3.2 <strong>Test</strong> bed description<br />

Figure 3.3-1: ACREO National Broadband <strong>Test</strong> <strong>Bed</strong><br />

The description of the Acreo National Broadband Test bed focuses on the MUPPET-related aspects; for example, the experiences of the Test Pilot customers in Hudiksvall are not addressed.

1 Hudiksvall is a Swedish town about 450 km north of Stockholm.



3.3.3 Architecture<br />


The Acreo National Broadband Test bed is built according to the GMPLS architecture described in “Generalized Multi-Protocol Label Switching Architecture” (draft-ietf-ccamp-gmpls-architecture-07.txt). Although this document is still an Internet Draft, it has been approved by the IESG as a Standards Track RFC and is waiting in the RFC Editor's queue for publication.
The GMPLS architecture is a multi-layer architecture, meaning that a common control plane potentially controls several data planes.

3.3.4 Networking technologies<br />

The test bed is built to primarily test transparent optical networking technologies, including control of<br />

multiple layers (IP/ETH/WDM) by means of GMPLS. This is done in a network where all nodes run a<br />

common control plane and it is possible to establish connectivity over several layers, e.g. an Ethernet<br />

VLAN between two switches that are interconnected through an OXC. The common control plane<br />

controls routers, switches and OXCs and the connectivity (VLAN) from the router to the switch and<br />

from the switch (VLAN/WDM) to the OXC that switches on the lambda-level.<br />

The figure shows the Hudiksvall access network: customer premises equipment (CPE), set-top boxes (STB) and VoIP terminals at the access nodes Sånglärkan, Vallvägen, Knösta and Mo, layer-2 Ethernet switches and layer-3 edge routers in the Hudiksvall city network, the Hudiksvall main node and lab node with management and service platforms, a CWDM link, connections to the Ericsson lab network and to the Sollentuna Energi layer-2 network, and a protected 380 km WDM transmission link to the Acreo lab node in Kista.
Figure 3.3-2: Test bed Hudiksvall: Focus on Access.



The figure shows the Stockholm (intra-Kista) network: Juniper M5 routers at the Acreo, SCINT and AMT sites, Extreme Black Diamond Ethernet switches, DTM OXCs and a Lumentis GMPLS-enabled DWDM metro OXC, interconnected by Gigabit Ethernet and DTM links, with a free lambda available.
Figure 3.3-3: The Stockholm Network.

3.3.5 Control plane

The Acreo test bed control plane is based on extensions of the IETF routing protocols (OSPF and BGP) and on the IETF signalling protocol for GMPLS, RSVP-TE. This is a multi-layer control plane, meaning that L3, L2 and L1 connectivity are all set up through the same control plane.

3.3.6 Data plane<br />

The data plane is currently built on WDM, Ethernet and IP in the core network; towards the customers (test pilots) Acreo runs single-channel wavelengths and Ethernet.



3.3.7 Context<br />


The choice of technology is not accidental, but very much modelled on existing metro networks and<br />

broadband islands in Sweden. The idea here is to meet some key requirements from the Swedish<br />

networking industry.<br />

3.3.8 Interconnections<br />

3.3.8.1 Local NRENs<br />

The connectivity through the Nordic (NORDUNET) and Swedish (SUNET) NRENs to GEANT is still an open issue. Acreo will request a GE connection from its lab to GEANT (and settle for FE or lower if necessary).

3.3.8.2 Control Plane<br />

Acreo's interfaces to other networks are through Juniper routers; the Juniper router will terminate the control plane and switch the data plane.
There are also issues around control plane incompatibility due to the several parallel standardisation efforts. It may be possible, but is not very likely, to find a control plane converter, and it is not currently seen as feasible for Acreo to take on such an effort.

The Juniper router currently has support for the following GMPLS functions/protocols:<br />

- RSVP-TE for GMPLS: RFC 3473<br />

- OSPF-TE for GMPLS: draft-ietf-ccamp-ospf-gmpls-extensions-12.txt<br />

- SONET-SDH extensions: draft-ietf-ccamp-gmpls-sonet-sdh-08.txt<br />

- Signalling for non-adjacent RSVP: draft-ietf-mpls-lsp-hierarchy-08.txt<br />

- GMPLS overlay: draft-ietf-ccamp-gmpls-overlay-04.txt<br />

- Others: RFC 3471, draft-ietf-ccamp-gmpls-routing-09.txt<br />

3.3.8.3 Data plane<br />

The data plane as described in this section is still very much an open issue. Acreo's first take is to try to find connectivity “the obvious way”, through SUNET and NORDUNET to GEANT. Today Acreo is unsure what type of connectivity that will give; there is a risk that it will be at the IP level only. Acreo is looking for GEANT points of presence to see whether there are other options.
One possible solution, if only IP is available, is to tunnel Ethernet through an IPsec or GRE tunnel, as sketched below. This has been done before but will nevertheless take some implementation effort.
With extreme luck Acreo will be able to meet an October deadline for data plane connectivity to the MUPPET partners; realistically, Acreo can do this before the end of the year.
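Purely as an illustration of the Ethernet-over-GRE idea (not Acreo's actual deployment), the following sketch uses the open-source scapy library to build a GRE-encapsulated Ethernet frame; the addresses are documentation examples and 0x6558 is the GRE protocol type for bridged Ethernet.

```python
# Illustrative sketch only: encapsulating an Ethernet frame in GRE/IP with
# scapy, as would be needed if only IP connectivity is available end to end.
from scapy.all import Ether, IP, GRE, Raw

inner = Ether(src="02:00:00:00:00:01", dst="02:00:00:00:00:02") / Raw(b"example payload")
outer = IP(src="192.0.2.1", dst="198.51.100.1") / GRE(proto=0x6558) / inner

print(outer.summary())
print(len(bytes(outer)), "bytes on the wire (plus the outer Ethernet header)")
```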

3.3.9 Possible test/demos<br />

Acreo has several goals that would be possible to implement in the MUPPET project, e.g.:<br />

• To establish a control plane that can establish connectivity from a point in our network, across<br />

nodes in our network, across one or several other networks to a remote termination point. It is<br />

obvious that this needs to be implemented in several steps.



• Given that the data plane is Ethernet, try to set up a VLAN across to a site in another test bed,<br />

preferably using a third test bed as intermediary.<br />

3.3.10 Future<br />

In the future Acreo sees a network where the control of all or several layers is integrated; this is sometimes called IP/optical integration. There might be several different paths to such a network, and this is a topic for further discussion.

3.3.11 DTU<br />


The Technical University of Denmark (DTU) is one of Northern Europe's leading universities, and the<br />

operator of the Danish Research Network, DAREnet, is located at the university’s premises. Hence,<br />

the connections to the Nordic countries through NORDUnet, an international collaboration between<br />

the Nordic national networks for research and education, are terminated at DTU.<br />

For experimental activities NORDUnet offers the NorthernLight network [18], which is a Lambda<br />

network facility connecting Stockholm with Copenhagen, Oslo, Amsterdam and Helsinki. Figure<br />

3.3-4 shows the NorthernLight experimental network.<br />

Figure 3.3-4: NorthernLight Lambda network for connecting DTU with GÉANT. Source: http://www.nordunet.net/development/northernlight.html

In each POP a Cisco ONS 15454 is equipped with a number of GbE ports configured to allow GbE<br />

channels between any two nodes in the network. Thus, DTU is connected to Stockholm through a<br />

GbE connection, terminating in Stockholm at the same physical premises as DANTE and SUNET.<br />

The ASON UNI is obtained by tunnelling the signalling through the GbE layer 2 network.<br />

There are at the moment no specific application requirements apart from Grid computing, which is a significant activity at DTU and at some of the institutes in its surroundings.



3.4 Western Europe test bed<br />


The TID test bed for MUPPET (the Western Europe test bed) is an IP/MPLS network with the architecture shown in Figure 3.4-1. It is composed of layer-2 switches and Gigabit Switch Routers (GSR), with a specific MPLS area, supporting FE and GbE interfaces for the internal connections and an STM-1 155 Mbps ATM interface for the external interconnection to RedIRIS, with a 34 Mbps PVC defined for MUPPET traffic forwarding.
In the near future this ATM connection will probably be changed to a connection based on VLANs over a Gigabit Ethernet link. With the collaboration of RedIRIS, it will be possible to provide a GbE link between the TID test bed and RedIRIS, so that the Ethernet frames will go directly across the link, increasing the efficiency of the IP traffic transport between the TID test bed and the other MUPPET partners via RedIRIS-GEANT.

The TID test bed offers the following functionalities:
• Layer-2 and layer-3 connectivity based on Ethernet, using standard protocols to optimise the data paths: STP for layer 2 and OSPF for unicast routing at layer 3. Other functions like IP policies and routing policies are supported, greatly increasing the routing capabilities of the test bed.
• MPLS support, using different signalling protocols such as LDP and RSVP and allowing the definition of different types of LSP tunnels:
o Static LSPs, configuring the path of the LSP node by node.
o Dynamic LSPs, including “explicit LSPs” and “CSPF-based LSPs”, in which different conditions and constraints can be imposed on the creation of the LSP path (nodes, bandwidth, type of interfaces, hops, colour and so on). Other features like secondary paths and fast reroute are also supported, enhancing the MPLS functionality of the test bed.

• Layer-3 VPNs based on MPLS. This functionality allows service providers to offer VPN services to their customers over an IP backbone, using MPLS for forwarding VPN traffic and BGP for distributing VPN routes. This mechanism allows:
o Service providers to increase scalability and flexibility.
o Customers to remove the need to build their own IP backbone and to be freed from inter-site connection issues.
o Customers to use private IPv4 addresses, removing the requirement to use globally unique address ranges.

• Layer-2 VPNs based on MPLS. This functionality allows the encapsulation and transport of layer-2 (Ethernet) frames across the network according to the Martini drafts. Three specific functions are supported in the test bed:
o Point-to-point LSPs for Virtual Leased Line (VLL) services. This allows the transport of Ethernet frames between two end points, so that these points remain connected at layer 2 in a transparent way.



o Point-to-multipoint LSPs for Transparent LAN Services (TLS). This function allows the transport of Ethernet and VLAN traffic for multiple sites that belong to the same layer-2 broadcast domain. Static and dynamic label assignment is allowed for the virtual circuit labels.
o Virtual Private LAN Services (VPLS), which use TLS functions to interconnect the layer-2 traffic of different customers across the MPLS network, creating LSP tunnels through the MPLS area for each client, provisioned for client-specific service. Thus, with VPLS the MPLS area appears as a logical layer-2 switch to each client site.

• Multicast support, allowing the possibility to forward IP-packets to a group of receivers, which<br />

have explicitly expressed interest for a multicast content. Multicast routing is used to transmit<br />

traffic from a source to a group of receivers. Any host in TID test bed or in another Muppet<br />

partner’s test bed can be a source, and the receivers can also be in any test bed (TID’s one or<br />

another partner’s one) as long as they are members of the group to which the multicast packets are<br />

addressed, so only the members of a group can receive the multicast data stream. Multicast<br />

routing protocols are supported: PIM-SM, SSM, IGMP, IGMP-snooping allowing the optimal<br />

forwarding of the multicast frames and packets across the network, according to the changing<br />

location of sources and receivers so the distribution paths adapts to that.<br />

• Quality of Service support. Different QoS-related mechanisms are supported that allow the DiffServ functions to be implemented in the test bed. These mechanisms are the following:
o Traffic prioritisation, to differentiate between several types of traffic by segregating them into different priority queues. Once the traffic has been identified and classified, it can be assigned to any of the priority queues to ensure proper prioritisation. Priority can be allocated based on any combination of layer-2 (802.1p), layer-3 or layer-4 criteria. Strict priority and other scheduling mechanisms such as weighted fair queuing are also supported.
o ToS rewrite, which provides access to the ToS field of an IP packet (or the DSCP in DiffServ) in order to mark it with the desired value. At the access point to the network the packets are first classified to differentiate the traffic flows, and the ToS/DSCP value is rewritten (marked), allowing traffic prioritisation inside the network according to the defined QoS scheme.
o Rate limiting, a mechanism devised to control the bandwidth used by traffic entering the network, on a per-flow basis. A flow meeting certain criteria can have its packets re-prioritised or dropped if its bandwidth usage exceeds a specified limit. This function can be applied, for example, to control the input traffic of users according to their SLAs.
o WRED, a mechanism used to alleviate traffic congestion by randomly dropping packets once the traffic exceeds a defined limit but before the congestion thresholds are reached. When applied to a port it can prevent congestion on that port, reducing latency and greatly improving TCP behaviour. (A small illustrative sketch of the rate-limiting and WRED mechanisms follows the functionality list below.)

• Security support and other functions. Security is based on the configuration of ACLs (access lists) and filters that help to control access and filter traffic going through the network. The ACLs allow traffic profiles to be defined at the access points of the network based on layer-3 and layer-4 parameters, in order to filter the traffic, permitting or denying it and dropping the undesired part. Layer-2 security filters are also supported, performing filtering based on MAC addresses. Other security-related functions such as password authentication, the Secure Shell protocol and port-based authentication are also supported. Additionally, functions such as NAT are supported, which allows the mapping of IP addresses used within the TID test bed to different IP addresses used within another partner's network.
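
The rate-limiting and WRED mechanisms listed above can be illustrated with a small, self-contained sketch. The Python code below is not taken from the TID equipment; it is a minimal model, with purely illustrative parameter values, of a token-bucket policer and a WRED drop-probability curve of the kind described in the QoS bullets.

```python
class TokenBucket:
    """Minimal token-bucket policer: packets exceeding the contracted rate
    are reported as non-conforming (to be dropped or re-prioritised)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.burst = burst_bytes        # bucket depth in bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, pkt_bytes, now):
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False


def wred_drop_probability(avg_queue, min_th, max_th, max_p):
    """WRED curve: no drops below min_th, random early drops between the
    thresholds, forced drop above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)


# Example: a 34 Mbps policer (the PVC rate towards RedIRIS) and a WRED
# profile with invented thresholds.
policer = TokenBucket(rate_bps=34_000_000, burst_bytes=50_000)
print(policer.conforms(1500, now=0.001))                                      # True
print(wred_drop_probability(avg_queue=80, min_th=50, max_th=100, max_p=0.1))  # 0.06
```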

Figure 3.4-1: TID test bed in MUPPET



3.5 Eastern Europe test bed


The purpose of the Eastern Europe test bed is to reflect the simplified, low-overhead architecture of multi-gigabit networks that are being built by many NRENs. These networks are usually built on dark fibres with relatively cheap and simple transport (Gigabit Ethernet/10 Gigabit Ethernet, optionally over (D/C)WDM). By their nature these networks are equipped neither with strict QoS/BoD mechanisms nor with ASON/GMPLS features, which makes them somewhat difficult to integrate with ASON/GMPLS technologies for end-to-end services. However, the more sophisticated mechanisms included in Ethernet switching devices allow them to preserve some kind of QoS by using traffic prioritisation and bandwidth reservation.

For the purpose of the MUPPET project, PSNC will provide its test bed in two phases:
Phase 1 – a test bed comprising a data channel, a data source/sink, a control channel and a domain controller.
Phase 2 – a real GE switch-based test bed with all elements of Phase 1.

The test bed in both phases 1 and 2 will be connected to the MUPPET test bed via the following network domains:
• POZMAN network (Gigabit Ethernet)
• PIONIER network (10GE)
• GEANT network (currently IP/MPLS)
The test bed topology, functionality and interconnections are discussed below.

3.5.1 Eastern Europe test bed phase 1

In phase 1 the EE test bed will use PC machines connected to the rest of the MUPPET test bed. Two machines with different functionality will be directly connected to the test bed.
A GE (UNI) domain controller will serve as control channel client (be it UNI or another protocol), processing control messages received from and sent to the MUPPET test bed. As a result, the domain controller will create internal commands configuring the other PC.
A PC acting as a data source/sink will initially simulate the future GE network. It will receive configuration commands and set up appropriate filters to simulate realistic data path behaviour.
Such a test bed construction will allow the implementation of the following functionality (a minimal control-flow sketch follows this list):
• topology exposure
• data connection request
• data connection teardown
• data connection modification
• status enquiry
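
As an illustration of the control flow just described, the following Python fragment models a phase-1 domain controller that receives UNI-like control messages from the MUPPET test bed and turns them into internal commands for the data source/sink PC. All message type names and command strings are hypothetical placeholders; the actual protocol (UNI or another) and command syntax remain to be defined.

```python
from dataclasses import dataclass, field


@dataclass
class DomainController:
    """Hypothetical phase-1 domain controller: maps incoming control
    messages onto internal commands for the data source/sink PC."""
    topology: dict = field(default_factory=lambda: {"nodes": ["TestPC"], "links": []})
    connections: dict = field(default_factory=dict)

    def handle(self, msg):
        kind = msg.get("type")
        if kind == "topology_exposure":
            return {"status": "ok", "topology": self.topology}
        if kind == "connection_request":
            cid = f"conn-{len(self.connections) + 1}"
            self.connections[cid] = msg["params"]
            # in phase 1 this becomes a packet-filter setup command for the
            # PC that simulates the future GE network
            return {"status": "ok", "id": cid, "command": f"setup-filter {msg['params']}"}
        if kind == "connection_teardown":
            self.connections.pop(msg["id"], None)
            return {"status": "ok", "command": f"remove-filter {msg['id']}"}
        if kind == "connection_modification":
            self.connections[msg["id"]] = msg["params"]
            return {"status": "ok", "command": f"update-filter {msg['id']}"}
        if kind == "status_enquiry":
            return {"status": "ok", "active": list(self.connections)}
        return {"status": "error", "reason": f"unknown message type {kind!r}"}


dc = DomainController()
print(dc.handle({"type": "connection_request", "params": {"bandwidth_mbps": 100}}))
print(dc.handle({"type": "status_enquiry"}))
```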


Figure 3.5-1: Test bed Phase 1 (MUPPET control channel (UNI), GE (UNI) domain controller, internal communication, and a data PC acting as source/sink simulating the future GE network)

3.5.2 Eastern Europe test bed phase 2

In the second phase the network simulator will be replaced by a real GE switch-based test bed, allowing real VLANs with the required quality to be created between other MUPPET clients and local EE test bed clients. This phase will require more complex functionality in the domain controller, which will have to interface with the vendor-specific management interface of the GE switches.
The test bed shall support the same set of control messages as phase 1.
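
In phase 2 the same abstract requests must be translated into operations on the vendor-specific management interface of the GE switches. The snippet below sketches only that translation step; the command strings are invented for illustration and do not correspond to any particular vendor's CLI or management API.

```python
def connection_request_to_vlan_commands(request):
    """Translate an abstract connection request into a list of (invented)
    management commands that a phase-2 domain controller could push to a
    GE switch in order to create a VLAN with the required quality."""
    vlan_id = request["vlan_id"]
    cmds = [f"create vlan {vlan_id}"]
    for port in request["ports"]:
        cmds.append(f"add port {port} to vlan {vlan_id}")
    if "bandwidth_mbps" in request:
        # bandwidth guarantee mapped onto a per-VLAN rate limit / priority queue
        cmds.append(f"set vlan {vlan_id} rate-limit {request['bandwidth_mbps']} mbps")
    return cmds


print(connection_request_to_vlan_commands(
    {"vlan_id": 120, "ports": ["ge-1", "ge-5"], "bandwidth_mbps": 200}))
```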

Figure 3.5-2: Test bed Phase 2 (MUPPET data and control (UNI) channels, GE (UNI) domain controller and internal communication)

Depending on the project evolution and requirements, additional devices can be connected as client machines, including traffic generators and analysers, audio/videoconferencing equipment, streaming servers and grid clients.


4 Test bed interconnections

This chapter contains a first attempt to describe the planned or potential interconnections of the local test beds to the NRENs from both network domain perspectives, and additionally gives a short description of the functionalities that the NRENs and GEANT can provide to the MUPPET test bed.
Figure 4-1 and Figure 4-2 show a first draft of the MUPPET test bed topology, including the characteristics of each local test bed and the planned interconnections.

Figure 4-1: Proposed lab interconnections to build a MUPPET pan-European test bed (Southern Europe test bed: TILAB ASON/GMPLS network; Western Europe test bed: Telefonica I+D IP/MPLS network; Central Europe test bed: DT ASON/GMPLS network with FAU; Eastern Europe test bed: PSNC Ethernet network; Northern Europe test bed: ACREO and DTU GMPLS network; interconnected via UNI)

Figure 4-2: Resulting interconnections for the MUPPET test bed (Western Europe test bed, Telefonica I+D, via RedIRIS; Southern Europe test bed, TILAB/Telecom Italia, via GARR; Central Europe test bed, T-Systems/Deutsche Telekom and FAU, via DFN; Eastern Europe test bed, PSNC, via PIONIER; Northern Europe test bed, ACREO and DTU, via SUNET, DARENET and NORDUnet; all interconnected through GEANT)


4.1 Southern Europe test bed - GARR

Interconnection

The interconnection between the TILAB test bed in Torino and the Italian NREN (GARR) is not yet in place and will be developed in the coming months according to the project plan.
For the time being a set of different options has been identified, also by matching the foreseen requirements of the experimental activities with the project roadmap in terms of gradually enhancing the interconnections of the five local test beds, as described in the Technical Annex.
That said, the interconnection types that have been listed among the different options and could become available in the future are:
• ATM interconnection (34 and/or 155 Mbit/s) to the GARR PoP in Torino. This could be a first and quick step to guarantee a good level of bandwidth within a few months, so that some experimental activity can start as soon as possible.
• Ethernet interconnection to the GARR PoP in Torino. This could be achieved in two ways:
o by ordering a commercial Ethernet (Fast Ethernet and/or Gigabit Ethernet) service;
o by obtaining one or more dark fibre pairs that physically link the GE interfaces on a switch/router in TILAB to the GE interfaces on a router at the GARR PoP.
The second way, although preferable from the point of view of connection flexibility, could require much more effort and time than the first one.
Finally, it should be mentioned that the only interconnection available today to a local MUPPET test bed is the control plane (E-NNI) interconnection to DT (T-Systems lab in Berlin), based on an IPSec tunnel over the Internet.

Interface & Network Functionalities
Regarding the external interface functionalities that will be available towards the other local test beds, these are in principle the same as those that will be implemented within the TILAB test bed:
• UNI-N 1.0 R2 (UNI 2.0 for GE interfaces when available)
• E-NNI as per OIF2002.476.21, OIF2003.179.08 and OIF2003.259.02
on the control plane, and:
• STM-1, STM-16 and GE interfaces
• Ethernet mapping over SDH, in compliance with GFP, VCAT and LCAS (G.7041, G.707, G.7042) (a small sizing sketch is given after the lists below)
on the data plane.
Finally, the network in TILAB will have two switching functionalities:
• VC-4 switching granularity
• Fibre Switching Capability (FSC)
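
As an aside on the Ethernet-over-SDH mapping mentioned above, the size of a VC-4 virtual concatenation group is chosen from the client rate; a Gigabit Ethernet client, for instance, is commonly carried in a VC-4-7v group. The arithmetic is sketched below, using the standard ~149.76 Mbit/s C-4 payload rate; treat it as an illustration rather than a provisioning rule taken from this test bed.

```python
import math

VC4_PAYLOAD_MBPS = 149.76  # payload capacity of one VC-4 (C-4 container)


def vcat_group_size(client_rate_mbps):
    """Number of VC-4s needed in a VC-4-Nv virtual concatenation group."""
    return math.ceil(client_rate_mbps / VC4_PAYLOAD_MBPS)


print(vcat_group_size(1000))  # Gigabit Ethernet -> 7, i.e. VC-4-7v
print(vcat_group_size(100))   # Fast Ethernet    -> 1, i.e. VC-4-1v
```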

4.1.1 Southern Europe test bed access points
Two GARR Points of Presence are of interest for the MUPPET project, in the towns of Milano and Torino.



The GARR network in Milano is based on three main PoPs (Caldera, Lancetti and Colombo) connected by dark fibres lit by coarse WDM equipment. The PoP located in Via Lancetti hosts the international connection to GÉANT and the circuits to Rome and Bologna. The PoP thus provides very high speed switching between Italy and the other three test beds in Europe.
The Torino PoP is currently hosted in the Physics department of the University of Torino and is connected to Milano by a 2.5 Gb/s SDH circuit providing ample capacity. Local interconnections between the TILAB test bed and the other institutions can be accepted in various formats, with a preference for Gigabit Ethernet.
GARR also provides gigabit connectivity to Grid computing centres in Italy, for example INFN-CNAF in Bologna, allowing seamless collaboration and integration if needed.

4.1.2 GARR
Consortium GARR (short name GARR) is an association under Italian law, established by the national academic and research community. The aim of the Consortium is to plan, manage and operate the Italian National Research and Education data transmission network by implementing the most advanced technical solutions and services.
The GARR network connects all the research centres and institutes and all the universities in Italy, at present about 350 institutions. GARR has just started a new project to connect all Italian schools of all grades.

The current version of the GARR network provides connectivity to world-wide research networks through a 10 Gb/s circuit to the GÉANT pan-European network and has separate connections in Milano and Rome to the general Internet at multiples of 2.5 Gb/s.
Its core backbone is a mesh based on Wavelength Division Multiplexing and SDH leased circuits of up to 2.5 Gb/s, connected by state-of-the-art routing and switching equipment. Figure 4.1-1 depicts the backbone as of September 2004. Users' access link speeds range from 2 Mb/s up to 1 Gb/s. The network provides its user base of about 2 million researchers and students with transport of both the IPv4 and IPv6 protocols and with advanced services such as multicast and quality of service.
The network is evolving rapidly, increasing both the amount of meshing between its PoPs and the number of PoPs.
Up-to-date information about the GARR network topology can be found at: http://www.garr.it/

Capability of transport network functions available for MUPPET
GARR can offer standard best-effort IPv4 and IPv6 connectivity up to gigabit speed. In addition, the network can offer dedicated capacity in the form of layer 2 or layer 3 MPLS links using a combination of AToM and CCC technologies. Capacity assurances can be defined for these links using appropriate Quality of Service techniques, such as Premium IP.
Quality of Service techniques, like Premium IP.


Figure 4.1-1: The GARR Network (Sep 2004)


4.2 Central Europe test bed - DFN

This section describes the interconnections to the DFN network in Berlin (DT lab location) and in Erlangen (location of the FAU application lab).

4.2.1 Central Europe test bed access points
The MUPPET Central Europe test bed will have the following external interface functionalities:
Control plane functions:
• UNI-N 1.0 R2 (future options: UNI 2.0 Ethernet)
• OIF E-NNI with external signalling/routing
• Support of switched connection (initiated by UNI-C) configurations over multiple network domains
• Support of soft permanent connection (initiated by EMS) configurations over multiple network domains
Data plane functions:
• Interfaces: STM-1/16 (LR) and GE (LX)
• VC-4 switching granularity
• Ethernet mapping over SDH compliant with GFP-F/T, VCAT, LCAS
EMS connection and port configuration functions:
• SDH ports: traditional, I-NNI, E-NNI, UNI-N
• GE: GFP-F, GFP-T mapping configuration over VC-4 VCAT/LCAS enabled connections
• Support of soft permanent connection (initiated by EMS) configurations over multiple network domains

The roadmap for continuously increasing the test bed interconnections from the DT lab to the other MUPPET local test beds, in terms of bandwidth but also with respect to network functions, is in a first approach as follows:
Available today:
• Control plane E-NNI interconnection to the Southern Europe Test Bed/TILAB based on an IPSec tunnel over the public Internet. This interconnection has been up and running since April 2004, when the OIF World Interoperability Tests and Demonstrations were performed.
• Best-effort IP interconnection of up to 2 Mbit/s, with a fixed traffic volume per month, over the DFN network.
Beginning of 2005:
• Potentially available: up to STM-1 interconnection (BRAIN project, see chapter 4.2.2).
To enable the most flexible access to the DFN PoP in Berlin, a dark fibre interconnection is currently under investigation.


4.2.2 DFN

The Verein zur Förderung eines Deutschen Forschungsnetzes e.V. (known as DFN-Verein) supports science and research in Germany with a high-performance communications infrastructure. About 600 universities, research and scientific institutions are connected to this network. Access speeds of up to 10 Gbit/s currently allow mainframe computers to be used for distributed applications, for example in climate research, in particle physics or in vehicle and aeroplane construction. High-performance links to the Internet2 initiative ABILENE and to the US Internet (12 Gbit/s in total), as well as the connection to the European research network GÉANT (10 Gbit/s), enable science and research to overcome network barriers.
The G-WiN is based on WDM and SDH technology. The meshed infrastructure is adapted to the traffic streams and optimised regularly. The dominant trunk capacity is 10 Gbit/s. Each of the 27 core nodes has at least two links to other core nodes for reliability. The core nodes are equipped with high-performance routers (GSR) and additional switching devices and workstations for service provisioning and for statistics and measurement purposes.
provisioning and statistic and measurement purposes.



Figure 4.2-1: Capability of transport network functions available for MUPPET

The most important service today is DFNInternet; it provides IP connectivity with an access speed of up to 2.5 Gbit/s. However, distributed computing (so-called Grid computing) promotes the continuous expansion of the existing network services. In special cases Gigabit Ethernet links can be provided between Grid locations in Germany and also internationally. If no physical link is available, the "Any Transport over MPLS" (AToM) encapsulation is used inside the G-WiN. For international links a combination with the CCC service in GÉANT has been tested and used successfully.

Description of the POP to be used by the Central Europe test bed



The G-WiN core node in Berlin is located at the Konrad-Zuse-Institut (ZIB) in Berlin-Dahlem. It provides 10 Gbit/s links to Hamburg and, more importantly in relation to MUPPET, to Frankfurt/Main (GÉANT access). TSI is connected to the G-WiN (34 Mbit/s contracted). This access link can certainly be used at the beginning, providing plain IP connectivity. The maximum speed available at this access link would be 155 Mbit/s; the link belongs to the Berlin fibre network BRAIN.
The G-WiN part of BRAIN is described at http://www.brain.de:
Figure 4.2-2: G-WiN Network Topology
The access of Deutsche Telekom is marked as T-Nova. The node between T-Nova and ZIB (TU Berlin) does not provide GE facilities at the moment. If no other (physical) link is available, an IP connection is the only solution that can be offered for now.
However, the tendering process for the successor network (X-WiN) is planned to be completed later this year, and the X-WiN will be operational by 1 January 2006 at the latest. It is to be expected that this will enable more options.

4.3 Northern Europe test bed - NORDUnet
4.3.1 Northern Europe test bed access points
The Acreo interfaces to other networks are through Juniper routers; the Juniper router will terminate the control plane and switch the data plane.



There are also issues around control plane incompatibility due to several standardisation efforts. It is possible, but not very likely, that a control plane converter can be found. It is not currently seen as feasible for Acreo to take on such an effort.
The Juniper router currently has support for the following GMPLS functions/protocols:
- RSVP-TE for GMPLS: RFC 3473
- OSPF-TE for GMPLS: draft-ietf-ccamp-ospf-gmpls-extensions-12.txt
- SONET/SDH extensions: draft-ietf-ccamp-gmpls-sonet-sdh-08.txt
- Signalling for non-adjacent RSVP: draft-ietf-mpls-lsp-hierarchy-08.txt
- GMPLS overlay: draft-ietf-ccamp-gmpls-overlay-04.txt
- Others: RFC 3471, draft-ietf-ccamp-gmpls-routing-09.txt

4.3.2 NORDUnet
The connectivity through the Nordic NRENs (SUNET/NORDUnet) to GEANT is still an issue to be solved. We will request a GE link from our lab to GEANT (and settle for FE or lower if necessary).

4.4 Western Europe test bed - RedIRIS
Right now, the interconnection between TID and RedIRIS is based on an STM-1 155 Mbps ATM link. Specifically, to interconnect the MUPPET test bed in TID and RedIRIS, a 34 Mbps ATM permanent virtual circuit (PVC) is established between the corresponding nodes in these networks. This circuit is available to support traffic from/to the test bed in TID and other remote test beds interconnected via GEANT, specifically TILAB, according to the proposed test bed interconnections included in the D3.1 draft.

The physical and logical interconnection schemes between TID and RedIRIS are shown in Figure 4.4-1 and Figure 4.4-2. They are based on the following elements:
TID test bed:
• An ATM switch (N1) supporting an STM-1 155 Mbps ATM interface towards the distant node R1 in RedIRIS, using a single-mode optical fibre. This switch connects to an internal router (N2), also with an STM-1 155 Mbps ATM interface, forwarding layer-2 ATM cells directly. This link uses multimode fibre.
• A router (N2) supporting an STM-1 155 Mbps ATM interface towards the ATM switch N1 described above. This router also has a Fast Ethernet interface (100 Mbps), which allows the connection to the rest of the test bed carrying Ethernet traffic, so this router (N2) implements the Ethernet-ATM adaptation according to RFC 1483 (an illustrative sketch of this encapsulation follows the element lists). In this router (N2) a 34 Mbps ATM PVC is defined, connecting the N2 node with the distant R2 node in RedIRIS.

RedIRIS network:
• An ATM switch (R1) supporting an STM-1 155 Mbps ATM interface towards the distant node N1 in the TID test bed, using a single-mode optical fibre. This switch connects to an internal router (R2), also with an STM-1 155 Mbps ATM interface, forwarding layer-2 ATM cells directly. This link uses multimode fibre.



• A router (R2) supporting an STM-1 155 Mbps ATM interface towards the ATM switch R1 described above. In this router (R2) a 34 Mbps ATM PVC is defined, connecting the R2 node with the distant N2 node in the TID test bed.
• The internal connection of the R2 node with the rest of the NREN network and the connection with GEANT are to be defined by RedIRIS; the final objective is to forward the ATM traffic (corresponding to the 34 Mbps PVC originated in the TID test bed) to MUPPET partners in a transparent way.
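
The Ethernet-ATM adaptation performed by routers N2 and R2 follows RFC 1483 (multiprotocol encapsulation over AAL5). As a rough illustration of what this adaptation adds in the bridged-Ethernet case, the sketch below builds the LLC/SNAP header that precedes the MAC frame inside an AAL5 PDU; the frame content is dummy data, and real implementations additionally handle AAL5 trailer padding and cell segmentation.

```python
def rfc1483_bridged_ethernet(mac_frame, keep_fcs=False):
    """Prepend the RFC 1483 LLC/SNAP header used for bridged Ethernet/802.3
    PDUs over AAL5 (PID 0x00-01 when the original FCS is preserved, 0x00-07
    when it is not), followed by the two mandatory pad octets."""
    llc = bytes([0xAA, 0xAA, 0x03])            # LLC header
    oui = bytes([0x00, 0x80, 0xC2])            # IEEE 802.1 organisation code
    pid = bytes([0x00, 0x01 if keep_fcs else 0x07])
    pad = bytes([0x00, 0x00])                  # PAD field before the MAC frame
    return llc + oui + pid + pad + mac_frame


# dummy 64-byte Ethernet frame with the FCS already stripped
payload = rfc1483_bridged_ethernet(b"\x00" * 64)
print(len(payload), payload[:10].hex())
```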

In the near future the ATM connection will probably be changed to a connection based on VLANs over Gigabit Ethernet (GBE) links, but this point is still under discussion with the RedIRIS staff. In a future stage it is hoped that this GBE link between the TID test bed and RedIRIS can be provided, allowing the Ethernet frames to go directly across the link within VLANs and increasing the efficiency of IP traffic transport between the TID test bed and the other MUPPET partners via RedIRIS-GEANT.
This new scenario has to be discussed with RedIRIS in order to detail the TID test bed interconnection and to specify and define the corresponding tests that could be implemented.

Figure 4.4-1: RedIRIS-TID MUPPET ATM interconnection (physical scheme)


Figure 4.4-2: RedIRIS-TID MUPPET ATM interconnection (logical scheme)

4.4.1 Western Europe test bed access points
The interconnection between the TID test bed (Western Europe test bed) and the Spanish NREN RedIRIS will be based on the STM-1 155 Mbps ATM link now installed and configured between them, so the traffic will be transported over a defined 34 Mbps PVC. At present, the STM-1 155 Mbps ATM interfaces at the access points of TID and RedIRIS are installed and running properly, so it is only necessary to configure the PVC between the corresponding nodes in both networks.

4.4.2 RedIRIS
RedIRIS is the Spanish academic and research network. It connects more than 260 centres, mainly universities and research centres. RedIRIS now has a 10 Gbps POS link to the worldwide research community (link to GEANT, access to Abilene, ESnet, CANARIE and all the European and American research networks), with a 2.5 Gbps core inside Spain. It is a meshed network with a PoP in each of the Autonomous Regions of Spain, 18 in total. The access links range from 2 Mbps to Gigabit Ethernet.


Figure 4.4-3: RedIRIS Network

In order to promote global education and research connectivity, RedIRIS participates in two important projects funded by the European Commission: ALICE, whose main objective is to develop a network infrastructure between the research networks of South America and to connect it to GEANT, and EUMEDCONNECT, with the same purpose but applied to the countries on the southern side of the Mediterranean Sea. This will create the biggest research and academic community in the world, including GEANT, South America, North Africa and, through close ties, Internet2.
Towards the commercial world, RedIRIS has peerings with the Spanish commercial providers at ESPANIX, the Spanish Internet Exchange Point (some of them IPv4 and IPv6), and direct peerings with Telia and Global Crossing for the commodity Internet.


Figure 4.4-4: RedIRIS connections to other networks

4.4.2.1 Capabilities available for MUPPET
RedIRIS provides connectivity to all the centres with native IPv4 and IPv6 access, over a 2.5 Gbps core. Gigabit Ethernet links can be provided to universities and research centres. It offers advanced services such as multicast, Quality of Service and VPNs inside the network. LSPs using CCC have been tested successfully in projects such as ATRIUM, through GEANT, and in other internal projects; L2 VPNs are also used inside the network. With all these elements, the network can provide L2 connectivity when necessary.

4.4.2.2 Description of the POP to be used by TID test bed
The PoP is located in Madrid, in RedIRIS premises. It is the Madrid RedIRIS PoP, with the main advantage that it is located in the same premises as the RedIRIS core PoP of the network, which hosts the links to GEANT. The Madrid PoP therefore has direct 2.5 Gbps and Gigabit Ethernet links to the routers in charge of the GEANT links. Between the Madrid PoP and the TID test bed there is a 155 Mbps link, with a 34 Mbps PVC to be used for now. There are no Gigabit Ethernet facilities at the moment, but there will be more interconnection options in the near future.


Figure 4.4-5: The Madrid POP


4.5 Eastern Europe test bed - PIONIER

Figure 4.5-1: Test bed interconnections (Test PCs in the EE test bed and in another test bed, connected via OXCs and GEANT2 border devices across POZMAN, PIONIER and GEANT2; UNI and NNI (IPSec) control interfaces; data path carried as Eth, Eth/SDH, Eth/MPLS or Eth/SDH/MPLS)

The test bed interconnection is shown in the figure above. The current connection plan is based on the following assumptions:
• Data (traffic) sources and receivers (Test PCs) in all test beds will be connected using common Ethernet (FE, GE) technology
• All data transfer will be performed using IP over Ethernet technology
• No routing shall be involved between Test PCs (LAN-like connection)

EE Test bed - POZMAN
This interconnection will be built using a dedicated Ethernet link or an Ethernet VLAN with bandwidth guarantee.
POZMAN
The connection within the POZMAN network will be set up using a dedicated Ethernet link or a VLAN with bandwidth guarantee.



POZMAN-PIONIER
This interconnection will be set up using a dedicated Ethernet link or a VLAN with bandwidth guarantee.
PIONIER
The connection within the PIONIER network will be set up using a VLAN on an over-provisioned network or a VLAN with bandwidth guarantee.
PIONIER - GEANT
This interconnection setup will depend on the available GEANT interfaces and the GEANT transport infrastructure. Currently we are able to connect to GEANT via a VLAN on an over-provisioned link between a PIONIER 10GE switch and a 10GE interface of the GEANT Juniper M160 router. There is also the possibility to implement a QoS policy for that interface.
GEANT - other test beds
The direct requirement for the interconnection to other test bed clients is to have these clients connected via an Ethernet interface and to use Ethernet transport (i.e. Ethernet over SDH encapsulation with transition to native Ethernet at the GEANT border device).

4.6 GEANT
4.6.1 The GÉANT network
The GÉANT project is a collaboration between 26 National Research and Education Networks across Europe, the European Commission, and DANTE. DANTE is the project's coordinating partner. The project began in November 2000 and was originally due to finish in October 2004; however, because of the project's success, and in order to permit a smooth transition to the next generation of the network (GÉANT-2), the project has now been extended until 30 June 2005. The GN2 project started on 1 September 2004.

GÉANT's principal purpose has been to develop the GÉANT network, a multi-gigabit pan-European data communications network reserved specifically for research and education use. The project also covers a number of other activities relating to research networking. These include network testing, development of new technologies and support for some research projects with specific networking requirements.
The GÉANT network connects to each NREN via an access link. These access links are also being continually upgraded, helping to ensure that bottlenecks between the NRENs and the European backbone do not reduce the benefit of the network.
Each participating NREN has one access link to the GÉANT network at speeds of up to 10 Gb/s SDH. GÉANT offers IPv4 and IPv6 connections and various services.
Up-to-date information about the DANTE network topology can be found in [19].


Figure 4.6-1: The GÉANT Network

4.6.2 Capability of transport network functions available for MUPPET
GÉANT offers two services of interest to MUPPET:
IPv4 packet Quality of Service, in the form of two classes: Premium IP and Less than Best Effort. Premium IP is implemented according to the DiffServ Expedited Forwarding per-hop behaviour and guarantees the equivalent of a leased line;
Layer 2 Virtual Private Networks, in the form of MPLS label switched paths.
These two services can be combined with the equivalent services offered by the NRENs hosting the links to the main MUPPET test bed sites, to offer test bed to test bed interconnections with QoS guarantees at layer 3 or layer 2.
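
Premium IP relies on the DiffServ Expedited Forwarding per-hop behaviour, i.e. packets carrying the EF code point (DSCP 46) are served with priority at each hop. The small sketch below shows only the marking arithmetic (the DSCP sits in the upper six bits of the former IPv4 ToS byte); the function names are illustrative and are not part of any GÉANT tooling.

```python
EF_DSCP = 46  # Expedited Forwarding code point (RFC 3246)


def dscp_to_tos_byte(dscp):
    """The DSCP occupies the six most significant bits of the old IPv4 ToS byte."""
    return (dscp & 0x3F) << 2


def is_premium(tos_byte):
    """Classify a packet as Premium IP if it carries the EF code point."""
    return (tos_byte >> 2) == EF_DSCP


tos = dscp_to_tos_byte(EF_DSCP)
print(hex(tos), is_premium(tos))  # 0xb8 True
```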

The high speed of the GÉANT network allows these links to grow up to gigabit speed.
All the NRENs hosting the links to the MUPPET test beds, i.e. RedIRIS in Spain, GARR in Italy, DFN in Germany, PSNC in Poland and NORDUnet for the Nordic countries, are connected to GÉANT at 10 Gbit/s, ensuring that testing at high speed can be planned in the near future.
The successor of GÉANT will offer even greater link speeds and richer services, in particular in terms of provisioning time and level of automation.


5 Conclusion

This deliverable "Test Bed Overview" represents a first step towards the integration of the five local test beds into a European-scale network, by providing
• A description of the overall design, the current status, the functions provided to the MUPPET project and the interfaces/functions for the inter-test bed connections of the Southern, Central, Northern, Western and Eastern European local test beds, and
• A first description of the potential test bed interconnections (and the application sites related to these), including the network functions provided by the NRENs and GEANT.
This deliverable therefore builds the basis for the next activities, for the architectural investigations and for the practical implementation of the test bed interconnections and integration.
