Flexible Distributed Testbed for High Performance Network Evaluation


As an example, a photograph of the testbed arrangement at Queen Mary, University of London, is shown in Figure 3.

Figure 3: Photograph of the QMUL Testbed (four node/router desktop PCs, a configuration server and gateway, and a 24-port CISCO Catalyst 2900 Fast Ethernet switch, with access to the Internet and other laboratory facilities).

The corresponding floorplan is presented in Figure 4.

Figure 4: QMUL Testbed Floorplan (client/server multimedia PCs, patch panels, rack-mounted core network routers and LAN switches, the configuration server, and the VDU, keyboard and mouse switch).

A similar testbed is in place at the Universitat de Girona. Although it offers similar functionality, it was conceived at a smaller scale as a low-cost version in order to collaborate with the QMUL laboratory in setting up inter-testbed experiments. Figure 5 shows the basic layout of the Girona testbed, while Figure 6 displays several images of it. The Girona laboratory facilities consist of Sun workstations, PCs, CISCO routers and bridge switches. In the laboratory, the routers and Linux PCs support MPLS and QoS in order to set up different configurations and topologies. The testbed is directly connected by optical fibre links to the Spanish academic network provider "Red Iris", which in turn is connected to GÉANT.

Figure 5: Girona Testbed Layout (configurable Ethernet lines, the configuration virtual LAN, and the serial ports used as switch consoles).

Figure 6: Photographs of the Girona Testbed.

The aim of this testbed is to use it for different types of experiments and simulations, comprising cluster/grid technologies, QoS routing/MPLS, network protection mechanisms, distributed simulations, etc. In the following paragraphs, two different configurations are described as examples of the use of the testbed.
The first one relates to cluster software evaluation and comparison; the second to MPLS routing experiments.

The main objective of the cluster software experiment was the evaluation and comparison of different freely available distributed cluster software packages and of different forms of cluster configuration. Basically, the Linux OS was installed on the cluster PCs along with the different cluster software packages: Message Passing Interface (MPI) [7], Parallel Virtual Machine (PVM) [8], and Multi-computer Operating System for unIX (MOSIX) [9]. Their main characteristics are summarised in Table 1. Different ways of using the Ethernet ports and the switch were also compared (e.g. the use of one, two, three or four ports together with the Linux channel bonding driver, and the emulation of a crossbar using the switch VLANs).

Software | Level | Process communication                  | Collective communication operations | Process assignment       | Dynamic cluster configuration | Support for different architectures
PVM      | App.  | Async.                                 | Yes | Static                   | Yes | Yes
MPI      | App.  | Async.                                 | Yes | Static                   | No  | Yes
MOSIX    | Syst. | Pipes, named pipes, sockets and files  | No  | Dynamic (load balancing) | Yes | No

Table 1: Main Characteristics of the Cluster Software

The experiments carried out included, for instance, the evaluation of the migration time, the evaluation of the communication cost, and the comparison of different network configurations. All experiments were carried out 10 times and the 95% confidence intervals were calculated. For instance, Figure 7 shows the MOSIX migration cost results.

Figure 7: MOSIX migration cost (ms) as a function of the amount of data transferred (MBytes), using 1 to 4 NICs per node.

The main objective of the MPLS routing experiment was to test an open-source Linux-based set of routing software, including an MPLS kernel patch still under development. The software included the Zebra/Quagga [10] routing modules and the mpls-linux [11] packages. Several network configurations and tests were tried, for instance a traffic engineering experiment using two different Label Switched Paths (LSPs) between the same origin-destination node pair. The nodes were configured to send one type of traffic through one LSP and another type of traffic through the other. Experiments were also carried out on fault recovery mechanisms at the MPLS level, for which a simple client/server utility that monitors the physical links in order to detect a failure was developed.
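The paper does not detail the link-monitoring utility, so the following is only a minimal sketch of the failure-detection logic it describes: clients on each physical link send periodic heartbeats, and the monitor declares a link failed when no heartbeat arrives within a timeout. The interface names and the timeout value are illustrative.

```python
import time

# Sketch (not the paper's actual utility) of heartbeat-based link
# failure detection. A real deployment would receive the heartbeats
# over sockets; here they are fed in directly for clarity.

class LinkMonitor:
    def __init__(self, links, timeout=1.0):
        self.timeout = timeout
        self.last_seen = {link: time.monotonic() for link in links}

    def heartbeat(self, link):
        """Record a heartbeat received from the client on 'link'."""
        self.last_seen[link] = time.monotonic()

    def failed_links(self):
        """Return the links whose heartbeats have timed out; on a real
        node this would trigger switchover to the backup LSP."""
        now = time.monotonic()
        return [l for l, t in self.last_seen.items()
                if now - t > self.timeout]

monitor = LinkMonitor(["eth1", "eth2"], timeout=0.05)
monitor.heartbeat("eth1")
time.sleep(0.1)                # eth2's heartbeats stop arriving
monitor.heartbeat("eth1")
print(monitor.failed_links())  # -> ['eth2']
```

On detecting a failure, the node would redirect traffic from the working LSP to the backup LSP, as in the restoration experiment of Figure 8.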
See Figure 8 for the network configuration of this particular experiment.

Figure 8: MPLS Fault Restoration Experiment, showing the four hosts, the VLANs and IP subnets interconnecting them, the labels assigned along the working and backup LSPs, and the injected link failure.

E. Inter-Testbed Tunnelling

Interconnection between the testbed islands can be readily achieved via the public Internet. However, for a number of experiments, particularly those involving MPLS, it is desirable to provide the interconnection at layer 2, as seen by the various testbed router entities. To achieve this, layer-2 tunnelling within IP datagrams is employed to form a virtual private wire service. Software at the ends of the tunnel provides the encapsulation and decapsulation functions, as illustrated in Figure 9.

Figure 9: Testbed Tunnelling Interface, showing MPLS traffic between Autonomous System 1 and Autonomous System 2 carried inside public IP datagrams (optionally IPSec encoded) across the public infrastructure between the tunnel termination points.

As far as the Autonomous System Border Routers (ASBRs) in AS1 and AS2 are concerned, they have a direct wire link between them. This permits actions such as inter-AS Label Switched Path (LSP) splicing to be carried out. Furthermore, if security is a consideration, the tunnel payload can be encrypted using IPSec.

This is a particularly appealing approach because it permits the easy interconnection of large numbers of islands and it is also cheap, as no leased lines or virtual connections need to be agreed with the public Internet operators. However, the "link" characteristics are limited to the "best efforts" service offered by the Internet and by the performance of the tunnel termination software.

III. EXAMPLE EVALUATION SCENARIOS

To illustrate the general utility of this form of testbed, a couple of more advanced experimental setups are now considered, simply by way of example.

A. Inter-Provider MPLS Research

With the growing uptake of Multi-Protocol Label Switching (MPLS), research interest has recently been devoted to the use of MPLS for supporting Virtual Private Networks (VPNs). A taxonomy of the various VPN approaches is given in RFC 4026 [12]. A further goal is to allow the MPLS infrastructure to be dynamically configured in response to particular instantaneous VPN community requirements. Within a domain the necessary signalling support is available via RSVP-TE [13]. However, a key challenge is to enable dynamic MPLS VPNs to be supported between provider domains without divulging sensitive operator information. The testbed is an effective means of replicating this situation and of exploring the viability of various inter-domain signalling approaches.

An example solution is considered. Assume a user "A" has created a VPN including the special resource(s). This was achieved by the user contacting a mediating agent that we refer to as the Dynamic VPN Manager (DVM). The DVM then liaised with the connection management software at the ingress LSR point(s) to establish LSPs to and from the resources within the AS domain. The path taken by the LSPs is not known to the DVM; however, it has the ability to specify connection QoS requirements. This allows layer-2 resilience path switching to be carried out transparently to the DVM.

At some later time a user "B" wishes to join the VPN group.
It starts this process by contacting its local DVM. Due to inter-domain DVM advertisements, B's local DVM is aware that the target VPN exists and that it is managed by the DVM in AS1. It uses inter-domain Network-Network Interface (NNI) signalling to negotiate the joining operation on behalf of B. If this request is permitted, the DVM in AS2 uses local connection management to establish LSP(s) from B to the relevant AS border router. The DVM in AS1 also sets up LSP(s) from its local VPN group member users and resources to its own AS border router. In both cases the TSpec can be used to ensure that the QoS constraints are observed. So far, two incomplete MPLS tunnels have been created, each terminating at the local AS border router. The final stage is for the border routers to exchange labels to enable the splicing together of the tunnels. This can be done by piggy-backing MPLS labels on BGP routing messages [14], although other means are also possible. The crucial point is that the operator of AS2 has no knowledge of the LSP tunnel path taken in AS1. The complete LSP is not created in a true end-to-end fashion; rather, it is formed from the concatenation of separate tunnels. The RSVP-TE signalling messages need not contain the complete source-routed path, only that portion of the path associated with a given segment.
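The splicing step at the border router can be illustrated with a toy Label Information Base (LIB). This is only a sketch: the data structure, interface names and label values below are illustrative, not taken from the testbed configuration.

```python
# Toy Label Information Base (LIB) for the border router ASBR1.
# Splicing installs a label-swap entry so that traffic arriving on the
# intra-AS segment (i) is forwarded into segment (ii) towards ASBR2.

lib_asbr1 = {}  # (in_interface, in_label) -> (out_interface, out_label)

def splice(in_if, in_label, out_if, out_label):
    """Install the swap entry that concatenates two LSP segments."""
    lib_asbr1[(in_if, in_label)] = (out_if, out_label)

def forward(in_if, in_label):
    """Look up the swap entry for an arriving labelled packet."""
    return lib_asbr1[(in_if, in_label)]

# ASBR2 advertises label 3005 for segment (ii), e.g. piggy-backed on a
# BGP message; ASBR1 splices it onto segment (i), with local label 1015.
splice(in_if="eth1", in_label=1015, out_if="eth4", out_label=3005)
print(forward("eth1", 1015))  # -> ('eth4', 3005)
```

The point of the sketch is that ASBR1 needs only the advertised label for the next segment, never the full path inside the neighbouring AS.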
The complete process is illustrated in Figure 10.

Figure 10: Inter-Domain LSP Creation, in the following steps:
Step 1: DVM1 tells the local ingress LER (PE1) to form an LSP segment (i) from A to ASBR1.
Step 2: DVM1 agrees with the adjacent DVM2 to form an inter-domain LSP.
Step 2A: DVM2 tells its local ASBR2 to form an LSP segment (iii) to PE2 in preparation.
Step 3: DVM1 tells ASBR1 to expect a label for the new LSP from ASBR2.
Step 3A: DVM2 tells its local ASBR2 to send a label for the new LSP segment to ASBR1.
Step 4: ASBR2 gives a label to ASBR1 to permit a label-swapping entry in the LIB at ASBR1, splicing the LSP segments (i) and (ii).
Step 4A: ASBR2 enters the label-swapping details in its LIB to splice the LSP segments (ii) and (iii).

The testbeds in Girona and QMUL can be configured to model the network components of AS1 and AS2. Using the tunnelling mechanism outlined in Section II.E it is possible to build a complete representation of this scenario and to explore the various interactions. This is currently allowing the inter-DVM handshaking protocols to be refined, and the interaction between the network-specific connection management and the more generic VPN management functions to be examined. Furthermore, logical partitioning of the testbed resources allows this experiment to be carried out concurrently with unrelated ones with no disruption.

B. Remote Resource Management

A key aim of grid computing is to enable a user to solve problems on a distributed platform in a reliable and confidential (secure) manner within a specified time. User applications embrace both the commercial and academic communities. Typical academic applications may involve climate change calculations; these are processor intensive but not time critical. Conversely, commercial applications are time sensitive and must have guaranteed confidentiality. An example might be the evaluation of airflows over a new commercial airframe. Clearly, the data supplied and the results obtained would be strictly confidential. The operator and the resource infrastructure must have mechanisms in place to guarantee the isolation of this data from anyone outside, possibly including the operators themselves.

Distributed and parallel computations, such as most scientific ones, need to locate and access large amounts of data and to identify suitable computing platforms on which to process this information. Most often, applications need to scale dynamically across distributed computing platforms such as clusters and grids. Therefore, it is also necessary to consider the resource management function itself. Techniques such as the over-booking of finite resources and the pre-emptive scheduling of tasks of differing priorities improve efficiency and flexibility. Both of these concepts were considered in the EU SHUFFLE project [15], but not applied to an MPLS environment.

The testbed is now being used to extend and evaluate these concepts in relation to ubiquitous computing. Within this framework, user-based resource agents liaise with separate resource management islands that may, or may not, be owned by a single operator. As such, the testbed can readily be configured to act as a collection of client agents and resource management entities scattered across AS domains. Within this context, a mechanism for resource discovery, booking, data transfer and results assimilation is being developed.
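The pre-emptive scheduling of tasks of differing priorities mentioned above can be sketched as follows. This is not the SHUFFLE implementation: the pool abstraction, capacities, priorities and task names are all illustrative.

```python
import heapq

# Sketch of priority-based pre-emptive admission to a finite resource
# pool: when the pool is full, a new high-priority task pre-empts the
# lowest-priority running task.

class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity
        self.running = []  # min-heap of (priority, name); lowest on top

    def admit(self, name, priority):
        """Admit a task, pre-empting a lower-priority one if the pool
        is full. Returns the name of the task that lost out (the
        pre-empted task or the rejected newcomer), or None."""
        if len(self.running) < self.capacity:
            heapq.heappush(self.running, (priority, name))
            return None
        if self.running[0][0] < priority:
            _, preempted = heapq.heapreplace(self.running, (priority, name))
            return preempted
        return name

pool = ResourcePool(capacity=2)
pool.admit("climate-model", priority=1)   # academic: not time critical
pool.admit("batch-render", priority=2)
print(pool.admit("airframe-cfd", priority=5))  # -> climate-model
```

Over-booking fits the same shape: admission would compare the sum of booked demands against a threshold above the physical capacity, relying on pre-emption to resolve contention when demand materialises.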
In addition, a number of RM modules associated with a given Resource Manager (RM) perform distributed resource management tasks. The resources involved are classified into categories such as:

• Computational Resources (CR), including processors, caches and main memory. There are many different ways to quantify and monitor the use of this kind of resource; these have to be studied and one selected, or even new ones proposed.

• Data Storage (DS) resources, including any type of storage device (hard disk, DVD, etc.). The main measure is the capacity, but the characteristics of the device also have to be taken into account when making resource allocation decisions.

• Bandwidth (BW), referring to the communications capacity required locally between the RM modules. This is separate from the communication resources between the clients and the resource clusters, which are handled by the operator's connection management / traffic engineering system.

Usually an application will ask the RM for multiple types of resource, via the DVM. A typical situation that will be considered is when the user does not know in advance (or at least not exactly) the required resources. In this case the RM and the RM modules will provide mechanisms for monitoring and accounting for the resources used (with the possibility of predicting future needs), and mechanisms to limit the amount of allocated resources.

The RM is in charge of the resource management; the DVM's role is simply to request those resources on behalf of the VPN users. The DVM usually asks for specific functions of the RM (booking or releasing resources, etc.). However, in several cases the RM may also take the initiative and notify the DVM of any unexpected change or problem in the monitored resources (e.g. a hard disk failure). The RM module will also deal with the prioritised scheduling of tasks, security and auditing.

The RM module also has to interact with specific local resources, i.e. with the operating systems running on the computers where the resources are located. This could be performed directly, using any available services or functionalities offered by the operating systems and other management software, but the possibility of placing an RM agent (proxy module) on the computers where the resources reside is also considered, in order to give better functionality, control and performance. The RM architecture is therefore decentralised to ensure robustness, scalability and efficiency. Each computational platform under RM control can be equipped with an RM agent, and those agents organise themselves into various virtual network topologies to sense the underlying physical environment and trigger application reconfiguration accordingly.

Decision components embodied in each agent evaluate the surrounding environment and decide, based on a resource-sensitive model (RSM), how to balance the resource consumption of an application's entities in the physical layer. The RSM provides a normalised measure of the improvement in resource availability an entity would gain by migrating between nodes, and uses profiled information about an application's entities to decide which ones are the most beneficial to migrate. We are therefore considering network-sensitive virtual network topologies that adjust themselves according to the underlying network topologies and conditions. Indeed, it may be possible to extend the approach to federations of RMs, each sharing its RM module resources in a cooperative manner, although this extension would have significant commercial hurdles to overcome. In this scenario, new management techniques for high performance networks are needed. The presented testbed is able to carry out complex experimentation in distributed computation in support of grid computing or agile computing [16].

IV. CONCLUSIONS AND FURTHER WORK

This paper has provided a description of a generic and yet high performance testbed arrangement, based on off-the-shelf technology, that can nevertheless be used to construct and evaluate complex networking scenarios. The use of a centralised server and separate control and data planes permits the rapid reconfiguration of the test environment whilst ensuring that concurrent experiments do not interfere with each other.

Further work is currently underway to consider additional experimental scenarios associated with "agile computing". In addition, the presence of Internet-accessible KVM switches is now being considered for "hands-on" remote experimentation. At the very least this may provide a valuable vehicle in support of distance learning activities. However, one aspect that requires further study is automating scheduled access to the testbed resources.
Supporting this would allow the testbeds to become an open resource where various researchers can use the available facilities by simply performing an online booking process.

REFERENCES

[1] K. Shimano et al., "Demonstrations of Layers Interworking between IP and Optical Networks on the IST-LION TestBed", Proc. Optoelectronics and Communications Conference (OECC2002), 2002.
[2] PIONIER2001 - http://mpls.man.poznan.pl/index.html
[3] A testbed of Terabit IP routers running MPLS over DWDM (ATRIUM) - http://www.alcatel.be/atrium
[4] Acreo National Broadband Testbed - http://www.acreo.se/templates/Page____271.aspx
[5] ZEBRA: http://www.zebra.org/
[6] XORP: http://www.xorp.org/
[7] MPI: http://www.mpi-forum.org/ and http://www-unix.mcs.anl.gov/mpi/mpich/
[8] PVM: http://www.csm.ornl.gov/pvm/pvm_home.html and http://www.netlib.org/pvm3/
[9] MOSIX: http://www.mosix.org and http://openmosix.sourceforge.net
[10] Quagga: http://www.quagga.net/ and http://www.zebra.org/
[11] mpls-linux: http://mpls-linux.sourceforge.net/ and http://perso.enst.fr/~casellas/mpls-linux/
[12] L. Andersson, T. Madsen, "Provider Provisioned Virtual Private Network (VPN) Terminology", IETF Request for Comments: 4026, March 2005.
[13] D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels", IETF Request for Comments: 3209, December 2001.
[14] Y. Rekhter, E. Rosen, "Carrying Label Information in BGP-4", IETF Request for Comments: 3107, May 2001.
[15] IST-1999-11014 SHUFFLE - http://www.elec.qmul.ac.uk/research/projects/shuffle.html
[16] N. Suri et al., "Agile Computing: Bridging the Gap between Grid Computing and Ad-hoc Peer-to-Peer Resource Sharing", 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID'03), pp. 618-625, 2003.
