The Open Network Lab Ken Wong Applied Research Laboratory Computer Science and Engineering Department http://www.arl.wustl.edu/~kenw kenw@arl.wustl.edu http://www.onl.wustl.edu (ONL) National Science Foundation ANI-023826, CNS-0551651, REL-0632580

ONL Lab Overview
- Gigabit routers: easily configured through the Remote Lab Interface; embedded processors for adding new features
- PCs serve as hosts: half on shared subnets
- Network configuration switch: links routers in virtual topologies; traffic generation
- Tools for configuration and collecting results: monitoring, traffic data capture and playback
- Open source: all hardware and software sources on the web
(Diagram: gigabit routers, Gigabit Ethernet switches, the network configuration switch, and hosts.)

People Who Make it Happen
- Jon Turner: Principal Investigator
- Ken Wong: Admin, web site, distributed scheduler
- Jyoti Parwatikar: RLI, software development
- Charlie Wiseman: Web site, ops, distributed scheduler
- Fred Kuhns: SPC software, FPX hardware
- John DeHart: FPX hardware, system integration

Instructional Objectives
- Deepen understanding of networking
  - Fundamentals: routing, queueing, packet classification, error and congestion control
  - Advanced: packet scheduling, protocols, services (e.g., guaranteed)
- Develop experimental CS skills
  - measurement and data analysis
  - selection of exploration trajectories
- Experience with advanced network technology
  - distributed router with distributed scheduler
  - router plugins
  - network processors (to come)

Sample ONL Session
(Screenshot: RLI windows showing the network configuration, bandwidth usage, routing tables, queue length, and queue parameters.)

ONL Hardware
- Gigabit Router
- Field Programmable Port Extender (FPX)
- Smart Port Card (SPC)

Testbed Organization
(Diagram: you reach the onl server over the Internet via the onl logins; the onlusr account connects through a control network to the control processors (CPs) of routers NSP1-NSP4, and a configuration switch forms the experiment network linking each NSP's GigE ports and hosts.)

Major Software Components
(Diagram: the Remote Lab Interface (RLI) on your machine reaches the onl server through an SSH tunnel; on the server side, the ONL daemon and SSH proxy, host daemons, a switch controller, and an SPC control message handler drive the configuration switch and the NSP routers.)

Getting Started
- visit onl.arl.wustl.edu / www.onl.wustl.edu
- watch the video tutorial
- get an account

After Logging In
- download the Remote Lab Interface software (extra links: getting started, testbed status, reservations)
- install the Java runtime environment
- verify that you can SSH to onl.arl.wustl.edu
- configure SSH tunnels

SSH Tunnel Configuration
- Unix or Unix-like (e.g., cygwin): ssh -L 7070:onlsrv:7070 onl.arl.wustl.edu
  (I use this, defined as an alias, from Linux)
- MS Windows: PuTTY (GUI-based); I use this from an MS XP laptop; see the tutorial for the URL (free software)
- MS Windows: SSH client tool from SSH Corp; follow the instructions on the ONL web pages
Hundreds of users have successfully installed the tunnel, so don't agonize over it. If you have problems, send email to your grader/consultant.

PuTTY SSH Tunnel Configuration
- external host name = onl.arl.wustl.edu
- session name = rli
- local port 7070, remote host onlsrv, remote port 7070

Topology Demo 1/2
- Add Cluster, Add Host
- Generate Default Routes
- the spin handle and Port 0

Configuring a Topology
- Port 0 is used for the Control Processor; the spin handle rotates ports.
- Add hosts and links as needed for all ports; drag icons to improve the visual layout.
- A cluster includes an NSP router, a GE switch, and a fixed set of hosts.
- Actual hardware is NOT allocated until after you commit.

Save Configuration
You do NOT need to be connected to the ONL testbed until you commit!

Resource Reservation
(Screenshot: specify the maximum resource request, make the reservation, and view current reservations.)

Commit and Testbed Status
Committing allocates hardware and configures it; the testbed status display shows each cluster go from "cluster is initializing" to "cluster has been initialized".

Commit Done
- You can SSH to a control-net hostname from onlusr (no password).
- Logical names are bound to actual hardware.
- Dark links indicate the commit is done.

Verifying Host Configuration
- netstat -i: display an interface summary, including the MAC address
- ifconfig eth0: display interface configuration info, including the IPv4 address of the testbed interface

Internal Interface IP Addresses
- Control network: 10.0.0.X
- Data network: 192.168.N.Y, where N = NSP# and Y = 16 x (1 + port#)
  e.g., on NSP1: port 1 = 192.168.1.32, port 2 = 192.168.1.48, port 3 = 192.168.1.64, port 4 = 192.168.1.80, port 5 = 192.168.1.96, port 6 = 192.168.1.112, port 7 = 192.168.1.128 (the diagram also labels the CP with 192.168.1.33 and 192.168.1.34)
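The data-network addressing rule above can be sketched as a tiny helper. The function name `data_addr` is invented for illustration, but the formula Y = 16 x (1 + port#) is from the slide.

```python
# Hypothetical helper illustrating the ONL data-network addressing rule:
# data interfaces are 192.168.N.Y, where N is the NSP number and
# Y = 16 * (1 + port#).

def data_addr(nsp: int, port: int) -> str:
    """Return the data-network IP address for a given NSP and port."""
    return f"192.168.{nsp}.{16 * (1 + port)}"

# Port 0 of NSP1 maps to 192.168.1.16, port 7 to 192.168.1.128.
print(data_addr(1, 0))  # 192.168.1.16
print(data_addr(1, 7))  # 192.168.1.128
```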

Default Route Tables
Select a port and choose Route Table to view its entries, which are CIDR network addresses. RTs are really forwarding tables; the default routes are added for you by Generate Default Routes.

Semantics of a Route Table Entry
Example route table (prefix => output port):
  192.168.1.16/28  => 0
  192.168.1.32/28  => 1
  192.168.1.48/28  => 2
  192.168.1.64/28  => 3
  192.168.1.80/28  => 4
  192.168.1.96/28  => 5
  192.168.2.0/24   => 6
Note that 192.168 = c0.a8 (hex), and the last-byte values are 16: 0001 0000, 32: 0010 0000, 48: 0011 0000, 64: 0100 0000, 80: 0101 0000, 96: 0110 0000. Written out in full:
  0: 192.168.1.16  => c0.a8.01.10 => 1100 0000 1010 1000 0000 0001 0001 0000
  1: 192.168.1.32  => c0.a8.01.20 => 1100 0000 1010 1000 0000 0001 0010 0000
  2: 192.168.1.48  => c0.a8.01.30 => 1100 0000 1010 1000 0000 0001 0011 0000
  3: 192.168.1.64  => c0.a8.01.40 => 1100 0000 1010 1000 0000 0001 0100 0000
  4: 192.168.1.80  => c0.a8.01.50 => 1100 0000 1010 1000 0000 0001 0101 0000
  5: 192.168.1.96  => c0.a8.01.60 => 1100 0000 1010 1000 0000 0001 0110 0000
  6: 192.168.1.112 => c0.a8.01.70 => 1100 0000 1010 1000 0000 0001 0111 0000
  7: 192.168.1.128 => c0.a8.01.80 => 1100 0000 1010 1000 0000 0001 1000 0000
The RT implementation uses a space-efficient variant of a multibit trie.
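The route-table semantics above amount to longest-prefix matching. Here is a minimal sketch using the slide's table; a linear scan stands in for the NSP's multibit-trie implementation, which exists only for space efficiency, not different semantics.

```python
import ipaddress

# Longest-prefix-match sketch over the slide's example route table.
ROUTES = {
    "192.168.1.16/28": 0,
    "192.168.1.32/28": 1,
    "192.168.1.48/28": 2,
    "192.168.1.64/28": 3,
    "192.168.1.80/28": 4,
    "192.168.1.96/28": 5,
    "192.168.2.0/24": 6,
}

def lookup(dst: str):
    """Return the output port for the longest matching prefix, or None."""
    addr = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for prefix, port in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = port, net.prefixlen
    return best

print(lookup("192.168.1.48"))  # 2  (matches 192.168.1.48/28)
print(lookup("192.168.2.99"))  # 6  (falls into 192.168.2.0/24)
```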

Packet Processing Example
(Diagram: the port 2 ingress side and port 3 egress side of the NSP, joined by the ATM switch fabric.)
- Ingress (n1p2 interface): CARL performs RT/EM/GM lookups, a shim is added, and the packet scheduler (PS) serves the 8 VOQs (qids 504-511); each AAL5 frame (shim + header + packet body) is segmented into ATM cells for the fabric.
- Egress (n1p3 interface): cells are reassembled into AAL5 frames, the shim is removed, and PS serves the reserved-flow queues (qids 256-439, 184 queues) and datagram queues (qids 440-503, 64 queues).
- Packet Scheduler (PS): Weighted Deficit Round Robin (WDRR) or Distributed Queueing (DQ) on the ingress side; WDRR on the egress side.
- The shim is a special header that contains internal metadata.
- Queues are store-and-forward.

How Fast Can a Queue Overflow?
Given:
- Q = 250,000-byte queue
- all packets are L = 1,500 bytes long
- average input traffic rate Rin = 500 Mbps
- link rate Rout = 1 Mbps
Fluid model: queue backlog B(t) = Vin(t) - Vout(t), where Vin(t) and Vout(t) are the input and output volumes.
Cases:
- constant (deterministic) input rate
- deterministic, bursty input rate
- stochastic (Poisson) input rate
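For the constant-rate case, the fluid model gives an overflow time of Q / (Rin - Rout), since the backlog grows at the difference of the rates. A quick check with the slide's numbers:

```python
# Fluid-model sketch: with constant (deterministic) input, the backlog
# grows at Rin - Rout, so the queue fills in Q / (Rin - Rout).
# Numbers are taken from the slide.

Q_BYTES = 250_000   # queue capacity, bytes
L_BYTES = 1_500     # fixed packet size, bytes
R_IN = 500e6        # average input rate, bits/sec
R_OUT = 1e6         # link (drain) rate, bits/sec

growth = (R_IN - R_OUT) / 8          # backlog growth, bytes/sec
t_overflow = Q_BYTES / growth        # seconds until the queue is full
print(f"time to overflow: {t_overflow * 1e3:.2f} ms")  # ~4.01 ms

# In packets: the queue holds Q/L full-size packets.
print(f"queue holds {Q_BYTES // L_BYTES} packets")  # 166
```

So at this input rate the queue overflows in about 4 ms, which is why the real-time monitoring displays matter for catching transient congestion.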

Pause for Topology Demo 2/2
- simple traffic monitoring
- adding links

Add a Monitoring Display
(Screenshot: name the chart and set it to poll every 0.3 sec.)

Ping Chart
(Screenshot: select max y-value = 0.004; the window width is halved.)

Adding a Link
(Screenshot: drag from port 7 to port 6.)

Delete a Route Group
(Screenshot: shift+select the route group, then delete it.)

Add a Route to Loopback
(Screenshot: add a route that matches the n1p2 address.)

Pause for Traffic Demo 1/2
- traffic monitoring
- the iperf traffic generator

Bandwidth Monitoring
(Chart: sender (sndr) and receiver (rcvr) bandwidth.)

Generating Traffic with iperf
- available at http://dast.nlanr.net/projects/Iperf/ and installed on all ONL hosts
Sample uses:
- iperf -s -u
  run as a UDP server on port 5001
- iperf -c server -u -b 20m -t 300
  run as a client sending UDP packets to the server at 20 Mb/s for 300 secs
- iperf -s -w 4m
  run as a TCP server on port 5001 with the max window set to 4 MB
- iperf -c server -w 4m -t 300
  run as a client, sending as fast as possible, with max window 4 MB

iperf Traffic Generator
(Screenshot: start the UDP server (receiver), then start the client (sender) at 200 Mbps for 10 sec; the client report shows the sending rate, and the server report shows the receive rate at the bottleneck.)

Built-in Monitoring Points
It is easy to monitor traffic using real-time displays. Built-in monitoring points include:
- link bandwidth
- port-to-port switch bandwidth
- queue lengths
- packet loss, and much more
The same monitoring interface can display data generated by end hosts (e.g., TCP window size) and by software plugins installed at router ports.

Pause for Traffic Demo 2/2
- link rate (capacity, in Mbps)
- general match filters
- queue size (capacity, in bytes)
- emulating propagation delay (the delay plugin)

Some Bottleneck Experiments
Configuration:
- senders: n1p2, n1p3; receivers: n1p4, n1p5
- bottleneck link 1.7-1.6 (300 Mbps); queue 300: 150,000 bytes
- forward path through 1.7-1.6; return path through 1.6-1.7
Flows (using the iperf traffic generator):
- 8-second staggered starting times (shell script)
- two 200 Mbps UDP flows through the bottleneck; packet scheduling: equal rate vs. 1:3 rate
- two TCP flows through the bottleneck, with and without 50 msec ACK delay
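The staggered-start shell script mentioned above can be sketched in Python. The host names and the 8-second stagger come from the slide; the helper name `udp_sender_cmds` is invented here, and the sketch only builds the command lines rather than running iperf (on ONL you would launch them on the sender hosts).

```python
# Hypothetical sketch of a staggered-start traffic script: build the
# iperf command line for each sender, offset by 8 seconds per flow.

STAGGER_SEC = 8

def udp_sender_cmds(senders, receiver, rate="200m", duration=20):
    """Return (start_time_sec, command) pairs for staggered UDP flows."""
    cmds = []
    for i, host in enumerate(senders):
        cmd = f"iperf -c {receiver} -u -b {rate} -t {duration}"
        cmds.append((i * STAGGER_SEC, f"{host}: {cmd}"))
    return cmds

for t, cmd in udp_sender_cmds(["n1p2", "n1p3"], "n1p4"):
    print(f"t={t:2d}s  {cmd}")
```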

Configuring for the UDP Experiment
At egress port 7 (the bottleneck):
- install a GM filter to direct traffic to queue 300
- configure queue 300 to be 150,000 bytes
- monitor the VOQ VCI bandwidth, queue 300 length, and packet drops
(Diagram: the NSP ATM switch with 8 VOQs per ingress port; to monitor the bandwidth to output port x, monitor the VCI traffic corresponding to VOQ x.)

Monitoring Points
(Diagram: monitoring points labeled a-e along the port 2 ingress datapath (RT/EM/GM lookup, PS, VOQs) and the egress datapath (reserved-flow queues, qids 256-439, and datagram queues).)

Queue Table
In the PS Queue Table, a queue's credit is the number of bytes it may send per round of WDRR.
(Diagram: the port 7 ingress and egress datapaths, showing the VOQs (qids 504-511), reserved-flow queues (qids 256-439), and datagram queues.)
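The credit field can be illustrated with a minimal deficit-round-robin sketch. The queue contents and credit values below are illustrative only, not taken from the slide; the point is that a queue with 3x the credit drains roughly 3x the bytes per round.

```python
from collections import deque

# Minimal Weighted Deficit Round Robin (WDRR) sketch. Each queue's
# "credit" is the number of bytes added to its deficit per round.
def wdrr(queues, credits, rounds):
    """Serve queues for a number of rounds; return packets in send order."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            if not q:
                deficits[i] = 0          # idle queues do not bank credit
                continue
            deficits[i] += credits[i]
            while q and q[0] <= deficits[i]:
                pkt = q.popleft()        # send the head-of-line packet
                deficits[i] -= pkt
                sent.append((i, pkt))
    return sent

# Queue 0 has 3x the credit of queue 1, so it sends 3 packets per round
# to queue 1's one.
q0 = deque([1500] * 6)
q1 = deque([1500] * 6)
print(wdrr([q0, q1], credits=[4500, 1500], rounds=2))
```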

Adding a General Match Filter
- the GM filter matches all packets (protocol *, which matches any protocol)
- direct matching traffic to queue 300
- priority 50 (higher priority than the route table's 60)
- queue 300 size: 150,000 bytes
- egress link rate = 300 Mbps

Classification and Route Lookup (CARL)
Three lookup tables:
- route table, for routing datagrams (best prefix match)
- flow table, for reserved flows (exact match)
- filter table, for management (general match on address prefixes, protocol, and ports)
Lookup processing:
- all three tables are checked in parallel
- the highest-priority primary entry and the highest-priority auxiliary entry are returned
- each filter table entry has an assignable priority; all flow entries share one priority, and likewise all routes
The route lookup and flow filters share off-chip SRAM and are limited only by memory size; general filters are done on-chip, with a total of 32.
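CARL's result processing and priority resolution can be sketched as follows. The entries and priority numbers are illustrative; the convention that a lower number means higher priority matches the GM-filter slide, where a filter at priority 50 beats the route table's 60.

```python
# Sketch of CARL result processing: the route, flow, and filter tables
# are checked in parallel; the highest-priority primary entry and the
# highest-priority auxiliary entry win (lower number = higher priority).

def resolve(matches):
    """matches: list of (priority, is_auxiliary, action).
    Returns (best_primary_action, best_auxiliary_action)."""
    primary = [m for m in matches if not m[1]]
    auxiliary = [m for m in matches if m[1]]
    best = lambda ms: min(ms, key=lambda m: m[0])[2] if ms else None
    return best(primary), best(auxiliary)

matches = [
    (60, False, "route: port 6"),       # route-table best-prefix match
    (50, False, "filter: queue 300"),   # GM filter, priority 50 beats 60
    (40, True,  "aux filter: copy to SPC"),
]
prim, aux = resolve(matches)
print(prim)  # filter: queue 300
print(aux)   # aux filter: copy to SPC
```

Because the packet here matches both a primary and an auxiliary entry, a copy of the packet is made, as the Lookup Contents slide describes.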

Lookup Contents
- Route table (best match), ingress only: returns the output port and Queue Identifier (QID); the packet counter is incremented when the entry is returned as the best match for a packet.
- Flow table (exact match), both ingress and egress: returns the output port (for ingress) and the QID (for egress or the SPC); packet and byte counters are updated for all matching packets.
- Filter table (general match), ingress or egress: for the highest-priority primary filter, returns the QID, with the packet counter incremented only if the entry is used; the same holds for the highest-priority auxiliary filter. If a packet matches both primary and auxiliary entries, a copy of the packet is made.

iperf Server and Client Scripts
- A setup script puts $n1p2, etc. into the Linux environment.
- The receiver script runs "iperf -s -u" (a UDP server) on hosts $n1p4 and $n1p5.
- The sender script runs UDP clients (e.g., sending to n1p4) at 200 Mbps for 20 seconds.

Running the Receiver and Sender Scripts
Run the receiver script, then the sender script; the reports show the sending (client) rate and receive (server) rate for the two flows.

VOQ Bandwidth and Q300 Length
Both senders send at 200 Mbps (1.2 to 1.7 and 1.3 to 1.7):
- with only n1p2 traffic, 200 Mbps goes to n1p4
- with both n1p2 and n1p3 traffic, 120 Mbps goes to n1p4 (1.6-1.4) and 180 Mbps to n1p5 (1.6-1.5), and queue 300 is full
- with only n1p3 traffic, 200 Mbps goes to n1p5

Separate Queues for Each Flow
With separate queues, each flow gets 150 Mbps, and both Q300 and Q301 fill.

TCP Scripts
Use a 4 MB receiver window to handle the bandwidth-delay product.

Both TCP Flows Compete Equally

Delaying ACK Packets by 50 msec
- Define a GM filter at egress port 6 that matches all packets and directs traffic to the SPC delay plugin.
- Define a standard delay plugin; the default delay is 50 msec, and the delay can be changed through the message interface.
- The binding between the filter and the plugin is the SPC QID (a value between 8 and 127).

Working Smarter
- Tutorial -> Summary Information -> Password-Free SSH
- traffic generation shell scripts, e.g., ~kenw/bin/{[ut]sndrs-1nsp,[ut]rcvrs-1nsp}
- reference web pages: Tutorial -> Summary Information
- FAQs by category (to come)

Open Network Laboratory Web Site
onl.arl.wustl.edu, www.onl.wustl.edu, onl.wustl.edu
- Look at the tutorial and the video, then register and try it out.
Lab questions? See your grader(s) and instructor.
ONL problems? Verify that the problem persists, then email testbed-ops@onl.arl.wustl.edu with a concise description of the problem and instructions for reproducing it. It helps if you can reduce the experiment to the simplest case.

The End

Queue Table (egress side)
(Diagram: the port 7 datapath showing the VOQs, reserved-flow queues (qids 256-439), and datagram queues.)

TCP: equal share

UDP: 3:1 share

Run for 40 sec instead of 20 sec