An Integrated Experimental Environment for Distributed Systems and Networks
B. White, J. Lepreau, L. Stoller, R. Ricci, S. Guruprasad, M. Newbold, M. Hibler, C. Barb, A. Joglekar
Presented by Sunjun Kim and Jonathan di Costanzo, 2009/04/13



Outline
 Motivation
 Netbed structure
 Validation and testing
 Netbed contribution
 Conclusion

Motivation

 Researchers need a platform on which they can develop, debug, and evaluate their systems
 A single lab is not enough: resources are scarce
 More computers are needed
 Scalability in terms of distance and number of nodes cannot be reached
 Developing large-scale experiments takes a huge amount of time

 Simulation (ns): controlled, repeatable environment, but loses accuracy due to abstraction
 Live networks (PlanetLab): achieves realism, but experiments are not easy to repeat
 Emulation (Dummynet, nse): controlled packet loss and delay, but manual configuration is tedious

 Derives from “Emulab Classic”
 A universally available, time- and space-shared network emulator
 Automatic configuration from an ns script
 Adds virtual topologies for network experimentation
 Integrates simulation, emulation, and live-network experimentation on wide-area nodes in a single framework

 Accuracy
 Provides an artifact-free environment
 Universality
 Anyone can use anything the way they want
 Conservative resource-allocation policy
 No multiplexing (no virtual machines): the resources of one node can be fully utilized

 Local-area resources
 Distributed resources
 Simulated resources
 Emulated resources
 WAN emulator (already integrated)
 PlanetLab
 ModelNet (integration still in progress)

Netbed structure

Resource life cycle

 3 clusters: 168 PCs in Utah, 48 in Kentucky, and 40 in Georgia
 Each node can be used as an edge node, router, traffic-shaping node, or traffic generator
 A machine is used exclusively during an experiment
 A default OS is provided but entirely replaceable


 Also called wide-area resources
 Nodes at approximately 30 sites
 Provide the characteristics of a live network
 Very few nodes, shared among many users
 FreeBSD jail mechanism (a kind of virtual machine)
 Non-root access


 Based on nse (ns emulation)
 Enables interaction with real traffic
 Provides scalability beyond physical resources: many simulated nodes can be multiplexed on one physical node

 VLANs emulate wide-area links within a local area
 Dummynet emulates queueing and bandwidth limitations, introducing delays and packet loss between physical nodes
 Shaping nodes act as Ethernet bridges, transparent to experimental traffic
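
As a rough illustration of what a Dummynet-style shaping node does to each packet, here is a minimal Python sketch (class and parameter names are made up; real Dummynet runs inside the FreeBSD kernel):

```python
import random

class EmulatedLink:
    """Sketch of a shaped link: bandwidth cap, fixed propagation
    delay, and random packet loss, in the style of Dummynet."""

    def __init__(self, bandwidth_bps, delay_s, loss_rate, seed=None):
        self.bandwidth_bps = bandwidth_bps
        self.delay_s = delay_s
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)
        self.link_free_at = 0.0  # time the link finishes its current packet

    def send(self, now, packet_bytes):
        """Return the packet's arrival time, or None if it is dropped."""
        if self.rng.random() < self.loss_rate:
            return None                              # emulated loss
        start = max(now, self.link_free_at)          # queue behind earlier packets
        tx_time = packet_bytes * 8 / self.bandwidth_bps
        self.link_free_at = start + tx_time          # serialization delay
        return self.link_free_at + self.delay_s      # plus propagation delay

# A 1500-byte packet on the 1.5 Mbps / 20 ms link from the ns example:
# 8 ms of serialization plus 20 ms of propagation.
link = EmulatedLink(1_500_000, 0.020, 0.0)
arrival = link.send(0.0, 1500)
```

A second packet sent at the same instant queues behind the first, which is exactly the behavior the shaping node must preserve transparently.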

Resource life cycle


$ns duplex-link $A $B 1.5Mbps 20ms
Life-cycle stages: Specification → Parsing → Global Resource Allocation → Node Self-Configuration → Experiment Control → Swap Out / Swap In

 Experiment creation
 A project leader proposes a project on the web
 Netbed staff accept or reject the project
 All experiments are accessible from the web
 Experiment management
 Log on to the allocated nodes or to the usershost (fileserver)
 The fileserver serves the OS images and the home and project directories to the other nodes


 Experimenters write ns scripts in Tcl
 They can use as many functions and loops as they want
 Netbed defines a small set of ns extensions
 Specific hardware can be chosen
 So can simulation, emulation, or real implementation
 Program objects can be defined using a Netbed-specific ns extension
 A graphical UI can be used instead

 Front-end Tcl/ns parser
 Recognizes the subset of ns relevant to topology and traffic generation
 Database
 Stores an abstraction of everything about the experiment:
▪ the generated (static) events
▪ information about hardware, users, and experiments
▪ procedures
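
The real front end is a Tcl/ns parser; purely to illustrate the kind of flat record the database ends up storing for a link, here is a hypothetical Python sketch that extracts the fields from the duplex-link command shown earlier:

```python
import re

def parse_duplex_link(line):
    """Extract the topology-relevant fields of an ns duplex-link
    command into a flat, database-style record (sketch only)."""
    m = re.match(
        r'\$ns duplex-link \$(\w+) \$(\w+) ([\d.]+)(Mbps|Kbps) (\d+)ms',
        line)
    if not m:
        raise ValueError("not a duplex-link command: " + line)
    a, b, bw, unit, delay = m.groups()
    scale = 1_000_000 if unit == "Mbps" else 1_000
    return {"node_a": a, "node_b": b,
            "bandwidth_bps": int(float(bw) * scale),
            "delay_ms": int(delay)}

rec = parse_duplex_link("$ns duplex-link $A $B 1.5Mbps 20ms")
```

The point is only that once parsed, the link becomes a few typed columns that both the mapper and the event generator can query.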


 Binds abstractions from the database to physical or simulated entities
 Best-effort matching against the specification
 On-demand allocation (no reservations)
 Two different algorithms for local and distributed nodes (different constraints):
▪ simulated annealing (local)
▪ a genetic algorithm (distributed)
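
A toy Python sketch of the simulated-annealing idea behind assign (the real algorithm handles many more constraints, such as node types and link capacities; all names here are invented):

```python
import math
import random

def anneal_map(virtual_links, phys_switch, virtual_nodes, phys_nodes,
               steps=5000, seed=0):
    """Place each virtual node on a distinct physical node so that as
    few virtual links as possible cross between switches.
    phys_switch maps physical node -> switch id."""
    rng = random.Random(seed)
    # Start from an arbitrary one-to-one placement.
    placement = dict(zip(virtual_nodes,
                         rng.sample(phys_nodes, len(virtual_nodes))))

    def cost(p):
        return sum(1 for a, b in virtual_links
                   if phys_switch[p[a]] != phys_switch[p[b]])

    cur = cost(placement)
    temp = 1.0
    for _ in range(steps):
        v = rng.choice(virtual_nodes)
        used = set(placement.values())
        free = [n for n in phys_nodes if n not in used]
        candidate = dict(placement)
        if free and rng.random() < 0.5:
            candidate[v] = rng.choice(free)          # move to a free node
        else:
            w = rng.choice(virtual_nodes)            # swap two placements
            candidate[v], candidate[w] = candidate[w], candidate[v]
        c = cost(candidate)
        # Accept improvements always, regressions with annealing probability.
        if c <= cur or rng.random() < math.exp((cur - c) / max(temp, 1e-6)):
            placement, cur = candidate, c
        temp *= 0.999
    return placement, cur

# Two switches, two physical nodes each; two virtual links to keep local.
phys_switch = {"p0": 0, "p1": 0, "p2": 1, "p3": 1}
placement, crossings = anneal_map(
    virtual_links=[("a", "b"), ("c", "d")],
    phys_switch=phys_switch,
    virtual_nodes=["a", "b", "c", "d"],
    phys_nodes=["p0", "p1", "p2", "p3"])
```

Minimizing inter-switch crossings is what lets assign avoid the 2 Gbps inter-switch bottleneck mentioned later.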

 Over-reservation of the bottleneck
 The inter-switch bandwidth is too small (2 Gbps)
 This goes against their conservative policy
 Dynamic changes to the topology are allowed: nodes can be added and removed
 Consistent naming across instantiations: virtualization of IP addresses and host names

 Dynamic linking and loading from the DB
 Gives each node the proper context (hostname, disk image, script to start the experiment)
 No persistent configuration state on the node, only volatile memory
 If required, the current soft state can be stored in the DB as hard state
 Swap out / swap in

 Local nodes
 All nodes are rebooted in parallel
 They contact the masterhost, which loads the kernel directed by the database
 A second-level boot may be required
 Distributed nodes
 Boot from a CD-ROM, then contact the masterhost
 A new FreeBSD jail is instantiated
 Testbed Master Control Client (TMCC)

 Netbed supports dynamic experiment control
 Start, stop, and resume processes, traffic generators, and network monitors
 Signals between nodes
 Uses a publish/subscribe event routing system
 Static events are retrieved from the DB
 Dynamic events are also possible
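
A minimal Python sketch of a publish/subscribe event router of this kind (the event type names and the API are hypothetical, not Netbed's actual event system):

```python
import heapq

class EventRouter:
    """Static events arrive with timestamps (as if read from the DB),
    subscribers register per event type, and dynamic events can be
    published at any time."""

    def __init__(self):
        self.subscribers = {}   # event type -> list of callbacks
        self.queue = []         # (time, seq, type, payload) min-heap
        self.seq = 0            # tie-breaker for equal timestamps

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, at_time, event_type, payload):
        heapq.heappush(self.queue, (at_time, self.seq, event_type, payload))
        self.seq += 1

    def run_until(self, now):
        """Deliver every event whose timestamp has been reached."""
        delivered = []
        while self.queue and self.queue[0][0] <= now:
            t, _, etype, payload = heapq.heappop(self.queue)
            for cb in self.subscribers.get(etype, []):
                cb(t, payload)
            delivered.append((t, etype))
        return delivered

router = EventRouter()
log = []
router.subscribe("TRAFGEN_START", lambda t, p: log.append((t, p)))
router.publish(5.0, "TRAFGEN_START", "cbr0")   # static event from the DB
router.publish(1.0, "LINK_DOWN", "link0")      # dynamic event, no subscriber
delivered = router.run_until(10.0)
```

Decoupling publishers from subscribers is what lets the same mechanism drive traffic generators, monitors, and user-defined signals.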

 The ns configuration file provides only high-level control
 Experimenters can also exert low-level control
 On local nodes: root privileges
▪ Kernel modification and access to raw sockets
 On distributed nodes: jail-restricted root privileges
▪ Access to raw sockets bound to a specific IP address
 Each local node is attached to a separate control network, isolated from the experimental one
 This makes it possible to control a node through a tunnel, as if logged on to it directly, without interfering with the experiment

 Netbed tries to prevent idling
 3 metrics: network traffic, use of pseudo-terminal devices, and CPU load average
 To be safe, a message is sent to the user, who can disapprove the swap-out manually
 This is a challenge for distributed nodes hosting several jails
 Netbed also offers automated batch experiments
 Used when no interaction is required
 Makes it possible to wait for available resources
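
The three-metric idle check can be sketched as follows (the thresholds are invented for illustration; the slide does not give Netbed's actual values, and the user is still asked to confirm before swap-out):

```python
def looks_idle(net_bytes_per_min, pty_active, cpu_load_avg,
               net_threshold=1000, load_threshold=0.1):
    """An experiment is flagged as an idle-swap candidate only when
    all three metrics are quiet: negligible network traffic, no
    active pseudo-terminals, and a low CPU load average."""
    return (net_bytes_per_min < net_threshold
            and not pty_active
            and cpu_load_avg < load_threshold)

# Quiet on all three metrics -> candidate for swap-out.
candidate = looks_idle(net_bytes_per_min=120, pty_active=False,
                       cpu_load_avg=0.02)
```

Requiring all three signals to agree keeps false positives down, which matters because a wrong swap-out destroys a running experiment's node state.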

Validation and testing

 1st row: emulation overhead
 Dummynet gives better results than nse

 They expect better results with future improvements to nse

 5 nodes communicate over 10 links
 Evaluation of a derivative of DOOM
 The goal is to sustain 30 tics/sec

 Challenges
 The testbed depends on physical artifacts (it cannot be cloned)
 Validation should evaluate arbitrary programs
 It must run continuously
 Minibed: 8 separate Netbed nodes
 Test mode: prevents hardware modifications
 Full-test mode: provides isolated hardware

Netbed contribution

 An all-in-one set of tools
 Automated and efficient realization of virtual topologies
 Efficient use of resources through time-sharing and space-sharing
 Increased fault tolerance (resource virtualization)

 Examples
 The “dumbbell” network: setup time drops from 3h15 to 3 min
 Better utilization of a scarce and expensive infrastructure (12 months, 168 PCs in Utah):
▪ time-sharing (swapping): 1064 nodes
▪ space-sharing (isolation): 19.1 years of allocated node time
 Virtualization of names and IP addresses: swapping causes no problems

 Experiment creation and swapping times break down into:
 Mapping
 Reservation
 Reboot issuing
 Reboot
 Miscellaneous
 Booting takes twice as long on a custom disk image

 Mapping local resources: assign
 Matches the user’s requirements
 Based on simulated annealing
 Tries to minimize the number of switches used and the inter-switch bandwidth
 Takes less than 13 seconds

 Mapping local resources: assign

 Mapping distributed resources: wanassign
 Different constraints:
▪ nodes are fully connected via the Internet
▪ “last mile”: link type matters instead of topology
▪ specific topologies may be obtained by requesting particular network characteristics (bandwidth, latency, and loss)
▪ based on a genetic algorithm

 Mapping distributed resources: wanassign
 16 nodes and 100 edges: ~1 sec
 256 nodes and 40 edges/node: 10 min to 2 h
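
A toy Python sketch of the genetic-algorithm idea behind wanassign, reduced to matching requested last-mile link types (selection plus mutation only, crossover omitted; the real algorithm also scores bandwidth, latency, and loss):

```python
import random

def ga_match(wants, have, generations=200, pop_size=30, seed=1):
    """wants: virtual node -> desired last-mile type.
    have: physical node -> its actual last-mile type.
    Evolve a one-to-one mapping maximizing the number of matches."""
    rng = random.Random(seed)
    vnodes = list(wants)
    pnames = list(have)

    def fitness(perm):
        return sum(1 for v, p in zip(vnodes, perm) if wants[v] == have[p])

    def mutate(perm):
        p = perm[:]
        i = rng.randrange(len(p))
        unused = [n for n in pnames if n not in p]
        if unused and rng.random() < 0.5:
            p[i] = rng.choice(unused)      # bring in an unused physical node
        else:
            j = rng.randrange(len(p))
            p[i], p[j] = p[j], p[i]        # swap two assignments
        return p

    pop = [rng.sample(pnames, len(vnodes)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]    # elitist truncation selection
        pop = survivors + [mutate(rng.choice(survivors)) for _ in survivors]
    best = max(pop, key=fitness)
    return dict(zip(vnodes, best)), fitness(best)

wants = {"v1": "dsl", "v2": "cable", "v3": "dsl"}
have = {"p1": "cable", "p2": "dsl", "p3": "dsl", "p4": "cable"}
mapping, matched = ga_match(wants, have)
```

Because "last mile" type, not topology, is what matters here, the fitness function only scores per-node matches, which is what distinguishes wanassign's problem from assign's.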

 Disk reloading: 2 possibilities
▪ complete disk-image loading
▪ incremental synchronization (hashes over files or blocks)
 Good
▪ faster (in their specific case)
▪ no corruption
 Bad
▪ wastes time when similar images are needed repeatedly
▪ paced reloading of freed nodes (reserved for one user)

 Disk reloading: Frisbee
 Performance techniques:
▪ uses a domain-specific algorithm to skip unused blocks
▪ delivers images via a custom reliable multicast protocol
 117 sec for 80 nodes, writing 550 MB instead of 3 GB
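
Frisbee's first technique, skipping unallocated blocks, can be sketched in a few lines of Python (the real tool parses the filesystem's allocation metadata and adds compression and multicast delivery, all omitted here):

```python
def make_image(disk_blocks, allocated):
    """Record only allocated blocks, as (block_number, data) pairs;
    free blocks never enter the image."""
    return [(i, disk_blocks[i]) for i in sorted(allocated)]

def restore_image(image, total_blocks, fill=b"\x00"):
    """Write the saved blocks back; unallocated blocks carry no data
    in the image and are simply zero-filled here."""
    disk = [fill] * total_blocks
    for i, data in image:
        disk[i] = data
    return disk

disk = [("block%d" % i).encode() for i in range(8)]  # toy 8-block disk
allocated = {0, 3, 5}        # filesystem metadata says only these hold data
image = make_image(disk, allocated)   # 3 blocks saved instead of 8
restored = restore_image(image, 8)
```

Skipping free blocks is what turns the 3 GB raw disk into a 550 MB write in the slide's numbers; the multicast protocol then amortizes that write across all 80 nodes at once.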

 Scaling of simulated resources
 Simulated nodes are multiplexed onto 1 physical node
▪ must keep up with real time while honoring the user’s specification: the rate of events
 Test of live TCP at 2 Mb CBR
▪ 850 MHz PC with UDP background traffic at 2 Mb CBR / 50 ms
▪ able to support 150 links for 300 nodes
▪ routing becomes a problem in very complex topologies

 Different batch experiments can be programmed, varying one parameter at a time
 The Armada file system from Oldfield & Kotz
 7 bandwidths x 5 latencies x 3 application settings x 4 configurations of 20 nodes
 420 tests in 30 hrs (~4.3 min per experiment)
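
The 7 x 5 x 3 x 4 sweep can be generated mechanically; in this Python sketch only the counts come from the slide, while the concrete parameter values are invented for illustration:

```python
from itertools import product

# Hypothetical settings; the slide only gives the counts (7, 5, 3, 4).
bandwidths = ["%dMbps" % m for m in (1, 2, 4, 8, 16, 32, 64)]  # 7 settings
latencies = ["5ms", "10ms", "20ms", "50ms", "100ms"]           # 5 settings
app_modes = ["read", "write", "mixed"]                         # 3 settings
configs = ["cfgA", "cfgB", "cfgC", "cfgD"]                     # 4 configs

# One batch experiment per combination, queued to run unattended.
runs = list(product(bandwidths, latencies, app_modes, configs))
```

Each tuple would parameterize one swapped-in experiment instance, which is why the batch system can grind through all 420 in 30 hours without interaction.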

Conclusion

 Netbed unifies 3 test environments
 ns scripts can be reused
 Quick setup of the test environment
 Virtualization techniques provide an artifact-free environment
 Enables qualitatively new experimental techniques

 Reliability / fault tolerance
 Distributed debugging: checkpoint/rollback
 Security “Petri dish”