emulab.net Current and Future: An Emulation Testbed for Networks and Distributed Systems
Jay Lepreau, University of Utah
December 12, 2001

The Main Players
Undergrads – Chris Alfeld, Chad Barb
Grads – Dave Andersen, Shashi Guruprasad, Abhijeet Joglekar, Indrajeet Kumar, Mac Newbold
Staff – Mike Hibler, Rob Ricci, Leigh Stoller, Kirk Webb
Alumni – Various

What?
A configurable Internet emulator in a room
– Today: 328 nodes, 1646 cables, 4x BFS (switch)
– Virtualizable topology, links, and software
Bare hardware with lots of tools
An instrument for experimental CS research
Universally available to any remote experimenter
Simple to use

What’s a Node?
Physical hardware: PCs, StrongARMs
Virtual node:
– Router (network emulation)
– Host, middlebox (distributed system)
Future physical hardware: IXP1200 +

Why?
“We evaluated our system on five nodes.” – job talk from a university with a 300-node cluster
“We evaluated our Web proxy design with 10 clients on 100Mbit ethernet.”
“Simulation results indicate...”
“Memory and CPU demands on the individual nodes were not measured, but we believe will be modest.”
“The authors ignore interrupt handling overhead in their evaluation, which likely dominates all other costs.”
“Resource control remains an open problem.”

Why 2
“You have to know the right people to get access to the cluster.”
“The cluster is hard to use.”
“ runs FreeBSD 2.2.x.”
“October’s schedule for is…”
“ is tunneled through the Internet.”

Complementary to Other Experimental Environments
Simulation
– Fast prototyping, easy to use, but less realistic
Small static testbeds
– Real hardware and software, but hard to configure and maintain, and lack scale
Live networks
– Realistic, but hard to control, measure, or reproduce results
emulab complements and also helps validate these environments

“Programmable Patch Panel”
(Diagram: users reach the testbed over the Internet; a web/DB/SNMP control system drives switch management, serial lines, power control, and the control switch/router connecting the PC and Shark nodes.)

Experiment Creation Process

Zoom In: One Node

Fundamental Leverage: Extremely Configurable, Easy to Use

Key Design Aspects
Allow experimenter complete control
– Configurable link bandwidth, latency, and loss rates, via transparently interposed “traffic shaping” nodes that provide WAN emulation
… but provide fast tools for common cases
– OS’s, state mgmt tools, IP, batch, ...
– Disk loading – 6 GB FreeBSD+Linux disk image
  Unicast tool: 88 seconds to load
  Multicast tool: 40 nodes simultaneously in < 5 minutes
Virtualization
– of all experimenter-visible resources
– node names, network interface names, network addrs
– Allows swapin/swapout, easily scriptable

Key Design Aspects (cont’d)
Flexible, extensible, powerful allocation algorithm
– Matches desired “virtual” topology to currently available physical resources
Persistent state maintenance:
– none on nodes, all in the database
– work from known state at boot time
Familiar, powerful, extensible configuration language: ns
Separate, isolated control network
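
To make the ns-based configuration concrete, here is a minimal sketch of the kind of script an experimenter submits: two nodes joined by a shaped link, with the link characteristics enforced by the interposed traffic-shaping nodes. The tb- commands come from Emulab's tb_compat.tcl extensions; treat the exact command spellings, OS image names, and defaults shown here as approximate for this era rather than authoritative.

```tcl
# Minimal Emulab-style ns script (sketch): two nodes joined by a shaped
# 1.5 Mbps, 20 ms link with 1% loss. Image names (FBSD-STD, RHL-STD) and
# exact tb- command details are assumptions, not guaranteed.
set ns [new Simulator]
source tb_compat.tcl

set nodeA [$ns node]
set nodeB [$ns node]
tb-set-node-os $nodeA FBSD-STD
tb-set-node-os $nodeB RHL-STD

# The bandwidth, delay, and loss below are what the shaping nodes emulate.
set link0 [$ns duplex-link $nodeA $nodeB 1.5Mb 20ms DropTail]
tb-set-link-loss $link0 0.01

$ns rtproto Static
$ns run
```

Because every name the script uses (nodes, interfaces, addresses) is virtualized, the same script can be swapped in and out against whatever physical nodes are currently free.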

Obligatory Pictures

Then

Now

A Few Research Issues and Challenges
Network management of unknown and untrusted entities
Security (root!)
Scheduling of experiments
Calibration, validation, and scaling
Artifact detection and control
NP-hard virtual --> physical mapping problem
Providing a reasonable user interface
…

How To Use It...
Submit ns script via web form
Relax while emulab …
– Generates config from the script & stores it in the DB
– Maps specified virtual topology to physical nodes
– Allocates resources
– Provides user accounts for node access
– Assigns IP addresses and host names
– Configures VLANs
– Loads disks, reboots nodes, configures OSes
– Yet more odds and ends...
– Runs experiment
– Reports results
Takes ~3 min to set up 25 nodes

An “Experiment”
emulab’s central operational entity
Directly generated by an ns script, …
… then represented entirely by database state
Steps: web, compile ns script, map, allocate, provide access, assign IP addrs, host names, configure VLANs, load disks, reboot, configure OS’s, run, report

Mapping Example

Automatic mapping of desired topologies and characteristics to physical resources
NP-hard problem: graph-to-graph mapping
Algorithm goals:
– Minimize likelihood of experimental artifacts (bottlenecks)
– “Optimal” packing of many simultaneous experiments
– Extensible for heterogeneous hardware, software, new features
Randomized heuristic algorithm: simulated annealing
Typically completes in < 1 second
May move to a genetic algorithm
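
As an illustration of the annealing approach, the toy sketch below (not Emulab's actual mapper) perturbs a random virtual-to-physical mapping and accepts worse mappings with a probability that shrinks as the temperature cools; the cost function simply penalizes hardware-type mismatches, doubled-up nodes, and virtual links that cross between switches.

```tcl
# Toy simulated-annealing mapper (illustrative only).
# Virtual nodes need a hardware type; virtual links should avoid crossing
# between switches, since inter-switch hops are potential bottlenecks.
set vtype  {a pc b pc c pc}
set vlinks {{a b} {b c}}
set pnodes {p1 {pc sw0} p2 {pc sw0} p3 {pc sw1} p4 {pc sw1}}

proc cost {map} {
    global vtype vlinks pnodes
    set c 0
    set used {}
    dict for {v p} $map {
        # Wrong hardware type is heavily penalized.
        if {[dict get $vtype $v] ne [lindex [dict get $pnodes $p] 0]} { incr c 100 }
        dict incr used $p
    }
    # So is packing two virtual nodes onto one physical node.
    dict for {p n} $used { if {$n > 1} { incr c [expr {100 * ($n - 1)}] } }
    foreach l $vlinks {
        lassign $l v1 v2
        set s1 [lindex [dict get $pnodes [dict get $map $v1]] 1]
        set s2 [lindex [dict get $pnodes [dict get $map $v2]] 1]
        if {$s1 ne $s2} { incr c 1 }   ;# inter-switch link: possible artifact
    }
    return $c
}

# Random initial mapping, then anneal: perturb one node, keep improvements,
# and keep some regressions with probability exp(-delta/T) as T cools.
set plist [dict keys $pnodes]
set map {}
foreach v [dict keys $vtype] {
    dict set map $v [lindex $plist [expr {int(rand() * [llength $plist])}]]
}
set T 10.0
for {set i 0} {$i < 2000} {incr i} {
    set v [lindex [dict keys $vtype] [expr {int(rand() * [dict size $vtype])}]]
    set cand $map
    dict set cand $v [lindex $plist [expr {int(rand() * [llength $plist])}]]
    set delta [expr {[cost $cand] - [cost $map]}]
    if {$delta <= 0 || rand() < exp(-double($delta) / $T)} { set map $cand }
    set T [expr {$T * 0.995}]
}
puts "mapping: $map  cost: [cost $map]"
```

The real mapper scores many more features (link bandwidth, node types, switch capacity), but the accept/reject structure is the same.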

Mapping Results
< 1 second for first solution, 40 nodes
“Good” solution within 5 seconds
Apparently insensitive to number of node “features”

Disk Loading
13 GB generic IDE 7200 rpm drives
Was: 20 minutes for a 6 GB image
Now: 88 seconds
Unicast – domain-specific compression
Multicast – “Frisbee”
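
The “domain-specific compression” is filesystem-aware: the image tool stores only the blocks the filesystem actually allocates and records free space as holes. Below is a hypothetical sketch of that idea; it is not the real imagezip/Frisbee format, and the allocated-range list would in practice come from parsing the filesystem's allocation metadata.

```tcl
# Hypothetical sketch of filesystem-aware image compression: only allocated
# sector ranges are read and deflated; everything else is an implicit hole.
# The chunk header (start, count, compressed length) is invented for
# illustration and is not the real imagezip layout.
package require Tcl 8.6   ;# for the built-in zlib command

proc compress_image {devpath ranges outpath} {
    set in  [open $devpath rb]
    set out [open $outpath wb]
    foreach {start count} $ranges {
        seek $in [expr {$start * 512}]
        set data [read $in [expr {$count * 512}]]
        set z [zlib deflate $data 9]
        puts -nonewline $out [binary format III $start $count [string length $z]]
        puts -nonewline $out $z
    }
    close $in
    close $out
}

# Example (hypothetical ranges): sectors 0-2047 and 100000-116383 are
# allocated; everything else on the 13 GB disk is skipped entirely.
# compress_image /dev/ad0 {0 2048 100000 16384} /images/fbsd-linux.ndz
```

Skipping free space is what turns a 6 GB raw disk into an image small enough to unicast in 88 seconds or multicast to dozens of nodes at once.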

Testbed Users
26 active projects
– 20 external
– 7 “active” active network projects
  SANDS (TASC)**
  Activecast (Kentucky)**
  AMP NodeOS (NAI Labs)**
  Active proxies (UMass)
  XML-based content routing (MIT)
  Janos, Agile protocols (Utah)**
– 3 “not-so-active” DARPA AN projects
– 4 other active security projects

Users…
Two OSDI’00 and three SOSP’01 papers
– 20% SOSP general acceptance rate
– 60% SOSP acceptance rate for emulab users!
More emulabs under construction:
– Kentucky, Duke, CMU, Cornell
– Others intended: MIT, WUSTL, Princeton, HP Labs, Intel/UCB, Mt. Holyoke, …

Ongoing and Future Work
Federation of many diverse “testbeds”
– Challenge: heterogeneous sites
– Challenge: resource allocation
Wireless nodes, mobile nodes
IXP1200 nodes, tools, code fragments
– Routers, high-capacity shapers
Simulation/emulation transparency
Event system
Scheduling system
Topology generation tools and GUI
Data capture, logging, visualization tools
Microsoft OSs, high-speed links, more nodes!

A Global-scale Testbed
Federation is key
Bottom-up “organic” growth
– Local autonomy and priority
– Existing hardware resources
– Provides diverse hardware:
  PCs
  Wireless, mobile
  Real routers, switches (Wisconsin, …)
  Network processors (IXPs)
  Research switches (WUSTL)

NSF ITR Proposal (Nov ’01)
Global-scale testbed
Utah primary
Subcontractors:
– Brown, co-PI (resource allocation)
– MIT (RON overlay, wireless)
– Duke (ModelNet, early adopter)
– Mt. Holyoke (diverse users, education)
$5M, 5 years, almost no hardware

Types of Sites
High-end facilities
Generic clusters
Generic labs
“Virtual machines” (leverage ANETS R&D)
Internet2 links between some sites

Result…
Loosely coupled distributed system
Controlled isolation
“Internet Petri Dish”

New Stuff: Extending to Wireless and Mobile
Problems with existing approaches
 Same problems as wired domain
 But worse (simulation scaling, ...)
 And more (no models for new technologies, ...)

Our Approach: Exploit a Dense Mesh of Devices
 Density enables broad range of emulation
 Wireless
  Deploy devices throughout campus
  Measure NxN path characteristics (e.g. power, interference, bit error rate)
  Employ diversity: 900 MHz, Bluetooth, IEEE
 Mobile
  Leverage passive “couriers”
   Assign PDAs to students walking to class
   Equip public transit system with higher-end devices
 Provides a realistic, predictable mobile testbed

Possible User Interfaces
 Specify desired device and path properties
  emulab selects closest approximation
 Specify desired spatial layout
  emulab selects closest mapping
 Manually select from deployed devices
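
Purely as a thought experiment on the first two options, a hypothetical ns-style fragment follows. None of the wireless-specific commands below existed in Emulab at the time; they are invented here only to show the shape such an interface might take.

```tcl
# Hypothetical only: sketch of specifying wireless path properties in the
# ns interface. tb-set-wireless-path and the "pda-80211b" device class are
# invented for illustration; they are not real Emulab commands.
set ns [new Simulator]
source tb_compat.tcl

set m1 [$ns node]
set m2 [$ns node]
tb-set-hardware $m1 pda-80211b    ;# hypothetical device class
tb-set-hardware $m2 pda-80211b

# Ask for a lossy ~1 Mbps path; the testbed would pick the deployed pair
# of devices whose measured NxN characteristics come closest.
tb-set-wireless-path $m1 $m2 -bandwidth 1Mb -loss 0.05

$ns run
```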

Wireless Virtual to Physical Mapping

Available for universities, labs, and companies, for research and teaching, at: www.emulab.net