
Faithful Reproduction of Network Experiments
Dimosthenis Pediaditakis, Charalampos Rotsos, Andrew W. Moore
Computer Laboratory, Systems Research Group, University of Cambridge, UK

Research on networked systems: Present
[Figure: example topology mixing 100 Mbps, 1 GbE and 10 GbE links, with 40+ Gbps WAN links]
How can we experiment with new architectures?

Performance of widely available tools
A simple experiment: 2-pod Fat-Tree, 1 GbE links, 10K 5 MB TCP flows
- Simulation (ns-3), flat model: 2.75x lower throughput
- Emulation (MiniNet): 4.5x lower throughput, skewed CDF

Why not simulation (example: ns-2 / ns-3)
- Fidelity: modelling abstractions; no real stacks or applications
- Scalability: network size; network speed (10 Gbps and beyond); poor execution-time scalability
- Reproducibility: replication of configuration; repeatability of results (same RNG seeds)

Why not real-time emulation (example: MiniNet)
- Fidelity: real stacks and applications; heterogeneity support; SDN devices
- Scalability: CPU bottleneck (network speed, network size)
- Reproducibility: replication of configuration; repeatability of results

In an ideal world... what if we could achieve all three?
- Fidelity: real stacks and applications; heterogeneity support; realistic SDN switch model
- Scalability: 10 GbE, 100 Gbps...; hundreds of nodes
- Reproducibility: replication of configuration; repeatability of results
Our vision: combine the strengths of simulation and emulation.

High-level experiment description and automation
- Python API (MiniNet style); a hedged sketch follows below
Real OS components and applications
- Xen-based emulation
- Fine-grained resource control
- Heterogeneous deployments
Hardware resource scaling
- Time dilation (revisiting DieCast), unmodified guests
- Users can trade execution speed for fidelity and scalability
Network control-plane fidelity
- Support for unmodified SDN platforms
- Empirical OpenFlow switch model (extensible)
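As a flavour of the MiniNet-style API, here is a minimal sketch of an experiment description. The module, class and method names are illustrative assumptions, not SELENA's actual API:

```python
# Hypothetical sketch of a MiniNet-style SELENA experiment description.
# The module, class and method names are illustrative assumptions.
from selena import Experiment  # assumed entry point

exp = Experiment(tdf=10)                 # time-dilation factor for the run
s1 = exp.add_switch("s1")                # emulated OpenFlow switch
h1 = exp.add_host("h1", os="linux")      # unmodified PV guest
h2 = exp.add_host("h2", os="linux")
exp.add_link(h1, s1, bw="1Gbps")         # perceived link speed
exp.add_link(h2, s1, bw="1Gbps")
exp.compile()                            # the Selena compiler maps this onto Xen
exp.run()
```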

Deploying an experiment with SELENA
[Figure: the Selena compiler maps the experiment description onto Xen guests interconnected by OVS bridges]

Scaling resources via Time Dilation
Create a scenario and choose a TDF (time-dilation factor)
Linear and symmetric scaling of the resources "perceived" by the guest OS: network I/O, CPU, disk I/O
Independently control the guest's "perception" of each available resource (a hedged sketch follows below):
- CPU: Xen Credit2 scheduler
- Network: Xen VIF QoS, NetEm/DummyNet
- Disk I/O: within guests, via cgroups/rctl
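A minimal sketch of the arithmetic behind those knobs, assuming hypothetical wrapper functions around Credit2, VIF QoS and cgroups (none of these names come from SELENA itself): to make a dilated guest perceive its nominal resources, each real allocation is divided by the TDF.

```python
# Hedged sketch: real allocations are the nominal values divided by the TDF,
# so the time-dilated guest perceives full-speed resources. The guest object
# and setter names are hypothetical wrappers, not SELENA's actual API.
def scale_guest_resources(guest, tdf):
    guest.set_cpu_cap(100 // tdf)            # Credit2 cap: fraction of one core
    guest.set_vif_rate_mbps(1000 // tdf)     # VIF QoS: perceived 1 GbE link
    guest.set_disk_bps(200_000_000 // tdf)   # cgroups blkio limit inside guest
```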

The concept of Time-Dilation
"I command you to slow down."
One tick = 1/C_Hz seconds. In real time, 10 Mbits transferred over 6 ticks give rate_real = 10 / (6/C_Hz) Mbps. Under 2x dilated time (TDF = 2) the guest receives ticks at half the rate (or, equivalently, believes the clock runs at 2*C_Hz), so the same transfer appears to span only 3 virtual ticks: rate_virt = 10 / (3/C_Hz) Mbps = 2 * rate_real.
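Written out under the slide's definitions (one tick = 1/C_Hz seconds, D the data transferred, t_real the wall-clock duration of the transfer), the scaling is:

$$
t_{\mathrm{virt}} = \frac{t_{\mathrm{real}}}{\mathrm{TDF}},
\qquad
\mathrm{rate}_{\mathrm{virt}} = \frac{D}{t_{\mathrm{virt}}}
= \mathrm{TDF} \cdot \frac{D}{t_{\mathrm{real}}}
= \mathrm{TDF} \cdot \mathrm{rate}_{\mathrm{real}}
$$

With D = 10 Mbit, t_real = 6 ticks = 6/C_Hz seconds and TDF = 2, the guest sees only 3 virtual ticks, so rate_virt = 2 * rate_real, matching the slide.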

PV-guest time dilation
[Figure: the Xen hypervisor mediates guest timekeeping: rdtsc reads of the TSC value, the Xen clock source, VIRQ_TIMER delivery, and hypervisor_set_timer_op to set the next timer event]
Wall-clock time: time since epoch; system time (since boot); independent clock mode (rdtsc)
Timer interrupts: scheduled timers; periodic timers; loop delays

OpenFlow Toolstack X-Ray
[Figure: control apps on a network OS, talking over the control channel to an OF agent that drives the switch ASIC]
- Control channel: available capacity, synchronicity
- ASIC driver and policy configuration: latency and semantics; limited PCI bus capacity
- Scarce co-processor resources; switch OS scheduling is non-trivial
- Control application complexity
How critical is SDN control-plane performance for data-plane performance?

Building an OpenFlow switch model
Measure an off-the-shelf switch device
- Measure message-processing performance (OFLOPS)
- Extract latency and loss characteristics of: flow-table management; the packet interception/injection mechanism; statistics-counter extraction
Configurable switch model (a hedged sketch follows below)
- Replicate the measured latency and loss characteristics
- Implementation: Mirage-OS based switch
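A minimal sketch of what such an empirical model can look like, assuming latency samples gathered with an OFLOPS-style measurement run; the class and its fields are illustrative, not the actual Mirage implementation:

```python
# Hedged sketch of an empirical OpenFlow switch model: replay the latency
# and loss characteristics measured on a real device (e.g. with OFLOPS).
# The class and its fields are illustrative, not SELENA's Mirage switch.
import random

class EmpiricalSwitchModel:
    def __init__(self, flow_mod_latencies_us, pkt_in_loss_rate):
        self.samples = flow_mod_latencies_us   # measured flow_mod latencies
        self.loss_rate = pkt_in_loss_rate      # measured packet-in loss rate

    def flow_mod_delay_us(self):
        """Draw a flow-table update latency from the measured distribution."""
        return random.choice(self.samples)

    def drops_packet_in(self):
        """Decide whether the packet interception path loses this event."""
        return random.random() < self.loss_rate

# Example use with made-up measurements:
# model = EmpiricalSwitchModel([120.0, 135.5, 410.2], pkt_in_loss_rate=0.01)
```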

Evaluation roadmap
Methodology:
1. Run the experiment on real hardware
2. Reproduce the results in MiniNet, ns-3 and SELENA
3. Compare each against "real"
Dimensions of fidelity: throughput, latency, control plane, application performance, scalability

Latency fidelity
Setup: 18 nodes, 1 Gbps links, … flows
MiniNet and ns-3 accuracy: 32% and 44%
SELENA accuracy: 71% with 5x dilation; 98.7% with 20x dilation

Platform          Execution time
MiniNet           120 s
ns-3              172 m 51 s
SELENA (TDF=20)   40 m

SDN Control plane Fidelity
1 Mb TCP flows, completion time; exponential arrivals, λ = 0.02
Stepping behaviour: TCP SYN and SYN-ACK losses during flow setup
The MiniNet switch model does not capture this throttling effect: it cannot capture the transient switch-OS scheduling effects of the real switch.

Scalability analysis
Star topology, 1 GbE links, multi-gigabit sink link
Dom-0 is allocated 4 cores: why does utilisation top out at 250% CPU?
Near-linear scalability
[Figure: star topology over an OVS bridge]

Application fidelity (LAMP)
2-pod Fat-Tree: 1 GbE links, 10 switches, 4 clients, 4 web servers running Apache2, PHP, MySQL, Redis and Wordpress

SELENA usage guidelines
SELENA is primarily a NETWORK emulation framework
- Perfect match: network-bound applications
- Allows experimentation with relative CPU, disk and network performance
Real applications / SDN controllers / network stacks
- Improved fidelity and scalability; outperforms common simulation and emulation tools
Time dilation is exciting but not a panacea
- Hardware-specific performance characteristics remain, e.g. disks, cache size, per-core lock contention, Intel DDIO
Rule of thumb for choosing the TDF (a hedged sketch follows below)
- Keep Dom-0 and Dom-U utilisation low
- Observation time-scales matter
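One way to operationalise that rule of thumb, as an illustrative heuristic that is not taken from the paper: double the TDF until a short calibration run shows Dom-0 and Dom-U utilisation staying comfortably below saturation.

```python
# Illustrative heuristic, not from the paper: grow the TDF until a short
# calibration run keeps Dom-0 and Dom-U CPU utilisation below a threshold.
def choose_tdf(measure_utilisation, max_tdf=64, threshold=0.7):
    tdf = 1
    while tdf <= max_tdf:
        dom0_util, domu_util = measure_utilisation(tdf)  # calibration run
        if dom0_util < threshold and domu_util < threshold:
            return tdf
        tdf *= 2
    raise RuntimeError("experiment too demanding even at the maximum TDF")
```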


SELENA is free and open. Give it a try:

Backup slides

Throughput fidelity
MiniNet and ns-3: … Gbps and 5.3 Gbps
SELENA, 10x dilation: 99.5% accuracy; executes 9x faster than ns-3

Platform          Execution time
MiniNet           120 s
ns-3              175 m 24 s
SELENA (TDF=10)   20 m

Scalability
- Multi-machine emulation: synchronization among hosts; efficient placement
- Optimize guest-to-guest Xen communications
- Auto-tuning of the TDF

A layered SDN controller hierarchy
4-pod Fat-Tree topology, 1 GbE links, 32 Gbps aggregate traffic
Question: how does a layered controller hierarchy (1st-layer and 2nd-layer controllers) affect performance?
More layers: control decisions are taken higher in the hierarchy; flow-setup latency increases (network, request pipelining, CPU load); resilience

Limitations of ns-3
Layer 2 models
- CSMA link: half duplex, hence lower throughput; the only wired model supporting Ethernet
- Point-to-point link model: IP only, so switches cannot be used
- Distributed simulation: its synchronisation is not a good fit for DC experiments; time scalability is similar to CSMA
Layer 3 models
- TCP socket model: no window scaling

Containers vs Xen
- Heterogeneity (OS, network stacks)
- OS-level time virtualization is easier
- Resource management: containers rely on cgroups, with kernel noise and convoluted tuning; Xen gives Domain-0 / Xen / Dom-U isolation
- MiniNet can run inside a time-dilated VM

Why not just scale network rates
Non-uniform resource and time scaling
- User-space applications
- Kernel (protocols, timers, link emulation)
Does not capture packet-level protocol effects
- e.g. TCP window sizing; queueing fidelity
Lessons learned via MiniNet use cases
- JellyFish topology; TCP-incast effect

Related work


Research on networked systems: Past
[Figure: hierarchical topology with 100 Mbps and 1 GbE links]

Network experimentation trade-offs
Fidelity, scalability, reproducibility:
- Emulation: sacrifices scalability
- Simulation: sacrifices fidelity
Reproducibility: simulation supports it by design; in emulation, MiniNet was the pioneer, but how do you maintain it across different platforms?

SELENA: Standing on the shoulders of giants
[Table: simulation, emulation, the SELENA hybrid and testbeds compared on reproducibility, real network stacks, unmodified apps, hardware requirements, scalability, fidelity and execution speed]
Fidelity: emulation over Xen, real OS components, SDN support
Reproducibility: Python API to describe experiments (the MiniNet approach)
Scalability: time dilation (the DieCast approach)
Full user control: trade execution speed for fidelity and scalability

API and experimental workflow
[Figure: an experiment description written against the Python API is processed by the Selena compiler]

SELENA's Emulation model over Xen
[Figure: emulated nodes as Xen guests, linked through OVS bridges]

Implementing Time-Dilation
Linux guest / Xen hypervisor:
- Native "rdtsc" (constant, invariant TSC) is trapped, emulated and scaled by the hypervisor before the TSC value reaches the guest
- Start-of-day: dilated wall-clock time
- The VCPU time info exposed to the guest is scaled:
    _u.tsc_timestamp = tsc_stamp;
    _u.system_time = system_time;
    _u.tsc_to_system_mul = tsc_to_system_mul;
- VCPUOP_set_singleshot_timer: set_timer(&v->singleshot_timer, dilatedTimeout);
- A periodic VIRQ_TIMER is implemented but not really used

Summarizing the elements of Fidelity
- Resource scaling via time dilation
- Real stacks and other OS components
- Real applications, including SDN controllers
- Realistic SDN switch models: why are they important, and how do they affect performance?

Work in progress
- API compatibility with MiniNet
- Further improved scalability: multi-machine emulation; optimized guest-to-guest Xen communications
- Features and use cases: SDN coupling with workload consolidation; emulation of live VM migration; energy-aware data centers

Research on networked systems: past, present, future
Animation: three example networks showing the evolution of the network characteristics on which research is conducted:
- Past: 2-3 layers, hierarchical, ToR, 100 Mbps, bare-metal OS
- Present: Fat-tree, 1 Gbps links, virtualization, WAN links
- Near future: flexible architectures, 10 Gbps, elastic resource management, SDN controllers, OF switches, large scale (DC)
The point of this slide is that real-world systems progress at a fast pace (complexity, size), but common tools have not kept up.
I will challenge the audience to think:
- Which of the three illustrated networks they believe they can model with existing tools
- What level of fidelity (incl. protocols, SDN, apps, net emulation)
- What network sizes and link speeds they can commonly model

A simple example with NS-3
Here I will assume a simple star topology: 10 clients, 1 server, 1 switch (10 Gbps aggregate). I will provide the throughput plot and explain why performance suffers. Point out that NS3 is not appropriate for faster networks: simplicity of models, and no real applications. Using DCE it is even slower, and not fully POSIX-compliant.

A simple example with MiniNet
Same setup as before, with the throughput plot. Better fidelity in terms of protocols, applications, etc., but at a performance penalty. Explain the bottleneck, especially in relation to MiniNet's implementation.

Everything is a trade-off
Nothing comes for free when it comes to modelling and the three key experimentation properties (fidelity, scalability, reproducibility).
- MiniNet aims for fidelity and sacrifices scalability.
- NS-3 aims for scalability (many abstractions) and sacrifices fidelity, yet still has scalability limitations of its own.
The importance of reproducibility: MiniNet is a pioneer, but reproducibility is difficult to maintain from machine to machine; MiniNet can guarantee it only at the level of configuration, not at the level of performance.

SELENA: Standing on the shoulders of giants
- Fidelity: use emulation. Unmodified apps and protocols give fidelity and usability; Xen supports common OSes, scales well, and offers great control over resources.
- Reproducible experiments: the MiniNet approach of high-level experiment descriptions and automation.
- Fidelity maintained under scale: the DieCast approach of time dilation (more on that later).
- The user is the MASTER, with one tuning knob: experiment execution speed.

SELENA Architecture
Animation here: three steps show how an experiment is specified (Python API), compiled, and deployed. Explain the mappings of network entities and features to Xen emulation components, and give hints of the optimization tweaks we use under the hood.

Time Dilation and Reproducibility
Explain how time dilation also FACILITATES reproducibility across different platforms.
- Replication of configuration: network architecture, links, protocols; applications; traffic and workloads. How we do it in SELENA: the Python API and the Xen API.
- Reproduction of results and observed performance: each platform must have enough resources to run the experiment faithfully. How we do it in SELENA: time dilation. An older platform or hardware will simply require a higher minimum TDF to reproduce the same results.

Demystifying Time-Dilation 1/3
Explain the concept in high-level terms and give a solid example with a timeline, similar to slide 8 of http://sysnet.ucsd.edu/projects/time-dilation/nsdi06-tdf-talk.pdf
Explain that everything happens at the hypervisor level:
- Guest time sandboxing (experiment VMs)
- Common time for kernel and user space
- No modifications for PV guests: Linux, FreeBSD, ClickOS, OSv, Mirage

Demystifying Time-Dilation 2/3
Here we explain the low-level details. Give credit to DieCast, but also explain the incremental work we did. Best shown with an animation.

Demystifying Time-Dilation 3/3
Resource scaling
- Linear and symmetric scaling for network, CPU, RAM bandwidth and disk I/O
- The TDF only increases the perceived performance headroom of the above
- SELENA allows the perceived speeds to be configured independently: CPU; network; disk I/O (currently from within the guests, via cgroups)
Typical workflow
1. Create a scenario
2. Decide the minimum TDF necessary to support the desired speeds (more on that later)
3. Independently scale resources, based on the users' requirements and the focus of their studies

Summarizing the elements of Fidelity
- Resource scaling via time dilation (already covered)
- Real stacks and other OS components
- Real applications, including SDN controllers
- Realistic SDN switch models: why they are important, and how much they can affect observed behaviours

Inside an OF switch
Present a model of an OF switch's internals: show the components, and the paths and interactions which affect performance, namely the data plane (which we do not currently model) and the control plane.
Random image from the web. Just a placeholder.

Building a realistic OF switch model
Methodology for constructing an empirical model
- PICA-8 switch, OFLOPS measurements
- Collect, analyze, extract trends; build a stochastic model
- Use a Mirage switch to implement the model: flexible, functional, non-bloated code; performant (unikernel, no context switches); small footprint for scalable emulations

Evaluation methodology
1. Run the experiment on real hardware
2. Reproduce the results in MiniNet, NS3 and SELENA (for various TDFs)
3. Compare each one against "real"
We evaluate multiple aspects of fidelity: data plane, flow level, SDN control, application.

Data-Plane fidelity
Figure from paper. Explain the star topology. Show the comparison of MiniNet and NS3: the same figures as slides 2-3, but now compared against SELENA and real hardware. Point out how increasing the TDF affects fidelity.

Flow-Level fidelity
Figure from paper. Explain the Fat-tree topology.

Execution Speed
Compare against NS3 and MiniNet. Point out that SELENA executes faster than NS3; NS3, moreover, replicates the network at only half speed, so the difference is even bigger.

SDN Control plane Fidelity
Figure from paper. Explain the experiment setup. Point out the shortcomings of MiniNet (it is only as good as OVS), and the terrible support for SDN in NS3.

Application level fidelity
Figure from paper. Explain the experiment setup and the latency aspect. Show how CPU utilisation matters for fidelity, opening the dialogue about performance bottlenecks and limitations for a smooth transition to the next slide.

Near-linear Scalability
Figure from paper. Explain how scalability is determined for a given TDF.

Limitations discussion
Explain the effects of running on Xen, and what happens if the TDF is low and utilisation is high. Insufficient CPU compromises the emulated network speeds and the guests' capability to utilise the available bandwidth, skews the performance of networked applications, and adds excessive latency. Scheduling also contributes.

A more complicated example
Showcase the power of SELENA: use the MRC2 experiment.

Work in progress
- API compatibility with MiniNet
- Further improved scalability: multi-machine emulation; optimized guest-to-guest Xen communications
- Features and use cases: SDN coupling with workload consolidation; emulation of live VM migration; incorporation of energy models

SELENA is free and open. Give it a try: