Supercharging PlanetLab: A High Performance, Multi-Application, Overlay Network Platform. Reviewed by YoungSoo Lee, CSL.

Overview  PlanetLab has become a popular experimental platform.  But PlanetLab are subject to high latency, high delay jitter, poor performance. What is the Solution?

Solution  Combine general-purpose servers with network processor (NP) subsystems: the Supercharged PlanetLab Platform (SPP).

SPP's Objectives  A higher level of both I/O performance and processing performance.  Reasonably straightforward for PlanetLab users to take advantage of the capabilities.  Require that legacy PlanetLab applications run on the system without change.

System Overview  Modern NPs are not designed to be shared.  Application are divided into Fast Path and Slow Path

System Components  Line Card(LC) : LC forwards each arriving packet to the system component, and queues outgoing packets for transmission.  General Purpose Processing Engines(GPE).  Network Processing Engines(NPE)  Control Processor(CP)

Network Processor Issues  NP products have been developed for use in conventional router.  Micro-Engines(ME) : packet processing.  DRAM : packet buffer.  SRAM : implementing lookup table & linked list queues.  MP : overall system control

Network Processor Issues  NP provides a different mechanism for coping with the memory latency gap, hardware multithreading.  Operate in a round-robin fashion.

Sharing the NPE  Develop software for the NPE that allows it to be shared by the fast path segments of many different slices.  Process 8 packets concurrently using the hardware thread contexts.

Sharing the NPE  Rx : packet received from switch are copied to DRAM & passes a pointer main packet processing pipeline.  Substr. : determines which slice the packet belongs & strips outer header form the packet.  Parse : preconfigured set of Code Option & form lookup key.

Sharing the NPE  Lookup : provides generic lookup capability.  Hdr Format : makes necessary changes to the slice-specific packet header.  Queue Manager : implements a configurable collection of queues.  Tx : forwards the packet to the output.

Enhancing GPE Performance  Boost performance of the GPEs.  1. Use higher performance hardware configurations than usual for PlanetLab.  2. Improve the latency of PlanetLab applications.  Changing coarse-grained scheduling paradigm

Enhancing GPE Performance  Default token-based scheduling: maximum of 100 tokens, minimum of 50; with N vServers, a vServer can be pre-empted for 100(N-1) ms at a time.  Modified token-based scheduling: the minimum and maximum token allocations were set to the same value, varied from 2 to 16 (see the worked example below).
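A small worked example of the pre-emption arithmetic above, assuming (as the 100(N-1) ms figure on the slide implies) that one token corresponds to roughly 1 ms of CPU time: with N runnable vServers, the other N-1 may each spend their full token allocation before a given vServer runs again.

```c
#include <stdio.h>

/* Worst-case time (ms) a vServer can be pre-empted when n vServers are
 * runnable: the other n-1 may each spend their full token allocation
 * before this one is scheduled again (1 token ~ 1 ms of CPU, assumed). */
long worst_case_preemption_ms(int n_vservers, int max_tokens)
{
    return (long)max_tokens * (n_vservers - 1);
}

int main(void)
{
    /* PlanetLab default (100 tokens) vs. the reduced allocations tried on
     * the SPP GPEs (min == max, varied from 2 to 16). */
    int allocations[] = { 100, 16, 8, 4, 2 };
    int n_vservers = 10;   /* example population of runnable vServers */

    for (unsigned i = 0; i < sizeof allocations / sizeof allocations[0]; i++)
        printf("tokens = %3d  ->  worst-case pre-emption = %ld ms\n",
               allocations[i],
               worst_case_preemption_ms(n_vservers, allocations[i]));
    return 0;
}
```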

Enhancing GPE Performance

Overall System

Slice Configuration  ① : CP obtains slice configuration data using the standard PlanetLab mechanism of periodically polling the PLC  ② : Slices are assigned to one of GPE by GNM.  ③ : Corresponding entry is made in a local copy of my PLC.  ④ : The LNM on each of the GPEs periodically poll myPLC to obtain new slice configurations. ① ② ③ ④

Port Assignment  ① : LRM reservation request th the GRM.  ② : If the requested port number is available, it makes the appropriate assignment to LRM.  ③ : GRM configures the Line Card so that LC will forward to the right GPE. ① ②③

NPE Assignment  ① : LRM forwards the request to the GRM.  ② : GRM selects the most appropriate NPE to host the slice and returns its id to the LRM  ③ : LRM then interacts with the MP. ① ② ③

Evaluation  Implement two different application.  IPv4 & Internet Indirection Infrastructure.

IPv4 - Fair Queueing Mechanism

IPv4 - Throughput

IPv4 - Latency

Internet Indirection Infrastructure  Throughput and latency tests similar to those done for the IPv4 application.  The results achieved are 30-40% higher.

IPv4 & III - Throughput

III - GPE vs NPE