1
Supercharging PlanetLab: A High Performance, Multi-Application, Overlay Network Platform Reviewed by YoungSoo Lee, CSL
2
Overview PlanetLab has become a popular experimental platform, but PlanetLab applications are subject to high latency, high delay jitter, and poor performance. What is the solution?
3
Solution Combine general-purpose servers with NP (network processor) subsystems: the Supercharged PlanetLab Platform (SPP).
4
SPP's Objectives Deliver a higher level of both I/O performance and processing performance. Make it reasonably straightforward for PlanetLab users to take advantage of these capabilities. Require that legacy PlanetLab applications run on the system without change.
5
System Overview Modern NPs are not designed to be shared. Applications are divided into a fast path and a slow path.
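To make the division concrete, here is a minimal sketch of the fast-path/slow-path split, assuming the fast path holds only forwarding state installed in advance and punts everything else to the slice's full application on a GPE. The names (FastPath, slow_path_queue) are illustrative, not from the paper.

```python
# Hypothetical sketch of the fast-path/slow-path split described above.

class FastPath:
    """Common-case packet handling, as it might run on an NPE."""
    def __init__(self):
        self.route_table = {}      # dest -> next hop, installed by the slow path
        self.slow_path_queue = []  # exceptions handed off to the GPE

    def handle(self, packet):
        next_hop = self.route_table.get(packet["dest"])
        if next_hop is None:
            # No fast-path state: punt to the slow path on the GPE,
            # which runs the slice's full application logic.
            self.slow_path_queue.append(packet)
            return None
        return next_hop            # forward directly, no GPE involvement

fp = FastPath()
fp.route_table["10.0.0.1"] = "eth1"
print(fp.handle({"dest": "10.0.0.1"}))   # fast path: eth1
print(fp.handle({"dest": "10.0.0.9"}))   # miss: queued for the slow path
```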
6
System Components Line Card (LC): forwards each arriving packet to the appropriate system component and queues outgoing packets for transmission. General Purpose Processing Engines (GPE). Network Processing Engines (NPE). Control Processor (CP).
7
Network Processor Issues NP products have been developed for use in conventional routers. Micro-Engines (ME): packet processing. DRAM: packet buffers. SRAM: lookup tables and linked-list queues. MP: overall system control.
8
Network Processor Issues NPs provide a different mechanism for coping with the memory latency gap: hardware multithreading. Thread contexts operate in a round-robin fashion.
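A toy model of that round-robin behavior, assuming each memory reference stalls one context while the micro-engine switches to the next ready one. The cycle counts and step names are invented for illustration; real NPs switch contexts in hardware.

```python
# Toy simulation of round-robin hardware multithreading on one micro-engine.

from collections import deque

MEM_LATENCY = 3   # cycles a memory reference takes (assumed value)

def packet_thread(tid):
    """Each yield models a memory reference that stalls this context."""
    for step in ("read header", "lookup", "write result"):
        print(f"thread {tid}: issue {step}, yield while memory completes")
        yield MEM_LATENCY

threads = deque((tid, packet_thread(tid)) for tid in range(8))

# Round-robin: on every memory reference, the ME switches to the next
# ready context instead of stalling, so compute overlaps memory latency.
while threads:
    tid, t = threads.popleft()
    try:
        next(t)
        threads.append((tid, t))   # rotate to the back of the run queue
    except StopIteration:
        print(f"thread {tid}: packet done")
```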
9
Sharing the NPE Develop software for the NPE that allows it to be shared by the fast-path segments of many different slices. Process 8 packets concurrently using the hardware thread contexts.
10
Sharing the NPE Rx: packets received from the switch are copied to DRAM, and a pointer is passed to the main packet-processing pipeline. Substr.: determines which slice the packet belongs to and strips the outer header from the packet. Parse: applies the slice's preconfigured Code Option and forms the lookup key.
11
Sharing the NPE Lookup: provides a generic lookup capability. Hdr Format: makes the necessary changes to the slice-specific packet header. Queue Manager: implements a configurable collection of queues. Tx: forwards the packet to the output.
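Putting the two slides together, here is a minimal end-to-end sketch of the pipeline stages, assuming the outer header carries a demux tag and each slice owns its own lookup table. Field names and the demux rule are assumptions for illustration, not the paper's data layout.

```python
# Minimal sketch of the shared NPE fast-path pipeline named above.

def rx(frame):
    # Copy to "DRAM" and pass a descriptor down the pipeline.
    return {"buf": frame, "slice": None, "key": None, "out": None}

def substrate(pkt, slice_table):
    # Demux on the outer header to find the owning slice, then strip it.
    outer, inner = pkt["buf"][0], pkt["buf"][1:]
    pkt["slice"] = slice_table[outer]
    pkt["buf"] = inner
    return pkt

def parse(pkt):
    # The slice's preconfigured code option builds the lookup key here.
    pkt["key"] = pkt["buf"][0]
    return pkt

def lookup(pkt, tables):
    # Generic lookup in the slice's own table.
    pkt["out"] = tables[pkt["slice"]][pkt["key"]]
    return pkt

def hdr_format(pkt):
    # Slice-specific header rewrite (e.g. TTL decrement) would go here.
    return pkt

def queue_and_tx(pkt):
    # Queue Manager + Tx collapsed into one step for brevity.
    print(f"slice {pkt['slice']}: forward to {pkt['out']}")

slice_table = {"vlan7": "sliceA"}
tables = {"sliceA": {"10.0.0.1": "port2"}}
pkt = rx(["vlan7", "10.0.0.1", "payload"])
queue_and_tx(hdr_format(lookup(parse(substrate(pkt, slice_table)), tables)))
```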
12
Enhancing GPE Performance Boost the performance of the GPEs in two ways: 1. Use higher-performance hardware configurations than usual for PlanetLab. 2. Improve the latency of PlanetLab applications by changing the coarse-grained scheduling paradigm.
13
Enhancing GPE Performance Default token-based scheduling: maximum token allocation 100, minimum 50; with N runnable vServers, a vServer can be pre-empted for 100(N-1) ms at a time. Modified token-based scheduling: the minimum and maximum token allocations were set to the same value, varied from 2 to 16.
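The worst-case latency follows directly from the slide's formula. A small model, assuming (as a generalization of the 100(N-1) figure) that with equal min/max allocations of T tokens a vServer can wait T*(N-1) ms while the others drain their tokens:

```python
# Illustrative model of the token-based scheduling trade-off above:
# each vServer burns one token per ms of CPU; a runnable vServer can be
# pre-empted while the other N-1 vServers spend their allocations.

def worst_case_wait_ms(tokens, n_vservers):
    """Worst-case pre-emption time for one vServer among n_vservers."""
    return tokens * (n_vservers - 1)

# Default PlanetLab-style setting: up to 100 tokens per vServer.
print(worst_case_wait_ms(100, 5))   # 400 ms worst-case scheduling delay

# Setting min == max allocation to a small value (2..16) trades extra
# context-switch overhead for much lower worst-case latency.
for t in (2, 4, 8, 16):
    print(t, "tokens ->", worst_case_wait_ms(t, 5), "ms")
```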
14
Enhancing GPE Performance
15
Overall System
16
Slice Configuration ①: The CP obtains slice configuration data using the standard PlanetLab mechanism of periodically polling the PLC. ②: Slices are assigned to one of the GPEs by the GNM. ③: A corresponding entry is made in a local copy, myPLC. ④: The LNM on each of the GPEs periodically polls myPLC to obtain new slice configurations.
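A hedged sketch of this four-step flow, with PLC and myPLC modeled as plain dicts. The least-loaded assignment policy is an assumption; the paper's GNM may choose differently.

```python
# Sketch of the slice-configuration flow (steps 1-4 above).

plc = {"sliceA": {"users": ["alice"]}}   # central PlanetLab database
my_plc = {}                              # local copy maintained via the CP
gpes = {"gpe0": [], "gpe1": []}

def gnm_assign(slice_name):
    # (2) GNM picks a GPE; least-loaded is an assumed stand-in policy.
    gpe = min(gpes, key=lambda g: len(gpes[g]))
    gpes[gpe].append(slice_name)
    return gpe

def cp_poll_plc():
    # (1) CP fetches slice configuration data from the PLC.
    for name, conf in plc.items():
        if name not in my_plc:
            gpe = gnm_assign(name)
            my_plc[name] = {**conf, "gpe": gpe}   # (3) record in myPLC

def lnm_poll(gpe):
    # (4) each GPE's LNM polls myPLC for slices assigned to it.
    return [s for s, c in my_plc.items() if c["gpe"] == gpe]

cp_poll_plc()
print(lnm_poll("gpe0"))   # ['sliceA']
```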
17
Port Assignment ①: The LRM sends a reservation request to the GRM. ②: If the requested port number is available, the GRM makes the appropriate assignment and notifies the LRM. ③: The GRM configures the Line Card so that the LC will forward traffic on that port to the right GPE.
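A small sketch of this handshake, assuming the GRM keeps a global port map and simply refuses conflicting reservations. The data structures and return-value convention are illustrative assumptions.

```python
# Sketch of the port-assignment handshake (steps 1-3 above).

port_map = {}   # port number -> (gpe, slice), held by the GRM

def lc_configure(port, gpe):
    # (3) program the Line Card so this port demuxes to the right GPE.
    print(f"LC: port {port} -> {gpe}")

def grm_reserve(port, gpe, slice_name):
    # (2) grant the port only if no other slice already holds it.
    if port in port_map:
        return False
    port_map[port] = (gpe, slice_name)
    lc_configure(port, gpe)
    return True

# (1) an LRM on gpe1 asks the GRM for port 5000 on behalf of sliceA.
print(grm_reserve(5000, "gpe1", "sliceA"))   # True: LC now forwards to gpe1
print(grm_reserve(5000, "gpe0", "sliceB"))   # False: already taken
```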
18
NPE Assignment ①: The LRM forwards the request to the GRM. ②: The GRM selects the most appropriate NPE to host the slice and returns its id to the LRM. ③: The LRM then interacts with that NPE's MP.
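Finally, a sketch of the NPE-assignment steps. The load metric the GRM uses to pick the "most appropriate" NPE is not specified on the slide, so a least-loaded count is an assumed stand-in.

```python
# Sketch of NPE assignment (steps 1-3 above).

npe_load = {"npe0": 3, "npe1": 1}   # fast paths currently hosted per NPE

def grm_select_npe():
    # (2) pick the least-loaded NPE; the real selection policy may differ.
    npe = min(npe_load, key=npe_load.get)
    npe_load[npe] += 1
    return npe

def lrm_install_fast_path(slice_name):
    npe = grm_select_npe()   # (1)+(2): request forwarded via the GRM
    # (3) the LRM talks to that NPE's MP to set up the slice's fast path.
    print(f"MP on {npe}: install fast path for {slice_name}")
    return npe

lrm_install_fast_path("sliceA")   # lands on npe1, the less-loaded NPE
```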
19
Evaluation Implemented two different applications: IPv4 forwarding and the Internet Indirection Infrastructure (III).
20
IPv4 - Fair Queueing Mechanism
22
IPv4 - Throughput
23
IPv4 - Latency
24
Internet Indirection Infrastructure Performed throughput and latency tests similar to those done for the IPv4 application. Achieved 30-40% higher results.
25
IPv4 & III - Throughput
26
III - GPE vs. NPE