By L. Peterson (Princeton), T. Anderson (UW), D. Culler and T. Roscoe (Intel, Berkeley). HotNets-I (Infrastructure panel), 2002. Presenter: Shobana Padmanabhan.


1 A blueprint for introducing disruptive technology into the Internet. By L. Peterson (Princeton), T. Anderson (UW), D. Culler and T. Roscoe (Intel, Berkeley). HotNets-I (Infrastructure panel), 2002. Presenter: Shobana Padmanabhan. Discussion leader: Michael Wilson. Mar 3, 2005, CS7702 Research Seminar.

2 Outline: Introduction, Architecture, PlanetLab, Conclusion

3 Introduction. Widely-distributed applications make their own forwarding decisions: network-embedded storage, peer-to-peer file sharing, content distribution networks, robust routing overlays, scalable object location, and scalable event propagation. Network elements (layer-7 switches and transparent caches) do application-specific processing. But the Internet is ossified: until recently, it was very hard to deploy new functionality inside the network. (Figures courtesy planet-lab.org.)

4 This paper proposes using overlay networks to achieve this.

5 Overlay network: a virtual network of nodes and logical links, built atop the existing network, to implement a new service. It provides an opportunity for innovation because no changes to the Internet are required. Eventually, the 'weight' of these overlays will cause a new architecture to emerge, much as the Internet itself (originally an overlay) caused the underlying telephony network to evolve. This paper speculates on what this new architecture will look like. (Figure courtesy planet-lab.org.)
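
The idea above can be made concrete with a minimal sketch (not PlanetLab code; node names and topology are hypothetical): an overlay is just a graph of logical links between chosen hosts, and the overlay routes over those links regardless of the underlying IP paths.

```python
from collections import deque

# Hypothetical overlay topology: logical links between chosen hosts,
# layered on top of the existing Internet.
overlay_links = {
    "princeton": ["uw", "berkeley"],
    "uw": ["princeton", "cambridge"],
    "berkeley": ["princeton", "cambridge"],
    "cambridge": ["uw", "berkeley"],
}

def overlay_route(src, dst):
    """Breadth-first search over *logical* links; each overlay hop may
    traverse many underlying Internet routers."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in overlay_links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

A service built this way can pick its own routes (e.g., around a congested Internet path) without any change to the routers underneath.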

6 Outline: Introduction, Architecture, PlanetLab, Conclusion

7 Goals. Short-term: support experimentation with new services (a testbed). Experiment at scale (1000s of sites) and under real-world conditions: diverse bandwidth, latency, and loss; wide-spread geographic coverage; potential for real workloads and users; low cost of entry. Medium-term: support continuous services that serve clients (a deployment platform), with seamless migration of an application from prototype to service through design iterations, so that it continues to evolve. Long-term: a microcosm for the next-generation Internet!

8 Architecture. Design principles: slice-ability; distributed control of resources; unbundled (overlay) management; application-centric interfaces.

9 Slice-ability. A slice is a horizontal cut of global resources across nodes: processing, memory, storage, etc. Each service runs in a slice, where a service is a set of programs delivering some functionality. Node slicing must be secure, use a resource-control mechanism, and be scalable. A slice is roughly a network of VMs. (Figure courtesy planet-lab.org.)
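
The slice abstraction can be sketched as follows (a hypothetical model, not the PlanetLab API): one VM per participating node, each holding a share of that node's processing, memory, and storage.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    """One slice's share of a single node's resources."""
    node: str
    cpu_share: float   # fraction of the node's CPU
    memory_mb: int
    storage_mb: int

@dataclass
class Slice:
    """A horizontal cut across nodes: a network of VMs for one service."""
    service: str
    vms: dict = field(default_factory=dict)  # node name -> VM

    def add_node(self, node, cpu_share, memory_mb, storage_mb):
        self.vms[node] = VM(node, cpu_share, memory_mb, storage_mb)

# Illustrative node names only.
s = Slice("codeen")
s.add_node("planetlab1.princeton.edu", 0.1, 256, 1024)
s.add_node("planetlab2.cs.washington.edu", 0.1, 256, 1024)
```

The point of the abstraction is that the service sees one logical environment spanning many machines, while each node only gives up a bounded share.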

10 Virtual Machine. A VM is the environment where a program implementing some aspect of the service runs. Each VM runs on a single node and uses some of the node's resources. A VM must make programs no harder to write, provide protection from other VMs, share resources fairly, and restrict traffic generation. Multiple VMs run on each node, with a VMM (Virtual Machine Monitor) arbitrating the node's resources.

11 Virtual Machine Monitor (VMM). A kernel-mode driver running in the host operating system. It has access to the physical processor and manages resources between the host OS and the VMs, preventing malicious or poorly designed applications running in a virtual server from requesting excessive hardware resources from the host OS. With virtualization there are now two interfaces: an API for typical services, and a protection interface used by the VMM. The VMM used here is Linux VServer.
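
The VMM's protection role can be illustrated with a toy arbiter (a hypothetical sketch, not Linux VServer's mechanism): a VM's resource request is granted only up to a per-VM limit and the node's remaining capacity, so one greedy service cannot exhaust the node.

```python
class NodeVMM:
    """Toy resource arbiter: caps each request at a per-VM limit and at
    whatever CPU remains unallocated on the node."""

    def __init__(self, total_cpu=1.0):
        self.total_cpu = total_cpu
        self.allocated = {}  # vm id -> cpu share granted so far

    def request_cpu(self, vm_id, share, limit=0.25):
        remaining = self.total_cpu - sum(self.allocated.values())
        granted = max(min(share, limit, remaining), 0.0)
        self.allocated[vm_id] = self.allocated.get(vm_id, 0.0) + granted
        return granted
```

A buggy VM asking for 90% of the CPU is quietly clipped to its limit; other VMs still get their fair share.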

12 A node.. Figure courtesy planet-lab.org

13 Across nodes (i.e., across the network):
Node manager (one per node; part of the VMM): when service managers present valid tickets, allocates resources, creates VMs, and returns a lease.
Resource monitor (one per node): tracks the node's available resources (via the VMM interface) and tells agents about them.
Agents (centralized): collect resource-monitor reports, advertise tickets, and issue tickets to resource brokers.
Resource broker (one per service): obtains tickets from agents on behalf of service managers.
Service manager (one per service): obtains tickets from the broker, redeems them with node managers to create VMs, and starts the service.
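
The ticket flow described above can be sketched end to end (hypothetical names and data shapes; the real protocol is richer): a monitor reports free capacity to an agent, the agent issues a ticket, and the node manager turns a valid ticket into a lease.

```python
class Agent:
    """Collects resource-monitor reports and issues tickets against them."""

    def __init__(self):
        self.available = {}  # node name -> free cpu share

    def report(self, node, free_cpu):          # called by a resource monitor
        self.available[node] = free_cpu

    def issue_ticket(self, node, cpu):         # called via a resource broker
        if self.available.get(node, 0) >= cpu:
            self.available[node] -= cpu
            return {"node": node, "cpu": cpu}  # the "ticket"
        return None                            # not enough capacity

class NodeManager:
    """Redeems a valid ticket: allocates resources and returns a lease."""

    def redeem(self, ticket):
        return {"lease": f"vm on {ticket['node']}", "cpu": ticket["cpu"]}

agent = Agent()
agent.report("nodeA", 0.5)                 # resource monitor's report
ticket = agent.issue_ticket("nodeA", 0.2)  # broker obtains a ticket
lease = NodeManager().redeem(ticket)       # service manager redeems it
```

Note the separation of concerns: the agent only promises capacity it has heard about, and only the node manager actually creates the VM.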

14–23 Obtaining a Slice (animated diagram sequence; courtesy Jason Waddle's presentation material): resource monitors report each node's availability to the agent; the agent issues tickets to the broker; the service manager obtains tickets from the broker and redeems them with each node manager to create the slice's VMs.

24 Architecture. Design principles: slice-ability; distributed control of resources; unbundled (overlay) management; application-centric interfaces.

25 Distributed control of resources. Because of the testbed's dual role, there are two types of users. Researchers are likely to dictate how services are deployed and which node properties they need. Node owners/clients are likely to restrict what services run on their nodes and how resources are allocated to them. Therefore, control is decentralized between the two: a central authority provides credentials to service developers, and each node independently grants or denies a request based on local policy.
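
The split described above can be sketched in a few lines (hypothetical policy fields and authority name): the central authority vouches for who the requester is, but each node's own policy decides whether the request runs there.

```python
# Hypothetical name for the central credential issuer.
TRUSTED_AUTHORITY = "planetlab-central"

def node_grants(request, local_policy):
    """Each node's independent decision, based on its owner's policy."""
    if request["credential"]["issuer"] != TRUSTED_AUTHORITY:
        return False                                  # unknown authority
    if request["service"] in local_policy["banned_services"]:
        return False                                  # owner's restriction
    return request["cpu"] <= local_policy["max_cpu_per_slice"]

# One node owner's (illustrative) local policy.
policy = {"banned_services": {"packet-flooder"}, "max_cpu_per_slice": 0.25}
ok = node_grants({"credential": {"issuer": "planetlab-central"},
                  "service": "codeen", "cpu": 0.1}, policy)
```

The same credentialed request can thus succeed on one node and be refused on another, which is exactly the decentralization the design calls for.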

26 Architecture. Design principles: slice-ability; distributed control of resources; unbundled (overlay) management; application-centric interfaces.

27 Unbundled (overlay) management. Management is split into independent sub-services, each running in its own slice, which: discover the set of nodes in the overlay and learn their capabilities; monitor the health and instrument the behavior of these nodes; establish a default topology; manage user accounts and credentials; keep the software running on each node up-to-date; and extract tracing and debugging information from a running node. Some sub-services are part of the core system (e.g., user accounts) and have a single, agreed-upon version. Others can have alternatives, with a default that is replaceable over time. Unbundling requires appropriate interfaces, e.g., hooks in the VMM interface to get the status of each node's resources. Sub-services may depend on each other, e.g., a resource-discovery service may depend on a node-monitoring service.
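
The core-versus-replaceable distinction can be sketched as a small registry (hypothetical sub-service and implementation names): core functions are pinned to a single version, while others start with a default that anyone can swap out later.

```python
# Core sub-services have a single, agreed-upon version and cannot be replaced.
CORE = {"user-accounts"}

# Each management function maps to its current implementation (names illustrative).
registry = {
    "user-accounts": "plc-accounts-v1",
    "node-monitor": "default-monitor",
    "resource-discovery": "default-discovery",
}

def replace_subservice(name, implementation):
    """Swap in an alternative implementation, unless the function is core."""
    if name in CORE:
        raise ValueError(f"{name} is core: single agreed-upon version only")
    registry[name] = implementation

# A researcher deploys a better monitor in their own slice.
replace_subservice("node-monitor", "experimental-monitor-v2")
```

This mirrors the unbundling goal: the platform defines the interface slots, not the winning implementations.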

28 Architecture. Design principles: slice-ability; distributed control of resources; unbundled (overlay) management; application-centric interfaces.

29 Application-centric interfaces. Promote application development by letting applications run continuously (a deployment platform). Problem: it is difficult to simultaneously create the testbed and use it for writing applications. Therefore, the API should remain largely unchanged while the underlying implementation changes; and if an alternative API emerges, new applications must be written to it, but the original should be maintained for legacy applications.

30 Outline: Introduction, Architecture, PlanetLab, Conclusion

31 PlanetLab: phases of evolution. 1. Seed phase: 100 centrally managed machines; pure testbed (no client workload). 2. Researchers as clients: scale the testbed to 1000 sites; continuously running services. 3. Attracting real clients: non-researchers as clients.

32 PlanetLab today. Services: Berkeley's OceanStore, RAID-like storage distributed over the Internet; Intel's Netbait, which detects and tracks worms globally; UW's ScriptRoute, an Internet measurement tool; Princeton's CoDeeN, an open content distribution network. (Courtesy planet-lab.org.)

33 Related work.
Internet2 (Abilene backbone): closed commercial routers, hence no new functionality in the middle of the network.
Emulab: not a deployment platform.
Grid (Globus): glues together a modest number of large computing assets with high-bandwidth pipes, whereas PlanetLab emphasizes scaling lower-bandwidth applications across a wider collection of nodes.
ABONE (from active networks): focuses on supporting extensibility of the forwarding function, whereas PlanetLab is more inclusive, i.e., applications throughout the network, including those involving a storage component.
XBONE: supports IP-in-IP tunneling, with a GUI for specific overlay configurations.
Alternative: package as a desktop application (e.g., Napster, KaZaA). Needs to be immediately and widely popular; difficult to modify once deployed unless there are compelling applications; not secure (KaZaA exposed all files on the local system).

34 Conclusion. An open, global network testbed for pioneering novel planetary-scale services (deployment). A model for introducing innovations (a service-oriented network architecture) into the Internet through overlays. Whether a single winner emerges and gets subsumed into the Internet, or services continue to define their own routing, remains a subject of speculation.

35 References. PlanetLab: An Overlay Testbed for Broad-Coverage Services, B. Chun et al., Jan 2003.

36 Backup slides

37 Overlay construction problems. Dynamic changes in group membership: members may join and leave dynamically, and members may die. Dynamic changes in network conditions and topology: delay between members may vary over time due to congestion and routing changes. Knowledge of network conditions is member-specific: each member must determine network conditions for itself.

38 Testbed’s mode of operation as deployment platform

