1 Inferring the Topology and Traffic Load of Parallel Programs in a VM environment Ashish Gupta Peter Dinda Department of Computer Science Northwestern University

2 Overview Motivation behind parallel programs in a VM environment Goal: To infer the communication behavior Offline implementation Evaluating with parallel benchmarks Online Monitoring in a VM environment Conclusions

3 Virtuoso: A VM-based abstraction for a Grid environment

8 Motivation A distributed computing environment based on virtual machines –Raw machines connected to the user’s network –Our focus: middleware support to hide the Grid complexity –Our goal here: efficient execution of parallel applications in such an environment

9 Parallel Application Behavior –Intelligent placement and virtual networking of parallel applications –VM encapsulation –Virtual networks with VNET

10 VNET abstraction: a set of VMs on the same Layer 2 network – a virtual Ethernet LAN

11 Goal of this project: infer the communication properties of parallel applications through low-level packet traffic monitoring and analysis –Topology –Bandwidth requirements –Others?

12 Goal of this project: an online topology inference framework for a VM environment (low-level traffic monitoring → application topology)

13 Approach: design an offline framework, evaluate it with parallel benchmarks, and, if successful, design an online framework for VMs

14 An offline topology inference framework Goal: A test-bed for traffic monitoring and evaluating topology inference methods

15 The offline method: synced parallel traffic monitoring → traffic filtering and matrix generation → matrix analysis and topology characterization

19 The offline method applied to PVMPOV: synced parallel traffic monitoring → traffic filtering and matrix generation → matrix analysis and topology characterization → inference

20 Infer.pl: synced parallel traffic monitoring → traffic filtering and matrix generation → matrix analysis and topology characterization
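
The following is a minimal Python sketch of the filtering and matrix-generation stages, not the actual Infer.pl implementation; the record format, host names, and function names are assumptions made for illustration.

```python
from collections import defaultdict

def build_traffic_matrix(records, hosts):
    """Accumulate per-pair byte counts from (timestamp, src, dst, nbytes) records.

    `records` is any iterable of already-parsed packet summaries; only traffic
    between the listed hosts is kept (the filtering stage of the pipeline).
    """
    hostset = set(hosts)
    matrix = defaultdict(int)          # (src, dst) -> total bytes
    for ts, src, dst, nbytes in records:
        if src in hostset and dst in hostset and src != dst:
            matrix[(src, dst)] += nbytes
    return matrix

def dense(matrix, hosts):
    """Render the sparse matrix as a row-per-source table for inspection."""
    return [[matrix.get((s, d), 0) for d in hosts] for s in hosts]

if __name__ == "__main__":
    # Hypothetical pre-parsed capture: (timestamp, src, dst, bytes)
    capture = [
        (0.01, "vm1", "vm2", 1500),
        (0.02, "vm2", "vm1", 400),
        (0.03, "vm1", "vm3", 1500),
    ]
    vms = ["vm1", "vm2", "vm3"]
    for row in dense(build_traffic_matrix(capture, vms), vms):
        print(row)
```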

21 Parallel benchmarks evaluation – Goal: to test the practicality of low-level, traffic-based inference

22 Parallel benchmarks used – Synthetic benchmarks (Patterns): –N-dimensional mesh-neighbor –N-dimensional toroid-neighbor –N-dimensional hypercubes –Tree reduction –All-to-All – A scheduling mechanism generates deadlock-free and efficient communication schedules (see the sketch below)
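
As an illustration of what two of the synthetic patterns look like as communication graphs, the sketch below (a hedged illustration, not the Patterns benchmark code) builds neighbor lists for an N-dimensional mesh and a hypercube; the message scheduling itself is omitted.

```python
from itertools import product

def mesh_neighbors(dims):
    """Neighbor pairs for an N-dimensional mesh with the given side lengths."""
    pairs = []
    for coord in product(*[range(d) for d in dims]):
        for axis, size in enumerate(dims):
            if coord[axis] + 1 < size:          # link to the next node along this axis
                nxt = list(coord)
                nxt[axis] += 1
                pairs.append((coord, tuple(nxt)))
    return pairs

def hypercube_neighbors(dim):
    """Neighbor pairs for a hypercube of 2**dim nodes (nodes differing in one bit)."""
    pairs = []
    for node in range(2 ** dim):
        for bit in range(dim):
            peer = node ^ (1 << bit)
            if peer > node:                     # avoid listing each edge twice
                pairs.append((node, peer))
    return pairs

if __name__ == "__main__":
    print(mesh_neighbors((2, 2)))    # 2x2 mesh: 4 links
    print(hypercube_neighbors(3))    # 3-D hypercube: 12 links
```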

23 Application benchmarks – NAS PVM benchmarks: popular benchmarks for parallel computing (5 benchmarks) – PVM-POV: distributed ray tracing – Many others possible… the inference is not PVM-specific: it applies to all communication, e.g. MPI, even non-parallel apps

24 Patterns application: 2-D Mesh, 3-D Toroid, 3-D Hypercube, Reduction Tree, All-to-All (diagrams)

25 PVM NAS benchmarks Parallel Integer Sort

26 Traffic Matrix for PVM IS benchmark

27 The placement of host 1 on the network is crucial (see the sketch below)
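
A simple way to see why host 1 dominates is to sum each host's sent and received bytes in the traffic matrix; the sketch below is an illustrative analysis step with made-up numbers, not the characterization method used in the framework.

```python
def host_totals(matrix, hosts):
    """Total bytes sent plus received per host from a dense traffic matrix."""
    totals = {}
    for i, h in enumerate(hosts):
        sent = sum(matrix[i])
        recv = sum(row[i] for row in matrix)
        totals[h] = sent + recv
    return totals

if __name__ == "__main__":
    # Made-up matrix in which host1 exchanges data with every other host.
    hosts = ["host1", "host2", "host3", "host4"]
    matrix = [
        [0, 900, 850, 880],
        [910, 0, 10, 12],
        [860, 11, 0, 9],
        [875, 13, 8, 0],
    ]
    totals = host_totals(matrix, hosts)
    hub = max(totals, key=totals.get)
    print(totals)
    print("dominant host:", hub)   # host1 -> its placement matters most
```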

28 An Online Topology Inference Framework: VTTIF – Goal: to automatically detect, monitor, and report the global traffic matrix for a set of VMs running on an overlay network

29 Overall design – VNET abstraction: a set of VMs on the same Layer 2 network – a virtual Ethernet LAN

30 A VNET virtual layer: a virtual LAN over the wide area, layered on top of the physical network (diagram)

31 Overall design – VNET abstraction: a set of VMs on the same Layer 2 network – Extend VNET to include the required features: monitoring at the Ethernet packet level – The challenge: no manual control; how to detect interesting parallel-program communication?

32 Detecting interesting phenomena
– Reactive mechanisms (like a burglar alarm): certain address properties, triggers based on traffic rate, etc.; rate-based monitoring with non-uniform discrete event sampling (sketched below)
– Proactive mechanisms (like video surveillance): support for queries by an external agent, e.g. "What is the traffic matrix for the last n seconds?"
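
A hedged sketch of the rate-based, reactive side: keep a sliding window of observed byte counts and flag traffic as interesting once the windowed rate crosses a threshold. The class name, window length, and threshold are placeholders, not VTTIF's actual parameters.

```python
import time
from collections import deque

class RateDetector:
    """Flags 'interesting' traffic when the rate over a sliding window exceeds a threshold."""

    def __init__(self, window_secs=5.0, threshold_bytes_per_sec=1_000_000):
        self.window_secs = window_secs
        self.threshold = threshold_bytes_per_sec
        self.samples = deque()            # (timestamp, nbytes)

    def observe(self, nbytes, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, nbytes))
        # Drop samples that have fallen out of the window.
        while self.samples and now - self.samples[0][0] > self.window_secs:
            self.samples.popleft()
        rate = sum(b for _, b in self.samples) / self.window_secs
        return rate >= self.threshold     # True -> start/continue detailed monitoring

if __name__ == "__main__":
    det = RateDetector(window_secs=1.0, threshold_bytes_per_sec=10_000)
    print(det.observe(4_000, now=0.0))    # False: below threshold
    print(det.observe(8_000, now=0.5))    # True: 12 kB observed in the last second
```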

33 [Architecture diagram: on each physical host, a VNET daemon contains the traffic analyzer (rate-based change detection, traffic matrix) alongside the VM; a query agent and a VM network scheduling agent talk to the daemon, which connects over the VNET overlay network to the other VNET daemons.]

34 Traffic matrix aggregation – Each VNET daemon keeps track of its local traffic matrix – This information must be aggregated at the proxy daemon for a global view – When do you push the traffic matrix? When the rate falls, the local daemons push it – The operation is associative, so reduction trees can be used for scalability (see the sketch below)
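
Because the aggregation is associative, it can be sketched as a pairwise merge that the proxy daemon, or any internal node of a reduction tree, applies to whatever local matrices arrive; this is an illustrative sketch, not the VNET code.

```python
from collections import defaultdict
from functools import reduce

def merge(a, b):
    """Associative merge of two sparse traffic matrices {(src, dst): bytes}."""
    out = defaultdict(int, a)
    for pair, nbytes in b.items():
        out[pair] += nbytes
    return dict(out)

def aggregate(local_matrices):
    """Fold all local matrices into one global view (a proxy or tree node would do this)."""
    return reduce(merge, local_matrices, {})

if __name__ == "__main__":
    # Hypothetical local views pushed by two VNET daemons.
    daemon_a = {("vm1", "vm2"): 5000, ("vm2", "vm1"): 700}
    daemon_b = {("vm1", "vm2"): 2500, ("vm3", "vm1"): 1200}
    print(aggregate([daemon_a, daemon_b]))
```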

35 Evaluation: 4 virtual machines over VNET running the NAS IS benchmark

36 Conclusions – It is possible to infer the topology with low-level traffic monitoring – A traffic inference framework for virtual machines – Ready to move on to future steps: adaptation for performance

37 Current work – Adding capabilities for dynamic adaptation to VNET – Spatial inference → network adaptation for improved performance – Preliminary results: up to 40% improvement in execution time – Looking into the benefits of dynamic adaptation

38 For more information http://virtuoso.cs.northwestern.edu VNET is available for download PLAB web site: plab.cs.northwestern.edu

