
Slide 1: Autonomic Live Adaptation of Virtual Computational Environments in a Multi-Domain Infrastructure
Paul Ruth, Junghwan Rhee, Dongyan Xu
Department of Computer Science and Center for Education and Research in Information Assurance and Security (CERIAS)
Rick Kennell, Sebastien Goasguen
Rosen Center for Advanced Computing
Purdue University, West Lafayette, Indiana, USA
IEEE International Conference on Autonomic Computing (ICAC'06)

Slide 2: Outline of Talk
- Motivations
- Overall architecture
- Design and implementation
- Real-world deployment in nanoHUB
- Related work
- Conclusion
- Demo

Slide 3: Motivations
- Formation of shared distributed cyberinfrastructure (CI)
  - Spanning multiple domains
  - Serving users and user communities with diverse computation needs
  - Exhibiting dynamic resource availability and workload
- Need for virtual distributed environments (VIOLINs), each with:
  - Customizability and legacy application compatibility
  - Administrative privileges
  - Isolation, security, and accountability
  - Autonomic adaptation capability
- A unique opportunity brought by virtualization (virtual machines and virtual networks)

Slide 4: Adaptive VIOLINs
[Figure: virtual clusters (VIOLINs) overlaid on physical clusters at Duke U., U. Florida, and nanoHUB, connected over the Internet]

Slide 5: Autonomic VIOLIN Adaptation
- Adaptation triggers:
  - Dynamic availability of infrastructural resources
  - Dynamic resource needs of the applications running inside
- Adaptation actions:
  - Resource re-allocation
  - Scale adjustment (adding/deleting virtual machines)
  - Re-location (migrating virtual machines)
- Adaptation goals:
  - Improving application performance
  - Increasing infrastructural resource utilization
  - Maintaining user/application transparency
  - Minimizing the attention required of infrastructure administrators
(A sketch of how the triggers map to the actions follows below.)
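The slide does not show the actual decision logic, so the following minimal Python sketch only illustrates how the triggers above could map onto the three adaptation actions; the names and thresholds are hypothetical, not values from the paper.

```python
# Hypothetical trigger-to-action mapping; not the actual VIOLIN code.
from dataclasses import dataclass

@dataclass
class VMStatus:
    name: str              # virtual machine name
    cpu_util: float        # measured CPU utilization in [0, 1]
    host_headroom: float   # spare CPU share on the current physical host

def choose_action(s: VMStatus) -> str:
    """Map the slide's triggers to one of its three adaptation actions."""
    if s.cpu_util > 0.90:              # application needs more resources
        if s.host_headroom > 0.20:
            return "re-allocate"       # grow the VM's local resource share
        return "re-locate"             # migrate the VM to a less loaded host
    if s.cpu_util < 0.30:              # resources sit idle
        return "scale-down"            # shrink the share or remove spare VMs
    return "no-op"

# Example: a saturated VM on a busy host should be migrated.
assert choose_action(VMStatus("vm1", 0.97, 0.05)) == "re-locate"
```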

Slide 6: Research Challenges
- Autonomic live adaptation mechanisms
  - VM resource monitoring and scaling
  - Application profiling and non-intrusive sensing of application needs
  - Live VIOLIN re-location across domains
- Adaptation policies
  - VIOLIN adaptation model
    - Infrastructure resource availability and topology
    - Application resource needs
    - Application configuration and topology
  - Optimal VIOLIN adaptation decision-making (see the cost/gain sketch below)
    - Goals (cost vs. gains)?
    - When to adapt?
    - How, and how much, to adapt?
    - Where to migrate?
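The slide leaves the cost-vs-gain question open. As one hedged illustration (not the paper's model), a migration is worthwhile only if the time to finish on the new host, including the migration overhead itself, beats the time to finish in place:

```python
# Hypothetical cost/gain test for the "when to adapt / where to migrate"
# questions above; the rates and costs would come from monitoring in practice.

def worth_migrating(remaining_s: float,      # est. seconds left if we stay
                    cur_rate: float,         # current progress per second
                    new_rate: float,         # expected rate after migrating
                    migration_cost_s: float  # est. migration overhead
                    ) -> bool:
    """True if finishing on the new host, including the move, is faster."""
    work_left = remaining_s * cur_rate
    time_if_we_move = migration_cost_s + work_left / new_rate
    return time_if_we_move < remaining_s

# 10 minutes left, migration doubles throughput and costs 30 s:
# 30 + 600/2 = 330 s < 600 s, so the move pays off.
assert worth_migrating(600, 1.0, 2.0, 30)
```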

Slide 7: Overall Architecture
[Figure: each physical host runs a VMM with Dom0 hosting a VIOLIN switch and a monitoring daemon alongside the guest VMs; a central Adaptation Manager connects to the monitoring daemons over the physical network and issues "scale up", "CPU update", and "migrate" commands]
(A sketch of the daemon-to-manager reporting path follows below.)
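A minimal sketch of the monitoring path implied by the figure, assuming a simple JSON-over-TCP report from each per-host daemon to the central manager; the message layout, hostname, and port are invented for illustration and are not the VIOLIN wire format.

```python
# Assumed reporting path: per-host monitoring daemon -> adaptation manager.
import json
import socket
import time

MANAGER = ("adaptation-manager.example.org", 9000)  # hypothetical endpoint

def report_once(host_id: str, vm_stats: dict) -> None:
    """Push one monitoring snapshot to the central adaptation manager."""
    msg = {"host": host_id, "time": time.time(), "vms": vm_stats}
    with socket.create_connection(MANAGER, timeout=5) as conn:
        conn.sendall(json.dumps(msg).encode("utf-8") + b"\n")

# e.g. report_once("host-03", {"violin2-vm1": {"cpu": 0.95, "mem_mb": 512}})
```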

Slide 8: VIOLIN Adaptation Policies
- Maintain a desirable resource utilization level
  - Reclaim resources if under-utilized over a period
  - Add resources if over-utilized over a period:
    - Scale up the local resource share
    - Migrate to other host(s), balancing host workload
    - Intra-domain migration first
    - Minimize migration
- Re-adjust resources according to application needs
(A sketch of this threshold-and-persistence policy follows below.)
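A hedged Python sketch of the policy above: act only once a condition has persisted "over a period", prefer growing the local share, then intra-domain migration, and only then a cross-domain move. The thresholds and hold time are assumptions, not values from the paper.

```python
# Assumed thresholds and hold time; the ordering mirrors the policy bullets.
import time

HIGH, LOW, HOLD_S = 0.90, 0.30, 120.0
_state = {}  # vm name -> (condition, time it was first observed)

def decide(vm, util, local_headroom, same_domain_hosts, other_domain_hosts):
    now = time.time()
    cond = "high" if util > HIGH else "low" if util < LOW else "ok"
    prev, since = _state.get(vm, (None, now))
    if cond != prev:                      # condition changed: restart timer
        _state[vm] = (cond, now)
        since = now
    if cond == "ok" or now - since < HOLD_S:
        return "keep"                     # not persistent enough to act on
    if cond == "low":
        return "reclaim"                  # release under-used resources
    if local_headroom > 0.20:
        return "scale-up-local"           # cheapest remedy first
    if same_domain_hosts:
        return f"migrate:{same_domain_hosts[0]}"   # intra-domain first
    if other_domain_hosts:
        return f"migrate:{other_domain_hosts[0]}"  # cross-domain last
    return "keep"
```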

Slide 9: Implementation and Deployment
- Extension to the non-adaptive VIOLIN system
  - Based on Xen 3.0 (with VM live migration capability)
  - Enabling live VIOLIN migration across domains
    - IP addresses of VMs
    - Root file systems of VMs
  - Leveraging Xen libraries for VM resource monitoring (xenstat, xentop)
  - Extending the VIOLIN switch for inter-VM bandwidth monitoring
- Deployment in nanoHUB
  - Online, on-demand simulation service for the nanotechnology community
  - Web interface for regular users
  - "My workspace" interface for advanced users
  - Local infrastructure: two clusters in two subnets
(A sketch of xentop-based monitoring and xm live migration follows below.)
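A hedged sketch of the Xen 3.0-era tooling the slide names: polling per-VM CPU usage through xentop's batch mode and relocating a running VM with xm's live migration. Column positions in xentop output vary across Xen versions, so the parsing below is an assumption.

```python
# Sketch only: shells out to the xentop/xm tools named on this slide.
import subprocess

def vm_cpu_percent() -> dict:
    """Map each Xen domain name to CPU(%) from one xentop batch iteration."""
    out = subprocess.run(["xentop", "-b", "-i", "1"],
                         capture_output=True, text=True, check=True).stdout
    stats = {}
    for line in out.splitlines():
        cols = line.split()
        if len(cols) > 3 and cols[0] != "NAME":   # skip the header row
            try:
                stats[cols[0]] = float(cols[3])   # assumed CPU(%) column
            except ValueError:
                pass                              # tolerate malformed rows
    return stats

def live_migrate(domain: str, dest_host: str) -> None:
    """Trigger Xen 3.0 live migration of a running domain to another host."""
    subprocess.run(["xm", "migrate", "--live", domain, dest_host], check=True)
```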

Slide 10: nanoHUB Deployment Overview
[Figure: nanoHUB's virtual infrastructure over the WAN; local virtual machines run under delegated trust, while the VIOLIN virtual cluster is migratable and isolated from the local infrastructure]

Slide 11: VIOLIN in nanoHUB
[Screenshot: a simulation job running; in the background, the VIOLIN that hosts it]

Slide 12: VIOLIN in nanoHUB
Autonomic property: users focus on simulation semantics and results, unaware of VIOLIN creation, setup, and adaptation.

Slide 13: Impact of Migration on Application Execution
[Figure: end-to-end execution time of NEMO3D with and without live VIOLIN migration]

Slides 14-18: VIOLIN Adaptation Scenario
[Figure sequence contrasting "without adaptation" and "with adaptation" across Domain 1 and Domain 2, in six steps:
1. Initially, VIOLINs 1, 2, and 3 are computing; VIOLIN 2 is about to finish.
2. After VIOLIN 2 finishes, before adaptation.
3. After adaptation.
4. After VIOLINs 4 and 5 are created.
5. After VIOLINs 1 and 3 finish.
6. All VIOLINs are finished.]

Slide 19: Limitations and Future Work
- Simple, heuristic adaptation policy
  - Apply machine learning and data mining techniques
- Centralized adaptation manager
  - Hierarchical or peer-to-peer adaptation managers
- Imprecise inference of application resource demand
  - Multi-dimensional, fine-grained resource demand profiling
- Campus-wide infrastructure
  - Evaluation and deployment in a wide-area infrastructure

Slide 20: Related Work
- VNET (Northwestern U.)
- Cluster-on-Demand (COD) (Duke U.)
- Virtual Workspaces on the Grid (Argonne National Lab)
- SoftUDC (HP Labs)
- WOW and IPOP (U. Florida)

Slide 21: Conclusions
- Autonomically adaptive virtual infrastructures (VIOLINs)
  - A new opportunity brought by virtualization technologies
  - Decoupled from the underlying shared infrastructure
  - Intelligent, first-class entities with user-transparent resource provisioning
- Key benefits
  - Application performance improvement
  - Infrastructure resource utilization
  - Management convenience (at both the virtual and physical levels)
"The Cray motto is: adapt the system to the application, not the application to the system." (Steve Scott, CTO, Cray Inc., on "adaptive supercomputing", March 2006)

Slide 22: Thank you.
For more information:
URL:
Google: "Purdue VIOLIN FRIENDS"