Virtual Cluster Development Environment (VCDE) by Dr. S. Thamarai Selvi, Professor & Head, Department of Information Technology, Madras Institute of Technology.

Virtual Cluster Development Environment (VCDE) By Dr. S. Thamarai Selvi, Professor & Head, Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, India.

Agenda: Virtualization, Xen machines, VCDE overview, VCDE architecture, VCDE component details, Conclusion.

Virtualization Virtualization is a framework or methodology for dividing the resources of a computer into multiple execution environments by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, and quality of service. These techniques allow you to run multiple operating systems simultaneously on a single machine.

Need for Virtualization Virtualization enables the introduction of new system capabilities without adding complexity to already complex hardware and software. Recent advances in virtualization can be applied to several research areas, such as virtualized resource management, and monitoring and recovering from attacks in Grid computing.

Paravirtualization It is a type of virtualization in which the guest OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The guest OS kernel must be modified, however, to accommodate this close interaction: a paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstraction layer that intercepts all calls to physical resources. Example: Xen.

Hypervisor - The hypervisor is the most basic virtualization component. It is the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and is installed directly on the hardware, or "bare metal"; it is, almost literally, inserted between the hardware and the OS. Virtual Machine - A virtual machine (VM) is a self-contained operating environment: software that works with, but is independent of, a host operating system. In other words, it is a platform-independent software implementation of a CPU that runs compiled code; such VMs must be written specifically for the OSes on which they run. Virtualization technologies are sometimes called dynamic virtual machine software.

Xen Machines Xen is an open-source virtual machine monitor, or hypervisor, for both 32- and 64-bit processor architectures. It runs as software directly on top of the bare-metal physical hardware and enables you to run several virtual guest operating systems on the same host computer at the same time. The virtual machines are executed securely and efficiently with near-native performance.

Hypervisor Control in Xen Domain0 is given greater access to the hardware and the hypervisor. It has: a guest OS running above the domain, and Hypervisor Manager software to manage elements within the other existing domains.

VCDE Objective Design and development of a Virtual Cluster Development Environment for Grids using Xen machines. The remote deployment of a Grid environment to execute any parallel or sequential application has been automated by VCDE.

VCDE Architecture [Architecture diagram: a Job Submission Portal, Virtual Cluster Service, and Virtual Information Service run in the Globus container; behind them, on the cluster head node, the Virtual Cluster Server hosts the Scheduler, Dispatcher, Match Maker, Virtual Cluster Manager, Executor Module, Job Status Service, Resource Aggregator, Network Manager with IP Pool, Security Server, Transfer Module, and the User, Job, and Host Pools. A Virtual Machine Creator runs on each of Compute Nodes 1 to n.]

The VCDE Components: Virtual Cluster Service, Virtual Information Service, Virtual Cluster Server, User Pool, Job Status Service, Job Pool, Network Service, Resource Aggregator, Dispatcher, Match Maker, Host Pool, Virtual Cluster Manager, Executor.

Globus Toolkit Services Two custom services are developed, deployed in the Globus Toolkit, and run as virtual workspaces; the underlying virtual machine is based on the Xen VMM. The Virtual Cluster Service is used to create virtual clusters. The Virtual Information Service is used to report the status of virtual resources.

Job Submission Client This component is responsible for collecting the user requirements for the creation of a virtual cluster. When the user accesses the Virtual Cluster Service, the user's identity is verified using the grid-map file. The Virtual Cluster Service then contacts the Virtual Cluster Development Environment (VCDE) to create and configure the virtual cluster. The inputs are the OS type, disk size, host name, etc.

Virtual Cluster Service (VCS) This is the core of the Virtual Cluster Development Environment. The Virtual Cluster Service contacts the VCDE for virtual machine creation. The Virtual Cluster Server maintains the Dispatcher, Network Manager, Resource Aggregator, User Manager, and Job Queue.

Resource Aggregator This module fetches all resource information from the physical cluster, and this information is periodically updated in the Host Pool. The Host Pool maintains, for the head and compute nodes: the logical volume partition, total and free logical volume disk space, total and free RAM size, kernel type, gateway, broadcast address, network address, netmask, etc.

Match Maker The match-making process compares the user's requirements with the available physical resources. The physical resource information, such as free disk space, free RAM size, kernel version, and operating system, is gathered from the Resource Aggregator via the Virtual Cluster Server module. In this module, the rank of each matched host is calculated from its free RAM size and disk space. The matches are returned as a hashtable of hostname to rank and sent to the UserServiceThread.
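The matching and ranking described above can be sketched as follows. This is a minimal illustration, not the VCDE source: the record field names (`os`, `disk_free_gb`, `ram_free_mb`) and the weighting of free RAM versus free disk are assumptions, since the slides only state that both contribute to the rank.

```python
def rank_hosts(requirements, host_pool):
    """Compare user requirements against the host pool and rank matches.

    host_pool maps hostname -> resource record; field names are hypothetical.
    """
    ranked = {}
    for host, info in host_pool.items():
        # A host matches only if OS, free disk, and free RAM all satisfy the request.
        if (info["os"] == requirements["os"]
                and info["disk_free_gb"] >= requirements["disk_gb"]
                and info["ram_free_mb"] >= requirements["ram_mb"]):
            # Rank grows with free RAM and free disk; the weighting is illustrative.
            ranked[host] = info["ram_free_mb"] + 100 * info["disk_free_gb"]
    return ranked
```

The returned mapping of hostname to rank plays the role of the hashtable the slide says is sent to the UserServiceThread.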

Host, User and Job Pools The Host Pool gets the list of hosts from the Resource Aggregator and identifies the free nodes on which virtual machines can be created. The User Pool maintains the list of authorized users, controls which users are allowed to create a virtual execution environment, and can limit the number of jobs for each user. The Job Pool holds user requests from the User Manager module as jobs in a queue; these requests are processed one by one by the Dispatcher module, whose output becomes the input to the Match Maker module.

Job Status Job Status service accesses the Job Pool through VCDE Server and displays the virtual cluster status and job status dynamically.

Dispatcher The Dispatcher is invoked when a job is submitted to the Virtual Cluster Server. It reads the job requirements and records them in the Job Pool under a job id. It then sends the job, with the user's requirements, to the match-making module, which compares them against the hosts available in the Host Pool. The matched hosts are identified and ranks for the matched resources are computed. The rank is based on free RAM size: the resource with the most free RAM gets the highest rank.

Scheduler The Scheduler module is invoked after the matched host list is generated by the match-making module. The resources are ordered by rank; the node with the highest rank becomes the head node of the virtual cluster. Virtual machines are created as compute nodes from the matched host list, and the list of these resources is sent to the Dispatcher module.
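The ordering step can be sketched as below; the function name and the shape of the input (a hostname-to-rank mapping) are illustrative assumptions, not taken from the VCDE code.

```python
def schedule(ranked):
    """Order matched hosts by descending rank; the highest-ranked host
    becomes the head node, the rest become compute nodes."""
    ordered = sorted(ranked, key=ranked.get, reverse=True)
    return ordered[0], ordered[1:]
```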

Virtual Cluster Manager The Virtual Cluster Manager (VCM) module is implemented using a round-robin algorithm. Based on the user's node count, the VCM creates the first node as the head node and the others as compute nodes. The VCM waits until it receives the message confirming successful creation of the virtual cluster and completion of the software installation.
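The round-robin placement can be sketched as follows, assuming the VCM is given a list of free physical hosts and the user's requested node count (both the function name and this calling convention are hypothetical):

```python
def assign_nodes(free_hosts, node_count):
    """Round-robin placement of virtual nodes onto free physical hosts:
    the first virtual node is the head node, the rest are compute nodes."""
    roles = ["head"] + ["compute"] * (node_count - 1)
    # Cycle through the physical hosts so virtual nodes are spread evenly.
    return [(role, free_hosts[i % len(free_hosts)])
            for i, role in enumerate(roles)]
```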

Virtual Machine Creator The two main functions of the Virtual Machine Creator are updating resource information and creating virtual machines. The resource information, viz. hostname, OS, architecture, kernel version, RAM disk, logical volume device, RAM size, broadcast address, netmask, network address, and gateway address, is updated in the Host Pool through the VCS. Based on the message received from the Virtual Cluster Manager, it starts to create the virtual machines: if the message is "Head Node", it creates the virtual cluster head node with the required software; if the message is "Client Node", it creates a compute node with minimal software.

Automation of GT (1) Installation of the prerequisite software for Globus has been automated. The required packages are: JDK, Ant, Tomcat web server, JUnit, and Torque.
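A first step in automating prerequisite installation is detecting which packages are already present. The sketch below checks for each package's command-line tool on the PATH; the mapping of package to command is an assumption of this sketch, not something the slides specify.

```python
import shutil

# Hypothetical mapping: prerequisite package -> command expected on the PATH.
PREREQ_COMMANDS = {
    "JDK": "java",
    "Ant": "ant",
    "Torque": "pbs_server",   # assumption: Torque's server daemon is installed
}

def missing_prereqs(commands):
    """Return the commands that are not found on the PATH."""
    return [cmd for cmd in commands if shutil.which(cmd) is None]
```

An installer script would then fetch and install only the packages whose commands are reported missing.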

Automation of GT (2) All the steps required for the Globus installation have also been automated: Globus package installation, and configuration of SimpleCA, RFT, and other services.

Security Server The Security Server performs mutual authentication dynamically. When the virtual cluster installation and configuration are completed, the security client running in the virtual cluster head node sends its certificate file, signing policy file, and the user's identity to the Security Server running in the VCS.

Executor Module After the formation of the virtual cluster, the Executor module is invoked. This module fetches the job information from the Job Pool, creates an RSL file, contacts the virtual cluster head node's Managed Job Factory Service, and submits the RSL job description. It then tracks the job status and updates it in the Job Pool.
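Building the job description can be sketched as below. GT4's WS-GRAM accepts an XML job description (often still called RSL); the element names here follow the common GT4 form, but treat the exact schema, and this helper function itself, as assumptions of the sketch rather than VCDE code.

```python
def build_rsl(executable, args, stdout_path):
    """Build a minimal GT4-style XML job description (RSL sketch)."""
    arg_elems = "".join(f"<argument>{a}</argument>" for a in args)
    return ("<job>"
            f"<executable>{executable}</executable>"
            f"{arg_elems}"
            f"<stdout>{stdout_path}</stdout>"
            "</job>")
```

The resulting string would be what the Executor submits to the head node's job factory service along with the staged input files.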

Transfer Module The job executable, input files, and RSL file are transferred by the transfer manager to the virtual cluster head node. After the job executes, the output file is transferred back to the head node of the physical cluster.

Virtual Information Service The resource information server fetches the Xen hypervisor status, hostname, operating system, privileged domain id and name, kernel version, RAM disk, logical volume space, total and free memory, RAM size details, network-related information, and the details of the created virtual cluster.

VCDE Architecture [Same architecture diagram, now showing the formed virtual cluster: a Virtual Head Node and Virtual Compute Nodes 1 to n running on the physical compute nodes.]

Virtual Cluster Formation [Diagram: the VCDE Server contacts the VM Creator on each physical node over Ethernet; four Fedora nodes (512 MB RAM, 10 GB disk each) form a virtual cluster of one head node and three slave nodes.]

Image Constituents
Head Node (2.0 GB file system image): Fedora Core 4, GT4.0.1 binary installer, JDK 1.6, Apache Ant 1.6, PostgreSQL 7.4, Torque, MPICH, JUnit, Jakarta Tomcat, FASTA application and nucleotide sequence database.
Compute Node (1.0 GB file system image): Fedora Core 4, MPICH, Torque 1.2.0.

Experimental Setup In our testbed, we created a physical cluster with four nodes: one head node and three compute nodes. The head node runs Scientific Linux 4.0 with the 2.6 kernel, Xen 3.0.2, GT4.0.5, and the VCDE Server and VCDE Scheduler. On each compute node, the VM Creator is the only module running.

Conclusion The VCDE (Virtual Cluster Development Environment) has been designed and developed to create virtual clusters automatically according to users' requirements. There is no human intervention in the process of creating the virtual execution environment. Complete automation currently takes considerable time, so improving the performance of VCDE is planned as future work.
