
1 Virtual Cluster Development Environment (VCDE) By Dr. S. Thamarai Selvi, Professor & Head, Department of Information Technology, Madras Institute of Technology, Anna University, Chennai, India. Email: stselvi@annauniv.edu

2 Agenda: Virtualization, Xen Machines, VCDE overview, VCDE Architecture, VCDE Component details, Conclusion

3 Virtualization Virtualization is a framework or methodology for dividing the resources of a computer into multiple execution environments by applying one or more concepts or technologies such as hardware and software partitioning, time-sharing, partial or complete machine simulation, emulation, and quality of service. It allows multiple operating systems to run simultaneously on a single machine.

4 Need for Virtualization Virtualization introduces new system capabilities without adding complexity to already complex hardware and software. Recent advances in virtualization technology apply to several research areas, such as virtualized resource management and monitoring and recovery from attacks in Grid computing.

5 Paravirtualization Paravirtualization is a type of virtualization in which the entire guest OS runs on top of the hypervisor and communicates with it directly, typically resulting in better performance. The kernels of both the guest OS and the hypervisor must be modified, however, to accommodate this close interaction. A paravirtualized Linux operating system, for example, is specifically optimized to run in a virtual environment. Full virtualization, in contrast, presents an abstraction layer that intercepts all calls to physical resources. Example: Xen.

6 Hypervisor - The hypervisor is the most basic virtualization component. It is the software that decouples the operating system and applications from their physical resources. A hypervisor has its own kernel and is installed directly on the hardware, or "bare metal"; it is, almost literally, inserted between the hardware and the OS. Virtual Machine - A virtual machine (VM) is a self-contained operating environment: software that works with, but is independent of, the host system, and that behaves toward its guest OS like a complete physical machine. Virtualization technologies of this kind are sometimes called dynamic virtual machine software.

7 Xen Machines Xen is an open-source virtual machine monitor, or hypervisor, for both 32- and 64-bit processor architectures. It runs directly on the bare-metal physical hardware and enables several virtual guest operating systems to run on the same host computer at the same time. The virtual machines execute securely and efficiently with near-native performance.

8 Hypervisor Control In Xen Domain0 is given greater access to the hardware and the hypervisor than other domains. It runs a guest OS above the hypervisor and hosts the hypervisor-manager software that manages elements within the other existing domains.

9 VCDE Objective Design and development of a Virtual Cluster Development Environment for Grids using Xen machines. VCDE automates the remote deployment of a Grid environment for executing any parallel or sequential application.

10 VCDE Architecture [Architecture diagram: the cluster head node runs the Virtual Cluster Server with its Scheduler, Dispatcher, Match Maker, Resource Aggregator, Virtual Cluster Manager, Executor Module, Network Manager, Transfer Module, and Security Server, backed by the User, Job, Host, and IP Pools; a Globus container hosts the Job Submission Portal, Virtual Cluster Service, Virtual Information Service, and Job Status Service; compute nodes 1 to n each run a Virtual Machine Creator.]

11 The VCDE Components: Virtual Cluster Service and Virtual Information Service, Virtual Cluster Server, User Pool, Job Status Service, Job Pool, Network Service, Resource Aggregator, Dispatcher, Match Maker, Host Pool, Virtual Cluster Manager, Executor.

12 Globus Toolkit Services Two custom services have been developed, deployed in the Globus Toolkit container, and run as virtual workspaces; the underlying virtual machines are based on the Xen VMM. The Virtual Cluster Service is used to create virtual clusters, and the Virtual Information Service is used to query the status of virtual resources.
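
The slides do not show the service interfaces. A minimal Java sketch of the operations the two services might expose (all names and signatures are assumptions; a real GT4 service is generated from WSDL and deployed in the Globus container):

```java
// Hypothetical interfaces for the two custom services; the WSDL/stub
// machinery of a real GT4 deployment is omitted.
interface VirtualClusterService {
    // Create a virtual cluster from the user's requirements (slide 13)
    // and return an identifier for tracking it.
    String createVirtualCluster(String osType, int diskSizeGB,
                                String hostName, int nodeCount);
}

interface VirtualInformationService {
    // Report the status of the virtual resources backing a cluster.
    String getClusterStatus(String clusterId);
}
```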

13 Job Submission Client This component is responsible for collecting the user's requirements for the creation of a virtual cluster. When the user accesses the Virtual Cluster Service, the user's identity is verified against the grid-map file. The Virtual Cluster Service then contacts the Virtual Cluster Development Environment (VCDE) to create and configure the virtual cluster. The inputs are the type of OS, disk size, host name, etc.
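
The grid-map check itself is standard Globus practice: a grid-mapfile maps certificate distinguished names (DNs) to local accounts, one quoted DN per line. A hedged sketch of such a lookup (class and method names are ours, not VCDE's):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of a grid-mapfile lookup. Each mapping line has the form:
//   "/O=Grid/OU=MIT/CN=Some User" someuser
class GridMapCheck {
    static boolean isAuthorized(Path gridMapFile, String dn) throws IOException {
        for (String line : Files.readAllLines(gridMapFile)) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            int close = line.lastIndexOf('"');
            if (line.startsWith("\"") && close > 0
                    && line.substring(1, close).equals(dn)) {
                return true;  // DN found: user may call the service
            }
        }
        return false;
    }
}
```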

14 Virtual Cluster Service (VCS) This is the core of the Virtual Cluster Development Environment. The Virtual Cluster Service contacts the VCDE for virtual machine creation. The Virtual Cluster Server maintains the Dispatcher, Network Manager, Resource Aggregator, User Manager, and Job Queue.

15 Resource Aggregator This module fetches all resource information from the physical cluster and periodically updates it in the Host Pool. For the head and compute nodes, the Host Pool maintains the logical volume partition, total and free logical volume disk space, total and free RAM size, kernel type, gateway, broadcast address, network address, netmask, etc.
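
As a rough illustration, the host-pool entry and the periodic refresh might look as follows in Java (field names are inferred from the slide's list; the 60-second interval is an assumption, the slide only says "periodically"):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a host-pool entry, one per physical node.
class HostInfo {
    String hostName, kernelType;
    long lvDiskTotalMB, lvDiskFreeMB;   // logical volume space
    long ramTotalMB, ramFreeMB;
    String gateway, broadcast, networkAddress, netmask;
}

// Sketch of the aggregator's periodic refresh of the host pool.
class ResourceAggregator {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    void start(Runnable probeAndUpdateHostPool) {
        // Re-probe the physical cluster and update the Host Pool
        // on a fixed schedule (interval assumed).
        timer.scheduleAtFixedRate(probeAndUpdateHostPool, 0, 60, TimeUnit.SECONDS);
    }
}
```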

16 Match Maker The match-making process compares the user's requirements with the available physical resources. The physical resource information, such as free disk space, free RAM size, kernel version, and operating system, is obtained from the Resource Aggregator via the Virtual Cluster Server module. In this module the rank of each matched host is calculated from its free RAM size and disk space. The results are returned as a Hashtable of hostname and rank and sent to the UserServiceThread.
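
A minimal sketch of this step, reusing the HostInfo sketch from slide 15 (slide 19 says the rank is based purely on free RAM, which is what this sketch does; the filtering thresholds are assumptions):

```java
import java.util.Hashtable;
import java.util.List;

// Sketch of match making: hosts that cannot satisfy the request are
// filtered out; survivors are ranked by free RAM (slides 16 and 19).
class MatchMaker {
    static Hashtable<String, Long> match(long needDiskMB, long needRamMB,
                                         List<HostInfo> hosts) {
        Hashtable<String, Long> ranked = new Hashtable<>();
        for (HostInfo h : hosts) {
            if (h.lvDiskFreeMB >= needDiskMB && h.ramFreeMB >= needRamMB) {
                // More free RAM means a higher rank.
                ranked.put(h.hostName, h.ramFreeMB);
            }
        }
        return ranked;  // hostname -> rank, handed to the UserServiceThread
    }
}
```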

17 Host, User and Job Pools The Host Pool gets the list of hosts from the Resource Aggregator and identifies the free nodes on which virtual machines can be created. The User Pool maintains the list of authorized users, controls which users are allowed to create virtual execution environments, and can limit the number of jobs for each user. The Job Pool queues user requests as jobs received from the user manager module and feeds them one by one to the Dispatcher as input for the Match Maker.
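
A sketch of the job pool as a FIFO queue with a per-user limit (the limit value of 5 is an assumed policy; the slides only say the number of jobs per user can be limited):

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal job record (fields assumed from the surrounding slides).
class Job { long id; String user; long needDiskMB, needRamMB; }

// Sketch of the job pool feeding the dispatcher one job at a time.
class JobPool {
    private static final int MAX_JOBS_PER_USER = 5;  // assumed policy
    private final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();
    private final Map<String, Integer> perUser = new ConcurrentHashMap<>();

    synchronized boolean submit(Job job) {
        int running = perUser.getOrDefault(job.user, 0);
        if (running >= MAX_JOBS_PER_USER) return false;  // limit reached
        perUser.put(job.user, running + 1);
        return queue.offer(job);
    }

    Job next() throws InterruptedException {
        return queue.take();  // requests are processed one by one
    }
}
```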

18 Job Status The Job Status service accesses the Job Pool through the VCDE Server and displays the virtual cluster status and job status dynamically.

19 Dispatcher The Dispatcher is invoked when a job is submitted to the Virtual Cluster Server. It records the job's requirements in the Job Pool under a job id, then sends the job to the match-making module to be compared against the resources available in the Host Pool. The matched hosts are identified and their ranks computed. The rank is based on free RAM size: the resource with the most free RAM gets the highest rank.
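
Tying the pieces together, the dispatcher's flow might look like this (reusing the Job, JobPool, HostInfo, and MatchMaker sketches above; method names are assumptions):

```java
import java.util.Hashtable;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the dispatcher: assign a fresh job id, record the job in
// the pool, then hand its requirements to match making.
class Dispatcher {
    private final AtomicLong nextId = new AtomicLong(1);
    private final JobPool jobPool = new JobPool();

    Hashtable<String, Long> dispatch(Job job, List<HostInfo> hostPool) {
        job.id = nextId.getAndIncrement();   // record under a fresh job id
        jobPool.submit(job);
        // Compare the job's requirements against the host pool.
        return MatchMaker.match(job.needDiskMB, job.needRamMB, hostPool);
    }
}
```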

20 Scheduler The Scheduler module is invoked after the match-making module generates the list of matching hosts. The resources are ordered by rank, and the node with the highest rank becomes the head node of the virtual cluster. Virtual machines are created as compute nodes from the matched host list, and the list of these resources is sent to the Dispatcher module.
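
The ordering step sketched in Java (reusing the Hashtable of hostname-to-rank returned by the match maker):

```java
import java.util.ArrayList;
import java.util.Hashtable;
import java.util.List;
import java.util.Map;

// Sketch of the scheduler's ordering: sort matched hosts by rank,
// highest first; the top host becomes the head-node placement.
class Scheduler {
    static List<String> order(Hashtable<String, Long> ranked) {
        List<Map.Entry<String, Long>> entries = new ArrayList<>(ranked.entrySet());
        entries.sort(Map.Entry.<String, Long>comparingByValue().reversed());
        List<String> hosts = new ArrayList<>();
        for (Map.Entry<String, Long> e : entries) hosts.add(e.getKey());
        return hosts;  // hosts.get(0) hosts the virtual cluster's head node
    }
}
```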

21 Virtual Cluster Manager The Virtual Cluster Manager (VCM) module is implemented using a round-robin algorithm. Based on the user's node count, the VCM creates the first node as the head node and the others as compute nodes. The VCM waits until it receives the message confirming successful creation of the virtual cluster and completion of the software installation.
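
A sketch of the round-robin placement (the slide does not spell out the algorithm; this is one plausible reading, with node 0 as the head node):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of round-robin placement of virtual nodes over the ordered
// host list produced by the scheduler.
class VirtualClusterManager {
    static List<String> placeNodes(List<String> orderedHosts, int nodeCount) {
        List<String> placement = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++) {
            // Cycle through the physical hosts for each virtual node.
            placement.add(orderedHosts.get(i % orderedHosts.size()));
        }
        return placement;  // placement.get(0) hosts the head node
    }
}
```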

22 Virtual Machine Creator The two main functions of the Virtual Machine Creator are updating resource information and creating virtual machines. The resource information, viz. hostname, OS, architecture, kernel version, RAM disk, logical volume device, RAM size, broadcast address, netmask, network address, and gateway address, is updated in the Host Pool through the VCS. Based on the message received from the Virtual Cluster Manager, it starts to create the virtual machines: if the message is "Head Node", it creates the virtual cluster head node with the required software; if the message is "Client Node", it creates a compute node with minimal software.
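
The creator's dispatch on the VCM's message, sketched in Java (the message strings come from the slide; the creation steps are placeholders for the actual Xen image preparation and boot):

```java
// Sketch of how the creator might react to the VCM's message.
class VirtualMachineCreator {
    void onMessage(String message) {
        if ("Head Node".equals(message)) {
            // Full stack: Fedora image plus GT4, Torque, MPICH, ...
            createVm(true);
        } else if ("Client Node".equals(message)) {
            // Minimal stack: Fedora image plus MPICH and Torque only.
            createVm(false);
        }
    }

    private void createVm(boolean headNode) {
        // Placeholder for the Xen image preparation and domain boot.
    }
}
```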

23 Automation of GT (1) Installation of the prerequisite software for Globus has been automated. The required packages are the JDK, Ant, the Tomcat web server, JUnit, and Torque. A sketch of driving these steps appears after the next slide.

24 Automation of GT (2) All the steps required for the Globus installation have also been automated: Globus package installation, and configuration of SimpleCA, RFT, and other services.
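
One plausible way to drive such automation from Java (the script names are hypothetical stand-ins; the slides do not show the actual commands):

```java
import java.io.IOException;

// Sketch of scripted installation: each step is an external command
// that must succeed before the next one runs.
class GtInstaller {
    static void run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new IOException("install step failed: " + String.join(" ", cmd));
        }
    }

    public static void main(String[] args) throws Exception {
        run("sh", "install-prerequisites.sh");   // hypothetical: JDK, Ant, Tomcat, JUnit, Torque
        run("sh", "install-gt4.sh");             // hypothetical: Globus package installation
        run("sh", "configure-simpleca-rft.sh");  // hypothetical: SimpleCA, RFT, other services
    }
}
```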

25 Security Server The Security Server performs mutual authentication dynamically. When the virtual cluster installation and configuration is complete, the security client running on the virtual cluster head node sends the certificate file, the signing policy file, and the user's identity to the Security Server running in the VCS.
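
A hedged sketch of the security client's upload (host, port, and wire framing are assumptions; a real deployment would use an authenticated channel rather than a plain socket):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: ship the certificate, signing policy file, and user's
// identity from the virtual head node to the security server.
class SecurityClient {
    static void send(String serverHost, int port, String userDn,
                     Path cert, Path signingPolicy) throws IOException {
        try (Socket s = new Socket(serverHost, port);
             OutputStream out = s.getOutputStream()) {
            out.write((userDn + "\n").getBytes(StandardCharsets.UTF_8));
            out.write(Files.readAllBytes(cert));          // certificate file
            out.write(Files.readAllBytes(signingPolicy)); // signing policy file
        }
    }
}
```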

26 Executor Module After the virtual clusters are formed, the Executor module is invoked. It fetches the job information from the Job Pool, creates an RSL file, contacts the virtual cluster head node's Managed Job Factory Service, and submits the RSL job description. It then retrieves the job status and updates it in the Job Pool.
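
A sketch of assembling the job description: in GT4, the "RSL" is an XML document whose elements include <executable> and <stdout>. The actual submission to the ManagedJobFactoryService is omitted:

```java
// Sketch of building a minimal GT4 WS-GRAM job description.
class RslBuilder {
    static String buildRsl(String executable, String stdoutPath) {
        return "<job>\n"
             + "  <executable>" + executable + "</executable>\n"
             + "  <stdout>" + stdoutPath + "</stdout>\n"
             + "</job>\n";
    }
}
```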

27 Transfer Module The job executable, input files, and RSL file are transferred by the transfer manager to the virtual cluster head node. After the job executes, the output file is transferred back to the head node of the physical cluster.

28 Virtual Information Service The resource information server fetches the Xen hypervisor status, hostname, operating system, privileged domain id and name, kernel version, RAM disk, logical volume space, total and free memory, RAM size details, network-related information, and the details of the created virtual cluster.

29 VCDE Architecture [Architecture diagram, as on slide 10, extended with the created virtual cluster: a virtual head node and virtual compute nodes 1 to n running on the physical compute nodes alongside their Virtual Machine Creators.]

30 Virtual Cluster Formation [Diagram: the VCDE server drives the VM Creator over Ethernet to form a virtual cluster of one head node and three slave nodes, each a Fedora 4 node with 512 MB RAM and 10 GB disk.]

31 Image Constituents
Head Node (2.0 GB file system image): Fedora Core 4, GT 4.0.1 binary installer, JDK 1.6, Apache Ant 1.6, PostgreSQL 7.4, Torque 1.2.0, MPICH 1.2.6, JUnit 3.8.1, Jakarta Tomcat 5.0.27, FASTA application and nucleotide sequence database.
Compute Node (1.0 GB file system image): Fedora Core 4, MPICH 1.2.6, Torque 1.2.0.

32 Experimental Setup In our testbed we created a physical cluster with four nodes: one head node and three compute nodes. The head node runs Scientific Linux 4.0 with a 2.6 kernel, Xen 3.0.2, GT 4.0.5, the VCDE Server, and the VCDE Scheduler. On the compute nodes, the VM Creator is the only module running.

33 Conclusion VCDE (the Virtual Cluster Development Environment) has been designed and developed to create virtual clusters automatically according to users' requirements. No human intervention is needed to create the virtual execution environment. Complete automation currently takes considerable time, so improving VCDE's performance is planned for the near future.


