1
Cloud Computing and Virtualization in ASTI
Nena Carina P. Española (nena@asti.dost.gov.ph)
SEAIP 2011 // November 28, 2011 // NCHC, Taichung, Taiwan
2
Outline
- Background
  - About ASTI
  - The Philippine e-Science Grid
  - ASTI's HPC Facility
- Virtualization in ASTI
  - Before Virtualization
  - ASTI's Private Cloud
  - Benchmarks
  - Why we love it
  - Next Steps
3
About ASTI
- ICT and microelectronics R&D agency under the Department of Science and Technology
- Situated inside the Univ. of the Philippines Diliman campus
- Manages and operates PREGINET
- Organization: Office of the Director; Research and Development Division; Solutions and Services Division; Special Projects Division; Knowledge Management Division
4
Focus Technology Areas
- Internet, Network, and Wireless Technologies
- Open Source and Low-cost Computing
- Grid Computing
- Applications and Digital Content Development
- Sensor and Warning Systems Development
5
The Philippine e-Science Grid
Objectives:
- Establish a national grid infrastructure in the country that will enable collaborative research among local educational and research institutions
- Provide seamless access to high-performance computing resources and applications for the life and physical sciences
6
Philippine e-Science Grid Core (diagram)
- ASTI – Advanced Science and Technology Institute
- UP-CSRC – Computer Science Research Center, Univ. of the Philippines
- AdMU-SOSE – School of Science and Engineering, Ateneo de Manila University
- Linked via PREGINET to international research networks
7
Collaborations
- UP Marine Science Institute – fish larval dispersal model for the Bohol Sea
- UP National Institute of Physics – undergraduate and post-graduate students
- UP Los Baños Biotech – data warehousing for drug discovery
- Manila Observatory
8
Collaborations
- International Rice Research Institute
- National Institute of Geological Sciences
- Energy Development Corporation – first paying client; runs iTOUGH2 for reservoir modelling
9
International Linkages
- Contributing member: EUAsiaGrid, PanDA Grid, EGI
- Institutional member: Pacific Rim Applications and Grid Middleware Assembly (PRAGMA)
10
ASTI's HPC Facility
Computing:
- 51 compute nodes (2 x 2.0 GHz quad-core Intel Xeon each), 408 cores in total
- 500 GB of disk space and 16 GB of RAM per node
- 8 FPGA-based hardware accelerators
Storage:
- 4 TB for DNA and protein sequences
- 4 TB software mirror
- 4 TB image repository
11
Pre-Virtualization
Different computing clusters for different purposes:
- Banyuhay – the bioinformatics cluster
- Unos – the meteorology cluster
- Buhawi – the general-purpose cluster
- Dalubhasaan – the cluster sandbox
- Liknayan – the EUAsiaGrid and EGI collaboration cluster (EGI-certified production cluster)
12
Pre-Virtualization
- Under- and over-utilization: some clusters were very seldom used while others were overutilized
- Disk storage not prioritized: some users complained of inadequate disk space
- No usage policy in place: Torque PBS was present on the clusters but was not being used
13
Virtualization in ASTI
ASTI HPC overhaul:
- Existing physical clusters are being decommissioned one by one to form one big physical cluster (not yet complete)
- All freed-up nodes are reinstalled with CentOS 5.6 with Xen from a FreeBSD PXE server
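The PXE reinstall boils down to serving the freed-up nodes a boot menu entry that points at the CentOS 5.6 installer kernel and a kickstart file. The snippet below is only a rough sketch of generating such a PXELINUX entry (written in Python since the deck contains no scripts of its own); the TFTP root, image paths, and kickstart URL are all hypothetical and would differ on the actual FreeBSD PXE server.

    # Sketch: write a PXELINUX boot entry for an automated CentOS 5.6 + Xen
    # install. All paths and the kickstart URL are placeholders.
    from pathlib import Path

    TFTP_ROOT = Path("/tftpboot")                          # assumed TFTP root
    KS_URL = "http://installserver/ks/centos56-xen.cfg"    # assumed kickstart location

    entry = """DEFAULT centos56-xen
    PROMPT 0
    TIMEOUT 50

    LABEL centos56-xen
      KERNEL centos5.6/vmlinuz
      APPEND initrd=centos5.6/initrd.img ks={ks} ksdevice=eth0
    """.format(ks=KS_URL)

    cfg_dir = TFTP_ROOT / "pxelinux.cfg"
    cfg_dir.mkdir(parents=True, exist_ok=True)
    (cfg_dir / "default").write_text(entry)
    print("Wrote", cfg_dir / "default")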
14
ASTI's Private Cloud
- OpenNebula 3.0 cloud toolkit with the Xen driver
- Hostname: one.pscigrid.gov.ph
- 17 hosts in total, and still growing
- 4 TB image repository
15
OpenNebula 3.0
- An open-source cloud computing toolkit for managing heterogeneous distributed data center infrastructures
- Supports several hypervisors: Xen, KVM, and VMware
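Day to day, the cloud is driven through OpenNebula's command-line tools on the front-end. As a rough illustration (not taken from the slides), the sketch below shells out to `onehost list` and `onevm list` to check host and VM status; it assumes the CLI tools are on the PATH and the ONE_AUTH credentials are already configured.

    # Rough sketch: query the OpenNebula front-end for host and VM status
    # via the standard CLI tools. Assumes `onehost` and `onevm` are installed
    # and ONE_AUTH / ONE_XMLRPC are set up for the calling user.
    import subprocess

    def run(cmd):
        """Run an OpenNebula CLI command and return its stdout."""
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    if __name__ == "__main__":
        print("=== Hosts ===")
        print(run(["onehost", "list"]))
        print("=== Virtual machines ===")
        print(run(["onevm", "list"]))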
16
Cloud Operations Center
17
Running VMs
18
Image Repository
19
ASTI's Private Cloud
Virtual clusters:
- Liknayan
- Alien (PanDA Grid)
- EDC
- Multipurpose
PRAGMA VMs:
- Geobloss
- OpenMotif
Other VMs:
- Geoserver
- gLite services: Storage Element, APEL node, UI
20
ASTI's Private Cloud
Prepared cluster images ship with the following pre-installed software:
- Torque/Maui
- Gold Allocation Manager
- Ganglia monitoring
- Tentakel and autofs
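Because Torque/Maui comes pre-installed in the images, a freshly instantiated virtual cluster can accept batch jobs right away. The sketch below is an illustration rather than anything from the slides: it writes a small PBS script and submits it with `qsub`; the resource request (2 nodes x 4 cores) and walltime are just examples.

    # Illustration: submit a small test job to the Torque/Maui scheduler that
    # ships in the prepared cluster images. Resource request is an example.
    import subprocess
    import tempfile

    PBS_SCRIPT = """#!/bin/bash
    #PBS -N hello_cloud
    #PBS -l nodes=2:ppn=4
    #PBS -l walltime=00:05:00
    #PBS -j oe

    cd $PBS_O_WORKDIR
    echo "Running on:" $(cat $PBS_NODEFILE | sort | uniq)
    """

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(PBS_SCRIPT)
        script_path = f.name

    # qsub prints the job id (e.g. 123.frontend) on success
    job_id = subprocess.run(["qsub", script_path],
                            check=True, capture_output=True, text=True).stdout.strip()
    print("Submitted job:", job_id)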
21
NAS Parallel Benchmark
- A set of benchmarks targeting performance evaluation of highly parallel supercomputers
- Developed and maintained by the NASA Advanced Supercomputing (NAS) Division
Test setups:
- Physical cluster: 4 nodes, 8 cores per node, 8 GB RAM per node, Gigabit interconnect
- Virtual cluster: 8 nodes, 4 cores per node, 8 GB RAM per node, Gigabit interconnect
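Both setups expose the same total core count (32), so the comparison is simply running the MPI build of an NPB kernel on each cluster and reading off the reported Mop/s. The sketch below shows one way to script such a run; the binary name (`ft.C.32`), the machinefile path, and the mpirun flags are assumptions about the local MPI installation, not details given in the slides.

    # Sketch of one benchmark run: launch an NPB-MPI kernel (here FT, class C,
    # 32 ranks) over a machinefile and pull the "Mop/s total" line out of its
    # report. Binary name, machinefile and mpirun flags are assumptions.
    import re
    import subprocess

    def run_npb(binary="bin/ft.C.32", machinefile="hosts", nprocs=32):
        out = subprocess.run(
            ["mpirun", "-np", str(nprocs), "-machinefile", machinefile, binary],
            check=True, capture_output=True, text=True).stdout
        match = re.search(r"Mop/s total\s*=\s*([\d.]+)", out)
        return float(match.group(1)) if match else None

    if __name__ == "__main__":
        print("FT class C, 32 ranks:", run_npb(), "Mop/s")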
22
NAS Parallel Benchmark
23
NPB-MPI Benchmark Result
24
Why we love it
- Ease of deployment:
  - Generic OS images
  - Prepared generic cluster front-end with Torque PBS, Maui, and Tentakel
  - Prepared generic worker node
- Better resource utilization: addresses the issue of over/under-utilization of resources
- Can easily add more disk space for VMs
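Deployment from the generic images comes down to writing a short OpenNebula VM template and handing it to `onevm create`. The sketch below illustrates that flow under assumed names: the image name, network name, and sizing are placeholders, and the exact template attributes should be checked against the OpenNebula 3.0 documentation.

    # Illustration of deploying a worker node from a generic image: write an
    # OpenNebula VM template and create the VM with `onevm create`.
    # Image name, network name and sizing are placeholders.
    import subprocess
    import tempfile

    TEMPLATE = """NAME   = "worker-node-01"
    CPU    = 4
    VCPU   = 4
    MEMORY = 8192

    DISK = [ IMAGE = "generic-worker-node" ]
    NIC  = [ NETWORK = "cluster-net" ]
    """

    with tempfile.NamedTemporaryFile("w", suffix=".one", delete=False) as f:
        f.write(TEMPLATE)
        template_path = f.name

    print(subprocess.run(["onevm", "create", template_path],
                         check=True, capture_output=True, text=True).stdout)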
25
Why we love it
- Essentially hardware-agnostic, so migrating a VM from one host to another is very easy
- Very flexible: CPU and RAM configurations of VMs can be changed easily
- The image repository is mirrored for redundancy
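Because the VMs are not tied to particular hardware, moving one is a single CLI call on the front-end. A minimal sketch, assuming OpenNebula's `onevm livemigrate` (falling back to a cold `onevm migrate`) and purely illustrative VM and host IDs:

    # Minimal sketch: move a VM to another host with OpenNebula's CLI.
    # Tries a live migration first and falls back to a cold migration.
    # VM and host IDs are purely illustrative.
    import subprocess

    def migrate(vm_id, host_id, live=True):
        cmd = ["onevm", "livemigrate" if live else "migrate", str(vm_id), str(host_id)]
        try:
            subprocess.run(cmd, check=True)
        except subprocess.CalledProcessError:
            if live:
                # live migration failed (e.g. no shared storage); retry cold
                migrate(vm_id, host_id, live=False)
            else:
                raise

    migrate(vm_id=42, host_id=3)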
26
Next Steps
- Implement a fair-share usage policy:
  - Use Torque PBS to queue jobs
  - Quotas for directories
  - Orientation for new users
  - Gold Allocation Manager for accounting and billing
- Job submission portal: a web-based, user-friendly portal for users
- Additional storage and computing power