1 FermiCloud Project Overview and Progress
Keith Chadwick, Grid & Cloud Computing Department Head, Fermilab
Work supported by the U.S. Department of Energy under contract No. DE-AC02-07CH11359

2 Outline
Experience with FermiGrid
Drivers for FermiCloud
FermiCloud Project Program of Work
Requirements & Technology Evaluation
Security
Current FermiCloud Capabilities
Current FermiCloud Stakeholders
Effort
Summary and Acknowledgements

3 On November 10, 2004, Vicky White (then Fermilab CD Head) wrote the following:
In order to better serve the entire program of the laboratory the Computing Division will place all of its production resources in a Grid infrastructure called FermiGrid. This strategy will continue to allow the large experiments who currently have dedicated resources to have first priority usage of certain resources that are purchased on their behalf. It will allow access to these dedicated resources, as well as other shared Farm and Analysis resources, for opportunistic use by various Virtual Organizations (VOs) that participate in FermiGrid (i.e. all of our lab programs) and by certain VOs that use the Open Science Grid. The strategy will allow us:
-to optimize use of resources at Fermilab
-to make a coherent way of putting Fermilab on the Open Science Grid
-to save some effort and resources by implementing certain shared services and approaches
-to work together more coherently to move all of our applications and services to run on the Grid
-to better handle a transition from Run II to LHC (and eventually to BTeV) in a time of shrinking budgets and possibly shrinking resources for Run II worldwide
-to fully support Open Science Grid and the LHC Computing Grid and gain positive benefit from this emerging infrastructure in the US and Europe

4 FermiGrid – A “Quick” History
In 2004, the Fermilab Computing Division undertook the strategy of placing all of its production resources in a Grid “meta-facility” infrastructure called FermiGrid (the Fermilab Campus Grid).
In April 2005, the first “core” FermiGrid services were deployed.
In 2007, the FermiGrid-HA project was commissioned with the goal of delivering 99.999% core service availability (not including building or network outages).
During 2008, all of the FermiGrid services were redeployed in Xen virtual machines.

5 FermiGrid-HA2 Experience
In 2009, based on operational experience and plans for redevelopment of the FCC-1 computer room, the FermiGrid-HA2 project was established to split the set of FermiGrid services across computer rooms in two separate buildings (FCC-2 and GCC-B).
This project was completed on 7-Jun-2011 (and was tested by a building failure less than two hours later). FermiGrid-HA2 worked exactly as designed.
Our operational experience with FermiGrid-HA and FermiGrid-HA2 has shown the benefits of virtualization and service redundancy:
-Benefits to the user community – increased service reliability and uptime.
-Benefits to the service maintainers – flexible scheduling of maintenance and upgrade activities.

6 What Environment Do We Operate In?
Risk | Mitigation
Service Failure | Redundant copies of the service (with service monitoring).
System Failure | Redundant copies of the service (on separate systems, with service monitoring).
Network Failure | Redundant copies of the service (on separate systems with independent network connections, and service monitoring).
Building Failure | Redundant copies of the service (on separate systems with independent network connections in separate buildings, and service monitoring).
FermiGrid-HA – Deployed in 2007
FermiGrid-HA2 – Deployed in 2011

7 FermiGrid Service Availability (measured over the past year)
Service | Raw Availability | HA Configuration | Measured HA Availability | Minutes of Downtime
VOMS – VO Management Service | 99.740% | Active-Active | 99.883% | 600
GUMS – Grid User Mapping Service | 99.863% | Active-Active | 100.000% | 0
SAZ – Site AuthoriZation Service | 99.864% | Active-Active | 100.000% | 0
Squid – Web Cache | 99.817% | Active-Active | 99.988% | 60
MyProxy – Grid Proxy Service | 99.751% | Active-Standby | 99.874% | 600
ReSS – Resource Selection Service | 99.915% | Active-Active | 99.988% | 60
Gratia – Fermilab and OSG Accounting | 99.662% | Active-Standby | 99.964% | 180
MySQL Database | 99.937% | Active-Active | 100.000% | 0
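For reference, the "Minutes of Downtime" column follows directly from the measured HA availability over a one-year window (525,600 minutes). A minimal sketch of that arithmetic, assuming the table rounds the result to the nearest hour of downtime:

```python
# Convert a measured yearly availability into minutes of downtime.
# Assumes a 365-day measurement window; rounding to whole hours is an
# assumption about how the table values were reported.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_percent: float) -> float:
    """Raw downtime in minutes implied by an availability percentage."""
    return (1.0 - availability_percent / 100.0) * MINUTES_PER_YEAR

for service, availability in [("VOMS", 99.883), ("Squid", 99.988), ("Gratia", 99.964)]:
    raw = downtime_minutes(availability)
    rounded = round(raw / 60) * 60  # nearest hour, matching the table
    print(f"{service}: {raw:.0f} min raw, ~{rounded} min reported")
```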

8 Experience with FermiGrid = Drivers for FermiCloud
Access to pools of resources using common interfaces:
-Monitoring, quotas, allocations, accounting, etc.
Opportunistic access:
-Users can use the common interfaces to “burst” to additional resources to meet their needs.
Efficient operations:
-Deploy common services centrally.
High availability services:
-Flexible and resilient operations.

9 Additional Drivers for FermiCloud
The existing development and integration facilities (AKA the FAPL cluster) were:
-Technically obsolescent, and unable to be used effectively to test and deploy the current generations of Grid middleware,
-Running on hardware that was over 8 years old and falling apart.
The needs of the developers and service administrators in the Grid and Cloud Computing Department for reliable and “at scale” development and integration facilities were growing.
Operational experience with FermiGrid had demonstrated that virtualization could be used to deliver production class services.

10 Additional Developments 2004-2012
Large multi-core servers have evolved from 2 to 64 cores per box:
-A single “rogue” user/application can impact 63 other users/applications,
-Virtualization can provide mechanisms to securely isolate users/applications.
Typical “bare metal” hardware has significantly more performance than is usually needed for a single-purpose server:
-Virtualization can provide mechanisms to harvest/utilize the remaining cycles.
Complicated software stacks are difficult to distribute on the Grid:
-Distribution of preconfigured virtual machines together with GlideinWMS can aid in addressing this problem.
Large demand for transient development/testing/integration work:
-Virtual machines are ideal for this work.
Science is increasingly turning to complex, multiphase workflows:
-Virtualization coupled with cloud can provide the ability to flexibly reconfigure hardware “on demand” to meet the changing needs of science.

11 FermiCloud – Hardware Acquisition
In the FY2009 budget input cycle (Jun-Aug 2008), based on the experience with the “static” virtualization in FermiGrid, the (then) Grid Facilities Department proposed that hardware to form a “FermiCloud” infrastructure be purchased. This was not funded for FY2009.
Following an effort to educate the potential stakeholder communities during FY2009, the hardware request was repeated in the FY2010 budget input cycle (Jun-Aug 2009), and this time it was funded.
The hardware specifications were developed in conjunction with the GPCF acquisition, and the hardware was delivered in May 2010.

12 FermiCloud – Initial Project Specifications
Once funded, the FermiCloud Project was established with the goal of developing and establishing Scientific Cloud capabilities for the Fermilab Scientific Program:
-Building on the very successful FermiGrid program that supports the full Fermilab user community and makes significant contributions as members of the Open Science Grid Consortium,
-Reusing the High Availability, AuthZ/AuthN, and virtualization experience from the Grid.
In a (very) broad brush, the mission of the FermiCloud project is:
-To deploy a production quality Infrastructure as a Service (IaaS) Cloud Computing capability in support of the Fermilab Scientific Program,
-To support additional IaaS, PaaS and SaaS Cloud Computing capabilities based on the FermiCloud infrastructure at Fermilab.
The FermiCloud project is a program of work that is split over several overlapping phases. Each phase builds on the capabilities delivered as part of the previous phases.

13 Overlapping Phases
Phase 1: “Build and Deploy the Infrastructure”
Phase 2: “Deploy Management Services, Extend the Infrastructure and Research Capabilities”
Phase 3: “Establish Production Services and Evolve System Capabilities in Response to User Needs & Requests”
Phase 4: “Expand the service capabilities to serve more of our user communities”
(The phases overlap in time; the timeline graphic on the original slide marks where “today” falls.)

14 FermiCloud Phase 1: “Build and Deploy the Infrastructure” (Completed late 2010)
-Specify, acquire and deploy the FermiCloud hardware,
-Establish initial FermiCloud requirements and select the “best” open source cloud computing framework that met these requirements (OpenNebula),
-Deploy capabilities to meet the needs of the stakeholders (JDEM analysis development, Grid Developers and Integration test stands, Storage/dCache Developers, LQCD testbed).

15 FermiCloud-HA (Included as part of Phases 2 & 3)
Upon delivery, the FermiCloud hardware was initially sited in two adjacent racks in the GCC-B computer room.
The FY2011 “Worker Node” acquisition needed additional rack space in the GCC-B computer room, so the Grid & Cloud Computing Department agreed to move ~50% of FermiCloud from GCC-B to the (then) new FCC-3 computer room:
-GCC-B had several cooling related outages that affected FermiCloud during the summer of 2011,
-As we had learned from the operations of the FermiGrid services, having a high availability services infrastructure that is split across multiple buildings offers benefits to both the user and service administration communities,
-This move would allow us to eventually build high availability and redundancy capabilities into the FermiCloud architecture.
The systems were relocated in August 2011, and the process of learning how to do the following began:
-Deploy the OpenNebula software in an HA configuration,
-Acquire and deploy a redundant HA SAN.

16 FermiCloud Phase 2: “Deploy Management Services, Extend the Infrastructure and Research Capabilities” (Completed late 2012)
-Implement X.509 based authentication (patches contributed back to the OpenNebula project and generally available in OpenNebula V3.2),
-Perform secure contextualization of virtual machines at launch,
-Perform virtualized filesystem I/O measurements,
-Develop (draft) economic model,
-Implement monitoring and accounting,
-Collaborate with KISTI personnel to demonstrate Grid and Cloud Bursting capabilities,
-Perform initial benchmarks of virtualized MPI,
-Target “small” low-CPU-load servers such as Grid gatekeepers, forwarding nodes, small databases, monitoring, etc.,
-Begin the hardware deployment of a distributed SAN,
-Investigate automated provisioning mechanisms (Puppet & Cobbler).

17 FermiCloud Phase 3: “Establish Production Services and Evolve System Capabilities in Response to User Needs & Requests” (Underway)
Deploy highly available 24x7 production services,
-Both infrastructure and user services.
Deploy Puppet & Cobbler in production,
-Done.
Develop and deploy real idle machine detection,
-An idle VM detection tool was written by a summer student (a sketch of the general approach follows below).
Research possibilities for a true multi-user filesystem on top of a distributed & replicated SAN,
-GFS2 on the FibreChannel SAN across FCC-3 and GCC-B.
Live migration becomes important for this phase,
-Manual migration has been used, live migration is currently in test, and automatically triggered live migration is yet to come.
Formal ITIL Change Management “Go-Live”,
-Have been operating under “almost” ITIL Change Management for the past several months.
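The following is not the summer student's tool, only a minimal sketch of how idle VM detection of this kind could work, assuming CPU-usage samples are supplied by the cloud monitoring system (e.g., OpenNebula monitoring); the threshold and window values are illustrative, not FermiCloud policy.

```python
# Minimal idle-VM detection sketch: flag VMs whose recent CPU usage has
# stayed below a threshold for an entire observation window.
from collections import deque

IDLE_CPU_THRESHOLD = 2.0   # percent CPU below which a sample counts as idle (illustrative)
IDLE_WINDOW_SAMPLES = 24   # e.g., 24 consecutive samples, one per hour (illustrative)

class IdleDetector:
    def __init__(self):
        # Per-VM ring buffer of the most recent CPU-usage samples.
        self.history = {}

    def record(self, vm_id: str, cpu_percent: float) -> None:
        """Record one monitoring sample for a VM."""
        buf = self.history.setdefault(vm_id, deque(maxlen=IDLE_WINDOW_SAMPLES))
        buf.append(cpu_percent)

    def idle_vms(self):
        """Return the VMs whose entire recent window stayed below the threshold."""
        return [
            vm_id
            for vm_id, buf in self.history.items()
            if len(buf) == IDLE_WINDOW_SAMPLES
            and all(sample < IDLE_CPU_THRESHOLD for sample in buf)
        ]

# Usage: feed samples from the monitoring loop, then act on the idle list
# (e.g., notify the owner or reclaim the slot).
detector = IdleDetector()
detector.record("vm-0042", 0.7)
print(detector.idle_vms())
```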

18 FermiCloud Phase 4: “Expand the service capabilities to serve more of our user communities” (Specifications Being Finalized)
Complete the deployment of the true multi-user filesystem on top of a distributed & replicated SAN.
Demonstrate interoperability and federation:
-Accepting VMs as batch jobs,
-Interoperation with other Fermilab virtualization infrastructures (GPCF, VMware),
-Interoperation with the KISTI cloud, Nimbus, Amazon EC2, and other community and commercial clouds.
Participate in the Fermilab 100 Gb/s network testbed,
-Have just taken delivery of 10 Gbit/second cards.
Perform more “virtualized MPI” benchmarks and run some real world scientific MPI codes,
-The priority of this work will depend on finding a scientific stakeholder that is interested in this capability.
Reevaluate available open source cloud computing stacks,
-Including OpenStack,
-We will also reevaluate the latest versions of Eucalyptus, Nimbus and OpenNebula.

19 FermiCloud Phase 5 (Specifications Under Development)
See the program of work in the (draft) Fermilab-KISTI FermiCloud CRADA.
This phase will also incorporate any work or changes that arise out of the Scientific Computing Division strategy on Clouds and Virtualization.
Additional requests and specifications are also being gathered!

20 FermiCloud – Stack Evaluation
The first order of business was the evaluation of the (then) available cloud stacks:
-Eucalyptus
-Nimbus
-OpenNebula
OpenStack was not yet “on the market”, so it was not included in the evaluations:
-We will evaluate OpenStack this year,
-We will also reevaluate the latest versions of Eucalyptus, Nimbus and OpenNebula.

21 Evaluation Criteria
Criteria | Max Points | Eucalyptus | Nimbus | OpenNebula
OS and Hypervisor | 41 | 26 | 30 | 36
Provisioning and Contextualization | 9 | 4 | 8 | 8
Object Store | 31 | 15 | 18 | 17
Interoperability | 36 | 30 | 31 | 24
Functionality | 89 | 13 | 54 | 65
Accounting/Billing | 6 | 1 | 1 | 3
Network Topology | 46 | 21 | 29 | 37
Clustered File Systems | 2 | 0 | 2 | 2
Security | 27 | 14 | 17 | 13
TOTAL | 287 | 123 | 189 | 205
See CD-DocDB #s 4195, 4276, 4536 for the details of this evaluation.

22 Results of the FermiCloud Stack Evaluation
OpenNebula was the clear “winner”:
-Biggest strengths: reliability of operations, flexibility in the types of virtual machines you can make.
-Biggest lacks: X.509 authentication, HA configuration, accounting.
Nimbus was next:
-Biggest strength: developers ~30 miles away at ANL.
-Biggest lacks: poor documentation; uses SimpleCA rather than IGTF CAs for internal authentication.
Eucalyptus was a distant third:
-Show-stoppers: a reboot of the head node lost all virtual machines; the Intellectual Property policy on patches.

23 Security
Main areas of cloud security development:
X.509 Authentication/Authorization:
-X.509 authentication was written by T. Hesselroth; the code was submitted to and accepted by OpenNebula, and has been publicly available since Jan-2012.
Secure Contextualization:
-Secrets such as X.509 service certificates and Kerberos keytabs are not stored in virtual machines (see the following talk for more details, and the conceptual sketch below).
Security Policy:
-A security taskforce met and delivered a report to the Fermilab Computer Security Board, recommending the creation of a new Cloud Computing Environment, now in progress.
-We also participated in the HEPiX Virtualisation Task Force; we respectfully disagree with the recommendations regarding VM endorsement.
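Secure contextualization is covered in the following talk; the sketch below is only a conceptual illustration of the general pattern (fetch secrets over an authenticated channel at VM boot rather than shipping them inside the image). The endpoint URL, certificate paths, and keytab location are hypothetical, not FermiCloud's actual configuration.

```python
# Conceptual sketch only: retrieve a secret at boot instead of baking it
# into the distributed VM image. All names below are hypothetical.
import os
import ssl
import urllib.request

SECRETS_URL = "https://secrets.example.gov/keytab"   # hypothetical secrets service
CLIENT_CERT = "/etc/pki/tls/certs/host-cert.pem"     # credential injected at launch
CLIENT_KEY = "/etc/pki/tls/private/host-key.pem"
KEYTAB_PATH = "/etc/krb5.keytab"

def fetch_keytab() -> None:
    """Retrieve the Kerberos keytab over a client-authenticated TLS channel."""
    context = ssl.create_default_context()
    context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)
    with urllib.request.urlopen(SECRETS_URL, context=context) as response:
        keytab = response.read()
    # Write with restrictive permissions; the secret lives only on the
    # running instance, never in the image.
    fd = os.open(KEYTAB_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "wb") as f:
        f.write(keytab)

if __name__ == "__main__":
    fetch_keytab()
```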

24 Current FermiCloud Capabilities
The current FermiCloud hardware capabilities include:
Public network access via the high performance Fermilab network,
-This is a distributed, redundant network (see next presentation).
Private 1 Gb/sec network,
-This network is bridged across FCC and GCC on private fiber (see next presentation).
High performance InfiniBand network to support HPC based calculations,
-Performance under virtualization is ~equivalent to “bare metal” (see next presentation),
-Currently split into two segments,
-Segments could be bridged via Mellanox MetroX.
Access to a high performance FibreChannel based SAN,
-This SAN spans both buildings (see next presentation).
Access to the high performance BlueArc based filesystems,
-The BlueArc is located on FCC-2.
Access to the Fermilab dCache and enStore services,
-These services are split across FCC and GCC.
Access to the 100 Gbit Ethernet test bed in LCC (via FermiCloud integration nodes),
-Intel 10 Gbit Ethernet converged network adapter X540-T1.

25 FermiCloud – FTE Months/Year
Year | Management | Operations | Project + Development | TOTAL
2010 | 1.79 | 4.76 | 0.02 | 6.57
2011 | 2.36 | 15.69 | 6.07 | 24.12
2012 | 1.56 | 9.86 | 3.48 | 14.90
2013 to Date | 0.82 | 7.03 | 1.75 | 9.60
Average | 1.87 | 10.67 | 3.23 | 15.77

26 FermiCloud FTE Effort Plot

27 Current Stakeholders
-Grid & Cloud Computing personnel,
-Run II – CDF & D0,
-Intensity Frontier experiments,
-Cosmic Frontier (LSST),
-Korea Institute of Science & Technology Information (KISTI),
-Open Science Grid (OSG) software refactoring from Pacman to RPM based distribution,
-Grid middleware development.

28 FermiCloud Project Summary - 1
Science is directly and indirectly benefiting from FermiCloud:
-CDF, D0, Intensity Frontier, Cosmic Frontier, CMS, ATLAS, Open Science Grid, ...
FermiCloud operates at the forefront of delivering cloud computing capabilities to support scientific research:
-By starting small, developing a list of requirements, and building on existing Grid knowledge and infrastructure to address those requirements, FermiCloud has managed to deliver a production class Infrastructure as a Service cloud computing capability that supports science at Fermilab.
-FermiCloud has provided FermiGrid with an infrastructure that has allowed us to test Grid middleware at production scale prior to deployment.
-The Open Science Grid software team used FermiCloud resources to support their RPM “refactoring” and is currently using it to support their ongoing middleware development/integration.

29 FermiCloud Project Summary - 2
The FermiCloud collaboration with KISTI has leveraged the resources and expertise of both institutions to achieve significant benefits:
-vCluster has demonstrated proof of principle “Grid Bursting” using FermiCloud and Amazon EC2 resources.
-Using SR-IOV drivers on FermiCloud virtual machines, MPI performance has been demonstrated to be >96% of the native “bare metal” performance.
We are currently working on finalizing a CRADA with KISTI for future collaboration.

30 FermiCloud Project Summary - 3
FermiCloud personnel are currently working to:
-Deliver the Phase 3 deliverables, including a SAN storage deployment that will offer a true multi-user filesystem on top of a distributed & replicated SAN,
-Finalize the Phase 4 specifications,
-Collaborate in the development of the Phase 5 specifications.
The future is mostly cloudy.

31 Acknowledgements
None of this work could have been accomplished without:
-The excellent support from other departments of the Fermilab Computing Sector – including Computing Facilities, Site Networking, and Logistics,
-The excellent collaboration with the open source communities – especially Scientific Linux and OpenNebula,
-The excellent collaboration and contributions from KISTI.

32 Thank You! Any Questions?

33 Extra Slides
Are present in the “Extra Slides Presentation”.

