Presentation transcript:

Out of the basement, into the cloud: Extending the Tier 3
Lincoln Bryant, Computation and Enrico Fermi Institutes

Introduction

Motivations
UChicago has a large Tier 3 community, but a relatively small Tier 3.
– There isn't a lot of money to spend on new hardware.
– Even if we did buy new hardware, our Tier 3 isn't always 100% utilized.
– There are existing, untapped resources available for opportunistic use.

Where to get resources?
There are a number of ways to expand the capacity of a Tier 3 without investing in a lot of equipment. A few examples:
– Run opportunistically on a Tier 2
– Utilize campus grid resources
– Rent someone else's resources on the cloud

Using a Tier 2
MWT2, for example, is over-pledged on compute resources.
– It would be nice if Tier 3 users could use a Tier 2 when it's drained.
– We could have Tier 3 jobs in the mix with other opportunistic jobs.
MWT2 is also deploying a new virtual machine infrastructure.
– Right now it's just a KVM implementation, but eventually this will use OpenStack or a similar hybrid cloud technology.
– We could send some Tier 3-based virtual machines over there opportunistically.

Using a campus grid
The University of Chicago Computing Cooperative (UC3) can support hundreds of jobs.
We can't send them VMs (yet), but if we're lightweight and/or clever enough, we can send jobs via HTCondor's flocking mechanism (see the configuration sketch below).
Plugging into our campus grid infrastructure promotes mutual opportunity for researchers.
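As a rough illustration of what the flocking setup involves, the HTCondor configuration on the two ends might look like the following sketch; the host names are placeholders rather than the actual Tier 3 or UC3 machines.

    # On the Tier 3 submit host: let idle jobs flock to the UC3 pool.
    FLOCK_TO = uc3-mgr.uchicago.edu

    # On the UC3 central manager: accept flocked jobs from the Tier 3 schedd
    # (the UC3 execute nodes need a similar ALLOW_WRITE entry).
    FLOCK_FROM  = uct3-submit.uchicago.edu
    ALLOW_WRITE = $(ALLOW_WRITE), uct3-submit.uchicago.edu

With something like this in place, jobs queued on the Tier 3 overflow to UC3 when the local pool is full, with no change to the user's submit file.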

Using the cloud
Amazon is more than happy to take our money and run our VMs on their Elastic Compute Cloud (EC2). We only need to provide them with an appliance definition.
Since the Tier 3 is used in bursts, we can spin up VMs on an as-needed basis.
– David Champion will talk more about this.

Only a few tools needed
For virtual machines (i.e., the MWT2 and EC2 cases), we use BoxGrinder.
– Creates virtual machines and cloud appliances.
– With one recipe, we can build VMs that run on any number of virtualization platforms (an example recipe is sketched below):
  o Cloud environments like EC2, OpenStack, and Nimbus
  o Directly on hypervisors like KVM and Xen
For the campus grid, we use HTCondor flocking.
– We could also have done this to utilize MWT2.
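For concreteness, a heavily trimmed appliance definition of the kind described here, plus the build commands; the package list, names, and Condor host are illustrative placeholders, not our production recipe.

    # uct3-worker.appl -- illustrative BoxGrinder appliance definition
    name: uct3-worker
    summary: UChicago Tier 3 virtual worker node
    os:
      name: sl
      version: 6
    hardware:
      cpus: 1
      memory: 2048
    packages:
      - condor
      - cvmfs
    post:
      base:
        - "echo 'CONDOR_HOST = uct3-mgr.uchicago.edu' >> /etc/condor/condor_config.local"

    # One recipe, several targets:
    boxgrinder-build uct3-worker.appl                  # raw image, usable directly under KVM
    boxgrinder-build uct3-worker.appl -p ec2 -d ami    # EC2 appliance, delivered as an AMI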

Setup

What's our current setup?
UChicago's Tier 3 has a little over 300 job slots, on a mix of old hardware and new:
– Mostly 2005-vintage machines with 4 job slots each
– About ten machines with 24 job slots each
– 2 GB RAM available per job slot

A small snag
Our Tier 3 uses PBS (Torque) + Maui for job scheduling. Unfortunately, there isn't a good way to submit to HTCondor pools from PBS.
– (You can, however, do the reverse with BOSCO.)
On our campus, we need to use HTCondor in order to utilize our campus grid infrastructure.

Enter the HTCondor overlay!
– Installed on our head nodes and worker nodes alongside PBS.
– Currently configured to run jobs only when CPUs are idle (i.e., not being used by PBS jobs); a sample policy is sketched below.
– You could say this is "Phase One" of our transition to HTCondor. Eventually we'll turn off PBS.
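A sketch of the kind of startd policy that gives this behavior, with illustrative thresholds; this is not our exact configuration.

    # Worker-node condor_config.local (illustrative)
    # Treat the node as busy if processes other than Condor's own jobs
    # (in practice, PBS jobs) are driving up the load average.
    NonCondorLoad = (LoadAvg - CondorLoadAvg)
    START   = ($(NonCondorLoad) < 0.3)
    PREEMPT = ($(NonCondorLoad) > 0.7)
    SUSPEND = False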

A snapshot of the Tier 3

    ~]$ qstat -B
    Server           Max  Tot  Que  Run  Hld  Wat  Trn  Ext  Status
    uct3-pbs.mwt2.or                                          Active

    ~]$ condor_status -total
                  Total  Owner  Claimed  Unclaimed  Matched  Preempting  Backfill
    X86_64/LINUX
           Total

With the overlay, jobs that run via PBS are listed as being in the "owner" state in HTCondor.
– Due to inefficient jobs, there's a small disparity between how many jobs each scheduler thinks are running.
There may be some scalability issues lurking.

Paradigm shifts

User expectations
Users typically rely on their home directory being accessible or NFS mounts being available. Users also sometimes do undesirable things like reading from and writing to our XrootD FUSE mounts.
We decided to set up the UCT3 HTCondor pool such that users have no such expectations right from the get-go.

Living without expectations
With HTCondor, users should not expect their home directories or NFS shares to be available, or even to run jobs as their own user.
Instead, we encourage users to use FAX for data access, CVMFS for software, and the Tier 3 XrootD system or possibly MWT2_LOCALGROUPDISK for large job outputs. Smaller outputs (on the order of 1 GB) can be handled by HTCondor's internal file transfer mechanisms.
Some level of PBS entrenchment is to be expected, so each worker advertises its available resources as ClassAds.

Condor ClassAds
Users don't really care where their jobs go, as long as they run. We have all of our Tier 3 workers in a single Condor pool, but we need some way of directing jobs to machines that have what the users need.
To make this as transparent as possible, users simply require one or more of the attributes in the following table:

    ClassAd          Description
    HAS_CVMFS        Has /cvmfs/atlas-*.cern.ch mounted
    HAS_TIER3_DATA   Has /share/t3data[1-3] mounted
    HAS_TIER3_HOME   Has /share/home mounted
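Concretely, the advertisement and the matching request might look like the following sketch; the attribute names come from the table above, while file names and values are illustrative.

    # Worker-node condor_config.local: advertise what this node has mounted.
    HAS_CVMFS      = True
    HAS_TIER3_DATA = True
    HAS_TIER3_HOME = False
    STARTD_ATTRS   = $(STARTD_ATTRS), HAS_CVMFS, HAS_TIER3_DATA, HAS_TIER3_HOME

    # User submit file: request only what the job actually needs.
    universe     = vanilla
    executable   = run_analysis.sh
    requirements = (HAS_CVMFS =?= True)
    queue 1

Jobs with no such requirements are free to match any worker, including the virtualized and cloud-hosted ones described later.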

Tier 3 on Tier 2

Running VMs on a Tier 2
BoxGrinder generates KVM images for a hypervisor host. The hypervisor host also runs Tier 2 VMs for the ANALY_MWT2_CLOUD PanDA queue.
There are still a number of manual steps (roughly as sketched below):
– Have to SSH to the hypervisor and start the VMs by hand
– Need to set up the IP addresses of the VMs in Cobbler
– Also need to provide ephemeral scratch space
We hope this will become much more streamlined once we move to OpenStack.
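In practice the manual procedure amounts to something like the following, sketched with libvirt's tools; host names, image names, and resource sizes are placeholders.

    # Copy the BoxGrinder-built image to the hypervisor.
    scp uct3-worker-sl6.raw hypervisor01.mwt2.org:/var/lib/libvirt/images/

    # Then, logged in on the hypervisor, boot it by hand:
    virt-install --name uct3-worker01 --ram 2048 --vcpus 1 \
        --disk path=/var/lib/libvirt/images/uct3-worker-sl6.raw \
        --import --network bridge=br0 --noautoconsole
    virsh list --all     # confirm the VM came up

    # The VM's address still has to be registered in Cobbler, and scratch
    # space attached, before it is useful as a worker.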

Available resources
We elected not to mount the various Tier 3 NFS shares. This is also a bit of a carrot to get our Tier 3 folks to use FAX. Again, we would like to push our users toward writing lightweight jobs.
As long as users don't require HAS_TIER3_DATA or HAS_TIER3_HOME, their jobs run on the virtualized Tier 3 workers transparently.

Tier 3 on the cloud

EC2 background
There have been a number of "Tier 3 on the cloud" implementations elsewhere.
– We have tried to do the minimum needed to get jobs running in a cloud environment.
For our prototyping, we are using 'small' instances.
– 1 CPU / 1.7 GB RAM
We'll need to do a bit more legwork to figure out which instance sizes and prices are ideal.

Implementation
As when running on a Tier 2, we use a BoxGrinder definition to build for EC2.
– Little to no modification from the Tier-3-on-Tier-2 definition.
– We simply configure BoxGrinder with our EC2 credentials and create an EC2 appliance instead of a KVM image.
EC2 worker nodes sit behind NAT.
– We solve this by using HTCondor's connection brokering (CCB) mechanism (see the sketch below).
Like the Tier-3-on-Tier-2 scenario, this should be transparent to the end user.
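A sketch of the CCB-related pieces baked into the EC2 image; the central manager name is a placeholder, and only the NAT-relevant settings are shown.

    # condor_config.local inside the EC2 worker image (illustrative)
    CONDOR_HOST          = uct3-mgr.uchicago.edu    # Tier 3 central manager (placeholder)
    # Let the Tier 3 collector broker connections back to these workers,
    # since inbound connections to instances behind NAT won't work.
    CCB_ADDRESS          = $(COLLECTOR_HOST)
    PRIVATE_NETWORK_NAME = uct3-ec2

With CCB in place, only outbound connectivity from the cloud workers to the Tier 3 collector is needed.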

Some caveats
How do we mount our NFS shares?
– We don't.
– There's a potential security hole in exposing NFS mounts to such a large IP block.
– Just use FAX instead!
How do we export our users?
– Again, we don't.
– You could add them to the image or manage them with something like Puppet, but those are rather cumbersome solutions.
– Our HTCondor configuration relies only on the 'uct3' user for job execution.
Interactive use, PROOF, etc.?
– We'll likely only want to use these resources for batch jobs.
– Users can use their existing Tier 3 resources for that type of work.

Improving cloud-based Tier 3 resources
There are a number of other areas where we could probably improve our use of the cloud:
– Proxying connections into Tier 3 NFS shares?
– Setting up a Squid server in the cloud for CVMFS? (See the sketch below.)
– XrootD + File Residency Manager for caching data?
There is clearly more work to be done in this area.
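For the Squid-for-CVMFS idea, the client side would amount to pointing the CVMFS configuration on the cloud workers at the in-cloud proxy; a sketch with a placeholder proxy host.

    # /etc/cvmfs/default.local on a cloud worker (illustrative)
    CVMFS_REPOSITORIES = atlas.cern.ch,atlas-condb.cern.ch
    CVMFS_HTTP_PROXY   = "http://squid.uct3-ec2.example.org:3128"
    CVMFS_QUOTA_LIMIT  = 10000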

Tier 3 on a campus grid

A very brief intro to UC3
University of Chicago Computing Cooperative: a framework for connecting campus resources.
– "Seeder" cluster with a little over 500 job slots
– IT Services cluster with about 100 job slots (more on the way)
– Can run opportunistically on MWT2 (almost 6,000 job slots)
– Access to the Research Computing Center's "Midway" cluster (over 3,500 job slots) via BOSCO (sketched below)
Registered VO with OSG.
– Jobs can also overflow onto the grid.
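For reference, attaching a remote batch cluster like Midway through BOSCO looks roughly like this from the submit side; the login host and batch-system type are placeholders to adjust for the target cluster.

    # Register the remote cluster with BOSCO and start the local service.
    bosco_cluster --add username@midway-login.rcc.uchicago.edu pbs
    bosco_start

Submission then goes through HTCondor's grid universe, so from the user's point of view it is still a condor_submit.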

Maximizing resource usage with Parrot
Since the vast majority of jobs will require CVMFS, our users would otherwise be stuck on nodes that have CVMFS mounted. We can get around this limitation by using Parrot to access CVMFS in user space (see the sketch below).
There's a bit of complexity here that we would like to abstract away from the user.
– Be sure to see Suchandra's talk tomorrow!
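A sketch of how a job wrapper might use Parrot to reach CVMFS on a node with no FUSE mount; the repository URL and key path are illustrative.

    # Run the payload under Parrot so /cvmfs/atlas.cern.ch is visible even
    # though the node has no CVMFS client mounted. Paths are illustrative.
    export PARROT_CVMFS_REPO='atlas.cern.ch:url=http://cvmfs-stratum-one.cern.ch/cvmfs/atlas.cern.ch,pubkey=/path/to/cern.ch.pub'
    parrot_run ls /cvmfs/atlas.cern.ch      # sanity check
    parrot_run ./run_analysis.sh            # the actual job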

The other advantage
Lots of idle Tier 3 CPU that could be used by UC3!

Moving forward

How does this help you?
We would eventually like to help small Tier 3 sites run jobs on our Tier 2. Hopefully other Tier 2 sites will be able and willing to run jobs for smaller Tier 3 sites as well.
You can fork our BoxGrinder definition on GitHub.
– The same appliance definition should work on most hypervisors.
– You'll need to make the appropriate changes for your site.
– For EC2, there needs to be some discussion about how to get the most bang for your buck.

Thank you!

Bonus slides!

Tier 3 Usage