UC3: A Framework for Cooperative Computing at the University of Chicago
Lincoln Bryant, Computation and Enrico Fermi Institutes, US ATLAS and UC3

2 Outline
- UC3 background
- Implementation
- Published results
- What’s next?
UChicago Computing Cooperative

3 UC Computing Cooperative
- Kicked off January 2012
- Running in beta mode since April 2012
- A shared campus distributed high-throughput computing (DHTC) infrastructure
- Inspired by other campus grids in OSG
- Integrates with national-scale resources such as the Open Science Grid
UChicago Computing Cooperative

4 IMPLEMENTATION UChicago Computing Cooperative

5 Basic UC3 infrastructure
- Identity management and automatic account creation
- Submit host + Condor infrastructure
- Open “application seeder” cluster: 544 job slots
- BOSCO multi-user service for accessing non-Condor schedulers (PBS, SGE); see Marco Mambelli’s talk tomorrow and the command sketch below
- Data server backed by 50 TB of HDFS storage
  - Globus Online endpoint integrated with CILogon
  - Data access managed by Chirp, Parrot, and Skeleton Key (see Suchandra Thapa’s talk tomorrow)
- Dedicated CVMFS repository for UC3 applications
- Various monitoring tools
UChicago Computing Cooperative
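
As an aside not on the slide, attaching a non-Condor cluster through BOSCO looks roughly like the following; the hostname, account, and scheduler type here are illustrative placeholders, not the actual UC3 settings.

    # Hedged sketch; hostname and account are placeholders
    bosco_start                                               # start the local BOSCO/Condor services
    bosco_cluster --add uc3user@pbs-cluster.example.edu pbs   # register a PBS cluster for remote submission
    bosco_cluster --list                                      # confirm the cluster was registered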

6 Identity, accounts & sign-up
- Signup portal integrated with UC systems (LDAP, Grouper)
- David Champion (IT Services UChicago): pre-production
UChicago Computing Cooperative
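
As a hedged illustration (not from the slides) of what integration with the campus identity systems means in practice, a provisioned account can be checked on the submit host; the username below is a placeholder.

    # Hedged sketch; the username is a placeholder
    getent passwd uc3demo      # account resolves through the campus LDAP via NSS
    id uc3demo                 # shows UC3 group memberships (e.g., groups managed in Grouper)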

7 UC3 Connected Resources (diagram): the UC3 Submit, UC3 Data (HDFS), UC3 CVMFS, and UC3 BOSCO services connect to the UC3 Seeder (Condor), ITB (PBS), Midwest Tier 2 (Condor), SIRAF (SGE), UC3 Cloud (Condor), and ITS (Condor) resources, with shared NFS mounts.

8 UC3 Job Routing (diagram): jobs from the UC3 Submit host are routed to the UC3 Seeder (Condor), Midwest Tier 2 (Condor), UC3 Cloud (Condor), and ITS (Condor) pools, and through UC3 BOSCO to ITB (PBS) and SIRAF (SGE); UC3 Data (HDFS), UC3 CVMFS, and NFS mounts support the jobs. A flocking-configuration sketch follows below.
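
In HTCondor terms, the routing between Condor pools corresponds to flocking configuration along these lines; the hostnames are placeholders, not the actual UC3 machines, and the usual security/ALLOW settings are omitted.

    # Hedged sketch of HTCondor flocking; hostnames are placeholders
    # On the UC3 submit host:
    FLOCK_TO = uc3-seeder.example.uchicago.edu, mwt2-cm.example.uchicago.edu
    # On each target pool's central manager, accept flocked jobs from UC3:
    FLOCK_FROM = uc3-sub.example.uchicago.edu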

9 UC3 Data Access (diagram): UC3 Data (HDFS), the UC3 CVMFS repo, and NFS mounts are reachable from the UC3 Submit and UC3 BOSCO hosts and from jobs on the connected pools (UC3 Seeder, ITB, Midwest Tier 2, SIRAF, UC3 Cloud, ITS). A Parrot/Chirp access sketch follows below.
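
For illustration (not from the slides), jobs can reach the CVMFS repo and the Chirp/HDFS data server through Parrot roughly as follows; the repository and server names are placeholders, and the CVMFS repository is assumed to be already configured for Parrot.

    # Hedged sketch; repository and server names are placeholders
    parrot_run ls /cvmfs/uc3.example.edu/                         # browse the UC3 software repo without FUSE or root
    parrot_run cp /chirp/uc3-data.example.edu/inputs/run01.dat .  # stage a file from the Chirp-fronted HDFS store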

Monitoring & console: Condor viewer, CycleServer, Sysview, Ganglia (command-line examples below)
UChicago Computing Cooperative
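
Alongside those dashboards, the standard HTCondor command-line tools give a quick view of the pool; these are generic commands, not UC3-specific configuration.

    condor_status -total     # summarize available and claimed slots visible to the submit host
    condor_q -allusers       # list idle, running, and held jobs across UC3 users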

UC3 in production (usage plot): GLOW & Engage users via OSG, ATLAS via OSG, UC3 users, and unused cycles.
UChicago Computing Cooperative

SCIENCE RESULTS UChicago Computing Cooperative

Cosmic Microwave Background analysis
Study of the power spectrum from the South Pole Telescope (SPT): "A Measurement of the Cosmic Microwave Background Damping Tail from the 2500-Square-Degree SPT-SZ Survey", K. T. Story (Kavli Institute) et al.
"UC3 computing resources were used to simulate 100 full-sky realizations. These simulated skies include gravitationally lensed CMB anisotropy, a Poisson distribution of radio galaxies, and Gaussian realizations of the thermal and kinetic Sunyaev-Zel'dovich (SZ) effects and cosmic infrared background (CIB). The results were used to calculate the transfer function of the SPT SZ analysis pipeline, which is an essential step in the production of the SZ power spectrum from the full 2500 square degree SPT survey."
UChicago Computing Cooperative

Cosmic Microwave Background analysis
- Reprocessing raw data: two weeks on SPT systems reduced to 12 hours on UC3
- Submitted to ApJ (The Astrophysical Journal)
UChicago Computing Cooperative

Glassy systems and supercooled liquids
- Modeling glass formation in supercooled liquids
- Probing the structure of heterogeneity in supercooled liquids is computationally intensive
- Glen Hocky, David Reichman (Columbia)
- Just submitted to the Journal of Chemical Physics
UChicago Computing Cooperative

Global Gridded Biophysical Modeling
- Simulate crop yields and climate change impact at high resolution (global extents, multi-scale models, multiple crops: corn, soy, wheat, rice)
- Preliminary results on yields versus CO2 fertilization
- Analysis on UC3; workflows managed by Swift
- Joshua W. Elliott with Michael Glotter, Neil Best, David Kelly, Cheryl Porter, Alex Ruane, Ian Foster, Cynthia Rosenzweig, Elizabeth Moyer, Jim Jones, Ken Boote, Senthold Asseng, Mike Wilde, and other Chicago and AgMIP partners
- RDCEP collaboration: robust decision making on climate and energy policy

And of course... ATLAS at the LHC
- UC3 partnering with the US ATLAS Tier 2 and UC Tier 3 centers
- Provide flocking to unused ATLAS resources
- Allow flocking of ATLAS to spare UC3 cycles
- Facilitated with the CernVM File System (CVMFS) for release directories and federated Xrootd for storage access, so a large class of jobs requires minimal UC3 system modifications (see the sketch below)
(Plot: running ATLAS jobs, 100k scale; credit Fabiola Gianotti (CERN), ATLAS Collaboration)
UChicago Computing Cooperative
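
As a hedged sketch of what "minimal modifications" can mean for an ATLAS job on UC3: software comes from the atlas.cern.ch CVMFS repository and input data is read through a federated Xrootd redirector. The redirector name and file path below are placeholders.

    # Hedged sketch; redirector and file path are placeholders
    export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase
    source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh               # ATLAS environment from CVMFS
    xrdcp root://xrootd-redirector.example.org//atlas/path/file.root .    # read input via federated Xrootd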

ATLAS production flocking into campus
(Plot: ATLAS jobs; only the UC3 seeder is shown)
- Will transition some UC Tier 3 users to UC3 using Parrot
UChicago Computing Cooperative

LOOKING AHEAD UChicago Computing Cooperative

Future connectivity
- Flocking to the new Research Computing Center's "Midway" cluster at UC
  - About 4,500 cores we could flock jobs to
  - Looking forward to SLURM support in BOSCO
- Flocking out to OSG (see the sketch below)
  - UC3 collective VO established in OSG
  - Submission to remote sites on OSG via GlideinWMS
  - First step: just use the OSG VO and a flocking host to the VO front-end managed by Mats Rynge
UChicago Computing Cooperative
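
Flocking out would extend the same HTCondor mechanism shown earlier; a minimal sketch, assuming a hypothetical OSG flocking host (the hostname is a placeholder, and any job attributes required by the GlideinWMS front-end are omitted):

    # Hedged sketch; hostname is a placeholder
    FLOCK_TO = $(FLOCK_TO), osg-flock.example.org   # append the OSG flocking host on the UC3 submit node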

In summary…
- UC3 has been up and running since April
  - 544 dedicated job slots for UC3 open (seeder) use
  - Theoretical maximum is over 6k slots (flockable max)
  - 50 TB of dedicated (Hadoop) storage for staging job datasets or temporary storage
  - UC3 CVMFS repo + Parrot for software access
- Login host integrated with the UC ID system
- UC3 Globus Online endpoint, integrating with CILogon
- Chirp access to the Hadoop storage
- Three groups using UC3 (a submit-file sketch follows below)
UChicago Computing Cooperative
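
To make the summary concrete, a minimal vanilla-universe submit file for the UC3 seeder might look like the sketch below; the executable and file names are placeholders.

    # Hedged sketch; executable and file names are placeholders
    universe       = vanilla
    executable     = analyze.sh
    arguments      = input_$(Process).dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output         = job_$(Process).out
    error          = job_$(Process).err
    log            = jobs.log
    request_memory = 2 GB
    queue 100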

Thank you!

Collaboration and Acknowledgements
- Enrico Fermi Institute in the Physical Sciences Division
  - ATLAS Collaboration (HEP)
  - South Pole Telescope Collaboration
- Computation Institute at UC (OSG, Swift)
- Departments of Radiology and Radiation Oncology (SIRAF project)
- Center for Robust Decision Making on Climate and Energy Policy group at UC (CI, Economics)
- UC Information Technology Services
- UC Research Computing Center
- Swift team
UChicago Computing Cooperative

24 UChicago Computing Cooperative