“The HEP Cloud Facility: elastic computing for High Energy Physics”

Presentation transcript:

“The HEP Cloud Facility: elastic computing for High Energy Physics”

Outline
- Computing facilities for HEP need to evolve to address the new needs of the field; the drivers are capacity, cost, and elasticity.
- HEPCloud is envisioned as a portal to an ecosystem of diverse computing resources, commercial or academic.
- The basic idea is to add disparate resources (HPC slots, cloud VMs, OSG nodes, local resources) into a single HTCondor pool (see the sketch after this list).
- HEPCloud is taking early steps to adapt HTC workflows to HPC facilities and has demonstrated integration with commercial clouds (AWS).
- The integration challenges are many, in particular provisioning resources at scale and at acceptable cost, and providing services on demand.
- HEPCloud has demonstrated the feasibility of using commercial clouds for three use cases: CMS, NOvA, and DES.
- CMS ran workflows at 60k slots for two weeks, corresponding to a 25% increase over the global resources available to the experiment.
- NOvA ran data-intensive jobs at 7.3k slots, almost 4x its dedicated slots at Fermilab.
- Cloud costs range from 1.4 to 3.0 cents per core-hour, compared to an on-premises cost of 0.9 cents per core-hour (see the cost estimate after this list).
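
The single-pool idea can be made concrete with the HTCondor Python bindings. The sketch below is illustrative only, not HEPCloud's actual tooling: it queries the pool's collector for slot (startd) ads and tallies cores by a hypothetical custom ClassAd attribute, here called ResourceKind, which the glideins or startds are assumed to publish to say where they run.

    # Illustrative only (not HEPCloud's production tooling): inspect a unified
    # HTCondor pool that mixes cloud, HPC, OSG, and local slots.
    # "ResourceKind" is a hypothetical ClassAd attribute assumed to be
    # published by each startd/glidein.
    import collections
    import htcondor  # HTCondor Python bindings

    coll = htcondor.Collector()  # the pool's central collector (default: local)
    ads = coll.query(
        htcondor.AdTypes.Startd,
        projection=["Name", "Cpus", "State", "ResourceKind"],
    )

    cores_by_kind = collections.Counter()
    for ad in ads:
        kind = ad.get("ResourceKind", "unknown")  # e.g. "AWS", "HPC", "OSG", "local"
        cores_by_kind[kind] += int(ad.get("Cpus", 1))

    for kind, cores in cores_by_kind.most_common():
        print(f"{kind:>8}: {cores} cores")

To the experiments' workflows the provenance of a slot is invisible; they simply see one larger HTCondor pool.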
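
The quoted per-core-hour rates can be turned into a rough burst-cost estimate. The rates and the two-week, 60k-slot burst come from the summary above; treating the on-premises figure as a flat per-core-hour price (rather than an amortized capital cost) and assuming one core per slot are simplifying assumptions.

    # Back-of-the-envelope cost comparison using the rates quoted above.
    # Glosses over amortization, data egress, and spot-price variation.
    ON_PREM_CENTS = 0.9           # cents per core-hour, on premises
    CLOUD_CENTS = (1.4, 3.0)      # observed cloud range, cents per core-hour

    slots = 60_000                # CMS burst size quoted above
    hours = 14 * 24               # two weeks
    core_hours = slots * hours

    on_prem_usd = core_hours * ON_PREM_CENTS / 100
    cloud_usd = [core_hours * c / 100 for c in CLOUD_CENTS]

    print(f"core-hours:  {core_hours:,}")
    print(f"on premises: ${on_prem_usd:,.0f}")
    print(f"cloud:       ${cloud_usd[0]:,.0f} to ${cloud_usd[1]:,.0f}")

At these rates the cloud burst costs roughly 1.5x to 3x the equivalent on-premises core-hours, which is why elasticity rather than raw price per core-hour is the main argument for bursting.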