Scaling Science Communities: Lessons Learned by and Future Plans of the Open Science Grid
Frank Würthwein, OSG Executive Director, Professor of Physics, UCSD/SDSC

About Me
Professor of Physics, experimental particle physics with CMS @ LHC: HWW, WW, ttbar, SUSY; SUSY convener in CMS 2013/14.
Executive Director of the Open Science Grid.
Co-PI of the Pacific Research Platform.
Executive Team member of SDSC.
Most of this talk is written in my role as ED of OSG, but I stray occasionally without prior warning.

The OSG Vision in a nutshell
Across the nation, institutions invest in research computing to remain competitive.
Science is a team sport, and institutions with an island mentality will underperform. Integration is key to success.
OSG provides services to advance this vision.

Integrating Clusters Worldwide

Integrating Clusters Worldwide
Integrating hardware across more than 100 institutions.
Supporting science across more than 30 domains.
Supporting scientists across more than 100 institutions.
Monitoring networking with 279 perfSONAR instances, organized into 16 meshes each for bandwidth and latency; a total of 9,982 end-to-end paths are monitored.
Within the last 12 months: 134 million jobs executed, 1.5 billion hours of compute time, 2 billion data transfers. display.opensciencegrid.org
We have been doing this since 2004.
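A quick back-of-the-envelope check on these totals (my own arithmetic, not from the slide): 1.5 billion hours over 134 million jobs is roughly 11 hours per job, and 2 billion transfers in a year is about 60 per second sustained. A minimal sketch:

```python
# Rough arithmetic based on the 12-month totals quoted above.
jobs = 134e6              # jobs executed
cpu_hours = 1.5e9         # hours of compute time delivered
transfers = 2e9           # data transfers
seconds_per_year = 365 * 24 * 3600

avg_job_hours = cpu_hours / jobs                         # ~11.2 hours per job
sustained_transfer_rate = transfers / seconds_per_year   # ~63 transfers per second

print(f"average job length: {avg_job_hours:.1f} hours")
print(f"sustained transfer rate: {sustained_transfer_rate:.0f} transfers/second")
```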

Lessons Learned

Be Open to All
Be open to resource providers at all scales: from small colleges to large national labs.
Be open to user communities at all scales: from individual students to large research communities, domain-science specific and across many campuses, or campus specific and across many domain sciences.
Be open to any business model: sharing, allocations, purchasing. Preemption is an essential part of operations.
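Preemption means opportunistic jobs must expect to be evicted and to resume elsewhere. As one hedged illustration, a preemption-tolerant submission might look roughly like the following sketch using the HTCondor Python bindings (HTCondor underlies OSG job submission, but the executable, resource requests, and job count here are hypothetical, not taken from the talk):

```python
import htcondor

# Hypothetical preemption-tolerant job description: partial output is
# transferred back if the job is evicted, so the work can resume elsewhere.
job = htcondor.Submit({
    "executable": "analyze.sh",       # hypothetical user payload
    "arguments": "$(Process)",
    "output": "out.$(Process)",
    "error": "err.$(Process)",
    "log": "jobs.log",
    "request_cpus": "1",
    "request_memory": "2GB",
    # Transfer output on normal exit *and* on eviction (preemption).
    "when_to_transfer_output": "ON_EXIT_OR_EVICT",
})

schedd = htcondor.Schedd()
result = schedd.submit(job, count=10)  # queue ten opportunistic jobs
print("submitted cluster", result.cluster())
```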

One tool does not fit all
We host services on our hardware for you.
We host services on your hardware for you.
We provide an integrated software stack for you to deploy, to host services on your hardware for you.
"for you" = "for you and your friends" in all of the above statements; you control dynamically who you consider your friends.
In all cases, seamless integration is key!

Open Source is mandatory
We must integrate software from many sources. Research computing infrastructure software cannot be provisioned from just one source.
Some of our sources disappear on us: external software providers have their own objectives and timescales.
Science needs to continue even when software providers disappear: we adopt orphaned software and continue to maintain it until an orderly transition to a replacement can be executed.

Stay engaged with software providers and IT shops
As the scale increases, we need to work with providers on scaling out their software capabilities.
We need to offer at-scale testbeds to benchmark capabilities.
We need to be willing to do the benchmarking both in our shop and in yours, to verify that what works when we deploy it also works when you deploy it.
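To make the "both in our shop and in yours" point concrete, the same scaling ladder can be run against each deployment and the timings compared. A hypothetical harness sketch (the fake_submit workload is a placeholder, not an actual OSG or vendor API):

```python
import time
from typing import Callable

def benchmark(label: str, submit_batch: Callable[[int], None],
              sizes=(10, 100, 1000)) -> None:
    """Run the same scaling ladder against one deployment and report timings."""
    for n in sizes:
        start = time.perf_counter()
        submit_batch(n)                       # e.g. submit n test jobs to this site
        elapsed = time.perf_counter() - start
        print(f"{label}: {n:5d} submissions in {elapsed:.2f}s ({n / elapsed:.1f}/s)")

def fake_submit(n: int) -> None:
    """Placeholder standing in for a real submission or transfer call."""
    time.sleep(0.001 * n)

# Identical ladder, run once per deployment, so results are directly comparable.
benchmark("our-deployment", fake_submit)
benchmark("your-deployment", fake_submit)
```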

Funding Agencies are Fickle
Science outlasts agency timelines.
Try to build consortium structures that can survive the changing winds in D.C.

Future Goals
Integrate all compute, storage, and networking resources at US research institutions and commercial cloud providers. Do so for all of science.
Actively maintain relationships with international partners: let science collaborations define the partners that we need to integrate with.