
OSG at CANS 1: Open Science Grid. Ruth Pordes, Fermilab.

OSG at CANS 2: What is OSG? A shared, common, distributed infrastructure supporting access to contributed processing, disk and tape resources, over production and research networks, and open for use by science collaborations.

OSG at CANS 3: OSG Snapshot
- 96 resources across the production and integration infrastructures.
- 20 Virtual Organizations + 6 operations VOs; includes 25% non-physics.
- ~20,000 CPUs (sites range from 30 to 4,000).
- ~6 PB tape, ~4 PB shared disk.
- Snapshot of jobs on OSG: sustaining 3,000-4,000 simultaneous jobs through OSG submissions; ~10K jobs/day, ~50K CPU-hours/day; peak test loads of 15K jobs a day.
- Using production and research networks.
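As a rough consistency check on these throughput figures (a back-of-the-envelope sketch, not taken from the slides), the quoted jobs/day and CPU-hours/day imply an average job length of about 5 CPU-hours, which is consistent in order of magnitude with the 3,000-4,000 simultaneous jobs quoted:

```python
# Back-of-the-envelope check on the OSG snapshot numbers above.
# All input figures are the approximate values quoted on the slide.

jobs_per_day = 10_000          # ~10K jobs/day
cpu_hours_per_day = 50_000     # ~50K CPU-hours/day

avg_job_hours = cpu_hours_per_day / jobs_per_day   # ~5 CPU-hours per job

# Sustained concurrency = CPU-hours consumed per day / 24 hours in a day
simultaneous_jobs = cpu_hours_per_day / 24          # ~2,100 running jobs

print(f"average job length : {avg_job_hours:.1f} CPU-hours")
print(f"implied concurrency: {simultaneous_jobs:.0f} jobs "
      f"(slide quotes 3,000-4,000 simultaneous jobs)")
```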

OSG at CANS 4: OSG, a Community Consortium
- DOE laboratories and DOE, NSF, other, and university facilities contributing computing farms and storage resources, infrastructure and user services, and user and research communities.
- Grid technology groups: Condor, Globus, Storage Resource Management, NSF Middleware Initiative.
- Global research collaborations: high energy physics, including the Large Hadron Collider; gravitational wave physics (LIGO); nuclear and astro physics; bioinformatics; nanotechnology; CS research...
- Partnerships with peers, development and research groups: Enabling Grids for E-sciencE (EGEE), TeraGrid, regional and campus grids (NYSGrid, NWICG, TIGRE, GLOW...).
- Education: I2U2/QuarkNet sharing cosmic ray data, grid schools...
(Diagram: OSG's lineage, from PPDG, GriPhyN and iVDGL through Trillium and Grid3 to OSG, with DOE and NSF funding.)

OSG at CANS 5: OSG sits in the middle of an environment of a grid-of-grids, from local to global infrastructures: inter-operating and co-operating grids (campus, regional, community, national, international) and Virtual Organizations doing research and education.

OSG at CANS 6: Overlaid by virtual computational environments, from single researchers to large groups, local to worldwide.

OSG at CANS 7: OSG Core Activities
- Integration: software, systems and end-to-end environments; production, integration and test infrastructures.
- Operations: common support mechanisms, security protections, troubleshooting.
- Inter-operation: across administrative and technical boundaries.
OSG Principles and Characteristics
- Guaranteed and opportunistic access to shared resources.
- Heterogeneous environment.
- Interfacing and federation across campus, regional, and national/international grids, preserving local autonomy.
- New services and technologies developed external to OSG.
Each activity includes technical work with collaborators in the US and elsewhere.

OSG at CANS 8: OSG Middleware Infrastructure (layered, top to bottom)
- Applications: user science codes and interfaces.
- VO Middleware: e.g. HEP data and workflow management; biology portals and databases; astrophysics data replication.
- OSG Release Cache: OSG-specific configurations, utilities, etc.
- Virtual Data Toolkit (VDT): core technologies plus software needed by stakeholders; many components shared with EGEE.
- Core grid technology distributions: Condor, Globus, MyProxy; shared with TeraGrid and others.
- Existing operating systems, batch systems and utilities.

OSG at CANS 9: What is the VDT?
A collection of software:
- Grid software: Condor, Globus and lots more.
- Virtual Data System: origin of the name "VDT".
- Utilities: monitoring, authorization, configuration.
- Built for >10 flavors/versions of Linux.
Automated build and test: integration and regression testing.
An easy installation:
- Push a button, everything just works.
- Quick update processes.
Responsive to user needs:
- A process to add new components based on community needs.
A support infrastructure:
- Front-line software support.
- Triaging between users and software providers for deeper issues.

OSG at CANS 10: Middleware to Support Security
Identification and authorization based on X.509 extended attribute certificates, in common with Enabling Grids for E-sciencE (EGEE). Addresses the need of groups of researchers for role-based access control and policies. Operational auditing across core OSG assets.
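To make the role-based model concrete, here is a minimal sketch (hypothetical names and policy table, not OSG software) of how a site might map the VO and role attributes carried in an extended attribute certificate to a local access decision:

```python
# Hypothetical sketch of role-based authorization from VO attributes.
# The VO/role pairs mimic the attribute-certificate convention of carrying
# a VO name and role; the policy table and account names are illustrative.

SITE_POLICY = {
    # (VO, role)         -> local account the job is mapped to
    ("cms", "production"): "cmsprod",
    ("cms", "analysis"):   "cmsuser",
    ("ligo", "analysis"):  "ligo",
}

def authorize(subject_dn, vo, role):
    """Return the local account for this (VO, role), or None if denied."""
    account = SITE_POLICY.get((vo, role))
    if account is None:
        print(f"DENY  {subject_dn}: no policy for VO={vo} role={role}")
        return None
    print(f"ALLOW {subject_dn}: VO={vo} role={role} -> account {account}")
    return account

if __name__ == "__main__":
    authorize("/DC=org/DC=example/CN=Some Researcher", "cms", "production")
    authorize("/DC=org/DC=example/CN=Some Researcher", "cms", "admin")
```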

OSG at CANS 11: OSG Active in Control and Understanding of Risk
Security process modelled on NIST management, operational and technical controls. Security incidents: when, not if.
- Organizations control their own activities: sites, communities, grids.
- Coordination between operations centers of participating infrastructures.
- End-to-end troubleshooting involves people, software and services from multiple infrastructures and organizations.

OSG at CANS 12: High energy physicists analyze today's data worldwide. (Plot of data volumes in PB/month, distinguishing the high-impact path from the production path; sites include the University of Science and Technology of China.)

OSG at CANS 13: Physics needs in 2008
- Petabyte tertiary automated tape storage at 12 centers worldwide for physics and other scientific collaborations.
- High availability (365x24x7) and high data access rates (1 GByte/sec) locally and remotely.
- Evolving and scaling smoothly to meet evolving requirements.
(Diagram: e.g. the CMS computing model, Tier-0 to Tier-1 to Tier-2.)

OSG at CANS 14: OSG data transfer, storage and access: GBytes/sec, 365 days a year, for CMS and ATLAS. Data rates need to reach ~3x current levels within a year. (Plot: ~600 MB/sec aggregate across ~7 Tier-1s, CERN, and the Tier-2s; Beijing is a Tier-2 in this set.)
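To put these targets in perspective, a short back-of-the-envelope calculation (assumptions mine, not from the slides) relates the sustained rates above to monthly data volumes:

```python
# Rough scaling of the transfer-rate targets quoted on the slides.

SECONDS_PER_DAY = 86_400
DAYS_PER_MONTH = 30

def volume_per_month_tb(rate_mb_per_s):
    """Data volume in TB moved in a month at a sustained rate (MB/s)."""
    return rate_mb_per_s * SECONDS_PER_DAY * DAYS_PER_MONTH / 1e6

current = 600         # ~600 MB/sec aggregate today (from the slide)
target = 3 * current  # the ~3x goal within a year

print(f"now   : {current} MB/s -> ~{volume_per_month_tb(current):,.0f} TB/month")
print(f"target: {target} MB/s -> ~{volume_per_month_tb(target):,.0f} TB/month "
      f"(~{volume_per_month_tb(target) / 1000:.1f} PB/month)")
```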

OSG at CANS 15: An aggressive program of end-to-end network performance: complex end-to-end routes; monitoring, configuration and diagnosis; automated redundancy and recovery.

OSG at CANS 16: Submitting locally, executing remotely: 15,000 jobs/day across 27 sites from a handful of submission points, plus test jobs at 55K/day.
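The slide does not show how such submissions are made. As one illustration only, here is a minimal sketch using the present-day HTCondor Python bindings and the Condor-C grid type; the OSG of this era actually used Condor-G with Globus gatekeepers, and the hostnames, executable and job counts below are placeholders:

```python
# Hypothetical sketch: "submit locally, execute remotely" with HTCondor's
# grid universe.  The remote schedd/pool names are made up; the mechanism
# on the slide (Condor-G to GRAM gatekeepers) is analogous but older.

import htcondor  # HTCondor Python bindings

submit = htcondor.Submit({
    "universe": "grid",
    # Route jobs to a (hypothetical) remote site's schedd and pool
    "grid_resource": "condor remote-schedd.site.example.edu remote-pool.site.example.edu",
    "executable": "simulate.sh",
    "arguments": "--events 1000",
    "output": "job.$(Cluster).$(Process).out",
    "error":  "job.$(Cluster).$(Process).err",
    "log":    "job.log",
})

schedd = htcondor.Schedd()                 # the local submission point
result = schedd.submit(submit, count=100)  # queue 100 remotely executed jobs
print("submitted cluster", result.cluster())
```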

OSG at CANS 17: Applications cross infrastructures, e.g. OSG and TeraGrid.

OSG at CANS 18: The OSG Model of Federation. (Diagram: OSG and another grid, e.g. NAREGI, each provide an interface to a given Service-X; an adaptor between OSG-X and AGrid-X lets a VO or user act across both grids. Service-X covers security, data, jobs, operations, information, accounting, ...)
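As an illustration of the adaptor idea (names and interfaces are hypothetical, not OSG code), a VO-side client can program against one Service-X interface while per-grid adaptors translate to each infrastructure's native service:

```python
# Hypothetical sketch of the federation adaptor pattern described above.
# "Service-X" here is job submission; the grid-specific classes stand in
# for whatever native middleware each infrastructure actually exposes.

from abc import ABC, abstractmethod

class JobServiceX(ABC):
    """The common Service-X interface a VO or user programs against."""
    @abstractmethod
    def submit(self, executable, arguments):
        """Submit a job and return an opaque job identifier."""

class OSGJobService(JobServiceX):
    def submit(self, executable, arguments):
        # Would call OSG's native submission machinery here.
        return f"osg-job-001 ({executable} {arguments})"

class NaregiAdaptor(JobServiceX):
    """Adaptor presenting another grid (e.g. NAREGI) through Service-X."""
    def submit(self, executable, arguments):
        # Would translate the request into the other grid's native protocol.
        return f"naregi-job-001 ({executable} {arguments})"

def run_everywhere(grids, executable, arguments):
    """A VO acting across grids sees only the Service-X interface."""
    return [grid.submit(executable, arguments) for grid in grids]

if __name__ == "__main__":
    print(run_everywhere([OSGJobService(), NaregiAdaptor()],
                         "simulate.sh", "--events 1000"))
```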

OSG at CANS 19: A local grid with an adaptor to the national grid, e.g. FermiGrid at Fermilab. (Diagram: before FermiGrid, each community's resource (Astrophysics, Common, Particle Physics, Theory) had its own head node and workers serving only its own users; now a common gateway and central services sit in front of all of them, admitting guest users as well.) Central campus-wide grid services enable efficiencies and sharing across internal farms and storage while maintaining the autonomy of individual resources. Next step: Campus Infrastructure Days, a new activity of OSG, Internet2 and TeraGrid.
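One way to picture the campus gateway (a hypothetical sketch; the cluster names and spill-over policy are illustrative, not FermiGrid's actual configuration): jobs arrive at a single gateway, are routed to the owning community's cluster when it has free slots, and otherwise spill over to any cluster offering opportunistic capacity.

```python
# Hypothetical sketch of a campus-wide gateway in front of autonomous
# clusters, as in the FermiGrid picture above.  Each cluster keeps control
# of its own resources; the gateway only routes requests.

class Cluster:
    def __init__(self, name, owner_vo, slots, share_idle=True):
        self.name = name
        self.owner_vo = owner_vo      # community that owns this farm
        self.free_slots = slots
        self.share_idle = share_idle  # whether idle slots are shared out

    def try_start(self, vo):
        allowed = (vo == self.owner_vo) or self.share_idle
        if allowed and self.free_slots > 0:
            self.free_slots -= 1
            return True
        return False

class CampusGateway:
    """Single entry point routing jobs while preserving cluster autonomy."""
    def __init__(self, clusters):
        self.clusters = clusters

    def submit(self, vo):
        # Prefer the community's own cluster, then opportunistic capacity.
        ordered = sorted(self.clusters, key=lambda c: c.owner_vo != vo)
        for cluster in ordered:
            if cluster.try_start(vo):
                return f"{vo} job started on {cluster.name}"
        return f"{vo} job queued: no capacity anywhere"

if __name__ == "__main__":
    gw = CampusGateway([
        Cluster("astro-farm", "astrophysics", slots=2),
        Cluster("hep-farm", "particle-physics", slots=1),
        Cluster("shared-farm", "common", slots=3),
    ])
    print(gw.submit("astrophysics"))  # lands on the community's own farm
    print(gw.submit("theory"))        # spills over to shared/idle capacity
```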

OSG at CANS 20: Interoperation increasing in scope: information and monitoring; storage interfaces.

OSG at CANS 21: Summary of OSG Today
- Providing core services, software and a distributed facility for an increasing set of research communities.
- Helping Virtual Organizations access resources on many different infrastructures.
- Reaching out to others to collaborate and contribute our experience and efforts.