OSG Area Coordinators’ Meeting 10.0.4 LIGO Applications (NEW) Kent Blackburn Caltech / LIGO October 29th, 2009

Presentation transcript:

OSG Area Coordinators’ Meeting LIGO Applications (NEW) Kent Blackburn Caltech / LIGO October 29th, 2009

Current Initiatives
LIGO applications fall coarsely into four groups:
- Periodic search codes (e.g., pulsars)
- Compact binary coalescence codes (e.g., black holes, neutron stars, exotic stars)
- Burst search codes (e.g., supernovae)
- Stochastic search codes (e.g., primordial gravitational waves from the early universe)
Only the periodic and compact binary coalescence applications demand computing resources large enough to warrant exploring non-LIGO Data Grid resources.

Current Initiatives (cont.)
LIGO applications have existed for more than a decade, predating the LHC era.
- They are based on the LIGO Data Grid (LDG) architecture and its unique advantages for our scientific user community and analysis needs.
- Migration onto the OSG is a challenge, constrained by the requirement to preserve the integrity and trust of the LIGO Scientific Collaboration (60+ institutions, 650+ members).
- Focusing on the low-hanging fruit:
  - Einstein@Home codes (BOINC Engine) for periodic applications
  - ihope / Pegasus workflow planner codes for compact binary coalescence applications
- Both can gain scientifically from having additional resources beyond the existing internal LIGO Data Grid resources.

Key Accomplishments: Einstein@Home on the OSG
- Close to 18 months of development have gone into this effort.
- Based on original code established for the German D-Grid.
- Began as a WS-GRAM submission application; since evolved to GT2 and now to Condor-G (a minimal submission sketch follows this slide).
- Currently the number-two contributor in the world on a weekly basis (excluding contributions made using non-gatekeeper submission mechanisms).
- Robert Engel will contribute approximately 25% of his time to maintaining and growing this contribution.
- Useful URL:
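To make the submission path concrete, here is a minimal sketch (not LIGO's actual submission code) of a Condor-G grid-universe job aimed at a GT2 gatekeeper, written and submitted from Python. The gatekeeper hostname, executable, and arguments are placeholders, and it assumes condor_submit is available on the submit host.

import subprocess

# Condor-G grid-universe submit description targeting a GT2 gatekeeper.
# Hostname, executable, and arguments below are placeholders, not LIGO's.
SUBMIT_DESCRIPTION = """\
universe       = grid
grid_resource  = gt2 gatekeeper.example.edu/jobmanager-condor
executable     = search_wrapper.sh
arguments      = --work-unit 42
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output = job.$(Cluster).$(Process).out
error  = job.$(Cluster).$(Process).err
log    = job.log
queue 1
"""

def submit():
    # Write the submit file and hand it to condor_submit (assumed installed).
    with open("gridjob.sub", "w") as f:
        f.write(SUBMIT_DESCRIPTION)
    subprocess.check_call(["condor_submit", "gridjob.sub"])

if __name__ == "__main__":
    submit()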

Key Accomplishments: ihope Compact Binary Coalescence on the OSG
- This code base is maintained by the largest sub-community within LIGO.
- Highly dynamic code base.
- Highly optimized to fit within existing LIGO Data Grid resources (the LDG resource scope is sized around this application).
- Constructed as a large workflow (~100,000 jobs).
- Each workflow requires tens of terabytes of LIGO data to be readily available.
- All effort to date has been prototype/porting activity on small, scientifically uninteresting segments of the data.
- Pegasus has been adopted and accepted by this sub-community for workflow management of ihope (see the abstract-workflow sketch after this slide).
- Britta Daudert will be working close to 100% on the evolution of this application, to bring the OSG into the arena of science contributors.
- Not a production application for the OSG at this time; LIGO is successful with existing LDG resources.
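As an illustration of how such a workflow is expressed for Pegasus, the following is a minimal abstract-workflow sketch using the Pegasus DAX3 Python API (a later-era API than the 2009 toolchain, used here purely for illustration). Executable names, file names, and arguments are hypothetical stand-ins for the real ihope pipeline; pegasus-plan would turn the resulting DAX into a Condor DAG for execution.

from Pegasus.DAX3 import ADAG, Job, File, Link

# Tiny two-job chain in the spirit of an inspiral analysis; real ihope
# workflows chain on the order of 100,000 jobs. All names are placeholders.
dax = ADAG("inspiral_demo")

frame    = File("H1-FRAME-900000000-4096.gwf")   # input LIGO frame data
bank     = File("tmpltbank.xml")                 # intermediate template bank
triggers = File("triggers.xml")                  # final trigger output

tmpltbank = Job(name="tmpltbank")
tmpltbank.addArguments("--frame", frame, "--output", bank)
tmpltbank.uses(frame, link=Link.INPUT)
tmpltbank.uses(bank, link=Link.OUTPUT)
dax.addJob(tmpltbank)

inspiral = Job(name="inspiral")
inspiral.addArguments("--frame", frame, "--bank", bank, "--output", triggers)
inspiral.uses(frame, link=Link.INPUT)
inspiral.uses(bank, link=Link.INPUT)
inspiral.uses(triggers, link=Link.OUTPUT, transfer=True)
dax.addJob(inspiral)

dax.depends(parent=tmpltbank, child=inspiral)

with open("inspiral_demo.dax", "w") as f:
    dax.writeXML(f)   # hand the abstract DAX to pegasus-plan for site mapping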

Key Issues and Obstacles: Einstein@Home on the OSG
- Going very well as a production application.
- The amount of storage needed for this application scales with the number of worker nodes participating at a site (see the staging sketch after this slide).
- A few sites have insufficient storage in the $OSG_DATA area to allow full utilization.
- Miron has offered to pay for additional storage (on the order of a terabyte) where this is a problem.
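A hypothetical pre-flight guard (not an official OSG or LIGO tool) illustrates how a worker-node job might cope with sites whose $OSG_DATA area is too small: check the shared-data directory for headroom before staging frames, and fall back to node-local scratch otherwise. The 10 GB threshold is an assumption.

import os

MIN_FREE_GB = 10  # assumed per-job staging requirement; tune per workflow

def pick_staging_dir():
    """Prefer $OSG_DATA when it exists and has room, else local scratch."""
    osg_data = os.environ.get("OSG_DATA")
    if osg_data and os.path.isdir(osg_data):
        st = os.statvfs(osg_data)
        free_gb = (st.f_bavail * st.f_frsize) / 1e9
        if free_gb >= MIN_FREE_GB:
            return osg_data
    # Insufficient shared storage at this site; use node-local scratch.
    return os.environ.get("TMPDIR", "/tmp")

if __name__ == "__main__":
    print(pick_staging_dir())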

Key Issues and Obstacles (cont.): ihope Application on the OSG
- Many obstacles revolve around the large volumes of data needed to run a scientifically valuable workflow.
- OSG's own views of its storage solution are only now emerging.
- Finding a unified, transparent "view" for using large storage resources on the OSG is a challenge.
- Need a solution that is "automatic"; currently much work has to be done by hand to move from site to site on the OSG.
- Evolution requires changes to LIGO and Pegasus base codes.
- These changes burn up good will, trust, and manpower to keep up with evolution (e.g., GridCat -> VORS -> MyOSG -> OSG-MM).
- An opportunity is emerging to gain additional support from Syracuse University under the LIGO domain.
- Will benefit from having a well-defined storage architecture in place in the OSG for opportunistic compute/storage applications.

WBS Exception Reporting
Currently finishing up a year-three milestone to prove that a large LIGO workflow (~50,000 jobs) can use SRM / OSG Storage Element technology. This workflow has been running for close to a month on the Caltech ITB cluster, which has over 100 worker nodes. Due to an issue with Condor (which has been reported), only a few jobs are actually running on this large resource, so what was expected to complete in a week is taking a month. In principle, the milestone has been accomplished.