1.0.4 - LIGO Applications, Kent Blackburn (Robert Engel, Britta Daudert)

on the OSG
Continuing to see an average throughput of 50 to 100K CPU-hours per day.
  – A discernible trend toward lower numbers over the past few weeks.
Running on roughly 30 sites around the OSG.
This is a sweet spot in terms of hands-on oversight and productivity.
  – Robert Engel reports that he has less time to devote to this with the increased demands of the Documentation project.
Other than the University of Wisconsin-Milwaukee, these are strictly opportunistic jobs.
  – Unclear how increased competition and the LHC ramping up will impact these numbers, but for the time being they have been holding up well.
Detailed graphs at …
Has been showcased numerous times in various OSG articles over the past few months.

Future Thoughts
Some interest in developing support for SRM storage elements (SEs) as opposed to the local storage currently used (a sketch of what such a transfer might look like follows this slide).
Probably greater interest in migrating to a standard "pilot" method to replace the LIGO/GEO home-grown job submission/management solution.
Errors encountered have been documented in a report by Robert and given to OSG Production, to improve usability for all communities.
Unclear how much effort is available in the near term to make modifications/enhancements to …, but it is not a lot.
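For concreteness, the sketch below shows what replacing a local-disk write with a transfer to an SRM storage element could look like. It is an illustration only: the srmcp client (from the dCache SRM tools), the endpoint URL, and the file paths are assumptions, not LIGO's actual configuration or tooling.

    # Sketch only: stage a job's output to an SRM storage element instead of
    # leaving it on the worker node's local disk.  The srmcp client, the SRM
    # endpoint, and the paths below are placeholders, not LIGO's actual setup.
    import subprocess

    def stage_out_to_srm(local_path, srm_url):
        """Copy a local file to an SRM storage element with the srmcp client."""
        cmd = [
            "srmcp",
            "file:///" + local_path,   # source file on the worker node
            srm_url,                   # destination on the storage element
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        stage_out_to_srm(
            "/scratch/job123/H1-INSPIRAL.xml.gz",
            "srm://se.example.edu:8443/srm/managerv2?SFN=/ligo/H1-INSPIRAL.xml.gz",
        )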

Binary Inspiral Workflows
Focusing on several areas in an effort to achieve a scientific contribution from the OSG:
  – Use of storage elements to pre-stage the terabytes of data used by workflows designed by LIGO scientists. This is legacy behavior based on the LIGO Data Grid paradigm of having all of the meaningful data at all LDG sites all the time. Development of efficient data-staging scripts that also register data locations on the SE in the RLS catalog and check the integrity of transferred data sets (a sketch of such a script follows this slide).
  – Changes to both the LIGO data analysis codes and Pegasus to make the two grids appear transparent to the workflow submitters. This requires scientific-collaboration backing to accept the methodologies and the code changes/reviews associated with modifying publication-class production codes.
      – The process for this is slow and of lower priority to collaborators in LIGO, often resulting in LIGO Data Grid code changes occurring before sufficient time and review has transpired to vet the OSG-specific changes, leading the team back to the "drawing board" to try again with new code bases. Development and tracking time needs to be greatly reduced, and the differences in the workflow's overall "flow" between the LDG and the OSG removed, to keep pace.
Almost all effort has taken place at two sites: the Caltech/LIGO ITB cluster and Firefly.
  – Looking at using other sites where much less storage is available than would be scientifically interesting to the binary inspiral analysis team, so that greater understanding of the storage element model and its challenges can be explored.
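The data-staging bullet above is the most mechanical piece, so here is a minimal sketch of the idea: transfer a file to the storage element, verify the copy, and register the logical-to-physical mapping in RLS. The tool choices (globus-url-copy, globus-rls-cli), the RLS endpoint, the GridFTP URLs, and the naive verify-by-redownload step are all assumptions for illustration; the actual LIGO/Pegasus staging scripts may work quite differently.

    # Sketch only: transfer a frame file to a storage element, check its
    # integrity, and register its location in an RLS catalog.  Hosts, paths,
    # and the verification strategy are placeholders for illustration.
    import hashlib
    import subprocess

    RLS_SERVER = "rls://rls.example.edu"   # assumed RLS endpoint, not LIGO's

    def md5sum(path):
        """Compute an MD5 checksum of a local file."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def stage_and_register(local_path, lfn, dest_gsiftp_url):
        """Copy a file to the SE, verify it, then record LFN -> PFN in RLS."""
        expected = md5sum(local_path)

        # Transfer to the storage element (GridFTP door assumed).
        subprocess.run(["globus-url-copy", "file://" + local_path,
                        dest_gsiftp_url], check=True)

        # Naive integrity check: pull the copy back and compare checksums.
        subprocess.run(["globus-url-copy", dest_gsiftp_url,
                        "file:///tmp/verify.gwf"], check=True)
        if md5sum("/tmp/verify.gwf") != expected:
            raise RuntimeError("checksum mismatch after transfer: " + lfn)

        # Register the logical-to-physical mapping in the RLS catalog.
        subprocess.run(["globus-rls-cli", "create", lfn, dest_gsiftp_url,
                        RLS_SERVER], check=True)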

Binary Inspiral Future Thoughts
Performance of workflows on Firefly is seen to be an order of magnitude slower than on the LIGO Data Grid (and on the Caltech/LIGO ITB cluster).
  – Unclear why this is the case. Britta has been investigating, hopefully with input from the Firefly admins.
It seems very clear to me and a few (many?) others that the usability and transparency of storage solutions on the OSG are resulting in a loss of traction for analysis codes that demand lots of data out of the gate.
Pegasus has historically performed early binding of environmental factors to the workflows (a toy contrast with late binding follows this slide).
  – Some discussion and effort is underway to explore and develop later bindings, hopefully avoiding some of the issues with the information services used in early binding falling out of date or out of upkeep.
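To make the early-versus-late distinction concrete, the toy sketch below (plain Python, not Pegasus code) contrasts binding every job to a site at planning time, from a snapshot of information-service data that may be stale by run time, with deferring the choice until a pilot is actually running on a resource. Site names and slot counts are invented.

    # Toy illustration of early vs. late binding; not Pegasus code.
    # Snapshot of an information service at planning time; it may be stale
    # by the time the jobs actually run.
    SITE_SNAPSHOT = {"Firefly": 120, "CIT_ITB": 40}

    def plan_early(jobs):
        """Early binding: map every job to a site now, using the snapshot."""
        best = max(SITE_SNAPSHOT, key=SITE_SNAPSHOT.get)
        return {job: best for job in jobs}

    class LateBindingQueue:
        """Late binding: jobs wait in a queue; a pilot that is actually alive
        on some site pulls the next job, so stale snapshots matter less."""
        def __init__(self, jobs):
            self.pending = list(jobs)

        def pull(self, pilot_site):
            return (self.pending.pop(0), pilot_site) if self.pending else None

    if __name__ == "__main__":
        jobs = ["inspiral_%03d" % i for i in range(3)]
        print(plan_early(jobs))      # decided up front from the snapshot
        queue = LateBindingQueue(jobs)
        print(queue.pull("Firefly"))  # decided only when a pilot runs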

Conclusions
LIGO's application is in science production, contributing to the analysis.
  – This is a huge analysis requiring enormous amounts of computation globally before conclusions at the level of a scientific publication are mature, but it will happen!
The need to have large data sets available to the jobs in the binary inspiral workflows has been challenging, given the current usability of storage on the overall OSG "Grid".
  – Performance that matches well with the LDG for job and data throughput has yet to be demonstrated on the OSG.