BigPanDA Workflow Management on Titan

Presentation transcript:

BigPanDA Workflow Management on Titan: Next Generation Workflow Management for High Energy Physics

We worked with BigPanDA to provide interoperability across resources using SAGA. Now we are enabling more flexible execution of workloads on Titan for BigPanDA. This will allow PanDA to support ATLAS workloads in the regular queue, improved execution modes, and different kinds of workloads.

The Simple API for Grid Applications (SAGA) is a family of related standards, specified by the Open Grid Forum, that defines an application programming interface (API) for common distributed computing functionality.

"BigPanDA Workflow Management on Titan for High Energy and Nuclear Physics and for Future Extreme Scale Scientific Applications," DOE/SC/ASCR Next-Generation Networking for Science, Rich Carlson. PI: Alexei Klimentov (BNL); Co-PIs: K. De (U. Texas at Arlington), S. Jha (Rutgers U.), J. C. Wells (ORNL)
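To make the SAGA layer concrete, below is a minimal sketch of submitting a batch job through the Python SAGA bindings (saga-python, now maintained as radical.saga) to a PBS/Torque-style scheduler like the one that fronted Titan. The host, allocation ID, queue, and payload script are illustrative placeholders, not the project's actual configuration.

import radical.saga as rs

# Describe the batch job: payload, resources, and wall-time limit.
# All values below are hypothetical examples.
jd = rs.job.Description()
jd.executable      = "/lustre/atlas/scripts/run_payload.sh"  # placeholder payload
jd.project         = "CSC108"      # placeholder allocation ID
jd.queue           = "batch"
jd.total_cpu_count = 16
jd.wall_time_limit = 60            # minutes
jd.output          = "payload.out"
jd.error           = "payload.err"

# Connect to the PBS-style scheduler over SSH and submit the job.
js  = rs.job.Service("pbs+ssh://titan.ccs.ornl.gov")
job = js.create_job(jd)
job.run()
job.wait()
print("Job finished in state:", job.state)

The same script, pointed at a different adaptor URL (e.g. slurm+ssh://...), runs unchanged on other resources; that is the interoperability SAGA provides.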

Year-1 Accomplishment Highlights

HPC Operations: Increased Titan's utilization by more than 2 percent, an increase larger than the size of the average OLCF INCITE project.

Computer Science: Prototyped and implemented a reference implementation (PanDA NGE) of the Pilot Jobs model.
"Converging High-Throughput and High-Performance Computing: A Case Study," https://arxiv.org/abs/1704.00978
"A Building Blocks Approach towards Domain Specific Workflow Systems," https://arxiv.org/abs/1609.03484

Physics Accomplishments: Titan is having an impact across a broad range of LHC physics topics; almost every paper we publish is touched. 52 physics publications since September 2016 acknowledge OLCF resources.
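For readers unfamiliar with the Pilot Jobs model that PanDA NGE implements: a placeholder "pilot" job acquires the batch slot first, then pulls and executes real workloads from the PanDA server for as long as the slot lasts. The sketch below illustrates only the control flow; the server URL, endpoints, and JSON fields are hypothetical, not the real PanDA protocol.

import subprocess
import time

import requests

PANDA_SERVER = "https://panda.example.org"  # hypothetical endpoint
slot_ends_at = time.time() + 3600           # remaining wall time of this slot

while time.time() < slot_ends_at:
    # Ask the server for the next payload suited to this resource.
    resp = requests.get(f"{PANDA_SERVER}/getJob", params={"site": "ORNL_Titan"})
    job = resp.json()
    if not job:
        break  # work queue drained; release the batch slot early
    # Run the payload inside the already-acquired allocation.
    rc = subprocess.call(job["command"], shell=True)
    # Report the outcome so the server can track the workload.
    requests.post(f"{PANDA_SERVER}/updateJob",
                  data={"jobId": job["id"], "status": "finished" if rc == 0 else "failed"})

Decoupling resource acquisition (the pilot) from workload binding (the server-side queue) is what lets PanDA route late-arriving ATLAS jobs onto whatever slots become available.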

Making Use of "Unusable Backfill"

Consumed 129 million Titan core hours from July 2016 to June 2017. This is 2.5 percent of the total available time on Titan, equivalent to 170 percent of the average INCITE project.
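The mechanism behind these numbers: before submitting, the pilot probes the scheduler for idle "backfill" windows, slots too small or too short for the queued leadership-class jobs, and shapes its own request to fit inside them. Below is a sketch of the idea, assuming Moab's showbf utility; the output parsing is simplified and illustrative, since the exact format varies by site.

import subprocess

def probe_backfill():
    """Return (free_nodes, free_minutes) from Moab's showbf, or None."""
    out = subprocess.check_output(["showbf"], text=True)
    for line in out.splitlines():
        fields = line.split()
        # Hypothetical parse: partition name, node count, hh:mm:ss window.
        if fields and fields[0] == "ALL":
            nodes = int(fields[1])
            h, m, _s = fields[2].split(":")
            return nodes, int(h) * 60 + int(m)
    return None

slot = probe_backfill()
if slot:
    nodes, minutes = slot
    # Ask for slightly less than the free window so the scheduler can
    # start the job immediately as backfill instead of queueing it.
    print(f"Requesting {nodes} nodes for {minutes - 5} minutes")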

ATLAS Simulation Time Worldwide: February 2017

ATLAS detector simulation is integrated with Titan (OLCF), and Titan already contributes a sizable fraction of the computing resources for MC simulation: 4.4% of total ATLAS simulation time in February 2017.
[Chart: ATLAS simulation wall-clock time by site, February 2017; Titan callout: > 5%]

Physics Accomplishments

Simulated a variety of Standard Model background processes. Titan is having an impact across a broad range of LHC physics topics; almost every paper we publish is touched.
- Leptonic decay of W bosons, to mu+nu and e+nu
- Invisible decay of Z bosons, to two neutrinos
- Drell-Yan tau+tau production
- Gamma+jet background, J/psi production, ttbar to all decay channels, trilepton exotic model

Many different generators were used: Sherpa, Powheg, and Pythia, at NLO and NNLO. All samples were processed through the ATLAS detector simulation using Geant4. These are the MC15/MC16 campaigns, relevant for the 13 TeV data runs in 2016-2018.