PanDA engagement with theoretical community via SciDAC-4

PanDA engagement with theoretical community via SciDAC-4
S. Panitkin (BNL)

Executive Summary
In late 2016, BigPanDA was contacted by two collaborations of theorists who were preparing SciDAC proposals and were interested in using PanDA for workload management of their simulations:
- USLQCD Collaboration
- CRUNCH Collaboration
After initial discussions we agreed to join both proposals.
We first engaged both collaborations about 1.5 years ago as part of the BigPanDA beyond-ATLAS activities:
- Discussions with P. Petreczky at BNL (LQCD)
- BigPanDA presentation at the MADAI (future CRUNCH) collaboration meeting at MSU
DOE NP-ASCR (SciDAC-4) call for proposals: support for cutting-edge computational nuclear theory on LCFs and DOE computing facilities; 5-year projects.
- Letters of intent submitted Jan. 2017
- Proposals submitted Feb. 2017
- Decisions expected mid-summer 2017

LQCD Project
Title: "Computing the Properties of Matter with Leadership Computing"
Lead PI: Robert Edwards (JLab)
~20 collaborators from BNL, MIT, LANL, MSU, GWU, UNC, JLab, W&M, PNNL, Intel, NVIDIA
Lattice QCD calculations in support of the JLab, RHIC, and LHC-HI physics programs and a future electron-ion collider

CRUNCH Project
Title: "Calculating Rigorous Uncertainties for Nuclear Collisions of Heavy ions (CRUNCH)"
Lead PI: Michael Strickland, Kent State U.
~10 collaborators from BNL, Duke, MSU, OSU, LLNL, Minnesota, Rutgers
Modeling of heavy-ion collisions in support of RHIC and LHC-HI

Computing aspects: LQCD example
Lattice QCD: 170M hours of allocations on LCFs in 2016 through INCITE and SciDAC.
HTC-like computing in many cases. The points below are based on the BNL LQCD group's workflow on Titan:
- Many MPI-wrapped, independent, GPU-enabled, single-node jobs with ~5 hour run times; the number of such jobs sets the number of ranks on Titan (see the sketch below)
- Currently direct, by-hand submission: ~2 small and ~2 big jobs per week
- USLQCD uses several LCFs and plans to work on next-generation machines (Sierra, Aurora, ...); codes are optimized for GPUs and Xeon Phi, and NVIDIA and Intel are collaborators on the proposals
Very suitable for automation with PanDA: we expect PanDA to automate job submission and resubmission, increase the volume of submitted jobs, and provide monitoring and data movement.
A meeting with Robert Edwards in early April 2017 will start the discussion of detailed requirements and implementation details.
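
To make the bundling pattern above concrete, below is a minimal Python sketch, assuming mpi4py is available on the machine. It is not the USLQCD production wrapper: the executable name (lqcd_app), its command-line flags, and the input/output file layout are hypothetical. Each MPI rank occupies one node and launches its own independent GPU-enabled task, so the requested rank count equals the number of bundled single-node jobs.

# Minimal sketch of the "MPI-wrapped, independent, single-node jobs" pattern
# described above: one MPI rank per Titan node, each rank launching its own
# GPU-enabled lattice task. Not the USLQCD production wrapper; the executable
# name, flags, and file layout are hypothetical.
import os
import subprocess
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # one rank == one independent single-node job
size = comm.Get_size()   # total ranks == number of nodes in the allocation

workdir = os.path.join("run", "task_%04d" % rank)
os.makedirs(workdir, exist_ok=True)

# Each rank processes its own configuration; ranks exchange no data.
# MPI is used only to bundle the jobs into a single Titan batch allocation.
cmd = ["./lqcd_app",                                   # hypothetical binary
       "--input", "configs/input_%04d.xml" % rank,     # hypothetical flags
       "--output", os.path.join(workdir, "out.lime"),
       "--gpu", "0"]
with open(os.path.join(workdir, "stdout.log"), "w") as log:
    result = subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT)

# Collect exit codes on rank 0 so failed sub-jobs can be flagged for
# resubmission, one of the steps PanDA is expected to automate.
codes = comm.gather(result.returncode, root=0)
if rank == 0:
    failed = [i for i, c in enumerate(codes) if c != 0]
    print("%d of %d sub-jobs failed: %s" % (len(failed), size, failed))

On Titan such a wrapper would typically be launched from a PBS batch script with something like "aprun -n <number_of_jobs> -N 1 python wrapper.py"; preparing and submitting that batch job, retrying the failed sub-jobs, and moving the outputs are the steps PanDA is expected to take over.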