ATLAS Computing at Harvard, John Huth. DOE Review, 14 Aug 08.

Presentation transcript:

ATLAS Computing at Harvard, John Huth

ATLAS Computing at Harvard
Two functions:
– Supply computing power (storage and CPU) for Harvard investigators to support analysis and simulation
– Act as a Tier 2 facility, jointly with BU (Northeast Tier 2, NET2)
Recent addition of substantial local resources for computing, connected to the Open Science Grid

ATLAS Computing Model
The ATLAS computing model uses a tiered system to give all members speedy access to the reconstructed data needed for analysis and to the raw data needed for monitoring, calibration, and alignment.
Tier-0 at CERN
– Archives and distributes RAW data
– Provides first-pass processing
– Restricted to the central production group
~10 Tier-1 facilities
– Store selected RAW data, store derived data, and perform processing on RAW data
– Restricted to working group managers
Regional Tier-2 facilities
– Resources for local research, such as analysis, simulation, and calibration
– Open to all members of the collaboration
Local Tier-3 facilities
– Typically clusters housed at a university or lab
– Allow fast analysis of derived datasets
– Typically open only to local members
[Diagram labels: RAW data from ATLAS; RAW data; derived data; simulated data]
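For illustration only (not part of the original slides), a minimal Python sketch that encodes the tier roles and access policies listed above; the dictionary layout and field names are assumptions chosen for readability.

    # Minimal sketch of the tiered ATLAS computing model described on this slide.
    # Tier roles and access policies are taken from the slide text; the data
    # structure itself is illustrative only.
    TIERS = {
        "Tier-0 (CERN)": {
            "roles": ["archive and distribute RAW data", "first-pass processing"],
            "access": "central production group only",
        },
        "Tier-1 (~10 facilities)": {
            "roles": ["store selected RAW data", "store derived data",
                      "process RAW data"],
            "access": "working group managers",
        },
        "Tier-2 (regional)": {
            "roles": ["analysis", "simulation", "calibration"],
            "access": "all members of the collaboration",
        },
        "Tier-3 (local)": {
            "roles": ["fast analysis of derived datasets"],
            "access": "local members only",
        },
    }

    for tier, info in TIERS.items():
        print(f"{tier}: {', '.join(info['roles'])}  [access: {info['access']}]")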

Computing Requirements

Open Science Grid
– Integrating computing and storage resources from more than 50 sites, OSG is a U.S. distributed computing infrastructure designed for large-scale scientific research
– OSG provides the software framework, middleware, and oversight for more than 35 Virtual Organizations (VOs), which provide local resources and user services
– OSG is funded and supported by the NSF and DOE

Harvard and Scientific Computing
A recent and dramatic increase in support for scientific computing at Harvard
– Hardware, facilities, and personnel
Commitment to a major scientific computing center along Oxford Street
– Currently rental space at the 1 Summer Street facility
– Dedicated support for high-end computing supplied by the University

Harvard's Role in the ATLAS Computing Model
CERN Tier-0: CPU 4480 kSi2K, Disk 330 TB, Tape 1620 TB
BNL Tier-1: CPU 4900 kSi2K, Disk 2000 TB, Tape 1000 TB
Northeast Tier-2 (NET2):
– BU ATLAS Cluster: CPU 700 kSi2K, Disk 236 TB, Tape 0 TB
– Harvard University Odyssey Cluster: CPU 5600 kSi2K, Disk 300 TB, Tape 0 TB
[Diagram: overall flow of data into NET2]
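As a quick worked example (a sketch, not from the original slides), the NET2 figures above can be aggregated as follows:

    # Aggregate NET2 capacity from the per-site figures on this slide.
    bu_cpu_ksi2k, bu_disk_tb = 700, 236             # BU ATLAS Cluster
    harvard_cpu_ksi2k, harvard_disk_tb = 5600, 300  # Harvard Odyssey Cluster

    net2_cpu_ksi2k = bu_cpu_ksi2k + harvard_cpu_ksi2k  # 6300 kSi2K
    net2_disk_tb = bu_disk_tb + harvard_disk_tb        # 536 TB
    print(f"NET2 totals: {net2_cpu_ksi2k} kSi2K CPU, {net2_disk_tb} TB disk")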

FAS Computing

Odyssey physical installation at 1 Summer Street

Capabilities of Odyssey
– General purpose Intel x86_64 CPU cores (9,543 GHz)
– 16 TB DRAM
– InfiniBand, fully non-blocking
– ~ % efficiency

FAS Cluster configuration - 1

Odyssey Performance
Tests of the Odyssey cluster showed that a sustained rate of up to 600 jobs could be handled, even under heavy concurrent loads. These were mainly simulation jobs with light I/O.

Computing Capabilities
– Based on the CPU ratings, the Harvard cluster has as much aggregate CPU power as all other US ATLAS Tier 2s combined.
– Based on concurrent usage by other applications, it is likely that we can at least triple the CPU power available to NET2 with Odyssey online.
– Available to Harvard ATLAS as a priority: a huge resource for analysis and data storage.
– An example of the "leverage" envisaged for Tier 2s.
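A rough numerical check of the "at least triple" claim, using the kSi2K ratings from the earlier slide; the fraction of Odyssey assumed to be free for ATLAS is a hypothetical illustrative value, not a number from the slides.

    # Rough check of the "at least triple" claim using the slide's kSi2K ratings.
    bu_cpu_ksi2k = 700        # BU ATLAS Cluster (NET2 without Odyssey)
    odyssey_cpu_ksi2k = 5600  # Harvard Odyssey Cluster

    # Hypothetical: only a fraction of Odyssey is free for ATLAS because other
    # applications share the cluster; 0.25 is an illustrative value only.
    available_fraction = 0.25
    net2_cpu = bu_cpu_ksi2k + available_fraction * odyssey_cpu_ksi2k
    print(f"NET2 CPU grows by a factor of {net2_cpu / bu_cpu_ksi2k:.1f}")
    # Even at 25% availability the factor is 3.0, consistent with "at least triple".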