Vanderbilt Tier 2 Project
Charles Maguire, Project Director
Department of Physics and Astronomy, Vanderbilt University
January 22, 2012

Overview

Mission: to be the principal computing center for the CMS-HI research program at the LHC
- Receipt and long-term storage of the heavy ion data taken at the LHC
- Analysis center for processing these data, open to all CMS users
- Only center for performing re-reconstruction passes on the data

The Tier 2 is defined by the Project Management and Acquisition Plan
- Five-year project funding plan totaling $2.4M
- Three MoUs, with CMS, FNAL, and Vanderbilt, specify the joint activities

Chronology
- Initial funding authority for 3 years, starting November 1, 2010: $1.84M
- Follow-up 2-year funding expected after a first-phase review in 2013: $0.56M
- Activities documented in three periodic reports (most recent on Dec. 20, 2011)
- The center has been performing its missions successfully since March 2011, including a major role in the 2011 data taking and file transfers

Current Status

Hardware from the first two purchase stages, in 2010 and 2011:
- 60 dual quad-core nodes with 3 GB memory/core and 250 GB local disk
- 20 dual hex-core nodes (newer) with 4 GB memory/core and 1 TB local disk
- Net 720 cores (job slots), with opportunistic bursting to 1050 jobs
- 20 networked disk depots (the local I/O system) with 66 TB per depot, using RAID5, giving ~1.055 PB (see the capacity sketch below)
- Disk space is currently 90% full, holding both the 2010 and 2011 data

Final purchases on the 3-year budget to be made in Spring 2012:
- Mix of cores and disk space to be determined by the anticipated beam choices for the 2012 run: p+Pb, pp, and/or Pb+Pb
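
The arithmetic behind the quoted core and storage totals, as a minimal Python sketch (not part of the original slides; the ~20% RAID5 and formatting overhead is an assumption chosen to illustrate how 20 x 66 TB of raw disk lands near the quoted ~1.055 PB):

    # Back-of-the-envelope check of the core count and usable disk space.
    quad_nodes, hex_nodes = 60, 20
    cores = quad_nodes * 2 * 4 + hex_nodes * 2 * 6   # dual quad-core + dual hex-core nodes
    print(cores)                                     # 720 job slots

    depots, tb_per_depot = 20, 66
    raw_tb = depots * tb_per_depot                   # 1320 TB raw
    overhead = 0.20                                  # assumed RAID5 parity + formatting loss
    print(raw_tb * (1 - overhead) / 1000)            # ~1.06 PB usable, near the quoted ~1.055 PB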

Project Management and Acquisition Plan: Hardware and Staffing Costs at Vanderbilt

    Category                  2010       2011       2012       2013       2014       Total
    New CPU (HS06)            3268       5320       9120                             23028
    Total CPU (HS06)          3268       8588       17708
    New Disk (TBytes)         485        280        300        135                   1200
    Total Disk (TBytes)                  765        1065
    Hardware Cost             $258,850   $252,000   $309,000   $127,255              $947,105
    Staffing Cost (To DOE)    $180,476   $188,285   $195,816   $202,649   $211,795   $980,021
    Total Cost                $439,326   $440,285   $504,816   $330,904              $1,927,126
    (To Vanderbilt)

Project Management and Acquisition Plan: Tape Archiving Costs at Fermilab

    Category           2010      2011       2012      2013       2014       Total
    Tape Volume (PB)   0.6       1.0        0.5       1.4                   4.9
    Cost to DOE        $94,000   $103,000   $40,000   $116,000   $120,000   $473,000
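
A short cross-check in Python (not from the slides): the tape costs above sum to the stated $473,000 and, added to the five-year DOE total at Vanderbilt from the previous table, recover the roughly $2.4M overall funding figure quoted in the Overview:

    # Cross-check the stated totals; all figures are taken from the two cost tables.
    tape_costs = [94_000, 103_000, 40_000, 116_000, 120_000]
    print(sum(tape_costs))                           # 473000, matching the stated total

    vanderbilt_doe_total = 1_927_126                 # "Total Cost" from the Vanderbilt table
    print(vanderbilt_doe_total + sum(tape_costs))    # 2400126, i.e. the ~$2.4M in the Overview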

Transfers of 2011 PbPb RECO Data

- Began November 15 and ended December 13
- Transferred 385 TB of PbPb RECO data from the CERN T0 to the Vanderbilt T2, without any major interruption, during 24 days of running
- Maximum rate of 345 MB/s, corresponding to ~30 TB/day (see the rate sketch below)
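
A small rate-arithmetic sketch in Python (not from the slides), confirming that the 345 MB/s peak corresponds to roughly 30 TB/day and showing the implied average rate over the 24 days:

    # Sanity-check the quoted transfer rates.
    seconds_per_day = 86400

    peak_mb_per_s = 345
    print(peak_mb_per_s * seconds_per_day / 1e6)     # ~29.8 TB/day, i.e. the quoted ~30 TB/day

    total_tb, days = 385, 24
    print(total_tb / days)                           # ~16 TB/day sustained on average
    print(total_tb * 1e6 / (days * seconds_per_day)) # ~186 MB/s average rate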

CMS Analysis Jobs at Vanderbilt

- The Vanderbilt Tier 2 was officially opened for analysis jobs on April 15, 2011
- Bursts of activity correlate with meeting preparations
- Drops in activity occur after meetings, or during the 2011 run