ATLAS Computing at SLAC: Future Possibilities
Richard P. Mount
Western Tier 2 Users Forum, April 7, 2009

SLAC Computing Today
- Over 7,000 batch cores, drawing about 0.6 MW (cross-checked in the arithmetic below)
- ~2 PB of disk
- ~20 Cisco 6509s
- ~13 PB of robotic tape silo slots (<4 PB of data)
- ~10 Gb/s connections to both ESnet and Internet2
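A quick cross-check of these figures (back-of-the-envelope arithmetic only; the per-core wattage and tape fill fraction are derived here, not quoted in the talk):

    # Illustrative arithmetic based on the figures quoted above.
    batch_cores = 7000           # "over 7,000 batch cores"
    batch_power_mw = 0.6         # quoted total power draw of the batch farm
    print(f"~{batch_power_mw * 1e6 / batch_cores:.0f} W per core")   # ~86 W per core

    tape_slots_pb = 13.0         # robotic tape silo slot capacity
    tape_data_pb = 4.0           # upper bound on data actually stored
    print(f"tape plant <{tape_data_pb / tape_slots_pb:.0%} full")    # <31% full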

Long Lead-Time Infrastructure Planning
- (capacity-planning chart; only the "Present capacity" label survives in this transcript)

Infrastructure: The Next 3-4 Years
- Immediate issue:
  - Any single component of the "optimistic" scenario takes us above our current power/cooling capacity
  - New power/cooling requires a minimum of 9 months
  - 3 months' notice of a new requirement is normal (see the timing sketch below)
- Need to plan and implement upgrades now:
  - IR2 racks (up to 400 kW total): favored
  - New BlackBox? Disfavored
  - New water-cooled racks? Maybe
- Need to insist on a 4-year replacement cycle
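A minimal sketch of the timing squeeze described above; the capacity, load, and dates are hypothetical placeholders, and only the 9-month build time and 3-month notice period come from the slide:

    from datetime import date, timedelta

    installed_capacity_kw = 1200   # hypothetical current power/cooling capacity
    current_load_kw = 1000         # hypothetical present IT load
    new_requirement_kw = 400       # hypothetical new procurement

    if current_load_kw + new_requirement_kw > installed_capacity_kw:
        notice = timedelta(days=3 * 30)   # ~3 months' notice of the new requirement
        build = timedelta(days=9 * 30)    # ~9 months to add power/cooling
        request_date = date(2009, 10, 1)  # hypothetical date the request arrives
        delivery = request_date + notice  # when the new hardware shows up
        must_start_by = delivery - build  # when the infrastructure work must begin
        # The upgrade has to start months before the request even exists,
        # which is why capacity has to be planned now.
        print(f"upgrade must start by {must_start_by} to be ready on {delivery}")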

Infrastructure: 2013 On
- Proposed (Stanford) Scientific Research Computing Facility
- Modular: up to 8 modules
- Up to 3 MW payload per module
- Ambient-air cooled
- Cheaper than Sun BlackBoxes
- But not free! (~$10 per W capital cost; see the estimate below)
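Taking the quoted figures at face value, the implied capital cost is straightforward to estimate (an illustration, not a budget from the talk):

    # Illustrative estimate from the per-watt figure quoted above.
    payload_w_per_module = 3e6    # "up to 3 MW payload per module"
    capital_per_w = 10.0          # "~$10 per W capital cost"
    modules = 8                   # "up to 8 modules"

    per_module = payload_w_per_module * capital_per_w
    print(f"~${per_module / 1e6:.0f}M per fully loaded module")              # ~$30M
    print(f"~${modules * per_module / 1e6:.0f}M for all {modules} modules")  # ~$240M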

Concept for a Stanford Research Computing Facility at SLAC (~2013)

First Two Modules

Module Detail

Green Savings

SLAC Computing Goals for ATLAS Computing
- Enable a major role for the "SLAC Community" in LHC physics
- Be a source of Flexibility and Agility (fun)
- Be among the leaders of the Revolution (fun, but dangerous)
- Would love to provide space/power/cooling for SLAC Community equipment at zero cost, but can offer efficiency and professionalism for T3af facilities (satisfaction of a job well done)

SLAC Management Goals for SLAC Computing
- Continue to play a role in HEP computing at the level of the current sum of:
  - BaBar +
  - ATLAS T2 +
  - FGST (GLAST) +
  - Astro +
  - Accelerator modeling
- Maintains and develops core competence in data-intensive science

Simulation – the First Revolution?
- BaBar computing:
  - Simulation : Data Production : Analysis = 26% : 31% : 43% of hardware cost (worked through below)
  - Most simulation done at universities, in return for pats on the back
  - Most data production and analysis done at "Tier A" centers, in return for a reduction of payments to the operating common fund
- Simulation as the dominant use of T2s seems daft
- Amazon EC2 can (probably) do simulation cheaper than a T2
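For concreteness, the quoted cost fractions applied to a hypothetical hardware budget (the $10M total is an assumption, not a figure from the talk):

    # Split of a hypothetical hardware budget using the fractions quoted above.
    budget_musd = 10.0
    fractions = {"Simulation": 0.26, "Data Production": 0.31, "Analysis": 0.43}
    for activity, frac in fractions.items():
        print(f"{activity:>15}: ${budget_musd * frac:.1f}M")
    # Simulation $2.6M, Data Production $3.1M, Analysis $4.3M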

SLAC as a T3af
- Costs (see following)
- Benefits:
  - Pooled batch computing: buy 10s of cores, get access to 1000s of cores (sketched below)
  - Storage sharing: get a fraction of a high-reliability device
  - High-performance access to data
  - Mass storage
  - High availability
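The pooled-batch benefit is essentially a fair-share argument: a group that buys tens of cores is entitled to roughly its purchased share of the pool on average, but can burst into idle cores across the whole facility. A minimal sketch with hypothetical numbers (only the pool size echoes the ~7,000 cores quoted earlier):

    # Minimal fair-share sketch: a small stakeholder in a large shared pool.
    pool_cores = 7000            # total batch pool, of order the figure quoted earlier
    purchased_cores = 48         # hypothetical group purchase
    idle_cores = 2500            # hypothetical idle capacity at some instant

    guaranteed_share = purchased_cores / pool_cores   # long-run entitlement
    burst_cores = purchased_cores + idle_cores        # usable right now
    print(f"entitled to ~{guaranteed_share:.1%} of the pool on average, "
          f"but can run on ~{burst_cores} cores when others are idle")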

Housing $1M/year in the BaBar Electronics Huts

Personal Conclusion
- US ATLAS analysis computing:
  - will need flexibility, agility, revolution
  - seems dramatically under-provisioned
- SLAC management would like to help address these issues
- These issues are my personal focus