1
University user perspectives of the ideal computing environment and SLAC's role
Bill Lockman, SLUO/LHC workshop Computing Session, July 17, 2009

Outline:
View of the ideal computing environment
ATLAS Computing Structure
T3 types and comparisons
Scorecard
2
My view of the ideal computing environment
Full system support by a dedicated professional: hardware and software (OS and file system)
High-bandwidth access to the data at the desired level of detail, e.g., ESD, AOD, summary data and conditions data
Access to all relevant ATLAS software and grid services
Access to compute cycles equivalent to the purchased hardware
Access to additional burst cycles
Access to ATLAS software support when needed
Conversationally close to those in the same working group, preferentially face to face

These are my views, derived from discussions with Jason Nielsen, Terry Schalk (UCSC), Jim Cochran (Iowa State), Anyes Taffard (UCI), Ray Frey, Eric Torrence (Oregon), Gordon Watts (Washington), Richard Mount, Charlie Young (SLAC).
3
ATLAS Computing Structure
ATLAS uses a world-wide tiered computing structure in which ~30 TB of raw data per day from ATLAS is reconstructed, reduced and distributed to the end user for analysis.
T0: CERN
T1: 10 centers worldwide. US: BNL. No end-user analysis.
T2: some end-user analysis capability at 5 US centers, 1 located at SLAC
T3: end-user analysis at universities and some national labs. See the ATLAS T3 report: http://www.pa.msu.edu/~brock/file_sharing/T3TaskForce/final/TierThree_v1_executiveFinal.pdf
4
Data Formats in ATLAS

Format (description): Size/event
RAW (data output from DAQ, streamed on trigger bits): 1.6 MB
ESD (event summary data: reco info + most RAW): 0.5 MB
AOD (analysis object data: summary of ESD data): 0.15 MB
TAG (event-level metadata with pointers to data files): 0.001 MB
Derived Physics Data: ~25 kB/event, ~30 kB/event, ~5 kB/event
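As a back-of-the-envelope illustration (my own arithmetic, not from the slides), the per-event sizes above can be combined with the ~30 TB/day RAW figure from the previous slide to estimate the daily volume of each format:

```python
# Back-of-the-envelope volume estimate (illustrative only; the event rate is
# derived from the ~30 TB/day RAW figure quoted on the previous slide).
SIZES_MB = {"RAW": 1.6, "ESD": 0.5, "AOD": 0.15, "TAG": 0.001}

raw_tb_per_day = 30.0                                       # ~30 TB of RAW per day
events_per_day = raw_tb_per_day * 1e6 / SIZES_MB["RAW"]     # TB -> MB, then / (MB per event)

for fmt, mb_per_event in SIZES_MB.items():
    tb_per_day = events_per_day * mb_per_event / 1e6        # MB -> TB
    print(f"{fmt}: ~{tb_per_day:.2f} TB/day")
# Roughly: RAW ~30 TB/day, ESD ~9.4 TB/day, AOD ~2.8 TB/day, TAG ~0.02 TB/day
```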
5
Possible data reduction chain
(possible scenario for the "mature" phase of the ATLAS experiment)
6
T3g: Tier3 with grid connectivity (a typical university-based system):
Tower or rack-based
Interactive nodes
Batch system with worker nodes
ATLAS code available (in kit releases)
ATLAS DDM client tools available to fetch data (currently dq2-ls, dq2-get); see the sketch after this list
Can submit grid jobs
Data storage located on worker nodes or dedicated file servers
Possible activities: detector studies from ESD/pDPD, physics/validation studies from D3PD, fast MC, CPU-intensive matrix element calculations, ...
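A minimal sketch of how a T3g user might pull data with the DDM client tools named above. The dataset pattern and the wrapper function are hypothetical; only the basic invocation forms dq2-ls <pattern> and dq2-get <dataset> are assumed here.

```python
import subprocess

# Illustrative wrapper around the DDM client tools named on the slide.
# The dataset pattern passed below is hypothetical; only the bare command
# forms "dq2-ls <pattern>" and "dq2-get <dataset>" are assumed.
def fetch_datasets(pattern: str) -> None:
    # List datasets matching the pattern
    listing = subprocess.run(["dq2-ls", pattern],
                             capture_output=True, text=True, check=True)
    for name in listing.stdout.split():
        # Download each matching dataset into the current directory
        subprocess.run(["dq2-get", name], check=True)

if __name__ == "__main__":
    fetch_datasets("user09.*.AOD.*")   # hypothetical dataset name pattern
```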
7
A university-based ATLAS T3g
Local computing is key to producing physics results quickly from reduced datasets.
Analyses/streams of interest at the typical university:

Performance ESD/pDPD at T2 (# analyses):
  e-gamma: 1
  W/Z(e): 2
  W/Z(mu): 2
  minbias: 1

Physics stream (AOD/D1PD) at T2 (# analyses):
  e-gamma: 2
  muon: 1
  jet/missEt: 1

CPU and storage needed for the first 2 years: 160 cores, 70 TB
8
A university-based ATLAS T3g
These requirements are matched by a rack-based system from the T3 report: roughly 10 kW of heat load and 320 kSI2K of processing.
The university has a 10 Gb/s network to the outside. The group will locate the T3g near the campus switch and interface directly to it.
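As a rough sanity check (my own numbers, not from the slide), a 10 Gb/s campus link bounds how quickly the ~70 TB of storage from the previous slide could be populated or refreshed; the assumed 50% effective link utilisation is illustrative:

```python
# Rough transfer-time estimate for a 10 Gb/s campus link (illustrative only).
link_gbps = 10.0      # nominal link speed, gigabits per second
efficiency = 0.5      # assumed effective utilisation (protocol overhead, sharing)
dataset_tb = 70.0     # storage estimate from the previous slide

dataset_bits = dataset_tb * 1e12 * 8
seconds = dataset_bits / (link_gbps * 1e9 * efficiency)
print(f"~{seconds / 3600:.0f} hours")   # ~31 hours at 50% efficiency
```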
9
Tier3 AF (Analysis Facility)
Two sites have expressed interest and have set up prototypes:
BNL: interactive nodes, batch cluster, PROOF cluster
SLAC: interactive nodes and batch cluster
T3AF: university groups can contribute funds/hardware
Groups purchase batch slots and are granted priority access to the resources they purchased
The rest of ATLAS may use those resources when they are not in use by their owners (a toy illustration of this sharing follows this list)
SLAC-specific case: details covered in Richard Mount's talk
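A toy sketch of the ownership/sharing idea described above (my own illustration, not the actual scheduler configuration at either site): owners get first claim on the slots they bought, and idle owned slots are opened to the rest of ATLAS.

```python
# Toy illustration of owner-priority batch slots (not the real scheduler config).
def allocate(owned_slots: dict[str, int], demand: dict[str, int]) -> dict[str, int]:
    """Give each group up to the slots it owns, then share leftovers with others."""
    granted = {g: min(demand.get(g, 0), owned) for g, owned in owned_slots.items()}
    spare = sum(owned_slots.values()) - sum(granted.values())
    # Unused owned capacity is offered to groups whose demand exceeds their ownership
    for g, want in demand.items():
        extra = min(want - granted.get(g, 0), spare)
        if extra > 0:
            granted[g] = granted.get(g, 0) + extra
            spare -= extra
    return granted

print(allocate({"ucsc": 100, "uci": 50}, {"ucsc": 30, "uci": 80, "other": 200}))
# -> {'ucsc': 30, 'uci': 80, 'other': 40}
```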
10
University T3 vs. T3AF

University advantages:
Cooling, power, space usually provided
Control over use of resources
More freedom to innovate/experiment
Dedicated CPU resource
Potential matching funds from the university

University disadvantages:
Limited cooling, power, space and funds to scale the acquisition in future years
Support not 24/7 and not professional; cost may be comparable to that at a T3AF
Limited networking and networking support
More limited access to databases
No surge capability

T3AF advantages:
24/7 hardware and software support (mostly professional)
Shared space for code and data (AOD)
Excellent access to ATLAS data and databases
Fair-share mechanism allows universities to use what they contributed
Better network security
ATLAS release installation provided

T3AF disadvantages:
A yearly buy-in cost
Less freedom for the university to innovate/experiment
Must share some cycles

Some groups will site disks and/or worker nodes at the T3AF and interactive nodes at the university.
11
Qualitative score card

Criterion: University T3g / T3AF
Full system support by a dedicated professional: no / generally yes
High-bandwidth access to the data at the desired level of detail: variable / good
Access to all relevant ATLAS software and grid services: variable / yes
Access to compute cycles equivalent to purchased hardware: yes / yes
Access to additional burst cycles (e.g., crunch-time analysis): generally not / yes
Access to ATLAS software support when needed: generally yes / yes
Cost (hardware, infrastructure): some / being negotiated

Cost is probably the driving factor in the hardware siting decision; hybrid options are also possible.
A T3AF at SLAC will be an important option for university groups considering a T3.
12
Extra