Tier 2 Regional Centers
J. Shank, US ATLAS Meeting, BNL, 7/22/99

Slide 1: Goals

Short-term:
- Code development centers
- Simulation centers
- Data repository
Medium-term:
- Mock Data Challenge (MDC)
Long-term:
- Data analysis and calibration
- Education: a contact point between ATLAS, students, and post-docs

Slide 2: Tier 2 Definition

What is a Tier 2 Center?
    assert( sizeof(Tier 2) < 0.25 * sizeof(Tier 1) );
What is the economy of scale?
- Too few FTEs: better off consolidating at the Tier 1.
- Too many: the assert above fails and administrative overhead grows.
Possible flavors:
- Detector sub-system specific? e.g. a detector calibration center.
- Task specific? e.g. a DB development center, or a find-the-Higgs center.
- Purely regional? Support all computing activities in the region.

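Read literally, that tongue-in-cheek assert is just a capacity-ratio test. A minimal C sketch, with a hypothetical Tier 1 figure (the Tier 2 figure anticipates the ~10^4 SpecInt95 working definition on slide 7):

    #include <assert.h>

    /* Capacities in SpecInt95. TIER1_SPECINT95 is a placeholder figure;
       TIER2_SPECINT95 matches the working definition later in the talk. */
    #define TIER1_SPECINT95 50000.0
    #define TIER2_SPECINT95 10000.0

    int main(void)
    {
        /* The sizing rule: a Tier 2 should stay below a quarter of a
           Tier 1, or it is really another Tier 1. */
        assert(TIER2_SPECINT95 < 0.25 * TIER1_SPECINT95);
        return 0;
    }
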
Slide 3: Example 1: Boston Tier 2 Center

Focus on the Muon detector subsystem: calibrate the Muon system.
How much data?
- Special calibration runs of real data, muon-only: ~10% of events.
- Overall ~1% of the data, or 10 TB/yr.
How much CPU?
- 100 sec/event => ~30 CPUs.

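The 30-CPU figure follows from simple rate arithmetic. A minimal sketch, assuming ~1 MB/event (so 10 TB/yr is ~10^7 events) and ~3x10^7 processing seconds per year; both assumptions are mine, not from the slide:

    #include <stdio.h>

    int main(void)
    {
        double data_tb_per_year  = 10.0;    /* from the slide */
        double event_size_mb     = 1.0;     /* assumed */
        double cpu_sec_per_event = 100.0;   /* from the slide */
        double sec_per_year      = 3.0e7;   /* assumed: ~1 yr of wall clock */

        double events_per_year = data_tb_per_year * 1.0e6 / event_size_mb;
        double cpus_needed     = events_per_year * cpu_sec_per_event
                                   / sec_per_year;

        printf("CPUs needed: %.0f\n", cpus_needed);  /* ~33, i.e. ~30 boxes */
        return 0;
    }
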
Slide 4: Example 2: Physics Analysis Center

Find the Higgs:
- Get 10% of the data to refine algorithms.
How much data?
- 10 TB/yr (reconstructed data from CERN).
CPU:
- 10^3 sec/ev/CPU => ~300 CPUs.
- We had better do better than 10^3 sec/ev/CPU!
Distribute full production analysis to the Tier 1 + other Tier 2 centers.

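Under the same ~1 MB/event assumption as in Example 1, 10 TB/yr is again ~10^7 events, so

    N_CPU ≈ (10^7 ev/yr × 10^3 s/ev) / (3×10^7 s/yr) ≈ 330,

consistent with the ~300-CPU estimate, and a clear motivation to beat 10^3 sec/ev/CPU.
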
Slide 5: Example 3: Missing Et

From the last US ATLAS computing videoconference (see J. Huth's slides on the usatlas web page):
- 40 M events (2% of triggers)
- 40 TB of data
- Would use 10% of a 26,000 SpecInt95 Tier 2 center.
Conclusions:
- Needs lots of data storage.
- CPU requirements are modest.

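As a cross-check (my arithmetic): 40×10^6 events in 40 TB is ~1 MB/event, matching the event size assumed in the earlier examples, and 10% of 26,000 SpecInt95 is 2,600 SpecInt95, roughly 13 of the 200 SpecInt95 boxes in the working definition below; hence modest CPU, lots of storage.
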
Slide 6: Network Connectivity

[Map of vBNS/MREN network connectivity: US universities and labs (MIT at 13.8 Mbps, Boston U, Tufts, Harvard, FNAL, ANL, and many others) with aggregation points, backbone link capacities (DS3, OC3, OC12, OC48), international peerings (APAN 70 Mbps, TANet 15 Mbps, MirNET 6 Mbps, CA*Net II, SREN, DREN), and each site's vBNS approved/partner status.]

Slide 7: Working Definition of a Tier 2 Center

Hardware:
- CPU: 50 boxes at 200 SpecInt95 each = 10^4 SpecInt95.
- Storage: 15 TB, low-maintenance robot system.
People:
- Post-docs: 2
- Computer professionals:
  - Designers: 1
  - Facilities managers: 2
    - Need a sysadmin type plus a lower-level scripting-support type.
    - Could be shared.
Infrastructure:
- Network connectivity must be state of the art (OC12... OC192?).
- Cost sharing, integration with an existing facility.

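The hardware and staffing baseline gathered into one place, as a hypothetical configuration record (struct and field names are illustrative; the values are the slide's):

    /* Illustrative summary record; values from the working definition. */
    struct tier2_spec {
        int    cpu_boxes;           /* 50                          */
        int    specint95_per_box;   /* 200, i.e. 10^4 total        */
        double storage_tb;          /* 15, low-maintenance robot   */
        int    postdocs;            /* 2                           */
        int    designers;           /* 1                           */
        int    facility_managers;   /* 2, sysadmin + scripting     */
    };

    static const struct tier2_spec baseline = { 50, 200, 15.0, 2, 1, 2 };
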
Slide 8: Mass Store Throughput

Do we need HPSS?

    Option           Capacity          Throughput   Notes
    HPSS             --                1 GB/s       High maintenance cost (at least now)
    DVD jukebox      600 DVDs, 3 TB    10-40 MB/s   $45k
    IBM tape robot   7+ TB, 4 drives   10-40 MB/s   Low maintenance; IBM ADSM software

Can we expect cheap, low-maintenance 100 MB/s in 2005?

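As a rough feel for these rates (my arithmetic, using the 15 TB store from the working definition):

    15 TB at 40 MB/s: 1.5e13 B / 4.0e7 B/s ≈ 3.8e5 s ≈ 4.3 days for a full read
    15 TB at 1 GB/s:  1.5e13 B / 1.0e9 B/s ≈ 1.5e4 s ≈ 4.2 hours

so the jukebox/robot class is fine for staging subsets, while full-store turnaround is where HPSS-class throughput would matter.
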
Slide 9: Cost

    Item                         Yearly   5 yr
    People (3 FTE)               525k     2400k
    Post-docs                    ??       ??
    Hardware (50 boxes x 5k)              250k
    Mass storage (tape robot)             250k
    Disk ($100/GB, scaled)                100k
    Software licenses            10k      50k
    Total                                 3.0M

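The 5-yr column sums as 2400k + 250k + 250k + 100k + 50k = 3050k, i.e. the quoted ~3.0M total (post-doc costs excluded, pending the "??").
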
Slide 10: Summary / Schedule

- How many Tier 2s? Where and when?
- Spread geographically, or sub-system oriented?
- We need them to be relevant to code development => start as many as possible now.
  - Need a presence at CERN now.