Building a Regional Centre
A few ideas & a personal view
CHEP 2000 – Padova, 10 January 2000
Les Robertson, CERN/IT (les.robertson@cern.ch)
Summary
- LHC computing system topology
- Some capacity and performance parameters
- What a regional centre might look like, and how it could be staffed
- Political overtones & sociological undertones
- How little will it cost?
- Conclusions
The Basic Topology

[Diagram: a hierarchical topology. CERN (Tier 0) connects to the Tier 1 centres (FNAL, RAL, IN2P3) over 622 Mbps links; below them sit the Tier 2 sites (Lab a, Uni b, Lab c, Uni n; GLA, EDI, DUR) on 622 Mbps and 155 Mbps links; the Tier 2 sites in turn serve the physics departments.]
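A minimal sketch of that hierarchy as a data structure – the site names and tiers follow the diagram, but the attachment of the Tier 2 sites to RAL, their link speeds and the department entry are illustrative assumptions:

```python
# Each site records its tier, its parent in the hierarchy and the uplink bandwidth (Mbps).
TOPOLOGY = {
    "CERN":   {"tier": 0, "parent": None,   "uplink_mbps": None},
    "FNAL":   {"tier": 1, "parent": "CERN", "uplink_mbps": 622},
    "RAL":    {"tier": 1, "parent": "CERN", "uplink_mbps": 622},
    "IN2P3":  {"tier": 1, "parent": "CERN", "uplink_mbps": 622},
    "GLA":    {"tier": 2, "parent": "RAL",  "uplink_mbps": 155},
    "EDI":    {"tier": 2, "parent": "RAL",  "uplink_mbps": 155},
    "Dept-X": {"tier": 3, "parent": "GLA",  "uplink_mbps": 100},   # assumed department uplink
}

def path_to_tier0(site: str):
    """Chain of sites from `site` up to CERN, plus the bottleneck link bandwidth (Mbps)."""
    chain, bottleneck = [site], float("inf")
    while TOPOLOGY[site]["parent"] is not None:
        bottleneck = min(bottleneck, TOPOLOGY[site]["uplink_mbps"])
        site = TOPOLOGY[site]["parent"]
        chain.append(site)
    return chain, bottleneck

print(path_to_tier0("Dept-X"))   # (['Dept-X', 'GLA', 'RAL', 'CERN'], 100)
```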
Tier roles

Tier 0 – CERN
- Data recording, reconstruction, 20% of analysis
- Full data sets on permanent mass storage – raw, ESD, simulated data
- Hefty WAN capability
- Range of export/import media
- 24 x 7 availability

Tier 1 – established lab/data centre, or a green-field LHC facility
- Major subset of the data – raw and ESD
- Mass storage, managed data operation
- ESD analysis, AOD generation, major analysis capacity
- Fat pipe to CERN
- High availability
- User consultancy – library & collaboration software support
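A toy illustration of this division of labour – the role lists mirror the bullets above, but the lookup itself is an invented example, not anything prescribed by the model:

```python
# Illustrative map from tier to the roles listed above (Tier 2 follows on the next slide).
ROLES = {
    "tier0": {"data recording", "reconstruction", "analysis"},   # CERN, ~20% of analysis
    "tier1": {"esd analysis", "aod generation", "analysis"},     # major analysis capacity
}

def candidate_tiers(task: str) -> list[str]:
    """Tiers whose declared role covers the given task."""
    return [tier for tier, tasks in ROLES.items() if task in tasks]

print(candidate_tiers("analysis"))   # ['tier0', 'tier1']
```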
Tier 2 and the Physics Department

Tier 2 – smaller labs, smaller countries, hosted by an existing data centre
- Mainly AOD analysis
- Data cached from Tier 1 and Tier 0 centres
- No mass storage management
- Minimal staffing costs

University physics department
- Final analysis
- Dedicated to local users
- Limited data capacity – cached only via the network
- Zero administration costs (fully automated)
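A minimal sketch of "cached only via the network": look for a file on local disk and, on a miss, pull it from the parent centre. The paths, the parent address and the use of scp as the transfer tool are placeholder assumptions, not part of the model.

```python
import subprocess
from pathlib import Path

CACHE_DIR = Path("/data/cache")         # local commodity disk at the Tier 2 or department
PARENT = "tier1.example.org:/data"      # hypothetical parent (Tier 1 / Tier 0) export

def get_file(logical_name: str) -> Path:
    """Return a local path for the file, fetching it from the parent tier on a cache miss."""
    local = CACHE_DIR / logical_name
    if local.exists():
        return local                    # cache hit: serve from local disk
    local.parent.mkdir(parents=True, exist_ok=True)
    # Cache miss: copy from the parent centre (scp stands in for whatever
    # wide-area transfer tool is actually deployed).
    subprocess.run(["scp", f"{PARENT}/{logical_name}", str(local)], check=True)
    return local

aod = get_file("cms/aod/run1234.root")  # fetched once over the WAN, local thereafter
```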
More realistically – a Grid Topology

[Diagram: the same sites – CERN (Tier 0), the Tier 1 centres (FNAL, RAL, IN2P3), the Tier 2 sites (Lab a, Uni b, Lab c, Uni n) and the departments – now interconnected as a grid rather than a strict hierarchy.]
Capacity / Performance

Based on CMS/Monarc estimates (early 1999), rounded, extended and adapted by LMR.

Capacity targets for 2006 (CERN; a Tier 1 centre serving one or two experiments; a Tier 2 centre serving one experiment), with annual increases thereafter:
- CPU (K SPECint95): 600 at CERN, 120-250 at a Tier 1, ~25 at a Tier 2 (of order 1,200 cpus)
- Disk (TB): 550 at CERN, 120-250 at a Tier 1, 25-100 at a Tier 2 (of order 500-1,000 disks)
- Tape (PB, including copies): 3.4 at CERN, 0.4-2 at a Tier 1, <1 at a Tier 2
- I/O rates: 100 GB/sec disk and 400 MB/sec tape at CERN; 20-40 GB/sec and 50-100 MB/sec at a Tier 1; ~4 GB/sec at a Tier 2
- WAN bandwidth: 622 Mbps (optimistically ~30 MB/sec) for a Tier 1, 155 Mbps (~10 MB/sec) for a Tier 2
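The arithmetic behind figures like these is simple enough to script. A sketch of two back-of-envelope helpers – the per-box SPECint95 rating, the 40% usable-bandwidth factor and the 10 TB sample are assumptions for illustration; only the 622/155 Mbps link speeds come from the table:

```python
def boxes_needed(capacity_ksi95: float, si95_per_box: float = 100) -> int:
    """Commodity boxes required to reach a capacity quoted in K SPECint95."""
    return round(capacity_ksi95 * 1000 / si95_per_box)

def transfer_days(terabytes: float, link_mbps: float, efficiency: float = 0.4) -> float:
    """Days to move a sample over a WAN link, assuming only a fraction of the
    nominal bandwidth is usable (622 Mbps at 40% is roughly the table's
    'optimistic' 30 MB/sec)."""
    usable_mb_per_sec = link_mbps * efficiency / 8
    return terabytes * 1e6 / usable_mb_per_sec / 86400

print(boxes_needed(120))               # ~1200 boxes for 120 K SPECint95 at an assumed 100 SI95/box
print(round(transfer_days(10, 622)))   # ~4 days to pull a 10 TB sample over a 622 Mbps link
```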
Tier 1 RC

Two classes:
- A – evolved from an existing full-service data centre
- B – a new centre, created for LHC

Class A centres need no gratuitous advice from me. Class B will have a strong financial incentive towards:
- Standardisation among experiments, Tier 1 centres and CERN
- Automation of everything – processors, disk caching, work scheduling, data export/import
- Minimal operation and staffing – trade off mass storage for disk + network bandwidth
- Acquiring excess capacity (contingency) rather than fighting bottlenecks and explaining to users
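One concrete face of "automation of everything" is unattended disk-cache management: a periodic job that evicts the least recently used files when the cache area outgrows its budget, so nobody has to tidy disks by hand. A minimal sketch, assuming a single cache directory and a size budget (both placeholders):

```python
from pathlib import Path

CACHE_DIR = Path("/data/cache")     # hypothetical cache area on commodity disk
BUDGET_BYTES = 10 * 10**12          # assumed 10 TB budget for this node

def evict_lru(cache_dir: Path = CACHE_DIR, budget: int = BUDGET_BYTES) -> None:
    """Delete least-recently-accessed files until the cache fits within its budget."""
    files = [(p, p.stat()) for p in cache_dir.rglob("*") if p.is_file()]
    used = sum(st.st_size for _, st in files)
    for path, st in sorted(files, key=lambda f: f[1].st_atime):   # oldest access first
        if used <= budget:
            break
        path.unlink()
        used -= st.st_size

if __name__ == "__main__":
    evict_lru()   # run from cron; no operator intervention needed
```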
What does a Tier 2 Class B centre look like?

Computing & storage fabric built up from commodity components:
- Simple PCs
- Inexpensive network-attached disk
- Standard network interface (whatever Ethernet happens to be in 2006)

with a minimum of high(er)-end components:
- LAN backbone
- WAN connection

and try to avoid tape altogether, unless there is nothing better for export/import. Think hard before getting into mass storage – rather, mirror disks and cache data across the network from another centre (one willing to tolerate the stress of mass storage management).
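A minimal sketch of "mirror disks rather than tape": write every file to two independent commodity disk servers, so losing either one loses no data. The mount points and file names are placeholders; a real fabric would rely on whatever replication its storage layer provides.

```python
import shutil
from pathlib import Path

# Two independent, inexpensive network-attached disk areas (placeholder mount points).
MIRRORS = [Path("/mnt/diskserver-a"), Path("/mnt/diskserver-b")]

def store(logical_name: str, source: Path) -> None:
    """Copy the file onto both disk servers; either copy alone is enough to recover it."""
    for mirror in MIRRORS:
        dest = mirror / logical_name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(source, dest)

# Example: archive a freshly produced file to both mirrors.
store("cms/aod/run1234.root", Path("/tmp/run1234.root"))
```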
Scratchpad
- Co-location – Level 3, Storage Networks Inc.
- KISS
- Take technical risks with the testbed
- Favour reliability and cost-optimisation (e.g. JIT) with the production system