Building a Regional Centre – a few ideas & a personal view
CHEP 2000 – Padova, 10 February 2000
Les Robertson, CERN/IT

Summary
- LHC computing system topology
- Some capacity and performance parameters
- What a regional centre might look like
- And how it could be staffed
- Political overtones & sociological undertones
- How little will it cost?
- Conclusions

The Basic Topology
[Diagram: CERN as Tier 0; Tier 1 centres FNAL, RAL, IN2P3 linked to CERN at 622 Mbps; Tier 2 sites (Lab a, Uni b, Lab c, Uni n) linked to Tier 1 at 622 Mbps; physics departments (GLA, EDI, DUR) linked at 155 Mbps.]
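A minimal sketch (not from the slides) of the tier hierarchy and the nominal link bandwidths read off the diagram labels; the exact parent/child assignments (for example which Tier 2 sites hang off RAL) are assumptions for illustration.

```python
# Sketch: the tier hierarchy and nominal link bandwidths from the topology
# diagram as a nested structure. The parent/child assignments below are
# assumptions; only the names and the 622/155 Mbps figures come from the slide.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Centre:
    name: str
    tier: int
    uplink_mbps: Optional[int] = None          # nominal bandwidth towards the parent
    children: List["Centre"] = field(default_factory=list)

cern = Centre("CERN", 0, children=[
    Centre("FNAL", 1, 622),
    Centre("IN2P3", 1, 622),
    Centre("RAL", 1, 622, children=[
        Centre("Lab a", 2, 622),
        Centre("Uni b", 2, 622, children=[
            Centre(d, 3, 155) for d in ("GLA", "EDI", "DUR")   # physics departments
        ]),
    ]),
])

def walk(centre: Centre, depth: int = 0) -> None:
    """Print the hierarchy with each centre's nominal uplink."""
    link = f" ({centre.uplink_mbps} Mbps uplink)" if centre.uplink_mbps else ""
    print("  " * depth + f"Tier {centre.tier}: {centre.name}{link}")
    for child in centre.children:
        walk(child, depth + 1)

walk(cern)
```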

Tier roles
- Tier 0 – CERN
  - Data recording, reconstruction, 20% of analysis
  - Full data sets on permanent mass storage – raw, ESD, simulated data
  - Hefty WAN capability
  - Range of export/import media
  - 24 x 7 availability
- Tier 1 – established lab/data centre or green-field LHC facility
  - Major subset of the data – raw and ESD
  - Mass storage, managed data operation
  - ESD analysis, AOD generation, major analysis capacity
  - Fat pipe to CERN
  - High availability
  - User consultancy – library & collaboration software support
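A tiny sketch (not from the slides) restating in code form which data products each tier holds; the assignments follow the bullets above and on the next slide, while the dictionary layout and the helper function are purely illustrative.

```python
# Sketch: data products expected at each tier, as read off the "Tier roles"
# and "Tier 2" slides. The dict layout and where_is() are illustrative only.

DATA_AT_TIER = {
    "Tier 0 (CERN)": {"raw", "ESD", "simulated"},    # full data sets on mass storage
    "Tier 1":        {"raw (subset)", "ESD", "AOD"}, # major subset, AOD generation
    "Tier 2":        {"AOD (cached)"},               # cached from Tier 1 / Tier 0
    "Department":    {"AOD (cached via network)"},
}

def where_is(product: str):
    """List the tiers expected to hold a given data product."""
    return [tier for tier, held in DATA_AT_TIER.items()
            if any(product in p for p in held)]

print(where_is("ESD"))   # -> ['Tier 0 (CERN)', 'Tier 1']
```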

Tier 2 and the Physics Department
- Tier 2 – smaller labs, smaller countries, hosted by an existing data centre
  - Mainly AOD analysis
  - Data cached from Tier 1 and Tier 0 centres (see the caching sketch below)
  - No mass-storage management
  - Minimal staffing costs
- University physics department
  - Final analysis
  - Dedicated to local users
  - Limited data capacity – cached only, via the network
  - Zero administration costs (fully automated)
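A minimal sketch (not from the slides) of the "cached only, via the network" idea: a Tier 2 or department node keeps a bounded local disk cache and pulls any missing dataset from its parent centre on demand. The paths, cache size and LRU policy below are hypothetical; in practice the copy would be a wide-area transfer from the Tier 1/Tier 0 store.

```python
# Sketch: fetch-on-miss disk cache for a site with no mass storage of its own.
# PARENT_STORE, CACHE_DIR and the cache limit are hypothetical values.

import os
import shutil
from collections import OrderedDict

PARENT_STORE = "/parent/store"      # hypothetical view of the Tier 1/Tier 0 data
CACHE_DIR = "/data/cache"           # local commodity disk, nothing behind it
CACHE_LIMIT_BYTES = 500 * 10**9     # hypothetical cache size

_index: "OrderedDict[str, int]" = OrderedDict()   # dataset -> size, in LRU order

def open_dataset(dataset: str) -> str:
    """Return a local path for a dataset, pulling it over the network on a miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local = os.path.join(CACHE_DIR, dataset)
    if dataset in _index:                       # cache hit: refresh LRU position
        _index.move_to_end(dataset)
        return local
    shutil.copy(os.path.join(PARENT_STORE, dataset), local)   # cache miss
    _index[dataset] = os.path.getsize(local)
    while sum(_index.values()) > CACHE_LIMIT_BYTES:
        victim, _ = _index.popitem(last=False)  # evict least recently used dataset
        os.remove(os.path.join(CACHE_DIR, victim))
    return local
```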

More realistically – a Grid Topology
[Diagram: the same centres – CERN (Tier 0), Tier 1 at FNAL, RAL, IN2P3, Tier 2 sites (Lab a, Uni b, Lab c, Uni n) and physics departments – interconnected as a grid rather than a strict hierarchy.]

Capacity / Performance
Based on CMS/MONARC estimates (early 1999); rounded, extended and adapted by LMR.
[Table: columns – CERN, Tier 1 (1 expt.), Tier 1 (2 expts.), Tier 2 (1 expt.), each giving capacity in 2006 and annual increase. Rows – # CPUs; # disks; CPU (K SPECint95); disk (TB); tape (PB, includes copies); disk I/O rate (GB/sec); tape I/O rate (MB/sec); WAN bandwidth (Mbps, optimistic; also quoted in MB/sec). The numeric entries are not preserved in this transcript.]
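A small illustrative calculation, not taken from the table above: the kind of compound-growth arithmetic that relates an installed base and an annual increase to a 2006 capacity target. Every number in the snippet is a made-up placeholder, not a CMS/MONARC estimate.

```python
# Sketch: compound-growth sizing of the kind behind the 2006 capacity targets.
# base_2000_kspecint95 and annual_increase are hypothetical placeholders.

def capacity_in_year(base: float, annual_increase: float, years: int) -> float:
    """Capacity after `years` of compound growth at `annual_increase` (0.3 = +30%/yr)."""
    return base * (1.0 + annual_increase) ** years

base_2000_kspecint95 = 20.0   # hypothetical installed CPU capacity in 2000
annual_increase = 0.5         # hypothetical 50% added per year
target = capacity_in_year(base_2000_kspecint95, annual_increase, years=6)
print(f"Projected 2006 CPU capacity: {target:.0f} K SPECint95")
```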

Tier 1 RC
Two classes:
- A – evolved from an existing full-service data centre
- B – new centre, created for LHC
Class A centres need no gratuitous advice from me.
Class B will have a strong financial incentive towards:
- Standardisation among experiments, Tier 1 centres and CERN
- Automation of everything (a scheduling sketch follows below)
  - Processors, disk caching, work scheduling
  - Data export/import
  - …
- Minimal operation and staffing
  - Trade off mass storage for disk + network bandwidth
  - Acquire excess capacity (contingency) rather than fighting bottlenecks and explaining to users
  - …
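A minimal sketch of what "automation of everything" could mean for work scheduling: queued jobs dispatched to whichever commodity node is free, with no operator in the loop. This is an assumption for illustration, not the design on the slide; the Job/Node classes and the FIFO policy are invented.

```python
# Sketch: unattended work scheduling across commodity nodes.
# Job, Node and the FIFO policy are illustrative assumptions only.

from collections import deque
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    dataset: str          # input dataset the job needs staged on the node

@dataclass
class Node:
    name: str
    busy: bool = False

class Scheduler:
    """Dispatch queued jobs to free nodes with no operator intervention."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.queue = deque()

    def submit(self, job: Job) -> None:
        self.queue.append(job)

    def dispatch(self) -> None:
        for node in self.nodes:
            if not node.busy and self.queue:
                job = self.queue.popleft()
                node.busy = True
                print(f"running {job.job_id} ({job.dataset}) on {node.name}")

farm = Scheduler([Node("pc001"), Node("pc002")])
farm.submit(Job("reco-042", "raw/run042"))
farm.submit(Job("ana-007", "esd/run007"))
farm.dispatch()
```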

What does a Tier 2 Class B centre look like?
Computing & storage fabric built up from commodity components:
- Simple PCs
- Inexpensive network-attached disk
- Standard network interface (whatever Ethernet happens to be in 2006)
with a minimum of high(er)-end components:
- LAN backbone
- WAN connection
and try to avoid tape altogether:
- Unless there is nothing better for export/import
- But think hard before getting into mass storage
- Rather, mirror disks and cache data across the network from another centre (one willing to tolerate the stress of mass-storage management) – see the sketch below
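A minimal sketch (an assumption, not the slide's design) of the "mirror disks rather than tape" idea: every file is written to two independent commodity disk servers, so a single disk failure loses nothing and no tape robot is needed. The mount points and helper names are hypothetical.

```python
# Sketch: write every file to two independent commodity disks instead of tape.
# MIRROR_A / MIRROR_B are hypothetical mount points of two separate disk servers.

import os
import shutil

MIRROR_A = "/data/mirror-a"
MIRROR_B = "/data/mirror-b"

def store(src_path: str, name: str) -> None:
    """Copy a file onto both mirrors; a single disk failure loses no data."""
    for root in (MIRROR_A, MIRROR_B):
        os.makedirs(root, exist_ok=True)
        shutil.copy2(src_path, os.path.join(root, name))

def read(name: str) -> str:
    """Return a path to whichever mirror still holds the file."""
    for root in (MIRROR_A, MIRROR_B):
        candidate = os.path.join(root, name)
        if os.path.exists(candidate):
            return candidate
    raise FileNotFoundError(f"{name} lost on both mirrors; re-fetch from another centre")
```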

Scratchpad
- Co-location – Level 3, Storage Networks Inc.
- KISS
- Take technical risks with the testbed
- Favour reliability and cost-optimisation (e.g. JIT) with the production system