WLCG Tier-2 Asia Workshop TIFR, Mumbai 1-3 December 2006

Goals of the Workshop – Les Robertson, LCG Project Leader

Only one year before the accelerator starts
- All dipoles delivered (28 Nov)
- Two thirds of the 1,706 magnets have been installed
- Cryogenics distribution line completed (19 Oct)
- Sector 7-8 completed on 10 Nov – now under test
The accelerator is well on the way to producing the first collisions within the next year.

The final year for the Indian teams responsible for testing the magnets – the last Diwali in SM18.

The race is on to complete the detectors before the ring is closed
- ATLAS barrel toroid tested at full strength early November
- CMS solenoid reached full strength in September – components being installed now in the cavern
- ALICE sub-detector installation has been going on steadily since July

LCG Computing Service Commissioning Schedule
7 months to prepare the system, 11 months before the first collisions
(Timeline covering 2006-2008, with parallel tracks for the Experiments and for Sites & Services:)
- LCG initial service in operation from the end of SC4
- Tier-0 → Tier-1 data distribution
- Continuous testing of computing models and basic services
- Integrating the Tier-2s for analysis as well as for simulation -- building up end-user analysis support
- Exercising the computing systems: ramping up job rates, data management performance, ...
- Introduce residual services: full FTS services; 3D; SRM v2.2; VOMS roles
- Service commissioning – increase reliability, performance and capacity to target levels; gain experience in monitoring and 24x7 operation
- 01 Jul 07 – service commissioned: full 2007 capacity and performance
- First physics
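The "ramping up ... to target levels" steps above are, in practice, checked against dated milestone targets. The following minimal Python sketch only illustrates that kind of check: the milestone dates and MB/s figures are invented placeholders, not the actual MoU or commissioning targets, and the function name is hypothetical.

from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    due: date                 # date by which the target should be reached
    target_mb_per_s: float    # sustained transfer rate required at that date

# Hypothetical ramp-up plan for one site (placeholder numbers only).
ramp_up = [
    Milestone(date(2007, 1, 31), 50.0),
    Milestone(date(2007, 4, 30), 100.0),
    Milestone(date(2007, 7, 1), 150.0),   # the "service commissioned" milestone
]

def commissioning_status(measured_mb_per_s: float, today: date) -> str:
    """Report the first milestone that is already due but not yet met."""
    for m in ramp_up:
        if today >= m.due and measured_mb_per_s < m.target_mb_per_s:
            return (f"behind: {measured_mb_per_s:.0f} MB/s sustained, "
                    f"target {m.target_mb_per_s:.0f} MB/s was due {m.due}")
    return "on track for all milestones due so far"

print(commissioning_status(measured_mb_per_s=120.0, today=date(2007, 5, 15)))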

LCG Service Hierarchy
Tier-0 – the accelerator centre
- Data acquisition & initial processing
- Long-term data curation
- Distribution of data → Tier-1 centres

Tier-1 – “online” to the data acquisition process → high availability
- Managed mass storage → grid-enabled data service
- Data-heavy analysis
- National, regional support
The Tier-1 centres: Canada – TRIUMF (Vancouver); France – IN2P3 (Lyon); Germany – Forschungszentrum Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF/SARA (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Oxford); US – FermiLab (Illinois) and Brookhaven (NY)

Tier-2 – ~120 centres in ~35 countries
- Simulation
- End-user analysis – batch and interactive

LCG Service Hierarchy
The distributed computing environment is essential for making the data available to physicists:
-- there will be only a small fraction of the computing at CERN
-- the data is distributed immediately to the Tier-1s
-- the Tier-2s are where the physicists will be working
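To make the division of labour concrete, here is a toy Python sketch of the tier roles listed above and the basic rule that the Tier-0 fans data out to every Tier-1. The short site labels (FZK, ASGC, FNAL, BNL, etc.) are informal abbreviations of the centres named on the slide, and the dataset name is an invented example; this is an illustration, not a model of the operational system.

# Toy model of the tier roles and the Tier-0 -> Tier-1 data flow (illustrative only).
TIERS = {
    "Tier-0": {"sites": ["CERN"],
               "roles": ["data acquisition", "initial processing",
                         "long-term curation", "distribution to Tier-1s"]},
    "Tier-1": {"sites": ["TRIUMF", "IN2P3", "FZK", "CNAF", "NIKHEF/SARA",
                         "Nordic", "PIC", "ASGC", "RAL", "FNAL", "BNL"],
               "roles": ["managed mass storage", "grid-enabled data service",
                         "data-heavy analysis", "national/regional support"]},
    "Tier-2": {"sites": ["~120 centres in ~35 countries"],
               "roles": ["simulation", "end-user analysis (batch and interactive)"]},
}

def distribute_from_tier0(dataset: str) -> dict:
    """Fan a dataset out from the Tier-0 to every Tier-1 centre."""
    return {site: dataset for site in TIERS["Tier-1"]["sites"]}

for tier, info in TIERS.items():
    print(f"{tier}: {', '.join(info['roles'])}")
print(distribute_from_tier0("RAW run 000123"))   # hypothetical dataset name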

Distribution of Computing Services – first full year (charts showing CPU, disk and tape)

Wide Area Network
Tier-2s and Tier-1s are inter-connected by the general purpose research networks; any Tier-2 may access data at any Tier-1.
(Network diagram: the LCG Tier-0 and the Tier-1s – TRIUMF, Fermilab, Brookhaven, ASCC, Nordic, RAL, GridKa, IN2P3, CNAF, PIC, SARA – linked by a dedicated 10 Gbit optical network, with the Tier-2s attached through the research networks.)
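The connectivity rule on this slide (Tier-0 and Tier-1s on the dedicated optical network, Tier-2s reaching any Tier-1 over the research networks) can be summarised in a few lines of illustrative Python; the Tier-2 site name in the example is hypothetical.

# Sketch of the connectivity rule only: which kind of network path is assumed
# between two sites.  Not a description of any real routing configuration.
OPTICAL_NETWORK = {"CERN", "TRIUMF", "IN2P3", "GridKa", "CNAF", "SARA",
                   "Nordic", "PIC", "ASCC", "RAL", "Fermilab", "Brookhaven"}

def link_type(site_a: str, site_b: str) -> str:
    """Classify the network path assumed between two sites."""
    if site_a in OPTICAL_NETWORK and site_b in OPTICAL_NETWORK:
        return "dedicated 10 Gbit optical network"
    return "general purpose research network"

print(link_type("CERN", "GridKa"))       # Tier-0 to Tier-1
print(link_type("Mumbai-T2", "GridKa"))  # a Tier-2 (hypothetical name) to a Tier-1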

Goals of the Workshop
There are very many components and people involved, and all of this must work together reliably, so it is essential that the whole complicated environment (data distribution, job scheduling, user access, ..) is fully tested out and operating well before the first data comes.
The purpose of the workshop is:
- to foster communication among you, the people who are going to run the regional centres in Asia, the Tier-1 and the Tier-2s
- to discuss and understand how the experiments will make best use of the resources
- to uncover any particular difficulties and problems that have to be addressed and overcome
- to start the planning for 2007
.. in order to make sure that all is in place within the next 12 months.