
LCG  Deploying the World’s Largest Scientific Grid: The LHC Computing Grid

LCG Service Challenges – Deploying the Service Agenda  What are we trying to do, and why?  (Deploying the LHC Computing Grid – the LCG Service Challenges)  Timeline  A description of the solution  Where we stand and what remains to be done  Summary and conclusions

LCG Service Challenges – Deploying the Service The European Organisation for Nuclear Research The European Laboratory for Particle Physics  Fundamental research in particle physics  Designs, builds & operates large accelerators  Financed by 20 European countries (member states) + others (US, Canada, Russia, India, …)  ~1000 MSF budget - operation + new accelerators  2000 staff, plus users (researchers) from all over the world  LHC (starts ~2007) experiment: 2000 physicists, 150 universities, apparatus costing ~€300M, computing ~€250M to set up, ~€60M/year to run  10+ year lifetime

LCG Service Challenges – Deploying the Service Large Hadron Collider - LHC  Proton – proton collider re-using the existing tunnel from LEP  Cooled to < 2 K; 7 TeV beams; 27 km tunnel; ~100 m underground

LCG Service Challenges – Deploying the Service LHC – Physics Goals  Higgs particle – explanation of particle masses  Search for super-symmetric particles and possible extra dimensions  Where did all the anti-matter go?  (For those of you who have read “Angels and Demons”)  Understand the early Universe  How did the soup of quarks and gluons stabilise into nucleons / nuclei / atoms?

LCG Service Challenges – Deploying the Service The LHC Experiments: ALICE, ATLAS, CMS, LHCb

LCG Service Challenges – Deploying the Service Data Acquisition (ATLAS detector)
 40 MHz interaction rate, equivalent to ~2 PetaBytes/sec
 Multi-level trigger filters out background and reduces the data volume: Level 1 – special hardware; Level 2 – embedded processors; Level 3 – giant PC cluster
 After the trigger: ~160 Hz (320 MB/sec) to data recording & offline analysis
 Data recorded 24 hours a day, 7 days a week – equivalent to writing a CD every 2 seconds
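
The reduction quoted above can be checked with a little arithmetic; the sketch below assumes a nominal CD capacity of about 650 MB, which is not stated on the slide.

```python
# Quick sanity check of the trigger numbers quoted above.
# Assumption (not on the slide): a CD holds roughly 650 MB.

interaction_rate_hz = 40e6      # 40 MHz collision rate seen by the trigger
recorded_rate_hz = 160          # ~160 Hz written out after Level 3
recorded_bandwidth_mb_s = 320   # ~320 MB/s to data recording
cd_capacity_mb = 650            # assumed CD size

rejection_factor = interaction_rate_hz / recorded_rate_hz
seconds_per_cd = cd_capacity_mb / recorded_bandwidth_mb_s

print(f"Trigger rejection factor: {rejection_factor:,.0f} : 1")
print(f"One CD worth of data every {seconds_per_cd:.1f} s")
```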

LCG Service Challenges – Deploying the Service The LHC Computing Environment  LHC Computing Grid – based on a Tier model  Tier0 – accelerator centre (CERN)  Tier1s – some 6 – 12 large computer centres around the world  Tier2s – some 150 ‘sites’ – some distributed  Tier3s etc


Summary of Tier0/1/2 Roles  Tier0 (CERN): safe keeping of RAW data (first copy); first-pass reconstruction; distribution of RAW data and reconstruction output to Tier1s; reprocessing of data during LHC down-times.  Tier1: safe keeping of a proportional share of RAW and reconstructed data; large-scale reprocessing and safe keeping of the corresponding output; distribution of data products to Tier2s and safe keeping of a share of the simulated data produced at these Tier2s.  Tier2: handling analysis requirements and a proportional share of simulated event production and reconstruction. N.B. there are differences in roles by experiment; it is essential to test using the complete production chain of each!

LCG Service Challenges – Deploying the Service Overview of proton - proton running

Event sizes and trigger rates:
Experiment | SIM   | SIMESD | RAW   | Trigger | RECO  | AOD   | TAG
ALICE      | 400KB | 40KB   | 1MB   | 100Hz   | 200KB | 50KB  | 10KB
ATLAS      | 2MB   | 500KB  | 1.6MB | 200Hz   | 500KB | 100KB | 1KB
CMS        | 2MB   | 400KB  | 1.5MB | 150Hz   | 250KB | 50KB  | 10KB
LHCb       | 400KB | –      | 25KB  | 2KHz    | 75KB  | 25KB  | 1KB

[Table of storage requirements per experiment and tier – Experiment, T0, T1, T2, Total (PB) – values not preserved in the transcript]

Total (2008) requirements: ~linear increase with time (plus reprocessing)
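
The per-experiment storage totals in the second table did not survive, but the order of magnitude follows from the trigger rates and RAW event sizes above. A rough sketch, assuming a canonical ~10^7 seconds of effective data taking per year (an assumption, not a figure from the slide):

```python
# Rough annual RAW-data volume from the trigger rates and event sizes above.
# Assumption (not on the slide): ~1e7 seconds of effective data taking per year.

SECONDS_PER_YEAR = 1e7  # assumed LHC live time per year

# (trigger rate in Hz, RAW event size in MB) taken from the table above
experiments = {
    "ALICE": (100, 1.0),
    "ATLAS": (200, 1.6),
    "CMS":   (150, 1.5),
    "LHCb":  (2000, 0.025),
}

for name, (rate_hz, raw_mb) in experiments.items():
    petabytes = rate_hz * raw_mb * SECONDS_PER_YEAR / 1e9  # MB -> PB
    print(f"{name:5s}: ~{petabytes:.1f} PB of RAW data per year")
```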

LCG Service Challenges – Deploying the Service LCG Service Challenges - Overview  LHC will enter production (physics) in summer 2007  Will generate an enormous volume of data  Will require a huge amount of processing power  LCG ‘solution’ is a world-wide Grid  Many components understood, deployed, tested…  But…  Unprecedented scale  Humungous challenge of getting large numbers of institutes and individuals, all with existing, sometimes conflicting commitments, to work together on this timescale  LCG must be ready at full production capacity, functionality and reliability in little more than 1 year from now  Issues include h/w acquisition, personnel hiring and training, vendor rollout schedules etc.  Should not limit the ability of physicists to exploit the performance of the detectors nor the LHC’s physics potential  Whilst being stable, reliable and easy to use

LCG Service Challenges – Deploying the Service Service Challenges: Key Principles  Service challenges result in a series of services that exist in parallel with the baseline production service  Rapidly and successively approach the production needs of LHC  Initial focus: core (data management) services  Swiftly expand out to cover the full spectrum of the production and analysis chain  Must be as realistic as possible, including end-to-end testing of key experiment use-cases over extended periods with recovery from glitches and longer-term outages  The necessary resources and commitment are a pre-requisite for success!  Effort should not be under-estimated!

LCG Service Challenges – Deploying the Service LCG Deployment Schedule
 Apr05 – SC2 complete
 Jun05 – Technical Design Report
 Jul05 – SC3 Throughput Test
 Sep05 – SC3 Service Phase
 Dec05 – Tier-1 network operational
 Apr06 – SC4 Throughput Test
 May06 – SC4 Service Phase starts
 Sep06 – Initial LHC Service in stable operation
 Apr07 – LHC Service commissioned
 Each challenge goes through preparation, setup and service phases; the LHC timeline then runs through cosmics, first beams, first physics and on to the full physics run (LHC Service Operation).

LCG Service Challenges – Deploying the Service Tier-1 Centres
   | Centre      | Location      | Country        | ALICE | ATLAS | CMS | LHCb | Total
 1 | GridKa      | Karlsruhe     | Germany        |   X   |   X   |  X  |  X   |  4
 2 | CCIN2P3     | Lyon          | France         |   X   |   X   |  X  |  X   |  4
 3 | CNAF        | Bologna       | Italy          |   X   |   X   |  X  |  X   |  4
 4 | NIKHEF/SARA | Amsterdam     | Netherlands    |   X   |   X   |     |  X   |  3
 5 | NDGF        | Distributed   | Dk, No, Fi, Se |   X   |   X   |     |      |  2
 6 | PIC         | Barcelona     | Spain          |       |   X   |  X  |  X   |  3
 7 | RAL         | Didcot        | UK             |   X   |   X   |  X  |  X   |  4
 8 | Triumf      | Vancouver     | Canada         |       |   X   |     |      |  1
 9 | BNL         | Brookhaven    | US             |       |   X   |     |      |  1
10 | FNAL        | Batavia, Ill. | US             |       |       |  X  |      |  1
11 | ASCC        | Taipei        | Taiwan         |       |   X   |  X  |      |  2
A US Tier1 for ALICE is also expected.

LCG Service Challenges – Deploying the Service pp / AA data rates (equal split)
 Centres: ASCC (Taipei), CNAF (Italy), PIC (Spain), IN2P3 (Lyon), GridKA (Germany), RAL (UK), BNL (USA), FNAL (USA), TRIUMF (Canada), NIKHEF/SARA (Netherlands), Nordic Data Grid Facility
 Columns: experiments served (ALICE, ATLAS, CMS, LHCb), rate into T1 (pp), rate into T1 (AA); the per-site rates are not preserved in this transcript
 Totals: 6 Tier-1s serve ALICE, 10 ATLAS, 7 CMS, 6 LHCb
N.B. these calculations assume an equal split as in the Computing Model documents. It is clear that this is not the ‘final’ answer…


LCG Service Hierarchy
Tier-0 – the accelerator centre (CERN)
 Data acquisition & initial processing – close to 2 GB/s during AA running
 Long-term data curation
 Distribution of data to the Tier-1 centres – ~200 MB/s per site; ~12 sites
Tier-1 – “online” to the data acquisition process; high availability
 Managed Mass Storage – grid-enabled data service
 Data-intensive analysis; national, regional support
 10 Gbit/s dedicated links to T0 (+ significant inter-T1 traffic)
 Sites: Canada – Triumf (Vancouver); France – IN2P3 (Lyon); Germany – Forschungszentrum Karlsruhe; Italy – CNAF (Bologna); Netherlands – NIKHEF (Amsterdam); Nordic countries – distributed Tier-1; Spain – PIC (Barcelona); Taiwan – Academia Sinica (Taipei); UK – CLRC (Didcot); US – FermiLab (Illinois) and Brookhaven (NY)
Tier-2 – ~100 centres in ~40 countries
 Simulation
 End-user analysis – batch and interactive
 1 Gbit/s networks
Les Robertson

LCG Service Challenges – Deploying the Service Networking  Latest estimates are that Tier-1s will need connectivity at ~10 Gbps, with ~70 Gbps at CERN  There is no real problem for the technology, as has been demonstrated by a succession of Land Speed Records  But LHC will be one of the few applications needing this level of performance as a service on a global scale  We have to ensure that there will be an effective international backbone – one that reaches through the national research networks to the Tier-1s  LCG has to be pro-active in working with service providers  Pressing our requirements and our timetable  Exercising pilot services Les Robertson
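
A back-of-the-envelope check of how the quoted link capacities relate to the nominal ~200 MB/s per Tier-1 from the service-hierarchy slide; the "headroom" framing is mine, not the slide's:

```python
# Back-of-the-envelope check of the Tier-0 -> Tier-1 network numbers.
# Nominal rates come from the service-hierarchy slide (~200 MB/s per Tier-1,
# ~12 Tier-1 sites); provisioned capacities come from this slide.

nominal_mb_s_per_t1 = 200          # average rate into each Tier-1
n_tier1 = 12                       # approximate number of Tier-1 sites
provisioned_gbps_per_t1 = 10       # dedicated link per Tier-1
provisioned_gbps_cern = 70         # aggregate out of CERN

nominal_gbps_per_t1 = nominal_mb_s_per_t1 * 8 / 1000
nominal_gbps_cern = nominal_gbps_per_t1 * n_tier1

print(f"Nominal per Tier-1: {nominal_gbps_per_t1:.1f} Gbps "
      f"(headroom ~x{provisioned_gbps_per_t1 / nominal_gbps_per_t1:.0f})")
print(f"Nominal aggregate out of CERN: {nominal_gbps_cern:.1f} Gbps "
      f"(headroom ~x{provisioned_gbps_cern / nominal_gbps_cern:.1f})")
```

The headroom is what absorbs safety factors and the recovery of backlogs after outages mentioned on the next slide.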

LCG Service Challenges – Deploying the Service Services (all 24 x 7, now – 2020)  Managed storage: SRM 1.1 interface, moving to 2.1 (2.2)  No assumption / requirement for tertiary storage at T2s  Monte Carlo generation: write-through cache to T1s  Analysis data: read-only (?) cache from T1s; ~30 day lifetime (?)  Reliable network links: 1Gbit/s at T2s, 10Gbit/s at T1s, support for the full data rate to all T1s out of T0  If a network link goes down, data must be re-routed to an alternate site; a prolonged outage is a major problem; need correspondingly large data buffers at T0 / T1s  Reliable File Transfer services  Gridftp, srmcopy + higher-level functionality - SERVICE  File catalogs, data management tools, database services  Basic services: workload management, VO management, monitoring etc.  Multiple levels of experiment-specific software and corresponding additional complexity
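
The "Reliable File Transfer services" bullet is essentially a managed queue with retries and back-off layered on top of gridftp / srmcopy. The sketch below is purely illustrative: the copy command is a hypothetical placeholder, not the real FTS or gridftp client, and a real service adds per-channel queues, bandwidth shaping and persistent state.

```python
import subprocess
import time

# Illustrative retry loop for a "reliable file transfer" layer on top of a
# point-to-point copy tool. The command name below is a placeholder, not the
# actual FTS or gridftp client interface.
COPY_COMMAND = ["basic-copy-tool"]          # hypothetical transfer client
MAX_ATTEMPTS = 5
BACKOFF_SECONDS = 60

def reliable_transfer(source_surl: str, dest_surl: str) -> bool:
    """Retry a single file transfer with exponential back-off."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        result = subprocess.run(COPY_COMMAND + [source_surl, dest_surl])
        if result.returncode == 0:
            return True                      # transfer succeeded
        # transient failure: wait, then retry with a growing delay
        time.sleep(BACKOFF_SECONDS * 2 ** (attempt - 1))
    return False                             # give up after MAX_ATTEMPTS

# A production service keeps the queue state on disk so that a prolonged
# outage only delays data (buffered at T0/T1), rather than losing it.
```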

LCG Service Challenges – Deploying the Service Where are we now?  Roughly the mid-point in the activity (first proposal to completion)  Demonstrated sustained disk – disk data rates of 100MB/s to multiple Tier1 sites, >500MB/s out of CERN for some 10 days; 800MB/s to a single site (FNAL)  Now (July): demonstrate 150MB/s to Tier1s; 1GB/s out of CERN (disk – disk) plus 60MB/s to tape at Tier1s  In terms of data rate alone, we have to double the data rates, plus whatever is necessary in terms of ‘safety factors’, including recovering backlogs from outages etc.  But so far, these tests have used just dummy files, with the bare minimum of software involved  In particular, none of the experiment software has been included!  Huge additional work: add major complexity whilst doubling rates and providing high-quality services  (BTW, neither of the first two challenges fully met their goals)

LCG Service Challenges – Deploying the Service Basic Components For SC3 Setup Phase  Each T1 to provide a 10Gb network link to CERN  Each T1 + T0 to provide an SRM 1.1 interface to managed storage  This goes for the named T2s for the T2-T1 transfer tests too  T0 to provide a File Transfer Service  Also at named T1s for the T2-T1 transfer tests  BNL, CNAF, FZK, RAL using FTS  FNAL and PIC will do T1-T2 transfers for CMS using PhEDEx  File Catalog, which will act as a site-local catalog for ATLAS/CMS and a central catalog with >1 R/O replicas for LHCb  (Database services behind all of these, but not for experiment data)

LCG Service Challenges – Deploying the Service SRM – Requirements Beyond Pin/Unpin
 2. Relative paths in SURLs ($VO_HOME)
 3. Permission functions
 4. Directory functions (except mv)
 5. Global space reservation
 6. srmGetProtocols
 7. AbortRequest etc.
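
To make the scope of these requirements concrete, here is a minimal sketch of the kind of client-side interface a managed storage element would have to expose; the method names are illustrative only and deliberately do not claim to match the actual SRM 2.x call names.

```python
from abc import ABC, abstractmethod

class ManagedStorage(ABC):
    """Illustrative interface for an SRM-like managed-storage service.

    Method names mirror the capabilities listed on the slide; they are not
    the real SRM 2.x function names.
    """

    @abstractmethod
    def pin(self, surl: str, lifetime_s: int) -> None: ...            # keep a file staged on disk
    @abstractmethod
    def unpin(self, surl: str) -> None: ...                           # release the pin

    @abstractmethod
    def resolve(self, relative_path: str, vo_home: str) -> str: ...   # relative paths in SURLs

    @abstractmethod
    def set_permissions(self, surl: str, acl: dict) -> None: ...      # permission functions

    @abstractmethod
    def make_directory(self, surl: str) -> None: ...                  # directory functions
    @abstractmethod
    def list_directory(self, surl: str) -> list[str]: ...             # (no mv required)

    @abstractmethod
    def reserve_space(self, bytes_needed: int, lifetime_s: int) -> str: ...  # global space reservation

    @abstractmethod
    def get_protocols(self) -> list[str]: ...                         # supported transfer protocols

    @abstractmethod
    def abort_request(self, request_id: str) -> None: ...             # cancel an ongoing request
```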

LCG Service Challenges – Deploying the Service Dedicated connections for SCs
Tier1     | Location           | NRENs            | Status of dedicated link
ASCC      | Taipei, Taiwan     | ASnet, SURFnet   | 1 Gb via SURFnet, testing
BNL       | Upton, NY, USA     | ESnet, LHCnet    | 622 Mbit shared
CNAF      | Bologna, Italy     | Geant2, GARR     | 1 Gb now, 10 Gb in Sept
FNAL      | Batavia, ILL, USA  | ESnet, LHCnet    | 10 Gb, tested
IN2P3     | Lyon, France       | Renater          | 1 Gb now, 10 Gb in Sept
GridKa    | Karlsruhe, Germany | Geant2, DFN      | 10 Gb, tested
SARA      | Amsterdam, NL      | Geant2, SURFnet  | 10 Gb, testing
NorduGrid | Scandinavia        | Geant2, Nordunet | Would like to start performing transfers
PIC       | Barcelona, Spain   | RedIris, Geant2  | Will participate in SC3 but not at full rate
RAL       | Didcot, UK         | Geant2, Ukerna   | 2 x 1 Gb via SURFnet soon
Triumf    | Vancouver, Canada  | Canet, LHCnet    | 1 Gb via SURFnet, testing
Kors Bos, LCG-LHCC Referees Meeting, March 2005 (updated 30 May JDS)

LCG Service Challenges – Deploying the Service ALICE & LCG Service Challenge 3  Use case 1: RECONSTRUCTION  (Get “RAW” events stored at T0 from our Catalogue)  First reconstruction pass at T0  Ship from T0 to T1s (goal: 500 MB/s out of T0)  Reconstruct at T1 with calibration data  Store/Catalogue the output  Use Case 2: SIMULATION  Simulate events at T2s  Transfer data to the supporting T1s  High-level goal of SC3 – all offline use cases except analysis

LCG Service Challenges – Deploying the Service ALICE: Possible Data Flows  A) Reconstruction: input at T0, run at T0 + push to T1s  Central Pb-Pb events, 300 CPUs at T0  1 Job = 1 input event; Input: 0.8 GB on T0 SE; Output: 22 MB on T0 SE  Job duration: 10 min -> 1800 Jobs/h -> 1.4 TB/h (400 MB/s) from T0 -> T1s  B) Reconstruction: input at T0, push to T1s + run at T1s  Central Pb-Pb events, 600 CPUs at T1s  1 Job = 1 input event; Input: 0.8 GB on T0 SE; Output: 22 MB on T1 SE  Job duration: 10 min -> 3600 Jobs/h -> 2.8 TB/h (800 MB/s) from T0 -> T1s  Simulation  Assume 1000 CPUs available at T2s and central Pb-Pb events  1 Job = 1 event; Input: a few KB of configuration files; Output: 0.8 GB on T1 SE  Job duration: 6 h -> 4000 Jobs/day -> 3.2 TB/day (40 MB/s) from T2s -> T1s
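
The MB/s figures above follow directly from the CPU counts, job durations and the 0.8 GB moved per job; a quick recomputation:

```python
# Recompute the ALICE data-flow figures from the CPU counts, job durations and
# per-job data volumes quoted above (0.8 GB moved per job in each scenario).

def dataflow(cpus: int, job_minutes: float, gb_moved_per_job: float):
    """Return (jobs per hour, sustained transfer rate in MB/s)."""
    jobs_per_hour = cpus * 60 / job_minutes
    mb_per_s = jobs_per_hour * gb_moved_per_job * 1000 / 3600  # GB/h -> MB/s
    return jobs_per_hour, mb_per_s

scenarios = [
    ("A) reconstruction at T0, push RAW to T1s", 300, 10, 0.8),
    ("B) push RAW to T1s, reconstruct at T1s", 600, 10, 0.8),
    ("Simulation at T2s, output to T1s", 1000, 6 * 60, 0.8),
]

for label, cpus, minutes, gb in scenarios:
    jobs_h, rate = dataflow(cpus, minutes, gb)
    print(f"{label}: {jobs_h:.0f} jobs/h ({jobs_h * 24:.0f}/day), ~{rate:.0f} MB/s")
```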

LCG Service Challenges – Deploying the Service LHCb goals (summarised)  Phase 0 (July on)  Test data management tools; 1TB of storage at Tier1s  Phase 1 (September on)  Exercise tools; 8TB of storage at participating Tier1s  Phase 2 (October on)  Use the tools as part of a distributed production  Metrics defined for ‘pass’ and ‘success’, e.g.  Tier1:  Pass: 80% of jobs finished within 24 hours of data being available.  Success: 95% of jobs finished within 24 hours.  Distribution of DST to other Tier1s:  Pass: 80% of stripped DST at 5/6 other Tier1s within 8 hours; all within 2 days.  Success: 95% of stripped DST at 5/6 other Tier1s within 8 hours; all within 2 days.
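
The pass/success thresholds above are easy to evaluate mechanically; a small helper, with made-up turnaround times as the example input:

```python
# Helper to evaluate the LHCb Tier-1 job metric quoted above: what fraction
# of jobs finished within 24 hours of the data being available?

def job_metric(turnaround_hours: list[float], limit_h: float = 24.0) -> str:
    """Classify a set of job turnaround times against the pass/success bars."""
    within = sum(1 for t in turnaround_hours if t <= limit_h) / len(turnaround_hours)
    if within >= 0.95:
        return f"success ({within:.0%} within {limit_h:.0f} h)"
    if within >= 0.80:
        return f"pass ({within:.0%} within {limit_h:.0f} h)"
    return f"fail ({within:.0%} within {limit_h:.0f} h)"

# Example with made-up turnaround times (hours):
print(job_metric([3, 5, 8, 12, 20, 22, 23, 26, 4, 7]))   # -> pass (90% within 24 h)
```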

LCG Service Challenges – Deploying the Service SC3 – Experiment Goals All 4 LHC Experiments have concrete plans to:  Test out the infrastructure  core services: storage, file transfer, catalogs, …  Run a prolonged production across T0/T1/T2 sites  (ALICE / LHCb represent the two extremes; CMS / ATLAS between)  Expect long-term services to be delivered as an output of SC3  These services required from October 2005 / January 2006  Variation by experiment based on detector installation schedule  These services (with upgrades) run until end of LHC – circa 2020

LCG Service Challenges – Deploying the Service Baseline services  Storage management services  Based on SRM as the interface  Basic transfer services  gridFTP, srmCopy  Reliable file transfer service  Grid catalogue services  Catalogue and data management tools  Database services  Required at Tier1,2  Compute Resource Services  Workload management  VO management services  Clear need for VOMS: roles, groups, subgroups  POSIX-like I/O service  local files, and include links to catalogues  Grid monitoring tools and services  Focussed on job monitoring  VO agent framework  Applications software installation service  Reliable messaging service  Information system

LCG Service Challenges – Deploying the Service Service Challenge 4 – SC4  SC4 starts April 2006  SC4 ends with the deployment of the FULL PRODUCTION SERVICE  Deadline for component (production) delivery: end January 2006  Adds further complexity over SC3  Additional components and services  Analysis Use Cases  SRM 2.1 features required by the LHC experiments  All Tier2s (and Tier1s…) at full service level  Anything that dropped off the list for SC3…  Services oriented at analysis and end-users  What are the implications for the sites?  Analysis farms:  Batch-like analysis at some sites (no major impact on sites)  Large-scale parallel interactive analysis farms at major sites  (100 PCs + 10TB storage) x N  User community:  No longer a small (<5) team of production users  Work groups of people  Large (100s – 1000s) numbers of users worldwide

LCG Service Challenges – Deploying the Service Analysis Use Cases (HEPCAL II)  Production Analysis (PA)  Goals in context: create AOD/TAG data from input for physics analysis groups  Actors: experiment production manager  Triggers: need input for “individual” analysis  (Sub-)Group Level Analysis (GLA)  Goals in context: refine AOD/TAG data from a previous analysis step  Actors: analysis-group production manager  Triggers: need input for refined “individual” analysis  End User Analysis (EA)  Goals in context: find “the” physics signal  Actors: end user  Triggers: publish data and get the Nobel Prize :-)

LCG Service Challenges – Deploying the Service Summary and Conclusions  Mid-way (time-wise) in an aggressive programme to deploy a world-wide production Grid  Services must be provided no later than year end… … and then run until career end  Deploying production Grids is hard… … and requires far too much effort at the various sites  We need to reduce this effort… … as well as dramatically increase the ease of use  Today it is hard for users to ‘see’ their science… … the main effort goes into overcoming the complexity of the Grid


The Service is the Challenge