Presentation at the International Symposium on Grid Computing

Presentation transcript:

CMS Computing Model
Presentation at the International Symposium on Grid Computing
Taipei, 27-29 April 2005
David Stickland, Princeton University

Not Presented Today…
- Data Challenge DC04 resulted in lots of changes to the CMS Event Data Model and to the Computing Model; those results, reasons, redesign etc. will not be presented today.
- CMS produced about 90M events in the last year or so using LCG2, GRID3 and local computing resources.
- Even quite complex computing production (such as digitization with pile-up) is being run on LCG now.
- These events are being served to CMS physicists now for analysis; the data is being analyzed where it is located.
- We have a prototype system, CRAB, that (many) non-gridified physicists are using to run their analysis jobs on LCG at CERN, CNAF, FNAL, Lyon, PIC, RAL, Legnaro,…

GRIDS
- The CMS Computing Model relies on GRIDs. CMS will work in at least three GRID environments:
  - CMS/LCG-EGEE
  - CMS/OSG
  - CMS/NorduGrid
- (Why "CMS/"? Because in each case there will be CMS-specific work to be done on top of or around the offered GRID environments; we expect/need the CMS communities associated with these GRIDs to do this work.)
- The GRID environments do not offer a consistent set of services at the same strata of the middleware.
- Since CMS needs to offer a uniform environment, we must have the ability to work with both upper- and lower-level GRID middleware components, to match our applications and interfaces to specific GRID implementations (sketched below).
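
One way to read the "uniform environment over non-uniform middleware" requirement is as a thin CMS-side abstraction layer. The sketch below is only an illustration of that idea; the interface, class names and submit calls are invented for this example and do not correspond to actual CMS or middleware APIs.

```python
# Minimal sketch of a CMS-side abstraction over different Grid flavours.
# Interface and backend names are illustrative assumptions, not real CMS code.
from abc import ABC, abstractmethod

class GridBackend(ABC):
    """Uniform interface the CMS tools would program against."""
    @abstractmethod
    def submit(self, job_description: dict) -> str: ...
    @abstractmethod
    def status(self, job_id: str) -> str: ...

class LCGBackend(GridBackend):
    def submit(self, job_description):   # would wrap LCG/EGEE submission
        return "lcg-job-001"
    def status(self, job_id):
        return "Running"

class OSGBackend(GridBackend):
    def submit(self, job_description):   # would wrap OSG submission
        return "osg-job-001"
    def status(self, job_id):
        return "Idle"

# CMS application code stays the same whichever Grid flavour the site belongs to.
def run_analysis(backend: GridBackend, dataset: str) -> str:
    return backend.submit({"application": "ORCA", "dataset": dataset})

print(run_analysis(LCGBackend(), "/QCD/Trigger-Jet/AOD"))
```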

Architectural Elements
- Data Granularity
  - LHC triggers cut deeply into the physics; data always needs to be considered in its trigger context.
  - Split the annual O(2 PB) of raw data into O(50) trigger-determined datasets of ~40 TB each.
- Data Tiers
  - RAW, Reconstructed, Analysis, Tag.
  - Keep RAW and Reconstructed close together (initially at least).
  - Custodial RAW+Reco distributed over Tier-1s (one copy somewhere).
  - Analysis data: full copy at each Tier-1, partial copies at many Tier-2s.
- Computing Tiers
  - CMS/Tier-0: close connection to Online, highly organized.
  - Tier-1: data custody, selection, data distribution, (analysis), re-reconstruction.
  - Tier-2: analysis data under physicist "control", MC production.
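
As a rough back-of-the-envelope check of the granularity and placement numbers above, the sketch below reproduces the per-dataset size and estimates per-Tier-1 volumes. The Tier-1 count and the analysis-data fraction are assumptions made only for illustration; they are not figures from the talk.

```python
# Rough illustration of the data-granularity numbers quoted on the slide.
# Tier-1 count and analysis fraction are invented assumptions for this sketch.

ANNUAL_RAW_PB = 2.0          # O(2 PB) of raw data per year (slide)
N_TRIGGER_DATASETS = 50      # O(50) trigger-determined datasets (slide)
N_TIER1 = 7                  # assumed number of Tier-1 centres
ANALYSIS_FRACTION = 0.1      # assumed ratio of analysis-format to raw data

dataset_size_tb = ANNUAL_RAW_PB * 1000 / N_TRIGGER_DATASETS
print(f"~{dataset_size_tb:.0f} TB per trigger dataset")   # ~40 TB, as on the slide

# Placement rules from the slide (Reco volume ignored here for simplicity):
#  - custodial RAW: exactly one copy in total, spread over the Tier-1s
#  - analysis data: a full copy at *every* Tier-1
custodial_per_t1_tb = ANNUAL_RAW_PB * 1000 / N_TIER1
analysis_per_t1_tb = ANNUAL_RAW_PB * 1000 * ANALYSIS_FRACTION
print(f"~{custodial_per_t1_tb:.0f} TB/year custodial share per Tier-1")
print(f"~{analysis_per_t1_tb:.0f} TB/year of analysis data at each Tier-1")
```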

Data Flow CMS/CAF

Data Management
- CMS has chosen a baseline in which (initially) the bulk experiment-wide data is pre-located at sites, by policy and explicit decisions taken by CMS to manage the data.
- The DM system will focus on bringing up this basic functionality; however, hooks will be provided for more dynamic movement of data in the future.
- The DM architecture is based on a set of loosely coupled components which, taken together, provide the necessary core functionality.
- The Workload Management (WM) system need only steer jobs to the correct location (sketched below).
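
A minimal sketch of the "jobs go to the data" baseline described above. The placement map, dataset names and site names are purely illustrative assumptions, not the actual CMS policy tables or any real API.

```python
# Illustrative only: pre-located data + job steering, as in the baseline above.
# The placement policy and all names below are invented for this example.

# Policy decisions taken by the experiment: dataset -> sites that host it.
placement_policy = {
    "/Higgs/Trigger-Muon/RAW-Reco": ["T1_FNAL", "T1_RAL"],
    "/QCD/Trigger-Jet/AOD":         ["T1_FNAL", "T1_PIC", "T2_Legnaro"],
}

def steer_job(dataset, candidate_sites):
    """Workload management only needs to send the job where the data already is."""
    hosting = set(placement_policy.get(dataset, []))
    eligible = [s for s in candidate_sites if s in hosting]
    if not eligible:
        raise RuntimeError(f"No candidate site hosts {dataset}; "
                           "dynamic data-movement hooks would be needed here")
    return eligible[0]   # a real system would also weigh load, priorities, etc.

print(steer_job("/QCD/Trigger-Jet/AOD", ["T2_Legnaro", "T1_CNAF"]))  # -> T2_Legnaro
```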

Components of the Workload and Data Management
- Dataset Bookkeeping System: what data exists.
- Data Location Service: where it is.
- File Placement and Transfer Service: PhEDEx, on top of reliable transfer components/services.
- Local File Catalogs: site specific.
- Data Access and Storage Systems: SRM on SE; POSIX-like access; CMS tools to allow CMS policy and space management.
- Monitoring and job tracking: MonALISA, GridICE and BOSS; information for priority changing.
- Workflow support: configuration control, job preparation; CRAB (or son of CRAB)…
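
To show how these components fit together, here is a hedged sketch of the lookup chain a job would follow: bookkeeping says what exists, the location service says where, and the site-local catalog turns logical file names into physical ones. The class and method names are invented for illustration; they are not the real DBS/DLS/catalog interfaces.

```python
# Illustrative lookup chain only; the interfaces below are invented, not real CMS APIs.

class DatasetBookkeeping:
    """Bookkeeping role: what data exists (dataset -> logical file names)."""
    def files_in(self, dataset):
        stem = dataset.strip("/").replace("/", "_")
        return [f"/store/{stem}/file_{i}.root" for i in range(3)]

class DataLocationService:
    """Location-service role: where the data is (dataset -> hosting sites)."""
    def sites_for(self, dataset):
        return ["T1_FNAL", "T2_Legnaro"]

class LocalFileCatalog:
    """Site-local catalog: logical file name -> physical file name at that site."""
    def __init__(self, site_prefix):
        self.prefix = site_prefix
    def pfn(self, lfn):
        return self.prefix + lfn

# A job for one dataset: find its files, pick a hosting site, resolve paths there.
dbs, dls = DatasetBookkeeping(), DataLocationService()
dataset = "/QCD/Trigger-Jet/AOD"
site = dls.sites_for(dataset)[0]
catalog = LocalFileCatalog("srm://se.example.org/cms")   # site-specific prefix (made up)
for lfn in dbs.files_in(dataset):
    print(site, catalog.pfn(lfn))
```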

Overview of CMS Planning
- Computing and software planning overview:
  - The baseline system assumes thin grid middleware, with the experiments as significant stakeholders with major input to operation and to the choices made.
  - The full computing system is essentially being put into operation now.
  - The new event data model / framework is being deployed this year.
- Schedule:
  - July - December 2005: LCG Service Challenge 3
  - October 2005 - February 2006: magnet test, cosmic challenge
  - Summer 2006 onwards: LCG Service Challenge 4
  - Summer/Autumn 2006: DC06 (= SC4? or the production phase of SC4?)
  - February 2007: deliver LHC-ready computing and software
  - July 2007: LHC start-up

Outstanding GRID Issues
- Top three issues and/or missing functionality:
  1. Priorities and share allocation in both workload and data management (data management aims for a solution in summer 2005).
  2. Comprehensive monitoring to understand resource usage, to guide re-prioritisation and policy changes. Monitoring tools != understanding (and we need to get them deployed!).
  3. Operational robustness and stability.
- (The fact that the basic SE and CE functionalities are not in this list is evidence that these core components of Grids are, or are getting, under control and are no longer the critical-path issues.)

Priorities and Share Allocation in WM and DM
- This is the least mature part of GRIDs to date: sensible system management requires many more control possibilities than are yet available; the granularity of a VO is too crude.
- Tools like PhEDEx can apply policy (sketched below), e.g.:
  - Data transfer from the Tier-0 runs at highest priority.
  - Physics group coordinators can preempt other inter-site transfers.
  - MC from Tier-2 may run as "when nothing else to transfer", unless a T2 buffer overflow dictates "as fast as you can".
  - Etc.
- Fair-share between experiments: presumably a site responsibility.
- Intra-experiment fair-share: a site responsibility (separate VOs) or an experiment responsibility?
  - How do we stop one group/user exhausting the local allocation during the first week of the month when it is known that another group will need it in the last two weeks?
- More granular priorities and ACLs are mandatory.
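
As a concrete (and purely illustrative) rendering of the kind of transfer policy described above, the sketch below ranks queued transfers following rules like those on the slide. The priority values, request fields and the overflow rule are assumptions for this example, not PhEDEx's actual policy engine.

```python
# Illustrative transfer-priority policy, loosely following the slide's examples.
# Priority values and field names are invented for this sketch.
import heapq

def priority(request):
    """Lower number = served first."""
    if request["source"] == "T0":
        return 0                      # Tier-0 export always runs at highest priority
    if request.get("group_preempt"):
        return 1                      # physics-group coordinators can preempt others
    if request["source"].startswith("T2") and request.get("mc_output"):
        # MC from Tier-2: "when nothing else to transfer",
        # unless the T2 buffer is overflowing ("as fast as you can").
        return 1 if request.get("buffer_overflow") else 9
    return 5                          # everything else

queue = []
for req in [
    {"name": "mc-from-legnaro", "source": "T2_Legnaro", "mc_output": True},
    {"name": "raw-export",      "source": "T0"},
    {"name": "group-skim",      "source": "T1_RAL", "group_preempt": True},
]:
    heapq.heappush(queue, (priority(req), req["name"]))

while queue:
    print(heapq.heappop(queue))   # raw-export, then group-skim, then mc-from-legnaro
```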

Networks
- We are pushing the available networks to their limits in the Tier-1/Tier-2 connections.
  - The Tier-0 needs ~2x10 Gb/s links for CMS.
  - Each Tier-1 needs ~10 Gb/s links.
  - Each Tier-2 needs ~1 Gb/s for its incoming traffic.
- There will be extreme upward pressure on these numbers as the distributed computing becomes more and more usable and effective.
- Service Challenges with LCG, the CMS Tier-1 centers and the CMS Data Management team/components are planned for this year, to ensure we are on the path to achieve these performances.
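
To put these link speeds in context, here is a short back-of-the-envelope conversion from line rate to sustainable daily volume. The 50% average utilisation is an assumed illustrative efficiency, not a figure from the talk.

```python
# Back-of-the-envelope: what the quoted link speeds mean in data volume per day.
# The 50% average utilisation is an assumption for illustration only.

SECONDS_PER_DAY = 86_400
EFFICIENCY = 0.5   # assumed fraction of the nominal rate sustained on average

def tb_per_day(gbit_per_s):
    """Convert a line rate in Gb/s to sustained TB/day at the assumed efficiency."""
    bytes_per_s = gbit_per_s * 1e9 / 8
    return bytes_per_s * SECONDS_PER_DAY * EFFICIENCY / 1e12

for label, rate in [("Tier-0 (2x10 Gb/s)", 20), ("Tier-1 (10 Gb/s)", 10), ("Tier-2 (1 Gb/s)", 1)]:
    print(f"{label}: ~{tb_per_day(rate):.0f} TB/day sustained")
# -> Tier-0 ~108 TB/day, Tier-1 ~54 TB/day, Tier-2 ~5 TB/day (at 50% utilisation)
```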

(Draft) Integration Schedule 2005
- June 2005 (integration testing: June - August)
  - Data management: new dataset bookkeeping system, data location index; able to represent information in the current production database (RefDB) in a way that workload management can use.
  - Data transfers: priority-based + dynamic routing, easy deployment.
  - Production: ability to push pile-up to worker nodes, file merging option.
  - Workload management, conditions: (to be defined).
- September 2005 (integration testing: September - November)
  - Data management + workload management: functionally complete for bulk (collaboration-wide) data processing; output file harvesting; support for the new EDM.
  - Production: revised RefDB, task-queue based job pull system.
  - Conditions: delivery of conditions data to sites and use there.
  - New EDM + framework complete, ORCA migration begins.
- December 2005 (integration testing: December - February)
  - Data management + workload management: user / analysis output data, n-tuple handling; ability to serve production needs.
  - Production: can execute bulk production using DM/WM tools.
  - Conditions: delivery of new conditions back to the detector facility.
  - ORCA based on the new framework.

Summary
- All Baseline Services need to be deployed this year, even if they are also still evolving.
- Components of the Service Challenges need to continue as production services.
- Experiments must work with at least three grid implementations; we had better get used to it…
- All Tier-1 centers are getting underway now.
- All first-round Tier-2 centers need to be operational by, roughly, SC4.
- We live in interesting times… (which, contrary to Robert F. Kennedy's assertion, is apparently not a Chinese curse, but it seems appropriate anyway).