Slide 1: WLCG Collaboration Workshop, Victoria, Canada
Site Readiness Panel Discussion
Saturday 1 September 2007
Volker Guelzow (DESY)

Slide 2
- Some general remarks on site readiness
- Some remarks on the CMS view of their sites' readiness (from various CMS presentations)
- Some remarks on the situation in Germany

Slide 3: General Remarks
- Storage systems are in principle fine and have been checked through the Service Challenges
- SRM 2.2 is coming in pre-production; most of the Tier 1s are expected to run SRM 2.2 by the end of October
- Accounting tools are insufficient
- Availability/reliability: is it checked with experiment tasks? Is SAM good enough? (see the sketch below)
- Tier 1/2 readiness strongly depends on the quality of the middleware software
- What are the kSpecInt2k numbers worth in real life today?
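To make the availability question concrete, here is a minimal sketch of how per-site availability and reliability can be derived from SAM-style test periods, assuming the usual WLCG definitions (reliability discounts scheduled downtime); the input format and field names are illustrative assumptions, not the actual SAM data model:

```python
# Minimal sketch: availability/reliability from SAM-style test periods.
# Assumes the standard WLCG definitions:
#   availability = time_up / total_time
#   reliability  = time_up / (total_time - scheduled_downtime)
# The input format (list of (status, hours) pairs) is hypothetical.

def site_metrics(periods):
    """periods: list of (status, hours), status in {'up', 'down', 'sched_down'}."""
    total = sum(h for _, h in periods)
    up = sum(h for s, h in periods if s == "up")
    sched = sum(h for s, h in periods if s == "sched_down")
    availability = up / total
    reliability = up / (total - sched) if total > sched else 1.0
    return availability, reliability

# Example: one week (168 h) with 8 h scheduled and 4 h unscheduled downtime.
week = [("up", 156), ("sched_down", 8), ("down", 4)]
avail, rel = site_metrics(week)
print(f"availability = {avail:.3f}, reliability = {rel:.3f}")
# availability ~ 0.929, reliability ~ 0.975
```

The gap between the two numbers is exactly what the slide's question probes: a site can look reliable while still being unavailable to the experiments for significant scheduled periods.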

Slide 4: General Remarks
- Scaling up the hardware is still critical -> finding the right balance of components
- Contact persons with HEP skills are needed at the Tier 1s (the Tier 2s seem to be easier)
- A help desk for the Tier 1/2s is needed
- Documentation
- Integration into the experiments' user support systems
- 24/7 operation is still open in some cases
- We have to help the Tier 3s

Slide 5: Conclusion from a CMS presentation by Bonacorsi, Kasemann, Wuerthwein, Belforte

Slide 6: Status of Tier 1/2 in Germany
- Full Tier 2 for ATLAS and CMS at DESY Hamburg & Zeuthen in production; National Analysis Facility in setup phase
- 0.5 Tier 2 for CMS at RWTH Aachen in production
- Full federated ATLAS Tier 2 at Uni Wuppertal & Uni Freiburg partially ready
- Tier 1 GridKa for ALICE, ATLAS, CMS, LHCb + non-LHC experiments in production; Tier 2 for ALICE partially ready
- Federated ATLAS Tier 2 centre at Max Planck & LMU Munich in setup phase

Slide 7: From CMS talks
- CSA06: goal was to test services at 25% of 2008 complexity; some services were tested well beyond 25%
- Fall 2007: computing facilities must be able to handle challenge activities, MC production and processing, physics analysis, and commissioning and calibration activities in preparation for physics
- Spring 2008: MC production and processing at 100M events/month (see the rate sketch below); Cosmic Run with magnetic field; Challenge '08? MC production continues throughout the year
- Computing and Offline milestones:
  - Beginning of July: preCSA07 Computing Software Analysis Challenge
  - Beginning of September: CSA07 Computing Software Analysis Challenge begins
  - End of October: Offline & Computing ready for Cosmics Run
  - End of February '08: production of startup MC samples (100M events/month)
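For scale, a back-of-envelope conversion of the 100M events/month target into a sustained rate and data volume; the 30-day month and per-event size are illustrative assumptions, not figures from the slides:

```python
# Back-of-envelope: what does 100M events/month mean as a sustained load?
# Assumptions (illustrative, not from the slides): a 30-day month and
# ~1.5 MB per simulated event.
EVENTS_PER_MONTH = 100e6
SECONDS_PER_MONTH = 30 * 24 * 3600   # ~2.59e6 s
MB_PER_EVENT = 1.5                   # assumed average event size

rate_hz = EVENTS_PER_MONTH / SECONDS_PER_MONTH
volume_tb = EVENTS_PER_MONTH * MB_PER_EVENT / 1e6

print(f"Sustained rate: {rate_hz:.1f} events/s")   # ~38.6 events/s
print(f"Monthly volume: {volume_tb:.0f} TB")       # ~150 TB/month
```

Even under these rough assumptions, the target implies a round-the-clock production rate of roughly 40 events per second across the participating sites, which is why the preceding milestones stress facility readiness well before the Cosmic Run.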