CCRC’08 Weekly Update Jamie Shiers ~~~ LCG MB, 1st April 2008

2 Overview

- Even though this MB is a face-to-face meeting, we will stick with the format that has evolved over recent weeks.
- A more extensive report on today’s CCRC’08 F2F will be given at tomorrow’s GDB.
- We are moving steadily towards (semi-)automated reporting, which is good(?).
- The OB presentation is attached to the agenda, for insomniacs.

3 Current activities – Services

DBs: Two major activities. 1) Preparation of the new RAC 5 and RAC 6 hardware (both now on critical power) to migrate experiment services next week. 2) Upgrade of the integration RACs to Oracle; this version will probably not be used for CCRC’08(?). More details in the notes.

3D: CNAF down for the entire week due to maintenance. Streams replication to CNAF has been split and stopped; it will be resumed this week. ATLAS Streams replication between the offline and online databases stopped on Good Friday due to an error from the Streams apply process, caused by a user application mis-configuration. The replication was resumed after a few hours.

CASTOR: The new CASTOR version (2.1.7) will be ready for pre-production certification on the 1st of April, as originally planned. All tests and stress tests are successful.

SRM: A bug has been found in the CERN SRM v2 databases. It is in fact fixed in a rollout due on Monday, so the rollout has been brought forward.

FTS: Some Apache patches are in the pipeline for the FTS servers.

CE: The fixes deployed at CERN to the LCG CEs, which reduce their load by an order of magnitude, are being packaged for external distribution and should be ready next week.

DM: Patches to the version of lcg-utils should also be ready for distribution next week.

4 Current activities – ATLAS

Wed: Throughput testing with junk data. Build up T0 to T1 transfers to 150% of nominal, adding 50% each day (150% over the weekend). CNAF has a downtime, so their share is going to BNL. NDGF have requested a reduced rate due to insufficient resources. SRMv2 is being used at CERN.

Thu: Some CASTOR fixes at the end of the day. Try to reach 100% of the rate today, then 200% on Friday, then throttle back to 100% for the weekend. A current problem is that they no longer see BNL on the ATLAS dashboard.

Fri: 1) Yesterday the export rate was supposed to be increased to 100% of nominal, but a problem in the ATLAS T0 machinery meant not enough LSF jobs were submitted. 2) Display problem with the dashboard: some entries end up in the production dashboard but should be in the T0 dashboard, and vice versa. 3) SARA is not getting data on disk since the disk is full (being cleaned up now). NDGF is not getting data on disk since the disk is full and cannot be cleaned up centrally, because NDGF does not use LFC, the only catalog supported by the central deletion tools; the NDGF people should do the cleaning themselves (alerted).

LFC: ATLAS are planning a bulk CERN site change to about 60K LFC entries. The consensus was that this is a relatively simple operation that should only take a few tens of minutes of real time. To be scheduled!
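The ramp plan above (add 50% of nominal per day up to 150%, redirect CNAF’s share to BNL during its downtime, run NDGF at a reduced rate) can be sketched as a small calculation. The per-site nominal rates below are hypothetical placeholders, not the real ATLAS T0-to-T1 figures:

```python
# Illustrative sketch of the T0->T1 export ramp-up: 50% of nominal on day 1,
# +50% per day, capped at 150%.  While CNAF is in downtime its share goes to
# BNL, and NDGF runs at a reduced fraction of its target.
# All rates are hypothetical MB/s, not official ATLAS numbers.

NOMINAL_RATES = {"BNL": 200.0, "CNAF": 100.0, "FZK": 100.0,
                 "SARA": 100.0, "NDGF": 50.0}

def target_rates(day, cnaf_down=True, ndgf_factor=0.5):
    """Per-site target rates on ramp day 1, 2, 3 (i.e. 50/100/150% of nominal)."""
    fraction = min(0.5 * day, 1.5)
    rates = {site: nominal * fraction for site, nominal in NOMINAL_RATES.items()}
    if cnaf_down:
        rates["BNL"] += rates.pop("CNAF")   # redirect CNAF's share to BNL
    rates["NDGF"] *= ndgf_factor            # NDGF requested a reduced rate
    return rates

day3 = target_rates(3)
print(day3)                                  # weekend target at 150% of nominal
```

With these placeholder numbers, day 3 gives BNL 450 MB/s (its own 300 plus CNAF’s 150) and a reduced NDGF rate of 37.5 MB/s.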

5 Current activities – CMS

Mon, Tue: –

Wed: Will continue their T0 to T1 exports to overlap with ATLAS, but there is a downtime coming up at FZK.

Thu: Asking how to get actions from IT on non-urgent issues during long holiday periods such as Easter. The use case was to allow some dataops team members to access the DAQ worker nodes to look at LSF issues: non-urgent requests, but ones that cannot wait for weeks. It was agreed to discuss this at the next F2F meeting.

Fri: –

6 Current activities – ALICE

Main activities for this week:
- Pass 1 reconstruction of the RAW data taken during the February/March commissioning exercise (CERN T0).
- Replication of specific RAW datasets to named T2s for fast detector-specific analysis (calibration/software tuning).
- MC production ongoing at all centres.

7 Current activities – LHCb

Mon: –

Tue: Testing their stripping workflow.

Wed: –

Thu: Evaluating the requirements for local worker-node disk space in case they switch to copying production input data to the local WN disk rather than reading it via remote I/O; the current estimate is 5 GB. They will be running some production to exercise the manpower of their stripping workflow.

Fri: –
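The Thursday disk-space evaluation amounts to a simple sizing exercise: if inputs are staged to the worker node instead of being read remotely, the job needs local space for all its input files at once, plus some headroom. The file counts, sizes and safety factor below are hypothetical placeholders chosen only to illustrate how an estimate like the quoted 5 GB could be reached:

```python
# Hypothetical sizing sketch for the LHCb question: how much local worker-node
# disk is needed if production input data is copied to the WN before the job
# runs, rather than read via remote I/O?  Numbers are illustrative, not LHCb's.

def wn_disk_estimate(files_per_job, file_size_gb, safety_factor=1.25):
    """Peak local disk (GB) per job if all inputs are staged before running."""
    return files_per_job * file_size_gb * safety_factor

# e.g. 2 input files of 2 GB each, with 25% headroom for temporary output:
estimate_gb = wn_disk_estimate(files_per_job=2, file_size_gb=2.0)
print("estimated local WN disk per job: %.1f GB" % estimate_gb)
```

With those assumed inputs the estimate comes out at 5.0 GB per job; the real requirement depends on LHCb’s actual input file sizes and multiplicities.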