Slide 1/29: Alice T1/T2 Tutorial – Site Experience
Subatech – Nantes – France (IN2P3-SUBATECH)
Jean-Michel BARBET, Subatech, IN2P3, France
CERN, May 2009

Slide 2/29: Brief presentation
The SUBATECH laboratory is one of the 20 units of the French national institute for nuclear and particle physics, IN2P3.
It is located in Nantes, Brittany, in the west of France.

Slide 3/29: Brief presentation (continued)
Hosted by the Ecole des Mines de Nantes (EMN)
Joint research unit (UMR) between EMN, Nantes University and the CNRS institute IN2P3
More than 150 people; IT service: 5 people

Slide 4/29: LCG and ALICE sites in France
 CCIN2P3 (T1 + T2 + AF)
 Clermont
 GRIF (IRFU, IPNO)
 Nantes SUBATECH
 Strasbourg IRES
 Lyon IPNL
 Grenoble LPSC

Slide 5/29: SUBATECH LCG services and hardware
 Site BDII
 Computing Element: CREAM-CE
 Computing Element: LCG-CE
 Storage Element DPM-xrootd: 1 head node, 4 server nodes using GPFS, 2 GPFS servers on an IBM SAN DS array
 Storage Element native xrootd: 1 xrootd manager, 4 servers with Dell MD1000 arrays in RAID6
 VOBOX Alice 1
 VOBOX Alice 2
 Torque cluster, 260 CPUs (25 Woodcrest + 20 Clovertown nodes), shared grid + local
 Torque cluster, 120 CPUs (15 Clovertown nodes), grid only
(The original slide is a diagram; its legend distinguishes VMware virtual machines from physical machines.)

Slide 6/29: Network
 1 Gbit/s connection to RENATER via the IN2P3 router
 Firewall / virus scanner / IDS (firewall and UTM)
 Core router: Cisco 6500 with non-blocking backplane
 Storage servers: 2 x 1 Gb/s
 VM servers: 2 x 1 Gb/s
 Worker nodes: 1 Gb/s

Slide 7/29: LCG-CE Computing Element
260 SL4.5/i386 cores for ALICE
Components (diagram): LCG-CE Computing Element, VOBOX Alice 1 (SL4.5 i386), software area, worker nodes (SL4.5 i386), batch manager (Torque + MAUI), NFS server hosting the home directories of the pool accounts and the local alicesgm home directory

Slide 8/29: CREAM-CE Computing Element
120 SL4.5/x86_64 cores for ALICE (separate cluster)
Components (diagram): CREAM-CE Computing Element, VOBOX Alice 2 (SL4.5 i386), software area, worker nodes (SL4.5 x86_64), batch manager (Torque + MAUI), NFS server hosting the home directories of the pool accounts and the local alicesgm home directory

Slide 9/29: DPM-xrootd storage
40 TB of SAN storage, 4 GPFS partitions
Components (diagram): DPM-xrootd head node (with MySQL), DPM-xrootd server nodes running GPFS, GPFS server, IBM SAN DS array, SAN network

Slide 10/29: Native xrootd storage
100 TB in 4 DAS cells, each composed of 1 Dell 2950 server attached to 2 MD1000 arrays in RAID6 (4 partitions per server)
Dedicated xrootd manager (NEC Express RE1)
Components (diagram): 4 xrootd servers with MD1000 arrays, 1 xrootd manager

Slide 11/29: Site experience (outline)
 DPM-xrootd
 Native xrootd
 CREAM-CE
 AliEn Torrent

Slide 12/29: DPM-xrootd
Took time to become available and to set up
Integration of xrootd with DPM is not complete:
 xrootd usage gets mapped to the DPM root principal
 Used/free space is incorrectly reported by the GIP
 Used/free space is not automatically reported by MonALISA
Sometimes breaks:
 The xrootd daemon stalls on the servers and has to be restarted (an automatic cron script may help; see the sketch below)
 High load involving the MySQL daemon when the access rate on the manager is high (a virtual machine is probably a bad choice here and better-sized hardware may be required)
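The slide mentions that an automatic cron script may help restart stalled xrootd daemons. Below is a minimal watchdog sketch along those lines; the service name, the probe port (1094, the usual xrootd port) and the log file path are assumptions for illustration, not taken from the slides.

```python
#!/usr/bin/env python
# Minimal xrootd watchdog sketch: probe the local xrootd port and restart
# the service if it does not answer. Intended to be run from cron, e.g.
#   */10 * * * * root /usr/local/sbin/xrootd_watchdog.py
# Service name, port and log path are assumptions for illustration.

import socket
import subprocess
import time

XROOTD_PORT = 1094                        # usual xrootd port (assumed)
SERVICE = "xrootd"                        # init script name (assumed)
LOGFILE = "/var/log/xrootd_watchdog.log"  # hypothetical log location


def xrootd_answers(host="localhost", port=XROOTD_PORT, timeout=10):
    """Return True if something accepts a TCP connection on the xrootd port."""
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except socket.error:
        return False


def restart_service(name):
    """Restart the daemon through its init script."""
    return subprocess.call(["/sbin/service", name, "restart"])


if __name__ == "__main__":
    if not xrootd_answers():
        status = restart_service(SERVICE)
        log = open(LOGFILE, "a")
        log.write("%s xrootd not answering, restart exit code %d\n"
                  % (time.strftime("%Y-%m-%d %H:%M:%S"), status))
        log.close()
```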

Slide 13/29: Native xrootd
Easy to set up
How to deploy?
 Manually?
 RPM is OK, but one has to understand the dependencies on OS, architecture and other software in order to share RPMs
 RPM requires installation under the root account, whereas installation under a dedicated "xrootd" account is usually preferred
 Can be deployed using Quattor if RPMs are used
Storage under native xrootd is not accounted by EGEE/LCG
Seems rock solid, but not enough experience yet
 For example: how data is spread over the 4 filesystems (see the sketch below)
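A first look at how data spreads over the filesystems of one server can be a simple per-partition usage report. The sketch below assumes the four xrootd data partitions are mounted under paths like /data1 ... /data4; the mount points are hypothetical and should be adapted to the local layout.

```python
#!/usr/bin/env python
# Report used/free space on the xrootd data partitions of one server.
# Mount points are hypothetical; adapt them to the local layout.

import os

DATA_PARTITIONS = ["/data1", "/data2", "/data3", "/data4"]

for mount in DATA_PARTITIONS:
    try:
        stats = os.statvfs(mount)
    except OSError:
        print("%-8s not mounted or not reachable" % mount)
        continue
    total = float(stats.f_blocks * stats.f_frsize)   # total size in bytes
    free = float(stats.f_bavail * stats.f_frsize)    # space available to users
    used = total - free
    pct = 100.0 * used / total if total else 0.0
    print("%-8s total %8.1f GB  used %8.1f GB (%4.1f%%)"
          % (mount, total / 1e9, used / 1e9, pct))
```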

Slide 14/29: CREAM-CE
Relatively easy to set up
 YUM install
 Had to extract the BLAH server from the RPMs to install it on the batch manager (which is a separate machine)
 Had to set up APEL publication manually
Still new
 A gLite update broke the CREAM-CE in January 2009; a bug in the gLexec configuration was discovered
 Need to master Tomcat? (security, management, debugging)
Not yet tested by SAM
Same job profile between LCG-CE and CREAM-CE

Slide 15/29: AliEn Torrent
Successfully tested, but adjustments were needed:
 Port 6881, used originally, is filtered for outgoing connections on most IN2P3 sites (a legacy of old anti-P2P measures)
 Our UTM firewall has a policy blocking it; an exception had to be added (a quick connectivity check is sketched below)
May raise questions and concerns
 A poll of French sites about their feelings on the usage of this tool resulted in comments that still have to be summarized (mostly about efficiency and use of network resources, not about security)
What about distributing software for users this way?
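A quick way to see whether outgoing connections on the torrent port are filtered is to try opening a TCP connection from a worker node to an external host. This is a sketch only: the target host name is a placeholder, and port 6881 is simply the port mentioned on the slide.

```python
#!/usr/bin/env python
# Check whether outgoing TCP connections on the AliEn Torrent port (6881)
# are allowed from this node. The target host is a placeholder; point it
# at the actual seeder/tracker used by AliEn.

import socket
import sys

TARGET_HOST = "alitorrent.example.cern.ch"   # hypothetical target host
TARGET_PORT = 6881                           # port mentioned on the slide

try:
    sock = socket.create_connection((TARGET_HOST, TARGET_PORT), timeout=10)
    sock.close()
    print("outgoing connection to %s:%d OK" % (TARGET_HOST, TARGET_PORT))
    sys.exit(0)
except socket.error as err:
    print("outgoing connection to %s:%d blocked or failed: %s"
          % (TARGET_HOST, TARGET_PORT, err))
    sys.exit(1)
```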

Slide 16/29: Building quality (outline)
 Infrastructure
 Resilience/redundancy
 Incident detection
 Monitoring
 Security
 Change management

Slide 17/29: Infrastructure
Room: ~60 m2
Air conditioning
 2 units; 1 can keep the room cool enough, depending on the outside temperature
Power
 2 different 50 kW lines, each with a UPS, used at 50%
 Almost all hardware except the worker nodes has dual power supplies, one on each line
 A specific device is added for boxes with only one power supply
 Worker nodes have only one power supply and are balanced across the 2 lines

Slide 18/29: Resilience and redundancy
In case of problems, either have the service survive (resilience) or have a replacement ready (redundancy)
Resilience
 Dual power supplies on different lines help a machine stay powered if one line fails, but for this each line must be able to power all the hardware on which vital services rely
 VMware vMotion: if one of the VM servers goes down, the virtual machines can be moved to the other (here too, capacity matters)
Redundancy
 Have a second machine ready, online or not

Slide 19/29: Incident detection
Better to know before things really break
Infrastructure
 Temperature monitoring with alarms (mail and automatic phone call or pager)
 UPS monitoring with alarms
Hardware and OS
 Monitoring of the important Unix parameters: CPU, load, RAM
 Disk space
Services
 Daemons
 Specific probes (see next slide)

Slide 20/29: Incident detection (continued)
Nagios-specific probes for the grid services:
 CE: APEL accounting synchronisation
 CE: inactive jobs (CPU time not increasing over 30 minutes)
 CREAM-CE: Tomcat daemon
 DPM-xrootd disk nodes: daemons (GPFS, gridftp, xrootd)
 DPM-xrootd head node: daemons (DPM, DPNS, MySQL, xrootd)
 Xrootd: daemons listening
 Worker nodes: local disk switching to read-only mode, PBS Mom
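As an illustration of what one of these probes can look like, here is a minimal sketch of a Nagios-style check that a daemon is listening on its TCP port (for instance xrootd). It only relies on the standard Nagios plugin exit codes (0 OK, 2 CRITICAL); the default host, port and service name are assumptions, not the probes actually used at the site.

```python
#!/usr/bin/env python
# Minimal Nagios-style probe: check that a daemon listens on a TCP port.
# Standard Nagios plugin exit codes: 0 OK, 2 CRITICAL (3 would be UNKNOWN).
# Hypothetical usage: check_daemon_listening.py --host se01 --port 1094 --name xrootd

import argparse
import socket
import sys

OK, CRITICAL = 0, 2

parser = argparse.ArgumentParser(description="Check that a daemon listens on a TCP port")
parser.add_argument("--host", default="localhost")
parser.add_argument("--port", type=int, default=1094)   # usual xrootd port (assumed)
parser.add_argument("--name", default="xrootd")
args = parser.parse_args()

try:
    sock = socket.create_connection((args.host, args.port), timeout=10)
    sock.close()
    print("%s OK - listening on %s:%d" % (args.name, args.host, args.port))
    sys.exit(OK)
except socket.error as err:
    print("%s CRITICAL - no answer on %s:%d (%s)"
          % (args.name, args.host, args.port, err))
    sys.exit(CRITICAL)
```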

Slide 21/29: Example: Nagios for the CREAM-CE (screenshot of the Nagios checks)

Slide 22/29: Automatic actions
Early detection of incidents makes it possible to take action, automatically in some cases:
 Temperature crossing a certain threshold => halt part of the worker nodes; ultimate protection can be enforced by cutting all power if a (higher) threshold is reached (see the sketch below)
 Same if the power is down and the UPS batteries run low
 Need to distinguish between vital and less vital services
 Restart daemons that have crashed or are stuck (DPM)
 Switch to a spare server (heartbeat)
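The temperature-driven action can be a simple script run periodically on the monitoring host. The sketch below assumes the room temperature is available from a probe command and that worker nodes can be halted over SSH; the probe command, thresholds and node list are illustrative assumptions, not values from the slides.

```python
#!/usr/bin/env python
# Sketch of a temperature-driven protection script, run periodically from cron.
# Thresholds, the temperature source and the worker-node list are assumptions.

import subprocess

WARN_THRESHOLD = 28.0       # degrees C: start halting part of the worker nodes
CRITICAL_THRESHOLD = 35.0   # degrees C: halt all worker nodes (last resort)
WORKER_NODES = ["wn01", "wn02", "wn03"]   # hypothetical host names


def room_temperature():
    """Read the room temperature from the environmental probe (hypothetical CLI)."""
    out = subprocess.check_output(["/usr/local/bin/read_room_temp"],
                                  universal_newlines=True)
    return float(out.strip())


def halt_node(host):
    """Halt a worker node over SSH (assumes passwordless root SSH from this host)."""
    subprocess.call(["ssh", "root@%s" % host, "shutdown", "-h", "now"])


if __name__ == "__main__":
    temp = room_temperature()
    if temp >= CRITICAL_THRESHOLD:
        # Last resort: halt all worker nodes; cutting power to the racks would
        # be handled by the PDU/UPS hardware, not by this script.
        for node in WORKER_NODES:
            halt_node(node)
    elif temp >= WARN_THRESHOLD:
        # Shed load: halt half of the worker nodes to reduce heat production.
        for node in WORKER_NODES[: len(WORKER_NODES) // 2]:
            halt_node(node)
```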

Slide 23/29: Monitoring the site
Apart from incident detection, which raises an alarm, the site manager can also keep an eye on the services:
 MonALISA monitoring
 Nagios graphs
 Available logs and tools
 Network usage and traffic (CC netstat, Extra)

Slide 24/29: Change management
Changes break things, but cannot be avoided
 gLite updates, OS updates, ...
One should always be able to backtrack:
 Virtual machines: revert to a snapshot
 Quattor: redeploy previous profiles
 Backup or disk image
We have to keep track of changes
 Logbook: elog or something similar

Slide 25/29: Communication with the VO
The site manager needs to know:
 The expected behaviour of the services he runs
 Whether the site misbehaves from the VO point of view
 The requirements of the VO and how they evolve over time
The site manager has to notify the VO:
 In case of downtime or service disruption
 If he thinks something is wrong

Slide 26/29: Open questions and issues
Tracing jobs
 Understanding AliEn job numbers, translating them into job numbers in the local batch system, looking at the output
LCG-CE to CREAM-CE transition
 Keep one cluster (how to do it cleanly) or split the cluster?
Job efficiency: understanding errors
Blocked jobs: to be understood
Xrootd: how to provide redundancy?
Accounting in MonALISA: move to HEP-SPEC06
What about support for end users?

Slide 27/29: Perspectives for SUBATECH
 Test SL5 on the worker nodes
 Upgrade the network connection to RENATER from 1 Gbit/s to 10 Gbit/s
 Build a test analysis facility based on PROOF
 Decommission the DPM-xrootd storage

Slide 28/29: Conclusion
The quality of service of the grid site is strongly linked to the quality of the infrastructure (room, network, monitoring) and to the overall site policy in this matter
Supporting only one LHC VO has probably permitted deeper involvement in debugging problems and testing new solutions for ALICE

Slide 29/29: References
 SUBATECH :
 IN2P3 :
 LCG-FR :
 Nagios :
 Elog :
 Quattor :