CC and LQCD

Presentation transcript:

CC and LQCD
T. Kachelhoffer - GDR LQCD

CC presentation - global
Staff: 69 people at Lyon.
Activities:
Provide computing, storage and data transfer infrastructure for experiments and users.
Provide shared (mutualised) services for experiments and IN2P3 laboratories, such as: databases, web, mail, network, software management, video conferencing, webcast, backup, EDMS, CVS, …
Provide grid infrastructure and access to local resources.
Working with SLAC, Fermilab, CERN, BNL, FZK, CNAF, RAL.
Evolution driven mainly by the LHC experiments.

CC presentation - users
HEP and nuclear experiments: D0, BaBar, PHENIX, …
LHC experiments: ATLAS, CMS, ALICE, LHCb.
Astroparticle and neutrino: Auger, NEMO, Antares, Virgo, SNovae.
Bio: ~8 groups, mainly around Lyon.
IN2P3/DAPNIA laboratories.
~90 active groups.
~3000 local users (800 active ones), of which ~500 (200) are non IN2P3/DAPNIA, plus grid users.

CC presentation - batch (CPU)
Distributed architecture.
Local batch system: BQS, sharing all CPUs and users across 2 farms: anastasie & pistoo.
Running 3000 jobs in parallel on 2000 CPUs: 2.2 MSI2k.
Coming next: 479 1U boxes, dual CPU with quad cores and 16 GB of memory (2 GB/core), in June and September. This will add ~4.5 MSI2k to the farm.
Moving to a 64-bit architecture (SL4-64).
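As a rough cross-check of the figures above, the announced extension works out to roughly the per-core power computed in the following sketch (pure arithmetic on the slide's numbers, not an official benchmark):

    # Back-of-the-envelope check of the farm extension announced above.
    # All input numbers come from the slide; the per-core figure is only an estimate.
    boxes = 479                 # 1U boxes to be added in June and September
    cores_per_box = 2 * 4       # dual CPU, quad core
    added_cores = boxes * cores_per_box          # 3832 cores
    added_power_msi2k = 4.5                      # ~4.5 MSI2k announced

    per_core_ksi2k = added_power_msi2k * 1000 / added_cores
    print(f"added cores: {added_cores}")
    print(f"~{per_core_ksi2k:.2f} kSI2k per core")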

CC presentation - batch 2007

CC presentation - storage
Distributed architecture with Ethernet & Fibre Channel technologies.
3 main systems:
AFS: 5 TB.
GPFS: 150 TB.
MSS: 2100 TB on tape, 600 TB of disk space.
Coming this year:
1200 TB of disk space for the MSS (dCache, xrootd, RFIO).
300 TB of disk space for GPFS.

CC presentation - storage AFS
Used for small files and small volumes.
Used for home directories and as group shared space.
Used for system and software installation.
Specific AFS ACLs.
Backup of home and throng (group) directories.
Access through standard Unix protocols.
Currently 5 TB.
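To make the AFS-specific ACLs above concrete, here is a minimal sketch driving the standard OpenAFS fs command from Python; the directory and user names are hypothetical placeholders, not real CC-IN2P3 paths:

    # Minimal sketch of inspecting and changing AFS ACLs with the OpenAFS "fs" command.
    # The directory and user below are placeholders for illustration only.
    import subprocess

    group_dir = "/afs/example.org/group/mygroup"   # hypothetical throng (group) directory
    user = "someuser"                              # hypothetical AFS user

    # Show the current ACL on the directory.
    subprocess.run(["fs", "listacl", group_dir], check=True)

    # Grant read and lookup rights to the user (AFS rights apply per directory).
    subprocess.run(["fs", "setacl", group_dir, user, "rl"], check=True)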

CC presentation - storage GPFS
Used for larger volumes and file sizes.
Standard Unix ACLs.
Non-permanent working space.
No backup.
Access through the NFS protocol.
Currently 150 TB.
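Because this space is non-permanent and not backed up, a common habit is to check the free space before writing and to copy final results to the MSS afterwards; a small sketch, assuming a hypothetical mount point:

    # Sketch: inspect free space on the GPFS working area before writing there.
    # The mount point below is an assumption for illustration only.
    import os

    gpfs_scratch = "/work/mygroup"      # hypothetical NFS-mounted GPFS work space
    st = os.statvfs(gpfs_scratch)
    free_tb = st.f_bavail * st.f_frsize / 1e12
    print(f"free space on {gpfs_scratch}: {free_tb:.1f} TB")
    # Reminder: this space is not backed up; copy final results to the MSS.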

CC presentation - storage MSS
Used for larger volumes and file sizes.
Handled by HPSS.
2-level system: disk & tape (500 GB - 1 TB per tape).
Access through:
RFIO: 50 TB of disk.
dCache: 370 TB of disk.
xrootd: 180 TB of disk (shared cache disk).
SRB: 10 TB of disk (data transfer & data management).
Currently 2100 TB on tape.
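Each of the access paths listed above comes with its own copy client; a hedged sketch using the standard RFIO, dCache and xrootd clients, where all host names and file paths are placeholders rather than real CC-IN2P3 namespaces:

    # Sketch of copying a file out of the MSS through three of the access paths above.
    # rfcp, dccp and xrdcp are the usual clients; hosts and paths are placeholders.
    import subprocess

    # RFIO access to the HPSS-backed storage
    subprocess.run(["rfcp", "/hpss/example.org/mygroup/file.dat", "file.dat"], check=True)

    # dCache access via the dcap protocol
    subprocess.run(["dccp", "dcap://dcache.example.org/pnfs/mygroup/file.dat", "file.dat"], check=True)

    # xrootd access (shared cache disk)
    subprocess.run(["xrdcp", "root://xrootd.example.org//mygroup/file.dat", "file.dat"], check=True)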

CC presentation - data management
SRB: used for data management and data transfer by BaBar, Auger, SNovae, ILC, Bio, Antares, Edelweiss, …
Grid tools: SRM, LFC, FTS, … used globally or partially by some of the LHC experiments.
iRODS: a major evolution of SRB, distributed as free software. Currently at version 0.9.
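As an illustration of the SRB usage mentioned above, a minimal sketch of uploading and retrieving a file with the standard SRB Scommands; the collection path is hypothetical:

    # Sketch of basic data movement with the SRB Scommands.
    # The collection and file names are placeholders for illustration.
    import subprocess

    subprocess.run(["Sinit"], check=True)                                          # open an SRB session
    subprocess.run(["Sput", "results.dat", "/home/mygroup.lqcd/"], check=True)     # upload to a collection
    subprocess.run(["Sget", "/home/mygroup.lqcd/results.dat", "."], check=True)    # download it back
    subprocess.run(["Sexit"], check=True)                                          # close the session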

CC presentation - grid
The full EGEE and LCG grid middleware is installed and available at CC for users: UI, computing element, storage element, SRM, VO boxes, FTS, …
The CIC portal is developed, installed and managed at CC.
CC is the ROC (Regional Operations Centre) for France.
VO hosting (biomed) at CC.
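To show what using this middleware looks like from the UI, a hedged sketch that writes a minimal JDL job description and submits it; the exact submission client (glite-wms-job-submit here) depends on the middleware version installed locally, so treat it as an assumption:

    # Sketch: write a minimal JDL file and submit it from an EGEE/LCG user interface.
    # The JDL content follows the usual conventions of that era; the submission
    # command name depends on the middleware release installed on the UI.
    import subprocess

    jdl = """\
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
    """

    with open("hello.jdl", "w") as f:
        f.write(jdl)

    subprocess.run(["glite-wms-job-submit", "-a", "hello.jdl"], check=True)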

CC & LQCD
Main contact with Grenoble people (J. Carbonnel).
Computing:
Problem with the memory requests. Could it run on the pistoo farm with MPI or PVM? Partially solved with new hardware.
Data management, data transfer and data access:
Using HPSS locally, with RFIO for data access.
Using SRB for data management and data transfer between Lyon, Grenoble and Orsay (LPTH)?
Grid technologies for data handling by the ILDG VO: 2 TB of dCache disk space (which could be increased) with an HPSS interface, accessed through SRM.
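For the ILDG dCache space accessed through SRM, a transfer could look like the following sketch with the dCache srmcp client; the SRM endpoint and file path are placeholders only:

    # Sketch: copy a gauge configuration out of the SRM-fronted dCache space above.
    # The SRM endpoint, port and path are placeholders, not a real ILDG location.
    import subprocess

    src = "srm://srm.example.org:8443/pnfs/example.org/data/ildg/config_0001.lime"
    dst = "file:///tmp/config_0001.lime"
    subprocess.run(["srmcp", src, dst], check=True)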

CC & LQCD
Questions?