Christof Hanke, HEPIX Spring Meeting 2008, CERN


Site report HIP / CSC
HIP: Helsinki Institute of Physics
CSC: Scientific Computing Ltd. (Technology Partner)
Storage Elements (dCache) for ALICE and CMS

Site report HIP / CSC: What's happening here?
ALICE: part of the NDGF T1 centre
CMS: T2 centre
Storage Elements (dCache)
Computing Elements (ARC, gLite)

Site report HIP / CSC: Hardware installation
Disk servers (HP blades): ¤ CMS: 5 ¤ ALICE: 2 ¤ cold spare: 1
Disk capacity: ¤ CMS: ~100 TB ¤ ALICE: ~70 TB
Tape: ¤ ALICE: ~64 TB in the first year

Site report HIP / CSC: Server installation
HP blades with own chassis:
¤ 64-bit, 2x 4 cores, 2.33 GHz
¤ 12 GB RAM
¤ mirrored system disk
¤ 6x 1 Gb Ethernet
¤ 2x 4 Gb FC (QLogic)
¤ Scientific Linux 5.x

Site report HIP / CSC: Server installation
CMS:
¤ 1 Admin-Node
¤ 1 SRM-Node + xrootd-Door
¤ 3 Pool-Nodes (+ gridftp-Door)
ALICE:
¤ 2 Pool-Nodes
¤ Tape-Server (CSC-internal, SAM-FS)

Site report HIP / CSC: Disk installation
Hitachi mid-range RAID system
Mostly SATA, some FC
Connected via a multipathed SAN

Site report HIP / CSC: Tape installation
Use of the existing tape server at CSC (SAM-FS).
Connectors to dCache written by CSC (but still to be implemented).
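dCache can delegate tape copies to a site-provided copy script that it invokes with a put/get verb, the file's pnfs ID, and a local path. A minimal hypothetical sketch of what such a SAM-FS connector could look like; the staging directory, script name, and exact call convention are all assumptions, since the CSC connector itself was not yet implemented:

```python
import shutil
import sys
from pathlib import Path

# Hypothetical SAM-FS-backed archive area; the real mount point is site-specific.
# SAM-FS migrates files placed under this directory to tape in the background.
TAPE_ROOT = Path("/samfs/dcache")

def hsm_copy(action: str, pnfs_id: str, local_path: str,
             tape_root: Path = TAPE_ROOT) -> Path:
    """Copy a file to (put) or from (get) the tape-backed filesystem.

    Returns the path of the file on the tape-backed side.
    """
    tape_path = tape_root / pnfs_id
    if action == "put":
        shutil.copyfile(local_path, tape_path)
    elif action == "get":
        shutil.copyfile(tape_path, local_path)
    else:
        raise ValueError(f"unknown action: {action}")
    return tape_path

if __name__ == "__main__" and len(sys.argv) == 4:
    # Hypothetical invocation by dCache: hsmcp.py put|get <pnfsId> <path>
    print(hsm_copy(sys.argv[1], sys.argv[2], sys.argv[3]))
```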

Site report HIP / CSC: Network
dCache servers: private and public interfaces (1 Gb/s)
Public: 1 Gb/s FUNET uplink (will be connected to the OPN "after summer")
Private: shared with the CE "sepeli"; pools are bonded to 4 Gb/s each; admin/SRM: 1 Gb/s
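The bonded pool interfaces can be verified on the nodes themselves via the kernel's bonding report. A small sketch that parses that report text; the device name `bond0` in the usage comment is an assumption, the field names are the standard Linux bonding driver output:

```python
def bond_summary(report: str) -> dict:
    """Parse a /proc/net/bonding/<dev> report into a summary:
    bonding mode, number of slave interfaces, and slaves that are not up."""
    mode = None
    slaves = []
    current = None
    for line in report.splitlines():
        line = line.strip()
        if line.startswith("Bonding Mode:"):
            mode = line.split(":", 1)[1].strip()
        elif line.startswith("Slave Interface:"):
            current = line.split(":", 1)[1].strip()
            slaves.append({"name": current, "up": False})
        elif line.startswith("MII Status:") and current is not None:
            # Only MII lines after a "Slave Interface:" line refer to a slave;
            # the first MII line in the report refers to the bond itself.
            slaves[-1]["up"] = line.split(":", 1)[1].strip() == "up"
    return {
        "mode": mode,
        "n_slaves": len(slaves),
        "down": [s["name"] for s in slaves if not s["up"]],
    }

# Usage on a pool node (the bond device name may differ):
# with open("/proc/net/bonding/bond0") as f:
#     print(bond_summary(f.read()))
```

With four 1 Gb slaves all up, the summary would report `n_slaves == 4` and an empty `down` list, matching the 4 Gb/s aggregate quoted above.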

Site report HIP / CSC: dCache setup
Admin node: pnfs, PostgreSQL, admin door, et al.
SRM node: SRM, PostgreSQL, xrootd
Pool nodes: pool, gridftp, 24 TB disk each

Site report HIP / CSC: Monitoring
Accessible with a Grid certificate via https://duchess.csc.fi:
¤ dCache pages
¤ srmwatch
¤ ntop
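Since the monitoring pages require a client (Grid) certificate, fetching them non-interactively means loading the user certificate into the TLS handshake. A sketch using only the Python standard library; the `~/.globus` paths in the usage comment are the conventional Grid user-certificate location, not something the slides specify:

```python
import ssl
import urllib.request

def grid_opener(cert_file: str, key_file: str) -> urllib.request.OpenerDirector:
    """Build a urllib opener that authenticates with a client certificate,
    as certificate-protected monitoring pages require."""
    ctx = ssl.create_default_context()
    # Present the user certificate to the server during the TLS handshake.
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    return urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))

# Hypothetical usage:
# opener = grid_opener("/home/user/.globus/usercert.pem",
#                      "/home/user/.globus/userkey.pem")
# print(opener.open("https://duchess.csc.fi/").read()[:200])
```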

Site report HIP / CSC: Monitoring, dCache pages (www.dcache.org)

Site report HIP / CSC: Monitoring, srmwatch (search the web for "srmwatch dcache"); not yet fully functional.

Site report HIP / CSC: Monitoring, ntop (www.ntop.org): dCache traffic, here the gridftp WAN interface of a pool node; still some work to be done here.

Site report HIP / CSC: Future plans
¤ attach tape to the ALICE pools
¤ work in progress: srmwatch, ntop/dCache, Nagios monitoring
¤ set up workflows for dCache (removing pools, consistency checks, functionality checks)
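At its core, the planned pool consistency check is a set comparison between the file IDs the namespace (pnfs) knows about and the files actually present on the pool disks. A minimal hypothetical sketch of that comparison; how the two ID sets are collected (namespace dump, pool directory listing) is site-specific and not shown:

```python
def consistency_report(namespace_ids: set, pool_ids: set) -> dict:
    """Compare namespace entries against files present on the pools.

    - 'orphans': on a pool but unknown to the namespace (wasted space)
    - 'lost':    in the namespace but on no pool (data loss, unless on tape)
    """
    return {
        "orphans": sorted(pool_ids - namespace_ids),
        "lost": sorted(namespace_ids - pool_ids),
        "consistent": namespace_ids == pool_ids,
    }

# Example with made-up pnfs IDs:
# consistency_report({"0001", "0002"}, {"0002", "0003"})
# reports "0003" as an orphan and "0001" as lost
```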

Site report HIP / CSC
Thank you for your attention. Comments? Questions?