Site report HIP / CSC
HIP: Helsinki Institute of Physics
CSC: Scientific Computing Ltd. (technology partner)
Storage Elements (dCache) for ALICE and CMS


Site report HIP / CSC: Storage Elements (dCache) for ALICE and CMS
Christof Hanke, HEPIX Spring Meeting 2008, CERN

What's happening here?
¤ ALICE: part of the NDGF Tier-1 centre
¤ CMS: Tier-2 centre
¤ Storage Elements (dCache)
¤ Computing Elements (ARC, gLite)

Hardware installation
Disk:
¤ CMS: ~100 TB
¤ ALICE: ~70 TB
Tape:
¤ ALICE: ~64 TB in the first year
Servers (HP blades):
¤ CMS: 5
¤ ALICE: 2
¤ cold spare: 1

Server installation
HP blades in their own chassis:
¤ 64-bit, 2x quad-core, 2.33 GHz
¤ 12 GB RAM
¤ mirrored system disk
¤ 6x 1 Gb Ethernet
¤ 2x 4 Gb FC (QLogic)
¤ Scientific Linux 5.x

Server installation
CMS:
¤ 1 admin node
¤ 1 SRM node + xrootd door
¤ 3 pool nodes (+ gridftp door)
ALICE:
¤ 2 pool nodes
¤ tape server (CSC-internal, SAM-FS)

Disk installation
¤ Hitachi mid-range RAID system
¤ mostly SATA, some FC
¤ connected via multipathed SAN
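On Scientific Linux 5, a multipathed SAN attachment like this is normally handled by device-mapper-multipath. A minimal /etc/multipath.conf sketch, with illustrative settings only (not CSC's actual configuration; device names are assumptions):

```
# /etc/multipath.conf -- illustrative sketch
defaults {
    user_friendly_names yes        # mpath0, mpath1, ... instead of raw WWIDs
    path_grouping_policy multibus  # spread I/O over all paths to each LUN
}
blacklist {
    devnode "^sda"                 # keep the local mirrored system disk out of multipath
}
```

Active paths can then be inspected with `multipath -ll`.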

Tape installation
¤ uses the existing tape server at CSC
¤ type: SAM-FS
¤ connectors to dCache written by CSC (but still to be implemented)
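dCache talks to a tape backend through an external HSM copy script that the pool invokes with `put`/`get` subcommands, a pnfsId, and a local file path. The CSC SAM-FS connector was not public; the sketch below only illustrates the calling convention, with a plain directory standing in for the SAM-FS mount point (all names are hypothetical):

```shell
# Hypothetical HSM copy-script sketch; a temp directory stands in for SAM-FS.
HSM_ROOT=$(mktemp -d)    # would be the SAM-FS filesystem mount point

hsm_put() {  # dCache calls: <script> put <pnfsId> <localFile> ...
    cp "$2" "$HSM_ROOT/$1"      # SAM-FS would then archive the copy to tape
}
hsm_get() {  # dCache calls: <script> get <pnfsId> <localFile> ...
    cp "$HSM_ROOT/$1" "$2"      # SAM-FS stages the file back from tape
}

# round-trip demo
echo "event data" > /tmp/pool_file
hsm_put 000100000000000000001060 /tmp/pool_file
hsm_get 000100000000000000001060 /tmp/restored
```

In a real connector the `cp` calls would be replaced by SAM-FS archive/stage operations and error handling for tape latency.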

Network
dCache servers have private and public interfaces (1 Gb/s).
¤ public: 1 Gb/s FUNET uplink (will be connected to the OPN “after summer”)
¤ private: shared with the CE “sepeli”; pools are bonded to 4 Gb/s each, admin/SRM: 1 Gb/s
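Bonding four 1 Gb links into a 4 Gb/s aggregate on Scientific Linux 5 would use the Linux bonding driver; an illustrative sketch, where the bonding mode, interface names, and address are assumptions rather than the site's actual settings:

```
# /etc/modprobe.conf -- load the bonding driver
alias bond0 bonding
options bond0 mode=802.3ad miimon=100   # LACP aggregation of the four links

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.11        # hypothetical private address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth2 (likewise for the other slaves)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

802.3ad requires matching LACP configuration on the switch; balance-alb would be a switch-independent alternative.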

dCache setup
¤ admin node: pnfs, PostgreSQL, admin door, et al.
¤ SRM node: SRM, PostgreSQL, xrootd
¤ pool nodes: pool, gridftp, 24 TB disk each
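In dCache 1.8-era installs, this split of services across nodes was driven by /opt/d-cache/etc/node_config on each host. A sketch for a pool node; the exact keys are version-dependent and the hostnames are hypothetical:

```
# /opt/d-cache/etc/node_config on a pool node -- illustrative only
NODE_TYPE=pool                    # run only pool (and door) services here
SERVER_ID=csc.fi                  # hypothetical site domain
ADMIN_NODE=dcache-admin.csc.fi    # hypothetical admin-node hostname
GRIDFTP=yes                       # also start a gridftp door on this node
```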

Monitoring
Accessible with a Grid certificate via:
¤ dCache pages
¤ srmwatch
¤ ntop

Monitoring - dCache pages

Monitoring - srmwatch
Search the web for “srmwatch dcache”; not yet fully functional.

Monitoring - ntop
dCache traffic, here gridftp on the WAN interface of a pool node; still some work to be done here.

Future plans
¤ attach tape to the ALICE pools
¤ work in progress:
  ¤ srmwatch
  ¤ ntop/dCache
  ¤ Nagios monitoring
¤ set up workflows for dCache (removing pools, consistency checks, functionality checks)
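A pool consistency check of the kind planned above boils down to comparing the pnfsIds present in a pool's data directory against a dump of the namespace. A minimal sketch under assumed paths and file names (the fake pnfsIds, directory layout, and dump format are illustrative, not dCache's actual tooling):

```shell
# Hypothetical consistency-check sketch: disk contents vs. namespace dump.
POOL_DATA=$(mktemp -d)   # stand-in for a pool's data directory
NS_DUMP=$(mktemp)        # stand-in for a pnfs namespace dump, one pnfsId per line

# fake content: 0002 exists only on disk (orphan), 0003 only in the namespace (lost)
touch "$POOL_DATA/0001" "$POOL_DATA/0002"
printf '0001\n0003\n' > "$NS_DUMP"

ls "$POOL_DATA" | sort > /tmp/on_disk
sort "$NS_DUMP" > /tmp/in_ns

comm -23 /tmp/on_disk /tmp/in_ns > /tmp/orphans   # on disk but not in namespace
comm -13 /tmp/on_disk /tmp/in_ns > /tmp/lost      # in namespace but not on disk
```

A production workflow would feed the “lost” list into restore-from-tape or replication, and the “orphans” into garbage collection.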

Thank you for your attention. Comments? Questions?