GridKa, May 2004, Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft: Installing dCache into an existing Storage environment at GridKa

Presentation transcript:

Installing dCache into an existing Storage environment at GridKa
Dr. Doris Ressmann
Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, D-76021 Karlsruhe, Germany
GridKa, May 2004. Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft.

Forschungszentrum Karlsruhe: Grid Computing Centre Karlsruhe (GridKa)

GridKa planned hardware resources (CPU in kSI95, disk, tape):
- CPU: 780 CPUs
- Disk: 160 TB
- Tape: 300 TB

Tivoli Storage Manager (TSM)
- TSM handles the tape library management.
- TSM was not developed for archive use: if a TSM archive session is interrupted, there is no control over what has already been archived.
- dCache (developed at DESY and FNAL) therefore creates a separate session for every file.
- Transparent access to the data.
- Allows transparent maintenance of the TSM backend.

dCache main components (diagram): compute nodes see the namespace through a mountpoint and request data via the head node; file transfers arrive through gridftp and srmcp; the head node directs each transfer to one of the pools, and the pools are backed by TSM with tapes. A transfer sketch follows below.
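As an illustration of how data enters this setup, a grid client could write a file through the SRM interface. Host name and paths here are hypothetical, and the file:////... source syntax is the one used by the srmcp client:

    srmcp file:////tmp/run0042.dat \
          srm://srm.gridka.de:8443/pnfs/gridka.de/data/run0042.dat

The head node then chooses a write pool for the incoming file, which is later flushed to TSM.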

PNFS, the Perfectly Normal File System (diagram): pnfs is a database for filenames and metadata only; the real data lives on the pools and on tape.
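Because the namespace is mounted like a file system, this metadata can be inspected in place. A small sketch, assuming a pnfs mountpoint and a hypothetical file name (the ".(id)(...)" magic files are a pnfs convention):

    ls -l /pnfs/gridka.de/data/myfile         # appears as an ordinary file entry
    cat "/pnfs/gridka.de/data/.(id)(myfile)"  # prints the internal pnfs ID, not the data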

dCache interface: dCache Access Protocol (dcap)
- Compute node: dccp opens a connection to the head node, which returns an available pool node; the file is then copied directly into that pool node.
- Library calls: dc_open(...); dc_read(...); (see the sketch below)
- Pool: data is precious (cannot be deleted) until it has been flushed into TSM; after that it is cached (can be deleted from the pool).
- Compute node: on dccp, if the data is no longer in a pool, it is fetched back from TSM.
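The slides only name the calls, so the following is a minimal sketch of a dcap client, assuming libdcap with its POSIX-like wrappers (dc_open, dc_read, dc_close) is installed; the pnfs path is hypothetical.

    #include <fcntl.h>   /* O_RDONLY */
    #include <stdio.h>
    #include <unistd.h>  /* ssize_t */
    #include <dcap.h>    /* dc_open, dc_read, dc_close */

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* dc_open() contacts the head node, which selects a pool holding
           the file; a file that only lives on tape is staged from TSM first. */
        int fd = dc_open("/pnfs/gridka.de/data/myfile", O_RDONLY);
        if (fd < 0) {
            perror("dc_open");
            return 1;
        }

        /* dc_read() then streams the data directly from the pool node. */
        while ((n = dc_read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        dc_close(fd);
        return 0;
    }

Compiled with something like cc dcap_read.c -ldcap. The command-line equivalent is dccp /pnfs/gridka.de/data/myfile /tmp/myfile.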

Figure: Tivoli Storage Manager (TSM).

Figure: dCache pool node.

Figure: Tivoli Storage Manager (TSM) after dCache tuning.

Test environment: problematic hardware
RAID controller (3ware) with 1.6 TB:
- always in degraded mode
- rebuilding at 70 kB/s or 10 MB/s
- lost data

TSM properties
- TSM disk cache overflow
- allocation of tape drives (max. 2)
- adapt server properties to the specific dCache requirements: management class (retention time), copy groups (see the sketch after this list)
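On the TSM side these settings live in the server's policy definitions. A sketch in TSM's administrative command line, with hypothetical domain, policy set, node, and storage pool names (retver sets the archive retention time, maxnummp caps the tape drives one client node may mount at once):

    /* management class and archive copy group with its retention time */
    define mgmtclass gridka standard dcache_mc
    define copygroup gridka standard dcache_mc type=archive destination=tapepool retver=nolimit
    assign defmgmtclass gridka standard dcache_mc
    activate policyset gridka standard
    /* limit the dCache pool node to at most two tape drives at a time */
    update node dcache_pool1 maxnummp=2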

Conclusion and future work
- more reliable hardware, especially for the write pools
- several TSM servers
- SRM and LCG connection
- pools on a parallel file system (GPFS)
