Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft
gridKa
Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, Karlsruhe, Germany
Doris Ressmann, Silke Halstenberg, Jos van Wezel

Storage Workshop July 2007, Dr. Doris Ressmann

pps-dCache 1.8 headnodes

pps-se-fzk.gridka.de (pps-dcacheh1): 2x 2.8 GHz, 4 GB mem, 134 GB disk, SL4, 64-bit kernel
– LCG info system (GIIS), gLite-SE
– dCacheDomain, dCap door, dirDomain, utilityDomain, lmDomain
– admin-door, gPlazma, httpdDomain, StatisticDomain, xrootdDomain
– pnfsServer, PnfsManager, PNFS and companion DB

pps-srm-fzk.gridka.de: 2x 2.8 GHz, 4 GB mem, 134 GB disk, SL4, 64-bit kernel
– srmDomain, InfoProvider
– SRM postgres DB

Storage Workshop July 2007, Dr. Doris Ressmann

Storage Pools

fileserver with 5.2 TB for pools
– 1 pool, 1.7 TB, tape-connected, for all VOs
– 1 pool, 1.7 TB, for tape tests (only used by GridKa staff)
– 1 pool, 1.7 TB, disk-only

fileserver with 5.2 TB for pools
– 1 pool, 2.6 TB, tape-connected (did not come up after the update)
– 1 pool, 2.6 TB, disk-only

3.2 GHz / 2 GB mem / 64-bit kernel, SL4
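Not on the slide: a minimal sketch of how pools of this kind are typically declared in the PoolManager setup and collected into the pool groups that the link definitions on the next slide refer to. The pool names and the disk-only group name are invented for illustration; only tape-pools appears on the following slide.

# PoolManager setup sketch (illustrative names)
psu create pool fs01_tape_1
psu create pool fs01_disk_1
psu create pgroup tape-pools
psu addto pgroup tape-pools fs01_tape_1
psu create pgroup disk-only-pools
psu addto pgroup disk-only-pools fs01_disk_1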

Storage Workshop July 2007, Dr. Doris Ressmann

VO support

ops, dteam, alice, atlas, cms, lhcb; no dedicated pools for a VO
3 link groups (Test-link, All-tape-link, All-disk-only-link), e.g.:
– psu create link all-tape-link any-protocol all-store world-net
– psu set link all-tape-link -readpref=20 -writepref=20 -cachepref=20 -p2ppref=-1
– psu add link all-tape-link tape-pools
– psu set linkGroup attribute all-tape-link-group HSM=TSM
– psu addto linkGroup all-tape-link-group all-tape-link
Is this enough for all SRM 2.2 cases (e.g. custodial, nearline, …) and for the SpaceManager configuration? See the sketch below.
all-store covers SRM 1.1 directory tags!!!
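A partial answer, not taken from the slide: in dCache 1.8 the SRM SpaceManager selects link groups through per-group allowance flags for the SRM 2.2 retention policies and access latencies. A minimal sketch, reusing the all-tape-link-group name from above; the flag values are illustrative, not GridKa's production settings:

# link-group flags evaluated by the SpaceManager (values are assumptions)
psu set linkGroup custodialAllowed all-tape-link-group true
psu set linkGroup nearlineAllowed all-tape-link-group true
psu set linkGroup onlineAllowed all-tape-link-group false
psu set linkGroup outputAllowed all-tape-link-group true
psu set linkGroup replicaAllowed all-tape-link-group false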

Storage Workshop July 2007, Dr. Doris Ressmann

To do

– gPlazma authentication with VOMS: different roles, e.g. for space reservation
– more tests of the new tape connection
– get ATLAS tests going
– check LHCb requirements
– …

Storage Workshop July 2007, Dr. Doris Ressmann

Info Provider

In dCache 1.7, Pool Manager:
– psu create pgroup ops
– psu addto pgroup ops f e_ops
GIIS and dCache SRM on the same host
infoProviderStaticFile=/opt/lcg/var/gip/ldif/static-file-SE.ldif
– link to /opt/d-cache/jobs/infoDynamicSE-provider-dcache
– same for the plugin
In dCache 1.8 a pool belongs to one and only one pgroup!!!!
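How the "link to" steps might look in practice, assuming the gLite-era GIP layout; the /opt/lcg/var/gip/provider and /opt/lcg/var/gip/plugin directories and the infoDynamicSE-plugin-dcache file name are assumptions, not taken from the slide:

# make the dCache dynamic info provider visible to the generic info provider (GIP)
ln -s /opt/d-cache/jobs/infoDynamicSE-provider-dcache /opt/lcg/var/gip/provider/
# "same for the plugin"
ln -s /opt/d-cache/jobs/infoDynamicSE-plugin-dcache /opt/lcg/var/gip/plugin/

The infoProviderStaticFile setting above presumably points the dynamic provider at the static LDIF file whose GlueSE entries it updates.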

Storage Workshop July 2007, Dr. Doris Ressmann

open issues

– accounting / info system: separation of SE and SRM
– Is it still possible to have pools shared by different VOs in 1.8?
– Flavia's DN (emailAddress=...):
  – without gPlazma it is OK, it is used as (E=...)
  – with gPlazma the requests come with (emailAddress=...)
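For illustration only, with a made-up subject: the same certificate attribute can be printed with the OpenSSL short name E or the long name emailAddress, which is the mismatch noted above:

without gPlazma: /O=GermanGrid/OU=FZK/CN=Some User/E=some.user@example.org
with gPlazma:    /O=GermanGrid/OU=FZK/CN=Some User/emailAddress=some.user@example.org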
