STEINBUCH CENTRE FOR COMPUTING - SCC
KIT – University of the State of Baden-Württemberg and National Laboratory of the Helmholtz Association

GridKa Tier1 Report
Christopher Jung – GridKa Cloud Meeting

CPU usage

For April 2010, ATLAS groups (trailing digits of several values are missing and shown as "…"):

    group   #jobs      walltime [h]   CPU time [h]   CPU time / walltime   average wait time [h]
    prd     542,569    1,825,189      1,380,…        …                     …
    sgm     2,…        …              …              …                     …
    plt     319,701    187,583        144,…          …                     …
    usr     52,005     157,001        113,…          …                     …
    …d      18,541     17,199         8,…            …                     …
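
The "CPU time / walltime" column is simply the ratio of the two accounting sums per group. A minimal sketch of that calculation follows; the CPU-time figures used here are hypothetical round numbers, since the values on the slide are truncated:

    # Minimal sketch: derive the "CPU time / walltime" efficiency per ATLAS group.
    # The CPU-time values below are placeholders, not the truncated slide numbers.
    accounting = {
        # group: (jobs, walltime_hours, cpu_time_hours)
        "prd": (542_569, 1_825_189, 1_380_000),
        "usr": (52_005, 157_001, 113_000),
    }

    for group, (jobs, walltime, cpu_time) in accounting.items():
        efficiency = cpu_time / walltime  # fraction of walltime actually spent on CPU
        print(f"{group}: {jobs:,} jobs, CPU time / walltime = {efficiency:.2f}")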

Space tokens

ATLAS space tokens as of 5th of May, 08:30 (trailing digits of several values are missing and shown as "…"):

    space token    size [GB]   used [GB]   free [GB]   usage
    DATADISK       600,000     294,571     305,…       …%
    DATATAPE       30,…        …           …           …%
    GROUPDISK      10,…        …           …           …%
    HOTDISK        1,…         …           …           …%
    MCDISK         600,000     539,680     60,…        …%
    MCTAPE         20,…        …           …,839       0.8%
    SCRATCHDISK    80,000      58,462      21,…        …%
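
The "usage" column is just used/size. A small sketch of how one might recompute it and flag nearly full tokens; the 90% threshold is an assumption, not a policy stated on the slide:

    # Sketch: recompute space-token usage and warn when a token is nearly full.
    # Size/used values are in GB; the warning threshold is an assumed example.
    tokens = {
        # token: (size_gb, used_gb)
        "DATADISK": (600_000, 294_571),
        "MCDISK": (600_000, 539_680),
        "SCRATCHDISK": (80_000, 58_462),
    }

    FULL_THRESHOLD = 0.90  # warn above 90% usage (assumption)

    for token, (size, used) in tokens.items():
        usage = used / size
        free = size - used
        flag = "  <-- nearly full" if usage >= FULL_THRESHOLD else ""
        print(f"{token}: {usage:.1%} used, {free:,} GB free{flag}")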

PBS problems

Started on 10th of April:
- PBS mom daemon hanging on some worker nodes (WNs), which could not access /proc; the PBS server hangs, too.
- Implemented a script that automatically restarts PBS when the PBS server hangs (a sketch of such a watchdog is given below).
- A script to reboot WNs on which kswapd showed high CPU load did not work, as the reboot itself also needs access to /proc.
- Reinstallation of all WNs with a new kernel (with hard reboot where necessary) on 15th and 16th of April; PBS has been stable since then.

Possible reasons:
- Massive NFS problems on 9th of April (might have been caused by raising the ATLAS job limit from 5,000 to 6,000 on 1st of April).
- kswapd had high CPU load (kernel bug); does this cause the /proc problems? (also observed by other sites)

Additionally, we updated PBS to the latest version this Monday.
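
A minimal sketch of the kind of watchdog mentioned above, assuming a Torque/PBS setup where a hung server no longer answers qstat and can be restarted via its init script; the command names, paths and timeouts are assumptions, not the actual GridKa script:

    # Sketch of a PBS watchdog: if the server stops answering qstat, restart it.
    # Assumes Torque-style tools (qstat, an init script for pbs_server); adjust to the site.
    import subprocess
    import time

    CHECK_INTERVAL = 300  # seconds between health checks (assumed)
    QSTAT_TIMEOUT = 60    # consider the server hung if qstat takes longer (assumed)

    def pbs_server_alive() -> bool:
        """Return True if the PBS server answers a simple server-status query in time."""
        try:
            subprocess.run(["qstat", "-B"], check=True, timeout=QSTAT_TIMEOUT,
                           stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return True
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
            return False

    def restart_pbs_server() -> None:
        """Restart the PBS server via its init script (path is site-specific)."""
        subprocess.run(["/etc/init.d/pbs_server", "restart"], check=False)

    if __name__ == "__main__":
        while True:
            if not pbs_server_alive():
                restart_pbs_server()
            time.sleep(CHECK_INTERVAL)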

June milestone

Major increase! Questions:
- When will the disk be needed?
- Is all new storage for disk-only?

                 April 2009                      June 2010
    disk         2,092 TB                        4,035 TB
    tape         1,578 TB                        2,990 TB
    computing    2843 kSI2k (=11372 HEPSPEC06)   4958 kSI2k (=19832 HEPSPEC06)
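
The HEPSPEC06 figures in the table follow the usual WLCG conversion of 1 kSI2k = 4 HEPSPEC06, which the slide's own numbers confirm. A short sketch of that conversion and of the pledged increase, using the values from the table:

    # Sketch: check the kSI2k -> HEPSPEC06 conversion and the pledged CPU increase.
    KSI2K_TO_HEPSPEC06 = 4  # WLCG conversion factor: 1 kSI2k = 4 HEPSPEC06

    pledges_ksi2k = {"April 2009": 2843, "June 2010": 4958}

    for period, ksi2k in pledges_ksi2k.items():
        print(f"{period}: {ksi2k} kSI2k = {ksi2k * KSI2K_TO_HEPSPEC06} HEPSPEC06")

    growth = pledges_ksi2k["June 2010"] / pledges_ksi2k["April 2009"]
    print(f"CPU increase factor: {growth:.2f}x")  # roughly a 74% increase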