GridKa SC4 Tier2 Workshop – Sep. 18-19, 2006: Warsaw Tier2 Site

Presentation transcript:

Warsaw Tier2 Site
Adam Padee ( ), Ryszard Gokieli ( ), Krzysztof Nawrocki ( ), Karol Wawrzyniak ( ), Wojciech Wiślicki ( )

Warsaw Tier 2 center - overview
- Resources hosted by ICM - Interdisciplinary Centre for Mathematical and Computational Modelling (the main computing facility of Warsaw University)
- Site name in GOC DB: WARSAW-EGEE
- Domain: polgrid.pl
- Main services: helpdesk, ce, se, cms-vo
- Resources funded by the National Research Agency; manpower also funded by EGEE

Resources at ICM
- 260 AMD Opteron CPUs in Sun Fire V20z and V40z nodes
- 15 TB of disk space

Resources at ICM (continued)
- 180 AMD Opteron 252 CPUs + 44 AMD Opteron 275 (dual-core) CPUs = 268 effective CPUs (about 400 kSI2k), all in Sun Fire V20z and V40z nodes
- 2 GB of memory per CPU and a 73 GB SCSI HDD per node
- 15 TB of disk space on SATA HDDs connected to 4 Sun StorEdge 3500 RAID arrays
- Storage currently served via a Classic SE (3.5 TB) and DPM (3.5 TB + 3.5 TB dedicated to CMS)
- Gigabit Ethernet for internal communication
- Full IPMI management and remote monitoring
- Currently 50% of resources are allocated to CMS (a soft limit that may be changed)
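As a quick sanity check of the arithmetic above, a minimal Python sketch (the per-core kSI2k factor is inferred from the quoted totals, not stated on the slide):

# Sanity check of the CPU and storage figures quoted above.
single_core = 180            # AMD Opteron 252 CPUs (one core each)
dual_core = 44               # AMD Opteron 275 CPUs (two cores each)

effective_cpus = single_core + 2 * dual_core
print(effective_cpus)        # -> 268 effective CPUs, as quoted

# The slide quotes "about 400 kSI2k" for these 268 cores,
# i.e. roughly 1.5 kSI2k per core (inferred, not stated).
print(round(400 / effective_cpus, 2))

# Grid-visible storage: 3.5 TB behind the Classic SE plus
# 3.5 TB + 3.5 TB behind DPM = 10.5 TB of the 15 TB total.
print(3.5 + (3.5 + 3.5))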

WARSAW-EGEE accounting data
Scheduler setup (FairShare, MaxProc):
- CMS (50%, 200)
- Compass (20%, 100)
- LHCb (10%, 50)
- VOCE (10%, 50)
- ATLAS (3%, 10)
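The slide does not name the batch system or scheduler; as an illustration only, a small Python sketch that restates the per-VO (FairShare, MaxProc) pairs in machine-readable form and checks that the fair-share targets are consistent:

# Per-VO scheduler targets quoted above: (fair-share %, MaxProc cap).
vo_shares = {
    "cms":     (50, 200),
    "compass": (20, 100),
    "lhcb":    (10, 50),
    "voce":    (10, 50),
    "atlas":   (3, 10),
}

total = sum(share for share, _ in vo_shares.values())
assert total <= 100, "fair-share targets must not exceed 100%"
# 93% of the farm is pinned to the listed VOs; the remaining 7%
# is left unpinned (its use is not specified on the slide).
print(f"pinned: {total}%, unpinned: {100 - total}%")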

Network infrastructure
Current situation:
- Stable 1 Gbit/s from Warsaw to Poznań (over the PIONIER network, on a VLAN dedicated to scientific projects)
- 2.4 Gbit/s from Poznań to GÉANT
Planned for the near future:
- Connecting the Tier2 VLAN directly to DFN over a dedicated 2 x 1 Gbit/s link (from Poznań)
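A back-of-the-envelope Python check of what the current 1 Gbit/s Warsaw-Poznań link allows in sustained transfer terms (theoretical line rate, ignoring protocol overhead):

link_gbps = 1.0                      # Warsaw -> Poznan link, Gbit/s
bytes_per_sec = link_gbps * 1e9 / 8  # = 125 MB/s theoretical peak
tb_per_day = bytes_per_sec * 86400 / 1e12
print(f"{bytes_per_sec / 1e6:.0f} MB/s, {tb_per_day:.1f} TB/day at line rate")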

Network infrastructure (continued)
[Map of the PIONIER national research network; image not preserved in the transcript]

Software installation
- User support for the Central European ROC (helpdesk.polgrid.pl), already connected to GGUS
- Standard gLite 3.0 currently installed on the cluster:
  - CE (ce.polgrid.pl)
  - Classic SE (se.polgrid.pl)
  - Services under test: DPM SE (setut.polgrid.pl) and CMS VO box (cms-vo.polgrid.pl)
Planned for the near future:
- Finish the storage reorganization
- Complete the installation of CMS services
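For completeness, a minimal sketch of how a CMS user on a gLite User Interface might check that the site's CE and SE are published to the information system, using the standard lcg-infosites tool (assumes a configured UI with a valid proxy; the hostnames are the ones listed above, and the Python wrapper is illustrative only):

# List the CEs and SEs published for the CMS VO and pick out the
# Warsaw hosts (ce.polgrid.pl, se.polgrid.pl) from the listing.
import subprocess

for resource in ("ce", "se"):
    listing = subprocess.run(
        ["lcg-infosites", "--vo", "cms", resource],
        capture_output=True, text=True, check=True,
    ).stdout
    warsaw = [line for line in listing.splitlines() if "polgrid.pl" in line]
    print(f"--- {resource} ---")
    print("\n".join(warsaw) if warsaw else "(no polgrid.pl entries found)")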