BINP/GCF Status Report: BINP LCG Site Registration (Oct 2009)

Overview
- Current status
- BINP LCG site registration procedures
- Getting to production with ATLAS VO activities
- NSC/SCN connectivity
- Cooperation with NSU SC facility
- Future prospects

BINP LCG Farm: Present Status
- CPU: 40 cores (100 kSI2k)
- RAM: 200 GB
- HDD: 25 TB raw (22 TB visible)
- Input power limit: 15 kVA
- Heat output: 5 kW
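
As a quick sanity check on the headline numbers above, the per-core figures follow directly from the quoted totals; the snippet below is illustrative arithmetic only, not part of the original slides.

```python
# Back-of-envelope per-core figures for the BINP LCG farm (illustrative arithmetic only).
cores = 40
capacity_ksi2k = 100          # quoted total compute capacity
ram_gb = 200
disk_raw_tb, disk_visible_tb = 25, 22

print(f"{capacity_ksi2k / cores:.1f} kSI2k per core")    # 2.5 kSI2k/core
print(f"{ram_gb / cores:.0f} GB RAM per core")           # 5 GB/core
print(f"Raw-to-visible disk overhead: {1 - disk_visible_tb / disk_raw_tb:.0%}")
```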

Current Resource Allocation (up to 80 VM slots now available within 200 GB of RAM)

Computing power:
- LCG: 4 host systems now (40%); a 70% share is prospected for production with ATLAS VO (near future)
- KEDR: 5 host systems (50%)
- VEPP-2000, CMD-3, test VMs, etc.: 1 host system (10%)

Centralized storage:
- LCG: 0.5 TB (VM images) + 15 TB (DPM pool buffer, VO software areas)
- KEDR: 0.5 TB (VM images) + 4 TB (local backup of experimental data)
- Others (e.g. NSU): up to 4 TB reserved for a local NFS/PVFS2 buffer
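
The same split can be re-expressed as shares to confirm the quoted percentages; this is an illustrative summary of the numbers on the slide, not an actual configuration dump.

```python
# Host-system and storage allocation quoted on this slide, re-expressed as shares.
hosts = {"LCG": 4, "KEDR": 5, "VEPP-2000 / CMD-3 / test VMs": 1}
storage_tb = {"LCG": 0.5 + 15, "KEDR": 0.5 + 4, "Others (e.g. NSU)": 4}

total_hosts = sum(hosts.values())
for name, n in hosts.items():
    print(f"{name}: {n}/{total_hosts} host systems = {n / total_hosts:.0%}")
print(f"Nominally allocated storage: {sum(storage_tb.values()):.1f} TB")
```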

BINP LCG Site Registration (1)
STEP 1: DONE
- Defining the basic configuration values for the site (name, place within the hierarchy, geographic location, etc.): BINP-Novosibirsk-LCG, a Tier-2 within the distributed RuTier-2 of WLCG
- Creating the mailing lists covering the site admin activities and WLCG site security issues
- Choosing the architecture of the site, setting up the software repositories, and deploying the start-up set of nodes (CE + SE + WNs): SLC4 x86 + gLite 3.1
- Registering the site in the GOC (Grid Operations Centre) with the help of a ROC (Regional Operations Centre) representative, getting "Candidate" status for the site, and publishing the contact info of the site admins and security officers: A.Zaytsev, A.Suharev
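
A compact way to keep the STEP 1 decisions together is a simple record like the sketch below; the field names are purely illustrative and do not correspond to the actual GOCDB/GLUE attributes.

```python
# Basic site description fixed in STEP 1 (field names are illustrative only,
# not the real GOCDB/GLUE schema).
site = {
    "name": "BINP-Novosibirsk-LCG",
    "tier": "Tier-2, part of the distributed RuTier-2 of WLCG",
    "node_types": ["CE", "SE", "WN"],
    "platform": "SLC4 x86 + gLite 3.1",
    "admins": ["A.Zaytsev", "A.Suharev"],
}
for key, value in site.items():
    print(f"{key}: {value}")
```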

BINP LCG Site Registration (2)
STEP 2: DONE
- Installing the utility nodes of the site (MON, LFC, WMS/LB, PX, UI, extra DPM_disk VMs, etc.)
- Requesting the certificates for all the service nodes of the site
- Configuring the middleware on all the nodes
- Tuning the local firewalls according to the site's internal and external connectivity requirements
- Tuning the local NAT engines / LCG farm-edge / BINP-edge / NSC firewalls to provide the service nodes of the site with external connectivity (with major help from S.D.Belov and the ICT sysadmins); a simple reachability probe is sketched below
- Getting "OK" status in the GStat tests run hourly by the GOC
- Getting "Certified/Production" status for the site from the ROC
- Defining the list of supported VOs: DTEAM, RDTEAM, RDSTEST, OPS, ATLAS
- Starting to receive the production SAM tests from the GOC
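
Once the firewall and NAT rules from STEP 2 are in place, external reachability of the public service endpoints can be spot-checked with a few TCP probes, for example as in the sketch below; the hostnames and ports are placeholders, not the real BINP-Novosibirsk-LCG nodes.

```python
# Quick TCP reachability probe for grid service endpoints after firewall/NAT tuning.
# Hostnames and ports are placeholders (examples), not the actual site nodes.
import socket

endpoints = [
    ("ce.example.binp.ru", 2119),    # example: CE gatekeeper port
    ("se.example.binp.ru", 8446),    # example: SRM endpoint
    ("bdii.example.binp.ru", 2170),  # example: site BDII
]

for host, port in endpoints:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"OK   {host}:{port}")
    except OSError as err:
        print(f"FAIL {host}:{port} ({err})")
```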

BINP LCG Site Registration (3)
STEP 3: IN PROGRESS
- Getting "OK" for all the SAM tests (currently being dealt with)
- Confirming the stability of operations for 1-2 weeks
- Upscaling the number of WNs to the production level (from 12 up to 32 CPU cores = 80 kSI2k max)
- Asking the ATLAS VO admins to install the experiment software on the site
- Testing the site's ability to run ATLAS production jobs
- Checking whether the 110 Mbps SB RAS channel is capable of carrying the load of an 80 kSI2k site (a back-of-envelope estimate follows below)
- Getting to production with ATLAS VO (hopefully by the end of Nov 2009)
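
One rough way to reason about the 110 Mbps question is to scale the connectivity hint given later in this talk (more than 1 Gbps needed beyond 0.5 MSI2k) down linearly; both the ratio and its linearity are assumptions made here for illustration only.

```python
# Back-of-envelope check of the 110 Mbps SB RAS channel against an 80 kSI2k site.
# Assumes a linear ~2 Mbps per kSI2k ratio derived from the "1 Gbps beyond 0.5 MSI2k"
# hint in the Future Prospects slide; this is an illustrative assumption only.
mbps_per_ksi2k = 1000 / 500
site_ksi2k = 80
channel_mbps = 110

needed_mbps = site_ksi2k * mbps_per_ksi2k
print(f"Estimated need: {needed_mbps:.0f} Mbps vs {channel_mbps} Mbps available")
print("looks sufficient" if needed_mbps <= channel_mbps else "may be a bottleneck")
```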

Future Prospects
Several ways to follow:
- Further upgrades of the farm, up to 360 CPU cores (… MSI2k) and 300 TB of disk space (see the rough projection below)
- Extending the LCG site to outer computing resources, mainly to the SC of the NSU: up to 128 cores might be granted for the LCG activities
  - The 10 Gbps NSU-BINP channel is expected to operate at full throughput starting from this week (SSCC is on its way, TSU is on the horizon)
  - The virtualization schema proposed for NSU is to be validated in the 2 weeks to come
- Both of the previous strategies in parallel
Important issues foreseen:
- Scaling up beyond 0.5 MSI2k might require more than 1 Gbps of external connectivity (exclusively) – a major effort to improve the external connectivity of the site is needed in …
- 10 Gbps links to the local experiments (KEDR, CMD-3) are required to make sure that all the resources of the farm (and its offshore parts) are used efficiently
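
For orientation only: the compute figure lost from the first bullet can be reconstructed assuming the current per-core rating (40 cores = 100 kSI2k), the NSU contribution can be taken from the Summary slide, and the bandwidth need scales with the same linear assumption as above. None of these figures are from the original slide.

```python
# Rough projection of the upgraded farm, assuming the current 2.5 kSI2k/core rating,
# the 300 kSI2k NSU contribution quoted on the Summary slide, and linear scaling of
# the "1 Gbps per 0.5 MSI2k" connectivity hint. Illustrative assumptions only.
ksi2k_per_core = 100 / 40
farm_ksi2k = 360 * ksi2k_per_core        # ~900 kSI2k for the upgraded farm
nsu_ksi2k = 300                          # prospective NSU contribution
total_ksi2k = farm_ksi2k + nsu_ksi2k

print(f"Upgraded farm: ~{farm_ksi2k:.0f} kSI2k; total with NSU: ~{total_ksi2k:.0f} kSI2k")
print(f"External bandwidth at that scale: ~{total_ksi2k / 500:.1f} Gbps (linear scaling)")
```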

360 CPU Cores / 300 TB Configuration

Prospective 10 Gbps Network Layout

Summary
- LCG site registration progress: 2 of 3 steps are handled
- With our own resources we export up to 80 kSI2k / 15 TB to WLCG (NSU may add 300 kSI2k in the near future)
- The exact hardware and network equipment upgrade plan is yet to be defined (though specs are ready for up to 1.1 M$)
- We are about to try to get to production with ATLAS VO exclusively in 2009Q4; support for more VOs has been demanded starting from Oct 2009

Questions & Comments