Status Report on Tier-1 in Korea
Gungwon Kang, Sang-Un Ahn and Hangjin Jang (KISTI GSDC)
April 28, 2014, 15th CERN-Korea Committee, Geneva
Korea Institute of Science and Technology Information, Global Science experiment Data hub Center

OUTLINE
- Computing Resources
- Operations
- Network
- Conclusion

KISTI GSDC Tier-1 Team (~9 people)
Representative: Haeng-Jin Jang
System Management: Hee-Jun Yoon
System Administration: Jeong-Heon Kim
Storage (Disk & Tape): Hee-Jun Yoon, Sang-Oh Park
Network: Hyoung-Woo Park, with KISTI support (Dr. Bu-Seung Cho)
Site Operation & Administration: Il-Yeon Yeo, Sang-Un Ahn
KIAF Operation & User Support: Sang-Un Ahn

Computing Resource Status
- 2013 pledge (CPU): 25,000 HEP-SPEC06
  - Current: 28,055 HEP-SPEC06
  - 2,524 job slots available (4 slots reserved for pilot jobs), with hyper-threading enabled
- 2013 pledge (tape storage): 1,500 TB
  - Current tape capacity: 1,000 TB; the pledge will be met this year
- 2013 pledge (disk storage): 1,000 TB
  - Current disk capacity: 966 TB (1,000 TB allocated, but usable space is slightly below that)
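For illustration only, a minimal Python sketch (not part of the original slides) that tabulates the pledged versus installed figures quoted above and computes the fulfilment ratio for each resource; all numbers are taken directly from this slide.

```python
# Sanity check of 2013 pledges vs. currently installed capacity (figures from the slide).
pledged = {"CPU (HEP-SPEC06)": 25_000, "Tape (TB)": 1_500, "Disk (TB)": 1_000}
current = {"CPU (HEP-SPEC06)": 28_055, "Tape (TB)": 1_000, "Disk (TB)": 966}

for resource, pledge in pledged.items():
    ratio = current[resource] / pledge * 100
    print(f"{resource}: pledged {pledge:,}, installed {current[resource]:,} ({ratio:.0f}% of pledge)")
```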

OPERATIONS

Total wall clock hours for ALICE jobs in the last 6 months: KISTI 3.9% (including Tier-2)
[Plot of running jobs from October 2013 to April 2014, rising from ~800 to ~1,800 to ~2,500 job slots; annotated events: T1 worker-node migration to 10GbE-equipped machines, ALICE central service maintenance, EMI-3 migration and delivery of full pledges]
Current capacity: 2,524 job slots, 28.1 kHS06
- 84 nodes, 32 (logical) cores per node, 11 HS06/core
Maintenance issues
- Worker-node migration to 10GbE-equipped machines
- Middleware: EMI-3 migration (support for EMI-2 ends on 30 April)
- Delivered full pledges for 2013
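A quick arithmetic cross-check (not from the slides) relating the node composition above to the quoted kHS06 and job-slot figures; the only inputs are numbers appearing on this and the previous slide, and the per-core value shows why the slide rounds to 11 HS06/core.

```python
# Cross-check of the capacity figures quoted on the slide.
nodes = 84
logical_cores_per_node = 32      # hyper-threading enabled
total_hs06 = 28_055              # measured HEP-SPEC06 from the previous slide
job_slots = 2_524

logical_cores = nodes * logical_cores_per_node
print(f"Logical cores: {logical_cores}")                           # 2688
print(f"HS06 per logical core: {total_hs06 / logical_cores:.1f}")  # ~10.4 (slide rounds to 11)
print(f"HS06 per job slot: {total_hs06 / job_slots:.1f}")          # ~11.1
```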

Site Reliability
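For readers unfamiliar with how WLCG reports these numbers, here is a small illustrative Python sketch of the usual availability/reliability definitions (availability counts all downtime against the site, reliability excludes scheduled downtime). The sample hours below are invented for the example and are not KISTI monitoring figures.

```python
# Illustrative only -- the example hours are invented, not KISTI monitoring data.
def availability(up_hours, total_hours):
    """Fraction of the whole period the site was up."""
    return up_hours / total_hours

def reliability(up_hours, total_hours, scheduled_downtime_hours):
    """Like availability, but scheduled downtime is not counted against the site."""
    return up_hours / (total_hours - scheduled_downtime_hours)

total = 30 * 24            # one month
scheduled = 12             # e.g. a planned network intervention
unscheduled = 5            # e.g. a storage incident
up = total - scheduled - unscheduled

print(f"Availability: {availability(up, total):.1%}")
print(f"Reliability:  {reliability(up, total, scheduled):.1%}")
```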

KISTI Analysis Facility (KIAF)
- Parallel analysis facility based on PROOF
- In operation since 2011, for ALICE use only
- 1 master and 8 worker nodes; 12 cores and 22 TB of disk per node
- Similar in size and utilization to the CAF (CERN Analysis Facility)
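As background on how such a PROOF farm is typically used from ROOT, a generic PyROOT sketch follows; this is not KIAF documentation, and the master hostname, tree name, file pattern and selector are placeholders.

```python
# Minimal PyROOT sketch of running an analysis selector on a PROOF cluster.
# Hostname, tree name, file pattern and selector below are hypothetical placeholders.
import ROOT

proof = ROOT.TProof.Open("kiaf-master.example.org")   # connect to the PROOF master

chain = ROOT.TChain("esdTree")                        # tree name depends on the analysis
chain.Add("root://storage.example.org//data/run*/AliESDs.root")
chain.SetProof()                                      # route Process() through the PROOF workers
chain.Process("MySelector.C+")                        # compile and run the selector on the workers

proof.Print()                                         # summary of the session and workers
```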

Plans for On-call Service
We are preparing an on-call service built around three components: an alarm system, an on-call scheme and a night-owl shift.
Alarm system
- Nagios with notifications; an SMS plugin is being implemented, complemented by a night-owl shift run by a private company
- Tape system: hardware/software malfunctions are reported to IBM and a third-party company; 24/7 support, with intervention to be carried out within one day
- Ongoing evaluation of monitoring frameworks, e.g. Icinga, Zabbix
On-call scheme
- One-week shift cycle with 5-6 people
- Expecting 1 or 2 calls per cycle: alarms from the batch scheduler and services, worker-node servicing
- The daily monitoring report provides a detailed action list for service and hardware incidents
Night-owl shift
- On-site support contracted to a private company
- If necessary, SMS and notifications go to off-site on-duty experts
- The supercomputing division at KISTI has been running a similar system for years
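To make the alarm-system item concrete, here is a minimal sketch of the kind of SMS handler Nagios can invoke as a notification command; it is not the plugin actually being implemented at GSDC, and the gateway URL, token and message format are hypothetical.

```python
#!/usr/bin/env python3
# Hypothetical SMS notification handler invoked by Nagios as a notification command, e.g.:
#   notify-by-sms.py "$CONTACTPAGER$" "$HOSTNAME$" "$SERVICEDESC$" "$SERVICESTATE$"
# The gateway URL and token below are placeholders, not a real service.
import sys
import requests

SMS_GATEWAY_URL = "https://sms-gateway.example.org/send"   # placeholder endpoint
API_TOKEN = "CHANGE_ME"                                    # placeholder credential

def send_sms(phone_number, text):
    resp = requests.post(
        SMS_GATEWAY_URL,
        data={"token": API_TOKEN, "to": phone_number, "message": text},
        timeout=10,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    phone, host, service, state = sys.argv[1:5]
    send_sms(phone, f"[GSDC alarm] {host}/{service} is {state}")
```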

NETWORK

Internal Network
- The Tier-1 internal network is isolated from the computing-centre service network
- Internal network restructuring done in October 2013 (3-week shutdown)
  - Preparation for upgrading the external network bandwidth to 10 Gbps
  - Main switch upgraded: bandwidth up to 2.5 Tbps
  - HA configuration of the private network
- Removing bottlenecks to storage
  - Full 20 Gbps configuration (incoming/outgoing)
  - Rack switches replaced with 10 Gbps ones; done for part of the service racks
  - 1 Gbps switches remain in place for servers with 1 Gbps cards
  - Worker nodes to be upgraded with 10 GbE cards
  - Tape service nodes are being connected to the 10 Gbps switches

External Network
- Current bandwidth to CERN: 2 Gbps
  - Dedicated link via Daejeon-Chicago-Amsterdam-Geneva
- Roadmap for a 10 Gbps upgrade presented to the WLCG MB and accepted
- Working on upgrading the bandwidth to 10 Gbps
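To put the planned upgrade in perspective, here is a small back-of-the-envelope estimate (not from the slides) of how long a bulk transfer would take at the current and target bandwidths, assuming the link could be fully utilised; real WAN transfers achieve less because of protocol overhead and sharing.

```python
# Back-of-the-envelope transfer-time estimate, assuming full link utilisation.
def transfer_hours(data_tb, link_gbps):
    bits = data_tb * 1e12 * 8              # TB -> bits (decimal units)
    return bits / (link_gbps * 1e9) / 3600

for link in (2, 10):                        # current 2 Gbps vs. planned 10 Gbps to CERN
    print(f"100 TB at {link} Gbps: ~{transfer_hours(100, link):.0f} hours")
```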

LHC OPN
- The KISTI T1 network (a /24 prefix) is included in the LHCOPN
- BGP peering between KREONET (KISTI) and LCG (CERN)
- perfSONAR has been deployed for measuring bandwidth and latency; a firewall policy issue persists for ports below 1024, e.g. 80 (http), 443 (https), 843 (bwctl)
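As an illustration of the kind of check behind the firewall issue mentioned above (a generic sketch, not a GSDC tool; the target hostname is a placeholder), one can probe whether the low-numbered ports listed on the slide are reachable:

```python
# Quick reachability probe for the perfSONAR-related ports listed on the slide.
# The target hostname is a placeholder, not a real GSDC or perfSONAR endpoint.
import socket

HOST = "perfsonar.example.org"
PORTS = {80: "http", 443: "https", 843: "bwctl"}   # ports quoted on the slide

for port, service in PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=5):
            status = "open"
    except OSError:
        status = "blocked or unreachable"
    print(f"{HOST}:{port} ({service}): {status}")
```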

Conclusion
- KISTI T1 was approved as a full T1 at the WLCG Overview Board meeting in November
  - The progress in ramping up its capability as a T1 was appreciated by the ALICE community, and the roadmap to a 10 Gbps network was accepted
- In January, KISTI T1 joined the LHCOPN
- Over the last 6 months, KISTI T1 has been "shape-shifting" in terms of network
  - Core switches replaced (bandwidth: 0.9 Tbps → 2.5 Tbps)
  - Rack switches replaced (bandwidth: 1 Gbps → 10 Gbps)
  - Servers migrated to 10GbE-equipped machines