
IHEP (Beijing LCG2) Site Report
Fazhi Qi, Gang Chen
Computing Center, IHEP

Outline
- Infrastructure
- Local Cluster Status
- LCG Tier-2 Site Statistics
- Management & Operation
- Summary

Infrastructure
- Serving more than 1,000 users
- Power supply capacity: 1,800 kW
- Cooling: water-cooled racks for the blade servers

Infrastructure Upgrade
- Power capacity: 1,800 kW
- Cooling system:
  - Water-cooled racks
  - Inter-row air conditioning
  - Cooling capacity per rack: 28 kW

Local Cluster -- Computing Nodes
- 886 computing nodes: 7,082 CPU cores
- Mostly for the BES, YBJ, DYB, ATLAS and CMS experiments; some small projects added
- Blade systems from IBM/HP/Dell
  - Blades linked with GigE/IB
  - Chassis link to the central switch with 10 GigE
- Most nodes run SL5.5 (64-bit); migration to SL5.8 intended
- A small fraction still runs SL4.5 (32-bit)

Local Cluster -- Scheduler
- Torque and Maui deployed
- Maui: intend to upgrade to support MPI jobs
- Tools developed to monitor resource usage, queue status, etc.
- Accounting tool developed
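The monitoring tools themselves are not shown on the slides; the following is a minimal sketch of how such a queue-status check could work on a Torque cluster, assuming the standard `qstat` command is on the PATH (the parsing of its default six-column output is the only Torque-specific part):

```python
import subprocess
from collections import Counter

def queue_status():
    """Count Torque jobs per (queue, state) by parsing `qstat` output."""
    # Default `qstat` output: Job ID, Name, User, Time Use, S, Queue,
    # preceded by a header line and a separator line.
    out = subprocess.check_output(["qstat"], text=True)
    counts = Counter()
    for line in out.splitlines()[2:]:   # skip the two header lines
        fields = line.split()
        if len(fields) >= 6:
            state, queue = fields[4], fields[5]
            counts[(queue, state)] += 1
    return counts

if __name__ == "__main__":
    for (queue, state), n in sorted(queue_status().items()):
        print(f"{queue:15s} {state}: {n}")
```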

Scheduler
- 50 queues to fit various requests
- Besides serial jobs, MPI and GPU jobs are also supported
- Testbed: integration of Torque and OpenStack, managing and scheduling VM nodes in a batch-cloud (a sketch of the idea follows below)
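The slide only names the Torque/OpenStack testbed; purely as an illustration of the batch-cloud idea, this sketch boots one extra worker VM through the OpenStack nova CLI when the Torque backlog grows. The image name, flavor, and threshold are hypothetical, and the usual OS_* authentication environment variables are assumed to be set:

```python
import subprocess

QUEUED_THRESHOLD = 100   # illustrative backlog limit
IMAGE = "sl5-worker"     # hypothetical worker VM image
FLAVOR = "m1.large"      # hypothetical flavor

def queued_jobs():
    """Number of Torque jobs currently in the Q (queued) state."""
    out = subprocess.check_output(["qstat"], text=True)
    return sum(1 for line in out.splitlines()[2:]
               if len(line.split()) >= 6 and line.split()[4] == "Q")

def boot_worker(name):
    """Start one extra worker VM via the OpenStack nova CLI."""
    subprocess.check_call(
        ["nova", "boot", "--image", IMAGE, "--flavor", FLAVOR, name])

if __name__ == "__main__":
    if queued_jobs() > QUEUED_THRESHOLD:
        boot_worker("batch-worker-extra")
```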

Local Cluster -- Storage
- Gluster system installed; in production for less than 4 months
- Total space: 153 TB; used space: 145 TB
- Performance optimization ongoing
- Adjustments made as new bugs show up
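With the volume at roughly 95% capacity (145 of 153 TB), close monitoring is implied; a minimal stdlib-only sketch, assuming the Gluster volume is mounted at a hypothetical /gluster path and that the alert threshold is just an example:

```python
import os

MOUNT = "/gluster"   # hypothetical mount point of the Gluster volume
ALERT_PCT = 90.0     # illustrative alert threshold

def used_percent(path):
    """Used fraction of the filesystem backing `path`, in percent."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return 100.0 * (total - free) / total

if __name__ == "__main__":
    pct = used_percent(MOUNT)
    print(f"{MOUNT}: {pct:.1f}% used")
    if pct > ALERT_PCT:
        print("WARNING: volume nearly full")
```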

Beijing LCG Tier-2 Site
- For the CMS and ATLAS experiments
- Job slots
- Storage:
  - 320 TB dCache
  - 320 TB DPM
  - 1 TB disks were replaced by 2 TB disks

Beijing LCG Tier-2 Site -- CPU Time
[chart: CPU time statistics]

Network Connection
[diagram: network topology with labels Orient+, Daya Bay, Beijing CSTNet, Hong Kong, IHEP, USA, GLORIAD 10G, ASGC (IPv4 10G, IPv6), Tsinghua, YBJ, Europe 2.5G, 155M, EDU.CN 10G, others]

- Two hosts for perfSONAR:
  - perfsonar.ihep.ac.cn for bandwidth tests
  - perfsonar2.ihep.ac.cn for latency tests
- Network performance tuning is in progress between IHEP and the EU sites
- http://twiki.ihep.ac.cn/twiki/bin/view/InternationalConnectivity/IHEP-CCIN2P3
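As a hedged illustration of how a bandwidth measurement against the host above could be scripted (assuming the standard perfSONAR bwctl client is installed; the target and duration are just examples):

```python
import subprocess

TARGET = "perfsonar.ihep.ac.cn"  # bandwidth-test host named above

def bandwidth_test(host, seconds=30):
    """Run a one-off throughput test with bwctl and return its raw report."""
    # -c <host>: receiving side of the test; -t <sec>: test duration.
    # bwctl wraps iperf and negotiates a test slot with the remote daemon.
    return subprocess.check_output(
        ["bwctl", "-c", host, "-t", str(seconds)], text=True)

if __name__ == "__main__":
    print(bandwidth_test(TARGET))
```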

Network Research
- Goal:
  - A flexible, reliable and high-performance HEP data transfer network (virtual and private) and system platform in China
  - IPv4 and IPv6 supported
  - Traffic can be switched between the IPv4 and IPv6 infrastructures and physical paths, automatically or manually, based on network performance and the applications (a sketch of this selection logic follows below)
- IHEPDTN:
  - End-user network
  - Backbone network (IPv6 & IPv4)
  - SDN switch (L2VPN gateway, OpenFlow supported)
  - Control center (API to applications)
  - Applications (FTS/NMS/...)
- Members: IHEP/SJU/SDU/Tsinghua/...
- Network vendor: Ruijie Networks
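The automatic IPv4/IPv6 switching is described only at this architectural level; the sketch below shows, purely illustratively and with no real controller API, what the performance-based path selection could look like (thresholds and measurement inputs are invented):

```python
from dataclasses import dataclass

@dataclass
class PathStats:
    """Recent measurements for one candidate path (e.g. from perfSONAR)."""
    name: str               # "ipv4" or "ipv6"
    throughput_mbps: float
    loss_pct: float

def pick_path(paths, min_throughput=1000.0, max_loss=0.5):
    """Choose the best usable path; prefer throughput among healthy paths."""
    healthy = [p for p in paths
               if p.loss_pct <= max_loss
               and p.throughput_mbps >= min_throughput]
    candidates = healthy or paths   # fall back to the least-bad path
    return max(candidates, key=lambda p: p.throughput_mbps)

if __name__ == "__main__":
    stats = [PathStats("ipv4", 2300.0, 0.1), PathStats("ipv6", 4100.0, 0.0)]
    print("route traffic via", pick_path(stats).name)
```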

Model
[diagram]

Summary
- Most of the computing environment is running well
- The new Gluster system is in production
- Network performance between IHEP and Europe has clearly improved
- A new management and operation system will be deployed to improve efficiency

Thank you! Questions?