Computer System Replacement at KEK
K. Murakami, KEK/CRC
FJPPL (KEK/CRC - CC/IN2P3) meeting, 2012/Mar/14

Outline
- Overview
- Introduction of the New Central Computing System (KEKCC)
  - CPU
  - Storage
- Operation Aspects

Overview

Computing Facilities at KEK
- Two systems
  - Supercomputer system
  - Central computer system
    - Linux cluster
    - Support for IT infrastructure (mail / web)
- Both systems are currently being replaced
- Rental systems
  - Replaced every 3-5 years
  - International bidding
  - The RFI/RFP cycle and system introduction span two years

KEK Supercomputer System
KEKSC is now in service and will be fully installed soon.
- Purpose: large-scale numerical simulations
- Schedule: System-A running Sep 2011 - Jan 2012; System-A+B from March 2012
- System-A: Hitachi SR16000 model M1
  - POWER7, 54.9 TFlops, 14 TB memory
  - 56 nodes: 960 GFlops and 256 GB per node
  - Automated parallelization within a single node (32 cores)
- System-B: IBM Blue Gene/Q
  - 6 racks (3 from Mar 2012, 3 from Oct 2012)
  - 1.258 PFlops and 96 TB memory in total (see the arithmetic check below)
  - Per rack: 1024 nodes, 209.7 TFlops, 16 TB memory, 5D torus network
- Scientific subjects: large-scale simulation program
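The System-B totals follow directly from the per-rack figures; here is a minimal arithmetic check in Python, using only values quoted on this slide:

```python
# Sanity-check the IBM Blue Gene/Q (System-B) aggregate numbers from the slide.
RACKS = 6
TFLOPS_PER_RACK = 209.7   # peak performance per rack, per slide
MEM_TB_PER_RACK = 16      # memory per rack, per slide
NODES_PER_RACK = 1024     # nodes per rack, per slide

total_pflops = RACKS * TFLOPS_PER_RACK / 1000.0
total_mem_tb = RACKS * MEM_TB_PER_RACK
total_nodes = RACKS * NODES_PER_RACK

print(f"peak:   {total_pflops:.3f} PFlops (slide quotes 1.258 PFlops)")
print(f"memory: {total_mem_tb} TB (slide quotes 96 TB)")
print(f"nodes:  {total_nodes}")
```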

New Central Computer System
- The new KEKCC merges the current Central Computer System (KEKCC) and the B-Factory Computer System.
- The rental period of the current systems ends next February; the new system goes into service in Apr/2012.

New KEKCC

Features of the New KEKCC
- Main contractor: IBM
- 3.5-year rental system (until Aug/2015)
- 4000 CPU cores
  - Linux cluster (SL5)
  - Interactive / batch servers
  - Grid (gLite) deployed
- Storage system for big data
  - 7 PB disk storage (DDN)
  - Tape library with a maximum capacity of 16 PB
  - High-speed I/O, high scalability

CPU: IBM System x iDataPlex
- Work servers & batch servers
  - Xeon X5670 (2.93 GHz, 3.33 GHz with Turbo Boost, 6 cores)
  - 282 nodes with 4 GB/core, 58 nodes with 8 GB/core
  - 2 CPUs per node, 4080 cores in total
- Interconnect
  - InfiniBand 4x QDR (4 GB/s), RDMA
  - Connects to the storage system
- Job scheduler
  - LSF (ver. 8), scalable up to 1M jobs (see the submission sketch below)
- Grid deployment
  - gLite
  - Work servers act as Grid UI, batch servers as Grid WN
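To illustrate how jobs reach the LSF scheduler named above, here is a minimal Python sketch that wraps the standard `bsub` command; the queue name, job script and log pattern are hypothetical examples, not actual KEKCC settings:

```python
# Minimal sketch: submit a job to LSF from Python via the standard `bsub` tool.
# Queue name "short" and the script path are hypothetical examples.
import subprocess

def submit_lsf_job(script, queue="short", cores=1, jobname="demo"):
    """Submit `script` to LSF and return bsub's stdout (contains the job ID)."""
    cmd = [
        "bsub",
        "-q", queue,                 # target queue
        "-n", str(cores),            # number of job slots (cores)
        "-J", jobname,               # job name
        "-o", f"{jobname}.%J.out",   # output log; %J expands to the job ID
        script,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit_lsf_job("./run_analysis.sh", queue="short", cores=4))
```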

Disk System: DDN SFA10000
- DDN SFA10K x 6
  - Capacity: 1152 TB x 6 = 6.9 PB (effective)
  - Throughput: 12 GB/s x 6
  - Used for GPFS and GHI
- GPFS file system
  - Parallel file system with a total throughput of > 50 GB/s
  - Optimized for massive access: many file servers, no interconnect bottleneck (RDMA-enabled), separate metadata area, large block size
- Performance
  - > 500 MB/s for single-file I/O in benchmark tests (see the sketch below)
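The single-stream figure can be checked with a simple sequential-I/O micro-benchmark along these lines; a minimal sketch with a placeholder mount point and no O_DIRECT handling, so results will depend on block size and caching:

```python
# Minimal sequential write/read micro-benchmark for a parallel file system
# mount (e.g. a GPFS directory).  The path is a placeholder, not a real
# KEKCC mount point.  Page-cache effects may inflate the read figure.
import os, time

PATH = "/gpfs/test/bench.dat"      # hypothetical GPFS path
BLOCK = 8 * 1024 * 1024            # 8 MiB per I/O operation
COUNT = 512                        # 4 GiB total

def bench_write():
    buf = os.urandom(BLOCK)
    t0 = time.time()
    with open(PATH, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())       # make sure data actually hit the file system
    return BLOCK * COUNT / (time.time() - t0) / 1e6   # MB/s

def bench_read():
    t0 = time.time()
    with open(PATH, "rb") as f:
        while f.read(BLOCK):
            pass
    return BLOCK * COUNT / (time.time() - t0) / 1e6   # MB/s

if __name__ == "__main__":
    print(f"write: {bench_write():.0f} MB/s")
    print(f"read:  {bench_read():.0f} MB/s")
```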

Tape System: IBM TS3500 library, IBM TS1140 drives
- Tape library: maximum capacity of 16 PB
- Tape drives: 60 x TS1140, the latest enterprise drive
  - LTO is not used because of its lower reliability
  - Only two vendors remain: IBM and StorageTek
- Tape media (see the throughput sketch below)
  - JC: 4 TB, 250 MB/s
  - JB: 1.6 TB (repack), 200 MB/s
  - The magnetic tape produced by Fujifilm is used for both IBM and StorageTek media
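For a sense of scale, a small sketch computing the aggregate drive bandwidth and the purely theoretical time to stream the full library capacity, using only the figures above:

```python
# Back-of-the-envelope numbers for the tape subsystem, using slide figures.
DRIVES = 60
DRIVE_MBPS = 250           # TS1140 with JC media, MB/s
LIBRARY_PB = 16            # maximum library capacity

aggregate_gbps = DRIVES * DRIVE_MBPS / 1000.0             # GB/s with all drives streaming
fill_seconds = LIBRARY_PB * 1e9 / (DRIVES * DRIVE_MBPS)   # 1 PB = 1e9 MB (decimal)

print(f"aggregate tape bandwidth: {aggregate_gbps:.1f} GB/s")
print(f"time to write 16 PB at full rate: {fill_seconds / 86400:.0f} days")
```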

HSM (Hierarchical Storage Management)
- HPSS
  - Disk (first layer) + tape (second layer)
  - Operational experience from the former KEKCC
- Improvements over the former system
  - More tape drives and faster tape-drive I/O
  - Stronger interconnect (10 GbE, IB)
  - Larger and faster staging area
  - Integration with the GPFS file system (GHI)
- GHI (GPFS-HPSS Interface): new!
  - GPFS serves as the staging area
  - Full coherence with GPFS access (POSIX I/O): no HPSS client API, replacing the current VFS interface (see the sketch below)
  - High-performance I/O of GPFS
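Because GHI keeps the namespace POSIX-coherent, an application reads a GHI-managed file exactly as it reads any other file; if the data have been migrated to tape, the read simply blocks while HPSS stages them back. A minimal sketch with a hypothetical path:

```python
# With GHI, no HPSS client API is needed: a file migrated to tape is
# recalled transparently on access, at the cost of latency.
# The path below is a hypothetical example, not a real KEKCC path.
import time

PATH = "/ghi/belle/data/run012345.root"   # hypothetical GHI-managed file

t0 = time.time()
with open(PATH, "rb") as f:               # plain POSIX open/read
    header = f.read(1024 * 1024)          # first MiB; may trigger a tape recall
elapsed = time.time() - t0

if elapsed > 10:
    print(f"read took {elapsed:.0f} s - file was probably staged from tape")
else:
    print(f"read took {elapsed:.2f} s - file was resident on the disk layer")
```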

GHI Data Flow
[Diagram: write and read paths between the Linux cluster, GPFS (NSD servers #1-#3 and the GPFS disk behind a SAN switch, reached over the lab LAN), and HPSS (movers #1-#4, the core server, the HPSS disk and the tape library behind a second SAN switch).]

Internal Cloud Service
- Motivation
  - Systems tailored to specific experiments, groups, or communities
  - Testing of new operating systems
  - Efficient resource management (servers on demand)
  - PaaS-type service
- Cloud middleware
  - Platform ISF + ISF Adaptive Cluster (coherent with LSF)
  - In the future, an open solution (e.g. OpenStack)
- Provisioning tools
  - KVM as the VM solution (see the sketch below)
  - xCAT for per-node system reinstallation
- Virtualization technology is not yet mature enough
  - CPU virtualization is fine, but I/O virtualization is not
  - The interconnect choice (10 GbE or IB) must take virtualization technologies (nPAR, SR-IOV) into account
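Since KVM is named as the VM layer, guests would normally be managed through libvirt; the short sketch below merely lists the guests on a hypervisor with the libvirt Python bindings, using the standard local-KVM connection URI, and is not specific to the KEKCC cloud setup:

```python
# Minimal sketch: inspect KVM guests through libvirt's Python bindings.
# Requires the libvirt-python package; qemu:///system is the standard URI
# for the local KVM/QEMU hypervisor.  Not specific to the KEKCC setup.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```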

Operation Aspects

Effects of the 3.11 Earthquake
- Earthquake intensity at KEK (Tsukuba)
  - 6-lower on the Japanese scale (maximum 7)
  - VIII on the MMI (Modified Mercalli) scale
- Hardware damage was minimal
  - Some racks swayed
  - Some HDDs broke, with minimal data loss
  - The UPS was not helpful; an automatic shutdown mechanism that triggers while the UPS is still alive is being introduced, especially for the disk system
- Crisis in the electricity supply
  - Accident at the Fukushima nuclear power plant
  - Almost all nuclear power plants are offline pending stress tests
  - Potential risk of blackouts during summer daytime
  - Politically mandated electricity saving: a 30% power cut was imposed
  - Electricity rates will rise by about 15%

Electricity Saving in the New System
- Energy-saving products
  - IBM iDataPlex: high density, cooling efficiency improved by about 40%
  - Power-supply efficiency of 80 PLUS Silver or better
  - Tape is a green device; the disk system is not
  - No MAID, because of the failure risk and insufficient data transfer rate (Grid access)
- Electrical power visualization
  - Power consumption of all components is monitored
  - IBM Systems Director, intelligent PDUs, power clamp meters
- Power capping
  - IBM Active Energy Manager
  - Caps server power by controlling the CPU frequency
  - Maximum power consumption can be set between 220 W and 350 W per server (see the sketch below)
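To see what this cap range means at cluster scale, a small sketch using the node counts from the CPU slide and the cap limits above; the numbers are illustrative, not measurements:

```python
# Rough estimate of what per-server power capping means at cluster scale,
# using the node counts from the CPU slide (282 + 58 = 340 nodes) and the
# Active Energy Manager cap range (220-350 W).  Illustrative only.
NODES = 282 + 58
CAP_HIGH_W = 350
CAP_LOW_W = 220

ceiling_high_kw = NODES * CAP_HIGH_W / 1000.0
ceiling_low_kw = NODES * CAP_LOW_W / 1000.0

print(f"compute nodes:              {NODES}")
print(f"ceiling at 350 W/server:    {ceiling_high_kw:.0f} kW")
print(f"ceiling at 220 W/server:    {ceiling_low_kw:.0f} kW")
print(f"headroom gained by capping: {ceiling_high_kw - ceiling_low_kw:.0f} kW")
```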

Challenges for the Future
- Facility
  - Electricity: the new system draws 350 kW plus 400 kW for air cooling; current PUE > 2.x (see the sketch below); megawatt scale in the next system
  - Cooling: water cooling
  - Space: a new building or a data-center container
- Data management
  - Big (exascale) data management: exabytes in the near future
  - Data must be copied at every system replacement: 5 PB now, 20 PB next, ...
  - Strategy for tape / library (IBM / StorageTek): tape generations evolve too rapidly
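The quoted PUE is consistent with the two power figures above if the 350 kW is read as the IT load and the 400 kW as cooling overhead; that decomposition is an interpretation of the slide, sketched below:

```python
# PUE = total facility power / IT equipment power.  Assumes the slide's
# 350 kW is the IT (system) load and the 400 kW is cooling overhead;
# that split is an interpretation, not stated explicitly on the slide.
IT_LOAD_KW = 350
COOLING_KW = 400

pue = (IT_LOAD_KW + COOLING_KW) / IT_LOAD_KW
print(f"PUE ~ {pue:.2f}")   # about 2.1, consistent with 'PUE > 2.x'
```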

Summary
- Computing facilities at KEK
  - Supercomputer system
  - Central computer system (KEKCC): a Linux cluster
- New KEKCC
  - Migrated system (former KEKCC + B-Factory computer system)
  - Service-in from Apr/2012
  - 4000 CPU cores, Linux cluster (Scientific Linux 5.6)
  - Grid environment (gLite)
- Storage system
  - 7 PB DDN storage with the GPFS file system
  - Tape library with 16 PB capacity
  - HPSS with GHI (GPFS-HPSS Interface) as HSM
  - High-speed access and high scalability for big data
- Challenges for the future: how to design the next system