Computing Environment in Chinese Academy of Sciences. Dr. Xue-bin, Dr. Zhonghua, Supercomputing Center, Computer Network Information Center.


Computing Environment in Chinese Academy of Sciences. Dr. Xue-bin, Dr. Zhonghua, Supercomputing Center, Computer Network Information Center, Chinese Academy of Sciences. Chinese American Networking Symposium 2004, Miami, USA.

Outline
- Glance at the Supercomputing Center
- Computing Facilities
- Parallel Software
- User Environment
- Application Areas
- Main Node of China National Grid
- Conclusions

Supercomputing Center
- Four divisions: system maintenance, parallel computing research, application software development, and grid technology application
- Focus on parallel computing and parallel libraries, providing solutions for complicated large-scale applications in science and engineering
- Objective: provide computing and storage facilities and services for CAS

Computing Facilities: ICT Dawning 2000
- Peak: 111 GFLOPS
- Memory: 46 GB
- Storage: 628 GB
- Processors: 164 (82 nodes)
- CPU: dual 333 MHz PowerPC 604e (thin/thick nodes); dual 200 MHz POWER3 (high-performance nodes, main server nodes)
- OS: AIX
- Installed: May 2000

Computing Facilities (con.): Lenovo DeepComp 6800
- Peak: 5.3 TFLOPS; HPL: 4.2 TFLOPS
- TOP500 rank: 14 (2003)
- Nodes: 265; processors: 1060
- Processor: Itanium 2, 1.3 GHz
- Memory: 2.6 TB
- Storage: 80 TB
- Network: Quadrics QsNet
- OS: Red Hat AS 2.1
- Installed: December 2003
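The quoted figures above are internally consistent, which a quick sanity check shows; this sketch assumes Itanium 2's 4 floating-point operations per cycle, a figure not stated on the slide:

```python
# Sanity check of the DeepComp 6800 performance figures quoted above.
# Assumption: Itanium 2 retires up to 4 floating-point ops per cycle.
procs = 1060
clock_ghz = 1.3
flops_per_cycle = 4

peak_tflops = procs * clock_ghz * flops_per_cycle / 1000  # GFLOPS -> TFLOPS
hpl_efficiency = 4.2 / 5.3  # measured HPL against the quoted peak

print(f"theoretical peak ~ {peak_tflops:.1f} TFLOPS")   # ~ 5.5 TFLOPS
print(f"HPL efficiency ~ {hpl_efficiency:.0%}")          # ~ 79%
```

A 79% HPL/peak ratio is typical of tightly coupled clusters of that era, so the 4.2 TFLOPS Linpack result matches the hardware description.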

Computing Facilities (con.): [System diagram: computational nodes, login nodes, I/O nodes, and a front-end server connected through the system, monitoring, management, and storage networks to disk arrays and the LAN]

Computing Facilities (con.): SGI Onyx 350
- Peak: 38 GFLOPS; HPL: 22 GFLOPS
- Processors: 32 (R16000, 600 MHz)
- Memory: 32 GB
- Storage: 500 GB
- OS: IRIX 6.5
- Installed: March 2004

Parallel Software: Commercial Software
- Gaussian 03
- ADF
- ANSYS
- LS-DYNA, ABAQUS
- Platform LSF
- TotalView
- ParaWise
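Platform LSF, listed above, is the batch scheduler through which users reach these machines. A submission script might look like the following sketch; the job name, queue name, core count, and application binary are all hypothetical:

```shell
#!/bin/bash
# Sketch of a Platform LSF batch script (hypothetical names throughout).
#BSUB -J sample_job        # job name
#BSUB -n 16                # number of processors requested
#BSUB -q normal            # queue name (hypothetical)
#BSUB -o %J.out            # stdout file (%J expands to the job id)
#BSUB -e %J.err            # stderr file

mpirun -np 16 ./my_parallel_app
```

The script would be submitted with `bsub < job.lsf` and monitored with `bjobs`.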

Parallel Software (con.): Free Software
- LAPACK, ScaLAPACK
- PETSc, FFTW
- ParaView, VTK, Vis5D, PyMOL, RasMol, VMD, GMT
- GAMESS, CPMD, DOCK
- BLAST, ClustalW, BLAT, Predpharp, etc.
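To illustrate the kind of dense linear algebra LAPACK and ScaLAPACK provide, here is a minimal single-node example; it uses NumPy, whose `linalg.solve` dispatches to LAPACK's `gesv` routine underneath:

```python
import numpy as np

# Solve the dense linear system A x = b.
# numpy.linalg.solve calls LAPACK's gesv (LU factorization) internally.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print(x)  # [2. 3.]
```

ScaLAPACK offers the same factorizations distributed across MPI processes for problems too large for one node.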

Parallel Software (con.): Development of Application Software
- MPI_AltSplice, MPI_SiClone
- Auto-program generator for FEM
- BDF
- Material science
- Fluid dynamics

User Environment: ScGrid Portal
- User module: login and logout, user configuration, job submission (batch and interactive), job viewing, resource querying, file manager, file transfer, etc.
- Accounting module
- Admin module: portal administrator and CA manager
- Also available: ssh, ftp

User Environment (con.): [portal screenshots]

Application Areas
- Computational chemistry
- Computational physics
- Computational mechanics
- Bioinformatics
- Geophysics
- Atmospheric physics
- Astrophysics
- Material sciences

Application Areas (con.): A successful prediction of the earthquake that struck the Yangjiang region of Guangdong Province in September, made using the Load/Unload Response Ratio (LURR) on the DeepComp 6800 and SGI Onyx 350. On the first of September this year, we conducted an LURR spatial scan of the Chinese Mainland on the DeepComp 6800, with space-window radii of 100, 200, 300, and 400 km and a scan step of just 0.125º. After the computation finished on the DeepComp 6800, the SGI Onyx 350 was used for visualization. Of all the abnormal areas in the results, the Yangjiang region in Guangdong Province was the most remarkable (see Fig. 1). The results were submitted to the Department of Monitoring and Prediction and the Center for Analysis and Prediction of the China Seismological Bureau, and were reported at the DeepComp 6800 user conference held on September 11 this year. Just six days later, the Yangjiang region was hit by an earthquake. This emphasizes the need for timely results and validates the use of supercomputers.
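The quantity scanned above can be sketched schematically. LURR compares seismic energy release during tidally induced loading versus unloading phases; a ratio well above 1 marks an abnormal region. In this toy sketch the phase labels, event energies, and the exponent are assumed inputs, not the actual scan's data or parameters:

```python
# Schematic sketch of the Load/Unload Response Ratio (LURR).
# Assumption: events are already classified into loading ('+') and
# unloading ('-') phases; energies are hypothetical example values.
def lurr(events, m=0.5):
    """events: iterable of (energy, phase); m=0.5 sums Benioff strain."""
    load = sum(e ** m for e, p in events if p == '+')
    unload = sum(e ** m for e, p in events if p == '-')
    return load / unload  # values well above 1 flag an abnormal region

events = [(1e6, '+'), (4e6, '+'), (1e6, '-'), (1e6, '-')]
print(lurr(events))  # 1.5
```

The spatial scan repeats this calculation over a grid of circular windows (here, radii of 100 to 400 km at 0.125º steps), which is what makes a supercomputer necessary for a whole-country scan.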

Fig. 1: LURR spatial scan of the Chinese Mainland (R = 100 km, scan step 0.125º/0.125º, y/yc > 1) and the location of the Yangjiang earthquake.

Will be a node of the ACES_iSERVO Grid. ACES grew from discussions commencing in 1995 among scientists of the USA, China, Japan, and Australia. ACES: APEC Cooperation for Earthquake Simulation. APEC: the Asia-Pacific Economic Cooperation. iSERVO: international Solid Earth Research Virtual Observatory institute.

A main node of the China National Grid, and the operation and support center of the China National Grid.

China National Grid (CNGrid): 8 nodes, 16 TFLOPS.

Conclusions
- World-class supercomputer
- Various applications
- User-friendly interface
- Grid computing service
- Research on seamless computing
- Computational science alliance

You are welcome to visit our center and do cooperative research. Thank you very much for your attention.