HPC at IISER Pune. Neet Deo, System Administrator

About IISER: The Indian Institute of Science Education and Research (IISER) Pune is a premier Indian institute dedicated to research and teaching in the basic sciences. Established in 2006 as a unique initiative in science education in India, IISER aims to be a science university of the highest caliber, devoted to both teaching and research in a fully integrated manner, with state-of-the-art research and high-quality education that nurture both curiosity and creativity.

IISER: The computing facility and IT infrastructure at IISER Pune cater to all faculty, students, and staff. The institute runs a good number of server- and workstation-class machines for various system administration services, in addition to the large number of PCs and workstations that serve personal computing requirements.

High Performance Cluster: IISER Pune operates a 6-teraflop high-performance computing cluster used primarily by faculty and research students. The system has 512 cores spread over 64 nodes, with 18 GB of RAM per node and an InfiniBand interconnect. In addition, there are 3 mini-clusters with core counts ranging from 60 to 100.
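
As a quick sanity check on these figures, the per-node and per-core numbers can be derived directly. This is a minimal back-of-the-envelope sketch; all input values are taken from the slides above and below.

```python
# Derived per-node and per-core figures for the 64-node, 512-core cluster
# with 18 GB of RAM per node quoted above. Purely illustrative arithmetic.
nodes = 64
total_cores = 512
ram_per_node_gb = 18

cores_per_node = total_cores // nodes                # 8 cores per node
ram_per_core_gb = ram_per_node_gb / cores_per_node   # 2.25 GB per core
total_ram_gb = nodes * ram_per_node_gb               # 1152 GB aggregate

print(f"{cores_per_node} cores/node, {ram_per_core_gb:.2f} GB RAM/core, "
      f"{total_ram_gb} GB total RAM")
```

This works out to 8 cores per node and roughly 2.25 GB of memory per core, which is the baseline that the per-core memory question in the expansion plans later revisits.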

HPC – details (a rough peak-performance check follows below)
- Nodes: 64, each with 2 x quad-core Intel(R) Xeon(R) CPUs @ 2.93 GHz and 18 GB RAM
- OS: RHEL 5.5
- Storage: 10 TB raw, RAID 5, GlusterFS served over NFS
- CMS (cluster management): ROCKS
- Scheduler: PBS
- Interconnect: InfiniBand DDR
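
The quoted 6-teraflop figure is consistent with these specs. Below is a rough peak-performance check; the 4 flops/cycle/core value is an assumption (typical for double precision on Xeons of that era) and is not stated in the slides.

```python
# Theoretical peak for 64 nodes x 2 sockets x 4 cores at 2.93 GHz.
# flops_per_cycle = 4 is an ASSUMPTION (double precision, Xeons of that
# generation); it does not appear anywhere in the original slides.
nodes = 64
sockets_per_node = 2
cores_per_socket = 4
clock_ghz = 2.93
flops_per_cycle = 4  # assumed

cores = nodes * sockets_per_node * cores_per_socket       # 512 cores
peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0

print(f"{cores} cores, ~{peak_tflops:.1f} TFLOPS theoretical peak")  # ~6.0
```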

HPC – challenges
- Storage: initially ext4 over NFS; GlusterFS was installed later
- The number of users has grown from the initial 30
- High memory requirements
- GPU users (currently 2)
- Raw power requirements
- Data backup
- Limited manpower to manage the infrastructure

Expansion Plans: IISER Pune plans to expand its HPC facility to 4000 cores; the plan is still under discussion. We are in the process of developing specifications that both satisfy the computing needs and address the problems faced in the current setup. Major design aspects under discussion (a rough sizing sketch follows below):
- Memory: 4 GB or 8 GB per core
- Parallel file system
- Accelerators: GPU, MIC, or both
- Interconnect speed
- System sizing: fat-memory nodes, SMP node
- Aggregating current resources
- Open-source vs. commercial software
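
To illustrate how the per-core memory choice plays out at 4000 cores, here is a minimal sizing sketch. The 4000-core target and the 4 GB / 8 GB options come from the slide; the cores-per-node value is a purely hypothetical assumption used for illustration.

```python
# Rough sizing for the proposed ~4000-core expansion.
# target_cores and the 4/8 GB-per-core options come from the slide;
# cores_per_node = 16 is a HYPOTHETICAL value used only for illustration.
target_cores = 4000
cores_per_node = 16  # assumed

nodes = -(-target_cores // cores_per_node)  # ceiling division -> 250 nodes
for gb_per_core in (4, 8):
    ram_per_node_gb = cores_per_node * gb_per_core
    total_ram_tb = target_cores * gb_per_core / 1024
    print(f"{gb_per_core} GB/core: {ram_per_node_gb} GB/node, "
          f"~{total_ram_tb:.1f} TB aggregate RAM across ~{nodes} nodes")
```

Doubling the per-core memory roughly doubles the aggregate RAM budget (about 16 TB vs. 31 TB in this sketch), which in turn feeds into the fat-memory-node and SMP-node questions listed above.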

Thank you!! Questions?