Overview of High Performance Computing at KFUPM Khawar Saeed Khan ITC, KFUPM

Agenda ► KFUPM HPC Cluster Details ► Brief Look at the RHEL and Windows HPC Server 2008 Environments ► Dual-Boot Configuration ► Job Scheduling ► Currently Available and Upcoming Software ► Expectations from Users

Why Cluster Computing and Supercomputing? ► Some Problems Are Larger Than a Single Computer Can Handle ► Memory Space (>> 4-8 GB) ► Computation Cost ► More Iterations and Larger Data Sets ► Data Sources (Sensor Processing) ► National Pride ► Technology Migrates to Consumers

How Fast Are Supercomputers? ► The Top Machines Can Perform Tens of Trillions of Floating-Point Operations per Second (Tens of TeraFLOPS) ► They Can Store Trillions of Data Items in RAM! ► Example: a 1 km grid over the USA ► 4000 x 2000 x 100 = 800 million grid points ► If each point has 10 values and each value takes 10 ops to compute => 80 billion ops per iteration ► With 1-hour timesteps for 10 years, that is roughly 87,600 iterations ► More than 7 peta-ops in total!
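The back-of-the-envelope calculation above can be reproduced with a few lines of code. The C sketch below simply multiplies out the figures quoted on this slide (grid size, values per point, ops per value, hourly timesteps for 10 years); it is an illustration of the arithmetic, not a performance model.

#include <stdio.h>

int main(void) {
    /* Figures quoted on the slide: 1 km grid over the USA */
    double grid_points      = 4000.0 * 2000.0 * 100.0;   /* ~800 million points */
    double values_per_point = 10.0;
    double ops_per_value    = 10.0;
    double ops_per_iter     = grid_points * values_per_point * ops_per_value;

    /* 1-hour timesteps for 10 years */
    double iterations = 10.0 * 365.0 * 24.0;              /* ~87,600 iterations */
    double total_ops  = ops_per_iter * iterations;

    printf("Ops per iteration: %.2e\n", ops_per_iter);    /* ~8.0e10 */
    printf("Iterations:        %.0f\n", iterations);
    printf("Total ops:         %.2e\n", total_ops);       /* ~7.0e15 */
    return 0;
}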

Lies, Damn Lies, and Statistics ► Manufacturers Claim Ideal Peak Performance ► e.g., 2 FP ops/cycle x 3 GHz => 6 GFLOPS ► Dependences Mean We Won't Get That Much! ► How Do We Know Real Performance? ► Top500.org Uses High-Performance LINPACK (HPL) ► Solves a Dense System of Linear Equations ► Lots of Communication and Parallelism ► Not Necessarily Reflective of Target Applications

HPC in Academic Institutions ► HPC cluster resources are no longer a research topic but a core part of the research infrastructure ► Researchers use HPC clusters and depend on them ► Increased competitiveness ► Faster time to research results ► Prestige, to attract talent and grants ► Cost-effective infrastructure spending

Top Universities Using HPC Clusters ► National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, United States ► Texas Advanced Computing Center, University of Texas at Austin, United States ► National Institute for Computational Sciences, University of Tennessee, United States ► Information Technology Center, The University of Tokyo, Japan ► Stony Brook University / BNL New York Center for Computational Sciences, United States ► GSIC Center, Tokyo Institute of Technology, Japan ► University of Southampton, UK ► University of Cambridge, UK ► Oklahoma State University, US

Top Research Institutes Using HPC Clusters ► DOE/NNSA/LANL, United States ► Oak Ridge National Laboratory, United States ► NASA/Ames Research Center/NAS, United States ► Argonne National Laboratory, United States ► NERSC/LBNL, United States ► NNSA/Sandia National Laboratories, United States ► Shanghai Supercomputer Center, China

KFUPM HPC Environment

KFUPM HPC Timeline ► Planning and survey started in early 2008 ► Procured in October 2008 ► Cluster installation and testing during November-January ► Applications such as Gaussian with Linda, DL_POLY, and ANSYS were tested on the cluster ► Test problems were provided by professors from the Chemistry, Physics, and Mechanical Engineering departments ► More applications will be installed on the cluster shortly, e.g., GAMESS-UK

KFUPM Cluster Hardware ► IBM Cluster 1350: 128 compute nodes, 1024 cores ► Master Nodes: 3x quad-core Xeon E5405, 8 GB RAM, 2x 500 GB HD (mirrored) ► Compute Nodes: 128 nodes (IBM x3550, rack-mounted); each node has two quad-core Xeon E5405 processors (2 GHz) and 8 GB RAM; 64 TB total local storage ► Interconnect: 10 Gb Ethernet; uplink: 1000BASE-T Gigabit Ethernet ► Operating Systems for Compute Nodes (Dual Boot): Windows HPC Server 2008 and Red Hat Enterprise Linux 5.2
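Given the node count and clock speed above, a theoretical peak figure can be estimated the same way manufacturers do (see the earlier slide on ideal performance). The C sketch below assumes 4 double-precision floating-point operations per core per cycle for this processor generation; that per-cycle figure is an assumption, not a number taken from the slides, and sustained LINPACK performance will be noticeably lower.

#include <stdio.h>

int main(void) {
    /* Cluster figures from the slide */
    int    nodes           = 128;
    int    cores_per_node  = 8;     /* two quad-core Xeon E5405 sockets per node */
    double clock_ghz       = 2.0;

    /* Assumed: 4 double-precision FLOPs per core per cycle (not stated on the slide) */
    double flops_per_cycle = 4.0;

    double peak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle;
    printf("Theoretical peak: %.0f GFLOPS (about %.1f TFLOPS)\n",
           peak_gflops, peak_gflops / 1000.0);
    return 0;
}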

Dual-Boot Clusters ► Choosing the right operating system for an HPC cluster can be a very difficult decision. ► This choice usually has a big impact on the total cost of ownership (TCO) of the cluster. ► Factors such as diverse user needs, application environment requirements, and security policies add to the human factors involved in training, maintenance, and support planning, all of which affect the final return on investment (ROI) of the whole HPC infrastructure. ► Dual-boot HPC clusters provide two environments (Linux and Windows in our case) for the price of one.

Key Takeaways ► Mixed clusters provide a low barrier to leveraging HPC-related hardware, software, storage, and other infrastructure investments: "optimize the flexibility of the infrastructure" ► Maximize utilization of the compute infrastructure by expanding the pool of users accessing the HPC cluster resources: "ease of use and familiarity breeds usage"

Possibilities with HPC ► Computational Fluid Dynamics ► Simulation and Modeling ► Seismic Tomography ► Nano Sciences ► Visualization ► Weather Forecasting ► Protein / Compound Synthesis

Available Software ► Gaussian with Linda ► ANSYS ► FLUENT ► Distributed MATLAB ► Mathematica ► DL_POLY ► MPICH ► Microsoft MPI SDK ► The following software will also be made available in the near future: Eclipse, GAMESS-UK, GAMESS-US, VASP, and NWChem
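Since MPICH and the Microsoft MPI SDK appear in the list above, the minimal MPI program below gives a feel for what a parallel job on the cluster looks like. It is a generic sketch rather than KFUPM-specific code, and the build/run commands in the comment assume a standard MPICH installation on the Linux side.

/* Minimal MPI example (generic sketch). With MPICH, for example:
     mpicc hello_mpi.c -o hello_mpi
     mpiexec -n 8 ./hello_mpi                                      */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process      */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}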

Initial Results of Beta Testing ► A few applications, such as Gaussian, have been beta tested, and considerable speed-ups in computation time have been reported ► MPI test runs on the cluster showed considerable speed-up compared with serial runs on a single server
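To put "speed-up" into perspective, Amdahl's law gives an upper bound on the gain from parallelizing a code across more cores. The sketch below is purely illustrative: the 5% serial fraction is an assumed example value, not a measurement from the KFUPM beta tests.

#include <stdio.h>

/* Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N),
   where s is the serial (non-parallelizable) fraction of the run time. */
static double amdahl_speedup(double serial_fraction, int processors) {
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors);
}

int main(void) {
    double s = 0.05;                    /* assumed 5% serial fraction (example only) */
    int cores[] = {8, 64, 256, 1024};   /* up to the full 1024-core cluster          */

    for (int i = 0; i < 4; i++)
        printf("%4d cores -> speedup %.1fx\n", cores[i], amdahl_speedup(s, cores[i]));
    return 0;
}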

KFUPM Cluster: Several Firsts ► Dual-Boot Cluster ► Supports Red Hat Enterprise Linux 5.2 and Windows HPC Server 2008 ► Capability to Support a Variety of Applications ► Parallel Programming Support ► Advanced Job Scheduling Options

Expectations ► Own the system ► Respect other users' jobs ► Assist the ITC HPC team by finding and sending complete installation, software procurement, and licensing requirements ► Help other users by sharing your experience ► Use vBulletin at