Cross Council ICT Conference, May 2004. High Performance Computing. Ron Perrott, Chairman, High End Computing Strategy Committee, Queen's University Belfast.

Presentation transcript:

Cross Council ICT Conference, May 2004. High Performance Computing. Ron Perrott, Chairman, High End Computing Strategy Committee, Queen's University Belfast

What is a high performance computer?
A high performance computer is a hardware and software system that provides close to the maximum performance that can currently be achieved.
=> parallelism => state-of-the-art technology => pushing the limits
Why do we need them?
Computational fluid dynamics, protein folding, climate modelling, national security (in particular cryptanalysis and simulation), etc. The economy, security, health and well-being of the country.
=> Scientific discovery => Social impact => Commercial potential

HPC – UK
Important to research in many scientific disciplines
Increasing breadth of science involved
Strong UK involvement in international HPC activities
Contributions to and benefits for UK industry

UK projects
Atomic, Molecular & Optical Physics; Computational Biology; Computational Radiation Biology and Therapy; Computational Chemistry; Computational Engineering - Fluid Dynamics; Environmental Modelling; Cosmology; Particle Physics; Fusion & Plasma Microturbulence; Accelerator Modelling; Nanoscience; Disaster Simulation
=> computation has become as important as theory and experiment in the conduct of research

Whole systems
Electronic Structure - from atoms to matter
Computational Biology - from molecules to cells and beyond
Fluid Dynamics - from eddies to aircraft
Environmental Modelling - from oceans to the earth
From the earth to the solar system? … and on to the Universe

Technology Trends: Microprocessor Capacity
2X transistors per chip every 1.5 years - called "Moore's Law" (Gordon Moore, co-founder of Intel, 1965: the number of devices per chip doubles every 18 months).
Microprocessors have become smaller, denser, and more powerful. Not just processors: bandwidth, storage, etc. 2X memory and processor speed, and half the size, cost and power, every 18 months.
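A minimal sketch of what the doubling rule implies: compounding 2x every 1.5 years works out to roughly two orders of magnitude per decade (plain Python; the function and parameter names are illustrative).

```python
# Back-of-envelope check of the doubling rule quoted on the slide:
# 2x every 1.5 years compounds to roughly 100x per decade.

def moores_law_factor(years, doubling_period_years=1.5):
    """Growth factor after `years` if capacity doubles every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    for span in (1.5, 5, 10, 15):
        print(f"after {span:>4} years: x{moores_law_factor(span):,.0f}")
    # after 10 years the factor is about 100 (2^(10/1.5) is roughly 102)
```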

The TOP500 list (J. Dongarra)
- Listing of the 500 most powerful computers in the world
- Yardstick: LINPACK - solve Ax = b for a dense problem
- Updated twice a year: at SC'xy in the States in November, and at the meeting in Mannheim, Germany in June
- All data available from www.top500.org
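A minimal sketch of the yardstick, using NumPy's dense solver as a stand-in for the real HPL benchmark code: time a dense solve of Ax = b and convert the time to Gflop/s with the conventional (2/3)n^3 + 2n^2 operation count; the problem size n is an arbitrary choice for illustration.

```python
# Illustrative LINPACK-style measurement: time a dense solve of Ax = b and
# convert to Gflop/s using the (2/3)n^3 + 2n^2 operation count.
import time
import numpy as np

def linpack_gflops(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorisation + triangular solves
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9, np.max(np.abs(A @ x - b))

if __name__ == "__main__":
    gflops, residual = linpack_gflops()
    print(f"~{gflops:.1f} Gflop/s, residual {residual:.1e}")
```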

[Chart: Moore's Law growth in floating point performance, from 1,000 Flop/s (1 KFlop/s, 10^3) through 1 MFlop/s (10^6), 1 GFlop/s (10^9) and 1 TFlop/s (10^12) towards 10^15, across the scalar, super scalar, vector, parallel and super scalar/vector/parallel eras; the most recent point shown is 35 TFlop/s.]

[Chart: TOP500 – Performance, November 2003, on a log scale from 10^9 to 10^15 Flop/s, with a laptop marked for comparison.]

Earth Simulator - homogeneous, centralized, proprietary, expensive!
Target application: CFD - weather, climate, earthquakes
640 NEC SX/6 nodes (modified) - 5120 CPUs with vector operations, each CPU 8 Gflop/s peak
40 TFlop/s peak
~1/2 billion £ for machine, software and building
Footprint of 4 tennis courts
7 MWatts - say 10 cents/kWh: $16.8K/day = $6M/year!
Expected to stay on top of the Top500 until the next large ASCI machine arrives.
(From the Top500, November 2003)
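The peak and running-cost figures follow directly from the numbers quoted above; a quick arithmetic check:

```python
# Re-doing the slide's arithmetic for the Earth Simulator.

# Peak performance: 640 nodes x 8 CPUs per node, each CPU 8 Gflop/s peak.
cpus = 640 * 8                       # 5120 CPUs
peak_tflops = cpus * 8 / 1000        # ~41 Tflop/s (the slide rounds to 40)

# Running cost: 7 MW at 10 cents per kWh.
kwh_per_day = 7 * 1000 * 24          # 168,000 kWh per day
cost_per_day = kwh_per_day * 0.10    # $16,800 per day
cost_per_year = cost_per_day * 365   # ~$6.1M per year

print(f"peak ~{peak_tflops:.1f} Tflop/s, "
      f"${cost_per_day:,.0f}/day, ${cost_per_year/1e6:.1f}M/year")
```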

HPC Trends
Over the last 10 years the performance range of the Top500 has increased faster than Moore's Law.
1993: #1 = 59.7 GFlop/s, #500 = 422 MFlop/s
2003: #1 = 35.8 TFlop/s, #500 = 403 GFlop/s
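A minimal sketch that quantifies the comparison from the four figures above, working out the ten-year growth factors and the implied doubling times against the 1.5-year Moore's Law rate:

```python
# The slide's claim that the TOP500 has grown faster than Moore's Law,
# checked from its own 1993 and 2003 figures (values in Gflop/s).
import math

rank1_1993, rank1_2003 = 59.7, 35_800.0       # 59.7 Gflop/s -> 35.8 Tflop/s
rank500_1993, rank500_2003 = 0.422, 403.0     # 422 Mflop/s  -> 403 Gflop/s
years = 10

for label, start, end in [("#1", rank1_1993, rank1_2003),
                          ("#500", rank500_1993, rank500_2003)]:
    factor = end / start
    doubling_time = years / math.log2(factor)
    print(f"{label}: x{factor:,.0f} in {years} years "
          f"(doubling every {doubling_time:.2f} years vs 1.5 for Moore's Law)")
```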

TOP500, November 2003 - the top 10 systems (Rmax in Tflop/s):
1. NEC Earth-Simulator, 35.8 - Earth Simulator Center, Yokohama
2. Hewlett-Packard ASCI Q, AlphaServer SC ES45/1.25 GHz, 13.9 - Los Alamos National Laboratory
3. Self-made, Apple G5 PowerPC w/InfiniBand 4X, 10.3 - Virginia Tech, Blacksburg, VA
4. Dell PowerEdge 1750, P4 Xeon 3.06 GHz w/Myrinet, 9.82 - University of Illinois, Urbana-Champaign
5. Hewlett-Packard rx2600, Itanium2 1.5 GHz cluster w/Quadrics, 8.63 - Pacific Northwest National Laboratory, Richland
6. Linux NetworX, Opteron 2 GHz w/Myrinet, 8.05 - Lawrence Livermore National Laboratory
7. Linux NetworX, MCR Linux Cluster, Xeon 2.4 GHz w/Quadrics, 7.63 - Lawrence Livermore National Laboratory
8. IBM ASCI White, SP Power3 375 MHz, 7.30 - Lawrence Livermore National Laboratory
9. IBM SP Power3 375 MHz 16-way, 7.30 - NERSC/LBNL, Berkeley
10. IBM xSeries Cluster, Xeon 2.4 GHz w/Quadrics, 6.59 - Lawrence Livermore National Laboratory
…% of TOP500 performance is in the top 9 machines; 131 systems > 1 TFlop/s; 210 machines are clusters.

[Chart: Performance extrapolation of the TOP500 trend, indicating when 1 TFlop/s will be needed to enter the list and when a PFlop/s computer is expected; annotated with Blue Gene (130,000 proc) and ASCI P (12,544 proc).]
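In the same spirit as the extrapolation chart, a rough projection forward from the November 2003 #1 figure; the roughly 1.1-year doubling time is an assumption taken from the 1993-2003 trend computed above, so the resulting date is only indicative.

```python
# Rough extrapolation: starting from the November 2003 #1 (35.8 Tflop/s)
# and assuming performance keeps doubling about every 1.1 years,
# when would the #1 system reach 1 Pflop/s?
import math

current_tflops = 35.8
target_tflops = 1_000.0          # 1 Pflop/s
doubling_years = 1.1             # assumption, from the 1993-2003 trend

years_needed = doubling_years * math.log2(target_tflops / current_tflops)
print(f"~{years_needed:.1f} years after Nov 2003, i.e. around {2003 + years_needed:.0f}")
```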

Taxonomy
Capability computing: special purpose processors and interconnect; high bandwidth, low latency communication; designed for scientific computing; relatively few machines will be sold; high price.
Cluster computing: commodity processors and switch; processor design point is web servers and home PCs; leverages millions of processors; price point appears attractive for scientific computing.

UK Facilities
Main centres in Manchester, Edinburgh and Daresbury; smaller centres around the UK.

HPCx - Edinburgh and CCLRC
IBM 1280-processor POWER4
Currently 3.5 Tflop/s; rising to 6.0 Tflop/s in July, and up to 12.0 Tflop/s by October 2006.

CSAR - University of Manchester / Computer Sciences Corporation
256 Itanium2 processor SGI Altix (Newton), peak performance of 5.2 Gflop/s per processor - to June 2006
512 processor Origin3800 (Green) - to June 2006

HECToR - High End Computing Terascale Resource
Scientific case; business case.
Peak performance of 50 to 100 Tflop/s by 2006, doubling to 100 to 200 Tflop/s after 2 years, and doubling again to 200 to 400 Tflop/s 2 years after that.
For comparison, Oak Ridge National Laboratory plans 100 Tflop/s, rising further in 2007.

UK-US TeraGrid HPC-Grid experiment
TeraGyroid: Lattice-Boltzmann simulations of defect dynamics in amphiphilic liquid crystals

TeraGyroid - Project Partners
TeraGrid sites: ANL (visualization, networking); NCSA (compute); PSC (compute, visualization); SDSC (compute)
RealityGrid partners: University College London (compute, visualization, networking); University of Manchester (compute, visualization, networking); Edinburgh Parallel Computing Centre (compute); Tufts University (compute)
UK High-End Computing Services: HPCx - University of Edinburgh and CCLRC Daresbury Laboratory (compute, networking, coordination); CSAR - Manchester and CSC (compute and visualization)

TeraGyroid - Results
Linking these resources allowed computation of the largest set of lattice-Boltzmann (LB) simulations ever performed, involving lattices of over one billion sites.
Won the SC03 HPC Challenge award for "Most Innovative Data-Intensive Application".
Demonstrated extensive use of the US-UK infrastructure.
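A back-of-envelope estimate of what a billion-site lattice means in memory terms, assuming a D3Q19 velocity set, double precision and a two-copy update scheme (all assumptions made here for illustration; the slide does not state which LB formulation was used):

```python
# Back-of-envelope memory estimate for a billion-site lattice-Boltzmann run.
# Assumptions (not stated on the slide): D3Q19 model storing 19 double-
# precision distribution values per site, with two copies of the lattice
# (the usual "ping-pong" update scheme).
sites = 1_000_000_000
velocities = 19            # D3Q19 -- assumed
bytes_per_value = 8        # double precision -- assumed
copies = 2                 # read/write lattices -- assumed

total_bytes = sites * velocities * bytes_per_value * copies
print(f"~{total_bytes / 1e9:.0f} GB of lattice data "
      f"({total_bytes / 2**30:.0f} GiB), before halos and other fields")
```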