Supercomputers Daniel Shin CS 147, Section 1 April 29, 2010

What is a Supercomputer  A computer that is at the frontline of current processing capacity, particularly speed of calculation  Today's supercomputers tend to become tomorrow's ordinary computers  Designed to perform, at very high speed, complex calculations that would take a normal computer a year or longer

Uses of Supercomputers Used for highly calculation-intensive tasks  Problems involving quantum physics  Weather forecasting  Climate research  Molecular modeling  Physical simulations

History of Supercomputers  Supercomputers were introduced in the 1960s  They were designed primarily by Seymour Cray at Control Data Corporation (CDC)  Early machines were basically very fast scalar processors Cray-1 supercomputer Cray-2 supercomputer

Measuring Performance FLOPS (FLoating point Operations Per Second) Usually prefixed by an SI unit of magnitude: megaFLOPS, gigaFLOPS, teraFLOPS, petaFLOPS, exaFLOPS Supercomputers are projected to reach 1 exaFLOPS (EFLOPS) in 2019 LINPACK Benchmark Ranks supercomputers by how quickly they solve a dense system of linear equations; based on the LINPACK numerical linear algebra library, written in FORTRAN
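A machine's theoretical peak FLOPS can be estimated from its core count, clock rate, and floating-point operations per cycle. The sketch below illustrates the arithmetic; the node parameters used in the example are hypothetical, chosen only for illustration:

```python
# Back-of-the-envelope theoretical peak FLOPS:
#   peak = sockets * cores_per_socket * clock_hz * flops_per_cycle
# (measured LINPACK performance is always lower than this peak)

def peak_flops(sockets, cores_per_socket, clock_hz, flops_per_cycle):
    """Theoretical peak floating-point operations per second."""
    return sockets * cores_per_socket * clock_hz * flops_per_cycle

# A hypothetical 2-socket, 4-cores-per-socket, 2.5 GHz node doing
# 4 FLOPs per cycle per core:
peak = peak_flops(2, 4, 2.5e9, 4)
print(peak / 1e9, "GFLOPS")  # 80.0 GFLOPS
```

The gap between this peak and the sustained rate a benchmark like LINPACK achieves is one standard measure of how efficiently a system is used.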

Comparisons  3.6GHz Pentium 4: 1 gigaFLOPS (GFLOPS)  1.8GHz Opteron: 3 gigaFLOPS (GFLOPS)  IBM Roadrunner: 1.1 petaFLOPS (PFLOPS)  Cray Jaguar: 1.75 petaFLOPS (PFLOPS)

Supercomputer Challenges  A supercomputer generates large amounts of heat and must be cooled  Information cannot move faster than the speed of light between two parts of a supercomputer; Seymour Cray's supercomputer designs kept cable runs as short as possible for this reason, hence the cylindrical shape of his Cray range of computers  Supercomputers consume and produce massive amounts of data in a very short period of time
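The speed-of-light constraint can be made concrete with a one-line calculation: in a single clock cycle, a signal can travel at most c divided by the clock frequency (real signals in copper wire are slower still). A minimal sketch:

```python
# Upper bound on how far a signal can travel in one clock cycle:
#   distance = c / clock_frequency
# Real signals in copper propagate at roughly half to two-thirds of c,
# so practical cable-length budgets are even tighter.

C = 299_792_458.0  # speed of light in a vacuum, m/s

def max_signal_distance_m(clock_hz):
    """Farthest a signal can possibly travel during one clock cycle."""
    return C / clock_hz

# At the Cray-1's roughly 80 MHz clock, light covers under 4 meters
# per cycle; at 1 GHz it is under 30 cm.
print(round(max_signal_distance_m(80e6), 2), "m")
print(round(max_signal_distance_m(1e9) * 100, 1), "cm")
```

This is why Cray's cylindrical layouts, which minimize the longest wire between any two modules, were not just an aesthetic choice.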

Supercomputer Challenges (cont.) Technologies developed for supercomputers include: Vector processing Liquid cooling Non-Uniform Memory Access (NUMA) Striped disks (the first instance of what was later called RAID) Parallel filesystems

Seymour Roger Cray Born September 28, 1925 in Chippewa Falls, Wisconsin His father was a civil engineer who is said to have fostered Cray's interest in science and engineering Cray's passion for building scientific computers led him to help start Control Data Corporation (CDC) in 1957 Recognized as “the father of supercomputing” “Anyone can build a fast CPU. The trick is to build a fast system.” – Seymour Cray

Cray Computers Cray is said to have frequently cited two important aspects of his design philosophy: remove heat, and ensure that all signals that are supposed to arrive somewhere at the same time do indeed arrive at the same time Cray was also said to have been proud of the cushions that surrounded his cylindrically shaped computers, atop the power supplies Cray Inc. is a supercomputer manufacturer based in Seattle, Washington; its predecessor, Cray Research, was founded in 1972 by computer designer Seymour Cray

Cray Computers (cont.) CDC 1604  The CDC 1604 was a 48-bit computer designed and manufactured by Seymour Cray and his team at the Control Data Corporation The 1604 is known as one of the first commercially successful transistorized computers Cray-1 The Cray-1 was a supercomputer designed, manufactured, and marketed by Cray Research One of the best known and most successful supercomputers in history The first Cray-1 system was installed at Los Alamos National Laboratory in 1976 Cray-1 with internals exposed

Cray Computers (cont.) Cray Jaguar In November 2009, the AMD Opteron-based Cray XT5 Jaguar at the Oak Ridge National Laboratory was announced as the fastest operational supercomputer Cray Jaguar performed at a sustained processing rate of 1.75 petaFLOPS, beating the IBM Roadrunner for the number one spot on the TOP500 list Future Development Cray, Inc. announced in December 2009 a plan to build a 1 exaFLOPS (EFLOPS) supercomputer by the end of the 2010s Jaguar XT5

Flynn's Taxonomy, Computer Architectures Single instruction single data stream (SISD) Single instruction multiple data streams (SIMD) Multiple instruction single data stream (MISD) Multiple instruction multiple data streams (MIMD)

Flynn's Taxonomy, Computer Architectures (cont.)

Single instruction single data stream (SISD) A sequential computer which exploits no parallelism in either the instruction or data streams Examples of SISD architecture: traditional uniprocessor machines like a PC or old mainframes

Single instruction multiple data streams (SIMD) A computer which exploits multiple data streams against a single instruction stream to perform operations which may be naturally parallelized Examples of SIMD architecture: an array processor or GPU
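The SIMD idea can be sketched in a few lines: a single instruction (here an elementwise multiply-add) is applied in lockstep to every element of the data streams. This is only a conceptual model; real SIMD hardware performs the whole operation in a single vector instruction rather than a Python loop:

```python
# Conceptual SIMD: one instruction stream, multiple data streams.
# Each "lane" i receives the same operation applied to its own data:
#   r[i] = a[i] * b[i] + c[i]

def simd_fma(a, b, c):
    """Apply one fused multiply-add instruction across all data lanes."""
    return [x * y + z for x, y, z in zip(a, b, c)]

print(simd_fma([1, 2, 3], [4, 5, 6], [7, 8, 9]))  # [11, 18, 27]
```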

Multiple instruction single data stream (MISD) Multiple instructions operate on a single data stream Uncommon architecture which is generally used for fault tolerance Example of MISD architecture: the Space Shuttle flight control computer

Multiple instruction multiple data streams (MIMD) Multiple autonomous processors simultaneously executing different instructions on different data Distributed systems are generally recognized to be MIMD architectures
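The MIMD idea can be modeled with two workers running different programs on different data at the same time. A thread pool is only a rough stand-in for real MIMD hardware, where each processor is fully autonomous, but it captures the "different instructions, different data" structure:

```python
# Conceptual MIMD: autonomous workers execute *different* instruction
# streams on *different* data simultaneously.

from concurrent.futures import ThreadPoolExecutor

def sum_squares(data):
    """Worker program 1: sum of squares."""
    return sum(x * x for x in data)

def count_evens(data):
    """Worker program 2: count of even elements."""
    return sum(1 for x in data if x % 2 == 0)

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(sum_squares, [1, 2, 3])     # one program, one dataset
    f2 = pool.submit(count_evens, [4, 5, 6, 7])  # a different program and data
    print(f1.result(), f2.result())  # 14 2
```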