

BY MANISHA JOSHI

 Extremely fast computers designed for data processing.  Speed is measured in FLOPS (floating-point operations per second).  Used for highly calculation-intensive tasks.  The basis of "high performance computing".  Aim at capability computing rather than capacity computing.

Evolution of supercomputers *Supercomputers were introduced in the 1960s. *The supercomputers of the 1970s (e.g. the Cray-1)  used only a few processors;  the Cray-1's speed was 80 MFLOPS. *In the 1980s, the Cray-2 was an  8-processor supercomputer  with a speed of 1.9 GFLOPS. *In the 1990s, machines with thousands of processors began to appear. *By the end of the 20th century, supercomputers had tens of thousands of processors.

 Built by Cray Inc. at Oak Ridge National Laboratory.  Titan is an upgraded version of "Jaguar".  Titan has more than 700 terabytes of memory.  Titan is measured at 17.59 PFLOPS.  Took the No. 1 slot in the TOP500 list of supercomputers.  Theoretical peak of 20,000 trillion calculations each second, or 20 PFLOPS.

More about Titan 1. Combines AMD Opteron CPUs with Nvidia Tesla GPUs. 2. Titan is ten times more powerful than a CPU-only system. 3. It uses microchips more usually found in video gaming. 4. It has 18,688 nodes, each with:  a 16-core AMD Opteron 6274 CPU with 32 GB of memory, and  an Nvidia Tesla K20X GPU with 6 GB of memory.

K20X GPU (Graphics Processing Unit)

GPU Description 1. Specifically designed to process 3D graphics. 2. Best suited for "parallelisable jobs". 3. Higher throughput. 4. Accelerates scientific and technical computing. 5. GPUs are used in embedded systems, mobile phones, personal computers, and video game consoles. 6. Used to render real-world phenomena such as explosions, smoke, and fire.

 System name : Titan.  Built by : Cray Inc.  Laboratory : Oak Ridge National Laboratory.  System family : Cray XK7.  Operating system : Linux.  Processors : AMD Opteron CPUs & Nvidia Tesla GPUs.  Released : January 2013.

1. High speed: 17.59 PFLOPS. 2. Greater performance. 3. Transfer speed of 1 TB/s. 4. Theoretical peak performance of 20 petaFLOPS. 5. Titan draws 8.2 MW while being about ten times as fast as Jaguar in floating-point calculations.