Clusters, Grids and their applications in Physics David Barnes (Astro) Lyle Winton (EPP)


Today's typical workstation can:
– Compute 1024-point fast Fourier transforms at a rate of per second.
– Compute and apply gravitational force for particles at a rate of ~one time step per sec.
– Render ~7M elements of a data volume per sec.
– Stream data to and from disk at around 30 MByte per sec.
– Communicate with other machines at up to 10 MByte per sec, with a latency of a few tens of milliseconds.
… what if this is not enough? …
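As a rough illustration of the first bullet, the sketch below times 1024-point complex FFTs on a single workstation. It is not from the original slides: it assumes the FFTW3 library is installed (link with -lfftw3 -lm), and the repetition count is arbitrary.

/* Minimal sketch: measure how many 1024-point complex FFTs this
 * machine can compute per second, assuming FFTW3 is available. */
#include <stdio.h>
#include <time.h>
#include <fftw3.h>

int main(void)
{
    const int n = 1024;
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_plan plan = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    for (int i = 0; i < n; i++) { in[i][0] = i; in[i][1] = 0.0; }

    const int reps = 100000;                 /* arbitrary repetition count */
    clock_t t0 = clock();
    for (int r = 0; r < reps; r++)
        fftw_execute(plan);                  /* one 1024-point transform */
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("%.0f transforms/sec\n", reps / secs);

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}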

High Performance Computers
– ~20 years ago: 1x10^6 Floating Point Ops/sec (Mflop/s) [scalar processors]
– ~10 years ago: 1x10^9 Floating Point Ops/sec (Gflop/s) [vector processors]
– Today: 1x10^12 Floating Point Ops/sec (Tflop/s) [superscalar-based CLUSTERS: highly parallel, distributed, networked superscalar processors]
– Less than 10 years away: 1x10^15 Floating Point Ops/sec (Pflop/s) [multi-level clusters and GRIDs]
[Slide figure: scalar operation a+b=c; vector operation A+B=C; parallel operations a+b=c, d+e=f]

High-Performance Computing Directions: Beowulf-class PC Clusters

Definition:
– Common off-the-shelf PC nodes: Pentium, Alpha, PowerPC, SMP
– COTS LAN/SAN interconnect: Ethernet, Myrinet, Giganet, ATM
– Open Source Unix: Linux, BSD
– Message Passing Computing: MPI, PVM, HPF

Advantages:
– Best price-performance
– Low entry-level cost
– Just-in-place configuration
– Vendor invulnerable
– Scalable
– Rapid technology tracking

Enabled by PC hardware, networks and operating systems achieving the capabilities of scientific workstations at a fraction of the cost, and by the availability of industry-standard message passing libraries. (Slide from Jack Dongarra)

… let's see some clusters …
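To make the "Message Passing Computing – MPI" item concrete, here is a minimal sketch (not from the original slides) of how work is coordinated on a Beowulf cluster: each process computes a partial result and rank 0 collects the total with MPI_Reduce. It assumes an MPI implementation such as MPICH or Open MPI; compile with mpicc and launch with mpirun.

/* Minimal MPI example: each process contributes a partial value,
 * rank 0 gathers the sum. The "partial" value is a stand-in for
 * real per-node work. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

    double partial = (double) rank;         /* stand-in for real work */
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum over %d processes = %g\n", size, total);

    MPI_Finalize();
    return 0;
}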

Mojo: School of Physics cluster
– 24 nodes, 24 CPUs, 2 & 2.4 GHz Pentium 4
– ~70 Gflop/s

Swinburne Centre for Astrophysics and Supercomputing:
– 90 nodes, 180 CPUs, 2.0, 2.2 & 2.4 GHz Pentium 4
– 16 nodes, 32 CPUs, 933 MHz Pentium III
– ~500 Gflop/s
Accomplish > 8 million 1024-pt FFTs per second.
Render > 10^9 volume elements per second.
Calculate at one time step per second for ~ particles using a brute-force approach.

Next-generation blade cluster: IBM Blue Gene
– ~200 Tflop/s

SETI@home
Uses thousands of Internet-connected PCs to help in the search for extraterrestrial intelligence.
Uses data collected with the Arecibo Radio Telescope in Puerto Rico.
When your computer is idle, the software downloads a 300 kilobyte chunk of data for analysis. The results of this analysis are sent back to the SETI team and combined with results from thousands of other participants.
Largest distributed computation project in existence:
– ~400,000 machines
– Averaging 26 Tflop/s
(Slide from Jack Dongarra)
This is more than a cluster … perhaps it is the first genuine Grid …
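The slide above describes a volunteer-computing cycle: fetch a work unit, analyse it while the machine is idle, report the result. The sketch below is not SETI@home's actual client code; it mimics that cycle locally, with stub functions standing in for the network download, the signal analysis, and the upload.

/* Hedged sketch of a volunteer-computing work cycle (NOT the real
 * SETI@home client). fetch_work_unit, analyse and report_result are
 * local stand-ins for download, signal analysis and upload. */
#include <stdio.h>
#include <stdlib.h>

#define WORK_UNIT_BYTES (300 * 1024)   /* ~300 kB chunk, as on the slide */

/* Stand-in for downloading a work unit from the project server. */
static void fetch_work_unit(unsigned char *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = (unsigned char)(rand() & 0xFF);
}

/* Stand-in for the signal analysis; here just a checksum. */
static unsigned long analyse(const unsigned char *buf, size_t n)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];
    return sum;
}

/* Stand-in for uploading the result to the project server. */
static void report_result(int unit, unsigned long result)
{
    printf("work unit %d -> result %lu\n", unit, result);
}

int main(void)
{
    unsigned char *buf = malloc(WORK_UNIT_BYTES);
    if (!buf)
        return 1;

    for (int unit = 0; unit < 3; unit++) {   /* a real client loops while idle */
        fetch_work_unit(buf, WORK_UNIT_BYTES);
        unsigned long result = analyse(buf, WORK_UNIT_BYTES);
        report_result(unit, result);
    }

    free(buf);
    return 0;
}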

over to Lyle Winton…