KUAS.EE Parallel Computing at a Glance

KUAS.EE History of Parallel Computing

KUAS.EE What is Parallel Processing? Processing multiple tasks simultaneously on multiple processors is called parallel processing. [Figure: data items D1, D2, D3, ..., Dm are dispatched to processors P1, P2, P3, ..., Pm, whose partial results are combined into a single result R.]
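The idea in the figure can be sketched in a few lines of Python (illustrative only; the slides do not prescribe a language). Each data item is handed to its own worker process, and the partial results are collected back into one result:

```python
# Minimal sketch of parallel processing: the same task is applied to
# several data items (D1..D4) at once, each on a separate worker
# process (P1..P4), and the partial results are combined into R.
from multiprocessing import Pool

def square(x):
    """The task each processor performs on its data item."""
    return x * x

if __name__ == "__main__":
    data = [1, 2, 3, 4]               # D1..D4
    with Pool(processes=4) as pool:   # P1..P4
        results = pool.map(square, data)
    print(results)                    # [1, 4, 9, 16]
```

`Pool.map` plays the role of the combining step: it gathers one partial result per processor, in input order.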

KUAS.EE Why Parallel Processing?
- Computational requirements are ever increasing, in both scientific and business domains ("grand challenge" problems).
- Sequential architectures are reaching their physical limits.
- Hardware improvements such as pipelining and superscalar execution do not scale and require sophisticated compiler technology.
- Vector processing works well only for certain classes of problems.
- The technology of parallel processing is mature.
- Significant developments in networking technology are paving the way for heterogeneous computing.

KUAS.EE Hardware Architectures for Parallel Processing
1. Single instruction, single data (SISD)
2. Single instruction, multiple data (SIMD)
3. Multiple instruction, single data (MISD)
4. Multiple instruction, multiple data (MIMD)

KUAS.EE Single instruction, single data (SISD) Sequential computers: PC, Macintosh, workstation.

KUAS.EE Single instruction, multiple data (SIMD) Vector machines: CRAY, Thinking Machines.
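SIMD can be illustrated in miniature (a conceptual sketch, not how a vector machine is programmed): one instruction is broadcast over a whole vector of data elements in lockstep.

```python
# SIMD in miniature: a single operation ("add 1") is applied uniformly
# to every element of a data vector. A vector machine such as a CRAY
# performs this in hardware as one vector instruction; here map() just
# models the single-instruction / multiple-data idea.
data = [10, 20, 30, 40]
result = list(map(lambda x: x + 1, data))
print(result)  # [11, 21, 31, 41]
```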

KUAS.EE Multiple instruction and single data (MISD)

KUAS.EE Multiple instruction and multiple data (MIMD) Processors work asynchronously.

KUAS.EE Shared Memory MIMD Machine
- Tightly-coupled multiprocessors: Silicon Graphics machines, Sun's SMPs
- Shared memory: a single address space (real vs. virtual addresses)
- Threads
- NUMA vs. UMA
- Message passing vs. shared memory
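The defining property above — asynchronous processors sharing one address space — can be sketched with threads (Python here is only an illustration; real SMPs run native threads):

```python
# Shared-memory MIMD sketch: several threads run asynchronously in a
# single address space, all updating the same shared counter. A lock
# serializes the read-modify-write so no update is lost.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # protect the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```

Without the lock, concurrent increments could interleave and the final count could fall short — the classic hazard of the shared-memory model.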

KUAS.EE Distributed Memory MIMD Machine
- Loosely-coupled multiprocessors: C-DAC's PARAM, IBM's SP/2, Intel's Paragon

KUAS.EE Comparison between Shared Memory MIMD and Distributed Memory MIMD

                               Shared Memory MIMD   Distributed Memory MIMD (MPP)
  Manufacturability            Easy
  Programmability              Easy                 Slightly difficult
  Reliability                  Poor                 Good
  Extensibility/Scalability    Difficult            Easy

KUAS.EE Approaches to Parallel Programming
- Data parallelism (SIMD)
- Process parallelism
- Farmer-and-worker model (master and slaves)
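The farmer-and-worker model can be sketched directly (an illustrative Python version, not from the slides): the farmer puts independent tasks on a queue, and worker threads repeatedly take a task, compute, and report results back.

```python
# Farmer-and-worker sketch: the farmer (master) enqueues tasks; each
# worker (slave) loops, taking a task and producing a result, until it
# sees the None sentinel meaning "no more work".
import queue
import threading

tasks = queue.Queue()
results = queue.Queue()

def worker():
    while True:
        item = tasks.get()
        if item is None:        # sentinel: shut this worker down
            break
        results.put(item * item)

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()
for n in range(5):              # the farmer hands out work
    tasks.put(n)
for _ in workers:
    tasks.put(None)             # one sentinel per worker
for w in workers:
    w.join()
print(sorted(results.queue))    # [0, 1, 4, 9, 16]
```

Because workers pull tasks as they finish, the pattern load-balances automatically when tasks take unequal time.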

KUAS.EE PARAM Supercomputers

KUAS.EE PARAS Operating Environment It is a complete parallel programming environment:
1. OS kernel
2. Host servers
3. Compilers
4. Run-time environment
5. Parallel file system
6. On-line debugger and profiling tool
7. Graphics and visualization support
8. Networking interface
9. Off-line parallel processing tools
10. Program restructurers
11. Libraries
These components span the program development environment, the program run-time environment, and utilities.

KUAS.EE PARAS Programming Model
- PARAS microkernel
- Concurrent threads environment (CORE)
- POSIX threads interface
- Popular message-passing interfaces such as MPI and PVM
- Parallelizing compilers
- Tools and debuggers for parallel programming
- Load balancing and distribution tools

KUAS.EE Levels of Parallelism