PARALLEL COMPUTING.


Abstract Parallelism has become a standard technique in the design of high-performance computers. Despite the impressive progress achieved in the design of sequential von Neumann machines, their computing power is limited in the light of certain applications. Parallel computing emerged as an alternative and viable medium for the solution of many important problems. Many conventional machines such as PCs and workstations contain some degree of parallelism. Such a tendency represents a departure from the sequential model of computation, yet parallel computing itself has not been an unqualified success. The difficulty lies in the gap between the view needed to use a particular machine effectively and the view needed to develop parallel software successfully.

Introduction Traditionally, software has been written for serial computation: it is executed by a single computer having a single Central Processing Unit (CPU); a problem is solved by a series of instructions, executed one after the other by the CPU; and only one instruction may execute at any moment in time.

1. Parallel computing is the simultaneous use of multiple compute resources to solve a computational problem.
2. The compute resources are: a single computer with multiple processors; an arbitrary number of computers connected by a network; or a combination of both.
3. The computational problem usually demonstrates characteristics such as the ability to be: broken apart into discrete pieces of work that can be solved simultaneously; executed as multiple program instructions at any moment in time; and solved in less time with multiple compute resources than with a single compute resource.
4. Parallel computing is an evolution of serial computing that attempts to emulate what has always been the state of affairs in the natural world: many complex, interrelated events happening at the same time.
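The "broken apart into discrete pieces of work that can be solved simultaneously" characteristic can be sketched with Python's standard-library multiprocessing module. The chunking scheme and function names below are illustrative choices, not part of the slides.

```python
# A minimal sketch: summing a range by splitting it into discrete pieces
# and handing each piece to a separate worker process.
from multiprocessing import Pool


def partial_sum(bounds):
    """Sum the half-open range [lo, hi) -- one discrete piece of work."""
    lo, hi = bounds
    return sum(range(lo, hi))


def parallel_sum(n, workers=4):
    """Split [0, n) into `workers` chunks and sum the chunks concurrently."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))


if __name__ == "__main__":
    # Same answer as the serial computation, obtained from several processes.
    assert parallel_sum(1000) == sum(range(1000))
```

Each chunk is independent, so the workers never need to coordinate; this is the simplest case the slide's third point describes.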

USES
Save time
Solve larger problems
Take advantage of non-local resources
Achieve cost savings
Overcome memory constraints
Work around the limits of serial computing: transmission speeds, limits to miniaturization, and economic limitations

ISSUES
Distributed systems raise issues of:
Concurrent access
Reliability
Transparency
Scalability

Concepts and Terminology 1) Von Neumann Architecture

Basic design: Memory is used to store both program instructions and data. Program instructions are coded data which tell the computer to do something. Data is simply information to be used by the program. A central processing unit (CPU) gets instructions and/or data from memory, decodes the instructions, and then performs them sequentially.
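The fetch-decode-execute cycle described above can be illustrated with a toy interpreter in which one memory holds both instructions and data. The tiny instruction set (LOAD/ADD/STORE/HALT) is invented for this sketch.

```python
# A toy von Neumann machine: one memory for program and data, one CPU that
# fetches, decodes, and executes strictly one instruction at a time.
def run(memory):
    """memory: list holding instruction tuples and data ints together."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        instr = memory[pc]   # fetch the next instruction from memory
        op, arg = instr      # decode it
        if op == "LOAD":     # execute it...
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory
        pc += 1              # ...then move on to the next, sequentially


# Program and data share one memory: cells 0-3 are code, cells 4-6 are data.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", None), 7, 35, 0]
run(mem)
assert mem[6] == 42  # 7 + 35, computed one instruction at a time
```

Because instructions execute one after another, this model is inherently serial; parallel architectures depart from it by running many such cycles at once.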

2) Flynn's Classical Taxonomy Flynn's taxonomy distinguishes multiprocessor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple. Crossing the two dimensions gives the four possible classifications: SISD (Single Instruction, Single Data), SIMD (Single Instruction, Multiple Data), MISD (Multiple Instruction, Single Data), and MIMD (Multiple Instruction, Multiple Data).

3) Parallel Computer Memory Architectures (a) Shared Memory

(b) Distributed Memory

(c) Hybrid Distributed-Shared Memory

4) Parallel Programming Models (a) Shared Memory Model

(b) Threads Model
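The slide's diagram is not reproduced in this transcript. As a minimal sketch of the threads model: several threads share one address space and coordinate their updates to shared data with a lock. The thread count and iteration counts below are arbitrary illustrative choices.

```python
# Threads model: multiple threads of one process share memory directly,
# so concurrent updates to shared data must be synchronized.
import threading

counter = 0  # shared variable, visible to every thread
lock = threading.Lock()


def worker(increments):
    global counter
    for _ in range(increments):
        with lock:           # protect the shared variable from data races
            counter += 1


threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000  # every update survives because of the lock
```

Without the lock, interleaved read-modify-write sequences could lose updates; synchronization is the central cost of the shared-address-space approach.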

(c) Message Passing Model
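As with the previous slide, the diagram is not reproduced here. A minimal sketch of the message-passing model: tasks keep private memory and exchange data only by sending and receiving messages, here via standard-library multiprocessing queues. The worker's protocol (numbers in, squares out, `None` as a stop sentinel) is invented for the illustration.

```python
# Message-passing model: processes have separate address spaces and
# communicate only through explicit send/receive operations (queues).
from multiprocessing import Process, Queue


def squarer(inbox, outbox):
    """Worker with its own memory: receives numbers, sends back squares."""
    while True:
        n = inbox.get()       # receive a message
        if n is None:         # sentinel: no more work
            break
        outbox.put(n * n)     # send the result back as a message


if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=squarer, args=(inbox, outbox))
    p.start()
    for n in [1, 2, 3]:
        inbox.put(n)
    inbox.put(None)           # tell the worker to stop
    results = [outbox.get() for _ in range(3)]
    p.join()
    assert results == [1, 4, 9]
```

Nothing is shared between the two processes; all cooperation happens through the queues, which is the same discipline MPI programs follow on distributed-memory machines.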

Conclusion We can show that partial evaluation plays an important role in the parallel computation process. This approach is intended for a broad spectrum of activity, such as automatic transformation, optimization, and specialization of programs with respect to partial knowledge of the input, and for their parallelization. We demonstrate, through the analysis of program examples, a way to partially overcome some shortcomings and inefficiencies of declarative programs, and show that the method is particularly effective on numerically oriented scientific programs and even on irregular data structures such as trees, lists, and graphs.

THANK YOU