Introduction to Parallel Computing


Introduction to Parallel Computing

Serial Computing
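A minimal sketch of the serial model (an illustrative example, not taken from the slides): a single instruction stream works through the whole problem one step at a time, so total time grows with problem size regardless of how many cores the machine has.

```python
# Serial computing: one instruction stream, one step at a time.
def sum_of_squares_serial(n):
    """Sum i*i for i in 0..n-1, computing one term per iteration."""
    total = 0
    for i in range(n):
        total += i * i
    return total

print(sum_of_squares_serial(1000))  # 332833500
```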

Parallel Computing
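In the parallel model, by contrast, the problem is split into independent pieces that run on separate cores, with partial results combined at the end. A hedged sketch of that decompose–compute–combine pattern, using Python's standard `concurrent.futures` process pool (the function names and chunking scheme are illustrative choices, not from the slides):

```python
# Parallel computing: split the work into independent chunks,
# compute each on its own core, then combine the partial results.
from concurrent.futures import ProcessPoolExecutor

def partial_sum_of_squares(bounds):
    """Sum i*i over the half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def sum_of_squares_parallel(n, workers=4):
    # Divide [0, n) into one contiguous chunk per worker.
    step = (n + workers - 1) // workers
    chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum_of_squares, chunks))

if __name__ == "__main__":
    print(sum_of_squares_parallel(1000))  # 332833500
```

Because the chunks share no data, this is an "embarrassingly parallel" case; the harder parts of parallel programming arise when pieces must communicate or share state.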

Why Parallel Computing?

The Power Wall

The Single-Core Performance Wall

“The major processor manufacturers and architectures, from Intel and AMD to Sparc and PowerPC, have run out of room with most of their traditional approaches to boosting CPU performance. Instead of driving clock speeds and straight-line instruction throughput ever higher, they are instead turning en masse to hyperthreading and multicore architectures.”
— Herb Sutter, “The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software”
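One practical consequence of the turn Sutter describes: since per-core speed has stalled, software that wants to go faster must discover how many cores the hardware provides and spread its work across them. A small sketch (illustrative only) using the Python standard library:

```python
# The "free lunch" is over: clock speeds stalled, so programs must
# find and use the cores (including hyperthreads) the hardware offers.
import os
from concurrent.futures import ThreadPoolExecutor

cores = os.cpu_count() or 1  # logical core count; falls back to 1
with ThreadPoolExecutor(max_workers=cores) as pool:
    results = list(pool.map(lambda x: x * x, range(8)))
print(cores, results)
```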

The Future

Why Learn Parallel Computing?
- Entertainment Industry
- Performance Challenge
- Massive Data
- Real-time Analytics
- Algorithmic Opportunities

Entertainment Industry

Performance Challenge
Advanced research has moved beyond the capacity of a single computer for detailed multi-level simulations, data analysis, and large-scale computations. (Image: electron localization function in the cubic NaCl3 structure.)
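Large simulations of this kind are typically handled by domain decomposition: each worker owns one slab of the simulation grid and exchanges boundary data with its neighbors. A hedged sketch of just the partitioning step (the function and its slab scheme are illustrative, not from the slides):

```python
# Domain decomposition: carve a 1-D grid into contiguous slabs,
# one per worker; the last worker absorbs any remainder.
def decompose(grid_size, workers):
    """Return the (start, end) index range each worker owns."""
    step = grid_size // workers
    slabs = []
    for w in range(workers):
        start = w * step
        end = grid_size if w == workers - 1 else start + step
        slabs.append((start, end))
    return slabs

print(decompose(100, 4))  # [(0, 25), (25, 50), (50, 75), (75, 100)]
```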

Massive Data

Real-time Analytics

Algorithmic Opportunities
Different molecular forces cause proteins to fold into unique, complex shapes. (Image courtesy of Justin MacCallum, Stony Brook University)

Examples of Parallel Computing

Summary
The computers of today, and tomorrow, have tremendous processing power that requires parallel programming to be fully utilized. There are significant differences between sequential and parallel programming that can be challenging. With early exposure to these differences, students are capable of achieving performance improvements with multicore programming.
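One concrete example of those differences (a sketch, not from the slides): shared mutable state. A statement like `counter += 1` is a read-modify-write sequence, so two threads can interleave and lose updates; sequential code never faces this, and parallel code needs explicit synchronization such as a lock.

```python
# Shared state is a classic sequential-vs-parallel pitfall:
# "counter += 1" is not atomic, so concurrent updates can be lost.
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:  # without the lock, the final count may fall short
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 with the lock; unpredictable without it
```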