Intro to Parallel and Distributed Processing
Some material adapted from slides by Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007 (licensed under a Creative Commons Attribution 3.0 License). Slides from Jimmy Lin, The iSchool, University of Maryland.

Web-Scale Problems?
How to approach problems in:
– Biocomputing
– Nanocomputing
– Quantum computing
– …
It all boils down to:
– Divide-and-conquer
– Throwing more hardware at the problem (clusters, grids, clouds)
Simple to understand… a lifetime to master…

Cluster Computing
– A group of linked computers working closely together in parallel, sometimes viewed as forming a single computer
– Improves performance and availability over a single computer
– Compute nodes are located locally and connected by a fast LAN
– Scales from a few nodes to a very large number
– Machines are typically homogeneous

Cluster Computing
Clusters can be:
– Tightly coupled: nodes share resources (memory, I/O)
– Loosely coupled: nodes are connected by a network but don’t share resources

Memory Typology: Shared
(diagram: multiple processors connected to a single shared memory)

Memory Typology: Distributed
(diagram: each processor has its own local memory; the processor–memory pairs communicate over a network)

Memory Typology: Hybrid
(diagram: shared-memory multiprocessor nodes connected to one another over a network)
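The same distinction shows up in software. Below is a minimal Python sketch (an illustration added here, not from the original slides; the worker counts and the trivial counting workload are made up): threads sharing one variable model the shared-memory column, while processes passing messages over a queue model the distributed one.

```python
import threading
import multiprocessing as mp

# Shared memory: threads read and write one counter, guarded by a lock.
counter = 0
lock = threading.Lock()

def shared_worker(n):
    global counter
    for _ in range(n):
        with lock:              # serialize access to the shared variable
            counter += 1

# Distributed memory: each process owns its data and sends its result
# back over a queue (a stand-in for the network).
def distributed_worker(n, out_queue):
    local = 0                   # private to this process
    for _ in range(n):
        local += 1
    out_queue.put(local)        # explicit message instead of shared state

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_worker, args=(1000,)) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    print("shared-memory total:", counter)      # 4000

    q = mp.Queue()
    procs = [mp.Process(target=distributed_worker, args=(1000, q)) for _ in range(4)]
    for p in procs: p.start()
    total = sum(q.get() for _ in procs)
    for p in procs: p.join()
    print("distributed total:", total)          # 4000
```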

Clusters and Data Centers
– Cluster nodes have their own compute devices and local storage; networked disk storage allows sharing
– A cluster is a static system under centralized control
– A data center houses multiple clusters
– A Google cluster contains thousands of servers, organized into racks of servers

Grids
Grids involve:
– A large number of users
– A large volume of data
– Large computational tasks
Grids connect resources through a network.

The Term “Grid”
– Borrowed from the electrical grid
– Users obtain computing power over the Internet the way they draw electrical power from any wall socket

Grid Computing
– Heterogeneous and geographically dispersed
– Loosely coupled: nodes do not share resources and connect over a WAN
– Thousands of users worldwide
– Can be formed from computing resources belonging to multiple organizations
– Can be dedicated to a single application, but is typically used for a variety of purposes

Grid
By connecting to a grid, a user can get:
– Needed computing power
– Storage space and data
– Specialized equipment
Middleware is needed to use the grid; each user has a single login account to access all resources. A data center (cluster) can be connected to a grid. Grids may be compute grids or data grids.

Grids
– Resources are owned by diverse organizations, forming a Virtual Organization
– Data and resources are shared in dynamic, multi-institutional virtual organizations
– Common in academia: access to specialized resources not available locally

An Illustrative Example
– A NASA research scientist collected microbiological samples in the tidewaters around Wallops Island, Virginia
– She needed the high-performance microscope at the National Center for Microscopy and Imaging Research (NCMIR), University of California, San Diego
– The samples were sent to San Diego; she used NPACI’s Telescience Grid and NASA’s Information Power Grid (IPG) to view and control the microscope’s output from her desk on Wallops Island
– She viewed the samples, moved the platform holding them, and made adjustments to the microscope

Example (continued)
– The microscope produced a huge dataset of images
– The dataset was stored using a storage resource broker on NASA’s IPG
– The scientist was able to run algorithms on the dataset while watching the results in real time

EU Grid (European Grid Infrastructure)
(diagram: SE = storage element, CE = computing element)

Grids
Domains as diverse as:
– Global climate change
– High-energy physics
– Computational genomics
– Biomedical applications
Volunteer grid computing:
– SetiAtHome
– World Community Grid
– FoldingAtHome
– EinsteinAtHome

Grids
A grid is:
– Heterogeneous, geographically distributed, independent sites
– Managed by gridware, which coordinates the grid’s resources
A grid differs from:
– Parallel computing: a grid is more than a single task on multiple machines
– A distributed system: a grid is more than distributing the load of a program across two or more processes
– Cluster computing: a grid is more than homogeneous sites connected by a LAN (a grid can be multiple clusters)

Grid
– A precursor to cloud computing, but without virtualization
– Some say there wouldn’t be cloud computing if there hadn’t been grid computing
– “Virtualization distinguishes cloud from grid” (Zissis)

Parallel/Distributed Background

Flynn’s Taxonomy
Classifies architectures by instruction streams (single/multiple) × data streams (single/multiple):
– SISD (single instruction, single data): single-threaded process
– SIMD (single instruction, multiple data): vector processing
– MISD (multiple instruction, single data): pipeline architecture
– MIMD (multiple instruction, multiple data): multi-threaded programming

SISD
(diagram: one processor executes a single instruction stream over a single stream of data items)

SIMD
(diagram: a single instruction stream operates on many data items D0…Dn in lockstep)
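To make the SIMD idea concrete, here is a small Python/NumPy sketch (an added illustration, not from the slides): one operation is applied to a whole array of data at once, instead of looping over elements one at a time as in the SISD style.

```python
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# SISD style: one instruction stream touches one data item at a time.
def scale_scalar(xs, factor):
    out = np.empty_like(xs)
    for i in range(len(xs)):        # explicit per-element loop
        out[i] = xs[i] * factor
    return out

# SIMD style: a single operation applied to many data items at once.
# NumPy dispatches this to vectorized (often hardware-SIMD) code.
def scale_vector(xs, factor):
    return xs * factor

assert np.allclose(scale_scalar(data[:1000], 2.0),
                   scale_vector(data[:1000], 2.0))
```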

MISD
(diagram: multiple instruction streams, arranged as pipeline stages, operate on the same stream of data)

MIMD
(diagram: multiple processors, each with its own instruction stream and its own data stream)

The Cloud - Parallelization

Divide and Conquer
SIMD or MIMD?
(diagram: the “work” is partitioned into units w1, w2, w3; each unit goes to a “worker”, producing partial results r1, r2, r3, which are combined into the “result”)
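A minimal partition/combine sketch in Python (added here as an illustration; the slide itself only shows the diagram, and the sum-of-squares workload is made up), using a process pool as the set of workers:

```python
from multiprocessing import Pool

def worker(chunk):
    # Each worker computes a partial result over its work unit.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    work = list(range(12))
    # Partition: split the work into fixed-size units w1, w2, w3, ...
    chunks = [work[i:i + 4] for i in range(0, len(work), 4)]
    with Pool(processes=3) as pool:
        partials = pool.map(worker, chunks)   # r1, r2, r3
    result = sum(partials)                    # Combine
    print(result)                             # 506
```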

Different Workers
– Different threads in the same core
– Different cores in the same CPU
– Different CPUs in a multi-processor system
– Different machines in a distributed system

Choices, Choices, Choices
– Commodity vs. “exotic” hardware
– Number of machines vs. processors vs. cores
– Bandwidth of memory vs. disk vs. network
– Different programming models

Parallelization Problems
– How do we assign work units to workers?
– What if we have more work units than workers?
– What if workers need to share partial results?
– How do we aggregate partial results?
– How do we know all the workers have finished?
– What if workers die?
What is the common theme of all of these problems?
(diagram: the same partition/combine picture as before)

General Theme?
Parallelization problems arise from:
– Communication between workers
– Access to shared resources (e.g., data)
Thus, we need a synchronization system!
This is tricky:
– Finding bugs is hard
– Fixing bugs is even harder
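To see the shared-resource hazard concretely, here is a small Python sketch of my own (not from the slides): an unsynchronized counter can lose updates, while the locked version cannot.

```python
import threading

ITERS = 100_000
counter = 0
lock = threading.Lock()

def unsafe():
    global counter
    for _ in range(ITERS):
        counter += 1            # read-modify-write: not atomic

def safe():
    global counter
    for _ in range(ITERS):
        with lock:              # synchronization eliminates the race
            counter += 1

for target in (unsafe, safe):
    counter = 0
    threads = [threading.Thread(target=target) for _ in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    # unsafe is often < 400000 (lost updates); safe is exactly 400000
    print(target.__name__, counter)
```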

Managing Multiple Workers
Difficult because we:
– (Often) don’t know the order in which workers run
– (Often) don’t know where the workers are running
– (Often) don’t know when workers interrupt each other
Thus, we need synchronization structures:
– Semaphores (lock, unlock)
– Condition variables (wait, notify, broadcast)
– Barriers
Still, lots of problems:
– Deadlock, livelock, race conditions, …
Moral of the story: be careful!
– Even trickier if the workers are on different machines
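The primitives named above all exist in Python’s standard threading module; a brief sketch of two of them (an added example, with made-up worker counts):

```python
import threading

n_workers = 3
barrier = threading.Barrier(n_workers)     # all workers must arrive here
sem = threading.Semaphore(2)               # at most 2 workers in the region

def worker(wid):
    with sem:                              # semaphore: bound concurrency
        print(f"worker {wid}: in limited region")
    barrier.wait()                         # barrier: nobody proceeds until all arrive
    print(f"worker {wid}: past the barrier")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
for t in threads: t.start()
for t in threads: t.join()
```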

Patterns for Parallelism
Parallel computing has been around for decades. Here are some “design patterns”…

Master/Slaves
(diagram: a master node directs a set of slave nodes; the slide cites GFS as an example)

Producer/Consumer
(diagram: producers and consumers connected by a bounded buffer; the slide cites GFS as an example)

Work Queues
(diagram: multiple producers put work items onto a shared queue; multiple consumers take them off)
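A small Python sketch of the producer/consumer and work-queue patterns combined (illustrative and added here, not from the slides; the producer/consumer counts are arbitrary): a bounded queue connects several producers to several consumers.

```python
import queue
import threading

buf = queue.Queue(maxsize=4)        # bounded buffer
DONE = object()                     # sentinel telling a consumer to stop

def producer(pid, n_items):
    for i in range(n_items):
        buf.put((pid, i))           # blocks when the buffer is full

def consumer(cid):
    while True:
        item = buf.get()            # blocks when the buffer is empty
        if item is DONE:
            break
        print(f"consumer {cid} got {item}")

producers = [threading.Thread(target=producer, args=(p, 5)) for p in range(3)]
consumers = [threading.Thread(target=consumer, args=(c,)) for c in range(2)]
for t in producers + consumers: t.start()
for t in producers: t.join()
for _ in consumers: buf.put(DONE)   # one sentinel per consumer
for t in consumers: t.join()
```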

Rubber Meets Road
From patterns to implementation:
– pthreads, OpenMP for multi-threaded programming
– MPI (Message Passing Interface) for cluster computing
– …
The reality:
– Lots of one-off solutions, custom code
– Write your own dedicated library, then program with it
– Burden on the programmer to explicitly manage everything
MapReduce to the rescue!

Parallelization
Ajax: the “front-end” of cloud computing
– Highly interactive Web-based applications
MapReduce: the “back-end” of cloud computing
– Batch-oriented processing of large datasets
– Highly parallel: solve the problem in parallel (map), then reduce, e.g., counting the words in a document
Hadoop MapReduce: a programming model and software framework
– For writing applications that rapidly process vast amounts of data in parallel on large clusters of compute nodes
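A toy word count in the MapReduce style, sketched in plain Python (an added illustration of the programming model only; real Hadoop jobs are written against the Hadoop API, and the sample documents are made up):

```python
from collections import defaultdict
from itertools import chain

docs = ["the quick brown fox", "the lazy dog", "the fox"]

# Map: emit (word, 1) for every word, independently per document.
def map_doc(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle: group the intermediate pairs by key.
groups = defaultdict(list)
for word, count in chain.from_iterable(map_doc(d) for d in docs):
    groups[word].append(count)

# Reduce: sum the counts for each word.
counts = {word: sum(vals) for word, vals in groups.items()}
print(counts)   # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

Each map call and each per-word reduce is independent, which is exactly what lets the framework spread them across a large cluster.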

Multiple Threads on a VM
– With 2 hyperthreads on each of 4 cores, the VM thinks it has 8 processors
– What if we want all 8 threads at once?
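A quick way to see what the (virtual) machine reports, sketched in Python (my addition): os.cpu_count() returns logical processors, i.e., hyperthreads count individually, not physical cores.

```python
import os
import concurrent.futures as cf

logical = os.cpu_count()                # hyperthreads count as processors here
print("logical processors:", logical)   # e.g., 8 on a 4-core, 2-way SMT VM

# Request one thread per logical processor; whether all of them truly run
# simultaneously depends on what the hypervisor actually schedules.
with cf.ThreadPoolExecutor(max_workers=logical) as pool:
    ids = list(pool.map(lambda i: i, range(logical)))
print("spawned", len(ids), "tasks")
```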