Computer Science 320 Parallel Computing Design Patterns.

Problem Solving: How to Start? See if your problem fits into a class of problems that have already been solved. Then look to that class's known solutions for a suggested approach to your own.

Design Patterns A design pattern provides a template for suggested solutions to a class of similarly structured problems. Identify the design pattern that best matches your problem.

Parallel Design Patterns Three patterns from a 1989 paper by Carriero and Gelernter: –Result parallelism –Agenda parallelism –Specialist parallelism

Result Parallelism Good for processing each element in a data structure, such as the pixels in an image or the frames in a movie. Ideally, the results of the computations are independent of each other.
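As a concrete illustration, here is a minimal sketch of result parallelism in plain Java; the class name and the sine computation are placeholders, not part of the original slides. Each slot of the result array is an independent computation, so the iterations can run on any core in any order.

    import java.util.stream.IntStream;

    public class ResultParallel {
        public static void main(String[] args) {
            int n = 1_000_000;
            double[] result = new double[n];
            // Each index holds an independent result, so the runtime is
            // free to spread the iterations across all available cores.
            IntStream.range(0, n).parallel()
                     .forEach(i -> result[i] = Math.sin(i) * Math.sin(i));
            System.out.println("result[42] = " + result[42]);
        }
    }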

Result Parallelism Bottleneck: sequential dependencies, where one result must await the computation of another. Examples: the positions of multiple stars computed over a time sequence, and spreadsheet recalculations, where one cell's value may depend on another's.

Result Parallelism with Dependencies

Agenda Parallelism Good for computing one result from a large number of inputs, e.g., seeing whether any DNA sequences match a query sequence. May also run into sequential dependencies, where tasks must wait.

Agenda Parallelism: BLAST BLAST (Basic Local Alignment Search Tool) searches a sequence database for matches to a query. Unlike result parallelism, we are interested in only some of the results, or in a combination of them.

Agenda Parallelism with Reduction Compute the tasks in parallel and then apply a reduction operator (such as sum, count, or max) to combine their results.
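A minimal sketch of agenda parallelism with a reduction, in plain Java; the toy sequence list and query string are made up for illustration, and a count stands in for the reduction operator. Each agenda task tests one sequence against the query, and the per-task results are then combined.

    import java.util.List;

    public class AgendaReduction {
        public static void main(String[] args) {
            // Toy agenda: test each sequence against the query (illustrative data).
            List<String> sequences =
                List.of("ACGTAC", "GGGTTT", "TACGTA", "CCCCCC");
            String query = "ACGT";
            // Tasks run in parallel; count() is the reduction operator
            // that combines the per-task results into one answer.
            long matches = sequences.parallelStream()
                                    .filter(s -> s.contains(query))
                                    .count();
            System.out.println(matches + " sequence(s) match " + query);
        }
    }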

Specialist Parallelism Each processor performs a specialized task on a series of data items (also known as pipelining)

Specialist Parallelism For each star: calculate its position, then render an image, then store it in a PNG file, with one specialist stage per step.
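A minimal sketch of the specialist (pipeline) pattern in plain Java, with the star computations stubbed out; the class name, stage logic, and queue sizes are all illustrative assumptions. Three specialist threads, connected by bounded queues, each perform one stage for every item that flows through.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class StarPipeline {
        public static void main(String[] args) throws InterruptedException {
            final int STARS = 10;
            BlockingQueue<Integer> positions = new ArrayBlockingQueue<>(4);
            BlockingQueue<String> images = new ArrayBlockingQueue<>(4);

            // Specialist 1: calculate each star's position (stubbed).
            Thread calculate = new Thread(() -> {
                try {
                    for (int star = 0; star < STARS; star++) positions.put(star * star);
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            // Specialist 2: render an image from each position (stubbed).
            Thread render = new Thread(() -> {
                try {
                    for (int i = 0; i < STARS; i++) images.put("image@" + positions.take());
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            // Specialist 3: store each image (a print stands in for PNG output).
            Thread store = new Thread(() -> {
                try {
                    for (int i = 0; i < STARS; i++) System.out.println("stored " + images.take());
                } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            });

            calculate.start(); render.start(); store.start();
            calculate.join(); render.join(); store.join();
        }
    }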

What if There Aren’t Enough Processors? Large problems have billions of results to compute or tasks to perform, but we don’t yet have billions of processors. The specialist pattern usually requires fewer processors, since the number of pipeline stages, not the number of data items, determines how many are needed.

Result and Agenda Patterns: Clumping/Slicing Clumping: lump many conceptual processors into one real processor. Slicing: partition a data structure into pieces and dedicate a process to each piece. The same scaling-down techniques apply to both patterns.
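A minimal sketch of slicing (with clumping implicit) in plain Java; the parallel sum over an array of ones is purely illustrative. The array is partitioned into one contiguous slice per available core, and each real thread handles all the conceptual per-element processors in its slice.

    import java.util.Arrays;

    public class SlicedSum {
        public static void main(String[] args) throws InterruptedException {
            double[] data = new double[1_000_000];
            Arrays.fill(data, 1.0);

            int nThreads = Runtime.getRuntime().availableProcessors();
            double[] partial = new double[nThreads];
            Thread[] threads = new Thread[nThreads];
            int chunk = (data.length + nThreads - 1) / nThreads;

            for (int t = 0; t < nThreads; t++) {
                final int id = t;
                final int lo = id * chunk;
                final int hi = Math.min(lo + chunk, data.length);
                // One real thread handles the whole slice [lo, hi):
                // many conceptual processors clumped into one.
                threads[t] = new Thread(() -> {
                    double sum = 0.0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    partial[id] = sum;  // each thread writes only its own slot
                });
                threads[t].start();
            }
            for (Thread th : threads) th.join();
            System.out.println("total = " + Arrays.stream(partial).sum());
        }
    }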

Agenda Pattern: Master-Worker A conceptual design, usually for clusters. The master processor manages the agenda of tasks and delegates them to the worker processors. The master then receives the results and combines them.
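The slide describes a cluster design; as a shared-memory stand-in, here is a minimal master-worker sketch in plain Java using an ExecutorService, with squaring numbers as a made-up agenda task. The main thread plays the master: it submits tasks, the pool's worker threads compute them, and the master collects and combines the results.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class MasterWorker {
        public static void main(String[] args) throws Exception {
            // The worker pool; the main thread plays the master.
            ExecutorService workers = Executors.newFixedThreadPool(4);
            List<Future<Long>> results = new ArrayList<>();

            // Master delegates one agenda task per input value.
            for (long n = 1; n <= 20; n++) {
                final long input = n;
                results.add(workers.submit(() -> input * input));
            }

            // Master receives the results and combines them.
            long total = 0;
            for (Future<Long> f : results) total += f.get();
            System.out.println("sum of squares = " + total);

            workers.shutdown();
        }
    }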

For Next Time Introduction to parallel Java, and a first parallel program!