Parallelism (COS 597C)
David August and David Walker

Goals

To compare and contrast a variety of programming languages and programming styles:
– imperative programming (threads, shared memory, vector machines)
– functional programming (nested data parallelism, asynchronous and functional reactive programming)
– implementation techniques (GPUs, vector flattening)
– new languages (Cilk, StreamIt, Map-Reduce, Sawzall, Dryad)

Course Organization

A series of relatively independent modules:
– Students in the class will be assigned to different modules and will help develop content for them: lectures and assignments for other students.

Workload:
– some time preparing (learning) material for other students
– some time working on exercises and topic-oriented projects
– lots of group work

Walker’s Modules

– Asynchronous and Reactive Functional Programming
– Software Transactional Memory
– Nested Data Parallelism
– Massively Parallel Systems (cloud computing)

A unifying theme: F#, a modern functional programming language with strong support for concurrency and access to lots of libraries.

Asynchronous, Reactive Programming

Technology goals: responsiveness & concurrency
– in a GUI, to respond to users rapidly
– in a web browser, to hide network latency
– in a robot controller, to respond to environmental changes
– in a network controller, to structure code for controlling a set of routers
– in a programmed animation, to write computations over time

Old-fashioned way: callbacks and explicit event-based programming, with tricky control flow.

New-fangled way: an asynchronous concurrency monad (“workflow”) that helps structure programs.

open System.Net
open Microsoft.FSharp.Control.WebExtensions

// Note: the URL strings were lost in transcription; these are plausible stand-ins.
let urlList =
    [ "Microsoft.com", "http://www.microsoft.com"
      "MSDN",          "http://msdn.microsoft.com"
      "Bing",          "http://www.bing.com" ]

let fetchAsync (name, url : string) =
    async {                                  // introduce async computation
        try
            let uri = new System.Uri(url)
            let webClient = new WebClient()
            // let! runs the download asynchronously, queueing the rest
            // of the computation as its continuation
            let! html = webClient.AsyncDownloadString(uri)
            printfn "Read %d characters for %s" html.Length name
        with
        | ex -> printfn "%s" ex.Message
    }

let runAll () =
    urlList
    |> Seq.map fetchAsync
    |> Async.Parallel            // run the set of asyncs in parallel
    |> Async.RunSynchronously
    |> ignore

runAll ()

Asynchronous, Reactive Programming

Stuff we’ll learn:
– how to structure reactive programs using asynchronous workflows and monads
– what a monad is and how to build different kinds of monads in F# (a minimal sketch follows)
– non-standard applications: programming routers (Frenetic), programming robots (Yampa), and programming animations (Fran)

Possible projects/assignments:
– the mechanics of how to implement functional reactive programming infrastructure
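To make the “build a monad in F#” point concrete, here is a minimal sketch of a computation expression builder, the mechanism behind the async { ... } workflow above and the stm { ... } workflow later. The “maybe” workflow below is a standard illustrative example, not part of the course materials: Bind sequences computations that may fail, and any None short-circuits the rest.

type MaybeBuilder() =
    member this.Bind(m, f) =
        match m with
        | Some x -> f x        // success: continue with the value
        | None   -> None       // failure: short-circuit the workflow
    member this.Return(x) = Some x

let maybe = MaybeBuilder()

// Usage: divide then increment, failing cleanly on division by zero.
let divideThenIncr a b =
    maybe {
        let! d = (if b = 0 then None else Some (a / b))
        return d + 1
    }
// divideThenIncr 10 2 = Some 6; divideThenIncr 10 0 = None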

(Software) Transactional Memory

Technology goals:
– to simplify parallel programming by giving programmers the illusion that the instructions of a transaction execute atomically: a programmer does not have to reason about the possible interleavings of a block’s instructions with all the other instructions in the program

Two threads each run:

  v := x.item;
  x.item := v + 1;

If x.item starts at 1, what are its possible final values? If the threads run one after the other, the result is 3; if both read x.item before either writes, the result is 2. A runnable sketch of the race appears below.
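A minimal F# sketch of this race, assuming a mutable record standing in for x (the Cell type and the Thread.Sleep delay are illustrative additions, not from the slides):

open System.Threading

type Cell = { mutable item : int }
let x = { item = 1 }

let increment () =
    let v = x.item          // read
    Thread.Sleep 1          // widen the race window (illustration only)
    x.item <- v + 1         // write

let t1 = Thread(increment)
let t2 = Thread(increment)
t1.Start(); t2.Start()
t1.Join(); t2.Join()
printfn "final value: %d" x.item   // prints 2 or 3, depending on interleaving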

// The STM interface (the type parameters below are reconstructed; the
// angle brackets were lost in transcription):
val readTVar   : TVar<'a> -> Stm<'a>
val writeTVar  : TVar<'a> -> 'a -> Stm<unit>
val atomically : Stm<'a> -> 'a

// atomic increment
let incr x = stm {                 // introduce atomic block
    let! v = readTVar x
    let! _ = writeTVar x (v + 1)
    return v
}

// composable transactions: incr is reused without exposing any locks
let incr2 x = stm {
    let! _ = incr x
    let! v = incr x
    return v
}

incr x |> atomically

(Software) Transactional Memory

Stuff we’ll learn:
– programming paradigms, pros and cons of STMs
– how software transactions can be phrased as another form of monad workflow
– implementation techniques (a sketch of the core idea follows)
– hardware support

Possible projects/assignments:
– structuring scientific apps as STMs
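For a taste of those implementation techniques, here is a hedged sketch of the optimistic-concurrency idea behind many STM implementations: read, compute, then commit only if nothing changed in the meantime, retrying on conflict. Real STMs track whole read/write sets per transaction; this single-cell version uses compare-and-swap and is illustrative only.

open System.Threading

let mutable cell = 1

let rec atomicIncr () =
    let seen = cell                     // read phase
    let updated = seen + 1              // compute phase
    // commit: succeeds only if cell still holds the value we read
    if Interlocked.CompareExchange(&cell, updated, seen) <> seen then
        atomicIncr ()                   // conflict detected: retry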

Nested Data Parallel Programming

Technology goals:
– enable simple, concise, high-level expression of parallel algorithms
– provide a clear, machine-independent cost model for algorithm design

// Quicksort over the course's Nesl vector library. The element type
// parameter below is reconstructed; it was lost in transcription.
let r = new System.Random()

let rec quicksort (s : Nesl.vector<int>) =
    if s.Length < 2 then s
    else
        let pivot = Nesl.choose r s
        let les = Nesl.filter ((>) pivot) s    // select lower elements in parallel
        let eqs = Nesl.filter ((=) pivot) s    // select equal elements in parallel
        let ges = Nesl.filter ((<) pivot) s    // select greater elements in parallel
        let answers = Nesl.map quicksort [| les; ges |]   // quicksort in parallel
        Nesl.concat [| Nesl.get answers 0; eqs; Nesl.get answers 1 |]   // concatenate in parallel

Nested Data Parallel Programming

Stuff we’ll learn:
– data parallel design patterns and algorithms over vectors, matrices and graphs
– a cost model for data parallel programs: work, depth, and their relation to real machines (a worked example follows)
– implementation techniques: vector flattening & cost guarantees

Possible projects/assignments:
– parallelizing “hard-to-parallelize” algorithms
– parallelizing high-value scientific applications (e.g., genomics algorithms)
– implementation infrastructure in F#
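As a worked example of the cost model applied to the quicksort above (assuming the standard NESL-style analysis; the course's Nesl library may differ in constants): work W(n) counts total operations, depth D(n) counts the longest chain of sequential dependencies. Each filter does O(n) work in O(log n) depth, and the two recursive calls run in parallel, so their depths take a max rather than a sum:

  W(n) = W(n_les) + W(n_ges) + O(n)          expected total: O(n log n)
  D(n) = max(D(n_les), D(n_ges)) + O(log n)  expected total: O(log^2 n)

With a random pivot there are O(log n) levels of recursion in expectation, each contributing O(n) work and O(log n) depth.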

Massively Parallel Systems

Technology goals:
– make it easy to program applications that scale to Google-sized workloads:
  counting all the words on all the web pages in the world;
  filtering RSS feeds for everyone with Google Reader installed;
  managing all Amazon clients;
  DNA sequencing and analysis
– fault tolerance & performance

[Diagram: web pages feed a row of parallel map tasks (f), whose outputs feed a row of parallel reduce tasks (g).]
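The same map/shuffle/reduce pattern, shrunk to an in-memory F# sketch of the classic word-count example (the page strings are made up; a real MapReduce runs these phases across thousands of machines with fault tolerance):

let pages = [ "the quick brown fox"; "the lazy dog"; "the fox" ]

let wordCounts =
    pages
    |> Seq.collect (fun page ->                  // map: emit (word, 1) pairs
        page.Split ' ' |> Seq.map (fun w -> w, 1))
    |> Seq.groupBy fst                           // shuffle: group pairs by word
    |> Seq.map (fun (word, pairs) ->             // reduce: sum the counts
        word, Seq.sumBy snd pairs)

// e.g. Seq.toList wordCounts includes ("the", 3) and ("fox", 2)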

Massively Parallel Systems

What we’ll learn:
– language design for programming massively parallel systems: MapReduce, Sawzall, Dryad, Azure
– interesting things from guest speakers from Microsoft & Google

Possible projects/assignments:
– implementing high-value scientific apps