Lecture 1 – Introduction CSE 490h – Introduction to Distributed Computing, Winter 2008 Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.

Staff Gayle Laakmann – Instructor Slava Chernyak – TA Office hours – TBD

Outline Administrivia Scope of Problems A Brief History Parallel vs. Distributed Computing Parallelization and Synchronization Problem Discussion Prelude to MapReduce

Goals and Expectations Weekly Lecture Weekly Reading  Short response questions + discussion Final Project Spring 2008: Follow-Up Course

Final Project Implement a simulation of MapReduce

Spring 2008 Capstone course Use MapReduce to build large application

Computer Speedup Moore’s Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965) Image: Tom’s Hardware

Moore’s Law Applies to…  Processing Speed  Memory Capacity What does this mean for computing?

Scope of problems What can you do with 1 computer? What can you do with 100 computers? What can you do with an entire data center?

Distributed problems Simulating several hundred or thousand characters Happy Feet © Kingdom Feature Productions; Lord of the Rings © New Line Cinema

Distributed problems Indexing the web (Google)

Parallel vs. Distributed Parallel computing can mean:  Vector processing of data (SIMD)  Multiple CPUs in a single computer (MIMD) Distributed computing is multiple CPUs across many computers (MIMD)

A Brief History… Parallel computing was favored in the early years Primarily vector-based at first Gradually more thread-based parallelism was introduced Cray 2 supercomputer (Wikipedia)

A Brief History… “Massively parallel architectures” start rising in prominence Message Passing Interface (MPI) and other libraries developed Bandwidth was a big problem

A Brief History… 1995-Today Cluster/grid architecture increasingly dominant Special node machines eschewed in favor of COTS technologies Web-wide cluster software Companies like Google take this to the extreme (10,000 node clusters)

Parallelization & Synchronization

Parallelization Idea Parallelization is “easy” if processing can be cleanly split into n units:

Parallelization Idea (2) In a parallel computation, we would like to have as many threads as we have processors. For example, a four-processor computer would be able to run four threads at the same time.
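A minimal Java sketch of this idea (the class and variable names here are illustrative, not from the slides): split an array into one independent slice per available processor, sum each slice on its own thread, and combine the partial results at the end.

  import java.util.*;
  import java.util.concurrent.*;

  public class ParallelSum {
      public static void main(String[] args) throws Exception {
          int[] data = new int[1_000_000];
          Arrays.fill(data, 1);

          // One worker thread per available processor, as described above.
          int nThreads = Runtime.getRuntime().availableProcessors();
          ExecutorService pool = Executors.newFixedThreadPool(nThreads);
          List<Future<Long>> partials = new ArrayList<>();

          int chunk = data.length / nThreads;
          for (int t = 0; t < nThreads; t++) {
              final int start = t * chunk;
              final int end = (t == nThreads - 1) ? data.length : start + chunk;
              // Each work unit touches only its own slice, so no locking is needed.
              partials.add(pool.submit(() -> {
                  long sum = 0;
                  for (int i = start; i < end; i++) sum += data[i];
                  return sum;
              }));
          }

          long total = 0;
          for (Future<Long> f : partials) total += f.get();  // aggregate the partial results
          pool.shutdown();
          System.out.println("total = " + total);
      }
  }

The parallelism here is "easy" precisely because the work divides into non-overlapping units; the pitfalls discussed next appear once the units must share state.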

Parallelization Idea (3)

Parallelization Idea (4)

Parallelization Pitfalls But this model is too simple! How do we assign work units to worker threads? What if we have more work units than threads? How do we aggregate the results at the end? How do we know all the workers have finished? What if the work cannot be divided into completely separate tasks? What is the common theme of all of these problems?

Parallelization Pitfalls (2) Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource. Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!

What is Wrong With This?
Thread 1:
void foo() {
  x++;
  y = x;
}
Thread 2:
void bar() {
  y++;
  x++;
}
If the initial state is y = 0, x = 6, what happens after these threads finish running?

Multithreaded = Unpredictability
When we run a multithreaded program, we don’t know what order threads run in, nor do we know when they will interrupt one another. Many things that look like “one step” operations actually take several steps under the hood:
Thread 1:
void foo() {
  eax = mem[x];
  inc eax;
  mem[x] = eax;
  ebx = mem[x];
  mem[y] = ebx;
}
Thread 2:
void bar() {
  eax = mem[y];
  inc eax;
  mem[y] = eax;
  eax = mem[x];
  inc eax;
  mem[x] = eax;
}
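To see this nondeterminism concretely, here is a small hedged Java sketch (not from the slides): two threads each increment a shared counter with no synchronization, and the final value is usually less than expected because the read-increment-write steps interleave and updates get lost.

  public class LostUpdates {
      static int counter = 0;  // shared, unsynchronized

      public static void main(String[] args) throws InterruptedException {
          Runnable work = () -> {
              for (int i = 0; i < 1_000_000; i++) {
                  counter++;  // read-modify-write: several steps, not one
              }
          };
          Thread t1 = new Thread(work);
          Thread t2 = new Thread(work);
          t1.start(); t2.start();
          t1.join();  t2.join();
          // We would like 2,000,000, but interleaved updates are typically lost.
          System.out.println("counter = " + counter);
      }
  }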

Multithreaded = Unpredictability This applies to more than just integers: Pulling work units from a queue Reporting work back to master unit Telling another thread that it can begin the “next phase” of processing … All require synchronization!

Synchronization Primitives A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically. Hardware support guarantees that operations on synchronization primitives only ever take one step
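For instance, Java exposes such hardware-backed atomic operations through java.util.concurrent.atomic; the sketch below (illustrative, not from the slides) replaces the unsafe counter++ from the previous example with a single-step atomic increment.

  import java.util.concurrent.atomic.AtomicInteger;

  public class AtomicCounter {
      static final AtomicInteger counter = new AtomicInteger(0);

      public static void main(String[] args) throws InterruptedException {
          Runnable work = () -> {
              for (int i = 0; i < 1_000_000; i++) {
                  counter.incrementAndGet();  // one atomic step, backed by hardware support
              }
          };
          Thread t1 = new Thread(work), t2 = new Thread(work);
          t1.start(); t2.start();
          t1.join();  t2.join();
          System.out.println("counter = " + counter.get());  // always 2,000,000
      }
  }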

Semaphores A semaphore is a flag that can be raised or lowered in one step Semaphores were flags that railroad engineers would use when entering a shared track Only one side of the semaphore can ever be red! (Can both be green?)

Semaphores set() and reset() can be thought of as lock() and unlock() Calls to lock() when the semaphore is already locked cause the thread to block. Pitfalls: Must “bind” semaphores to particular objects; must remember to unlock correctly

The “corrected” example
Thread 1:
void foo() {
  sem.lock();
  x++;
  y = x;
  sem.unlock();
}
Thread 2:
void bar() {
  sem.lock();
  y++;
  x++;
  sem.unlock();
}
Global var “Semaphore sem = new Semaphore();” guards access to x & y
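A runnable Java rendering of this idea, offered as a sketch rather than the slides' exact API: Java's java.util.concurrent.Semaphore uses acquire()/release() instead of lock()/unlock(), and a semaphore constructed with one permit behaves like the binary lock described above.

  import java.util.concurrent.Semaphore;

  public class CorrectedExample {
      static int x = 6, y = 0;
      static final Semaphore sem = new Semaphore(1);  // one permit: acts as a lock for x and y

      static void foo() throws InterruptedException {
          sem.acquire();          // "lock()": blocks if another thread holds the permit
          try {
              x++;
              y = x;
          } finally {
              sem.release();      // "unlock()": always runs, even if the guarded code throws
          }
      }

      static void bar() throws InterruptedException {
          sem.acquire();
          try {
              y++;
              x++;
          } finally {
              sem.release();
          }
      }
  }

The try/finally pattern addresses the pitfall above: the semaphore is always released, and both methods use the same semaphore because it is bound to the shared pair x & y.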

Condition Variables A condition variable notifies threads that a particular condition has been met Inform another thread that a queue now contains elements to pull from (or that it’s empty – request more elements!) Pitfall: What if nobody’s listening?

The final example
Thread 1:
void foo() {
  sem.lock();
  x++;
  y = x;
  fooDone = true;
  sem.unlock();
  fooFinishedCV.notify();
}
Thread 2:
void bar() {
  sem.lock();
  if(!fooDone)
    fooFinishedCV.wait(sem);
  y++;
  x++;
  sem.unlock();
}
Global vars: Semaphore sem = new Semaphore(); ConditionVar fooFinishedCV = new ConditionVar(); boolean fooDone = false;
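A hedged Java sketch of the same pattern, using java.util.concurrent.locks.ReentrantLock and its Condition in place of the pseudocode's Semaphore/ConditionVar; the standard Java idiom waits in a while loop rather than an if, to guard against spurious wakeups.

  import java.util.concurrent.locks.*;

  public class ConditionExample {
      static int x = 6, y = 0;
      static boolean fooDone = false;
      static final ReentrantLock lock = new ReentrantLock();
      static final Condition fooFinished = lock.newCondition();

      static void foo() {
          lock.lock();
          try {
              x++;
              y = x;
              fooDone = true;
              fooFinished.signal();     // wake up a waiter, if any
          } finally {
              lock.unlock();
          }
      }

      static void bar() throws InterruptedException {
          lock.lock();
          try {
              while (!fooDone) {        // loop guards against spurious wakeups
                  fooFinished.await();  // atomically releases the lock while waiting
              }
              y++;
              x++;
          } finally {
              lock.unlock();
          }
      }
  }

The fooDone flag also answers the "what if nobody's listening?" pitfall: a signal sent before anyone waits is not lost, because bar() checks the flag before deciding to wait.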

Barriers A barrier knows in advance how many threads it should wait for. Threads “register” with the barrier when they reach it, and fall asleep. Barrier wakes up all registered threads when total count is correct Pitfall: What happens if a thread takes a long time?
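Java's java.util.concurrent.CyclicBarrier is one such primitive; here is a minimal sketch (the thread count and printed messages are illustrative):

  import java.util.concurrent.CyclicBarrier;

  public class BarrierDemo {
      public static void main(String[] args) {
          final int parties = 4;
          // The barrier knows in advance how many threads to wait for.
          CyclicBarrier barrier = new CyclicBarrier(parties,
                  () -> System.out.println("all threads reached the barrier"));

          for (int i = 0; i < parties; i++) {
              final int id = i;
              new Thread(() -> {
                  try {
                      System.out.println("thread " + id + " working");
                      barrier.await();   // register with the barrier and sleep until everyone arrives
                      System.out.println("thread " + id + " past the barrier");
                  } catch (Exception e) {
                      Thread.currentThread().interrupt();
                  }
              }).start();
          }
      }
  }

If one thread is slow, the other three sit idle at await(), which is exactly the pitfall the slide warns about.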

Too Much Synchronization? Deadlock
Synchronization becomes even more complicated when multiple locks can be used. Can cause entire system to “get stuck”
Thread A:
semaphore1.lock();
semaphore2.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();
Thread B:
semaphore2.lock();
semaphore1.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();
(Image: RPI CSCI.4210 Operating Systems notes)
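One standard remedy, sketched here in Java (the helper and its name are illustrative, not from the slides), is to impose a global lock ordering: if every thread acquires semaphore1 before semaphore2, the circular wait shown above cannot arise.

  import java.util.concurrent.Semaphore;

  public class LockOrdering {
      static final Semaphore semaphore1 = new Semaphore(1);
      static final Semaphore semaphore2 = new Semaphore(1);

      // Both thread A and thread B go through this helper, so the locks
      // are always taken in the same global order.
      static void withBothLocks(Runnable criticalSection) throws InterruptedException {
          semaphore1.acquire();           // always semaphore1 first...
          try {
              semaphore2.acquire();       // ...then semaphore2
              try {
                  criticalSection.run();  // use data guarded by both semaphores
              } finally {
                  semaphore2.release();
              }
          } finally {
              semaphore1.release();
          }
      }
  }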

The Moral: Be Careful! Synchronization is hard  Need to consider all possible shared state  Must keep locks organized and use them consistently and correctly Knowing there are bugs may be tricky; fixing them can be even worse! Keeping shared state to a minimum reduces total system complexity

Prelude to MapReduce We saw earlier that explicit parallelism/synchronization is hard Synchronization does not even answer questions specific to distributed computing, like how to move data from one machine to another Fortunately, MapReduce handles this for us

Prelude to MapReduce MapReduce is a paradigm designed by Google for making a subset (albeit a large one) of distributed problems easier to code Automates data distribution & result aggregation Restricts the ways data can interact to eliminate locks (no shared state = no locks!)

Next time… We will go over MapReduce in detail Discuss the functional programming paradigms of fold and map, and how they relate to distributed computation
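As a small preview (illustrative only, using ordinary Java streams rather than any MapReduce framework): map transforms each element independently, and a fold/reduce combines the results; MapReduce lifts this same structure to a cluster of machines.

  import java.util.List;

  public class MapFoldPreview {
      public static void main(String[] args) {
          List<String> lines = List.of("the quick brown fox", "jumps over", "the lazy dog");

          // "map": count words in each line independently (trivially parallelizable)
          // "fold"/reduce: combine the per-line counts into one result
          int totalWords = lines.stream()
                  .map(line -> line.split("\\s+").length)
                  .reduce(0, Integer::sum);

          System.out.println("total words = " + totalWords);  // 9
      }
  }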