Spaces Dr. Samir Tartir Extracted from Principles of Concurrent and Distributed Programming, Second Edition By M. Ben-Ari.



Introduction
Shared memory and synchronous communication both have the disadvantage of tight coupling.
In spaces:
– Processes are loosely coupled.
– Shared data can be persistent.

The Linda Model
A giant bulletin board, called a space, on which notes can be posted.
Statements:
– postnote(v1, v2, ...): creates a note from the values of the parameters and posts it in the space. If there are processes blocked waiting for a note matching this parameter signature, an arbitrary one of them is unblocked.
– removenote(x1, x2, ...): removes a note that matches the parameter signature from the space and assigns its values to the parameters. If no matching note exists, the process is blocked. If there is more than one matching note, an arbitrary one is removed.
– readnote(x1, x2, ...): like removenote, but leaves the note in the space.
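The three statements can be sketched in Python. This is a minimal, single-threaded simulation written for illustration only: notes are Python tuples, a Python type such as int plays the role of a variable parameter, and removenote returns None where the real statement would block the process.

```python
class Space:
    """A toy, non-blocking sketch of a Linda space."""

    def __init__(self):
        self.notes = []                      # the "bulletin board"

    def postnote(self, *values):
        # create a note from the values and post it in the space
        self.notes.append(tuple(values))

    def _match(self, note, params):
        # a Python type stands for a variable parameter (matched by type);
        # any other value is an actual parameter (matched by equality)
        if len(note) != len(params):
            return False
        for elem, par in zip(note, params):
            if isinstance(par, type):
                if type(elem) is not par:
                    return False
            elif elem != par:
                return False
        return True

    def removenote(self, *params):
        # remove and return a matching note (here: the oldest, though
        # Linda may pick an arbitrary one); returns None where the real
        # statement would block the process
        for note in self.notes:
            if self._match(note, params):
                self.notes.remove(note)
                return note
        return None

    def readnote(self, *params):
        # like removenote, but leaves the note in the space
        for note in self.notes:
            if self._match(note, params):
                return note
        return None
```

For example, after postnote('a', 10, 20), the call readnote('a', int, int) returns ('a', 10, 20) and leaves the note in place, while removenote('a', int, int) returns it and deletes it.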

Example
postnote('a', 10, 20);
postnote('a', 30);
postnote('a', false, 40);
postnote('a', true, 50);

Matching Conditions
– The sequence of types of the parameters equals the sequence of types of the note elements.
– If a parameter is not a variable, its value must equal the value of the note element in the same position.

Example
integer m, integer n, boolean b

removenote('b', m, n)  -- blocks: no note is tagged 'b'
removenote('a', m, n)  -- removes ('a', 10, 20)
readnote('a', m)       -- reads ('a', 30), leaving it in the space
removenote('a', n)     -- removes ('a', 30)
removenote('a', m, n)  -- blocks: only boolean notes remain
removenote('a', b, n)  -- removes ('a', false, 40) or ('a', true, 50)
removenote('a', b, m)  -- removes the remaining boolean note
postnote('b', 60, 70)  -- unblocks the first removenote with m = 60, n = 70
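The exercise can be worked through with a small Python helper. The representation is my own: the types int and bool stand for the integer and boolean variables m, n, and b, and the string 'blocked' marks a call that a real space would suspend instead.

```python
def matches(note, params):
    # a Python type stands for a variable parameter (matched by type);
    # any other value is an actual parameter (matched by equality)
    return len(note) == len(params) and all(
        type(e) is p if isinstance(p, type) else e == p
        for e, p in zip(note, params))

# the four notes posted in the earlier example
space = [('a', 10, 20), ('a', 30), ('a', False, 40), ('a', True, 50)]

def removenote(*params):
    for note in space:
        if matches(note, params):
            space.remove(note)
            return note
    return 'blocked'          # a real space suspends the process instead

def readnote(*params):
    for note in space:
        if matches(note, params):
            return note
    return 'blocked'

print(removenote('b', int, int))   # 'blocked': no note is tagged 'b'
print(removenote('a', int, int))   # ('a', 10, 20)
print(readnote('a', int))          # ('a', 30), left in the space
print(removenote('a', int))        # ('a', 30)
print(removenote('a', int, int))   # 'blocked': only boolean notes remain
print(removenote('a', bool, int))  # ('a', False, 40); Linda may pick either
print(removenote('a', bool, int))  # ('a', True, 50)
space.append(('b', 60, 70))        # postnote('b', 60, 70)
print(removenote('b', int, int))   # ('b', 60, 70): the blocked call proceeds
```

Note that removenote('a', m, n) fails to match the boolean notes because the types in the second position (integer vs. boolean) disagree, which is exactly the first matching condition.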

Formal Parameters
A formal parameter is a variable parameter that matches only notes containing the current value of the variable.
It is denoted by appending the symbol = to the variable name.
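In a Python rendering of my own, a variable parameter becomes the type int (match by type), while the formal parameter m= becomes m's current value (match by value only). Assuming m currently holds 30:

```python
m = 30
note = ('a', 10)

# removenote('a', m): the variable parameter accepts any integer
print(type(note[1]) is int)    # True: ('a', 10) matches
# removenote('a', m=): the formal parameter accepts only the value 30
print(note[1] == m)            # False: ('a', 10) does not match
print(('a', 30)[1] == m)       # True: ('a', 30) matches
```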

Load Balancing & Master-Worker
In the previous implementation of matrix multiplication, it makes no sense to remove one processor for repair or to replace a few processors with faster ones.
Load balancing: writing a flexible program that adapts itself to the amount of processing power available.
One process, called the master, posts task notes in the space; other processes, called workers, take task notes from the space and perform the required computation.
The master-worker paradigm is very useful in concurrent programming, and it is particularly easy to implement with spaces because of their freedom from temporal and spatial constraints.
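The paradigm can be sketched with Python threads standing in for processes. The blocking removenote, the None wildcard, and the ('task', ...)/('result', ...) note formats are my own choices for illustration, not part of Linda.

```python
import threading

class Space:
    """A thread-safe toy space with a blocking removenote."""

    def __init__(self):
        self.notes = []
        self.cond = threading.Condition()

    def postnote(self, *values):
        with self.cond:
            self.notes.append(tuple(values))
            self.cond.notify_all()           # wake blocked processes

    def removenote(self, *params):
        # None acts as a wildcard parameter; blocks until a note matches
        def find():
            for note in self.notes:
                if len(note) == len(params) and all(
                        p is None or p == e
                        for e, p in zip(note, params)):
                    return note
            return None
        with self.cond:
            while (note := find()) is None:
                self.cond.wait()
            self.notes.remove(note)
            return note

def worker(space):
    while True:                              # take a task, compute, post result
        _, i, x = space.removenote('task', None, None)
        space.postnote('result', i, x * x)

space = Space()
for _ in range(3):                           # the worker count can vary freely
    threading.Thread(target=worker, args=(space,), daemon=True).start()

for i in range(5):                           # the master posts task notes
    space.postnote('task', i, i)

results = [space.removenote('result', i, None)[2] for i in range(5)]
print(results)                               # [0, 1, 4, 9, 16]
```

Because workers match only on the 'task' signature, the number of workers can be changed at any time without touching the master: this is the load-balancing flexibility that the freedom from temporal and spatial constraints buys.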