Principles of Concurrent and Distributed Programming, Second Edition

Spaces
Dr. Samir Tartir
Extracted from Principles of Concurrent and Distributed Programming, Second Edition, by M. Ben-Ari

Introduction
Shared memory and synchronous communication have the disadvantage of tight coupling: the communicating processes must exist at the same time and interact directly. In spaces:
Processes are loosely coupled.
Shared data can be persistent.

The Linda Model
A giant bulletin board, called a space, on which notes can be posted.

Statements:

postnote(v1, v2, ...)
Creates a note from the values of the parameters and posts it in the space. If there are processes blocked waiting for a note matching this parameter signature, an arbitrary one of them is unblocked.

removenote(x1, x2, ...)
Removes a note that matches the parameter signature from the space and assigns its values to the parameters. If no matching note exists, the process is blocked. If there is more than one matching note, an arbitrary one is removed.

readnote(x1, x2, ...)
Like removenote, but it leaves the note in the space.
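Below is a minimal executable sketch of these three statements in Python with threads. The class name TupleSpace is invented here, and so is the signature convention: passing a type (e.g. int) plays the role of a variable parameter, while passing a value plays the role of a non-variable parameter. Since Python has no reference parameters, removenote and readnote return the matched note rather than assigning to their arguments.

    import threading

    class TupleSpace:
        def __init__(self):
            self._notes = []                    # the posted notes
            self._cond = threading.Condition()  # guards _notes, signals posts

        @staticmethod
        def _matches(signature, note):
            # A signature element is either a type (a variable parameter,
            # matching any value of that type) or a concrete value (which
            # must equal the note's element in the same position).
            if len(signature) != len(note):
                return False
            for s, v in zip(signature, note):
                if isinstance(s, type):
                    if type(v) is not s:
                        return False
                elif type(s) is not type(v) or s != v:
                    return False
            return True

        def postnote(self, *values):
            # Create a note from the values and post it in the space,
            # waking any processes blocked on a matching signature.
            with self._cond:
                self._notes.append(tuple(values))
                self._cond.notify_all()

        def removenote(self, *signature):
            # Block until some note matches, then remove and return it.
            with self._cond:
                while True:
                    for note in self._notes:
                        if self._matches(signature, note):
                            self._notes.remove(note)
                            return note
                    self._cond.wait()

        def readnote(self, *signature):
            # Like removenote, but leaves the note in the space.
            with self._cond:
                while True:
                    for note in self._notes:
                        if self._matches(signature, note):
                            return note
                    self._cond.wait()

For example, after space.postnote('a', 10, 20), the call space.removenote('a', int, int) returns ('a', 10, 20).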

Example
postnote('a', 10, 20);
postnote('a', 30);
postnote('a', false, 40);
postnote('a', true, 50);

After these statements the space contains four notes: ('a', 10, 20), ('a', 30), ('a', false, 40) and ('a', true, 50).

Matching Conditions
A statement matches a note if:
The sequence of types of the parameters equals the sequence of types of the note's elements.
If a parameter is not a variable, its value must equal the value of the element of the note in the same position.
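Both conditions can be checked with the TupleSpace sketch above, using the first three notes of the previous example (chosen so that each signature below matches exactly one note):

    space = TupleSpace()
    space.postnote('a', 10, 20)
    space.postnote('a', 30)
    space.postnote('a', False, 40)

    # Condition 1: the sequence of types must match.
    assert space.readnote('a', int) == ('a', 30)               # only note of arity 2
    assert space.readnote('a', bool, int) == ('a', False, 40)  # bool in position 2

    # Condition 2: a non-variable parameter must equal the note's element.
    assert space.readnote('a', 10, int) == ('a', 10, 20)       # literal 10 must match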

Example
Given integer variables m and n, a boolean variable b, and the four notes posted in the previous example:

removenote('b', m, n): no note with first element 'b' exists yet, so the process blocks; when postnote('b', 60, 70) below is executed, it is unblocked with m = 60, n = 70.
removenote('a', m, n): matches ('a', 10, 20), so m = 10, n = 20.
readnote('a', m): matches ('a', 30), so m = 30; the note stays in the space.
removenote('a', n): removes ('a', 30), so n = 30.
removenote('a', b, n): matches both ('a', false, 40) and ('a', true, 50); an arbitrary one of them is removed.
removenote('a', b, m): removes the remaining one of those two notes.
postnote('b', 60, 70): posts the note that unblocks the first statement.
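The blocking behavior of the first statement can be reproduced with the TupleSpace sketch above: a thread that removes a 'b' note blocks until another thread posts one.

    import threading

    space = TupleSpace()
    result = []

    def remover():
        # Blocks: no note with first element 'b' exists yet.
        result.append(space.removenote('b', int, int))

    t = threading.Thread(target=remover)
    t.start()
    space.postnote('b', 60, 70)   # unblocks the remover
    t.join()
    print(result)                 # [('b', 60, 70)]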

Formal Parameters
A formal parameter is a variable parameter that only matches notes containing the current value of the variable. It is denoted by appending the symbol = to the variable name, as in m=.
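In the TupleSpace sketch above there is no separate = notation: passing the variable's current value instead of its type already gives formal-parameter behavior, since a non-variable parameter must match exactly.

    space = TupleSpace()
    space.postnote('a', 10)
    space.postnote('a', 30)

    m = 30
    # Plays the role of removenote('a', m=): only a note whose second
    # element equals the current value of m can match.
    assert space.removenote('a', m) == ('a', 30)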

Load Balancing & Master-Worker
In the previous implementation of matrix multiplication, it makes no sense to remove one processor for repair or to replace a few processors by faster ones.

Load balancing: writing a program that is flexible, so that it adapts itself to the amount of processing power available.

One process, called the master, posts task notes in the space; other processes, called workers, take task notes from the space and perform the required computation.

The master-worker paradigm is very useful in concurrent programming, and it is particularly easy to implement with spaces because of their freedom from temporal and spatial constraints, as the sketch below shows.
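A minimal master-worker sketch over the TupleSpace class above. The task (squaring integers) and the note names 'task' and 'result' are invented for illustration; the point is that NUM_WORKERS can be changed freely without touching the master's code, which is exactly the load-balancing flexibility described above.

    import threading

    NUM_WORKERS = 3    # vary freely: the master's code does not change
    NUM_TASKS = 10

    space = TupleSpace()

    def worker():
        while True:
            _, i = space.removenote('task', int)   # take any available task note
            if i < 0:                              # negative index: poison pill
                return
            space.postnote('result', i, i * i)     # post the computed result

    threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for t in threads:
        t.start()

    # Master: post one task note per piece of work ...
    for i in range(NUM_TASKS):
        space.postnote('task', i)

    # ... collect the results in whatever order the workers finish ...
    results = dict(space.removenote('result', int, int)[1:] for _ in range(NUM_TASKS))

    # ... and stop the workers with one poison-pill note each.
    for _ in range(NUM_WORKERS):
        space.postnote('task', -1)
    for t in threads:
        t.join()

    print(results)   # {0: 0, 1: 1, 2: 4, ..., 9: 81}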