
Chapter 11: Distributed Processing

Parallel programming
Principles of parallel programming languages
Concurrent execution
–Programming constructs
–Guarded commands
–Tasks
Persistent systems
Client-server computing

Parallel processing

The execution of more than one program/subprogram simultaneously. A subprogram that can execute concurrently with other subprograms is called a task or a process.
Hardware supported:
–multiprocessor systems
–distributed computer systems
Software simulated: time-sharing

Principles of parallel programming languages

Variable definitions
–mutable: values may be assigned to the variables and changed during program execution (as in sequential languages)
–definitional: a variable may be assigned a value only once

Principles….

Parallel composition: a parallel statement, which causes additional threads of control to begin executing
Execution models (program structure)
–Transformational: e.g., parallel matrix multiplication
–Reactive: the program responds to events in its environment

Principles….

Communication
–shared memory, with common data objects accessed by each parallel program
–messages
Synchronization: parallel programs must be able to coordinate their actions

Concurrent execution

Programming constructs
–Using parallel execution primitives of the operating system (e.g., C can invoke the fork operation of Unix)
–Using parallel constructs: a programming-language parallel construct indicates parallel execution
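As a sketch of the first approach, the snippet below uses Python's multiprocessing module to play the role of the Unix fork primitive: the parent creates child processes, they run concurrently, and the parent waits for them. The worker body and function names are illustrative, not from the slides.

```python
from multiprocessing import Process, Queue

def worker(q, ident):
    # Each child process runs concurrently with the parent.
    q.put(ident * ident)   # some independent piece of work

def run_parallel(n=3):
    q = Queue()
    procs = [Process(target=worker, args=(q, i)) for i in range(n)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()           # parent waits, as a Unix parent waits on fork'd children
    return sorted(q.get() for _ in range(n))

if __name__ == "__main__":
    print(run_parallel(3))
```

The OS, not the language, provides the concurrency here; the language merely exposes the primitive, which is exactly the contrast the slide draws with language-level parallel constructs.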

Example: AND statement (programming-language level)

Syntax: statement1 and statement2 and … and statementN
Semantics: all statements execute in parallel.

call ReadProcess and call WriteProcess and call ExecuteUserProgram;
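The AND construct can be approximated with threads: start every statement in parallel and continue only when all have finished. This is a sketch, with the statement bodies invented for illustration.

```python
import threading

def run_and(*statements):
    """Approximate the AND statement: execute all statements in
    parallel; control continues only when every one has finished."""
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

results = []
lock = threading.Lock()

def make_stmt(name):
    def stmt():
        with lock:               # protect the shared list
            results.append(name)
    return stmt

# call ReadProcess and call WriteProcess and call ExecuteUserProgram;
run_and(make_stmt("ReadProcess"),
        make_stmt("WriteProcess"),
        make_stmt("ExecuteUserProgram"))
```

Note that the order in which the three statements complete is nondeterministic; only their joint completion is guaranteed.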

Guarded commands

Guard: a condition that can be true or false
Guards are associated with statements
A statement is executed when its guard becomes true

Example

Guarded if:
if B1 → S1 | B2 → S2 | … | Bn → Sn fi
Guarded repetition statement:
do B1 → S1 | B2 → S2 | … | Bn → Sn od
Bi - guards, Si - statements
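The guarded repetition statement can be sketched as a loop that repeatedly picks, nondeterministically, one alternative whose guard is true and terminates when no guard holds. The example below encodes Dijkstra's classic GCD program, do x>y → x:=x−y | y>x → y:=y−x od; the helper names are illustrative.

```python
import random

def guarded_do(alternatives, state):
    """Guarded repetition: while any guard is true, nondeterministically
    choose one enabled alternative and execute its statement; stop when
    every guard is false."""
    while True:
        enabled = [(g, s) for g, s in alternatives if g(state)]
        if not enabled:
            return state
        _, stmt = random.choice(enabled)   # nondeterministic choice
        stmt(state)

# do x > y -> x := x - y  |  y > x -> y := y - x  od
state = {"x": 12, "y": 18}
guarded_do([
    (lambda s: s["x"] > s["y"], lambda s: s.__setitem__("x", s["x"] - s["y"])),
    (lambda s: s["y"] > s["x"], lambda s: s.__setitem__("y", s["y"] - s["x"])),
], state)
```

On termination both guards are false, so x = y = gcd(12, 18) = 6.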

Tasks

Subprograms that run in parallel with the program that has initiated them
Dependent on the initiating program: the initiating program cannot terminate until all of its dependents terminate
A task may have multiple simultaneous activations

Task interaction

Tasks unaware of each other
Tasks indirectly aware of each other – use shared memory
Tasks directly aware of each other

Control problems

Mutual exclusion
Deadlock: P1 waits for an event to be produced by P2, while P2 waits for an event to be produced by P1
Starvation: P1, P2, P3 need a non-shareable resource; P1 and P2 alternately use the resource, while P3 is denied access to it

Mutual exclusion

Two tasks require access to a single non-shareable resource.
Critical resource: the resource in question
Critical section: the portion of the program that uses the resource
The rule: only one task at a time can be allowed in its critical section
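The rule can be demonstrated with a shared counter: without mutual exclusion, concurrent increments can be lost; with a lock guarding the critical section, every increment survives. A minimal sketch, with the task body invented for illustration:

```python
import threading

counter = 0
mutex = threading.Lock()

def task(increments):
    global counter
    for _ in range(increments):
        with mutex:        # critical section: only one task inside at a time
            counter += 1

threads = [threading.Thread(target=task, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With four tasks of 10,000 increments each, the final counter is exactly 40,000; remove the lock and the read-modify-write races can lose updates.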

Synchronization of tasks

Interrupts - provided by the OS
Semaphores - shared data objects with two primitive operations: signal and wait
Messages - information is sent from one task to another; the sending task may continue to execute
Guarded commands - force synchronization by ensuring conditions are met before executing tasks
Rendezvous - similar to messages, but the sending task waits for an answer

Semaphores

A semaphore is a variable that has an integer value
May be initialized to a nonnegative number
The wait operation decrements the semaphore value
The signal operation increments the semaphore value
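The four properties above can be made concrete by building a counting semaphore on top of a condition variable: wait decrements the value, blocking while it is zero; signal increments it and wakes a waiter. This is a sketch of the semantics, not production code.

```python
import threading

class Semaphore:
    """Counting semaphore sketch: wait decrements (blocking at zero),
    signal increments and wakes one blocked waiter."""
    def __init__(self, value=0):
        assert value >= 0          # initialized to a nonnegative number
        self._value = value
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while self._value == 0:
                self._cond.wait()  # suspend until a signal arrives
            self._value -= 1

    def signal(self):
        with self._cond:
            self._value += 1
            self._cond.notify()
```

Initializing the value to 1 gives a binary semaphore usable for mutual exclusion; initializing it to 0 gives a pure signaling device, as with the delay semaphore in the producer/consumer solution below.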

Mutual exclusion with semaphores

Each task performs:
wait(s);
/* critical section */
signal(s);

(Figure: a buffer B[0], B[1], B[2], … with IN and OUT pointers.)

The Producer/Consumer problem with infinite buffer

Solution:
s - semaphore for entering the critical section
delay - semaphore to ensure reading from a non-empty buffer

Producer:
produce();
wait(s);
append();
signal(delay);
signal(s);

Consumer:
wait(delay);
wait(s);
take();
signal(s);
consume();
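The solution above translates almost line-for-line into Python's threading semaphores: s (initialized to 1) guards the buffer, and delay (initialized to 0) counts available items so the consumer blocks on an empty buffer. Buffer and helper names are illustrative.

```python
import threading

buffer = []                        # the "infinite" buffer
s = threading.Semaphore(1)         # mutual exclusion for buffer access
delay = threading.Semaphore(0)     # counts items; consumer blocks at zero
consumed = []

def producer(items):
    for item in items:
        # produce()
        s.acquire()                # wait(s)
        buffer.append(item)        # append()
        delay.release()            # signal(delay)
        s.release()                # signal(s)

def consumer(n):
    for _ in range(n):
        delay.acquire()            # wait(delay): block until non-empty
        s.acquire()                # wait(s)
        consumed.append(buffer.pop(0))   # take()
        s.release()                # signal(s)
        # consume()

p = threading.Thread(target=producer, args=(list(range(5)),))
c = threading.Thread(target=consumer, args=(5,))
c.start(); p.start()
p.join(); c.join()
```

Note the ordering in the producer: signal(delay) is issued while still inside the critical section, so an item is never announced before it has actually been appended.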

Persistent systems

Traditional software:
–Data stored outside of the program - persistent data
–Data processed in main memory - transient data
Persistent languages do not make a distinction between persistent and transient data; changes are automatically reflected in the database

Design issues

A mechanism to indicate that an object is persistent
A mechanism to address a persistent object
Simultaneous access to an individual persistent object - semaphores
Checking type compatibility of persistent objects - structural equivalence

Client-server computing

Network models:
–centralized, where a single processor does the scheduling
–distributed or peer-to-peer, where each machine is an equal and the process of scheduling is spread among all of the machines

Client-server mediator architecture

Client machine:
–Interacts with the user
–Has a protocol to communicate with the server
Server:
–Provides services: retrieves data and/or programs
Issues: the server may be communicating with multiple clients simultaneously
–Need to keep each such transaction separate
–Multiple local address spaces in the server
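A minimal sketch of these roles, assuming a simple echo protocol invented for illustration: the server accepts connections and hands each client its own thread, so concurrent transactions stay separate, while each client speaks the agreed protocol.

```python
import socket
import threading

def server(sock):
    """Accept clients; one thread per connection keeps each
    client's transaction in its own context."""
    def handle(conn):
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo:" + data)   # the toy protocol
    while True:
        try:
            conn, _ = sock.accept()
        except OSError:
            return                          # listening socket closed: shut down
        threading.Thread(target=handle, args=(conn,)).start()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))                  # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]
threading.Thread(target=server, args=(srv,), daemon=True).start()

# Two clients, each using the client-side protocol.
replies = []
for msg in (b"a", b"b"):
    with socket.create_connection(("127.0.0.1", port)) as cli:
        cli.sendall(msg)
        replies.append(cli.recv(1024))
srv.close()
```

The thread-per-connection design is the simplest way to give the server a separate local context per client; real servers often use process pools or event loops for the same separation at scale.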