ITERATIVE COMPUTATIONS / CONCURRENCY
ID1218 Lecture 04, 2009-11-04
Christian Schulte
Software and Computer Systems
School of Information and Communication Technology
KTH Royal Institute of Technology, Stockholm, Sweden

A Fourth Look

A Better Length?

l([]) -> 0;
l([_|Xr]) -> 1 + l(Xr).

l([], N) -> N;
l([_|Xr], N) -> l(Xr, N + 1).

Two different functions: l/1 and l/2. Which one is better?
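
The accumulating l/2 needs a starting value for its counter. A minimal sketch of how it is typically wrapped behind a one-argument function (the name len/1 is an assumption, not from the slides):

    %% len/1 is a hypothetical wrapper; the slides only define l/1 and l/2.
    len(Xs) -> l(Xs, 0).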

Running l/1

Read each configuration as: remaining statements ; computed values (entries separated by •):

l([1,2,3]) ;
→ [1,2,3] • CALL(l/1) ;
→ CALL(l/1) ; [1,2,3]
→ 1+l([2,3]) ;
→ 1 • l([2,3]) • ADD ;
→ l([2,3]) • ADD ; 1
→ [2,3] • CALL(l/1) • ADD ; 1
→ CALL(l/1) • ADD ; [2,3] • 1
→ 1+l([3]) • ADD ; 1
→ 1 • l([3]) • ADD • ADD ; 1
→ l([3]) • ADD • ADD ; 1 • 1
→ …

Requires stack space linear in the length of the list (the pending ADDs accumulate).

Running l/2

l([1,2,3],0) ;
→ [1,2,3] • 0 • CALL(l/2) ;
→ 0 • CALL(l/2) ; [1,2,3]
→ CALL(l/2) ; 0 • [1,2,3]
→ l([2,3],0+1) ;
→ [2,3] • 0+1 • CALL(l/2) ;
→ 0+1 • CALL(l/2) ; [2,3]
→ 0 • 1 • ADD • CALL(l/2) ; [2,3]
→ 1 • ADD • CALL(l/2) ; 0 • [2,3]
→ ADD • CALL(l/2) ; 1 • 0 • [2,3]
→ CALL(l/2) ; 1 • [2,3]
→ l([3],1+1) ;
→ …

Requires constant stack space!

Appending Two Lists

app([], Ys) -> Ys;
app([X|Xr], Ys) -> [X | app(Xr, Ys)].

How much memory is needed? Easy: stack space linear in the length of the first list, since the pending CONS operations accumulate on the stack.

Iterative Computations

Iterative computations run with constant stack space.
They make use of last call optimization and essentially correspond to loops.
Tail-recursive procedures are computed by iterative computations.
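
As an aside, the non-iterative app/2 from the Appending Two Lists slide can be turned into an iterative computation with an accumulator and lists:reverse/2. This variant (and the name app_it) is a sketch, not from the slides:

    %% Tail-recursive append: constant stack space, at the cost of building
    %% an accumulator on the heap and reversing it at the end.
    app_it(Xs, Ys) -> app_it(Xs, Ys, []).

    app_it([], Ys, Acc)     -> lists:reverse(Acc, Ys);
    app_it([X|Xr], Ys, Acc) -> app_it(Xr, Ys, [X|Acc]).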

Concurrency

The World Is Concurrent!

Concurrent programs: several activities execute simultaneously (concurrently).

Most of the software you use is concurrent:
    operating system: IO, user interaction, many processes, …
    web browser, client, server, …
    telephony switches handling many calls
    …

Why Should We Care?

Software must be concurrent for many application areas.

Concurrency can be helpful for constructing programs:
    organize programs into independent parts
    concurrency allows making them independent with respect to how they execute

The essential question: how do concurrent programs interact?

Concurrent Programming Is Easy…

Erlang has been designed to be very good at concurrency.

Essential for concurrent programming here:
    message passing: very simple interaction between concurrent programs
    light-weight processes
    no shared data structures: independence

Concurrency in Erlang

Concurrent programs are composed of communicating processes:
    each process has a unique id: PID
    processes are spawned to execute functions
    processes send messages to PIDs and receive messages
    messages are Erlang data structures

Erlang processes are not OS processes:
    one Erlang OS process can host lots of Erlang processes, created by spawning
    they are independent of the underlying OS

Creating Processes

Spawning a process:
    takes a function as input
    creates a new, concurrently running process executing that function
    returns the PID of the newly created process
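
A minimal sketch of spawning, as might be tried in the shell (the printed text is just for illustration):

    %% spawn/1 starts a new process running the given function and
    %% immediately returns that process's PID.
    Pid = spawn(fun() -> io:format("hello from ~p~n", [self()]) end).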

Our First Process

loop() ->
    receive
        kill ->
            io:format("Aargh: dead...~n");
        Other ->
            io:format("Yummy: ~p~n", [Other]),
            loop()
    end.

Running the Process

> P = spawn(fun loop/0).
…
> P ! apple.
…
Yummy: apple
> P ! bogey.
…
Yummy: bogey
> P ! kill.
…
Aargh: dead...
> P ! ham.
…

Processes Run Forever…

Why does the process not run out of memory?
It is a property of loop/0: the recursive call is a last call, so loop/0 runs as an iterative computation in constant stack space.

Primitives

Creating processes:
    spawn(F) for function value F
    spawn(M,F,As) for function F in module M with argument list As

Sending messages:
    PID ! Message

Receiving messages:
    receive … end with clauses

Who am I?
    self() returns the PID of the current process
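
A small sketch that exercises all four primitives (the name echo/0 is an illustration, not from the slides):

    %% A process that waits for one message and sends it back to the sender.
    echo() ->
        receive
            {From, Msg} -> From ! {self(), Msg}
        end.

    %% In the shell:
    %% Pid = spawn(fun echo/0),        %% spawn(F)
    %% Pid ! {self(), hello},          %% send a message
    %% receive {Pid, M} -> M end.      %% receive; evaluates to hello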

Processes

Each process has a mailbox:
    incoming messages are stored in order of arrival
    sending puts the message in the mailbox

Processes are executed fairly:
    if a process can receive a message or compute, it eventually will (in practice, pretty soon)
    simple priorities available (low)

Message Sending

Message sending P ! M is asynchronous:
    the sender does not wait until the message has been processed
    the sender continues execution immediately
    the send expression evaluates to M

When a process sends messages M1 and M2 to the same PID, they arrive in that order in the mailbox (FIFO ordering).
When a process sends messages M1 and M2 to different processes, the order of arrival is undefined.
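
Because P ! M evaluates to M, sends can be chained; a tiny sketch (P, P1 and P2 are assumed to be bound to PIDs):

    hello = P ! hello,    %% the send expression evaluates to the message sent
    P1 ! P2 ! hi.         %% ! is right-associative: hi is sent to P2, then to P1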

Message Receipt

Only receive inspects the mailbox; all incoming messages are put into the mailbox.
Messages are processed in order of arrival, that is, receive processes the mailbox in order:
    if the receive statement has a matching clause for the first message, remove the message and execute the clause (always choose the first matching clause)
    otherwise, continue with the next message
Unmatched messages are kept in their original order.

Receive Example

Initial mailbox (head first): a, b, c, d

1. receive c -> … end            removes c, leaving a, b, d
2. receive d -> …; b -> … end    removes b (the first matching message), leaving a, d
3. receive M -> … end            removes a, leaving d
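
A sketch that reproduces this behaviour in the shell by filling our own mailbox first:

    self() ! a, self() ! b, self() ! c, self() ! d,
    receive c -> got_c end,                %% mailbox is now a, b, d
    receive d -> got_d; b -> got_b end,    %% matches b; mailbox is now a, d
    receive M -> M end.                    %% matches a; d stays in the mailbox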

Receiving Multiple Messages

seq() ->
    receive
        a ->
            receive
                b -> …
            end;
        c -> …
    end.

In other words: processes can use different receive statements.

What does it mean:
    is a sent before b?
    is a received before b?

Receive With Timeouts

receive
    …
after Time ->
    Expr
end

If no matching message arrives within Time milliseconds, Expr is evaluated.
If only the after clause is present, the process sleeps for Time milliseconds.
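
The sleep idiom as a minimal sketch (the name sleep/1 is an assumption; this is essentially what timer:sleep/1 in the standard library does):

    %% A receive with only an after clause: wait T milliseconds, then return.
    sleep(T) ->
        receive
        after T -> true
        end.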

Flushing the Mailbox

flush() ->
    receive
        _ -> flush()
    after 0 ->
        true
    end.

Priority Receipt

priority() ->
    receive
        alarm -> …
    after 0 ->
        receive
            M -> …,
            priority()
        end
    end.

Timed Repeater

start(T, F) ->
    spawn(fun() -> rep(T, F) end).

stop(PID) ->
    PID ! stop.

rep(T, F) ->
    receive
        stop -> true
    after T ->
        F(),
        rep(T, F)
    end.
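
A usage sketch (the one-second tick is just an illustration):

    %% Print "tick" once per second until stopped.
    Ticker = start(1000, fun() -> io:format("tick~n") end),
    %% … later …
    stop(Ticker).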

Different Message Types

receive
    {a, … } -> … ;
    {b, … } -> …
    …
end

Use tuples as messages: the first field of the tuple describes the message type.

Client Server Architectures

Client Server

A single server processes requests:
    wait for a request
    perform the request
    reply to the request (ok or a result)

Multiple clients send requests:
    send a request
    wait for the reply

A very common architecture: WWW, RPC, RMI, …
Example here: RPC.

How to Reply: RPC

The server must know how to reply to the client:
    the client sends the request plus its own PID
    the PID of a process is available via self()

After the server has fulfilled the request, it sends the reply back to the sender's PID.

RPC is synchronous: the client must wait until the reply is received.

RPC Server

serve() ->
    receive
        {Client, Request} ->
            Response = process(Request),
            Client ! Response,
            serve()
    end.
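
Here process/1 stands for whatever computes the response; a hypothetical handler, just for illustration (not part of the slides):

    %% Hypothetical request handler used in the examples below.
    process({add, A, B}) -> A + B;
    process(Request)     -> {unknown, Request}.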

RPC Client

rpc(Server, Request) ->
    Server ! {self(), Request},
    receive
        Response -> Response
    end.

This is easy… but wrong!
Assumption: the first message in the mailbox is from the server.
But it can be from anybody!

Who Talks To Me?

If we only want to receive messages from process PID, messages must include the sender's PID.

Sending:
    P ! {self(), … }

Receipt:
    PID = …,
    receive
        {P, …} when P == PID -> …
    end

Scoping in Patterns Revisited

The following

    PID = …,
    receive
        {P, …} when P == PID -> …
    end

can be rewritten to

    PID = …,
    receive
        {PID, …} -> …
    end

Variables already introduced are not pattern variables but stand for the values they are bound to.
Whoa, this is ugly (my personal taste).

A Working RPC Client

rpc(Server, Request) ->
    Server ! {self(), Request},
    receive
        {Server, Response} -> Response
    end.

This is still easy… but now correct.
Why? Because there can only be one pending reply, which is not so easy to see.
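
Note that this client only matches replies of the form {ServerPid, Response}, so the server must tag its reply with its own PID. A sketch of the matching server and a possible shell session (process/1 is the hypothetical handler from above):

    serve() ->
        receive
            {Client, Request} ->
                Response = process(Request),
                Client ! {self(), Response},   %% tag the reply with the server's PID
                serve()
        end.

    %% Server = spawn(fun serve/0),
    %% rpc(Server, {add, 1, 2}).    %% returns 3 with the hypothetical process/1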

The Registry

Register processes under names (atoms), for example: clock, logger, …

Operations:
    register(Name, Pid)
    unregister(Name)
    whereis(Name) returns the PID or undefined
    registered() returns all registered names

Example:
    register(a, PID), a ! M

As always: the registry is scary…
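
A small sketch combining the registry with loop/0 from earlier (the name logger is just an illustration):

    %% Register the process under a name, then send to the name instead of the PID.
    register(logger, spawn(fun loop/0)),
    logger ! hello.    %% whereis(logger) would return the underlying PID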

Summary & Outlook

Summary: Concurrency

Processes:
    communicate by message sending
    feature an ordered mailbox
    execute selective receive statements; messages are buffered until removed by receive
    are scheduled fairly
    can use timeouts

Simple concurrency pattern: client-server, request-reply.

Outlook: L05

How can concurrent computations synchronize with each other and cooperate?
What are the properties of programs with and without message sending and message receipt?