
1 Concurrent Programming

2 Concurrency
 Concurrency means that a program has multiple paths of execution running at (almost) the same time. Examples:
 A web server can handle connections from several clients while still listening for new connections.
 A multi-player game may allow several players to move things around the screen at the same time.
How can we do this? How do the tasks communicate with each other?
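A minimal sketch of the idea in Java (the two task bodies are hypothetical stand-ins for the web-server and game examples above):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TwoTasks {
    // Runs two placeholder tasks in separate threads and reports how many finished.
    static int runBoth() throws InterruptedException {
        AtomicInteger finished = new AtomicInteger(0);
        Runnable serveClient = finished::incrementAndGet; // stand-in: handle one client
        Runnable movePlayer  = finished::incrementAndGet; // stand-in: move one player
        Thread t1 = new Thread(serveClient);
        Thread t2 = new Thread(movePlayer);
        t1.start();   // both threads now run at (almost) the same time
        t2.start();
        t1.join();    // wait for both to finish
        t2.join();
        return finished.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBoth() + " tasks finished");
    }
}
```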

3 Performing Concurrency
A computer can implement concurrency by...
 parallel execution: run tasks on different CPUs in the same computer (requires a multi-processor machine)
 time-sharing: divide CPU time into slices, and multi-process several tasks on the same CPU
 distributed computing: use the CPUs of several computers in a cluster to run different tasks
[Diagram: tasks interleaved along a time axis]

4 Design for Concurrency
If we have multiple processes or threads of execution for a single job, we must decide issues of...
 Allocating CPU: previous slide
 Memory: do all tasks share the same memory, or have separate memory?
 Communication: how can tasks communicate?
 Control: how do we start a task? stop a task? wait for a task?

5 Memory
 shared memory: if one thread changes memory, it affects the others. Problem: how do you share the stack?
 separate memory: each thread has its own memory area
 combination: each thread has a separate stack area; the threads share a static area, may share the current environment, and may compete for the same heap.

6 Shared Memory and Environment
/* vfork( ) creates a new process that shares the parent's memory */
pid = vfork( );
if ( pid == 0 ) { /* child process: vfork returns 0 */
    task1( );
    task3( );
} else { /* parent process: vfork returns the child's pid */
    task2( );
}
[Diagram: the stack, showing main's frame, task1's frame, free space above, and the stack pointer SP]
Tasks don't share registers. After the child calls task1( ), where is the stack pointer (SP) of the parent process? Where will task2( )'s frame be placed?

7 Processes and memory
 Heavy-weight processes: each process gets its own memory area and its own environment. A Unix "process" fits this model.
 Light-weight processes: processes share the same memory area, but each has its own context and maybe its own stack. "Threads" in Java, C, and C# fit this model.

8 Heavy-weight Processes
 The UNIX fork( ) system call creates a child process that gets a copy of the parent's memory pages.

9 Example: Web Server
/* bind to port 80 */
bind( socket, &address, sizeof(address) );
listen( socket, 5 );
while ( 1 ) { /* run forever */
    /* wait for a client to connect */
    client = accept( socket, &clientAddress, &len );
    /* fork a new process to handle the client */
    pid = fork( );
    if ( pid == 0 ) {
        handleClient( client, clientAddress ); /* child serves this client */
        exit( 0 );
    }
    close( client ); /* parent: the child owns this connection now */
}
The server forks a new process to handle each client, so the server can keep listening for more connections.

10 Example: fork and wait for child
pid = fork( );
if ( pid == 0 )
    childProcess( );
else {
    int status;
    wait( &status ); /* wait for the child to exit */
}
wait( ) causes the parent process to block until a child exits.

11 Threads: light-weight processes
 Threads share a memory area.
 They conserve resources and allow better communication between tasks.
task1 = new Calculator( );
task2 = new AlarmClock( );
Thread thread1 = new Thread( task1 );
Thread thread2 = new Thread( task2 );
thread1.start( );
thread2.start( );

12 States of a Thread

13 Stack Management for Threads
 In some implementations (like C), threads share the same memory but each requires its own stack space.
 Each thread must be able to call functions separately.
[Diagram: main's stack with separate stack spaces for threads 1-5 branching off it]
Cactus Stack: dynamic and static links can refer to the parent's stack.

14 Communication between Tasks
 Reading and writing to a shared buffer: the producer-consumer model (see the Java Tutorial).
 Using an I/O channel called a pipe.
 Signaling: exceptions or interrupts.
pin = new PipedInputStream( );
pout = new PipedOutputStream( pin );
task1 = new ReaderTask( pin );
task2 = new WriterTask( pout );
Thread thread1 = new Thread( task1 );
Thread thread2 = new Thread( task2 );
thread1.start( );
thread2.start( );
[Diagram: task1 and task2 connected by a pipe]

15 Thread Coordination
[Diagram: thread1 and thread2 alternate between processing and sleeping; each calls notify( ) to wake the other and wait( ) to go to sleep]
yield( ) gives other threads a chance to use the CPU.
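The wait( )/notify( ) handshake can be sketched like this (the class and method names are our own, not from the slides):

```java
import java.util.LinkedList;
import java.util.Queue;

public class Mailbox {
    private final Queue<Integer> box = new LinkedList<>();

    public synchronized void put(int value) {
        box.add(value);
        notify();              // wake a thread sleeping in wait()
    }

    public synchronized int take() throws InterruptedException {
        while (box.isEmpty())
            wait();            // release the lock and sleep until notified
        return box.remove();
    }

    public static void main(String[] args) throws InterruptedException {
        Mailbox m = new Mailbox();
        new Thread(() -> m.put(42)).start();  // producer thread
        System.out.println(m.take());         // consumer blocks until put() runs
    }
}
```

Note the while loop around wait( ): a woken thread must re-check its condition, because notify( ) is only a hint that something changed.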

16 Critical Code: avoiding race conditions
Example: one thread pushes data onto a stack, another thread pops data off the stack.
Problem: you may have a race condition: one thread starts to pop data off the stack, but the thread is interrupted (by the scheduler) and the other thread pushes data onto the stack.
push( Object value ) {
    n = top + 1;
    stack[ n ] = value;
    top = n;   /* three steps -- the thread can be interrupted between them */
}
pop( ) {
    return stack[ top-- ];
}
[Diagram: thread1 and thread2 see different stack contents ("is", "a", "this", "problem?") after the interleaved push and pop -- is this a race problem?]

17 Exclusive Access to Critical Code
 programmer control: use a shared flag variable or semaphore to indicate when the critical block is free
 executor control: use the synchronization features of the language to restrict access to critical code
public synchronized void push( Object value ) {
    if ( top < stack.length - 1 )
        stack[ ++top ] = value;
}
public synchronized Object pop( ) {
    if ( top >= 0 )
        return stack[ top-- ];
    return null;
}
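The "programmer control" option can be sketched with java.util.concurrent.Semaphore; the semaphore-guarded stack below is our own illustration, not code from the slides:

```java
import java.util.concurrent.Semaphore;

public class SemStack {
    private final Object[] stack = new Object[16];
    private int top = -1;
    private final Semaphore mutex = new Semaphore(1); // one permit = block is free

    public void push(Object value) throws InterruptedException {
        mutex.acquire();            // wait until the critical block is free
        try {
            if (top < stack.length - 1)
                stack[++top] = value;
        } finally {
            mutex.release();        // mark the block free again
        }
    }

    public Object pop() throws InterruptedException {
        mutex.acquire();
        try {
            return (top >= 0) ? stack[top--] : null;
        } finally {
            mutex.release();
        }
    }
}
```

With one permit the semaphore behaves like a lock, so at most one thread is ever inside push( ) or pop( ) at a time.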

18 Avoiding Deadlock
 Deadlock: when two or more tasks are each waiting for the other to release a required resource.
 The program waits forever.
 Rule for Avoiding Deadlock: exercise for students
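One well-known rule, offered here as a sketch rather than the only answer to the exercise: always acquire locks in a fixed global order. Two bank-account transfers going in opposite directions then cannot each hold one lock while waiting for the other (Account and Transfer are our own illustration):

```java
class Account {
    final int id;
    int balance;
    Account(int id, int balance) { this.id = id; this.balance = balance; }
}

class Transfer {
    // Locks the two accounts in a fixed order (by id), so opposing
    // transfers can never lock them in opposite orders and deadlock.
    static void move(Account from, Account to, int amount) {
        Account first  = (from.id < to.id) ? from : to;
        Account second = (from.id < to.id) ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance   += amount;
            }
        }
    }
}
```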

19 Design Patterns and Threads
Observer Pattern: one task is a source of events that other tasks are interested in. Each task wants to be notified when an interesting event occurs.
Solution: wrap the source task in an Observable object. Other tasks register with the Observable as observers. The Observable task calls notifyObservers( ) when an interesting event occurs.
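A minimal sketch of the pattern with a hand-rolled observer interface (java.util.Observable also exists in the JDK, but the names below are our own):

```java
import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(String event);   // called when an interesting event occurs
}

class EventSource {
    private final List<Observer> observers = new ArrayList<>();

    public void addObserver(Observer o) {
        observers.add(o);        // a task registers its interest
    }

    public void notifyObservers(String event) {
        for (Observer o : observers)
            o.update(event);     // tell every registered task
    }
}
```

An interested task registers once with addObserver( ) and is then called back on every event, so the source never needs to know which tasks are listening.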


21 Simple Producer-Consumer Cooperation Using Semaphores Figure 11.2

22 Multiple Producers-Consumers Figure 11.3

23 Producer-Consumer Monitor Figure 11.4

24 States of a Java Thread Figure 11.5

25 Ball Class Figure 11.6

26 Initial Application Class Figure 11.7

27 Final Bouncing Balls init Method Figure 11.8

28 Final Bouncing Balls paint Method Figure 11.9

29 Bouncing Balls Mouse Handler Figure 11.10

30 Bouncing Balls Mouse Handler Figure 11.11

31 Buffer Class Figure 11.12

32 Producer Class Figure 11.13

33 Consumer Class Figure 11.14

34 Bounded Buffer Class Figure 11.15

35 Sieve of Eratosthenes Figure 11.16

36 Test Drive for Sieve of Eratosthenes Figure 11.17

