Concurrency
What is Concurrency?
The ability to execute two operations at the same time.
Physical concurrency
– multiple processors on the same machine
– distributing across networked machines
Logical concurrency
– the illusion of parallelism, or partial parallelism
The designer/programmer doesn't care which!
Real or apparent? It depends on your point of view.
Multiple computers in distributed computing, or multiple processors in one computer
Multiple clients on the same computer or on multiple computers
Multiple servers on one machine or many
Where does the complexity lie?
– It varies with how you design the system
Why is concurrency important?
One machine is only capable of a limited speed.
Multiple machines/processors can
– share the workload and gain better utilization
– match responsibilities/requirements to each machine's abilities
– place secure processes in secure environments
– use parallel processors tailored to the problem domain
A single machine's OS can give limited parallelism through scheduling.
Concurrency considerations
What level of concurrency to consider
How it is handled on a single processor
– understand process scheduling
How to share resources
How to synchronize activity
How to synchronize I/O specifically
Process Scheduling
Process
The activation of a program. There can be multiple processes of one program.
Entities associated with a process:
– instruction pointer
– the user who owns it
– memory locations of user data areas
– its own run-time stack
Process Scheduling
The order of process execution is unpredictable.
The duration of execution is unpredictable.
Whether the concurrency is apparent or real is not relevant to the designer.
There will generally be a need to resynchronize activity between cooperating processes.
Context switch
When the OS switches the running process, it manipulates internal process tables.
Registers must be stored and loaded.
Threads minimize that effort: same process, but a different stack.
Context switching is necessary overhead, but that overhead must be balanced against the advantage of concurrency.
Operating system scheduling (queue diagram)
Before: Ready: 205, 198, 201 | Running: 200 | Blocked: 177, 206, 180, 185
Process 200 blocks when reading from the disk.
After: Ready: 198, 201 | Running: 205 | Blocked: 177, 206, 180, 185, 200
What other activities are important in scheduling?
Jobs go from RUNNING to READY when they lose their time slot.
Jobs go from BLOCKED to READY when the I/O operation they are waiting for completes.
Jobs go from RUNNING to being removed completely upon exit.
Concurrency at many levels
Process level
– Unix fork command…. (review example)
– network client-server level
Subprocess level (like a procedure): thread
Statement level
Scheduling level
How do you get limited parallelism from the OS?
Timeline (CPU vs. disk controller):
– Process A runs on the CPU, then blocks on a read
– the disk reads for A while A is blocked; meanwhile Process B runs, then blocks
– the disk reads for B while B is blocked; Process C runs for its time slice
There are times when both processors (the CPU and the disk controller) are busy, so real parallelism does occur, but not at the level of the CPU.
What if YOU have to create the concurrency?
High-level Concurrency
Unix and Windows 95/98/NT

Unix process concurrency uses fork and exec:
– fork: clone self; parent and child differ by ppid; two processes
– exec: a new program replaces the original; a single, different process

Windows 95/98/NT threads:
– part of the same process
– own copy of locals; shared copy of globals
– shared resources, like file descriptors
– each thread has its own activation record
– unlike fork, parent and child do not begin/continue at the same spot: _beginthread() needs a procedure, and the thread ceases execution on returning from it
– CreateProcess() is like fork/exec
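To make the fork/exec pairing concrete, here is a minimal sketch (POSIX only; the choice of "ls -l" as the new program is just an illustration):

    // forkexec.cpp -- child replaces itself with a new program via exec
    #include <iostream>
    #include <unistd.h>     // fork, execlp
    #include <sys/wait.h>   // waitpid
    using namespace std;

    int main() {
        pid_t cid = fork();                    // clone self: now two processes
        if (cid == 0) {
            // child: replace this process image with "ls -l"
            execlp("ls", "ls", "-l", (char *)NULL);
            cerr << "exec failed" << endl;     // reached only if exec fails
            return 1;
        }
        waitpid(cid, NULL, 0);                 // parent: wait for the child
        cout << "child " << cid << " finished" << endl;
        return 0;
    }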
Unix fork() at the process level:

    int main() { fork(); cout << "Hi"; }

produces HiHi. The question is… who "Hi"ed first?
Timeline: after the fork() there are two copies of the same program, Process 1 and Process 2, and each executes cout << "Hi". Either process may print first, so the output is Hi Hi in one order or the other.
Another example:

    fork(); cout << "a";
    fork(); cout << "b";

How many processes are generated? How many possible outputs can you see?
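Here is a compilable sketch of that example to experiment with (POSIX only; the flush calls are an addition so that buffered output is not duplicated when a process forks):

    // forkdemo.cpp -- how many processes, and how many outputs?
    #include <iostream>
    #include <unistd.h>   // fork
    using namespace std;

    int main() {
        fork();                  // every existing process splits in two
        cout << "a" << flush;
        fork();                  // every existing process splits again
        cout << "b" << flush;
        return 0;
    }

Run it several times and compare the orderings of the a's and b's.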
A more common Unix example
Use in servers (and clients) (not important for us):

    while (1) {
        talksock = accept(listsock, ...);
        cid = fork();
        if (cid > 0) {
            // this is parent code: keep using listsock
            close(talksock);
            // repeat loop
        } else {
            // this is child code
            close(listsock);
            // … use talksock …
            exit(0);
        }
    }
Threads (Windows style)
Non-threaded

    void count(int i) {
        int j;
        for (j = 1; j <= i; j++)
            cout << j << endl;
    }

    void main() {
        count(4);
        count(4);
    }

Output: 12341234
Threaded

    void count(int i) {
        int j;
        for (j = 1; j <= i; j++)
            cout << j << endl;
    }

    void main() {
        _beginthread((void (*)(void *)) count, 0, (void *) 5);
        count(4);
    }

Output (one possible interleaving): 12123344
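A compilable version of that sketch might look as follows (Windows-only, assuming an MSVC-style runtime; _beginthread requires a routine taking a void*, and the Sleep at the end is a crude addition so main doesn't exit before the thread finishes):

    // threadcount.cpp -- a Windows thread via _beginthread
    #include <iostream>
    #include <cstdint>      // intptr_t
    #include <process.h>    // _beginthread
    #include <windows.h>    // Sleep
    using namespace std;

    void count(int i) {
        for (int j = 1; j <= i; j++)
            cout << j << endl;
    }

    void count_thread(void *arg) {                 // _beginthread needs void(*)(void*)
        count((int)(intptr_t)arg);
    }

    int main() {
        _beginthread(count_thread, 0, (void *)5);  // thread counts 1..5
        count(4);                                  // main thread counts 1..4
        Sleep(1000);                               // crude wait for the thread
        return 0;
    }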
The Synchronization Problem
Concurrency frequently requires synchronization!

Cooperation synchronization: A is working on something, and B must wait for A to finish.
    A does this:  x = f + g;
    B does this:  h = x + y;

Competition synchronization: A needs to read a stream, B needs to read the same stream, and only one can read at a time.
    A does this:  instr >> a;
    B does this:  instr >> b;

We'll see how to do this later!
The synchronization problem (timeline; T is in shared memory, initially T = 3; Task A computes T = T + 1 and Task B computes T = T * 2):

1. A fetches T (3)
2. A increments its copy (4)
3. A loses the CPU
4. B fetches T (3)
5. B doubles its copy (6)
6. B stores T → T = 6
7. A gets the CPU
8. A stores T → T = 4, and B's update is lost

TRY THIS: What other combinations could occur?
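The lost update is easy to reproduce in code; here is a sketch using std::thread for portability (the slide's T = 3 and the two operations are kept; with operations this short a real interleaving is rare in practice, but all four outcomes are possible):

    // race.cpp -- unsynchronized access to shared T
    #include <iostream>
    #include <thread>
    using namespace std;

    int T = 3;                          // shared memory

    int main() {
        thread a([] { T = T + 1; });    // Task A
        thread b([] { T = T * 2; });    // Task B
        a.join();
        b.join();
        // 8 means A then B; 7 means B then A; 4 or 6 means an update was lost
        cout << "T = " << T << endl;
        return 0;
    }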
The essence of the problem
There are times during which exclusive access must be granted.
These areas of our program are called critical sections.
Sometimes this is handled by disabling interrupts so the process keeps the processor.
Most often it is handled through a more controlled mechanism, like a semaphore.
Where do we see it? EVERYWHERE
Database access
Any data structure
File/printer/memory resources
Any intersection of need between processing entities for data
Synchronization Solutions for C/C++ (Java later)
Semaphore
One means of synchronizing activity.
Managed by the operating system
– implementing it yourself will not work (you cannot guarantee mutual exclusion)
Typical wait and release functions are called by applications.
A count is associated with the semaphore
– 0/1 (binary) implies only a single resource to manage
– a larger value means multiple resources
A queue holds the waiting (blocked) processes.
wait and release

    wait(semA) {
        if (semA > 0)
            decr semA;
        else
            put the job in the semA queue;  // block it
    }

    release(semA) {
        if (semA queue empty)
            incr semA;
        else
            remove a job from the semA queue;  // unblock it
    }

This represents what the operating system does when an application asks for access to the resource by calling wait or release on the semaphore.
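On POSIX systems, wait and release correspond to sem_wait and sem_post; here is a minimal sketch that fixes the earlier lost-update problem with a binary semaphore (assumes Linux; link with -pthread):

    // sema.cpp -- binary semaphore guarding a critical section
    #include <iostream>
    #include <semaphore.h>   // sem_t, sem_wait, sem_post
    #include <thread>
    using namespace std;

    sem_t guard;
    int T = 3;

    int main() {
        sem_init(&guard, 0, 1);                // count = 1: binary semaphore
        thread a([] { sem_wait(&guard); T = T + 1; sem_post(&guard); });
        thread b([] { sem_wait(&guard); T = T * 2; sem_post(&guard); });
        a.join();
        b.join();
        cout << "T = " << T << endl;           // 7 or 8: never a lost update
        sem_destroy(&guard);
        return 0;
    }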
Standard example: producer-consumer (the buffer is the shared resource)

    semaphore fullspots, emptyspots;
    fullspots.count = 0;
    emptyspots.count = BUFLEN;

    task producer;
      loop
        wait(emptyspots);
        DEPOSIT(VALUE);
        release(fullspots);
      end loop;
    end producer;

    task consumer;
      loop
        wait(fullspots);
        FETCH(VALUE);
        release(emptyspots);
      end loop;
    end consumer;

Why do you need TWO semaphores? Are adding and removing the same?
Competition Synchronization
What if multiple processes want to put objects in the buffer? We might have a similar synchronization problem. Use a BINARY semaphore for access and COUNTING semaphores for the slots.

    semaphore access, fullspots, emptyspots;
    access.count = 1;  // BINARY
    fullspots.count = 0;
    emptyspots.count = BUFLEN;

    task producer;
      loop
        wait(emptyspots);
        wait(access);
        DEPOSIT(VALUE);
        release(access);
        release(fullspots);
      end loop;
    end producer;

    task consumer;
      loop
        wait(fullspots);
        wait(access);
        FETCH(VALUE);
        release(access);
        release(emptyspots);
      end loop;
    end consumer;

Remind you of a printer queue problem?
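Here is a compilable sketch of the same pattern (POSIX semaphores plus std::thread, assuming Linux with -pthread; BUFLEN, the circular buffer, and the item count of 10 are illustrative choices, not from the slide):

    // prodcons.cpp -- two counting semaphores for slots, one binary for access
    #include <iostream>
    #include <semaphore.h>
    #include <thread>
    using namespace std;

    const int BUFLEN = 4;
    int buffer[BUFLEN];
    int in = 0, out = 0;                 // circular-buffer indices

    sem_t access_sem;                    // binary: guards the buffer
    sem_t fullspots, emptyspots;         // counting: occupied / free slots

    void producer(int n) {
        for (int v = 1; v <= n; v++) {
            sem_wait(&emptyspots);       // wait for a free slot
            sem_wait(&access_sem);
            buffer[in] = v;              // DEPOSIT(VALUE)
            in = (in + 1) % BUFLEN;
            sem_post(&access_sem);
            sem_post(&fullspots);        // announce an occupied slot
        }
    }

    void consumer(int n) {
        for (int k = 0; k < n; k++) {
            sem_wait(&fullspots);        // wait for an occupied slot
            sem_wait(&access_sem);
            int v = buffer[out];         // FETCH(VALUE)
            out = (out + 1) % BUFLEN;
            sem_post(&access_sem);
            sem_post(&emptyspots);       // announce a free slot
            cout << v << endl;
        }
    }

    int main() {
        sem_init(&access_sem, 0, 1);
        sem_init(&fullspots, 0, 0);
        sem_init(&emptyspots, 0, BUFLEN);
        thread p(producer, 10), c(consumer, 10);
        p.join();
        c.join();
        return 0;
    }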
Statement-level Parallelism (Fortran)
Statement-level parallel processing:

      PARALLEL LOOP 20 K = 1, 20 PRIVATE(T)
      DO 20 I = 1, 500
         T = 0.0
         DO 30 J = 1, 1500
   30       T = T + B(J,K) * A(I,J)
   20 C(I,K) = T
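For comparison, here is a sketch of the same loop in C++ with OpenMP (compile with -fopenmp; declaring t inside the parallel loop plays the role of PRIVATE (T), and the array sizes mirror the Fortran bounds):

    // parloop.cpp -- statement-level parallelism with OpenMP
    const int NK = 20, NI = 500, NJ = 1500;
    double A[NI][NJ], B[NJ][NK], C[NI][NK];

    void multiply() {
        #pragma omp parallel for          // K iterations may run on different processors
        for (int k = 0; k < NK; k++)
            for (int i = 0; i < NI; i++) {
                double t = 0.0;           // private: one copy per iteration
                for (int j = 0; j < NJ; j++)
                    t += B[j][k] * A[i][j];
                C[i][k] = t;
            }
    }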