© 2004, D. J. Foreman 2-1 Concurrency, Processes and Threads
2-2 Concurrency
The appearance that multiple actions are occurring at the same time
■ On a uni-processor, something must make that happen: a collaboration between the OS and the hardware
■ On a multi-processor, the same problems exist (for each CPU) as on a uni-processor
2-3 Multiprogramming
Combines two multiplexing types:
■ Space-multiplexing: physical memory (shared among Process 0, Process 1, …, Process n)
■ Time-multiplexing: the physical processor
2-4 Multiprogramming-2
Multiprogramming
■ N programs apparently running simultaneously
- Space-multiplexed in executable memory
- Time-multiplexed across the central processor
Why it is desired
■ Greater throughput (work done per unit time)
■ More work in progress at the same time
Resources required
■ CPU
■ Memory
2-5 The CPU
Instruction cycles
■ Access memory and/or registers
■ Sequential flow via the "instruction register"
■ One instruction completion at a time (pipelines only increase the number of completions per time unit; execution is still sequential)
Modes of execution
■ Privileged (system)
■ Non-privileged (user)
2-6 Context Switching
Four atomic actions, depending on direction
Kernel to user:
■ Memory protection on
■ Privilege mode off
■ Interrupts on (allowed)
■ Set the instruction counter
User to kernel:
■ Memory protection off
■ Privilege mode on
■ Interrupts off (NOT allowed)
■ Save the instruction counter
2-7 Memory
Sequential addressing (0 – n)
Partitioned into:
■ System: inaccessible by user programs
■ User: partitioned for multiple users; accessible by system programs
2-8 Processes-1
A process is
■ A running program and its address space
■ A unit of resource management
■ Independent of other processes
- NO sharing of memory with other processes
- May share files that were open at fork time
One program may start multiple processes, each in its own address space
2-9 Processes-2
[Diagram: the operating system's abstraction maps Process-1 … Process-n onto the physical memory and CPU; each process sees its own instruction stream and data stream]
2-10 Process & Address Space
[Diagram: a process's address space (code, data, stack) together with its resources forms an abstract machine environment]
2-11 Processes-3
The process life-cycle
■ Creation
- User or scheduled system activation
■ Execution
- Running: performing instructions (using the ALU)
- Waiting: for resources or signals
- Ready: all resources available except memory and the ALU
■ Termination
- The process is no longer available
2-12 Processes-4
Space multiplexing
■ Each process operates in its own "address space"
■ An address space is a sequence of memory locations (addresses) from 0 to n, as seen by the application
■ Process addresses must be "mapped" to real addresses in the real machine (more on this later)
2-13 Processes-5
Time multiplexing
■ Each process is given a small portion of time to perform instructions
■ The OS controls the time per process and which process gets control next
- Many algorithms exist for this
- No rules (from the user's/programmer's view) about which process will run next, or for how long
- Some OSs dynamically adjust both time and sequence
2-14 Processes-7
FORK (label)
■ Starts a new process running from the labeled instruction; it gets a copy of the address space
QUIT ()
■ The process terminates itself
JOIN (count) (an atomic operation)
■ Merges two or more processes
■ Really more like "quit, unless I'm the only process left"
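The JOIN(count) semantics above ("quit unless I'm the only one left") can be sketched in Java. This is an illustrative analogue, not the textbook's primitive: threads stand in for processes, and an atomic counter makes the decrement-and-test a single indivisible step, as the slide requires. All names (JoinDemo, join, lastOneRan) are invented for the sketch.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class JoinDemo {
    // Shared counter: how many participants have not yet reached JOIN.
    static final AtomicInteger count = new AtomicInteger(3);
    static volatile boolean lastOneRan = false;

    // JOIN: atomic decrement-and-test; only the last caller "survives".
    static boolean join() {
        return count.decrementAndGet() == 0;  // true only for the final participant
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> {
            // ... do some independent work ...
            if (join()) {
                lastOneRan = true;  // the last thread continues past the JOIN
            }
            // every other thread simply returns here, i.e. QUITs
        };
        Thread a = new Thread(worker), b = new Thread(worker), c = new Thread(worker);
        a.start(); b.start(); c.start();
        a.join(); b.join(); c.join();
        System.out.println("continuations: " + (lastOneRan ? 1 : 0));
    }
}
```

Because decrementAndGet is atomic, exactly one of the three workers observes the counter hitting zero, no matter how they interleave.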
2-15 Threads-1
A thread is a unit of execution within a process (like a lightweight process, an "LWP"), also called a "task"
■ Threads share the address space, data, and devices with the other threads in the process
■ Each thread has a private stack and status (instruction counter, state, etc.)
Multi-threading
■ More than one thread per process
■ Limited by the system to some maximum number
- Per system
- Per process
2-16 Thread Models
[Diagram of thread models: one process with one thread (DOS); one process with many threads (JRE); many single-threaded processes (classic UNIX); many multi-threaded processes (WinXX, Solaris, Linux, OS/2)]
2-17 Threads-2
Several thread APIs
■ Solaris: kernel-level threads & pthreads
■ Windows: kernel-level threads & pthreads
■ OS/2: kernel-level threads
■ POSIX (pthreads): a full set of functions
- #include <pthread.h> // for C, C++
- Allows porting without re-coding
■ Java threads are implemented in the JVM, independently of OS support
- Like the multiprogramming implementation in Win3.1
- Uses underlying kernel support where available
2-18 Threads-3
Windows (native); the Win32 prototype is:

HANDLE CreateThread(
    LPSECURITY_ATTRIBUTES lpThreadAttributes,  // NULL for defaults
    SIZE_T dwStackSize,                        // 0 for the default size
    LPTHREAD_START_ROUTINE lpStartAddress,     // the thread function
    LPVOID lpParameter,                        // argument to the thread function
    DWORD dwCreationFlags,
    LPDWORD lpThreadId);

POSIX (Linux, Solaris, Windows):

iret1 = pthread_create(&thread1, NULL,
                       print_message_function,  // must be void *(*)(void *)
                       (void *) message1);
2-19 Threads-4
Advantages of kernel-supported threads:
■ A thread may request resources with or without blocking on the request
■ A blocked thread does NOT block the other threads
■ Inexpensive context switch
■ They can utilize a multiprocessor (MP) architecture
The thread library for user threads lives in user space
■ The thread library schedules user threads onto LWPs
■ LWPs are:
- implemented by kernel threads
- scheduled by the kernel
2-20 Notes on Java
The JVM
■ uses monitors for mutual exclusion
■ provides wait and notify for cooperation
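Both mechanisms can be seen in a minimal sketch (the Slot class and its methods are invented for illustration): a one-slot buffer where synchronized gives the monitor's mutual exclusion and wait/notifyAll give the cooperation.

```java
public class Slot {
    private Integer value = null;  // null means the slot is empty

    // Monitor: at most one thread at a time runs a synchronized method of this object.
    public synchronized void put(int v) throws InterruptedException {
        while (value != null)      // slot full: cooperate by waiting for a consumer
            wait();
        value = v;
        notifyAll();               // wake any threads waiting in get()
    }

    public synchronized int get() throws InterruptedException {
        while (value == null)      // slot empty: wait for a producer
            wait();
        int v = value;
        value = null;
        notifyAll();               // wake any threads waiting in put()
        return v;
    }
}
```

The while loops (rather than if) re-check the condition after every wakeup, the standard guard against spurious wakeups and stolen notifications.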
2-21 Java & Threads-1
Thread creation: two ways
1. Extension from the Thread class:

import java.lang.*;
public class Counter extends Thread {
    public void run() {  // overrides Thread.run
        // ...
    }
}
2-22 Java & Threads-2
2. Implementing the Runnable interface:

import java.lang.*;
public class Counter implements Runnable {
    Thread T;
    public void run() {
        // ...
    }
}

■ An instance of the Thread class is kept as a variable of the Counter class
■ Counter can still extend another class
2-23 Java & Threads-3
Difference between the two methods
■ Implementing Runnable gives greater flexibility in creating the Counter class, since it remains free to extend some other class
■ The Thread class itself also implements the Runnable interface
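The two creation styles above can be exercised side by side. This is a hedged sketch: the class names (CounterThread, CounterTask, TwoWays) and the run() bodies are invented for the example, not from the slides.

```java
// Way 1: subclass Thread and override run().
class CounterThread extends Thread {
    int ticks = 0;
    public void run() { for (int i = 0; i < 5; i++) ticks++; }
}

// Way 2: implement Runnable and hand the object to a Thread.
class CounterTask implements Runnable {
    int ticks = 0;
    public void run() { for (int i = 0; i < 5; i++) ticks++; }
}

public class TwoWays {
    public static void main(String[] args) throws InterruptedException {
        CounterThread t1 = new CounterThread();
        CounterTask task = new CounterTask();
        Thread t2 = new Thread(task);   // the Runnable is wrapped in a Thread
        t1.start(); t2.start();         // start() (never run()) spawns the thread
        t1.join();  t2.join();          // wait for both to finish
        System.out.println(t1.ticks + " " + task.ticks);
    }
}
```

In both styles the new thread begins in run(); calling run() directly would just execute it on the caller's thread.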
2-24 Wait & Signal: semaphores
Classical definitions
■ Wait: P(s) // make me wait for something
    while (s <= 0) { /* busy-wait */ }
    s = s - 1   // when s becomes > 0, decrement it
■ Signal: V(s) // tell others: my critical job is done
    s = s + 1
These MUST appear as ATOMIC operations to the application
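A blocking (rather than busy-waiting) sketch of P and V in Java, assuming nothing beyond the monitor facilities described above; the class and method names are invented for illustration. The synchronized methods make each operation appear atomic to callers, exactly as the classical definitions require.

```java
public class CountingSemaphore {
    private int s;  // the semaphore value

    public CountingSemaphore(int initial) { s = initial; }

    // P(s): wait until s > 0, then decrement, as one atomic step.
    public synchronized void P() throws InterruptedException {
        while (s <= 0)
            wait();     // block instead of spinning in the busy-wait loop
        s--;
    }

    // V(s): increment s and wake one waiter, as one atomic step.
    public synchronized void V() {
        s++;
        notify();
    }

    public synchronized int value() { return s; }
}
```

The standard library already provides this as java.util.concurrent.Semaphore (acquire/release); the hand-rolled version above just shows how the classical P/V map onto a monitor.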