COSC 3407: Operating Systems Lecture 3: Processes
Concurrency
• Uniprogramming: one process at a time (e.g., MS/DOS, Macintosh)
  – Easier for operating system builder: get rid of problem of concurrency by defining it away.
  – For personal computers, idea was: one user does only one thing at a time.
  – Harder for user: can’t work while waiting for printer
• Multiprogramming: more than one process at a time (UNIX, OS/2, Windows NT/2000/XP).
  – Note: This is often called multitasking, but multitasking sometimes has other meanings – so not used in this course.
Concurrency
• The basic problem of concurrency involves resources:
  – Hardware: single CPU, single DRAM, single set of I/O devices.
  – Multiprogramming API: users think they have the machine to themselves.
• The OS has to coordinate all the activity on a machine:
  – multiple users, I/O, interrupts, etc.
  – How can it keep all these things straight?
• Answer: use the Virtual Machine abstraction
  – Decompose the hard problem into simpler ones.
  – Abstract the notion of an executing program.
  – Then, worry about multiplexing these abstract machines.
Recall: What happens during execution?
[Figure: CPU fetch/execute loop — registers R0…R31, F0…F30, and PC on one side; memory on the other holding data and instructions Inst0…Inst237 at addresses 0 through 2^32 − 1]
• Execution sequence:
  – Fetch instruction at PC
  – Decode
  – Execute (possibly using registers)
  – Write results to registers
  – PC = NextInstruction(PC)
  – Repeat
How can we give the illusion of multiple processors?
[Figure: three virtual CPUs (CPU1, CPU2, CPU3) above one shared memory; a timeline shows CPU1, CPU2, CPU3, CPU1, CPU2 multiplexed over time]
• How do we provide the illusion of multiple processors?
  – Multiplex in time!
• Each virtual “CPU” needs a structure to hold:
  – Program Counter (PC)
  – Registers (integer, floating point, others…?)
• How do we switch from one virtual CPU to the next?
  – Save PC and registers in the current state block
  – Load PC and registers from the new state block
• What triggers the switch?
  – Timer, voluntary yield, I/O, other things
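The same save/load idea can be sketched at user level with the POSIX <ucontext.h> API (assumed available, e.g. glibc on Linux; the variable names and messages below are only for illustration). Each ucontext_t acts as a state block holding a saved PC and registers, and swapcontext saves the current block and loads the next one — the switch described above, minus any protection:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;    /* two "virtual CPU" state blocks */

static void task(void)
{
    printf("task: running on the second virtual CPU\n");
    swapcontext(&task_ctx, &main_ctx);   /* save task's PC/registers, load main's */
    printf("task: resumed after being switched back to\n");
}

int main(void)
{
    static char stack[64 * 1024];        /* stack for the second context */

    getcontext(&task_ctx);               /* initialize the state block   */
    task_ctx.uc_stack.ss_sp   = stack;
    task_ctx.uc_stack.ss_size = sizeof stack;
    task_ctx.uc_link          = &main_ctx;  /* where to go when task returns */
    makecontext(&task_ctx, task, 0);

    printf("main: switching to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* save main, load task   */
    printf("main: switching back to task\n");
    swapcontext(&main_ctx, &task_ctx);   /* save main, resume task */
    printf("main: done\n");
    return 0;
}

A real kernel performs the same two steps in a short assembly routine and, crucially, uses a timer interrupt rather than a voluntary swapcontext call to trigger the switch.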
Properties of this simple multiprogramming technique
• All virtual CPUs share the same non-CPU resources
  – I/O devices the same
  – Memory the same
• Consequence of sharing:
  – Each thread can access the data of every other thread (good for sharing, bad for protection)
  – Threads can share instructions (good for sharing, bad for protection)
  – Can threads overwrite OS functions?
• This (unprotected) model is common in:
  – Embedded applications
  – Windows 3.1/Macintosh (switch only with yield)
  – Windows 95/ME? (switch with both yield and timer)
Modern Technique: SMT/Hyperthreading
• Hardware technique
  – Exploits natural properties of superscalar processors to provide the illusion of multiple processors
  – Higher utilization of processor resources
• Can schedule each thread as if it were a separate CPU
  – However, not a linear speedup!
  – If we have a multiprocessor, we should schedule each real processor first
• The original technique was called “Simultaneous Multithreading”
  – See http://www.cs.washington.edu/research/smt/
  – Alpha, SPARC, Pentium 4 (“Hyperthreading”), Power 5
How to protect threads from one another?
• Need three important things:
  – Protection of memory
    » Every task does not have access to all memory
  – Protection of I/O devices
    » Every task does not have access to every device
  – Preemptive switching from task to task
    » Use of a timer
    » Must not be possible to disable the timer from user code
Recall: Program’s Address Space
• Address space: the set of accessible addresses plus the state associated with them:
  – For a 32-bit processor there are 2^32 ≈ 4 billion addresses
• What happens when you read or write to an address?
  – Perhaps nothing
  – Perhaps it acts like regular memory
  – Perhaps it ignores writes
  – Perhaps it causes an I/O operation
    » (Memory-mapped I/O)
  – Perhaps it causes an exception (fault)
Providing Illusion of Separate Address Space: Load new Translation Map on Switch
Traditional Unix Process
• Process: operating system abstraction to represent what is needed to run a single program
  – Often called a “HeavyWeight Process”
  – Formally, a process is a sequential stream of execution in its own address space.
• Two parts to a (traditional Unix) process:
  1. Sequential program execution: the code in the process is executed as a single, sequential stream of execution (no concurrency inside a process).
     – This is known as a thread of control.
  2. State information: everything specific to a particular execution of a program; encapsulates protection (the address space):
     – CPU registers
     – Main memory (contents of the address space)
     – I/O state (in UNIX this is represented by file descriptors)
Process != Program
• More to a process than just a program:
  – The program is just part of the process state.
  – We both run Firefox – same program, different processes.
• Less to a process than a program:
  – A program can invoke more than one process to get the job done
  – cc starts up cpp, cc1, cc2, as, and ld (each is a program itself)
[Figure: the PROGRAM (code for Main() and A()) versus the PROCESS (the same code plus heap, a stack with frames for Main and A, and registers/PC)]
Process states
• As a process executes, it changes state
  – new: the process is being created
  – running: executing on the CPU
  – waiting: waiting for another event (I/O, lock)
  – ready: waiting to be assigned to the CPU
  – terminated: the process has finished execution
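A tiny sketch of how these states might be written down in C (the enum and its names are illustrative, not taken from any particular kernel):

/* Illustrative encoding of the five process states */
enum proc_state {
    PROC_NEW,         /* being created                          */
    PROC_READY,       /* runnable, waiting to be assigned a CPU */
    PROC_RUNNING,     /* currently executing on the CPU         */
    PROC_WAITING,     /* blocked on an event (I/O, lock)        */
    PROC_TERMINATED   /* finished execution                     */
};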
Process Control Block (PCB)
Information associated with each process:
• Process state
  – new, ready, running, waiting, halted
• Program counter
  – address of the next instruction to be executed for this process
• CPU registers
  – accumulators, index registers, stack pointers, general-purpose registers, condition code registers
• CPU scheduling information
  – process priority, pointers to scheduling queues, etc.
Process Control Block (PCB)
• Memory-management information
  – value of the base and limit registers, page tables, segment tables
• Accounting information
  – amount of CPU and real time used, time limits, account numbers, job or process numbers
• I/O status information
  – list of I/O devices allocated to this process, a list of open files
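As a rough illustration (the field names, types, and sizes are invented for this sketch; a real PCB such as Linux's struct task_struct has hundreds of fields), the information on these two slides might be grouped into a C structure like this:

#include <stdint.h>
#include <sys/types.h>

struct pcb {
    pid_t       pid;             /* process identifier                    */
    int         state;           /* new / ready / running / waiting / ... */

    /* Execution state saved and restored on a context switch */
    uint64_t    pc;              /* program counter                       */
    uint64_t    regs[32];        /* general-purpose registers             */

    /* CPU scheduling information */
    int         priority;
    struct pcb *next;            /* link for the ready/waiting queues     */

    /* Memory-management information */
    uint64_t    base, limit;     /* base/limit registers (simple scheme)  */
    void       *page_table;      /* or page/segment tables                */

    /* Accounting information */
    uint64_t    cpu_time_used;

    /* I/O status information */
    int         open_files[16];  /* e.g. open file descriptors           */
};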
How do we multiplex processes?
• The current state of a process is held in a process control block (PCB):
  – This is a “snapshot” of the execution and protection environment
  – Only one PCB active at a time
• Give out CPU time to different processes (Scheduling):
  – Only one process “running” at a time
  – Give more time to important processes
• Give pieces of resources to different processes (Protection):
  – Controlled access to non-CPU resources
  – Sample mechanisms:
    » Memory mapping: give each process its own address space
    » Kernel/user duality: arbitrary multiplexing of I/O through system calls
CPU Switch From Process to Process
• This is also called a “context switch”
• The code executed in the kernel to perform the switch is overhead
  – Overhead sets the minimum practical switching time
  – Less overhead with SMT/hyperthreading, but… contention for resources instead
State Queues
• PCBs are organized into queues according to their state…
  – Ready, waiting, …
• A typical arrangement is a ready queue plus a waiting queue per device or event.
Representation of Process Scheduling
• PCBs move from queue to queue as they change state
  – Decisions about which order to remove from queues are scheduling decisions
  – Many algorithms possible (a few weeks from now)
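A minimal sketch of such state queues in C (purely illustrative: a FIFO ready queue and a waiting queue of PCBs, with a move between them standing in for a process blocking on I/O):

#include <stddef.h>
#include <stdio.h>

/* Minimal PCB for this sketch: just an id and a queue link */
struct pcb {
    int pid;
    struct pcb *next;
};

struct queue { struct pcb *head, *tail; };

/* Append a PCB to the tail of a state queue */
static void enqueue(struct queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Remove and return the PCB at the head (the simplest FIFO policy) */
static struct pcb *dequeue(struct queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct queue ready = {0}, waiting = {0};
    struct pcb a = {.pid = 1}, b = {.pid = 2};

    enqueue(&ready, &a);               /* both processes start out ready */
    enqueue(&ready, &b);

    struct pcb *p = dequeue(&ready);   /* scheduler picks the head       */
    enqueue(&waiting, p);              /* it blocks on I/O: change queue */

    printf("pid %d is now waiting; pid %d runs next\n", p->pid, ready.head->pid);
    return 0;
}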
Context Switch
• Very machine dependent. Must:
  – save the state of the old process and load the saved state for the new process.
• Pure overhead!
• Speed depends on:
  – memory speed, the number of registers that must be copied, and the existence of special instructions, such as a single instruction to load or store all registers.
• Sometimes there is special hardware to speed it up
• Sun UltraSPARC provides multiple sets of registers.
  – A switch just changes the pointer to the current register set.
Process Creation
• A parent process creates child processes, which, in turn, create other processes, forming a tree of processes.
• A process will need certain resources, like CPU time, memory, files, and I/O devices, to accomplish its task.
• Resource sharing
  – Parent and children share all resources.
  – Children share a subset of the parent’s resources.
  – Parent and child share no resources.
• Also, initialization data may be passed to the child process, e.g. the name of a file to be displayed on the terminal.
Process Creation (Cont.)
• Execution
  – Parent and children execute concurrently.
  – Parent waits until children terminate.
• Address space
  – Child process is a duplicate of the parent process.
  – Child process has a program loaded into it.
• UNIX examples
  – The fork system call creates a new process
  – Each process has a process identifier, which is a unique integer
  – The new process consists of a copy of the address space of the original process
Example: Unix
• Both processes continue execution at the instruction after the fork.
• fork returns zero in the child process, whereas the (nonzero) process identifier of the child is returned to the parent.
• The execlp system call is used after a fork to replace the process’s memory space with a new program.
• The execlp system call loads a binary file into memory and starts its execution.
• The parent can create more children, or it can issue a wait system call to move itself off the ready queue until the termination of the child.
C program forking a separate process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;

    /* fork another process */
    pid = fork();
    if (pid < 0) {                 /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        exit(-1);
    } else if (pid == 0) {         /* child process */
        execlp("/bin/ls", "ls", (char *)NULL);
    } else {                       /* parent process */
        /* parent will wait for the child to complete */
        wait(NULL);
        printf("Child Complete\n");
        exit(0);
    }
    return 0;
}
Multiple Processes Collaborate on a Task
[Figure: three cooperating processes, Proc 1, Proc 2, Proc 3]
• High creation/memory overhead
• (Relatively) high context-switch overhead
• Need a communication mechanism:
  – Separate address spaces isolate processes
  – Shared-memory mapping
    » Accomplished by mapping addresses to common DRAM
    » Read and write through memory
  – Message passing
    » send() and receive() messages
    » Works across a network
Shared Memory Communication
[Figure: two programs’ virtual address spaces (each with code, data, heap, and stack) mapping a shared region onto the same physical memory]
• Communication occurs by “simply” reading/writing to the shared address page
  – Really low-overhead communication
  – Introduces complex synchronization problems
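A minimal sketch of this style on a POSIX system (shm_open/mmap as found on Linux; the object name "/demo_shm" and the message are made up, error checking is omitted, and older glibc needs -lrt at link time). Parent and child map the same object, so a write by one is visible to the other:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* Create a named shared-memory object and size it to one page */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    /* Map it; after fork, both processes share this physical page */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    if (fork() == 0) {                   /* child: the writer */
        strcpy(shared, "hello from the child");
        return 0;
    }

    wait(NULL);                          /* parent: wait, then read */
    printf("parent read: %s\n", shared);

    munmap(shared, 4096);
    shm_unlink("/demo_shm");
    return 0;
}

Here the parent only reads after wait() returns; without some such synchronization the two processes could race on the shared page, which is exactly the “complex synchronization problems” point above.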
Inter-process Communication (IPC)
• Mechanism for processes to communicate and to synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• The IPC facility provides two operations:
  – send(message) – message size fixed or variable
  – receive(message)
• If P and Q wish to communicate, they need to:
  – establish a communication link between them
  – exchange messages via send/receive
• Implementation of the communication link
  – physical (e.g., shared memory, hardware bus)
  – logical (e.g., logical properties)
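One concrete way to realize send/receive is sketched below, with a UNIX pipe standing in for the communication link (write plays the role of send and read the role of receive; error checking omitted for brevity):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int link[2];
    pipe(link);                       /* establish the communication link */

    if (fork() == 0) {                /* child: process P, the sender */
        const char *msg = "ping";
        close(link[0]);               /* close the unused read end    */
        write(link[1], msg, strlen(msg) + 1);    /* send(message)     */
        close(link[1]);
        return 0;
    }

    /* parent: process Q, the receiver */
    char buf[64];
    close(link[1]);                   /* close the unused write end */
    read(link[0], buf, sizeof buf);   /* receive(message)           */
    printf("received: %s\n", buf);
    close(link[0]);
    wait(NULL);
    return 0;
}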