Exam 2 Review Bernard Chen Spring 2007
Deadlock Example Two semaphores A and B, each initialized to 1: P0: wait(A); wait(B) P1: wait(B); wait(A) If P0 acquires A and P1 acquires B, each process then waits forever for the semaphore the other holds.
Deadlock Characterization Deadlock can arise only if four conditions hold simultaneously: 1. Mutual Exclusion 2. Hold and Wait 3. No Preemption 4. Circular Wait
Resource-Allocation Graph A set of vertices V and a set of edges E. V is partitioned into two types: P= {P1, P2, …, Pn}, the set consisting of all the processes in the system. R= {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Resource-Allocation Graph E is also partitioned into two types: request edge – directed edge Pi → Rj; assignment edge – directed edge Rj → Pi
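The semaphore example from the earlier slide can be expressed as a tiny resource-allocation graph, then collapsed into a wait-for relation. A minimal Python sketch (the process and resource names follow the slides; the dictionary shapes are an assumption for illustration):

```python
# Assignment edges Rj -> Pi: each semaphore and the process holding it.
assignment = {"A": "P0", "B": "P1"}
# Request edges Pi -> Rj: each process and the semaphore it is waiting on.
requests = [("P0", "B"), ("P1", "A")]

# Collapse to a wait-for relation: Pi waits for whoever holds Rj.
wait_for = {p: assignment[r] for p, r in requests}
print(wait_for)  # P0 waits for P1 and P1 waits for P0: a cycle, so deadlock
```

With single-instance resources like these semaphores, a cycle in the graph is both necessary and sufficient for deadlock.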
Example of Resource-Allocation Graph
7.4 Deadlock Prevention If we ensure that at least one of the four necessary conditions cannot hold, we can prevent deadlock: 1. Mutual Exclusion 2. Hold and Wait 3. No Preemption 4. Circular Wait
Resource-Allocation Graph Scheme Claim edge Pi → Rj indicates that process Pi may request resource Rj in the future; it is represented by a dashed line. A claim edge converts to a request edge when the process requests the resource. A request edge converts to an assignment edge when the resource is allocated to the process. When a resource is released by a process, the assignment edge reconverts to a claim edge.
Resource-Allocation Graph
Banker’s Algorithm Two algorithms need to be discussed: 1. Safety algorithm (checks whether the current state is safe) 2. Resource-request algorithm
Data Structures for the Banker’s Algorithm
Safety Algorithm
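The slide's algorithm figure is not reproduced here; below is a minimal Python sketch of the safety algorithm, assuming the standard data structures Available, Allocation, and Need (Need = Max - Allocation). The example state is the common five-process, three-resource textbook example, used only for illustration:

```python
def is_safe(available, allocation, need):
    """Return a safe sequence of process indices, or None if the state is unsafe."""
    work = list(available)          # Work := Available
    n = len(allocation)
    finish = [False] * n            # Finish[i] := false for all i
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            # Find an i with Finish[i] == false and Need_i <= Work.
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend Pi runs to completion and releases its resources.
                for j in range(len(work)):
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    return sequence if all(finish) else None

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
seq = is_safe(available, allocation, need)
print(seq)  # a safe sequence exists, so this state is safe
```

Scanning processes in index order, this yields the safe sequence P1, P3, P4, P0, P2 for the example state.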
Resource-Request Algorithm for Process Pi
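The resource-request algorithm's figure is also missing; a hedged Python sketch of the idea follows: validate the request, pretend to grant it, run the safety check, and roll back if the resulting state is unsafe. A compact safety check is included so the sketch stands alone; the example values are the usual textbook state:

```python
def is_safe(available, allocation, need):
    """Compact Banker's safety check: True iff a safe sequence exists."""
    work, n = list(available), len(allocation)
    finish = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = progress = True
    return all(finish)

def request_resources(i, request, available, allocation, need):
    """Banker's resource-request algorithm for process Pi (a sketch).
    Returns True if the request is granted, False if Pi must wait."""
    m = len(available)
    if any(request[j] > need[i][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):
        return False                    # resources not available: Pi must wait
    # Pretend to allocate the resources, then test safety.
    for j in range(m):
        available[j] -= request[j]
        allocation[i][j] += request[j]
        need[i][j] -= request[j]
    if is_safe(available, allocation, need):
        return True                     # safe: grant the request
    # Unsafe: roll the pretended allocation back and make Pi wait.
    for j in range(m):
        available[j] += request[j]
        allocation[i][j] -= request[j]
        need[i][j] += request[j]
    return False

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
granted = request_resources(1, [1, 0, 2], available, allocation, need)
```

For this state, P1's request for (1, 0, 2) leaves the system safe, so it is granted.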
Wait-for Graph Maintain wait-for graph 1. Nodes are processes. 2. Pi → Pj if Pi is waiting for Pj. Periodically invoke an algorithm that searches for a cycle in the graph. If there is a cycle, there exists a deadlock.
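The periodic cycle search the slide describes can be sketched as a depth-first search over the wait-for graph. A minimal sketch; the graph shape {Pi: [processes Pi waits for]} is an assumption for illustration:

```python
def has_cycle(graph):
    """DFS cycle detection on a wait-for graph {Pi: [Pj, ...]}.
    GRAY marks nodes on the current DFS path; hitting one means a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in graph}

    def visit(p):
        color[p] = GRAY
        for q in graph.get(p, []):
            if color.get(q, WHITE) == GRAY:
                return True                       # back edge: cycle found
            if color.get(q, WHITE) == WHITE and visit(q):
                return True
        color[p] = BLACK                          # fully explored
        return False

    return any(color[p] == WHITE and visit(p) for p in list(graph))

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}  # cycle: deadlock
healthy = {"P1": ["P2"], "P2": ["P3"], "P3": []}          # acyclic: no deadlock
```

Because the wait-for graph has only process nodes, a cycle directly corresponds to a deadlock among single-instance resources.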
Wait-for Graph
Detection Algorithm
Memory
Base and Limit Registers
Binding of Instructions and Data to Memory Compile time: if the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes. Load time: relocatable code must be generated if the memory location is not known at compile time. Execution time: binding is delayed until run time if the process can be moved during its execution from one memory segment to another.
Logical vs. Physical Address Space The concept of a logical address space that is bound to a separate physical address space is central to proper memory management Logical address–generated by the CPU; also referred to as virtual address Physical address– address seen by the memory unit
Contiguous Allocation Main memory is usually divided into two partitions: the resident operating system, usually held in low memory with the interrupt vector, and the user processes, held in high memory.
Memory Allocation The simplest method for memory allocation is to divide memory into several fixed-size partitions. Initially, all memory is available for user processes and is considered one large block of available memory, a hole.
Memory Allocation
Dynamic Storage-Allocation Problem How to satisfy a request of size n from a list of free holes: First-fit: allocate the first hole that is big enough. Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size (produces the smallest leftover hole). Worst-fit: allocate the largest hole; must also search the entire list (produces the largest leftover hole).
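The three strategies can be compared side by side on a small free list. A sketch; the hole sizes are invented for illustration:

```python
def allocate(holes, n, strategy="first"):
    """Pick the index of the hole to use for a request of size n (a sketch).
    holes is a list of free-hole sizes; returns an index or None."""
    # Every hole large enough for the request, as (size, index) pairs.
    candidates = [(size, i) for i, size in enumerate(holes) if size >= n]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]       # lowest-indexed adequate hole
    if strategy == "best":
        return min(candidates)[1]     # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]     # largest hole overall
    raise ValueError(f"unknown strategy: {strategy}")

holes = [100, 500, 200, 300, 600]
# A request of 212 bytes: first-fit takes the 500-byte hole,
# best-fit the 300-byte hole, worst-fit the 600-byte hole.
```

Best-fit leaves an 88-byte sliver here while worst-fit leaves 388 bytes, illustrating the "smallest versus largest leftover hole" trade-off from the slide.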
Fragmentation All strategies for memory allocation suffer from external fragmentation: as processes are loaded and removed from memory, the free memory space is broken into little pieces. External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous.
Fragmentation Suppose a hole is 20,000 bytes and the next process requests 19,000 bytes. If we allocate the entire hole to the process, the remaining 1,000 bytes go unused. This is called internal fragmentation: memory that is internal to a partition but is not being used.
Paging
Hardware Support on Paging If we want to access location i, we must first index into the page table, which requires one memory access. With this scheme, TWO memory accesses are needed to access a byte. The standard solution is a special, small, fast cache called the translation look-aside buffer (TLB), or associative memory.
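The TLB idea can be sketched as a small cache sitting in front of the page table, counting memory accesses per translation. All values here are hypothetical, and a real TLB is hardware with a limited size and a replacement policy, omitted in this sketch:

```python
PAGE_SIZE = 4096
page_table = {0: 5, 1: 9, 2: 1}   # page -> frame (hypothetical mapping)
tlb = {}                           # cache of recent page -> frame translations
accesses = 0                       # memory accesses performed so far

def translate(logical):
    """One memory access on a TLB hit, two on a miss (page table + data)."""
    global accesses
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:
        accesses += 1              # hit: only the data access touches memory
    else:
        accesses += 2              # miss: page-table access, then data access
        tlb[page] = page_table[page]
    return tlb[page] * PAGE_SIZE + offset

a = translate(4100)   # page 1: TLB miss, costs 2 accesses
b = translate(4200)   # page 1 again: TLB hit, costs 1 access
```

Three memory accesses for two translations instead of four: this hit-rate saving is exactly why the TLB makes paging practical.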
Hierarchical paging One way is to use a two-level paging algorithm
Hierarchical paging Remember the example: a 32-bit machine with a page size of 4 KB. A logical address is divided into a 20-bit page number and a 12-bit page offset. The page number is then split again, giving the layout 10 | 10 | 12 (outer page-table index, inner page-table index, offset).
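The 10 | 10 | 12 split can be expressed directly with shifts and masks. A sketch for the 32-bit, 4 KB-page example:

```python
def split(addr):
    """Split a 32-bit logical address into (p1, p2, offset) with a 10/10/12 layout."""
    offset = addr & 0xFFF           # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF       # next 10 bits: index into an inner page table
    p1 = (addr >> 22) & 0x3FF       # top 10 bits: index into the outer page table
    return p1, p2, offset

print(split(0x00403004))  # outer index 1, inner index 3, offset 4
```

Each inner page table then holds 2^10 entries and fits in one 4 KB page, which is the point of the two-level scheme.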
Hierarchical paging Address translation scheme:
Segmentation The user specifies each address by two quantities: a segment name and an offset. Compare this with the paging scheme, where the user specifies only a single address, which is partitioned by the hardware into a page number and an offset, all invisible to the programmer.
Segmentation Although the user can refer to objects in the program by a two-dimensional address, the actual physical memory is still a one-dimensional sequence of bytes. Thus, we need to map segment numbers and offsets to physical addresses. This mapping is effected by a segment table. To protect the memory space, each entry in the segment table has a segment base and a segment limit.
Example of Segmentation For example, segment 2 starts at 4300 with size 400. If we reference byte 53 of segment 2, it is mapped to 4300 + 53 = 4353. A reference to segment 3, byte 852? A reference to segment 0, byte 1222?
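The worked example can be checked with a tiny segment-table lookup. Only segment 2's base and limit come from the slide; the bounds check models the hardware's limit trap:

```python
# Segment table entries: segment -> (base, limit). Only segment 2 is given
# on the slide; the other segments' entries are in the missing figure.
segment_table = {2: (4300, 400)}

def translate(seg, offset):
    """Map a (segment, offset) pair to a physical address, trapping on overflow."""
    base, limit = segment_table[seg]
    if offset >= limit:
        # In hardware this raises a trap (addressing error) to the OS.
        raise MemoryError("segmentation fault: offset beyond segment limit")
    return base + offset

print(translate(2, 53))  # 4300 + 53 = 4353
```

An offset of 400 or more in segment 2 would trap, since the limit check uses the segment's size, not the machine's address width.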