CS414 Review Session
Address Translation
Example
- Logical address: 32 bits
- Number of segments per process: 8
- Page size: 2 KB
- Page table entry size: 2 B
- Physical memory: 32 MB
- Scheme: paged segmentation with 2-level paging
Logical Address Space
- Total number of bits: 32
- Page offset: 11 bits (2 KB = 2^11 B)
- Segment number: 3 bits (8 = 2^3)
- Page number: 18 bits (32 - 11 - 3), i.e., 2^18 pages per segment
- Page table entries in one page of the page table: 1K (2 KB / 2 B)
- Page number bits indexing the inner page table: 10 bits (1K = 2^10)
- Page number bits indexing the outer page table: 8 bits (18 - 10)
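A minimal sketch (not from the slides) of this field split in C. The constants follow the bit counts above, with the segment number in the top bits, then the outer page number, inner page number, and offset; the example address is arbitrary:

```c
#include <stdint.h>
#include <stdio.h>

/* Field widths from the example: 3-bit segment number, 8-bit outer
 * page number, 10-bit inner page number, 11-bit offset. */
#define OFFSET_BITS 11
#define INNER_BITS  10
#define OUTER_BITS   8
#define SEG_BITS     3

int main(void) {
    uint32_t logical = 0xDEADBEEF;   /* arbitrary example address */

    uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t inner  = (logical >> OFFSET_BITS) & ((1u << INNER_BITS) - 1);
    uint32_t outer  = (logical >> (OFFSET_BITS + INNER_BITS)) & ((1u << OUTER_BITS) - 1);
    uint32_t seg    = logical >> (OFFSET_BITS + INNER_BITS + OUTER_BITS);

    printf("seg=%u outer=%u inner=%u offset=%u\n", seg, outer, inner, offset);
    return 0;
}
```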
Segment Table
- Number of entries: 8
- Width of each entry (sum of):
  - Base address (frame number) of the outer page table: 14 bits, since the number of page frames is 16K (32 MB / 2 KB)
  - Length of segment: 29 bits (32 - 3)
  - Miscellaneous items
Page Table
- Outer page table:
  - Number of entries: 2^8
  - Width of each entry (sum of): page frame number of the inner page table (14 bits) plus miscellaneous bits (2 B total specified)
- Inner page table:
  - Number of entries: 2^10
  - Entry width: same as the outer page table
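A rough sketch of the resulting two-level walk, assuming a hypothetical 2 B entry layout (14-bit frame number plus valid/dirty bits) and using a lookup array to stand in for physical memory:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 2 B page table entry: 14-bit frame number plus
 * valid/dirty bits, matching the 2 B entry width in the example. */
typedef struct {
    uint16_t frame : 14;
    uint16_t valid : 1;
    uint16_t dirty : 1;
} pte_t;

static pte_t outer_table[256];     /* 2^8 entries */
static pte_t inner_table[1024];    /* 2^10 entries */
static pte_t *inner_tables[256];   /* stands in for physical memory */

/* Two-level walk; a real walk would check valid bits and fault if clear. */
static uint32_t translate(uint32_t outer, uint32_t inner, uint32_t offset) {
    pte_t o = outer_table[outer];                /* locate the inner table */
    pte_t i = inner_tables[o.frame][inner];      /* locate the page frame  */
    return ((uint32_t)i.frame << 11) | offset;   /* 11-bit offset, 2 KB pages */
}

int main(void) {
    outer_table[0] = (pte_t){ .frame = 3, .valid = 1 };
    inner_tables[3] = inner_table;
    inner_table[0] = (pte_t){ .frame = 7, .valid = 1 };
    printf("physical = 0x%x\n", translate(0, 0, 5));  /* 7<<11 | 5 */
    return 0;
}
```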
Translation Look-aside Buffer
- Just an associative cache.
- Number of entries: fixed in advance.
- Width of each entry (sum of):
  - Key: segment# + page# = 3 + 18 = 21 bits (some TLBs also include a process ID)
  - Value: page frame# = 14 bits
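A sketch of that lookup with a hypothetical 64-entry table. Real hardware compares all keys in parallel; the linear scan here only models the semantics:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 64   /* hypothetical fixed size */

typedef struct {
    uint32_t key;        /* segment# (3 bits) + page# (18 bits) = 21 bits */
    uint16_t frame;      /* page frame# (14 bits) */
    bool     valid;
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Fully associative lookup: compare the key against every entry. */
static bool tlb_lookup(uint32_t seg, uint32_t page, uint16_t *frame_out) {
    uint32_t key = (seg << 18) | page;
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].key == key) {
            *frame_out = tlb[i].frame;
            return true;             /* TLB hit */
        }
    }
    return false;                    /* TLB miss: walk the page tables */
}

int main(void) {
    tlb[0] = (tlb_entry_t){ .key = (1u << 18) | 42, .frame = 9, .valid = true };
    uint16_t f;
    return tlb_lookup(1, 42, &f) && f == 9 ? 0 : 1;
}
```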
The Page Size Issue
- With a very small page size, each page contains only code that is actually used, so page faults are low.
- An increased page size causes each page to contain code that is not used; fewer pages fit in memory, and page faults rise (thrashing).
- But small pages mean large page tables and costly translation.
- Typical compromise: 2 KB to 8 KB.
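One standard way to quantify this tradeoff (a classic textbook result, not stated on the slide): with process size s, page size p, and page-table entry size e, the per-process overhead is roughly

    overhead(p) = s*e/p + p/2    (page table space + internal fragmentation)

Setting the derivative to zero gives the optimum p = sqrt(2*s*e). For example, s = 1 MB and e = 2 B give p = sqrt(2 * 2^20 * 2) = 2^11 = 2 KB, which lands at the low end of the 2 KB to 8 KB range above.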
Load Control
- Determines the number of processes that will be resident in main memory (i.e., the multiprogramming level).
- Too few processes: often all processes will be blocked and the processor will be idle.
- Too many processes: the resident set of each process will be too small and flurries of page faults will result: thrashing.
Handling Interrupts and Traps
- Terminate the current instruction(s): pipeline flush.
- Save state: registers, PC; some instructions may need to be re-executed.
- Invoke the interrupt-handling routine: look it up in the interrupt vector table; user-space to kernel-space context switch.
- Execute the interrupt-handling routine.
- Invoke the scheduler to schedule a ready process; kernel-space to user-space context switch.
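A minimal sketch of the vector-table step, assuming a hypothetical table of handler function pointers indexed by interrupt number:

```c
#include <stdio.h>

/* Hypothetical interrupt vector table: an array of handler
 * function pointers indexed by interrupt number. */
typedef void (*isr_t)(void);

#define NUM_VECTORS 256
static isr_t vector_table[NUM_VECTORS];

static void timer_isr(void) {
    /* acknowledge the device, update the tick count, mark processes ready... */
    puts("timer interrupt handled");
}

/* Called after hardware has saved PC/registers and entered kernel mode. */
static void dispatch(unsigned irq) {
    if (irq < NUM_VECTORS && vector_table[irq])
        vector_table[irq]();      /* run the registered handler */
    /* on return: the scheduler may pick a ready process, then the
     * kernel switches back to user mode */
}

int main(void) {
    vector_table[32] = timer_isr; /* register a handler for IRQ 32 */
    dispatch(32);                 /* simulate the interrupt arriving */
    return 0;
}
```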
Disk Optimizations: Seek Time (biggest overhead)
- Disk scheduling algorithms: SSTF, SCAN, C-SCAN, LOOK, C-LOOK (SSTF sketched below).
- Contiguous file allocation: place contiguous blocks on the same cylinder; on the same track if possible, otherwise on the same-numbered track of another platter (the same cylinder).
- Organ-pipe distribution: place the most-used blocks (i-nodes, directory structure) closer to the middle of the disk, and park the head in the middle of the disk.
- Use multiple heads.
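As one concrete instance, a sketch of SSTF (shortest seek time first): repeatedly service the pending request whose cylinder is closest to the current head position. The request queue values are illustrative only:

```c
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>
#include <stdbool.h>

/* SSTF: always serve the closest outstanding request next. */
static void sstf(int head, const int *req, int n) {
    bool done[64] = { false };                 /* assumes n <= 64 */
    for (int served = 0; served < n; served++) {
        int best = -1, best_dist = INT_MAX;
        for (int i = 0; i < n; i++) {
            int d = abs(req[i] - head);
            if (!done[i] && d < best_dist) { best = i; best_dist = d; }
        }
        done[best] = true;
        head = req[best];
        printf("service cylinder %d\n", head);
    }
}

int main(void) {
    int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    sstf(53, req, 8);       /* head starts at cylinder 53 */
    return 0;
}
```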
Disk Optimizations: Rotational Latency (next biggest)
- Interleaving: logically adjacent sectors are deliberately not physically adjacent on the track, giving the controller time to recover between transfers (see the layout sketch below).
  (Figure: a track with its sectors in interleaved order; the slide labels them 6 3 2 5 7 1 4.)
- Disk cache: cache all the sectors on the track (at most 2 rotations needed).
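A small sketch of how such a layout can be computed, assuming a single skip factor; with 7 sectors and a skip of 2 it prints 1 5 2 6 3 7 4:

```c
#include <stdio.h>

/* Single-level interleaving: place logical sector i at physical slot
 * (i * skip) mod n, so consecutive logical sectors are separated by
 * skip-1 slots on the track. */
int main(void) {
    const int n = 7, skip = 2;          /* 7 sectors, skip factor 2 */
    int track[7];
    for (int i = 0; i < n; i++)
        track[(i * skip) % n] = i + 1;  /* gcd(skip, n) == 1 keeps slots unique */
    for (int i = 0; i < n; i++)
        printf("%d ", track[i]);        /* physical order around the track */
    printf("\n");
    return 0;
}
```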
Redundant Array of Inexpensive Disks
- Mirroring (shadowing): expensive; small gain in read time; reliable.
- Striping: inexpensive; faster access time; not reliable.
- Striping + parity: inexpensive; small performance gain; reliable.
- Interleaving + parity + striping: inexpensive; faster access time; reliable.
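The parity schemes rest on bytewise XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A self-contained sketch with three one-byte "blocks":

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t d0 = 0x5A, d1 = 0xC3, d2 = 0x0F;   /* blocks on three data disks */
    uint8_t parity = d0 ^ d1 ^ d2;             /* stored on the parity disk  */

    /* Recover d1 after its disk fails: XOR the surviving blocks. */
    uint8_t rebuilt_d1 = d0 ^ d2 ^ parity;
    printf("d1=0x%02X rebuilt=0x%02X\n", d1, rebuilt_d1);
    return 0;
}
```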
Storage Hierarchy (typical sizes and access times)

  Level              Typical size   Access time
  Register           B              nsec
  Level 1 Cache      KB+            nsec
  Level 2 Cache      500 KB+        ~100 nsec
  Main Memory        100 MB+        usec
  Hard Disk          GB+            msec
  Network            ??             sec
  Tertiary Storage   TB             sec
Paging vs Segmentation

Paging:
- Fixed-size partitions.
- Internal fragmentation (average = page size / 2).
- No external fragmentation.
- Small chunk of memory (~4 KB).
- Linear address space, invisible to the programmer.

Segmentation:
- Variable-size partitions.
- No internal fragmentation.
- External fragmentation (handled by compaction or paged segments).
- Large chunk of memory (~1 MB).
- Logical address space, visible to the programmer.
Demand-paging vs Pre-paging

Demand paging:
- Pages swapped in on demand.
- More page faults (especially initially).
- No wastage of page frames.
- No pre-paging strategy overhead.

Pre-paging:
- Pages swapped in before use, in anticipation.
- Reduces future page faults.
- Pages may not be used (wasted memory space).
- Needs good pre-paging strategies (working set, contiguous pages, etc.).
Local vs Global Page Replacement

Local:
- Only swap out the current process's pages.
- Page frame allocation strategies required (e.g., page fault frequency).
- Thrashing affects only the current process.
- Admission control required.
- Can use a different page replacement algorithm per process.

Global:
- Swap out any page in memory.
- No explicit allocation of page frames.
- Can affect the performance of other processes.
- Admission control required.
- Single page replacement algorithm.
Interrupt-driven I/O vs Polling

Interrupt-driven I/O:
- Each interrupt has a fixed processing-time overhead (context switches).
- Other processes can execute while waiting for the response.
- Good for long or indefinite response times. Example: printer.

Polling:
- Response time is variable (device- and request-specific).
- No other process can execute while waiting for the response.
- Good for short, predictable response times (less than the fixed interrupt overhead). Example: fast networks.
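A minimal busy-wait sketch of the polling side; the device registers here are hypothetical globals standing in for memory-mapped I/O:

```c
#include <stdbool.h>

/* Hypothetical memory-mapped device registers (plain globals here so
 * the sketch is self-contained). */
static volatile bool device_ready = true;   /* status: data available  */
static volatile unsigned device_data = 42;  /* data register           */

/* Busy-wait (polling) read: spin on the status register. Worth it only
 * when the expected wait is shorter than one interrupt's fixed
 * context-switch overhead. */
static unsigned poll_read(void) {
    while (!device_ready)
        ;                /* CPU does nothing useful while spinning */
    return device_data;
}

int main(void) { return poll_read() == 42 ? 0 : 1; }
```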
Contiguous vs Indexed Allocation

Contiguous:
- All blocks of the file occupy contiguous disk locations.
- No additional index overhead: disk addresses can be computed (see the sketch below).
- Disk fragmentation is a major problem (compaction overhead).
- Smart allocation strategies required.
- Low average latency for sequential access (only one long seek, smart block layouts).

Indexed:
- Blocks of the file are randomly distributed throughout the disk.
- Each access involves a search in the index (may require fetching additional blocks from the disk).
- No fragmentation on the disk.
- No allocation strategies required.
- Higher average latency (mitigated by disk scheduling algorithms).
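The "addresses can be computed" point in one line of arithmetic; the file struct layout is hypothetical:

```c
#include <stdio.h>

#define BLOCK_SIZE 512

/* Contiguous allocation: a file is just (start block, length), so the
 * disk block holding byte offset pos needs no index lookup at all. */
struct file { unsigned start_block, num_blocks; };

static unsigned block_of(const struct file *f, unsigned pos) {
    return f->start_block + pos / BLOCK_SIZE;  /* O(1), purely arithmetic */
}

int main(void) {
    struct file f = { 1000, 8 };               /* starts at disk block 1000 */
    printf("byte 2000 lives in disk block %u\n", block_of(&f, 2000));
    return 0;
}
```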
Contiguous vs Linked Allocation

Contiguous:
- All blocks are at contiguous disk addresses.
- Disk addresses can be computed for each access.
- Suffers from disk fragmentation.
- Bad sectors break the contiguity of blocks.

Linked:
- Blocks are arranged in a linked-list fashion.
- Reaching a block means following the chain from the start (see the sketch below).
- No disk fragmentation.
- All bad blocks can be hidden away as a file.
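A sketch of that chain-following, using a FAT-style next-block table as a stand-in for pointers stored in the blocks themselves:

```c
#include <stdio.h>

/* Linked allocation: each block records the next block of the file
 * (modeled here as a next[] table, as in a FAT). Reaching block k of a
 * file means following k links from the start. */
#define END (-1)

int main(void) {
    int next[8] = { 3, END, 5, 2, END, 1, END, END };
    int block = 0;                       /* the file starts at block 0 */
    for (int k = 0; block != END; k++, block = next[block])
        printf("block %d of file is disk block %d\n", k, block);
    return 0;
}
```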
Hard Disks vs Tapes

Hard disks:
- Small capacity (a few GB).
- Subject to various failures (disk crashes, bad sectors, etc.).
- Very small random-access latency (msec).

Tapes:
- Huge capacity per unit volume (TB).
- Permanent storage (no corruption for a long time).
- Very high random-access latency (sec): the tape must be read from the beginning.
Unix FS vs Log FS

Unix FS:
- Index used to map i-nodes to physical blocks.
- Same read latency as indexed allocation.
- Writes take place in the same block the data was read from.
- Write latency is dominated by seek time.
- No garbage collection required.
- Crash recovery is extremely difficult.

Log FS:
- Index used to map i-nodes to physical blocks.
- Same read latency as Unix FS.
- Writes are batched together and done on sequential blocks.
- Write latency is small because seek time is amortized.
- Garbage collection required to free old blocks.
- Checkpoints enable efficient recovery from crashes.
Routing Strategies

Fixed:
- Permanent path between A and B.
- Congestion is independent of paths.
- No set-up costs.
- Sequential delivery.

Virtual circuit:
- Per-session path between A and B.
- Some attempt to spread congestion uniformly.
- Per-session set-up cost.
- Sequential delivery.

Dynamic:
- Different path per message between A and B.
- Uniform congestion across paths.
- Per-message set-up cost.
- Out-of-order delivery.
Connection Strategies

Circuit switching:
- Permanent link between A and B (hardware).
- Congestion is independent of paths.
- No set-up costs.
- Sequential delivery.

Message switching:
- Per-message link between A and B.
- Some attempt to spread congestion uniformly.
- Initial set-up cost.
- Sequential delivery.

Packet switching:
- Different link per packet between A and B.
- Uniform congestion across links (best link chosen per packet).
- No set-up cost.
- Out-of-order delivery.