Midterm 3 Revision
Prof. Sin-Min Lee
Department of Computer Science
Memory Allocation
Compile for overlays
Compile for fixed partitions
– Separate queue per partition
– Single queue
Relocation and variable partitions
– Dynamic contiguous allocation (bitmaps versus linked lists)
Fragmentation issues
Swapping
Paging
Overlays
[Figure: the main program and an overlay manager are resident in low memory; Overlays 1, 2, and 3 are loaded from secondary storage into a shared overlay area as needed (address marks at 0K, 5K, 7K, 12K).]
Multiprogramming with Fixed Partitions
Divide memory into n (possibly unequal) partitions.
Problem:
– Fragmentation
[Figure: memory divided into fixed partitions at 0K, 4K, 16K, 64K, and 128K, with free space shown inside partitions.]
Fixed Partitions
[Figure: the same partition layout (0K, 4K, 16K, 64K, 128K); the legend marks free space inside partitions as internal fragmentation, which cannot be reallocated.]
Fixed Partition Allocation: Implementation Issues
Separate input queue for each partition
– Requires sorting incoming jobs into separate queues
– Inefficient utilization of memory when the queue for a large partition is empty but the queue for a small partition is full: small jobs have to wait to get into memory even though plenty of memory is free
One single input queue for all partitions
– Allocate a partition where the job fits:
Best Fit
Worst Fit
First Fit
Relocation
Correct the starting address when a program is loaded into memory
Different jobs will run at different addresses
– When a program is linked, the linker must know at what address the program will begin in memory
Logical (virtual) addresses
– Logical address space, range 0 to max
Physical addresses
– Physical address space, range R+0 to R+max for base value R
The user program never sees the real physical addresses
Memory-management unit (MMU)
– Maps virtual addresses to physical addresses
Relocation register
– Mapping requires hardware (the MMU) with a base register
Relocation Register
[Figure: the CPU issues a logical address MA; the MMU adds the base register value BA to form the physical address MA+BA, which goes to memory.]
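A minimal sketch of the translation in the figure, assuming a simple base register plus a limit check for protection; the class and field names are illustrative, not from the slides:

class MMU:
    """Base-register relocation: physical address = base + logical address."""
    def __init__(self, base, limit):
        self.base = base      # where the partition starts in physical memory
        self.limit = limit    # size of the logical address space (0 to max)

    def translate(self, logical):
        if not 0 <= logical < self.limit:
            raise MemoryError("logical address out of range")  # protection fault
        return self.base + logical

mmu = MMU(base=0x4000, limit=0x1000)
print(hex(mmu.translate(0x02A)))   # 0x402a: the program never sees 0x4000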
Storage Placement Strategies
Best fit
– Use the hole whose size is equal to the need, or if none is equal, the hole that is larger but closest in size
– Rationale?
First fit
– Use the first available hole whose size is sufficient to meet the need
– Rationale?
Worst fit
– Use the largest available hole
– Rationale?
Storage Placement Strategies
Every placement strategy has its own problem:
– Best fit: creates small holes that can't be used
– Worst fit: gets rid of large holes, making it difficult to run large programs
– First fit: creates average-size holes
(See the sketch below.)
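A minimal sketch of the three strategies over a free list of (start, size) holes; the function names and list layout are illustrative assumptions:

def first_fit(holes, need):
    # take the first hole big enough for the request
    for i, (start, size) in enumerate(holes):
        if size >= need:
            return i
    return None

def best_fit(holes, need):
    # take the smallest hole that still fits (leaves the smallest leftover)
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= need]
    return min(fits)[1] if fits else None

def worst_fit(holes, need):
    # take the largest hole (the leftover stays big enough to reuse)
    fits = [(size, i) for i, (start, size) in enumerate(holes) if size >= need]
    return max(fits)[1] if fits else None

holes = [(0, 4), (10, 16), (40, 8)]
print(first_fit(holes, 8), best_fit(holes, 8), worst_fit(holes, 8))  # 1 2 1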
Locality of Reference
Most memory references are confined to a small region:
– A well-written program spends its time in a small loop, procedure, or function
– Data are likely in arrays, and related variables are stored together
Working set
– The set of pages sufficient to run the program normally, i.e., to satisfy the locality of a particular program
Page Replacement Algorithms
Page fault: the page is not in memory and must be loaded from disk
Algorithms to manage this swapping:
– First-In, First-Out (FIFO); subject to Belady's anomaly
– Least Recently Used (LRU)
– Least Frequently Used (LFU)
– Not Used Recently (NUR): referenced bit, modified (dirty) bit
– Second-chance replacement
Thrashing
– Too many page faults degrade system performance
Virtual Memory Tradeoffs
Disadvantages
– The swap file takes up space on disk
– Paging takes up CPU resources
Advantages
– Programs share memory space
– More programs run at the same time
– Programs can run even if they cannot fit into memory all at once
– Process separation
Virtual Memory vs. Caching
A cache speeds up memory access
Virtual memory increases the amount of perceived storage
– Independence from the configuration and capacity of the memory system
– Low cost per bit compared to main memory
How Bad Is Fragmentation?
Statistical argument, assuming random request sizes and first fit:
– Given N allocated blocks, about 0.5 N blocks will be lost to fragmentation
– Known as the 50% rule
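One way to see the rule is a toy steady-state simulation; everything in this sketch (sizes, probabilities, names) is an illustrative assumption, not from the slides:

import random

def simulate(steps=50000, mem=10**6):
    blocks = []    # allocated (start, size) pairs, kept sorted by start
    for _ in range(steps):
        if blocks and random.random() < 0.5:
            blocks.pop(random.randrange(len(blocks)))    # free a random block
        else:
            need, prev_end = random.randint(1, 1000), 0
            for i, (start, size) in enumerate(blocks):   # first-fit scan
                if start - prev_end >= need:
                    blocks.insert(i, (prev_end, need))
                    break
                prev_end = start + size
            else:
                if mem - prev_end >= need:
                    blocks.append((prev_end, need))
    holes = sum(1 for (a, sa), (b, _) in zip(blocks, blocks[1:]) if a + sa < b)
    return holes / len(blocks)

print(simulate())   # hovers around 0.5 holes per allocated block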
Solve Fragmentation with Compaction
[Figure: successive snapshots of memory holding the monitor, Job 3, free space, and Jobs 5 through 8; compaction slides the jobs together so the scattered free space merges into one block.]
Storage Management Problems
Fixed partitions suffer from internal fragmentation
Variable partitions suffer from external fragmentation
Compaction suffers from overhead
Placement Policy
Determines where in real memory a process piece is to reside
Important in a pure segmentation system
With paging, or combined paging and segmentation, the hardware performs address translation, so placement matters little
Replacement Policy
Which page is replaced?
– The page removed should be the page least likely to be referenced in the near future
– Most policies predict future behavior on the basis of past behavior
Replacement Policy
Frame locking
– If a frame is locked, it may not be replaced:
the kernel of the operating system,
control structures,
I/O buffers
– Associate a lock bit with each frame
Basic Replacement Algorithms
Optimal policy
– Selects for replacement the page for which the time to the next reference is the longest
– Impossible in practice: it requires perfect knowledge of future events
Basic Replacement Algorithms
Least Recently Used (LRU)
– Replaces the page that has not been referenced for the longest time
– By the principle of locality, this should be the page least likely to be referenced in the near future
– Each page could be tagged with the time of last reference, but this would require a great deal of overhead (see the sketch below)
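A minimal LRU sketch, assuming a fixed number of frames and using an ordered map in place of per-page timestamps; the names are illustrative:

from collections import OrderedDict

def lru_faults(refs, frames):
    mem = OrderedDict()          # least recently used page first
    faults = 0
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # referenced: now the most recent
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict the least recently used page
            mem[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3))  # 10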
Basic Replacement Algorithms
First-In, First-Out (FIFO)
– Treats the page frames allocated to a process as a circular buffer
– Pages are removed in round-robin style
– Simplest replacement policy to implement
– The page that has been in memory the longest is replaced, but that page may be needed again very soon
Basic Replacement Algorithms
Clock policy
– Each frame carries an additional bit, the use bit
– When a page is first loaded into memory, its use bit is set to 1
– When the page is referenced, its use bit is set to 1
– When it is time to replace a page, the first frame encountered with its use bit set to 0 is replaced
– During the search for a replacement, each use bit set to 1 is changed to 0
(A sketch follows.)
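A minimal sketch of the clock policy just described; the frame contents and names are illustrative:

def clock_evict(frames, use_bits, hand):
    """Advance the hand until a frame with use bit 0 is found,
    clearing use bits along the way. Returns (victim_index, new_hand)."""
    while True:
        if use_bits[hand] == 0:
            victim = hand
            return victim, (hand + 1) % len(frames)
        use_bits[hand] = 0                  # give the page a second chance
        hand = (hand + 1) % len(frames)

frames = ['A', 'B', 'C', 'D']
use = [1, 0, 1, 1]
victim, hand = clock_evict(frames, use, hand=0)
print(frames[victim])   # 'B': the first frame found with use bit 0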
Early memory management schemes
Originally the computer was devoted to a single user: the user has all of memory
Limitations of the single-user contiguous scheme
Only one person uses the machine, so lots of computer time goes to waste (why?)
The largest job is limited by the size of the machine's memory
Next: fixed partitions
Create a chunk of memory for each job.
[Figure: memory divided into partitions holding Job 1, Job 2, and so on.]
Limitations of fixed partitions
The operator had to correctly guess the size of programs
Programs were limited to the partitions they were given
Memory fragmentation resulted: the kind illustrated here is internal memory fragmentation
Dynamic Partitions
Internal versus external memory fragmentation
[Figure: Job 8 occupies part of the space previously allocated to Job 1; the leftover space inside the partition is internal fragmentation.]
Dynamic Partitions
Contiguous memory is still required for each process
How do we decide the size of the partitions?
Once the machine is running, how do old jobs get replaced by new ones?
Dynamic Partitions: First Fit
In this scheme, we search forward in the "free list" for a partition large enough to accommodate the next job
Fast, but the gaps left can be large
Dynamic Partitions: Best Fit
In this scheme, we try to find the smallest partition large enough to hold the next job
This tends to minimize the size of the gaps
But it also requires that we keep a list of free spaces
Deallocating memory
If the block we are deallocating is adjacent to one or two free blocks, it must be merged (coalesced) with them
So we either return a pointer to the free block, change the size of an existing free block, or both (see the sketch below)
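A minimal coalescing sketch over a sorted free list of (start, size) pairs; all names here are illustrative:

def free_block(free_list, start, size):
    """Insert (start, size) into the sorted free list,
    merging it with any neighbor it touches."""
    free_list.append((start, size))
    free_list.sort()
    merged = []
    for s, sz in free_list:
        if merged and merged[-1][0] + merged[-1][1] == s:
            ps, psz = merged.pop()
            merged.append((ps, psz + sz))   # adjacent blocks: coalesce
        else:
            merged.append((s, sz))
    free_list[:] = merged

holes = [(0, 10), (30, 10)]
free_block(holes, 10, 20)    # fills the gap between the two holes
print(holes)                 # [(0, 40)]: one big block remains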
Relocatable Dynamic Partitions
In some cases a job could "fit" into the combined free spaces within or between partitions of the earlier schemes
So how do we take advantage of that space?
One way is to move programs while they are in the machine, compacting them down into the lower end of memory above the operating system
Several names for this
– Garbage collection
– Defragmentation
– Compaction
All share a problem: relative addressing!
Special registers
Early machine architectures went through a phase of "more complexity is better": few registers, but large instruction sets
RISC computers have changed all of that (for architectural reasons we won't go into in detail)
But...
In order to manage dynamic relocatable partitions, two registers (typically base and bounds) were assigned to each partition
Note: jobs were not broken into parts, but were relocated whole
Note that the whole job is stored in memory: still no heavy use of external storage devices
Summary of early schemes
These were adequate for batch jobs
But the speed of components continued to advance
Storage devices appeared
And then the biggie: remote terminals
Note that processor speeds double about every 1.5 years, much faster than memory speeds improve!
To accommodate changes
We really need to be able to extend memory by keeping unused parts of programs on storage devices; this is called virtual memory
We can do this if we can break a program into pieces, called pages, that can be independently loaded into and removed from memory without affecting the running program
Paged Memory
Allows jobs to reside in noncontiguous memory
More jobs can be squeezed into memory
But: some internal fragmentation, particularly if the number of jobs is large
Disk partitions to page frames
Partitions varied in size before
Now we fix the size of each memory chunk to match the size of the program chunks coming in (plus a tiny bit of bookkeeping space)
We call these fixed-size chunks of memory page frames
Is there any fragmentation?
If page frames are sized right, we shouldn't have external fragmentation
The size of the code will seldom exactly fill its frames, so there will be a little internal fragmentation in the last frame
There is a tradeoff between internal fragmentation and the processor time spent managing frames (see the sketch below)
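A minimal sketch of paged address translation and of the internal fragmentation in a program's last frame; the 4 KB page size, the dict-based page table, and all names are illustrative assumptions:

PAGE_SIZE = 4096   # assumed page size

def translate(page_table, logical):
    page = logical // PAGE_SIZE        # which page the address falls in
    offset = logical % PAGE_SIZE       # position within that page
    frame = page_table[page]           # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}              # page number -> frame number
print(hex(translate(page_table, 0x1234)))   # page 1, offset 0x234 -> 0x2234

size = 10_000                          # bytes of code/data in the job
waste = -size % PAGE_SIZE              # internal fragmentation in the last frame
print(waste)                           # 2288 bytes wasted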
Other problems
The earliest schemes didn't allow virtual memory
The tables for managing memory became a significant part of the operating system, and they keep growing!
Demand Paging
Demand paging broke jobs into pieces and became popular because it allowed only the needed parts of a job to be active in memory
New problems: thrashing and page faults
Page Replacement Algorithms
Optimal page replacement is simply not possible (it requires knowing the future)
Instead, keep a referenced (R) bit and a modified (M) bit to track past usage
– A page is referenced by any read or write to it
– A page is modified by any change (write) made to it
Page Replacement Algorithms, Continued
FIFO = first in, first out
LRU = least recently used
LFU = least frequently used
The latter two rely on page requests recorded by the operating system
A failure to find a page in memory = a page interrupt
We can measure quality by the failure rate = page interrupts / page requests
Page Replacement Algorithms, Continued
Clock page replacement
– The hand of the clock points to the oldest page
– On a page fault, check R bits in "clockwise" order
A variant called the "two-handed clock" is used in some UNIX systems
For FIFO, the solution is not more memory
With FIFO, adding page frames can actually increase the number of page faults; this is called Belady's anomaly
The page request order is an important factor, not just the size of memory (see the demonstration below)
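A small demonstration of Belady's anomaly under FIFO, using the classic reference string; the helper name is illustrative:

from collections import deque

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for page in refs:
        if page not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()        # evict the oldest resident page
            mem.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults: more memory, more faults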
LRU
Doesn't suffer from Belady's anomaly
Presumes locality of reference
It works well, but it is a little more complex to implement in software
– Consequently, aging and various clock algorithms are the most common in practice
– Aging can yield a good approximation of LRU
Segmented Memory Allocation
Instead of equal divisions, break code into its natural modules (segments)
The compiler is now asked to help the operating system
No page frames: different sizes are required, meaning we get external fragmentation again
Segmented/Demand Paging
Subdivide the natural program segments into equal-sized parts that load into page frames
– Eliminates external fragmentation
– Allows for a large virtual memory, so it is often used in more modern OSs
(A translation sketch follows.)
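A minimal sketch of address translation under combined segmentation and paging, assuming an address of the form (segment, offset) and a per-segment page table; every name and number here is an illustrative assumption:

PAGE_SIZE = 4096

def seg_page_translate(segment_table, seg, offset):
    pages = segment_table[seg]            # each segment has its own page table
    page = offset // PAGE_SIZE
    frame = pages[page]                   # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset % PAGE_SIZE

segment_table = {
    0: {0: 7, 1: 3},   # code segment: pages 0, 1 -> frames 7, 3
    1: {0: 9},         # data segment: page 0 -> frame 9
}
print(hex(seg_page_translate(segment_table, 0, 0x1010)))   # -> 0x3010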
Tradeoffs
Note that there is a tradeoff between internal fragmentation and page faults in paging systems
Note also that we probably want slightly smaller page frames in a segmented/demand-paging framework
And Onward!
Next Thursday we'll do our Mid3 exam
Study guide already posted
Don't miss the exam!