Chapter 8.2: Memory Management


Chapter 8: Memory Management
- Background
- Swapping
- Contiguous Allocation
- Paging (Chapter 8.2)
- Segmentation (Chapter 8.3)
- Segmentation with Paging

Paging
Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous: the executable may be broken into pieces that occupy scattered frames of physical memory during execution. Paging avoids the external fragmentation problems of contiguous-allocation schemes and of swapping with holes in the backing store. The backing store suffered from fragmentation problems similar to those of main memory, except that access was much slower. While older paging implementations relied on hardware alone, newer approaches, especially with 64-bit microprocessors, closely integrate the operating system with the hardware, as we shall see.
The basic idea of paging:
- Divide physical memory into fixed-sized blocks called frames (the size is a power of 2, typically between 512 bytes and 16 MB).
- Divide logical memory into blocks of the same size, called pages.
- The backing store is divided into blocks of the same size as well.

Paging – Introduction / Basic Method
To run a program of size n pages, we need to find n free frames and load the program into them. A page table is set up to translate (map) logical addresses to physical addresses. Internal fragmentation could, as we are aware, be found on the last page; more on this topic later. Of course, we need to be able to map every address in a program from a page number (p) and a page offset (d, the displacement) into a physical frame plus a displacement. Every address generated by the CPU is divided into:
- Page number (p) – used as an index into the page table, which contains the base address of each page in physical memory, and
- Page offset (d) – the displacement into the page that, combined with the base address, defines the physical memory address sent to the memory unit.
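The page-number/offset split described above can be sketched in a few lines of Python. This is only an illustration; the 4 KB page size is an assumed example, not something fixed by the text:

```python
# Sketch: splitting a CPU-generated logical address into (p, d),
# assuming a 4 KB page size (12 offset bits). Page sizes vary by
# architecture; this is an illustrative choice only.
PAGE_SIZE = 4096          # 2**12 bytes
OFFSET_BITS = 12

def split_address(logical):
    p = logical >> OFFSET_BITS        # high-order bits: page number
    d = logical & (PAGE_SIZE - 1)     # low-order bits: page offset
    return p, d

# Example: address 0x12345 lies in page 0x12 at offset 0x345.
```

Because the page size is a power of two, the split is just a shift and a mask; no division hardware is needed.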

Address Translation Architecture
We can readily see how this mapping occurs in the figure: the page number is used as an index into the page table, and the displacement is simply added to the frame's base address. The page size depends on the computer's architecture. Mapping a reference from a logical page and offset into a physical frame number and offset is reasonably easy.

Paging Example
An easy example from the book: assume each page is 4 bytes and physical memory is 32 bytes (eight frames). Logical address 0 is page 0, offset 0; using the page table, page 0 maps to frame 5, so logical address 0 maps to physical address 20 (5*4 + 0). Logical address 3 ('d') is page 0, offset 3, and maps to physical address 23 (5*4 + 3). Logical address 4 ('e') is page 1, offset 0; page 1 maps to frame 6, so logical address 4 maps to physical address 24 (6*4 + 0). Logical address 13 ('n') is in page 3, offset 1; page 3 maps to frame 2, so logical address 13 maps to physical address 9 (2*4 + 1).
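The book's 4-byte-page example can be checked mechanically. A minimal sketch, using the example's page table (pages 0 through 3 mapped to frames 5, 6, 1, 2):

```python
PAGE_SIZE = 4
page_table = {0: 5, 1: 6, 2: 1, 3: 2}   # book example: page -> frame

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)   # page number, offset
    return page_table[p] * PAGE_SIZE + d

# logical 0  -> 5*4 + 0 = 20
# logical 3  -> 5*4 + 3 = 23
# logical 4  -> 6*4 + 0 = 24
# logical 13 -> 2*4 + 1 = 9
```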

Paging – more
Any page can be mapped to any free frame. We have no external fragmentation, but the last page may not completely fill a frame (internal fragmentation). The average amount of internal fragmentation will clearly be one-half of a page per process. This suggests selecting a small page size. However, the more pages we have, the more management overhead is needed. Generally speaking, page sizes have grown over time as everything else has grown: databases, processes, etc. Today, most page sizes lie between 8 KB and 4 MB, depending on the data stored in the pages. For the page table itself, each entry is usually four bytes long. A 32-bit entry can point to one of 2**32 physical page frames; if each frame is 4 KB, such a system can address 2**44 bytes (16 TB) of physical memory.
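The addressability arithmetic in that last sentence is easy to verify:

```python
# With 4-byte (32-bit) page-table entries, an entry can name one of
# 2**32 frames; at 4 KB per frame that is 2**32 * 2**12 = 2**44 bytes.
frames = 2 ** 32
frame_size = 4 * 1024                 # 4 KB
addressable = frames * frame_size     # bytes of physical memory
tb = addressable // 2 ** 40           # express in terabytes (TB)
```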

Paging – Continued
A new process of n pages requires n free frames; if they are available, they are allocated to the process (see the figure example on the next page). The user views the program as one single contiguous address space; in reality, address-translation hardware maps the logical pages into physical memory. The user is totally unaware of this, and the mapping is controlled by the OS. This mapping is clear from the figure on the next slide. A page table must be maintained for each process; thus the page table is also 'part' of the process, often kept in the PCB. This naturally increases context-switch time a bit (but not much!).

Paging Example

Free Frames
Discuss this figure: the free-frame list before allocation, and after allocation.

Implementation of Page Table – Hardware Support
The page table is kept in main memory. Most operating systems allocate a page table to each process, so when a process resumes, the user registers and page-table values must be reloaded from the saved state in the PCB. The easiest way to implement a page table is via a set of very high-speed dedicated registers, so that address translation can be done very quickly. Since every memory access requires use of the page table, the speed of this hardware is essential. Clearly, the instructions that access (load and modify) the page-table registers are privileged. This is a good solution if the page table is small (e.g., 256 entries). Unfortunately, many large systems allow page tables on the order of one million entries.

Implementation of Page Table – Hardware Support on larger machines
For large systems with huge page tables, the page table is kept in main memory and a page-table base register (PTBR) points to the page table of a specific process. When a process is resumed, only the contents of this one register must be reloaded for address translation, which substantially reduces context-switch time. The PTBR value is maintained in the process's PCB, and the register always points to the page table of the currently executing process. But there is a problem with this approach: every data/instruction access now requires two memory accesses, one for the page-table entry and one for the data/instruction itself. The two-memory-access problem can be solved with a special fast-lookup hardware cache called an associative memory or translation look-aside buffer (TLB).

Associative Memory
The idea behind an associative memory (very expensive, I might add) is that each entry has two parts: a key (search argument) and a value. The hardware searches all entries simultaneously (a parallel search); when (and if) the key is found, the value field is returned. The book cites sizes usually between 64 and 1024 entries (integer powers of two), largely due to the expense and practicality of the hardware. It is important to recognize that the AM contains only a few of the page-table entries. Address translation then works as follows: if the page number is found in an associative register, the frame number is returned and memory may be accessed immediately (desirable; this is very fast). Otherwise the frame number must be fetched from the page table in memory (less desirable); this is called a TLB miss. Each TLB entry is simply a (page #, frame #) pair.

Paging Hardware With TLB
If we get a miss, we must access the page table in memory; once the frame number is obtained, we can then access memory (see figure). What is very important to note in the figure is that the lookup in the AM is much faster than the lookup in the page table! If the TLB is full, a replacement policy such as LRU is used to evict an entry in favor of the most recently accessed translation. Some TLBs have wired-down entries that cannot be replaced; these are normally for kernel code.
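The hit/miss flow with LRU replacement can be sketched as a toy software model (not a hardware description; the capacity and page size are assumed values):

```python
from collections import OrderedDict

class TLB:
    """Toy associative memory: page -> frame, with LRU replacement."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()          # least recently used first

    def lookup(self, page):
        if page in self.entries:              # TLB hit
            self.entries.move_to_end(page)    # mark most recently used
            return self.entries[page]
        return None                           # TLB miss

    def insert(self, page, frame):
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[page] = frame

def translate(page, offset, tlb, page_table, page_size=4096):
    frame = tlb.lookup(page)
    if frame is None:                 # miss: walk the in-memory table
        frame = page_table[page]
        tlb.insert(page, frame)       # cache the translation
    return frame * page_size + offset
```

Real TLBs do the parallel key search in hardware; the dictionary here only mimics the observable hit/miss behavior.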

Paging Hardware – Protection via the TLB
Some TLBs store an address-space identifier (ASID) in each entry; an ASID uniquely identifies a process. When a translation is attempted using the associative memory, the hardware checks that the ASID of the currently running process matches the ASID associated with the virtual page. If there is no match, the attempt is treated as a TLB miss. Remember, the associative memory does not hold all mappings for a process. One more very nice feature follows from this: an ASID-tagged TLB can contain entries for several processes at the same time. Otherwise, on every context switch the TLB would have to be flushed entirely and refilled with entries for the new 'current' process. To support this feature, the TLB must contain ASIDs.
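Assuming entries are keyed by (ASID, page), the match rule can be sketched as follows (the entries shown are hypothetical):

```python
# Toy ASID-tagged TLB: entries keyed by (asid, page) so translations
# for several processes can coexist without flushing on a switch.
def tlb_lookup(tlb, current_asid, page):
    # No entry, or an entry tagged with another process's ASID,
    # is simply treated as a TLB miss (None).
    return tlb.get((current_asid, page))

tlb = {(1, 0): 5, (2, 0): 9}   # hypothetical entries for ASIDs 1 and 2
```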

Memory Protection – Protection Bits via the Page Table
Memory protection may be implemented by associating protection bits with each frame, kept in the page table. For example, one bit can mark a page as read-only or read-write; these bits are checked to verify that no write is attempted on a read-only page. Violations result in traps to the operating system. The hardware can be extended a wee bit more by providing a bit for each kind of access: read-only, read-write, execute-only, or combinations of these.
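A sketch of the check, with an assumed page-table-entry layout of (frame, valid, writable); the field names are illustrative only, not the book's notation:

```python
# Hypothetical PTE layout: (frame, valid, writable).
def access(page_table, page, write=False):
    frame, valid, writable = page_table[page]
    if not valid:
        raise LookupError("invalid page: trap to the OS")
    if write and not writable:
        raise PermissionError("write to read-only page: trap to the OS")
    return frame

pt = {0: (5, True, False),   # read-only page
      1: (6, True, True),    # read-write page
      2: (0, False, False)}  # invalid page
```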

Valid (v) or Invalid (i) Bit in a Page Table to show 'Availability'
We can see how the valid-invalid bit might 'appear' for a process: one more bit is attached to each page-table entry to indicate whether the page is in the process's logical address space. Attempts to map a logical address onto a frame that is not part of the executing process's logical space are trapped by this bit and handled by the operating system. The OS sets the bit for each page it wishes to allow access to. We can see how this works on the next slide.

Valid (v) or Invalid (i) Bit – continued
We can readily see how pages 0-5 are mapped nicely into frames. If an address is somehow developed (via indexing, displacements, etc.) that falls in page 6 or 7, the invalid bit causes a trap to the operating system. But we must be careful, because the program size may not exactly fill a whole number of pages (a multiple of 2 KB in this example). Any reference beyond the logical end of the program is illegal even though it may still lie within page 5, so some of the addresses in page 5 are valid but illegal! This reflects the internal fragmentation issue.

More bits…
In truth, it is very rare indeed that a process uses its entire address range. The 'address range' is merely the range of addresses conceivably available given the number of address bits in the computer's addressing scheme: a 14-bit scheme allows addresses up to 16383; a 16-bit scheme allows addresses from 0 to 65535; etc. These are maximums. It is undesirable to create a page-table entry for every conceivable page in the address space; since the page table is memory-resident, this would represent significant wasted space! So, some systems have a page-table length register (PTLR) which indicates the size of the page table. Every reference is then checked to verify that the developed address lies in the valid range for the process. Errors? Trap to the operating system. Recall: PTBR (page-table base register). Now we also have a PTLR (page-table length register).

Reentrant Code
Reentrant code is code that does not modify itself; hence it can be used over and over, and interrupted and re-entered, by many processes simultaneously. Each process will, of course, have its own registers and data area, but the executable code may be shared. Each process's page table maps onto the same single copy of the executable code. Many commonly used programs, such as compilers, window systems, run-time libraries, and database systems, are written as reentrant code. Discuss.

8.5 Page Table Structure
Commonly used techniques for organizing page tables include: Hierarchical Paging, Hashed Page Tables, and Inverted Page Tables. We will discuss each of these in some detail.

Hierarchical Page Tables
The idea here is to break up the logical address space into multiple levels of page tables; a simple technique is a two-level page table. Why? Most modern computers have very large logical address spaces. With an address space of 2**32 to 2**64 bytes, a single page table for, say, a 4 KB page size would be excessively large. More concretely: for a 32-bit logical address space with 4 KB pages there are 2**20 pages, and if each page-table entry is 4 bytes, the page table requires 4 MB of primary memory. And this table would be required for each process!!! This is prohibitively large and prohibitively expensive. Enter the hierarchical page-table organization.

Two-Level Paging Example
Since this approach does not work well on 64-bit machines, our example is best shown with 32-bit addressing. A logical address (on a 32-bit machine with a 4 KB page size) is divided into:
- a page number consisting of 20 bits
- a page offset consisting of 12 bits
Since the page table is itself paged, the page number is further divided into:
- a 10-bit outer page number (p1)
- a 10-bit inner page number (p2)
Thus a logical address looks like | p1 (10 bits) | p2 (10 bits) | d (12 bits) |, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. Let's see how this looks…
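The 10/10/12 split can be expressed with shifts and masks; a sketch for the 32-bit layout just described:

```python
def split_two_level(addr):
    d  = addr & 0xFFF           # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF   # next 10 bits: inner page number
    p1 = (addr >> 22) & 0x3FF   # top 10 bits: outer page number
    return p1, p2, d

# The all-ones address 0xFFFFFFFF splits into p1=1023, p2=1023, d=4095.
```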

Two-Level Page-Table Scheme
p1 indexes into the outer page table; p2 is the displacement (offset) within the selected page of the page table. Note that 2**10 = 1024, so each table holds up to 1024 entries. There are a number of similar schemes; as address spaces grow larger, most implementations require additional levels of page tables. The SPARC architecture with 32-bit addressing supports a 3-level paging scheme; the 32-bit Motorola 68030 chip uses a 4-level scheme; the 64-bit UltraSPARC would require 7 levels of paging. Again, this scheme is not very good, in general, for architectures beyond 32 bits.

Address-Translation Scheme – a bit more
Address translation for a two-level 32-bit paging architecture: p1 indexes the outer page table, and the selected entry points to a page of the page table; p2 is the displacement into that page, and the entry found there identifies the frame; d is the displacement into the selected frame.

Hashed Page Tables
For larger paging architectures we need more efficient schemes; the hashed page table approach is common for address spaces larger than 32 bits. In this approach, the logical address consists of a virtual page number and a displacement. The virtual page number is 'hashed' into the hash table. Each slot of the hash table holds a chain of synonyms: a linked list of elements that hashed to the same location. Each element consists of three components:
- the virtual page number (the search argument),
- the value of the mapped page frame (the target), and
- a pointer to the next element in the linked list (collision handling for the hashing).
Once the virtual page number is hashed to a slot, it is compared against the first component of the first element there. On a hit, the corresponding page frame is used to form the physical address we're after: the identified page frame plus the displacement from the original logical address constitute the physical memory address of the desired item. On a miss, the remaining members of the linked list are searched via the forward pointers.

Hashed Page Table
In action, 'p' (the virtual page number) from the logical address is the input to the hashing function, which points to an entry in the hash table. If the first field (of the three) of the entry matches the virtual page number of the desired page, we have a page frame ('r' in the figure), to which d (the displacement) is added to form the precise physical address. If it is not a hit, we follow the link field to the next entry in the linked list…
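The chain walk described above can be sketched as follows (the bucket count and entries are assumed toy values; a real implementation hashes much larger virtual page numbers):

```python
class HashedPageTable:
    """Toy hashed page table: each bucket holds a chain of
    (virtual page number, frame) pairs, playing the role of the
    linked list of synonyms."""
    def __init__(self, buckets=16):
        self.buckets = buckets
        self.table = [[] for _ in range(buckets)]

    def insert(self, vpn, frame):
        self.table[vpn % self.buckets].append((vpn, frame))

    def lookup(self, vpn):
        for entry_vpn, frame in self.table[vpn % self.buckets]:
            if entry_vpn == vpn:       # hit: first component matches
                return frame
        return None                    # chain exhausted: not mapped

def physical_address(hpt, vpn, d, page_size=4096):
    frame = hpt.lookup(vpn)
    return None if frame is None else frame * page_size + d
```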

Inverted Page Table
In this approach, as its name implies, we 'invert' the mapping: there is one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page. The table is ordered by physical address, but lookups occur on virtual addresses (book). So there is only one page table in the system, and it contains only a single entry for each physical page of memory.

Inverted Page Table - 2
Most implementations of the inverted-page-table approach store some kind of process identifier (pid) as the first field of each entry. A pid is needed because there is only one table, and different processes use the same table for their mappings; we must ensure the correct process's logical page is used to develop the physical address. Some large 64-bit addressing schemes use the inverted page table approach. Consider the example on the next slide, where the logical address consists of a process id, a page number, and a displacement: (pid, p, d).

Inverted Page Table Architecture
The process id acts as an address-space identifier, since it is unique (recall the ASID from before?). To develop the address, the memory subsystem searches the table for an entry matching both the pid and the page number. If there is a match (remember, there is one entry per real page of memory), the position of that entry identifies the page frame; we now simply add the displacement to get the real (physical) address. No match means an error condition.
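The pid-plus-page match can be sketched as a linear search over a toy inverted table, with the frame number given by the matching entry's position:

```python
def translate_inverted(pid, page, offset, inverted, page_size=4096):
    # inverted[i] records which (pid, page) occupies physical frame i.
    for frame, entry in enumerate(inverted):
        if entry == (pid, page):          # pid and page must both match
            return frame * page_size + offset
    raise LookupError("no match: illegal address, trap to the OS")

# Toy table: frame 0 holds pid 1's page 0, frame 1 holds pid 2's
# page 0, and frame 2 holds pid 1's page 7.
inverted = [(1, 0), (2, 0), (1, 7)]
```

The linear scan makes the space/time trade-off of the next slide concrete: one small table, but a potentially full-table search per reference.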

Inverted Page Table Architecture
Overall storage space is much improved, but the time to search the table increases. Because the inverted page table is ordered by physical address while lookups are by virtual address, the entire table might need to be searched for a match; this is prohibitively expensive. As a result, we would like to limit the search to at most a few page-table entries, so we use a hash table as previously described. Unfortunately, this still requires an extra memory access: one for the hash table and another for the page table. Thus a virtual memory access requires at least two memory reads. To help with this, a TLB can be consulted before the hash-table search, which can speed up performance significantly.

Inverted Page Table Architecture
Your book points out that systems using this approach have difficulty implementing shared memory, which is usually implemented as multiple virtual addresses all mapped to one physical address. This standard method doesn't cut it with inverted page tables, because there is only one virtual-page entry per physical page, so one physical page cannot be associated with two virtual addresses. One solution is for each entry to contain only one pid (one mapping of a virtual address); references through virtual addresses that are not currently mapped then result in page faults.

End of Chapter 8.2