NETW3005 Memory Management
Reading
For this lecture, you should have read Chapter 8 (Sections 1-6).
This lecture – memory management
Background – general concepts
Getting programs/data into memory
Logical and physical addresses
Memory allocation methods
Paging
Segmentation
Recap on storage hierarchy
[Figure: the storage hierarchy – cache, main memory, and secondary storage (hard disk). Moving up the hierarchy towards the cache, access speed and cost increase while access time and capacity decrease. Primary storage (cache and main memory) can be referenced directly by the CPU; data on secondary storage must be loaded into memory before being referenced.]
The instruction execution cycle
The CPU fetches an instruction from main memory, according to the value of the program counter. The instruction may require other main memory locations to be accessed, for loading or storing contents of registers.
Memory unit
Responsible for accessing main memory. Doesn't know or care what the data being accessed are actually used for.
Memory management
Deals with how to get processes into main memory, and the organisation of that memory. Non-trivial task:
– The memory unit accesses addresses in primary memory, e.g. 00084.
– Source code refers to symbols: variables (e.g. count) and procedures (e.g. get_next_item).
Address binding
Symbols need to be converted into addresses. Technically, we talk of symbols being bound to addresses. Symbols can be converted into addresses at one of several points.
Address binding (continued)
– Compile time.
– Link time.
– Load time.
– Run time.
Dynamic linking and loading
When a source program is compiled, the code for the functions provided by the language (e.g. C++ standard template libraries) needs to be incorporated into the object program. This can be done statically or dynamically.
Dynamic linking and loading (2)
Static linking – done at link or load time.
Dynamic linking – done at run time.
– A 'stub' containing a pointer is included in place of the code for each routine.
– When the stub is referenced, the pointer is initialised.
– If the routine is already loaded, it is used directly; otherwise it is loaded first.
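The slides don't tie this to a particular API, but the POSIX dlopen/dlsym interface gives a concrete feel for run-time loading: a routine is located and bound only when first needed, much as the stub described above would do. A minimal sketch in C (the library name libm.so.6 and the symbol cos are just illustrative choices; link with -ldl on many Linux systems):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the library only when it is first needed (lazy binding). */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the symbol at run time, much as an initialised stub would. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}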
Advantages of dynamic linking
Saving on memory space – don't need multiple copies of the same routines.
Saving on load time – only need to load a routine once.
Ease of updating libraries (e.g. to fix bugs, change versions, etc.).
Logical and physical addresses
A logical address is one referred to in a piece of executable code (e.g. to refer to a variable or a location). The CPU executes instructions involving logical addresses. A physical address is the address which is actually sent to the memory unit.
The memory-management unit
The logical address space of a process runs from 0 to the memory limit of the process. The physical address space runs from 0 to the size of main memory. The memory-management unit (MMU) maps between the two.
The memory-management unit (2)
It takes a logical address generated by the CPU and adds a number N to compute the physical address. The number N is held in a relocation register. To move a process in memory, just change the relocation register.
Run-time binding and multitasking
Run-time binding is useful for saving on memory:
– Only load modules when they're needed.
– Allow several processes to share one copy of a module.
Run-time binding and multitasking (2)
There are also many benefits that relate to a multitasking scenario.
– Until now, our multitasking scenario has involved switching between processes resident in main memory.
– However, while a process is waiting for the CPU, there's no reason for it to be in main memory.
CPU and memory as resources
Multitasking means allowing several processes to take turns to use the CPU. Processes can also take turns to be allocated a region of memory.
Swapping
Processes are swapped in and out of main memory from/to a backing store. Swapping is done by the memory manager, which is a module of the kernel. The memory manager needs to work in sync with the CPU scheduler.
Memory allocation and protection
Processes are not allowed to access each other's address space (except under special circumstances). How is this rule implemented? Each process has an associated relocation register and limit register.
Memory allocation and protection (2)
[Figure: the CPU's logical address is first compared with the limit register; if it is not less than the limit, a trap (addressing error) is raised. Otherwise the relocation register is added to the logical address to give the physical address sent to main memory.]
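A minimal sketch in C of the check the figure describes, using made-up register values; on real hardware the comparison and addition happen on every memory reference, not in software:

#include <stdio.h>

/* Hypothetical per-process registers (values chosen purely for illustration). */
static unsigned limit_register      = 12000;  /* size of the process's logical address space */
static unsigned relocation_register = 34000;  /* where the process begins in physical memory */

/* Translate a logical address as the MMU in the figure does. */
static int translate(unsigned logical, unsigned *physical) {
    if (logical >= limit_register)
        return -1;                              /* TRAP: addressing error */
    *physical = logical + relocation_register;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(500, &phys) == 0)
        printf("logical 500 -> physical %u\n", phys);        /* 34500 */
    if (translate(20000, &phys) != 0)
        printf("logical 20000 -> TRAP: addressing error\n");
    return 0;
}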
Allocating memory for processes
How to organise memory allocation for many processes? A simple method – divide memory into a number of fixed-size partitions. Problems?
Allocating memory for processes (2)
A more complex method:
– Each free region of memory is termed a hole.
– When a process arrives, we find a hole big enough to put it into.
– If the hole is bigger than the process, we keep a record of the new (smaller) hole.
Algorithms for choosing a hole
There are three different methods:
– First-fit: allocate the first hole you find that's big enough.
– Best-fit: find the hole that leaves the smallest leftover hole.
– Worst-fit: find the hole that leaves the biggest leftover hole.
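A sketch in C of the first two strategies over an in-memory list of holes; the hole list and request size below are made-up, and worst-fit is the same loop as best-fit with the comparison reversed:

#include <stddef.h>
#include <stdio.h>

/* A free region of memory: start address and size. */
struct hole { size_t start; size_t size; };

/* First-fit: index of the first hole large enough, or -1 if none. */
static int first_fit(const struct hole *holes, int n, size_t request) {
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}

/* Best-fit: index of the smallest hole that still fits the request, or -1. */
static int best_fit(const struct hole *holes, int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i].size >= request &&
            (best == -1 || holes[i].size < holes[best].size))
            best = i;
    return best;
}

int main(void) {
    struct hole holes[] = { {0, 300}, {500, 1200}, {2000, 600} };
    size_t request = 550;
    printf("first-fit chooses hole %d\n", first_fit(holes, 3, request));  /* hole 1 (size 1200) */
    printf("best-fit chooses hole %d\n",  best_fit(holes, 3, request));   /* hole 2 (size 600)  */
    return 0;
}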
Advantages/disadvantages?
First-fit: quickest, but possibly non-optimal.
Best-fit: exhaustive search. Also, you end up creating very small holes.
Worst-fit: exhaustive search. You might not use the large holes as effectively as you could.
External fragmentation
If memory is broken into many holes, there might be enough memory in total to fit a particular process, but not all in one place.
[Figure: memory occupied by processes P0 and P1, with several small holes in between.]
Internal fragmentation
The operating system has to keep a table of all the currently available holes in memory. If we keep very many small holes, we'll end up with a huge amount of housekeeping to do. To avoid creating tiny holes, we sometimes allocate more memory than a process needs; the unused portion of such an allocation is internal fragmentation.
Non-contiguous memory allocation
The allocation methods described so far have all required the memory allocated for a process to be contiguous. Allowing non-contiguous allocation removes the need to find a single hole large enough for the whole process. One way of providing non-contiguous allocation is through paging.
Background
[Figure: logical memory is divided into pages and physical memory into frames of the same size; a backing store on disk also holds pages.]
Paging
The CPU generates addresses which the paging hardware breaks into two components:
– A page number (identifying a page);
– An offset (an address within that page).
Each process is allocated a certain number of pages.
Paging (2)
[Figure: the CPU issues a logical address made up of a page number p and an offset; the page table maps p to a frame number f, and the physical address (f, offset) is sent to memory.]
Paging (3)
The page table also stores a valid/invalid bit, which is set to invalid for out-of-range memory references. Paging is a way of implementing run-time address binding.
Paging example
[Figure: a process with four logical pages (page 0 to page 3); the page table maps page 0 to frame 1, page 1 to frame 4, page 2 to frame 3 and page 3 to frame 6, each entry marked valid (v). The pages sit scattered across physical memory frames 0-6.]
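A sketch in C of the translation the last few slides describe, using the same page-to-frame mapping as the example above; the 4 KB page size is an assumption made only for the sketch:

#include <stdio.h>

#define PAGE_SIZE   4096u   /* assumed page size */
#define OFFSET_BITS 12u     /* log2(PAGE_SIZE)   */
#define NUM_PAGES   4u

/* One page-table entry: frame number plus valid/invalid bit. */
struct pte { unsigned frame; int valid; };

/* Page table mirroring the example: 0->1, 1->4, 2->3, 3->6, all valid. */
static struct pte page_table[NUM_PAGES] = { {1, 1}, {4, 1}, {3, 1}, {6, 1} };

/* Returns -1 for an out-of-range or invalid reference (hardware would trap). */
static int translate(unsigned logical, unsigned *physical) {
    unsigned page   = logical >> OFFSET_BITS;
    unsigned offset = logical & (PAGE_SIZE - 1);
    if (page >= NUM_PAGES || !page_table[page].valid)
        return -1;
    *physical = (page_table[page].frame << OFFSET_BITS) | offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(4196, &phys) == 0)                    /* page 1, offset 100   */
        printf("logical 4196 -> physical %u\n", phys);  /* frame 4 -> 16484     */
    if (translate(20480, &phys) != 0)                   /* page 5: out of range */
        printf("logical 20480 -> invalid (trap)\n");
    return 0;
}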
Some questions
What about external fragmentation?
– This scheme eliminates it.
What about internal fragmentation?
– Each process is allocated a discrete number of pages, so on average half a page per process is wasted.
How big should pages be?
– Small, but not so small that the disk I/O and housekeeping become too much.
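To make the half-a-page figure concrete (page size assumed here purely for illustration): with 4 KB pages, a process of 13,000 bytes needs four pages, i.e. 16,384 bytes, so 3,384 bytes of the last page are wasted; averaged over many processes of arbitrary size, the waste works out at about half a page each.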
More questions
How many page tables do we need?
– One for each process. (Because they're all going to have a page 0.)
How does the O/S keep track of a process's page table?
– It keeps a copy in its PCB. (Remember I said that the PCB contains 'memory management information'...)
Implementing paging
The fastest (and most expensive) way of implementing paging is to store the page table in a set of special-purpose registers. But this isn't feasible if the page table is big (as it normally is).
Implementing paging (2)
The alternative is to store the page table in main memory. This could slow things down, because we now need two memory accesses per reference. The solution is to keep a cache of recently used page table entries in a special set of parallel-access registers called associative registers.
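The slides give no timings, but a worked example shows why the cache helps. Assume (purely for illustration) a 20 ns associative-register lookup, a 100 ns memory access and an 80% hit ratio: the effective access time is 0.80 × (20 + 100) + 0.20 × (20 + 100 + 100) = 0.80 × 120 + 0.20 × 220 = 140 ns, against 200 ns if every reference needed two memory accesses.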
Segmentation
In the paging scheme above, it's the hardware that partitions a CPU-generated address into pages. But there are some reasons for allowing a process to partition its own address space.
Segmentation (2)
[Figure: a programmer's view of a logical address space as a set of segments – main program, subroutine, stack, symbol table, SQRT.]
A segmentation allocation scheme supports this view of memory.
Implementing segmentation
In segmentation, a logical address is a pair: a segment number and an offset within that segment. These are mapped onto physical addresses by the segmentation hardware, using a segment table.
Implementing segmentation (2)
Each entry in the table has a segment base (the starting physical address) and a segment limit (the size of the segment). Segments can be stored non-contiguously. Like the page table, the segment table can either be put into fast registers or main memory.
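A sketch in C of the lookup just described; the segment table contents are made-up values for illustration:

#include <stdio.h>

/* One segment-table entry: base (starting physical address) and limit (segment size). */
struct seg_entry { unsigned base; unsigned limit; };

/* Illustrative segment table for one process. */
static struct seg_entry seg_table[] = {
    {1400, 1000},   /* segment 0, e.g. main program */
    {6300,  400},   /* segment 1, e.g. subroutine   */
    {4300, 1100},   /* segment 2, e.g. stack        */
};
#define NUM_SEGS (sizeof seg_table / sizeof seg_table[0])

/* Translate a (segment number, offset) pair; -1 where hardware would trap. */
static int translate(unsigned seg, unsigned offset, unsigned *physical) {
    if (seg >= NUM_SEGS || offset >= seg_table[seg].limit)
        return -1;                                  /* addressing error */
    *physical = seg_table[seg].base + offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(2, 53, &phys) == 0)
        printf("(segment 2, offset 53) -> physical %u\n", phys);    /* 4353 */
    if (translate(1, 500, &phys) != 0)
        printf("(segment 1, offset 500) -> TRAP: beyond segment limit\n");
    return 0;
}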
Advantages of segmentation
Makes protection and sharing easy to deal with (as all items in a given segment are likely to have the same protection status). You just need one protection bit per segment. It also makes it easier to write machine code (and thus to write compilers).
Disadvantages of segmentation
External fragmentation is back, but less severely. Some schemes (e.g. MULTICS) combine paging and segmentation. The idea is that a memory reference consists of a segment number plus a segment offset, where the segment offset is itself broken into two components: a page number and a page offset.
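A sketch in C of this MULTICS-style combination, with every size and table value below assumed for illustration: each segment carries its own small page table, and the segment offset is split into a page number and a page offset:

#include <stdio.h>

#define PAGE_SIZE   1024u   /* assumed page size */
#define OFFSET_BITS 10u     /* log2(PAGE_SIZE)   */

/* A segment: its length plus a per-segment page table of frame numbers. */
struct segment {
    unsigned limit;
    const unsigned *page_table;
};

static const unsigned seg0_pages[] = {7, 2, 9};     /* segment 0 has 3 pages */
static const struct segment seg_table[] = {
    { 3 * PAGE_SIZE, seg0_pages },
};
#define NUM_SEGS (sizeof seg_table / sizeof seg_table[0])

static int translate(unsigned seg, unsigned seg_offset, unsigned *physical) {
    if (seg >= NUM_SEGS || seg_offset >= seg_table[seg].limit)
        return -1;                                          /* trap: addressing error */
    unsigned page        = seg_offset >> OFFSET_BITS;       /* split the segment offset */
    unsigned page_offset = seg_offset & (PAGE_SIZE - 1);    /* into page + page offset  */
    unsigned frame       = seg_table[seg].page_table[page];
    *physical = (frame << OFFSET_BITS) | page_offset;
    return 0;
}

int main(void) {
    unsigned phys;
    if (translate(0, 1074, &phys) == 0)                     /* segment 0, page 1, offset 50 */
        printf("(segment 0, offset 1074) -> physical %u\n", phys);   /* frame 2 -> 2098 */
    return 0;
}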
Next lecture
Virtual Memory (Chapter 9, Sections 1-7).