
1 Virtual Memory: Part 2
Kashyap Sheth, Kishore Putta, Bijal Shah, Kshama Desai

2 Index
- Recap
- Translation lookaside buffer
- Segmentation
- Segmentation with paging
- Working set model
- References

3 Terms & Notions
- Virtual memory (VM): not a physical device but an abstract concept, comprised of the virtual address spaces of all processes.
- Virtual address space (VAS) of one process: the set of virtual addresses visible to that process. (Some systems may use a single VAS for all processes.)

4 Paging
- Page: the virtual address space is divided into fixed-size units called pages; every page is the same size.
- Page frame: the physical address space is divided into units of the same size, called page frames.
- Memory Management Unit (MMU): hardware that maps virtual addresses onto physical addresses.
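A minimal sketch of this page-to-frame mapping, assuming 4 KB pages and a hypothetical flat page_table[] array (the table contents are made up for illustration, and no bounds or validity checks are done):

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u            /* assumed page / page-frame size */
#define NUM_PAGES 16u              /* assumed size of the toy VAS    */

static uint32_t page_table[NUM_PAGES] = {
    /* page -> frame mapping chosen arbitrarily for the example */
    [0] = 5, [1] = 2, [2] = 7, [3] = 0
};

uint32_t translate(uint32_t vaddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;   /* which page           */
    uint32_t offset = vaddr % PAGE_SIZE;   /* position inside page */
    uint32_t frame  = page_table[page];    /* MMU lookup           */
    return frame * PAGE_SIZE + offset;     /* physical address     */
}

int main(void)
{
    uint32_t v = 2 * PAGE_SIZE + 100;      /* an address in page 2 */
    printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));
    return 0;
}
```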

5 Translation Lookaside Buffer
- Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry, one to fetch the data.
- To overcome this problem, a high-speed cache for page table entries is set up, called the TLB (Translation Lookaside Buffer).

6 Translation Lookaside Buffer
- Contains the page table entries that have been most recently used.
- Functions in the same way as a memory cache.

7 Translation Lookaside Buffer
- Given a virtual address, the processor first examines the TLB.
- If the page table entry is present (a hit), the frame number is retrieved and the real address is formed.
- If the entry is not found in the TLB (a miss), the page number is used to index the process's page table.

8 Translation Lookaside Buffer
- The page table entry shows whether the page is already in main memory.
- If it is not in main memory, a page fault is issued and the page is brought in.
- The TLB is then updated to include the new page table entry.
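The lookup path described on slides 5-8 can be sketched as follows; tlb[], page_table[], and load_page_from_disk() are hypothetical names, and the TLB replacement policy is deliberately trivial:

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 8
#define PAGE_SIZE   4096u

struct tlb_entry { bool valid; uint32_t page, frame; };
static struct tlb_entry tlb[TLB_ENTRIES];

struct pte { bool present; uint32_t frame; };
static struct pte page_table[1024];

/* Stand-in for the real page-fault handler: for this toy example,
   pretend the page was loaded into a frame with the same number. */
static uint32_t load_page_from_disk(uint32_t page) { return page; }

uint32_t translate(uint32_t vaddr)
{
    uint32_t page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;

    /* 1. Check the TLB: a hit avoids the extra page-table access. */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame * PAGE_SIZE + offset;

    /* 2. TLB miss: index the page table with the page number. */
    if (!page_table[page].present) {            /* page fault */
        page_table[page].frame   = load_page_from_disk(page);
        page_table[page].present = true;
    }

    /* 3. Update the TLB (trivial replacement: overwrite slot 0). */
    tlb[0] = (struct tlb_entry){ true, page, page_table[page].frame };
    return page_table[page].frame * PAGE_SIZE + offset;
}
```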

9 Operation of Paging and Translation Lookaside Buffer (Stallings Fig 8.8)

10 Use of a Translation Lookaside Buffer (Stallings Fig 8.7)

11 Segmentation
- What is segmentation?
- Segmentation: advantages
- Segmentation: disadvantages

12 [Figure: a single one-dimensional virtual address space holding the call stack, parse tree, constant table, source text, and symbol table; part of the space allocated to the parse tree is still free, while the growing symbol table has bumped into the source text table.]

13 What is Segmentation?
[Figure: segment-table translation. A virtual address is split into a segment number and an offset. The MMU, using the segment table base register (STBR) and segment table length register (STLR), indexes the segment table; each entry holds a base, a limit, and other bits (valid, modified, protection, etc., as in paging). If offset < limit, the physical address is segment base + offset; otherwise a memory access fault is raised. Segments such as code, data, and stack are placed contiguously in physical memory, which leads to external fragmentation.]
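A sketch of the limit check and base + offset computation from the figure, with an illustrative seg_table[] (segment bases and limits are made up, and the segment number is assumed valid):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment { uint32_t base, limit; };   /* plus valid/protection bits in practice */

static struct segment seg_table[4] = {
    {0x1000, 0x400}, {0x8000, 0x2000}, {0xF000, 0x800}
};

uint32_t translate(uint32_t seg, uint32_t offset)
{
    if (offset >= seg_table[seg].limit) {        /* offset < limit ? */
        fprintf(stderr, "memory access fault\n"); /* trap to the OS  */
        exit(EXIT_FAILURE);
    }
    return seg_table[seg].base + offset;         /* base + offset    */
}
```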

14 Segmentation: Advantages
As opposed to paging:
- No internal fragmentation (but: external fragmentation).
- May save memory if segments are very small and should not be combined into one page (e.g. for reasons of protection).
- Segment tables: only one entry per actual segment, as opposed to one entry per page in a paged VM.
- Average segment size >> average page size, so less overhead (smaller tables).

15 Segmentation: Disadvantages
- External fragmentation.
- Costly memory management algorithms: segmentation must find a free memory area big enough (a search), whereas paging keeps a list of free pages and any page will do (take the first); see the sketch below.
- Segments of unequal size are not as well suited for swapping.
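An illustrative contrast between the two allocation paths, using hypothetical hole-list and free-frame-list structures:

```c
#include <stddef.h>
#include <stdint.h>

struct hole       { uint32_t start, size; struct hole *next; };
struct frame_node { uint32_t frame; struct frame_node *next; };

/* Segmentation: scan the hole list for the first hole that fits (first fit).
   Cost grows with the number of holes. */
struct hole *alloc_segment(struct hole *holes, uint32_t size)
{
    for (struct hole *h = holes; h != NULL; h = h->next)
        if (h->size >= size)
            return h;          /* caller splits the hole */
    return NULL;               /* no hole big enough: compact or swap out */
}

/* Paging: any free frame will do, so just take the head of the list. O(1). */
struct frame_node *alloc_frame(struct frame_node **free_list)
{
    struct frame_node *f = *free_list;
    if (f) *free_list = f->next;
    return f;
}
```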

16 Combined Segmentation and Paging (CoSP)
- What is CoSP?
- CoSP: advantages
- CoSP: disadvantages

17 Architecture for Segmentation with Paging
[Figure: the CPU issues a logical address consisting of a segment number s and an offset. The per-process segment table is indexed with s, and the offset is compared against the segment limit (trap to memory management on overflow). The offset is then split into a page number p and page offset po; the segment's page table (located via its page table base) yields frame f, and the physical address is f combined with po.]
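A sketch of this two-level translation, assuming one page table per segment; the structure layout and 4 KB page size are assumptions for illustration:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

struct seg_entry { uint32_t limit; uint32_t *page_table; };  /* one per segment */

uint32_t translate(struct seg_entry *seg_table, uint32_t s, uint32_t offset)
{
    if (offset >= seg_table[s].limit) {        /* segment limit check       */
        fprintf(stderr, "trap: segment overflow\n");
        exit(EXIT_FAILURE);
    }
    uint32_t p  = offset / PAGE_SIZE;          /* index into the segment's  */
    uint32_t po = offset % PAGE_SIZE;          /* page table, plus offset   */
    uint32_t f  = seg_table[s].page_table[p];  /* frame number              */
    return f * PAGE_SIZE + po;                 /* f combined with po        */
}
```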

18 CoSP: Advantages
Reduces memory usage as opposed to pure paging:
- Page table size is limited by segment size.
- The segment table has only one entry per actual segment.
- Simplifies handling protection and sharing of larger modules (define them as segments).
Most advantages of paging still hold:
- Simplifies memory allocation.
- Eliminates external fragmentation.
- Supports swapping, demand paging, prepaging, etc.

19 CoSP: Disadvantages
- Internal fragmentation, yet only about ½ page on average per contiguous address range.
[Figure: a process requests a 6 KB address range with 4 KB pages; it receives two pages (Page 1 and Page 2), and the unused part of Page 2 is internal fragmentation.]
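The 6 KB example works out as follows (a toy calculation, not taken from the slides beyond the stated sizes):

```c
#include <stdio.h>

int main(void)
{
    unsigned page_size = 4096, request = 6144;                /* bytes        */
    unsigned pages = (request + page_size - 1) / page_size;   /* round up: 2  */
    unsigned waste = pages * page_size - request;              /* 2048 bytes   */
    printf("%u pages allocated, %u bytes internal fragmentation\n",
           pages, waste);
    return 0;
}
```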

20 Working Sets
- Working set of pages: the minimum collection of pages that must be loaded in main memory for a process to operate efficiently, without unnecessary page faults.
- "The smallest collection of information that must be present in main memory to assure efficient execution of the program."
- Process and working set: two manifestations of the same ongoing computational activity.

21 Working Set Strategy
- W(t, D) = the set of pages of a process that are in memory at time t and have been referenced in the last D virtual time units.
- Virtual time: the time that elapses while the process is in execution, measured in instruction steps.
- Working set size: the number of pages in W(t, D).
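A sketch of W(t, D) computed over a recorded page-reference string; the trace and the MAX_PAGES bound are made up for illustration:

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_PAGES 64

/* Count the distinct pages referenced in refs[t-D+1 .. t]. */
int working_set_size(const int *refs, int t, int D)
{
    bool in_ws[MAX_PAGES] = { false };
    int size = 0;
    int start = (t - D + 1 > 0) ? t - D + 1 : 0;
    for (int i = start; i <= t; i++)
        if (!in_ws[refs[i]]) { in_ws[refs[i]] = true; size++; }
    return size;
}

int main(void)
{
    int refs[] = {1, 2, 1, 3, 2, 2, 4, 1};       /* toy reference string */
    /* Pages {2, 4, 1} were referenced in the last 4 steps, so this prints 3. */
    printf("|W(7, 4)| = %d\n", working_set_size(refs, 7, 4));
    return 0;
}
```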

22 Characteristics of Working Sets
- Size: the working set is a non-decreasing function of the window size D; specifically, W(t, D+1) contains W(t, D).
- Prediction: intuitively, the immediate past page reference behavior is expected to be a good prediction of the immediate future behavior.

23 Detecting/Measuring W(t, D)
- Hardware mechanism: record whether each page was referenced in the last D (virtual) time units.
- Software: sample the page table entries at intervals of D/K; any page referenced in these intervals is in the working set (see the sketch below).
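A sketch of the software approximation: reference bits are sampled and cleared every D/K time units, and a page counts as part of the working set if it was referenced in any of the last K intervals. Field names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

#define K 4                     /* number of sampling intervals kept  */

struct page_info {
    bool    referenced;         /* hardware-set reference bit         */
    uint8_t history;            /* 1 bit per recent sampling interval */
};

/* Called every D/K time units for each resident page. */
void sample(struct page_info *p)
{
    p->history = (uint8_t)((p->history << 1) | (p->referenced ? 1u : 0u));
    p->referenced = false;      /* clear the bit for the next interval */
}

/* In the working set if referenced in any of the last K intervals. */
bool in_working_set(const struct page_info *p)
{
    return (p->history & ((1u << K) - 1)) != 0;
}
```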

24 Memory Allocation A program will not be run unless there is space in memory for its working set.

25 Using the Working Set Concept
A strategy for resident set size:
- Monitor the working set of each process.
- Periodically remove from the resident set of a process those pages that are not in its working set.
- A process may execute only if its working set is in main memory (i.e. its resident set includes its working set).

26 Issues With this Strategy
- The past does not necessarily predict the future: the size and membership of the working set change over time.
- A true measurement of the working set of each process is impractical: it would be necessary to time-stamp every page reference and keep a time-ordered queue.
- The optimal value of D is unknown and would vary.

27 Alternatively
- Look at the page fault rate, not exact page references; the page fault rate falls as the resident set size increases.
- If the page fault rate is below some threshold, decrease the resident set size.
- If it is above some threshold, increase the resident set size.
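A sketch of this page-fault-frequency style adjustment; the thresholds and the adjust_resident_set() helper are assumptions, not from the slides:

```c
struct pff_state {
    double fault_rate;      /* measured faults per unit of virtual time */
    int    resident_pages;  /* current resident set size                */
};

void adjust_resident_set(struct pff_state *s)
{
    const double LOW = 0.005, HIGH = 0.05;   /* illustrative thresholds */

    if (s->fault_rate < LOW && s->resident_pages > 1)
        s->resident_pages--;        /* faulting rarely: free some frames */
    else if (s->fault_rate > HIGH)
        s->resident_pages++;        /* faulting too often: add frames    */
}
```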

28 References
- Gary Nutt, Operating Systems, 3rd edition.
- Andrew S. Tanenbaum, Modern Operating Systems, 2nd edition.
- William Stallings, Operating Systems.
- World Wide Web.

29 The End
