Memory Caches & TLB Virtual Memory

Cache Memory Issues
Caching improves the performance of the memory system but introduces several issues for the operating system:
- Coherence
- Addressing mode (virtual vs. physical)
- Context switching
- O.S. code structure & line size
- Frame allocation

Coherence Issues
A cache can be:
- combined: instructions + data
- split: instruction cache + data cache
Hardware may or may not maintain coherence between the two. The structure of the cache affects correctness when code segments are loaded into memory.

Coherence & Instruction Cache
This is only an issue for split caches when hardware does not guarantee coherence (the I-cache is read-only, the D-cache read-write):
1. The O.S. loads the user program's instructions through the data cache.
2. The user program then runs instructions out of the I-cache, which may contain stale lines.
The O.S. must therefore flush the instruction cache after loading code.
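The stale-line hazard above can be illustrated with a toy simulation (a Python sketch with hypothetical names, not a real cache implementation; real kernels use a hardware flush instruction):

```python
# Toy model of a split cache without hardware coherence.

class SplitCache:
    def __init__(self, memory):
        self.memory = memory   # backing store: addr -> value
        self.icache = {}       # instruction cache (read-only lines)
        self.dcache = {}       # data cache (writes go through here)

    def fetch_instruction(self, addr):
        # An I-cache hit returns a possibly stale line.
        if addr not in self.icache:
            self.icache[addr] = self.memory[addr]
        return self.icache[addr]

    def write_data(self, addr, value):
        # The loader writes code through the D-cache (write-through here).
        self.dcache[addr] = value
        self.memory[addr] = value

    def flush_icache(self):
        # What the O.S. must do after loading a program's code.
        self.icache.clear()

mem = {0x100: "old-insn"}
cache = SplitCache(mem)
cache.fetch_instruction(0x100)          # warms the I-cache with old code
cache.write_data(0x100, "new-insn")     # loader writes new code via D-cache
stale = cache.fetch_instruction(0x100)  # still sees the stale line
cache.flush_icache()
fresh = cache.fetch_instruction(0x100)  # now sees the loaded code
```

Without the flush, the program would execute whatever happened to be cached before the load.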

Addressing Modes
A cache may be organized:
- by physical address (most common)
- by virtual address
Virtually addressed caches need no TLB check if the line is present in the cache. To avoid flushing the entire cache on a context switch, a pid tag may be added to each line.

Virtually Addressable Caches
[Diagram: cache lines, each tagged with a (pid, virtual address) pair]
With the pid in each line:
- There is no need to flush the entire cache on a context switch.
- The O.S. must supply the tag, usually via a privileged register or instruction.
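The effect of the pid tag can be sketched as follows (a minimal Python model with hypothetical names; a real cache would also index by set and compare tags in hardware):

```python
# Virtually addressed cache whose lines carry a (pid, vaddr) key,
# so a context switch needs no flush.

class VCache:
    def __init__(self):
        self.lines = {}  # (pid, vaddr) -> data

    def lookup(self, pid, vaddr):
        # A hit requires both the virtual address and the pid to match.
        return self.lines.get((pid, vaddr))

    def fill(self, pid, vaddr, data):
        self.lines[(pid, vaddr)] = data

cache = VCache()
cache.fill(1, 0x2000, "process 1 data")
# Context switch to pid 2: nothing is flushed, yet pid 2 cannot
# hit pid 1's line because the pid part of the key differs.
hit_other = cache.lookup(2, 0x2000)
hit_owner = cache.lookup(1, 0x2000)
```

The lines survive the switch, so when process 1 runs again its cached data is still warm.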

Aliasing
[Diagram: (vaddr1, pid1) and (vaddr2, pid2) mapping to the same frame]
If pid1 and pid2 are sharing a frame:
- The frame may be mapped into different cache lines.
- Modifications by one process are not seen by the other.
- The O.S. must detect this situation and flush the additional line.

Context Switching
It is often claimed that an O.S. can context switch in x microseconds. This should alert your bogusity sensors: such numbers are always quoted without cache effects. The real penalty of a context switch:
- The cache must be reloaded with the active process's working set.
- The TLB misses repeatedly at the beginning.

Context Switching (continued)
At the beginning of each interval there are many cache and TLB misses. If context switches are frequent, performance goes down, and the misses introduce variability. For this reason:
- Some real-time O.S.'es disable the cache entirely.
- Some real-time O.S.'es partition the cache among processes (in software or hardware).

Caching & O.S. Code Structure
O.S. code does not:
- have many loops
- follow the principle of locality
- fit in a small space
It follows that O.S. code has a very poor cache hit ratio, and that the O.S., when invoked, pollutes the cache: a penalty even when no context switch occurs.

Dependence on Cache Line Size
Some O.S.'es are tuned to a particular cache line size (e.g., MacOS assumes a 32-byte cache line). This is a tradeoff between performance and portability of the code.

Frame Allocation
This is relevant to physically addressed caches. The O.S. must allocate frames to a single process (or to the kernel) such that cache collisions are minimized.

Example
Direct-mapped caches use the physical address as a hashing key. When a choice is possible, the O.S. must avoid allocations that result in poor cache performance.
[Diagram: free frames 96, 80, and 77; an allocated page in frame 88; cache lines assumed to be (frame no. % 8) + some offset]
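This is the idea behind page coloring; a minimal sketch in Python (the allocator interface is hypothetical, and num_sets = 8 follows the slide's "frame no. % 8" example):

```python
# Prefer free frames whose cache color (frame_no % num_sets) is not
# already used by the process's allocated frames in a direct-mapped,
# physically addressed cache.

def pick_frame(free_frames, allocated_frames, num_sets=8):
    used_colors = {f % num_sets for f in allocated_frames}
    for frame in free_frames:
        if frame % num_sets not in used_colors:
            return frame            # no collision with existing pages
    return free_frames[0]           # fall back: collisions unavoidable

# Free frames 96, 80, 77; the process already holds frame 88 (color 0).
# Frames 96 and 80 are also color 0, so frame 77 (color 5) is preferred.
chosen = pick_frame([96, 80, 77], [88])
```

With a bad choice (96 or 80), the new page would evict the existing page's lines from the same cache sets on every alternating access.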

Virtual Memory
Virtual memory extends physical memory onto disk. For each process:
- Keep only the needed pages in memory.
- Swap out the unneeded pages.
As a result, the system can run more processes, and process size is no longer limited by memory size.

Demand Paging
[Diagram: virtual address spaces of Process 0 and Process 1 mapped to physical memory and swap space]

Demand Paging is not Swapping
In demand paging:
- Only the unneeded pages can be paged out.
- Process size is limited by swap space.
- Address-space growth requires only a new page.
- Fine-grained control.
In swapping:
- The entire process must be swapped out.
- Process size is limited by physical memory.
- Partition growth is difficult.
- No control over portions of a partition.

Implementation
- Page table entries may overload the valid bit (an invalid entry can hold the swap location instead of a frame number).
- Swap space must be managed.
- For each frame, the inverse mapping must be tracked (needed when frames are shared).
[Diagram: page table entries of the form "Frame No. | v w r x f m"]
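One common trick, sketched here in Python with a hypothetical bit layout, is to reuse the frame-number field of an entry whose valid bit is clear to record the swap slot:

```python
# Page-table entry encoding sketch. Bit positions are illustrative,
# not those of any real architecture.

VALID = 1 << 0
WRITE = 1 << 1
DIRTY = 1 << 2

def make_resident(frame_no, flags=0):
    # Valid entry: upper bits hold the physical frame number.
    return (frame_no << 3) | flags | VALID

def make_swapped(swap_slot):
    # Invalid entry: the same field holds the swap-slot index.
    return swap_slot << 3          # valid bit left clear

def decode(pte):
    if pte & VALID:
        return ("resident", pte >> 3)
    return ("swapped", pte >> 3)
```

On a fault, the handler decodes the entry: a "swapped" result tells it exactly where on disk to find the page.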

On Page Faults
The operating system checks the access. If it is valid, it schedules a disk read to bring the required page in from swap space. Do we wait for the page, or context switch to another process? When the disk returns with the data, the page table entry (or entries, if the page is shared) must be readjusted.

On Page Faults (continued)
When swapping in a page, we need a free frame to read the page into; the frame is locked while the disk reads the page (so that it does not get reallocated). But what if we run out of free frames?
- Pick a victim frame.
- If it is dirty, page it out first; otherwise use it directly.
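The frame-acquisition step can be sketched as follows (a Python sketch with hypothetical structures; the FIFO victim choice is a placeholder for a real replacement policy):

```python
# Get a frame for a faulting page: take a free one if available,
# otherwise evict a victim, writing it to swap only if dirty.

def get_frame(free_frames, frames_in_use, is_dirty, page_out):
    if free_frames:
        return free_frames.pop()
    victim = frames_in_use[0]      # placeholder policy: FIFO
    if is_dirty[victim]:
        page_out(victim)           # write the dirty page to swap first
    frames_in_use.remove(victim)
    return victim

paged_out = []
frame = get_frame(
    free_frames=[],
    frames_in_use=[3, 5],
    is_dirty={3: True, 5: False},
    page_out=paged_out.append,
)
```

Note the cost asymmetry this exposes: a fault that evicts a clean victim needs one disk access, while a dirty victim needs two.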

Properties
- The association between virtual and physical addresses changes over time.
- Page faults can be very expensive (up to two disk accesses).
- Virtual memory requires instructions to be restartable.
Topics ahead:
- Page replacement
- Demand paging: lazy downloading on demand vs. pre-paging
- Working sets and thrashing
- Local vs. global allocation
- Page size: fragmentation, paging overhead, TLB coverage
- Locking: kernel pages, user pages, I/O interlocking
- Instruction set issues
- Paging daemons
- Page fault handling