COMP3221: Microprocessors and Embedded Systems


COMP3221: Microprocessors and Embedded Systems
Lecture 28: Virtual Memory - I
http://www.cse.unsw.edu.au/~cs3221
Lecturer: Hui Wu, Session 2, 2005
Modified from notes by Saeid Nooshabadi

Overview
- Virtual Memory
- Page Table

Cache Review (#1/2)
- Caches are NOT mandatory: the processor performs arithmetic, memory stores instructions and data; caches simply make things go faster.
- Each level of the memory hierarchy is just a subset of the next lower level.
- Caches speed things up due to temporal locality: store data used recently.
- Block size > 1 word speeds things up due to spatial locality: store words adjacent to the ones used recently.

Cache Review (#2/2)
Cache design choices:
- size of cache: speed vs. capacity
- direct-mapped vs. associative
- for N-way set associative: choice of N
- block replacement policy
- second-level cache?
- write-through vs. write-back?
Use a performance model to pick between choices, depending on programs, technology, budget, ...

Another View of the Memory Hierarchy
(Diagram: upper levels are faster, lower levels are larger.) Registers hold instructions and operands; the cache and L2 cache hold blocks; memory holds pages; disk holds files; tape sits at the bottom. Thus far we have covered registers through memory. Next: virtual memory, which manages pages between memory and disk.

Called "Virtual Memory"
- If the Principle of Locality allows caches to offer (usually) the speed of cache memory with the size of DRAM, then why not, recursively, apply it at the next level to get the speed of DRAM with the size of disk?
- Virtual memory also allows the OS to share memory and to protect programs from each other.
- Today it is more important for protection than as just another level of the memory hierarchy.
- Historically, it predates caches.

Problems Leading to Virtual Memory (#1/2)
- A program's address space can be larger than the physical memory (e.g. an address space well over 64 MB on a machine with 64 MB of physical memory).
- We need to swap code and data back and forth between memory and hard disk, using virtual memory.
(Diagram: address space with Code, Static, Heap, and Stack regions mapped against 64 MB of physical memory.)

Problems Leading to Virtual Memory (#2/2)
- Many processes (programs) are active at the same time (single processor, many processes).
- The processor appears to run multiple programs all at once by rapidly switching between active programs.
- The rapid switching is managed by the Memory Management Unit (MMU) using the virtual memory concept.
- Each program sees the entire address space as its own.
- How do we keep multiple programs from overwriting each other?
(Diagram: each process has its own address space with Code, Static, Heap, and Stack regions.)

Segmentation Solution
- Segmentation provides a simple MMU.
- A program views its memory as a set of segments: code segment, data segment, stack segment, etc.
- Each program has its own set of private segments.
- Each access to memory is via a segment selector and an offset within the segment.
- This allows a program to have its own private view of memory and to coexist transparently with other programs in the same memory space.

Segmentation Memory Management Unit
(Diagram.) A virtual address consists of a segment selector and a logical address (an offset within the segment). The selector indexes a Segment Descriptor Table (SDT), a lookup table held by the OS in memory, which yields the segment's base and bound. The physical address is base + offset; if the offset exceeds the bound, an access fault is raised.
- Base: the base address of the segment
- Logical address: an offset within a segment
- Bound: the segment limit
- SDT: holds access rights and other information about the segment
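The translation above can be sketched as follows (a toy model, not any particular MMU; the `SegmentDescriptor` name and the sample table are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SegmentDescriptor:
    base: int    # base address of the segment in physical memory
    bound: int   # segment limit (size in bytes)

def seg_translate(sdt, selector, offset):
    """Map (segment selector, logical offset) to a physical address."""
    desc = sdt[selector]           # look up the descriptor in the SDT
    if offset >= desc.bound:       # offset past the segment limit?
        raise MemoryError("access fault: offset exceeds segment bound")
    return desc.base + offset      # physical address = base + offset

# One segment starting at physical address 0x4000, 4 KB long
sdt = [SegmentDescriptor(base=0x4000, bound=0x1000)]
print(hex(seg_translate(sdt, 0, 0x20)))  # → 0x4020
```

Note that every valid offset lands inside the segment's contiguous [base, base + bound) region; the bound check is what keeps one program out of another's memory.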

Virtual to Physical Address Translation
A program operates in its virtual address space; a hardware mapping converts each virtual address (instruction fetch, load, store) into a physical address in physical memory (including caches).
- Each program operates in its own virtual address space, and each is protected from the others.
- The OS can decide where each program goes in physical memory.
- Hardware (HW) provides the virtual → physical mapping.

Simple Example: Base and Bound Registers
(Diagram: OS, User A, User B, and User C occupy contiguous regions of memory, each bounded by base and base + bound.) There is enough total space left for User D, but it is discontinuous (the "fragmentation problem").
- We want a discontinuous mapping.
- Process size may exceed physical memory.
- Addition (base + address) is not enough!

Mapping Virtual Memory to Physical Memory
- Divide memory into equal-sized chunks (about 4 KB).
- Any chunk of virtual memory can be assigned to any chunk of physical memory (a "page").
(Diagram: virtual address space with Code, Static, Heap, and Stack regions mapped onto 64 MB of physical memory.)

Paging Organization (assume 1 KB pages)
- A page is the unit of mapping: virtual addresses 0, 1024, 2048, ..., 31744 begin virtual pages 0, 1, 2, ..., 31; physical addresses 0, 1024, ..., 7168 begin physical pages 0, 1, ..., 7.
- An address translation map carries virtual pages to physical pages; the map is organised by the OS.
- A page is also the unit of transfer from disk to physical memory.

Virtual Memory Mapping Function
- We cannot have a simple function to predict an arbitrary mapping, so we use a table lookup (the "Page Table") for mappings.
- The virtual address is split into a page number and an offset; the page number is the index into the table.
- Physical offset = virtual offset.
- Physical page number = PageTable[virtual page number]. (The physical page number is also called the "page frame".)
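The mapping function can be sketched directly (assuming the 1 KB pages of the earlier example; the page-table contents here are made up for illustration):

```python
PAGE_SIZE = 1024  # 1 KB pages, as in the paging-organization example

# Hypothetical page table: virtual page number -> physical page number
page_table = {0: 7, 1: 0, 2: 3}

def page_translate(virtual_address):
    vpn = virtual_address // PAGE_SIZE     # page number indexes the table
    offset = virtual_address % PAGE_SIZE   # physical offset = virtual offset
    ppn = page_table[vpn]                  # P.P.N., also called "page frame"
    return ppn * PAGE_SIZE + offset

print(page_translate(2 * 1024 + 100))  # virtual page 2 → physical page 3: 3172
```

Because the offset passes through unchanged, only the page number is translated; any virtual page can land in any physical page, which is exactly the discontinuous mapping base-and-bound could not provide.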

Address Mapping: Page Table
(Diagram.) The virtual address consists of a page number and an offset (actually a concatenation). The page number indexes into the page table; each entry holds a valid bit (V), access rights (A.R.), and a physical page number (P.P.N.). The physical memory address is formed from the physical page number and the offset. The Page Table Base Register (register 2 in coprocessor 15 on ARM) points to the page table, which is itself located in physical memory.

Each process running in the operating system has its own page table
- A page table is an operating-system structure that contains the mapping of virtual addresses to physical locations.
- There are several different ways, all up to the operating system, to keep this data around.
- The "state" of a process is its PC, all registers, plus its page table.
- The OS changes page tables by changing the contents of the Page Table Base Register.

Paging/Virtual Memory for Multiple Processes
(Diagram.) User A and User B each have their own virtual address space (Code, Static, Heap, Stack) and their own page table (A Page Table, B Page Table), both mapping into the same 64 MB of physical memory.

Page Table Entry (PTE) Format
- Each entry contains either a physical page number or an indication that the page is not in main memory.
- The OS maps the page to disk if it is not valid (V = 0).
- If valid, also check whether we have permission to use the page: access rights (A.R.) may be Read Only, Read/Write, or Executable.
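Both checks can be sketched together (a simplified PTE model; real hardware packs these fields into bits of one word, and the field names here are illustrative):

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    valid: bool   # V bit: is the page in main memory?
    rights: str   # A.R.: "RO" (Read Only), "RW" (Read/Write), "EX" (Executable)
    ppn: int      # physical page number

def check_pte(pte, write=False):
    if not pte.valid:                 # V = 0: page is on disk, not in memory
        raise LookupError("page fault: OS must bring the page in from disk")
    if write and pte.rights != "RW":  # permission check on a valid page
        raise PermissionError("protection fault: write not allowed")
    return pte.ppn

print(check_pte(PageTableEntry(valid=True, rights="RO", ppn=5)))  # → 5
```

A failed valid-bit check becomes a page fault handled by the OS, while a failed rights check becomes a protection fault; both reuse the same lookup.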

Things to Remember
- Apply the Principle of Locality recursively.
- Manage memory to disk? Treat memory as a cache for disk. Protection was included as a bonus; now it is critical.
- Use a page table of mappings rather than the tag/data pairs of a cache.
- Virtual memory allows protected sharing of memory between processes, with less swapping to disk and less fragmentation than always-swap or base/bound segmentation.

Reading Material
Steve Furber: ARM System-on-Chip Architecture, 2nd Ed., Addison-Wesley, 2000, ISBN 0-201-67519-6. Chapter 10.