Address Translation Mechanism of 80386


Protected Mode Addressing Mechanism The 80386 transforms logical addresses into physical addresses in two steps: Segment translation: a logical address is converted to a linear address. Page translation: a linear address is converted to a physical address (optional; performed only when paging is enabled). These translations are performed in a way that is not visible to application programmers.
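The two-step flow can be summarized with a minimal C sketch. The names `segment_translate` and `page_translate` are hypothetical helpers (not Intel-defined functions); segment translation adds the segment base from the descriptor to the offset, and a page-translation walk is sketched later, after the Page Tables slide.

```c
#include <stdint.h>

/* A logical (virtual) address: a 16-bit segment selector plus a 32-bit offset. */
typedef struct {
    uint16_t selector;
    uint32_t offset;
} logical_addr_t;

/* Hypothetical helpers, declared here and sketched elsewhere. */
uint32_t segment_translate(logical_addr_t la); /* step 1: logical -> linear   */
uint32_t page_translate(uint32_t linear);      /* step 2: linear  -> physical */

uint32_t translate(logical_addr_t la, int paging_enabled)
{
    uint32_t linear = segment_translate(la);
    return paging_enabled ? page_translate(linear)
                          : linear;  /* paging disabled: linear == physical */
}
```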

The following figure illustrates the two translations:

Segmentation

(Figure: the Segmentation Unit translates a Logical (Virtual) Address into a Linear Address using a Segment Descriptor; descriptor table base addresses come from the LDTR register (for the currently executing task) and the GDTR register (for each task).)

Paging

(Figure: the Paging Unit translates a Linear Address into a Physical Address.)

Page Descriptor Base Register CR2 is used to store the 32-bit linear address that caused a page fault. CR3 (Page Directory Physical Base Address Register) stores the physical starting address of the Page Directory.

Page Descriptor Base Register The lower 12 bits of CR3 are always zero to ensure that the Page Directory is always page aligned. A move operation to CR3 automatically flushes the Page Table Entry cache (TLB), as does a task switch through a TSS that changes the value of CR3.
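As a hedged illustration of the "reload CR3 to flush the TLB" behavior described above, privilege-level-0 kernel code can simply rewrite CR3 with its current value. The sketch below uses GCC/Clang inline-assembly syntax, assumes a 32-bit x86 target, and cannot be run from an application program.

```c
#include <stdint.h>

/* Reload CR3 with the same Page Directory base address: on the 80386 this
 * flushes the Page Table Entry cache (TLB). Ring 0 only; 32-bit x86. */
static inline void flush_tlb(void)
{
    uint32_t cr3;
    __asm__ volatile ("mov %%cr3, %0" : "=r"(cr3));             /* read CR3       */
    __asm__ volatile ("mov %0, %%cr3" : : "r"(cr3) : "memory"); /* write it back  */
}
```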

Page Directory The Page Directory is 4 KB in size and holds up to 1024 entries. The upper 10 bits of the linear address are used as an index into the Page Directory to select a Page Directory Entry (PDE). Each PDE points to a Page Table.
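The 10/10/12 split of the linear address can be written directly as shifts and masks. A minimal sketch; the macro names DIR, TABLE, and OFFSET simply follow the field names used on these slides.

```c
#include <stdint.h>

#define DIR(lin)    (((uint32_t)(lin) >> 22) & 0x3FFu) /* upper 10 bits: Page Directory index  */
#define TABLE(lin)  (((uint32_t)(lin) >> 12) & 0x3FFu) /* middle 10 bits: Page Table index     */
#define OFFSET(lin) ((uint32_t)(lin) & 0xFFFu)         /* lower 12 bits: offset within a page  */
```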

Page Directory Entry

Page Tables Each Page Table is 4 KB and holds up to 1024 Page Table Entries (PTEs). Each PTE contains the starting address of a page frame and statistical information about the page. The upper 20-bit page frame address is concatenated with the lower 12 bits of the linear address to form the physical address. Page Tables can be shared between tasks and swapped to disk.
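Putting the Page Directory and Page Table steps together, the arithmetic of the two-level walk looks like the sketch below. This is not the processor's internal implementation: `read_phys32()` is a hypothetical helper for reading physical memory, and the sketch assumes both Present bits are set.

```c
#include <stdint.h>

/* Hypothetical helper: read a 32-bit little-endian value from physical memory. */
extern uint32_t read_phys32(uint32_t phys_addr);

/* Translate a 32-bit linear address, assuming the PDE and PTE are both present. */
uint32_t page_translate(uint32_t cr3, uint32_t linear)
{
    uint32_t dir    = (linear >> 22) & 0x3FFu;  /* upper 10 bits  */
    uint32_t table  = (linear >> 12) & 0x3FFu;  /* middle 10 bits */
    uint32_t offset =  linear        & 0xFFFu;  /* lower 12 bits  */

    uint32_t pde = read_phys32((cr3 & 0xFFFFF000u) + dir * 4);   /* Page Directory Entry */
    uint32_t pte = read_phys32((pde & 0xFFFFF000u) + table * 4); /* Page Table Entry     */

    return (pte & 0xFFFFF000u) | offset;  /* page frame address + offset */
}
```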

Page Table Entry P (Present) bit: indicates whether the entry can be used in address translation. The P bit of the currently executing page is always set. A (Accessed) bit: set before any access to the page.

Page Table Entry D (Dirty) bit: set before a write to the page is carried out. The D bit is undefined for PDEs. OS Reserved bits: defined by the operating system software. U/S (User/Supervisor) bit and R/W (Read/Write) bit: used to provide page-level protection. They are decoded as follows:
U/S = 0, R/W = 0: no user access; supervisor may read/write
U/S = 0, R/W = 1: no user access; supervisor may read/write
U/S = 1, R/W = 0: user may read only; supervisor may read/write
U/S = 1, R/W = 1: user may read/write; supervisor may read/write
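The bit positions below follow the standard 80386 PTE/PDE layout (P = bit 0, R/W = bit 1, U/S = bit 2, A = bit 5, D = bit 6). The helper `page_access_ok()` is a simplified, hypothetical sketch of the page-level protection rules only; on the 80386 supervisor code is never restricted by the page-level R/W bit, and segment-level checks are ignored here.

```c
#include <stdbool.h>
#include <stdint.h>

#define PTE_P   (1u << 0)  /* Present         */
#define PTE_RW  (1u << 1)  /* Read/Write      */
#define PTE_US  (1u << 2)  /* User/Supervisor */
#define PTE_A   (1u << 5)  /* Accessed        */
#define PTE_D   (1u << 6)  /* Dirty           */

/* Simplified page-level check: supervisor (levels 0-2) may always read/write;
 * user (level 3) needs U/S = 1, and additionally R/W = 1 for writes. */
bool page_access_ok(uint32_t pte, bool user, bool write)
{
    if (!(pte & PTE_P))
        return false;              /* not present: page fault               */
    if (!user)
        return true;               /* supervisor: no page-level restriction */
    if (!(pte & PTE_US))
        return false;              /* supervisor-only page                  */
    return write ? (pte & PTE_RW) != 0 : true;
}
```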

Example Linear Address: 0301008AH
Binary: 0000 0011 0000 0001 0000 0000 1000 1010
DIR    (upper 10 bits):  00 0000 1100   = 00CH
TABLE  (middle 10 bits): 00 0001 0000   = 010H
OFFSET (lower 12 bits):  0000 1000 1010 = 08AH
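The split can be checked quickly in C; the printed values should match the fields above (00C, 010, 08A).

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t linear = 0x0301008A;
    printf("DIR    = %03X\n", (unsigned)((linear >> 22) & 0x3FF)); /* 00C */
    printf("TABLE  = %03X\n", (unsigned)((linear >> 12) & 0x3FF)); /* 010 */
    printf("OFFSET = %03X\n", (unsigned)( linear        & 0xFFF)); /* 08A */
    return 0;
}
```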

Example

DIR × 4 = 00CH × 4 = 030H
Binary:   00 0000 1100 × 0100 = 00 0011 0000

CR3 (20-bit) + DIR × 4 (12-bit) = Index to PDE
00010H + 030H = 00010030H

Example

Page Directory Entry

TABLE × 4 = 010H × 4 = 040H
Binary:    00 0001 0000 × 0100 = 00 0100 0000

PTA (20-bit) + TABLE × 4 = Index to PTE
05001H + 040H = 05001040H

Example

Page Table Entry

PFA (20-bit) + Offset = Physical Address
03000H + 08AH = 0300008AH
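Collecting the three steps, a small trace of this example in C. The PDE and PTE contents (05001xxxH and 03000xxxH) are the example's values hard-coded with their flag bits omitted, just to reproduce the arithmetic; expected output is 00010030, 05001040, and 0300008A.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t cr3    = 0x00010000;  /* Page Directory base (upper 20 bits = 00010H) */
    uint32_t linear = 0x0301008A;

    uint32_t dir    = (linear >> 22) & 0x3FFu;  /* 00CH */
    uint32_t table  = (linear >> 12) & 0x3FFu;  /* 010H */
    uint32_t offset =  linear        & 0xFFFu;  /* 08AH */

    uint32_t pde_addr = (cr3 & 0xFFFFF000u) + dir * 4;    /* 00010030H */
    uint32_t pde      = 0x05001000;  /* example PDE: Page Table at 05001000H (flags omitted) */

    uint32_t pte_addr = (pde & 0xFFFFF000u) + table * 4;  /* 05001040H */
    uint32_t pte      = 0x03000000;  /* example PTE: page frame at 03000000H (flags omitted) */

    uint32_t phys = (pte & 0xFFFFF000u) | offset;         /* 0300008AH */

    printf("PDE address      = %08X\n", (unsigned)pde_addr);
    printf("PTE address      = %08X\n", (unsigned)pte_addr);
    printf("Physical address = %08X\n", (unsigned)phys);
    return 0;
}
```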

Example

Translation Lookaside Buffer (TLB) Performance degrades if the processor has to access two levels of tables for every memory reference. To solve this problem, the Intel386 DX keeps a cache of the most recently accessed pages, called the Translation Lookaside Buffer (TLB). The TLB is a 4-way set-associative, 32-entry page table cache.

Translation Lookaside Buffer (TLB)

Translation Lookaside Buffer (TLB) The TLB has four sets of eight entries each. Each entry consists of a tag and a data field. Tags are 24 bits wide: they contain the upper 20 bits of the linear address, a valid bit (validating the entry), and three attribute bits (D, U/S, and R/W). The data portion of each entry contains the upper 20 bits of the physical address.

TLB Entry
Tag:  V | D | U/S | R/W | upper 20-bit Linear Address
Data: upper 20-bit Physical Address

Translation Lookaside Buffer (TLB) The TLB automatically keeps the most commonly used Page Table Entries. A 32-entry TLB coupled with a 4 KB page size covers 128 KB of memory addresses.
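A software model of the entry format and lookup described above, purely as an illustrative sketch: the type and function names are hypothetical, and the real 80386 compares entries in parallel in hardware as a 4-way set-associative cache, whereas this sketch simply scans a 32-entry array.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 32

typedef struct {
    bool     valid;        /* V: entry contains a usable translation   */
    bool     dirty;        /* D                                        */
    bool     user;         /* U/S                                      */
    bool     writable;     /* R/W                                      */
    uint32_t linear_hi20;  /* tag: upper 20 bits of the linear address */
    uint32_t phys_hi20;    /* data: upper 20 bits of the physical addr */
} tlb_entry_t;

/* Look up a linear address; on a hit, return true and produce the physical address. */
bool tlb_lookup(const tlb_entry_t tlb[TLB_ENTRIES], uint32_t linear, uint32_t *phys)
{
    uint32_t tag = linear >> 12;  /* upper 20 bits */
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].linear_hi20 == tag) {
            *phys = (tlb[i].phys_hi20 << 12) | (linear & 0xFFFu);
            return true;          /* TLB hit  */
        }
    }
    return false;                 /* TLB miss: walk the page tables */
}
```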

Paging Operation The paging unit hardware receives a 32-bit linear address from the segmentation unit. The upper 20 linear address bits are compared with all 32 entries in the TLB to determine whether there is a match. If there is a match (i.e., a TLB hit), the 32-bit physical address is calculated and placed on the address bus.

Paging Operation If the PTE is not in the TLB, the 80386 DX reads the appropriate Page Directory Entry (PDE). If P = 1 in the PDE (the Page Table is in memory), the 80386 DX reads the appropriate PTE and sets the Accessed bit. If P = 1 in the PTE (the page is in memory), the Intel386 DX updates the Accessed and Dirty bits as needed and fetches the operand.

Paging Operation The upper 20 bits of the linear address, together with the page frame address read from the Page Table, are then stored in the TLB for future accesses. If P = 0 in either the PDE or the PTE, the processor generates a page fault exception. This exception is also generated when protection rules are violated; in either case CR2 is loaded with the linear address that caused the fault.

Paging Operation (flowchart)
Linear address → are the upper 20 bits present in the TLB?
  Yes: the page is present in physical memory; set A and D (if needed) and form the physical address.
  No:  is P = 1 in the PDE? (the Page Table is present in physical memory; set the Accessed bit)
       No:  Page Fault Exception.
       Yes: is P = 1 in the PTE?
            No:  Page Fault Exception.
            Yes: the page is present in physical memory but its entry is not in the TLB; set A and D (if needed) and update the TLB.
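The flowchart above can be tied together as a hedged C sketch, not the processor's microcode: `read_phys32()`, `write_phys32()`, `tlb_probe()`, `tlb_fill()`, and `raise_page_fault()` are hypothetical helpers, and only the Present bits are checked here (the U/S and R/W protection checks are left aside).

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t read_phys32(uint32_t phys_addr);               /* hypothetical physical read  */
extern void     write_phys32(uint32_t phys_addr, uint32_t v);  /* hypothetical physical write */
extern bool     tlb_probe(uint32_t linear, uint32_t *phys);    /* TLB hit test                */
extern void     tlb_fill(uint32_t linear, uint32_t pte);       /* cache the new translation   */
extern void     raise_page_fault(uint32_t linear);             /* loads CR2, raises exception */

#define P_BIT  (1u << 0)
#define A_BIT  (1u << 5)
#define D_BIT  (1u << 6)

uint32_t paging_unit(uint32_t cr3, uint32_t linear, bool is_write)
{
    uint32_t phys;
    if (tlb_probe(linear, &phys))                 /* TLB hit: no table walk needed */
        return phys;

    uint32_t pde_addr = (cr3 & 0xFFFFF000u) + (((linear >> 22) & 0x3FFu) * 4);
    uint32_t pde = read_phys32(pde_addr);
    if (!(pde & P_BIT)) { raise_page_fault(linear); return 0; }
    write_phys32(pde_addr, pde | A_BIT);          /* mark the Page Table accessed */

    uint32_t pte_addr = (pde & 0xFFFFF000u) + (((linear >> 12) & 0x3FFu) * 4);
    uint32_t pte = read_phys32(pte_addr);
    if (!(pte & P_BIT)) { raise_page_fault(linear); return 0; }
    write_phys32(pte_addr, pte | A_BIT | (is_write ? D_BIT : 0u)); /* update A and D */

    tlb_fill(linear, pte);                        /* remember the translation */
    return (pte & 0xFFFFF000u) | (linear & 0xFFFu);
}
```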

Paging Operation

Paging The operating system uses page tables to map the pages of each linear virtual address space onto main memory (figure: Program 1 with Page 0 … Page m and Program 2 with Page 0 … Page n mapped onto Main Memory and the Hard Disk). Each running program has its own page table. The operating system swaps pages between main memory and the hard disk; pages that cannot fit in main memory are stored on the hard disk. As a program runs, the processor translates its linear virtual addresses into real memory (also called physical) addresses.