Fundamentals of Programming Languages-II (Subject Code: 110010). Teaching Scheme: Theory 1 Hr./Week, Practical 2 Hrs./Week. Examination Scheme: Online Examination, 50 Marks.

Unit-I: Microprocessors and Micro-Controllers Architectures and Programming Concepts

1.5 SEGMENTATION AND PAGING

Basic Concepts: Pure Paging and Segmentation. Paging: memory is divided into equal-sized frames, and all of a process's pages are loaded into frames that are not necessarily contiguous. Segmentation: each process is divided into variable-sized segments, and all of a process's segments are loaded into dynamic partitions that are not necessarily contiguous.

Hardware Translation. Translation from logical to physical addresses can be done in software, but without protection. Why "without" protection? Because hardware support is needed to enforce protection. The simplest solution uses two registers, base and size, in a translation box (MMU) sitting between the processor and physical memory.
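A minimal sketch of base-and-size translation; the register values and fault behavior below are illustrative assumptions, not taken from any particular CPU:

```python
# Minimal sketch of base-and-size address translation (illustrative values).
BASE = 0x4000   # physical address where the process is loaded (assumed)
SIZE = 0x1000   # size of the process's address space in bytes (assumed)

def translate(logical_addr: int) -> int:
    """Translate a logical address to a physical one, enforcing protection."""
    if logical_addr >= SIZE:          # out of bounds: hardware raises a fault
        raise MemoryError(f"protection fault at logical address {logical_addr:#x}")
    return BASE + logical_addr        # relocation: add the base register

print(hex(translate(0x0123)))  # 0x4123
```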

Segmentation Hardware. [Figure: the segment number of the virtual address indexes the segment table; the base found there is added to the offset to form the physical address.]

Segmentation. Segments are of variable size. Translation is done through a set of (base, size, state) registers: the segment table. State includes valid/invalid, access permissions, a reference bit, and a modified bit. Segments may be visible to the programmer and can be used as a convenience for organizing programs and data (e.g., a code segment or data segments).
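A small sketch of segment-table translation along these lines; the table contents below are made up for illustration:

```python
# Sketch of segmented address translation; segment table contents are illustrative.
segment_table = {
    # segment number: (base, size, valid)
    0: (0x8000, 0x2000, True),   # e.g., a code segment
    1: (0xC000, 0x0800, True),   # e.g., a data segment
}

def translate(segment: int, offset: int) -> int:
    base, size, valid = segment_table[segment]
    if not valid or offset >= size:
        raise MemoryError("segmentation fault")   # protection check
    return base + offset

print(hex(translate(1, 0x10)))  # 0xc010
```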

Paging Hardware. [Figure: the page number of the virtual address indexes the page table; the frame number found there is combined with the offset to form the physical address.]

Paging. Pages are of fixed size; the physical memory corresponding to a page is called a page frame. Translation is done through a page table indexed by page number. Each page-table entry contains the physical frame number that the virtual page is mapped to, plus the state of the page in memory: valid/invalid, access permissions, a reference bit, a modified bit, and caching attributes. Paging is transparent to the programmer.
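A sketch of paged translation with 4 KB pages; the page-to-frame mappings are assumed for illustration:

```python
# Sketch of paged address translation with 4 KB pages; mappings are illustrative.
PAGE_SIZE = 4096
page_table = {0: 7, 1: 3, 2: 12}   # page number -> frame number (assumed)

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page not in page_table:
        raise MemoryError("invalid page")    # would trigger a page fault
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1234)))  # page 1 -> frame 3, so 0x3234
```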

Combined Paging and Segmentation. Some MMUs combine paging with segmentation. Segmentation translation is performed first; the segment entry points to a page table for that segment. The page-number portion of the virtual address is then used to index that page table and look up the corresponding page frame number. Segmentation is not used much anymore, so we will concentrate on paging. UNIX has a simple form of segmentation but does not require any hardware support for it.
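A sketch of the two-step translation just described, with one (illustrative) page table per segment:

```python
# Sketch of combined segmentation + paging; all table contents are illustrative.
PAGE_SIZE = 4096
segment_table = {
    0: {0: 5, 1: 9},     # segment 0's page table: page number -> frame number
    1: {0: 2},           # segment 1's page table
}

def translate(segment: int, seg_offset: int) -> int:
    page_table = segment_table[segment]              # segmentation step
    page, offset = divmod(seg_offset, PAGE_SIZE)
    return page_table[page] * PAGE_SIZE + offset     # paging step

print(hex(translate(0, 0x1010)))  # segment 0, page 1 -> frame 9, so 0x9010
```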

Address Translation. [Figure: the CPU issues a virtual address (p, d); the page table in memory maps page p to frame f, giving the physical address (f, d).]

Translation Lookaside Buffers. Translation happens on every memory access, so it must be fast. What to do? Caching, of course. Why does caching work here? After all, we still have to look up the page-table entry and use it to do the translation, right? It works for the same reason a normal memory cache does: the cache is smaller, so we can spend more per entry to make it faster.

Translation Lookaside Buffer. The cache for page-table entries is called the Translation Lookaside Buffer (TLB). It is typically fully associative and holds no more than about 64 entries. Each TLB entry contains a page number and the corresponding page-table entry. On each memory access, we first look for the page-to-frame mapping in the TLB.

Translation Lookaside Buffer

Address Translation with a TLB. [Figure: the CPU issues a virtual address (p, d); the TLB is checked for a p-to-f mapping first, falling back to the page table in memory on a miss; the result is the physical address (f, d).]

TLB Miss. What if the TLB does not contain the appropriate page-table entry? That is a TLB miss: evict an existing entry if the TLB has no free ones (which raises the question of replacement policy), then bring in the missing entry from the page table. TLB misses can be handled in hardware or in software; handling them in software allows the application to assist in replacement decisions. A small sketch of this lookup-and-refill path follows.
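A sketch of a TLB in front of the page table, using FIFO replacement as a stand-in policy; real TLBs are hardware-managed and the contents here are illustrative:

```python
# Sketch of a TLB backed by a page table, with FIFO replacement (a simplification).
from collections import OrderedDict

PAGE_SIZE = 4096
TLB_SIZE = 4
page_table = {n: n + 100 for n in range(16)}   # page number -> frame number (assumed)
tlb = OrderedDict()                            # cached page -> frame mappings

def translate(virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    if page in tlb:                            # TLB hit
        frame = tlb[page]
    else:                                      # TLB miss: walk the page table
        frame = page_table[page]
        if len(tlb) >= TLB_SIZE:
            tlb.popitem(last=False)            # evict the oldest entry (FIFO)
        tlb[page] = frame                      # refill the TLB
    return frame * PAGE_SIZE + offset

print(hex(translate(0x2042)))   # page 2 -> frame 102
```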

Where to Store the Address Space? The address space may be larger than physical memory. Where do we keep it? And where do we keep the page table?

Where to Store the Address Space? On the next device down the storage hierarchy, of course. [Figure: virtual memory pages live in main memory when resident and on disk otherwise.]

Where to Store the Page Table? In memory, of course. Interestingly, we use memory to "enlarge" our view of memory, which leaves less physical memory available. This kind of overhead is common; for example, the OS uses CPU cycles to implement the abstraction of threads. You have to know what the right trade-off is, which means understanding common application characteristics, and those characteristics have to be common enough. But page tables can get large. What to do? [Figure: physical memory holding OS code, globals, stack, heap, and the P0 and P1 page tables.]

Two-Level Page-Table Scheme

Two-Level Paging Example. A logical address (on a 32-bit machine with a 4 KB page size) is divided into a 20-bit page number and a 12-bit page offset. Since the page table is itself paged, the page number is further divided into a 10-bit page number and a 10-bit page offset.

Two-Level Paging Example. Thus, a logical address is laid out as | p1 (10 bits) | p2 (10 bits) | d (12 bits) |, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table.
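A short sketch of how those fields can be extracted from a 32-bit address; the shift amounts follow directly from the 10/10/12 split above:

```python
# Split a 32-bit virtual address into outer index p1, inner index p2, and offset d.
def split(virtual_addr: int):
    p1 = (virtual_addr >> 22) & 0x3FF   # top 10 bits: outer page-table index
    p2 = (virtual_addr >> 12) & 0x3FF   # next 10 bits: inner page-table index
    d  = virtual_addr & 0xFFF           # low 12 bits: offset within the page
    return p1, p2, d

print(split(0x00403007))  # (1, 3, 7)
```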

Address-Translation Scheme. [Figure: address-translation scheme for a two-level 32-bit paging architecture.]

Multilevel Paging and Performance. Since each level is stored as a separate table in memory, converting a logical address to a physical one may take four memory accesses. Even though the time needed for one memory access is quintupled, caching lets performance remain reasonable. Assuming a 20 ns TLB lookup and a 100 ns memory access, a cache hit rate of 98 percent yields: effective access time = 0.98 × 120 + 0.02 × 520 = 128 nanoseconds, which is only a 28 percent slowdown in memory access time.
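A quick check of that arithmetic; the 120 ns and 520 ns figures are the TLB-hit and TLB-miss path times from the example above:

```python
# Effective access time for the two-level paging example (times in nanoseconds).
hit_rate = 0.98
hit_time = 120     # TLB hit: TLB lookup + one memory access
miss_time = 520    # TLB miss: TLB lookup + full page-table walk + memory access
eat = hit_rate * hit_time + (1 - hit_rate) * miss_time
print(eat)                      # ~128 ns
print((eat - 100) / 100)        # ~0.28, i.e. a 28 percent slowdown over 100 ns
```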

Paging the Page Table. Page tables can still get large. What to do? Page the page table! [Figure: the per-process page tables (P0 PT, P1 PT) are pageable, while the kernel page table in the OS segment is non-pageable.]

Inverted Page Table. There is one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location, together with information about the process that owns the page. This decreases the memory needed to store the page tables, but increases the time needed to search the table when a page reference occurs. A hash table is used to limit the search to one, or at most a few, page-table entries.
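A minimal sketch of an inverted table searched by hashing on (process id, virtual page number); the frame assignments here are illustrative:

```python
# Sketch of an inverted page table: conceptually one entry per physical frame,
# searched via a hash on (pid, virtual page number). Contents are illustrative.
PAGE_SIZE = 4096
inverted = {
    # (pid, virtual page number) -> physical frame number
    (1, 0): 5,
    (1, 1): 2,
    (2, 0): 7,
}

def translate(pid: int, virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = inverted.get((pid, page))
    if frame is None:
        raise MemoryError("page fault")        # no frame currently holds this page
    return frame * PAGE_SIZE + offset

print(hex(translate(1, 0x1abc)))   # pid 1, page 1 -> frame 2, so 0x2abc
```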

Inverted Page Table Architecture

How to Deal with VM Larger than Physical Memory? If the address space of each process is no larger than physical memory, there is no problem (though the mechanisms are still useful for dealing with fragmentation). When VM is larger than physical memory, part of the address space is stored in memory and part is stored on disk. How do we make this work?

Demand Paging. To start a process (program), just load the code page where the process will start executing. As the process references memory (instructions or data) outside the loaded pages, bring them in as necessary. How do we represent the fact that a page of VM is not yet in memory? [Figure: the page table marks resident pages valid (v) and non-resident pages invalid (i); pages A, B, and C of the VM live partly in memory and partly on disk.]

Vs. Swapping

Page Fault. What happens when a process references a page marked invalid in the page table? A page fault trap occurs. The handler checks that the reference is valid, finds a free memory frame, reads the desired page from disk, changes the valid bit of the page to v, and restarts the instruction that was interrupted by the trap (a rough sketch follows). Is it easy to restart an instruction? And what happens if there is no free frame?
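A rough sketch of that handler logic, with the disk read stubbed out; the names and structures here are illustrative, not from any real kernel:

```python
# Illustrative sketch of page-fault handling; disk I/O and eviction are stubbed out.
page_table = {0: {"frame": 3, "valid": True},
              1: {"frame": None, "valid": False}}   # page 1 is still on disk
free_frames = [8, 9]

def read_page_from_disk(page):                      # stub for the disk read
    print(f"reading page {page} from disk")

def handle_page_fault(page: int) -> None:
    entry = page_table[page]
    if not free_frames:
        raise RuntimeError("no free frame: must evict a victim page")
    frame = free_frames.pop()        # find a free memory frame
    read_page_from_disk(page)        # read the desired page from disk
    entry["frame"] = frame
    entry["valid"] = True            # change the valid bit to v
    # ...then restart the faulting instruction

handle_page_fault(1)
print(page_table[1])                 # now valid and mapped to a frame
```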

Page Fault (Cont'd). So, what can happen on a single memory access?
– TLB miss → read the page-table entry
– TLB miss → read the kernel page-table entry
– Page fault for the necessary page of the process page table
– All frames are used → need to evict a page → modify a process page-table entry, which again means a TLB miss → read the kernel page-table entry, and possibly another page fault for the necessary page of the process page table
– Uh oh, how deep can this go?
– Finally, read in the needed page, modify the page-table entry, and fill the TLB

Cost of Handling a Page Fault. Trapping, checking the page table, and finding a free memory frame (or finding a victim) take on the order of microseconds. The disk seek and read take about 10 ms. A memory access takes about 100 ns. So a page fault degrades performance by a factor of roughly 100,000, and this does not even count all the additional things that can happen along the way. Better not have too many page faults! If we want no more than 10% degradation, we can only afford one page fault for every 1,000,000 memory accesses. The OS had better do a great job of managing the movement of data between secondary storage and main memory.
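The arithmetic behind those two figures, spelled out:

```python
# Ratio of page-fault service time to a normal memory access, in nanoseconds.
memory_access = 100          # ~100 ns
page_fault = 10_000_000      # ~10 ms disk seek + read
print(page_fault / memory_access)        # 100000.0 -> roughly 100,000x slower

# For at most 10% average slowdown, the fault cost amortized per access must
# stay below 10 ns, i.e. at most one fault per 1,000,000 accesses.
fault_rate = 1 / 1_000_000
print(fault_rate * page_fault)           # ~10 ns extra per access = 10%
```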

1.6 PROCESSING OF INTERRUPTS AND EXCEPTIONS

INTERRUPTS. When many I/O devices are connected to a microprocessor-based system, one or more of them may request service at any time. The microprocessor stops executing the current program and services the requesting I/O device; this feature is called an interrupt. The event that causes the interruption is called the interrupt, and the special routine that is executed in response is called the interrupt service routine.

Interrupts are classified as: 1. Single-level interrupts. 2. Multi-level interrupts. In a single-level interrupt there can be many interrupting devices, but all interrupt requests are made via a single input pin of the CPU. In multi-level interrupts, the processor has more than one interrupt pin. When an interrupt occurs, the processor responds as follows: 1. The processor completes its current instruction; no instruction is cut off in the middle of its execution.

2. The program counter's current contents are stored on the stack. (During execution, the program counter points to the memory location of the next instruction.) 3. The program counter is loaded with the address of the interrupt service routine. 4. Program execution continues with the instruction taken from the memory location pointed to by the new program counter contents. 5. The interrupt service routine continues to execute until a return instruction is executed.

6. After execution of the RET instruction, the processor gets the old program counter value from the stack and puts it back into the program counter. This allows the interrupted program to continue executing at the instruction following the one where it was interrupted. A small simulation of this sequence is sketched below.
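A toy simulation of steps 2-6 above, using a Python list as the stack; the addresses are of course made up:

```python
# Toy simulation of interrupt handling: save PC, jump to the ISR, RET restores PC.
stack = []
pc = 0x0105            # program counter of the interrupted program (illustrative)
ISR_ADDRESS = 0x2000   # address of the interrupt service routine (illustrative)

def interrupt():
    global pc
    stack.append(pc)        # step 2: push the current PC onto the stack
    pc = ISR_ADDRESS        # step 3: load the PC with the ISR address
    # step 4: execution now continues inside the ISR

def ret():
    global pc
    pc = stack.pop()        # step 6: RET restores the saved PC from the stack

interrupt()
print(hex(pc))   # 0x2000 -> executing the ISR
ret()
print(hex(pc))   # 0x105 -> back in the interrupted program
```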

Exceptions. Internally generated errors that produce an interrupt for the microprocessor are called exceptions. Exceptions are classified as: Faults, which are detected and serviced before the execution of the faulting instruction; Traps, which are reported immediately after the execution of the instruction that causes the problem; and Aborts, which do not permit the precise location of the instruction causing the exception to be determined and are used to report severe errors, such as hardware errors or illegal values in system tables.
