MEMORY ORGANIZATION Memory Hierarchy Main Memory Auxiliary Memory

Presentation transcript:

MEMORY ORGANIZATION
- Memory Hierarchy
- Main Memory
- Auxiliary Memory
- Associative Memory
- Cache Memory
- Virtual Memory
- Memory Management Hardware

MEMORY HIERARCHY
The goal of a memory hierarchy is to obtain the highest possible access speed while minimizing the total cost of the memory system.
[Figure: memory hierarchy — CPU registers, cache memory, and main memory on the processor side; auxiliary memory (magnetic disks and magnetic tapes) connects through the I/O processor. Speed decreases and capacity grows in the order Register, Cache, Main Memory, Magnetic Disk, Magnetic Tape.]

MAIN MEMORY
RAM and ROM Chips

Typical RAM chip: a 128 x 8 RAM with two chip-select inputs (CS1 and CS2), read (RD) and write (WR) controls, a 7-bit address (AD7), and an 8-bit bidirectional data bus. The chip is enabled only when CS1 = 1 and CS2 = 0.

CS1 CS2 RD WR | Memory function | State of data bus
 0   0   x  x | Inhibit         | High-impedance
 0   1   x  x | Inhibit         | High-impedance
 1   0   0  0 | Inhibit         | High-impedance
 1   0   0  1 | Write           | Input data to RAM
 1   0   1  x | Read            | Output data from RAM
 1   1   x  x | Inhibit         | High-impedance

Typical ROM chip: a 512 x 8 ROM with the same two chip-select inputs (CS1, CS2), a 9-bit address (AD9), and an 8-bit data bus; only read operations are possible.
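
A minimal sketch of the control behavior in the function table above, modeled in Python. The function name, the list used as chip storage, and the example values are illustrative assumptions, not part of the slide.

```python
def ram_chip_cycle(cs1, cs2, rd, wr, ram, addr, data_in=None):
    """Model one bus cycle of the 128 x 8 RAM chip using the function
    table above. Returns the byte driven onto the data bus, or None when
    the chip leaves the bus in the high-impedance state."""
    selected = (cs1 == 1) and (cs2 == 0)    # chip enabled only for CS1=1, CS2=0
    if not selected:
        return None                          # inhibit: high impedance
    if rd == 1:                              # read: output data from RAM
        return ram[addr]
    if wr == 1:                              # write: store input data
        ram[addr] = data_in & 0xFF
    return None                              # write drives nothing; otherwise inhibit

# Example: write 0x3C to address 5, then read it back
ram = [0] * 128
ram_chip_cycle(cs1=1, cs2=0, rd=0, wr=1, ram=ram, addr=5, data_in=0x3C)
print(hex(ram_chip_cycle(cs1=1, cs2=0, rd=1, wr=0, ram=ram, addr=5)))   # 0x3c
```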

Main Memory
MEMORY ADDRESS MAP
Address space assignment to each memory chip
Example: 512 bytes of RAM and 512 bytes of ROM

Component | Hexa address | Address bus lines 10 9 8 7 6 5 4 3 2 1
RAM 1     | 0000 - 007F  | 0 0 0 x x x x x x x
RAM 2     | 0080 - 00FF  | 0 0 1 x x x x x x x
RAM 3     | 0100 - 017F  | 0 1 0 x x x x x x x
RAM 4     | 0180 - 01FF  | 0 1 1 x x x x x x x
ROM       | 0200 - 03FF  | 1 x x x x x x x x x

Memory Connection to CPU
- RAM and ROM chips are connected to the CPU through the data and address buses
- The low-order lines of the address bus select a byte within a chip; the remaining higher-order lines select a particular chip through its chip-select inputs
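
A small sketch that decodes a 10-bit CPU address according to the memory map above. The function name and the returned chip labels are illustrative; the bit assignments follow the table (line 10 distinguishes ROM from RAM, lines 9-8 select the RAM chip, the low lines give the offset).

```python
def decode_address(addr):
    """Decode a 10-bit address (0000-03FF hex) into (chip, offset)."""
    assert 0 <= addr < 0x400           # 10-bit address space
    if addr & 0x200:                   # bus line 10 set -> ROM region 0200-03FF
        return "ROM", addr & 0x1FF     # 9-bit offset into the 512 x 8 ROM
    chip = (addr >> 7) & 0x3           # bus lines 9-8 select RAM 1..4
    offset = addr & 0x7F               # bus lines 7-1 select the byte in the chip
    return f"RAM {chip + 1}", offset

print(decode_address(0x0085))   # ('RAM 2', 5)
print(decode_address(0x0210))   # ('ROM', 16)
```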

Main Memory
CONNECTION OF MEMORY TO CPU
[Figure: the CPU's address bus (lines 16-11, 10, 9, 8, 7-1), RD/WR lines, and data bus connect to four 128 x 8 RAM chips and one 512 x 8 ROM. Lines 1-7 address a byte within each RAM (AD7) and lines 1-9 address the ROM (AD9); lines 8-9 feed a decoder that drives the individual RAM chip selects, and line 10 distinguishes RAM from ROM through the remaining chip-select inputs.]

AUXILIARY MEMORY
Information Organization on Magnetic Tapes
[Figure: records (R1, R2, R3, ...) are grouped into blocks (block 1, block 2, block 3) within a file; blocks are separated by inter-record gaps (IRG), and the file ends with an EOF mark.]
Organization of Disk Hardware
[Figure: data is recorded on concentric tracks; a moving-head disk positions a single head over the selected track, while a fixed-head disk provides one head per track.]

Associative Memory
ASSOCIATIVE MEMORY
- Accessed by the content of the data rather than by an address
- Also called Content Addressable Memory (CAM)

Hardware Organization
[Figure: an associative memory array of m words, n bits per word, with an argument register (A), a key register (K), a match register (M), input lines, and read/write controls.]
- Each word in the CAM is compared in parallel with the content of A (the argument register)
- If CAM word[i] = A, then M(i) = 1
- A read sequentially accesses the CAM words for which M(i) = 1
- K (the key register) provides a mask for choosing a particular field or key of the argument in A: only those bits of the argument that have 1's in the corresponding positions of K are compared
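
A minimal sketch of the masked CAM search described above, with the comparison expressed word-at-a-time on small integers. The function name, word width, and example contents are illustrative assumptions.

```python
def cam_search(words, argument, key):
    """Compare the argument against every stored word (conceptually in
    parallel), but only in the bit positions where the key register holds
    a 1. Returns the match register as a list of 0/1 flags."""
    match = []
    for word in words:
        # A word matches when it agrees with the argument on all key bits.
        match.append(1 if (word ^ argument) & key == 0 else 0)
    return match

# 4-word CAM of 8-bit entries; compare only the upper 4 bits (key = 0xF0)
words = [0x3A, 0x35, 0xA2, 0x30]
print(cam_search(words, argument=0x37, key=0xF0))   # [1, 1, 0, 1]
```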

Associative Memory
ORGANIZATION OF CAM
[Figure: the CAM array is an m x n grid of cells C(i,j). Argument bits A1..An and key bits K1..Kn run down the bit columns (Bit 1 .. Bit n), and each word i (Word 1 .. Word m) produces a match output Mi.]
[Figure: internal organization of a typical cell C(i,j) — an R-S flip-flop F(i,j) with write and read logic, inputs Aj and Kj, and a match output that feeds Mi.]

Associative Memory
MATCH LOGIC
Match logic for word i (F_ij = stored bit j of word i, A_j = argument bit, K_j = key bit):

  x_j = A_j F_ij + A_j' F_ij'          (bit j of word i agrees with the argument)
  M_i = (x_1 + K_1')(x_2 + K_2') ... (x_n + K_n')
      = product over j = 1..n of (A_j F_ij + A_j' F_ij' + K_j')

A bit position whose key bit K_j is 0 is excluded from the comparison.
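
A bit-level sketch that evaluates the match expression above for a single word. The list-of-bits representation and the example values are illustrative assumptions.

```python
def match_word(F_i, A, K):
    """Compute M_i = prod_j (A_j F_ij + A_j' F_ij' + K_j') for one word.

    F_i, A, K are lists of 0/1 bits (stored word, argument, key)."""
    M_i = 1
    for F_ij, A_j, K_j in zip(F_i, A, K):
        x_j = (A_j & F_ij) | ((1 - A_j) & (1 - F_ij))   # bits agree
        M_i &= x_j | (1 - K_j)                           # or the bit is masked off
    return M_i

# Third bit differs but is masked off by K, so the word still matches
print(match_word(F_i=[1, 0, 1, 1], A=[1, 0, 0, 1], K=[1, 1, 0, 1]))   # 1
```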

Cache Memory
CACHE MEMORY
References to memory in any given interval of time tend to be confined to a few localized areas of memory. This phenomenon is known as the property of locality of reference. If the active portions of the program and data are placed in a fast, small memory, the average memory access time can be reduced, thus reducing the total execution time of the program. Such a fast, small memory is referred to as a cache memory.
The performance of cache memory is frequently measured in terms of a quantity called the hit ratio. When the CPU refers to memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the cache, it is in main memory and the reference counts as a miss. The hit ratio is the number of hits divided by the total number of CPU references to memory.
[Figure: the CPU accesses the cache memory, which sits between the CPU and main memory.]
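
A short sketch of the hit ratio and the usual average-access-time estimate. The formula (miss cost modeled as a single main-memory access) and the timing values are illustrative assumptions, not stated on the slide.

```python
def average_access_time(hit_ratio, t_cache, t_main):
    """Average access time: hits cost t_cache, misses cost t_main."""
    return hit_ratio * t_cache + (1 - hit_ratio) * t_main

# Example: 100 ns cache, 1000 ns main memory, 900 hits out of 1000 references
hits, misses = 900, 100
hit_ratio = hits / (hits + misses)                   # 0.9
print(hit_ratio)                                     # 0.9
print(average_access_time(hit_ratio, 100, 1000))     # 190.0 ns
```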

Cache Memory
MEMORY AND CACHE MAPPING - ASSOCIATIVE MAPPING -
Mapping Function: the transformation of data from main memory to cache memory is referred to as a mapping process.
- Associative mapping
- Direct mapping
- Set-associative mapping

Associative Mapping
- The associative memory stores both the address and the content of the memory word.
[Figure: a CAM whose entries hold a 15-bit address and 12-bit data; the CPU address is placed in the argument register and compared against all stored addresses in parallel. Example entries (octal): address 01000 / data 3450, address 02777 / data 6710, address 22345 / data 1234.]

Cache Memory
MEMORY AND CACHE MAPPING - DIRECT MAPPING -
- Each memory block has only one place where it can be loaded in the cache
- The mapping table is made of RAM instead of CAM
- An n-bit memory address consists of two parts: k bits of index field and n-k bits of tag field
- The n-bit address is used to access main memory and the k-bit index is used to access the cache

Addressing Relationships
[Figure: a 15-bit memory address is split into a 6-bit tag and a 9-bit index. Main memory is 32K x 12 (octal addresses 00000-77777, 15-bit address, 12-bit data); cache memory is 512 x 12 (octal addresses 000-777, 9-bit address, 12-bit data).]

Direct Mapping Cache Organization
[Figure: example contents (octal) — main memory address 00000 holds 1220, 00777 holds 2340, 01000 holds 3450, 01777 holds 4560, 02000 holds 5670, 02777 holds 6710. Cache index 000 holds tag 00, data 1220; cache index 777 holds tag 02, data 6710.]

Cache Memory
DIRECT MAPPING Operation
- The CPU generates a memory request with address (TAG; INDEX)
- Access the cache using INDEX to obtain (tag; data), and compare TAG with tag
- If they match -> Hit: provide Cache[INDEX](data) to the CPU
- If they do not match -> Miss:
    M[tag; INDEX] <- Cache[INDEX](data)
    Cache[INDEX] <- (TAG; M[TAG; INDEX])
    CPU <- Cache[INDEX](data)

Direct Mapping with a block size of 8 words
[Figure: the 15-bit address is split into a 6-bit tag, a 6-bit block field, and a 3-bit word field (block + word = 9-bit INDEX). Example (octal): cache indexes 000-007 hold block 0 with tag 01 (data 3450 at index 000, 6578 at index 007); indexes 770-777 hold block 63 with tag 02 (data 6710 at index 777).]
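
A minimal sketch of the direct-mapped lookup described above: the address is split into tag and index, the indexed line is checked, and on a miss the line is refilled from memory. The class name, one-word lines, and the backing memory list are illustrative assumptions.

```python
class DirectMappedCache:
    """Word-addressed direct-mapped cache with 2**index_bits lines."""

    def __init__(self, index_bits, memory):
        self.index_bits = index_bits
        self.lines = [None] * (1 << index_bits)    # each line: (tag, data)
        self.memory = memory

    def read(self, address):
        index = address & ((1 << self.index_bits) - 1)
        tag = address >> self.index_bits
        line = self.lines[index]
        if line is not None and line[0] == tag:    # tags match -> hit
            return line[1], True
        data = self.memory[address]                # miss: fetch from main memory
        self.lines[index] = (tag, data)            # replace the line
        return data, False

memory = list(range(1 << 15))            # 32K words; value equals address
cache = DirectMappedCache(index_bits=9, memory=memory)
print(cache.read(0o01000))               # (512, False)  miss on first access
print(cache.read(0o01000))               # (512, True)   hit on second access
```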

Cache Memory
MEMORY AND CACHE MAPPING - SET-ASSOCIATIVE MAPPING -
- Each memory block can be loaded into any one of a set of locations in the cache

Set-Associative Mapping Cache with a set size of two
[Figure: each cache index holds two (tag, data) pairs. Example (octal): index 000 holds (01, 3450) and (02, 5670); index 777 holds (02, 6710) and (00, 2340).]

Operation
- The CPU generates a memory address (TAG; INDEX)
- Access the cache with INDEX; the cache word is (tag 0, data 0); (tag 1, data 1)
- Compare TAG with tag 0 and then with tag 1
- If tag i = TAG -> Hit, CPU <- data i
- If neither tag matches -> Miss: replace either (tag 0, data 0) or (tag 1, data 1). Assuming (tag 0, data 0) is selected for replacement (why (tag 0, data 0) instead of (tag 1, data 1)?):
    M[tag 0, INDEX] <- Cache[INDEX](data 0)
    Cache[INDEX](tag 0, data 0) <- (TAG, M[TAG, INDEX])
    CPU <- Cache[INDEX](data 0)

Cache Memory
CACHE WRITE
Write-Through
- When writing into memory: if hit, both the cache and main memory are written in parallel; if miss, only main memory is written
- For a read miss, the missing block may simply be loaded over a cache block, since main memory is always up to date

Write-Back (Copy-Back)
- If hit, only the cache is written
- If miss, the missing block is brought into the cache and the write is performed in the cache
- For a read miss, the candidate (victim) block must be written back to main memory before it is replaced
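
A small sketch contrasting the two write policies above. It assumes one-word cache lines, a 9-bit index as in the earlier direct-mapping slides, and no fetch of the missing block on a write-back miss (with one-word lines the whole word is overwritten anyway); function names and data structures are illustrative.

```python
def write_through(cache, memory, index, tag, data):
    """Write-through store: memory is always written; the cache is
    updated only on a write hit (no write-allocate on a miss)."""
    line = cache.get(index)
    if line is not None and line[0] == tag:            # write hit
        cache[index] = (tag, data)
    memory[(tag << 9) | index] = data                  # memory written in parallel

def write_back(cache, dirty, memory, index, tag, data):
    """Write-back store: on a miss the dirty victim line is copied back
    first and the line is claimed; the write then goes only to the cache."""
    line = cache.get(index)
    if line is None or line[0] != tag:                 # write miss
        if line is not None and dirty.get(index):
            memory[(line[0] << 9) | index] = line[1]   # copy victim back
    cache[index] = (tag, data)                         # write into the cache only
    dirty[index] = True

memory = [0] * (1 << 15)
cache, dirty = {}, {}
write_back(cache, dirty, memory, index=0o000, tag=0o01, data=0o3450)
write_through(cache, memory, index=0o000, tag=0o01, data=0o3451)
print(cache[0o000], memory[(0o01 << 9) | 0o000])       # (1, 1833) 1833
```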

Virtual Memory
VIRTUAL MEMORY
Gives the programmer the illusion that the system has a very large memory, even though the computer actually has a relatively small main memory.

Address Space (Logical) and Memory Space (Physical)
- Virtual address (logical address): an address generated by a program; the set of such addresses is the address space
- Physical address: an actual main-memory address; the set of such addresses is the memory space

Address Mapping
- A memory mapping table translates each virtual address into a physical address
[Figure: the virtual address from the CPU is held in a virtual address register, looked up in the memory mapping table, and the resulting main memory address is placed in a main memory address register; data passes through the memory table buffer register and the main memory buffer register.]

Virtual Memory
PAGE REPLACEMENT
Decision on which page to displace to make room for an incoming page when no free frame is available.

Modified page-fault service routine:
1. Find the location of the desired page on the backing store
2. Find a free frame
   - If there is a free frame, use it
   - Otherwise, use a page-replacement algorithm to select a victim frame
   - Write the victim page to the backing store
3. Read the desired page into the (newly) free frame
4. Restart the user process

[Figure: the victim page is swapped out and its page-table entry is changed to invalid (i); the desired page is swapped in from the backing store, its frame number f is entered in the page table, and the entry is reset to valid (v).]

Virtual Memory
PAGE REPLACEMENT ALGORITHMS

FIFO
Reference string (three page frames): 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
[Figure: frame contents after each reference under FIFO replacement.]
- The FIFO algorithm selects the page that has been in memory for the longest time
- Implemented with a queue: every time a page is loaded, its identification is inserted at the tail of the queue
- Easy to implement
- May result in frequent page faults

Optimal Replacement (OPT)
- Lowest page-fault rate of all algorithms
- Replace the page that will not be used for the longest period of time
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
[Figure: frame contents after each reference under optimal replacement.]
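
A minimal sketch that simulates FIFO replacement on the reference string above, assuming three frames. The function name and the fault count shown in the comment follow from this simulation, not from the slide itself.

```python
from collections import deque

def fifo_faults(reference_string, num_frames=3):
    """Count page faults under FIFO replacement."""
    frames = set()
    order = deque()                      # load order of resident pages
    faults = 0
    for page in reference_string:
        if page in frames:
            continue                     # hit: FIFO order does not change
        faults += 1
        if len(frames) == num_frames:
            victim = order.popleft()     # evict the longest-resident page
            frames.remove(victim)
        frames.add(page)
        order.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(ref))   # 15 faults with three frames
```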

Virtual Memory
PAGE REPLACEMENT ALGORITHMS
LRU
- OPT is difficult to implement since it requires future knowledge
- LRU uses the recent past as an approximation of the near future: replace the page that has not been used for the longest period of time
Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
[Figure: frame contents after each reference under LRU replacement.]
- LRU may require substantial hardware assistance
- The problem is to determine an order for the frames defined by the time of last use
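
A companion sketch for LRU on the same reference string, again assuming three frames; the recency ordering is kept in an OrderedDict purely as an illustration of the "time of last use" bookkeeping the slide mentions.

```python
from collections import OrderedDict

def lru_faults(reference_string, num_frames=3):
    """Count page faults under LRU replacement."""
    frames = OrderedDict()                  # pages ordered by last use
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)        # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)      # evict the least recently used page
        frames[page] = None
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref))   # 12 faults with three frames
```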

Memory Management Hardware
MEMORY MANAGEMENT HARDWARE
Basic functions of memory management:
- Dynamic storage relocation: mapping logical memory references to physical memory references
- Provision for sharing common information stored in memory by different users
- Protection of information against unauthorized access

Segmentation
- A segment is a set of logically related instructions or data elements associated with a given name
- Segments are of variable size

User's view of memory
The user does not think of memory as a linear array of words. Rather, the user prefers to view memory as a collection of variable-sized segments, with no necessary ordering among them.
[Figure: a user's view of a program as segments — stack, subroutine, symbol table, SQRT, main program.]

Memory Management Hardware
SEGMENTATION
- A memory management scheme which supports the user's view of memory
- A logical address space is a collection of segments
- Each segment has a name and a length
- Addresses specify both the segment name and the offset within the segment
- For simplicity of implementation, segments are numbered

Segmentation Hardware
[Figure: the CPU issues a logical address (s, d); the segment number s indexes the segment table to obtain (limit, base); if d < limit, the physical address is base + d, otherwise an addressing error (trap) is raised.]
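
A minimal sketch of the segmentation hardware above, using the segment table from the example on the next slide. The function name and the dictionary representation are illustrative assumptions.

```python
def translate_segmented(segment_table, s, d):
    """Translate logical address (s, d): check d against the segment limit,
    then add it to the segment base to form the physical address."""
    limit, base = segment_table[s]
    if d >= limit:
        raise ValueError(f"addressing error: offset {d} >= limit {limit}")
    return base + d

# Segment table (limit, base) from the segmentation example slide
segment_table = {0: (1000, 1400), 1: (400, 6300), 2: (400, 4300),
                 3: (1100, 3200), 4: (1000, 4700)}
print(translate_segmented(segment_table, s=2, d=53))    # 4353
print(translate_segmented(segment_table, s=0, d=999))   # 2399
```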

Memory Management Hardware
SEGMENTATION EXAMPLE
[Figure: a logical address space with five segments (stack, subroutine, symbol table, SQRT, main program, numbered 0-4) mapped into physical memory — segment 0 at 1400-2400, segment 3 at 3200-4300, segment 2 at 4300-4700, segment 4 at 4700-5700, segment 1 at 6300-6700.]

Segment Table
Segment | Limit | Base
0       | 1000  | 1400
1       | 400   | 6300
2       | 400   | 4300
3       | 1100  | 3200
4       | 1000  | 4700

Memory Management Hardware
SHARING OF SEGMENTS
[Figure: two users share a single copy of an editor. In each user's logical memory, segment 0 is the editor and segment 1 is the user's private data.]
- Segment Table (User 1): segment 0 — limit 25286, base 43062 (editor); segment 1 — limit 4425, base 68348 (data 1)
- Segment Table (User 2): segment 0 — limit 25286, base 43062 (editor); segment 1 — limit 8550, base 90003 (data 2)
- Physical memory: editor at 43062-68348, data 1 at 68348-72773, data 2 at 90003-98556

Memory Management Hardware
SEGMENTED PAGE SYSTEM
[Figure: the logical address is divided into segment, page, and word fields. The segment field indexes the segment table to find the base of that segment's page table; the page field is added to this base to index the page table, which supplies a block number; the block number concatenated with the word field forms the physical address.]

Memory Management Hardware
IMPLEMENTATION OF PAGE AND SEGMENT TABLES
Implementation of the Page Table
- Hardware registers (if the page table is reasonably small)
- Main memory
  - A Page Table Base Register (PTBR) points to the page table
  - Two memory accesses are needed to access a word: one for the page-table entry, one for the word itself
- Cache memory (TLB: Translation Lookaside Buffer)
  - To speed up the effective memory-access time, a special small associative memory (cache) holds recently used translations

Implementation of the Segment Table
- Similar to the case of the page table
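
A short sketch of the standard effective-access-time estimate with a TLB. The formula and the timing values are textbook-style assumptions, not stated explicitly on the slide; on a TLB miss the page-table entry must be fetched from main memory, so two memory accesses are needed.

```python
def effective_access_time(tlb_hit_ratio, t_tlb, t_mem):
    """Effective access time with a TLB in front of a memory-resident page table."""
    hit_time = t_tlb + t_mem          # translation found in the TLB
    miss_time = t_tlb + 2 * t_mem     # extra access to read the page-table entry
    return tlb_hit_ratio * hit_time + (1 - tlb_hit_ratio) * miss_time

# Example: 20 ns TLB lookup, 100 ns memory access, 90% TLB hit ratio
print(effective_access_time(0.9, 20, 100))   # 130.0 ns
```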

Memory Management Hardware
EXAMPLE: Logical and Physical Addresses
- Logical address format: 16 segments of 256 pages each, each page of 256 words — a 4-bit segment field, an 8-bit page field, and an 8-bit word field (20-bit logical address)
- Physical address format: 4096 blocks of 256 words each, each word 32 bits — a 12-bit block field and an 8-bit word field (20-bit physical address); physical memory is 2^20 x 32

Logical and Physical Memory Address Assignment
(a) Logical address assignment: hexadecimal addresses 60000, 60100, 60200, 60300, and 60400-604FF correspond to pages 0-4 of segment 6
(b) Segment-page versus memory-block assignment:

Segment | Page | Block
6       | 00   | 012
6       | 01   | 000
6       | 02   | 019
6       | 03   | 053
6       | 04   | A61

Memory Management Hardware
LOGICAL TO PHYSICAL MEMORY MAPPING
Segment and page table mapping
[Figure: logical address (in hexadecimal) 6 02 7E. Segment table entry 6 contains 35, the base of the segment's page table. Page table entries 35-39 contain blocks 012, 000, 019, 053, and A61; entry 35 + 02 = 37 yields block 019. The physical address 0197E (block 019, word 7E) selects a 32-bit word; block 019 occupies physical addresses 01900-019FF.]

Associative memory mapping
Segment | Page | Block
6       | 02   | 019
6       | 04   | A61
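
A minimal sketch that reproduces the worked example above in code. The function name and the dictionary representations of the tables are illustrative; the table contents and the 8-bit word field follow the example.

```python
def map_logical_to_physical(segment_table, page_table, logical):
    """Translate (segment, page, word): the segment table gives the base
    index of the segment's page table, the indexed page-table entry gives
    a block number, and block | word forms the physical address."""
    segment, page, word = logical
    page_table_base = segment_table[segment]
    block = page_table[page_table_base + page]
    return (block << 8) | word               # 8-bit word field

segment_table = {0x6: 0x35}
page_table = {0x35: 0x012, 0x36: 0x000, 0x37: 0x019, 0x38: 0x053, 0x39: 0xA61}
physical = map_logical_to_physical(segment_table, page_table, (0x6, 0x02, 0x7E))
print(hex(physical))                         # 0x197e
```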

Memory Management Hardware
MEMORY PROTECTION
Protection information can be included in the segment table or segment register of the memory management hardware.
- Format of a typical segment descriptor: base address | length | protection
- The protection field in a segment descriptor specifies the access rights to the particular segment
- In a segmented-page organization, each entry in the page table may have its own protection field to describe the access rights of each page
- Access rights:
  - Full read and write privileges
  - Read only (write protection)
  - Execute only (program protection)
  - System only (O.S. protection)

A Typical Cache and TLB Design
[Figure: the CPU's virtual address is split into page number, line number, and word-in-line fields. The page number is hashed to select a set in the TLB while, in parallel, the line number selects a set in the cache and its address tags are read out. If the TLB holds the translation, the real address it supplies is compared with the cache set's tags; on a cache hit the selected data word is aligned and returned to the CPU, otherwise the real address is sent to main memory. On a TLB miss the virtual address goes to the translator.]

Structure of Cache Entry and Cache Set
- Cache entry: real address tag | data | valid bit
- Cache set: E entries (Entry 1, Entry 2, ..., Entry E) plus replacement status

Cache Operation Flow Chart
1. Receive the virtual address; hash the page number and search the TLB while, in parallel, using the line number to select a cache set and read out its address tags.
2. If the translation is in the TLB: update the TLB replacement status and compare the real address with the tags of the selected set.
   - Match (hit): update the cache replacement status, select the correct line, then select the correct word from the line and read it out.
   - No match (miss): send the real address to main memory, receive the line, store it in the cache, then select and read out the correct word.
3. If the translation is not in the TLB: send the virtual address to the translator, use the page and segment tables to translate the address, put the translation in the TLB, and continue as above.

Virtual Address Format - Example
[Figure: virtual address format of the Fairchild Clipper — the high-order bits form the page number (mapped through the page directory and page table), and the low-order bits give the byte within the page, subdivided into the line number (used to select a set in the TLB and in the cache), the word within the line, and the byte within the word.]