Memory Organization.


Data Organization

Big endian
- The most significant byte is stored in the first (lowest) memory location; each additional byte is stored in the next location.
Little endian
- The least significant byte is stored in the first memory location; each additional byte is stored in the next location.
Alignment
- Some data requires more than one byte to represent a value. Because memory is byte-addressed, such values must be stored across multiple locations.

Notes: Neither format is better than the other; a given CPU simply expects data to be stored in one or the other. Problems arise when data is transferred between computers that use different byte orders.
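The byte-order difference can be seen directly with Python's standard `struct` module; this minimal sketch packs the same 32-bit value both ways and inspects which byte lands in the lowest address:

```python
import struct

value = 0x12345678  # 32-bit value to store

big = struct.pack(">I", value)     # big endian: MSB (0x12) stored first
little = struct.pack("<I", value)  # little endian: LSB (0x78) stored first

print(big.hex())     # 12345678
print(little.hex())  # 78563412
```

Transferring `little` to a big-endian machine and reinterpreting it without conversion is exactly the cross-platform problem the slide describes.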

Memory Organization/Interfacing

Types of ROM:
- Masked ROM
- PROM
- EPROM
- EEPROM

Memory (cont'd)

Types of RAM:
- DRAM
- SRAM
Organization:
- Linear
- Two-dimensional

Memory Configuration

Single chip
- The address bus, data bus, and control bus are all connected to the one memory chip.
Multiple chips
- The address bus and control bus are connected to every chip.
- Different bits of the data bus are connected to the data pins of different chips.

Computer Architectures

von Neumann
- Instructions and data are stored in the same memory module.
Harvard
- Separate memory modules for instructions and data.
Modern PCs
- A Harvard-style split is used in cache memory (separate instruction and data caches).

Memory Hierarchy

Hierarchical memory system: registers, cache, main memory, secondary memory.

- One of the most important considerations in understanding the performance capabilities of a processor.
- Some types of memory are far less efficient, and therefore cheaper, than others; computer systems use a combination of memory types to provide the best performance at the best cost (the hierarchical memory approach).
- In general, the faster a memory is, the more expensive it is per bit of storage.
- By using a hierarchy of memories, each with different access speeds and storage capacities, a computer system can exhibit performance above what would be possible with any single memory type.
- Memory is classified by its distance from the processor, measured in the number of machine cycles it takes to access the memory (closer means faster).

Memory Hierarchy Terminology

- Hit: the requested data resides in a given level of memory.
- Miss: the requested data is not found in the given level of memory.
- Hit rate: the percentage of memory accesses found in a given level of memory.
- Miss rate: the percentage of memory accesses not found in a given level (1 - hit rate).
- Hit time: the time required to access the requested data in a given level of memory.
- Miss penalty: the time required to process a miss.

Notes: Typically we are concerned with the hit rate only for the upper levels of memory. The miss penalty includes replacing a block in the upper level plus the additional time to deliver the requested data to the processor; processing a miss typically takes significantly longer than processing a hit.
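These definitions combine into the standard average (effective) memory access time formula: every access pays the hit time, and the fraction of accesses that miss also pays the miss penalty. A small sketch with hypothetical numbers:

```python
def avg_access_time(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average access time for one level of the hierarchy:
    hit time + miss rate * miss penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Hypothetical values: 5 ns hit time, 5% miss rate, 100 ns miss penalty.
print(avg_access_time(5, 0.05, 100))  # 10.0
```

Note how a miss penalty 20x the hit time still only doubles the average access time when the hit rate is 95%; this is why a small, fast cache pays off.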

Memory Hierarchy Access Times

- Registers: 1-2 ns (system)
- L1 cache: 3-10 ns (system)
- L2 cache: 25-50 ns (system)
- Main memory: 30-90 ns (system)
- Fixed disk: 5-29 ms (online)
- Optical disk: 100 ms-5 s (near-line)
- Magnetic tape: 10 s-3 min (offline)

Locality of Reference

- Temporal locality: recently accessed items tend to be accessed again in the near future.
- Spatial locality: accesses tend to be clustered in the address space (e.g., arrays or loops).
- Sequential locality: instructions tend to be accessed sequentially.

Processors access memory in a patterned way: if memory location X is accessed at time t, there is a high probability that location X+1 will be accessed in the near future. Locality of reference can be exploited by implementing the memory as a hierarchy: when a miss is processed, instead of simply transferring the requested data to a higher level, the entire block containing the data is transferred. Since the additional data in the block is likely to be needed soon, it can then be loaded quickly from the faster memory. This principle lets a system use a small amount of very fast memory to effectively accelerate the majority of memory accesses.

Cache

- Small, very high-speed memory that temporarily stores data from frequently used memory locations.
- L1 cache is smaller (e.g., 8 KB or 16 KB) and resides on the processor.
- L2 cache is typically 256 KB or 512 KB and resides between the CPU and main memory.
- Its purpose is to speed up memory accesses by storing recently used data closer to the CPU instead of in main memory.
- Cache is composed of SRAM.
- Cache is not accessed by address; it is accessed by content (content-addressable memory).

Cache Mapping Schemes

The mapping scheme determines where data is placed when it is originally copied into cache and provides a method for the CPU to find previously copied data when searching the cache:
- Direct mapped cache
- Fully associative cache
- Set associative cache

For cache to be functional it must store useful data, and that data is useless if the CPU cannot find it. When accessing data or instructions, the CPU first generates a main memory address. If the data has been copied to cache, its address in cache is not the same as its main memory address. How does the CPU find the data when it is in cache? It uses a mapping scheme that "converts" the main memory address into a cache location by giving special significance to groups of bits in the address. These groups are called fields; depending on the mapping scheme there may be two or three of them, and how the fields are used depends on the scheme as well.

Direct Mapped Cache

- A modular approach: block X of main memory maps to cache block X mod N, where N is the total number of blocks in cache.
- In direct mapping, the binary main memory address is partitioned into three fields: tag, block, and word.
- There are more main memory blocks than cache blocks, so main memory blocks compete for cache locations.
- Inexpensive but restrictive: a given block of memory can be placed in only one particular block of cache.

Example

A small system has 16 words of main memory divided into 8 blocks (2 words per block), and a cache of 4 blocks (8 words total).
- The main memory address has 4 bits (2^4 = 16 words).
- The 4-bit address is divided into three fields: word field 1 bit, block field 2 bits, tag field 1 bit.

Mapping (main memory block -> cache block):
- Block 0 (addresses 0, 1) -> Block 0
- Block 1 (addresses 2, 3) -> Block 1
- Block 2 (addresses 4, 5) -> Block 2
- Block 3 (addresses 6, 7) -> Block 3
- Block 4 (addresses 8, 9) -> Block 0
- Block 5 (addresses 10, 11) -> Block 1
- Block 6 (addresses 12, 13) -> Block 2
- Block 7 (addresses 14, 15) -> Block 3

Main memory address 9 = 1001 (binary), split into fields:
- tag = 1 (1 bit)
- block = 00 (2 bits)
- word = 1 (1 bit)
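The field extraction above is just shifting and masking; this sketch hard-codes the slide's field widths (1-bit word, 2-bit block, 1-bit tag) as defaults:

```python
def split_direct_mapped(addr, word_bits=1, block_bits=2):
    """Split a main memory address into (tag, block, word) fields,
    using the slide's 16-word memory / 4-block cache example."""
    word = addr & ((1 << word_bits) - 1)            # lowest bit(s)
    block = (addr >> word_bits) & ((1 << block_bits) - 1)  # middle bits
    tag = addr >> (word_bits + block_bits)          # remaining high bits
    return tag, block, word

print(split_direct_mapped(9))  # (1, 0, 1): tag 1, block 00, word 1
```

The same helper generalizes to realistic caches simply by passing larger field widths.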

Fully Associative Cache

- Built from associative memory so it can be searched in parallel: a single search compares the requested tag to all tags in cache to determine whether the block is present.
- Special hardware is required to allow associative searching (expensive).
- A block of memory can be placed in any block of cache, so it is not as restrictive as direct mapping.
- Requires a larger tag to be stored, which results in a larger cache.

Set Associative Cache

- N-way set associative mapping is a combination of direct mapped and fully associative: the address maps a block to a set of cache blocks, and the block can be placed anywhere within that set.
- The address is partitioned into three fields: tag, set, and word.
- All sets in the cache must be the same size: a 2-way set associative cache has two blocks per set, an 8-way cache has eight blocks per set, and so on.
- The tag and word fields serve the same purposes as in direct mapping; the set field indicates into which cache set the main memory block maps.

Example: 2-way set associative mapping with a main memory of 2^14 words and a cache of 16 blocks of 8 words each gives 8 sets. The main memory address must be 14 bits long: the set field is 3 bits, the word field is 3 bits, and the tag field is the remaining 8 bits.
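The worked example's field split can be sketched the same way as the direct-mapped case, with the set field replacing the block field (widths below are the slide's 8/3/3 split; the sample address is chosen purely for illustration):

```python
def split_set_associative(addr, word_bits=3, set_bits=3):
    """Split a 14-bit address per the slide's 2-way example:
    tag 8 bits, set 3 bits, word 3 bits."""
    word = addr & ((1 << word_bits) - 1)
    set_index = (addr >> word_bits) & ((1 << set_bits) - 1)
    tag = addr >> (word_bits + set_bits)
    return tag, set_index, word

# Build an address with tag 3, set 2, word 5, then recover the fields.
addr = (3 << 6) | (2 << 3) | 5
print(split_set_associative(addr))  # (3, 2, 5)
```

On a lookup, the set index selects one of the 8 sets, and only the (here, two) tags in that set need to be compared associatively.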

Main Memory

- Medium speed.
- Much larger than cache.
- Complemented by a very large secondary memory.
- Composed of DRAM.

Secondary Memory

- Very large, but slower to access.
- Examples: hard disk, removable media.

Virtual Memory

Key terms: virtual address, physical address, mapping, page frames, pages, paging, fragmentation, page fault.

Virtual memory uses hard disk space as an extension of RAM, increasing the address space available to a process. It allows a program to run when only specific pieces of it are present in memory; parts not currently in use are stored in the page file on disk. Even 512 MB of RAM is not enough to hold multiple applications and the OS concurrently. The area on the hard drive used for virtual memory is called a page file, and the most common way to implement virtual memory is paging.

- Virtual address: the logical or program address that the process uses. Whenever the CPU generates an address, it is always in terms of the virtual address space.
- Physical address: the real address in physical memory.
- Mapping: the mechanism by which virtual addresses are translated into physical ones (similar to cache mapping).
- Page frames: the equal-size chunks or blocks into which main memory is divided.
- Pages: the chunks or blocks into which virtual memory (the logical address space) is divided, each equal in size to a page frame. Virtual pages are stored on disk until needed.
- Paging: the process of copying a virtual page from disk to a page frame in main memory. It is the most popular implementation of virtual memory; VM can also be implemented with segmentation or with a combination of paging and segmentation.
- Fragmentation: memory that becomes unusable (the system allocates more memory to a process than it needs because it must allocate whole pages).
- Page fault: a requested page is not in main memory and must be copied into memory from disk.

The success of paging, like caching, is very dependent on the locality principle.

Paging

- Physical memory is allocated to processes in fixed-size chunks (page frames).
- The page table typically resides in main memory and has N rows, where N is the number of virtual pages in the process.
- Valid bit: 0 means the page is not in main memory; 1 means it is.
- Every process has its own page table, which stores the physical location of each virtual page of the process.
- Additional fields can be added to each page table entry to provide more information:
  - Dirty bit (aka modify bit): indicates whether the page has been changed. If the page has not been modified, it does not need to be rewritten to disk.
  - Usage bit: indicates page usage; set to 1 whenever the page is accessed, and reset to 0 after a certain period of time.
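A page table entry as described above can be modeled as a small record; the field names here are illustrative, not tied to any particular OS:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PageTableEntry:
    valid: bool = False          # is the page resident in main memory?
    frame: Optional[int] = None  # physical frame number, when valid
    dirty: bool = False          # modified since load? (skip disk write-back if not)
    used: bool = False           # accessed recently? (cleared periodically)

# One table per process, one entry per virtual page.
page_table = [PageTableEntry() for _ in range(8)]
page_table[0] = PageTableEntry(valid=True, frame=2)
print(page_table[0])
```

Real hardware packs these fields into a few bits of a single word per entry; the record form just makes their roles explicit.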

How Paging Works

The OS dynamically translates a virtual address generated by a process into the physical address in main memory where the data actually resides. To convert the address, the virtual address is divided into two fields: page and offset, where the offset field gives the position within the page where the data is located. The steps are:
1. Extract the page number.
2. Extract the offset.
3. Translate the page number into a physical page frame number using the page table.

How Paging Works cont'd

Look up the page number in the page table and check the valid bit.

If the valid bit = 0, the system generates a page fault and the OS must intervene:
1. Locate the page on disk.
2. Find a free page frame.
3. Copy the page into the free page frame.
4. Update the page table.
5. Resume execution of the process.

If the process has free frames in main memory when a page fault occurs, the newly retrieved page can be placed in any of them. If the memory allocated to the process is full, a victim page must be selected. Replacement algorithms used to select a victim page include FIFO, Random, and LRU (least recently used).
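LRU victim selection, one of the replacement policies named above, can be sketched in a few lines with an ordered mapping (this is a toy model of the policy, not how an OS tracks recency in practice):

```python
from collections import OrderedDict

class LRUFrames:
    """Toy sketch: pages compete for a fixed number of frames;
    on a fault with no free frame, evict the least recently used page."""
    def __init__(self, n_frames):
        self.n_frames = n_frames
        self.resident = OrderedDict()  # virtual page -> frame, oldest first

    def access(self, page):
        if page in self.resident:              # hit: mark most recently used
            self.resident.move_to_end(page)
            return self.resident[page]
        # Page fault: use a free frame if one exists, else evict the LRU page.
        if len(self.resident) < self.n_frames:
            frame = len(self.resident)
        else:
            _victim, frame = self.resident.popitem(last=False)
        self.resident[page] = frame            # "copy the page from disk"
        return frame

mem = LRUFrames(2)
mem.access(0); mem.access(1); mem.access(0)
print(mem.access(2))  # 1: page 1 was least recently used, its frame is reused
```

FIFO differs only in that a hit would not call `move_to_end`, so eviction order depends on load time rather than last use.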

How Paging Works cont'd

If the valid bit = 1, the page is in memory:
1. Replace the virtual page number with the actual frame number.
2. Add the offset to the frame number for the given virtual page to form the physical address.
3. Access the data at that offset in the physical page frame.

Example

- A process has a virtual address space of 2^8 words.
- Physical memory consists of 4 page frames, each 32 words in length.
- The virtual address is 8 bits; the physical address is 7 bits, because 4 frames of 32 words each is 128 words = 2^7.
- In this example the system has no cache.

Example cont'd

The virtual address has two fields: page (3 bits) and offset (5 bits). The offset must be 5 bits because 2^5 = 32: five bits are needed to address the 32 words in a page.

Suppose the system generates virtual address 13 (00001101 in binary):
- Page = 000, offset = 01101.
- The page field is used as an index into the page table. The 0th entry shows that virtual page 0 maps to physical page frame 2 (10 in binary).
- The translated address becomes page frame 2, offset 13. Combining the frame (10) and the offset (01101) gives physical address 1001101.
- The physical address has only 7 bits: 2 for the frame (4 frames) and 5 for the offset.
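The whole translation can be replayed in a few lines; the page table contents below just extend the slide's single known entry (page 0 -> frame 2) with illustrative values for the other pages:

```python
def translate(vaddr, page_table, offset_bits=5):
    """Sketch of the slide's example: 8-bit virtual address,
    3-bit page field, 5-bit offset, 4 frames of 32 words."""
    offset = vaddr & ((1 << offset_bits) - 1)  # low 5 bits: position in page
    page = vaddr >> offset_bits                # high 3 bits: virtual page
    frame = page_table[page]                   # assumes the valid bit is set
    return (frame << offset_bits) | offset     # frame bits, then offset bits

page_table = [2, 0, 1, 3, 0, 1, 2, 3]  # virtual page -> frame (illustrative)
print(bin(translate(13, page_table)))  # 0b1001101
```

Virtual address 13 thus becomes physical address 77 (binary 1001101), matching the hand calculation above.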

Access Time

Virtual memory carries a time penalty: for each memory access the process generates, the processor performs two physical memory accesses, one for the page table and one for the actual data.

Disadvantages

- Extra resource consumption: memory overhead for storing page tables.
- Special hardware and OS support are required.

Advantages

- Programs are no longer restricted by the amount of physical memory available: virtual memory allows us to run programs whose virtual address space is larger than physical memory.
- Programs are easier to write, since programmers do not need to worry about physical address space limitations.
- Allows multitasking.

Segmentation

- The virtual address space is divided into logical, variable-length units called segments.
- To copy a segment into memory, the OS looks for a chunk of free memory large enough to hold it; physical memory is not itself divided into fixed units.
- Segment base address: where the segment is located in memory.
- Bounds limit: indicates the segment's size.
- Segment table: the set of base/bounds pairs.

Memory accesses are translated into a segment number and an offset within the segment. A check is performed to make sure the offset is within the segment; if it is, the base value for the segment (from the segment table) is added to the offset to yield the physical address.
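The base/bounds translation described above is a one-line addition plus a range check; the segment table values here are hypothetical:

```python
def segment_translate(seg, offset, segment_table):
    """Base/bounds translation: check the offset against the segment's
    bounds, then add the segment's base address."""
    base, bounds = segment_table[seg]
    if offset >= bounds:
        raise MemoryError("segment bounds violation")  # offset outside segment
    return base + offset

# Hypothetical segment table of (base, bounds) pairs.
segment_table = [(1000, 200), (4000, 500)]
print(segment_translate(1, 120, segment_table))  # 4120
```

The bounds check is what gives segmentation its natural protection: an out-of-range offset faults instead of silently touching another segment's memory.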

Segmentation cont'd

External fragmentation
- As segments are copied into and out of memory, the free chunks of memory are broken up; eventually there are many small chunks, none big enough for any segment.
- Enough total free memory may exist, but it exists as a large number of small, unusable holes.
Garbage collection
- Garbage collection combats external fragmentation: it shuffles occupied chunks of memory to collect the smaller, fragmented chunks into fewer, larger, usable ones, similar to defragmenting a disk drive.

Paging and Segmentation

Systems can use a combination of the two:
- The virtual address space is divided into segments of variable length, the segments are divided into fixed-size pages, and main memory is divided into frames of the same size.
- Each segment has its own page table.
- The virtual address is divided into three fields: segment, page number, and offset. The segment field points the system to the correct page table, the page number is used as an index into that page table, and the offset is the offset within the page.

Trade-offs:
- Paging is easier to manage: allocation, freeing, swapping, and relocating are easy when everything is the same size.
- Segmentation has less overhead, since segments are usually larger than pages.
- Segmentation eliminates internal fragmentation; paging eliminates external fragmentation.
- Segmentation has the ability to support sharing and protection, which are difficult with paging.

The combination is advantageous because it allows for segmentation from the user's point of view and paging from the system's point of view.