1
Computer Architecture and Organization (CS-507)
Muhammad Zeeshan Haider Ali, Lecturer, ISP Multan. Lecture 4 (b): Components of Memory Systems, Direct Memory Access (DMA), Magnetic Disk Drives, Memory Modules, Virtual Memory, Storage Technologies, RAM and Enhanced DRAM, Disk Storage, Concept of Cache Memory
2
Computer Memory What is Memory?
Memory refers to a storage location. Data (input and output) is stored in memory. Like humans, a computer system also has memory.
3
Memory hierarchy Computer memory is organized into a hierarchy.
At the highest level (closest to the processor) are the processor registers. Next comes one or more levels of cache; when multiple levels are used, they are denoted L1, L2, and so on. Next comes main memory, which is usually made of dynamic random-access memory (DRAM). All of these are considered internal to the computer system. The hierarchy continues with external memory, with the next level typically being a fixed hard disk, and one or more levels below that consisting of removable media such as optical disks and tape.
4
Continue… As one goes down the memory hierarchy, one finds decreasing cost per bit, increasing capacity, and slower access time. It would be nice to use only the fastest memory, but because that is the most expensive memory, we trade off access time for cost by using more of the slower memory.
5
Memory categorization
We can mainly categorize computer memory into internal memory and external memory. Internal memory: internal memory is often equated with main memory, but there are other forms of internal memory. The processor requires its own local memory in the form of registers, and cache is another form of internal memory. External memory: external memory consists of peripheral storage devices, such as disk and tape, that are accessible to the processor via I/O controllers.
6
Characteristics of Memory
An obvious characteristic of memory is its capacity. For internal memory, this is typically expressed in terms of bytes (1 byte = 8 bits) or words; common word lengths are 8, 16, and 32 bits. External memory capacity is typically expressed in terms of bytes, commonly MBs or GBs. Another concept is the unit of transfer. For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module. This may be equal to the word length, but is often larger, such as 64, 128, or 256 bits.
7
Continue… Three related concepts for internal memory: Word: the "natural" unit of organization of memory. The size of the word is typically equal to the number of bits used to represent an integer and to the instruction length. The Intel x86 architecture has a wide variety of instruction lengths, expressed as multiples of bytes, and a word size of 32 bits. Addressable units: in some systems, the addressable unit is the word; however, many systems allow addressing at the byte level (a small worked example follows below). Unit of transfer: for main memory, this is the number of bits read out of or written into memory at a time.
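As a hedged illustration (not taken from the slides): for an address length of A bits, the number of addressable units is 2 to the power A. The short Python sketch below evaluates this for a byte-addressable 32-bit address space; the function names and parameter values are assumptions made for the example.

# Illustrative sketch: relationship between address width and addressable units.
# Assumes a byte-addressable memory; names and values are hypothetical.

def addressable_units(address_bits: int) -> int:
    """Number of distinct addressable locations for a given address width."""
    return 2 ** address_bits

def words_in(address_bits: int, word_bytes: int) -> int:
    """How many whole words fit in a byte-addressable space of this width."""
    return addressable_units(address_bits) // word_bytes

if __name__ == "__main__":
    # A 32-bit byte-addressable space holds 2**32 bytes = 4 GiB,
    # i.e. 2**30 four-byte (32-bit) words.
    print(addressable_units(32))        # 4294967296 bytes
    print(words_in(32, word_bytes=4))   # 1073741824 words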
8
Method of accessing Sequential access:
Memory is organized into units of data, called records. Access must be made in a specific linear sequence. This type of access is used by storage devices such as magnetic tape: all intermediate records must be passed over to reach the desired record. Direct access: individual blocks or records have a unique address based on physical location. This type of access is used by devices such as hard drives and compact disks. There is no need to pass over every intermediate track; the desired track can be accessed directly.
9
Continue… Random access:
Each addressable location in memory has a unique, physically wired-in addressing mechanism. Any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access. Associative: this is a random-access type of memory that enables one to make a comparison of desired bit locations within a word for a specified match. A word is retrieved based on a portion of its contents rather than its address. Cache memories may employ associative access.
10
Performance parameters
Three parameters are used: Access time (latency): for random-access memory, this is the time it takes to perform a read or write operation. Memory cycle time: this concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence. Memory cycle time is concerned with the system bus, not the processor. Transfer rate: the rate at which data can be transferred into or out of a memory unit. For random-access memory, it is equal to 1/(cycle time), as the example below illustrates.
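A hedged worked example of the transfer-rate relation (the 10 ns figure is an assumed value, not from the slides): with a cycle time of 10 ns, the transfer rate is 1 / (10 × 10⁻⁹ s) = 100 million transfers per second. The Python sketch below simply evaluates this formula; the function name is illustrative.

# Illustrative sketch: transfer rate of random-access memory as 1 / cycle time.

def transfer_rate(cycle_time_seconds: float) -> float:
    """Transfers per second for a random-access memory with the given cycle time."""
    return 1.0 / cycle_time_seconds

if __name__ == "__main__":
    cycle_time = 10e-9                   # 10 ns cycle time (assumed value)
    print(transfer_rate(cycle_time))     # 100,000,000 transfers per second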
11
Memory Hierarchy
12
Cache Memory Cache memory is intended to provide memory access at a speed approaching that of the fastest memory available.
The cache contains a copy of portions of main memory. When the processor attempts to read a word of memory, it first checks the cache. If the desired word exists in the cache, it is returned to the processor; otherwise, control goes to main memory (usually RAM) to retrieve the desired word.
13
Continue… Cache hit: if the accessed word is found in the cache, that is defined as a cache hit. Cache miss: if the accessed word is not found in the cache, it is referred to as a cache miss. A minimal lookup sketch follows below.
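To make the hit/miss flow concrete, here is a minimal sketch in Python (an illustration, not code from the course): the cache and main memory are modeled as plain dictionaries, and a bounded cache copies in words on a miss. All names and sizes are assumptions.

# Illustrative sketch of the cache read flow described above (hit vs. miss).
main_memory = {addr: f"word-{addr}" for addr in range(1024)}   # stand-in for RAM
cache = {}                                                      # starts empty
CACHE_CAPACITY = 8                                              # assumed small cache

def read_word(addr):
    if addr in cache:                        # cache hit: return directly from cache
        return cache[addr], "hit"
    word = main_memory[addr]                 # cache miss: fetch from main memory
    if len(cache) >= CACHE_CAPACITY:         # simple eviction keeps the cache bounded
        cache.pop(next(iter(cache)))
    cache[addr] = word                       # copy the word into the cache for next time
    return word, "miss"

print(read_word(42))   # ('word-42', 'miss')  -- first access goes to main memory
print(read_word(42))   # ('word-42', 'hit')   -- second access is served by the cache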
14
Continue… Cache is divided into levels to make access more efficient and faster. L1 cache: L1 cache is also referred to as the main/primary cache. The processor checks it first; it is built into the processor. L2 cache: L2 cache is referred to as secondary cache. If the data is not available in the L1 cache, the processor goes to the L2 cache, which is slower and larger than the L1 cache. L3 cache: it is the slowest and largest cache, and it is the last option for the processor. It exists on the board and is also referred to as on-board cache.
15
Virtual Memory
16
[Figure: the memory hierarchy, from CPU registers and cache through main memory to secondary storage and a server (or the Internet); moving down the hierarchy gives larger capacity, lower speed, and lower cost.]
17
What is… Virtual memory is an alternate set of memory addresses.
Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.
18
History Virtual memory was developed at the University of Manchester for the Atlas computer, which was completed in 1962. In 1961, Burroughs released the B5000, the first commercial computer with virtual memory.
19
Why is it needed… Before the development of the virtual memory technique, programmers in the 1940s and 1950s had to directly manage two-level storage: main memory (RAM) and secondary memory in the form of hard disks or, earlier, magnetic drums. Virtual memory enlarges the address space, the set of addresses a program can utilize; a virtual memory might contain twice as many addresses as main memory.
20
Object… When a computer is executing many programs at the same time, virtual memory lets the computer share memory efficiently. It eliminates the restriction that a program must work within a memory that is small and limited. When many programs are running at the same time, virtual memory gives each program its own suitable memory area, protecting the programs from interfering with each other's memory areas.
21
How does it work… To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
22
MMU (Memory Management Unit)
The MMU is the hardware base that makes a virtual memory system possible. It allows software to reference physical memory by virtual addresses, quite often more than one address space. It accomplishes this through the use of pages and page tables, using a section of memory to translate virtual addresses into physical addresses via a series of table lookups. The hardware detects when a referenced page is not present (a page fault); the software that handles the page fault is generally part of the operating system. A simplified translation sketch follows below.
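As a hedged illustration of the table-lookup translation just described (a simplified single-level page table, not the actual MMU design), the Python sketch below splits a virtual address into a page number and an offset, looks the page number up in a page table, and raises a page fault when the page is not mapped. The page size, the table contents, and all names are assumptions.

# Illustrative single-level page-table translation; sizes and names are assumed.
PAGE_SIZE = 4096                    # assume 4 KiB pages
OFFSET_BITS = 12                    # log2(PAGE_SIZE)

page_table = {0: 7, 1: 3, 5: 12}    # virtual page number -> physical frame number

class PageFault(Exception):
    """Raised when the referenced virtual page is not mapped in physical memory."""

def translate(virtual_address: int) -> int:
    page_number = virtual_address >> OFFSET_BITS     # high bits select the page
    offset = virtual_address & (PAGE_SIZE - 1)       # low bits select the byte in the page
    if page_number not in page_table:
        raise PageFault(f"page {page_number} not in physical memory")
    frame = page_table[page_number]
    return (frame << OFFSET_BITS) | offset           # physical frame plus the same offset

print(hex(translate(0x1ABC)))   # page 1 -> frame 3, offset 0xABC => 0x3abc
# translate(0x9000) would raise PageFault, since page 9 is not mapped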
23
Segmentation… Segmentation involves the relocation of variable-sized segments into the physical address space. Generally these segments are contiguous units, and they are referred to in programs by their segment number and an offset to the requested data (a translation sketch follows below). Efficient segmentation relies on programs that are written very thoughtfully for their target system. Since segmentation relies on memory that is allocated in single large blocks, it is quite possible that enough free space is available in total to load a new module, yet it cannot be utilized. Segmentation may also suffer from internal fragmentation if segments are not variable-sized, where memory above the segment is not used by the program but is still "reserved" for it.
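For comparison with paging, here is a hedged sketch (not from the slides) of segment-number/offset translation through a simple segment table; the base addresses, limits, and names are assumptions made for the example.

# Illustrative segment-table translation: each segment has a base address and a limit.
segment_table = {                    # segment number -> (base, limit); values assumed
    0: (0x0000, 0x1000),             # e.g. a code segment
    1: (0x4000, 0x0800),             # e.g. a data segment
}

def translate_segmented(segment: int, offset: int) -> int:
    base, limit = segment_table[segment]
    if offset >= limit:              # offsets past the end of the segment are illegal
        raise MemoryError(f"offset {offset:#x} exceeds limit of segment {segment}")
    return base + offset             # physical address = segment base + offset

print(hex(translate_segmented(1, 0x10)))   # 0x4010
# translate_segmented(1, 0x900) would raise MemoryError (beyond the 0x800 limit)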
24
Paging… Paging provides a somewhat easier interface for programs, in that its operation tends to be more automatic and thus transparent. Each unit of transfer, referred to as a page, is of a fixed size and is swapped by the virtual memory manager outside of the program's control. Instead of using a segment/offset addressing approach, as seen in segmentation, paging uses a linear sequence of virtual addresses that are mapped to physical memory as necessary. Because of this addressing approach, a single program may be mapped to many non-contiguous regions of physical memory. Although some internal fragmentation may still exist due to the fixed size of the pages, the approach virtually eliminates external fragmentation.
25
Paging… (cont'd) Paging is a technique used by virtual memory operating systems to help ensure that the data you need is available as quickly as possible. The operating system copies a certain number of pages from your storage device to main memory. When a program needs a page that is not in main memory, the operating system copies the required page into memory and copies another page back to the disk.
26
Virtual Memory (Paging)
[Figure: two address spaces, each mapped through its own page table onto the shared physical memory.]
27
Page fault A page fault is an interrupt to the software, raised by the hardware, when a program accesses a page that is not mapped in physical memory. It occurs when a program accesses a memory location whose corresponding page is not loaded, or when a program accesses a memory location for which it does not have the privilege to access the corresponding page.
28
Page replacement algorithms
OPT (MIN): evict the page that is not expected to be used again for the longest time. FIFO (first in, first out): rather than choosing the victim page at random, the oldest page is the first to be removed. LRU (least recently used): evict the page that has gone unused for the longest time. LFU (least frequently used): evict the page that has been used least often in the past. A small LRU sketch follows below.
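As a hedged illustration of one of these policies (LRU, chosen here only as an example), the Python sketch below tracks recency with collections.OrderedDict: each access moves the page to the most-recently-used end, and the least recently used page is evicted when a new page must be brought in past the frame limit. The frame count and reference string are assumed values.

# Illustrative LRU page replacement simulation; inputs are assumed, not from the slides.
from collections import OrderedDict

def lru_simulate(reference_string, frames):
    memory = OrderedDict()           # page -> None, ordered least to most recently used
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)             # hit: mark as most recently used
        else:
            faults += 1                          # miss: page fault
            if len(memory) >= frames:
                memory.popitem(last=False)       # evict the least recently used page
            memory[page] = None
    return faults

print(lru_simulate([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3))   # 10 page faults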
29
Summary… Virtual memory is a common part of most operating systems on computers. It has become so common because it provides a big benefit for users at a very low cost. Benefits of executing a program that is only partially in memory: the program is no longer constrained by the amount of physical memory ⇒ the user is able to write programs for an extremely large virtual address space; more programs can be run at the same time ⇒ increased CPU utilization and throughput; less I/O is needed to load or swap each user program ⇒ programs run faster.