Computer Organization & Architecture 3416


1 Computer Organization & Architecture 3416
Prepared by Muhammad Asif

2 Memory Organization There are two types of memory:
Internal memory
External memory

3 Memory Organization Computer memory is organized into a hierarchy.
The highest level (closest to the processor) consists of the processor registers. The next level is cache; where multiple levels of cache are used, they are denoted L1, L2, …, Ln. Next is main memory, usually made of dynamic random-access memory (DRAM). All of these are considered internal to the computer system. The hierarchy continues with external memory….

4 Memory Organization The hierarchy continues with external memory….
The next level is typically a fixed hard disk, with one or more levels below that consisting of removable media such as optical disks and tapes. As one goes down the memory hierarchy, one finds decreasing cost per bit, increasing capacity, and slower access time. It would be nice to use only the fastest memory, but that is also the most expensive memory, so we trade off access time for cost by using more of the slower memory. The design challenge is to organize the data and programs in memory so that the accessed memory words are usually in the faster memory.

5 Memory Organization In general, it is likely that most future accesses to main memory by the processor will be to locations recently accessed. So the cache automatically retains a copy of some of the recently used words from the DRAM. If the cache is designed properly, then most of the time the processor will request memory words that are already in the cache.

6 Computer Memory System Overview
The complex subject of computer memory is made more manageable by classifying memory systems according to their key characteristics.

7 Computer Memory System Overview
Location: Refers to whether memory is internal or external to the computer. Internal memory is often equated with main memory, but there are other forms of internal memory. The processor requires its own local memory, in the form of registers (e.g., see the figure on the next slide; Figure 2.3, Chapter 2). Further, as we shall see, the control unit portion of the processor may also require its own internal memory.

8

9 Computer Memory System Overview
Capacity: For internal memory, this is typically expressed in terms of bytes (1 byte = 8 bits) or words. Common word lengths are 8, 16, and 32 bits. External memory capacity is typically expressed in terms of bytes. Unit of Transfer: For internal memory, the unit of transfer is equal to the number of electrical lines into and out of the memory module. This may be equal to the word length, but is often larger, such as 64, 128, or 256 bits. To clarify this point, consider three related concepts for internal memory: Word: the natural unit of organization of memory. The size of the word is typically equal to the number of bits used to represent an integer and to the instruction length. Addressable unit: In some systems, the addressable unit is the word; many systems allow addressing at the byte level. Unit of transfer: the number of bits read out of or written into memory at a time.
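The relation between address width, addressable unit, and capacity described above can be sketched in a few lines; the bit widths and unit sizes below are illustrative assumptions, not values from the slides:

```python
# Sketch: total capacity = 2^A addressable units, each of a fixed size.
# Assumes byte addressing by default (a common case, not mandated above).

def capacity_bytes(address_bits: int, addressable_unit_bytes: int = 1) -> int:
    """Capacity in bytes for a memory with 2**address_bits addressable units."""
    return (2 ** address_bits) * addressable_unit_bytes

# A 16-bit address with byte addressing gives 64 KiB:
print(capacity_bytes(16))        # 65536
# The same 16 address bits addressing 4-byte words give 256 KiB:
print(capacity_bytes(16, 4))     # 262144
```

Note how the same number of address bits reaches more storage when the addressable unit is larger: the addressable unit, not the word length alone, fixes the capacity.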

10 Computer Memory System Overview
Method of Accessing: Another distinction among memory types is the method of accessing units of data. These include the following: Sequential access: Memory is organized into units of data, called records. Access must be made in a specific linear sequence. Direct access: As with sequential access, direct access involves a shared read–write mechanism. However, individual blocks or records have a unique address based on physical location. Access is accomplished by direct access to reach a general vicinity plus sequential searching, counting, or waiting to reach the final location.

11 Computer Memory System Overview
Method of Accessing: Another distinction among memory types is the method of accessing units of data. These include the following: Random access: Each addressable location in memory has a unique, physically wired-in addressing mechanism. The time to access a given location is independent of the sequence of prior accesses and is constant. Thus, any location can be selected at random and directly addressed and accessed. Main memory and some cache systems are random access. Associative: This is a random access type of memory that enables one to make a comparison of desired bit locations within a word for a specified match, and to do this for all words simultaneously. Thus, a word is retrieved based on a portion of its contents rather than its address.
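Associative lookup as described above can be modeled as matching a masked key against every word. Real associative memory performs all comparisons simultaneously in hardware; this sketch scans sequentially, and the field layout and values are illustrative assumptions:

```python
# Sketch of associative (content-addressable) search: retrieve words whose
# selected bit field matches the key, regardless of where they are stored.

MASK = 0xFF00   # compare only the high byte of each 16-bit word (illustrative)

def cam_search(words, key):
    """Return every word whose masked bits match the masked key."""
    return [w for w in words if (w & MASK) == (key & MASK)]

words = [0x12AB, 0x1234, 0x56CD]
print([hex(w) for w in cam_search(words, 0x1200)])  # ['0x12ab', '0x1234']
```

The retrieval key is a portion of the content itself, not an address, which is exactly the distinction the slide draws between associative and random access.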

12 Computer Memory System Overview
Performance: Three performance parameters are used: Access time: For random-access memory, this is the time it takes to perform a read or write operation, that is, the time from the instant that an address is presented to the memory to the instant that data have been stored or made available for use. Memory cycle time: This concept is primarily applied to random-access memory and consists of the access time plus any additional time required before a second access can commence. Transfer rate: This is the rate at which data can be transferred into or out of a memory unit. For random-access memory, it is equal to 1/(cycle time).
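The relationship among these three parameters can be checked numerically; the timing figures below are illustrative assumptions, not values from the slides:

```python
# Sketch of the performance relations above: cycle time = access time plus
# recovery time before the next access, and for random-access memory the
# transfer rate is 1 / (cycle time).

def cycle_time_ns(access_ns: float, recovery_ns: float) -> float:
    """Memory cycle time: access time plus extra time before a new access."""
    return access_ns + recovery_ns

def transfer_rate_words_per_s(cycle_ns: float) -> float:
    """Transfer rate of a random-access memory: 1 / (cycle time)."""
    return 1.0 / (cycle_ns * 1e-9)

cycle = cycle_time_ns(access_ns=60, recovery_ns=40)   # 100 ns cycle
print(transfer_rate_words_per_s(cycle))               # 1e7 words per second
```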

13 Computer Memory System Overview
Physical Types: A variety of physical types of memory have been employed. The most common today are semiconductor memory, magnetic surface memory, used for disk and tape, and optical and magneto-optical. Physical Characteristics: Several physical characteristics of data storage are important. In a volatile memory, information decays naturally or is lost when electrical power is switched off. In a nonvolatile memory, information once recorded remains without deterioration until deliberately changed; no electrical power is needed to retain information. Magnetic-surface memories are nonvolatile. Semiconductor memory may be either volatile or nonvolatile.

14 Memory Hierarchy As one goes down the hierarchy, the following occur:
a. Decreasing cost per bit
b. Increasing capacity
c. Increasing access time
d. Decreasing frequency of access of the memory by the processor
Thus, smaller, more expensive, faster memories are supplemented by larger, cheaper, slower memories.
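A rough model of why this supplementing works: if the fraction of accesses satisfied by the faster level (the hit ratio) is high, the average access time stays close to the fast memory's. The formula and figures below are an illustrative sketch, not values from the slides:

```python
# Two-level hierarchy sketch: with hit ratio H, fast-level time T1 and
# slow-level time T2, an access either hits the fast level (time T1) or
# misses and pays T1 + T2.

def avg_access_ns(hit_ratio: float, t1_ns: float, t2_ns: float) -> float:
    """Average access time: H*T1 + (1 - H)*(T1 + T2)."""
    return hit_ratio * t1_ns + (1 - hit_ratio) * (t1_ns + t2_ns)

# A 95% hit ratio keeps the average near the 1 ns fast memory,
# even though the slow level takes 100 ns:
print(avg_access_ns(0.95, 1, 100))   # ~6.0 ns
```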

15 Memory Hierarchy

16 Cache memory Principles
Cache memory is intended to give memory speed approaching that of the fastest memories available, while at the same time providing a large memory size at the price of less expensive types of semiconductor memory. The concept is illustrated in Figure (a).

17 Cache memory Principles
There is a relatively large and slow main memory together with a smaller, faster cache memory. The cache contains a copy of portions of main memory. When the processor attempts to read a word of memory, a check is made to determine if the word is in the cache. If not, a block of main memory, consisting of some fixed number of words, is read into the cache and then the word is delivered to the processor. Figure (b), next page, depicts the use of multiple levels of cache. The L2 cache is slower and typically larger than the L1 cache, and the L3 cache is slower and typically larger than the L2 cache.

18 Cache memory Principles

19 Cache memory Principles
Figure (c, next page) depicts the structure of a cache/main-memory system. Main memory consists of up to 2^n addressable words, with each word having a unique n-bit address. For mapping purposes, this memory is considered to consist of a number of fixed-length blocks of K words each. That is, there are M = 2^n/K blocks in main memory. The cache consists of m blocks, called lines. Each line contains K words, plus a tag of a few bits. Each line also includes control bits (not shown), such as a bit to indicate whether the line has been modified since being loaded into the cache.
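The block and line counts above can be checked numerically. The parameter values below, and the direct-mapped tag split at the end, are illustrative assumptions rather than figures from the slides:

```python
import math

# Illustrative parameters:
n = 16    # address bits, so 2**n addressable words
K = 4     # words per block (and per cache line)
m = 128   # number of cache lines

M = 2**n // K          # number of main-memory blocks: M = 2^n / K
print(M)               # 16384
print(m < M)           # True: far fewer cache lines than memory blocks

# Under direct mapping (one common scheme, not detailed on this slide),
# the tag holds the address bits not implied by the line index and the
# word offset within the block:
tag_bits = n - int(math.log2(m)) - int(math.log2(K))
print(tag_bits)        # 7
```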

20 Cache memory Principles
The length of a line, not including tag and control bits, is the line size. The line size may be as small as 32 bits, with each “word” being a single byte; in this case the line size is 4 bytes. The number of lines is considerably less than the number of main memory blocks (m << M). At any time, some subset of the blocks of memory resides in lines in the cache. If a word in a block of memory is read, that block is transferred to one of the lines of the cache.

21 Cache memory Principles

22 Cache memory Principles
Figure 4.5 illustrates the read operation. The processor generates the read address (RA) of a word to be read. If the word is contained in the cache, it is delivered to the processor. Otherwise, the block containing that word is loaded into the cache, and the word is delivered to the processor.
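This read flow can be mocked up in a few lines. The block size, the dictionary-as-cache, and the absence of any replacement policy are simplifying assumptions for illustration, not part of Figure 4.5:

```python
# Minimal sketch of the cache read flow: check the cache for the block
# containing RA; on a miss, load the whole block from main memory, then
# deliver the requested word.

BLOCK_WORDS = 4   # illustrative block size

def read(ra, cache, main_memory):
    block_no = ra // BLOCK_WORDS
    if block_no not in cache:                        # miss: fetch the block
        start = block_no * BLOCK_WORDS
        cache[block_no] = main_memory[start:start + BLOCK_WORDS]
    return cache[block_no][ra % BLOCK_WORDS]         # deliver word to CPU

memory = list(range(100, 164))    # 64 words of fake data
cache = {}
print(read(5, cache, memory))     # miss, loads block 1, returns 105
print(read(6, cache, memory))     # hit in the just-loaded block: 106
```

The second read hits because the first miss brought in the whole block, which is exactly the locality-of-reference payoff described on slide 5.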

23

24 Cache memory Principles
Figure 4.5 shows these last two operations occurring in parallel and reflects the organization shown in Figure 4.6, which is typical of contemporary cache organizations. In this organization, the cache connects to the processor via data, control, and address lines. The data and address lines also attach to data and address buffers, which attach to a system bus from which main memory is reached.

25 Typical Cache Organization

26 Elements of cache Design

27 DRAM Organization Dynamic random access memory (DRAM) is a type of memory that is typically used for data or program code that a computer processor needs to function. DRAM is a common type of random access memory (RAM) used in personal computers (PCs), workstations and servers. Random access allows the PC processor to access any part of the memory directly rather than having to proceed sequentially from a starting place. RAM is located close to a computer’s processor and enables faster access to data than storage media such as hard disk drives and solid-state drives.

28 DRAM Organization There have been no significant changes in the basic DRAM architecture since the early 1970s. The traditional DRAM chip is constrained both by its internal architecture and by its interface to the processor’s memory bus.

29 DRAM Organization One attack on the performance problem of DRAM main memory has been to insert one or more levels of high-speed SRAM cache between the DRAM main memory and the processor. But SRAM is much costlier than DRAM. In recent years, a number of enhancements to the basic DRAM architecture have been explored, and some of these are now on the market. The schemes that currently dominate the market are SDRAM, DDR-SDRAM, and RDRAM.

30 DRAM Organization SDRAM: Synchronous DRAM (SDRAM) is a generic name for various kinds of dynamic random access memory (DRAM) that are synchronized with the clock speed for which the microprocessor is optimized. DDR-SDRAM: Double data rate synchronous dynamic random-access memory (DDR SDRAM) is a class of memory integrated circuits used in computers. DDR SDRAM, also called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM, and DDR4 SDRAM. RDRAM: Rambus Dynamic Random Access Memory (RDRAM) is a memory subsystem that promises to transfer up to 1.6 billion bytes per second. The subsystem consists of the random access memory, the RAM controller, and the bus (path) connecting RAM to the microprocessor and devices in the computer that use it.
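As a quick sanity check on the "double data rate" idea: DDR transfers data on both edges of the clock, so its peak rate is roughly 2 × bus clock × bus width. The function and figures below are an illustrative sketch, not specifications from the slide:

```python
# Sketch: peak DDR transfer rate from bus clock and bus width.
# DDR moves data on both clock edges, hence the factor of 2.

def ddr_peak_mb_per_s(bus_clock_mhz: float, bus_width_bits: int) -> float:
    transfers_per_s = bus_clock_mhz * 1e6 * 2          # double data rate
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_s * bytes_per_transfer / 1e6  # in MB/s

# A 100 MHz, 64-bit DDR bus peaks at 1600 MB/s:
print(ddr_peak_mb_per_s(100, 64))   # 1600.0
```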

31 External Memory Magnetic disks remain the most important component of external memory. Both removable and fixed, or hard, disks are used in systems ranging from personal computers to mainframes and supercomputers. To achieve greater performance and higher availability, servers and larger systems use RAID disk technology. RAID is a family of techniques for using multiple disks as a parallel array of data storage devices, with redundancy built in to compensate for disk failure. Optical storage technology has become increasingly important in all types of computer systems. While CD-ROM has been widely used for many years, more recent technologies, such as writable CD and DVD, are becoming increasingly important.

32 MAGNETIC DISK A disk is a circular platter constructed of nonmagnetic material, called the substrate, coated with a magnetizable material. Traditionally, the substrate has been an aluminum or aluminum alloy material. More recently, glass substrates have been introduced. The glass substrate has a number of benefits, including the following: Improvement in the uniformity of the magnetic film surface to increase disk reliability A significant reduction in overall surface defects to help reduce read-write errors Ability to support lower fly heights (described subsequently) Better stiffness to reduce disk dynamics Greater ability to withstand shock and damage

33 Magnetic Read and Write Mechanisms
Data are recorded on and later retrieved from the disk via a conducting coil named the head; in many systems, there are two heads, a read head and a write head. During a read or write operation, the head is stationary while the platter rotates beneath it. The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

34 Magnetic Read and Write Mechanisms
The traditional read mechanism exploits the fact that a magnetic field moving relative to a coil produces an electrical current in the coil. When the surface of the disk passes under the head, it generates a current of the same polarity as the one already recorded. The structure of the head for reading is in this case essentially the same as for writing and therefore the same head can be used for both. Such single heads are used in floppy disk systems and in older rigid disk systems.

35 Physical Characteristics

36

37

38 RAID RAID (originally redundant array of inexpensive disks, now commonly redundant array of independent disks) is a data storage virtualization technology that combines multiple physical disk drive components into a single logical unit for the purposes of data redundancy, performance improvement, or both. Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance.

39 RAID

40 RAID The different schemas, or data distribution layouts, are named by the word RAID followed by a number, for example RAID 0 or RAID 1. Each schema, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives.
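A minimal sketch of how the redundancy in parity-based RAID levels (e.g., RAID 4/5) survives a single drive failure: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt by XOR-ing the survivors with the parity. The block contents below are illustrative:

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR same-length byte blocks position by position."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]   # blocks on three "drives"
parity = xor_blocks(data)                        # stored on a parity drive

# "Drive" 1 fails; rebuild its block from the remaining data plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
print(rebuilt == data[1])   # True
```

This is the sense in which the redundancy "compensates for disk failure": no single block is irreplaceable, at the cost of one extra drive's worth of capacity.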

