
1 Topics in Memory System Design

2 Reading List
Slides: Topic7x
Hennessy & Patterson: Chapter 7
Other papers as assigned in class or homework

3 [Figure: overall memory system organization — main processor, memory management unit, high-speed cache, main memory, backing store]

4 Program Characteristics and Memory Organization
- RAM vs. sequential access: a trade-off between performance/cost and technology
- Locality in memory access patterns
  - hierarchy in memory design
  - cache
  - virtual memory

5 Random Access Memory
[Figure: the structure of a random-access memory (RAM) — address register, data register, memory cells at addresses 0 through N-1, memory bus to/from the processor]
Key: fixed access time
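A minimal sketch of the fixed-access-time property (the RAM class and its read/write methods are illustrative assumptions, not from the slides): every address can be reached at the same cost, unlike a sequential-access device where the cost grows with the distance to the data.

```python
# Minimal sketch (illustrative only): RAM modeled as an array of cells.
# Any address can be read or written at the same, fixed cost.

class RAM:
    def __init__(self, num_cells):
        self.cells = [0] * num_cells      # memory cells, addresses 0 .. N-1

    def read(self, address):
        return self.cells[address]        # one indexed lookup: fixed cost for any address

    def write(self, address, value):
        self.cells[address] = value       # same fixed cost for any address

ram = RAM(16)
ram.write(3, 42)
print(ram.read(3))   # -> 42
```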

6 The Basic Structure of a Memory Hierarchy
See P&H Fig. 7.1 (3rd Ed.) or 5.1 (4th Ed.)

7 Memory Hierarchy and Data Transfer as Blocks
Every pair of levels in the memory hierarchy can be thought of as having an upper and a lower level. Within each level, the unit of information that is either present or not present is called a block or a line. Usually we transfer an entire block when we copy something between levels.
See P&H Fig. 7.2 (3rd Ed.) or 5.2 (4th Ed.)
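A tiny sketch of how a byte address maps onto a block (the 64-byte block size and the helper names are illustrative assumptions, not from the slides); on a miss, the whole block containing the address is copied from the lower level:

```python
# Illustrative sketch: splitting a byte address into block number and offset.
BLOCK_SIZE = 64                       # bytes per block/line (assumed value)

def block_number(address):
    return address // BLOCK_SIZE      # which block the address falls in

def block_offset(address):
    return address % BLOCK_SIZE       # position of the byte within that block

addr = 1000
print(block_number(addr), block_offset(addr))   # -> 15 40
```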

8 Structure of a Memory Hierarchy
See P&H Fig. 7.3 (3rd Ed.) or 5.3 (4th Ed.)

9 Memory Performance
Bandwidth = number of bits per second that can be accessed
          = (bits/word) x (words/cycle) x (cycles/sec)
So, how do we improve bandwidth? (The "von Neumann bottleneck")
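A small worked example of the formula above (the word size, access rate, and clock rate are assumed values for illustration, not figures from the slides):

```python
# Worked example of Bandwidth = (bits/word) x (words/cycle) x (cycles/sec).
# All numbers below are assumptions for illustration only.

bits_per_word   = 64              # assumed word size
words_per_cycle = 1               # assumed one access per cycle
cycles_per_sec  = 1_000_000_000   # assumed 1 GHz memory clock

bandwidth_bits_per_sec = bits_per_word * words_per_cycle * cycles_per_sec
print(bandwidth_bits_per_sec / 8 / 1e9, "GB/s")   # -> 8.0 GB/s
```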

10 How to Improve Memory System Performance?
- Reduce cycle time
- Increase word size
- Concurrency
- Efficient memory design

11 Cache
- Almost all high-performance microprocessors on the market use a cache.
- Why not Cray vector architectures?

12 The improvements in IC technology affected not only DRAMs, but also SRAMs, making the cost of caches much lower. Caches are one of the most important ideas in computer architecture because they can substantially improve performance by the use of memory. The growing gap between DRAM cycle times and processor cycle times, as the next figure shows, is a key motivation for caches. If we are to run processors at the speeds they are capable of, we must have higher speed memories to provide data. [Jouppi & Hennessy 91]

13 [Figure: the growing gap between processor and DRAM cycle times]

14 Latency in a Single System
[Figure: memory access time, CPU time, and their ratio over time — "the wall"]

15 Locality of Reference
"Program references tend to be clustered in time."

16 Regions with High Access Probabilities
- PC vicinity
- Stack frame (local)
- "Nearest" subroutines
- Active data

17 [Figure: probability of reference vs. address of reference]

18 [Figure: probability of reference vs. address of reference]
Problem: this often predicts a larger page size than is actually needed

19 The success of cache memories has been explained by reference to the "property of locality" [Denn72]. The property of locality has two aspects, temporal and spatial. Over short periods of time, a program distributes its memory references nonuniformly over its address space, and the portions of the address space which are favored remain largely the same for long periods of time.

20 This first property, called temporal locality, or locality by time, means that the information which will be in use in the near future is likely to be in use already. This type of behavior can be expected from program loops in which both data and instructions are reused.

21 The second property, locality by space, means that portions of the address space which are in use generally consist of a fairly small number of individually contiguous segments of that address space. Locality by space, then, means that the loci of reference of the program in the near future are likely to be near the current loci of reference.

22 This type of behavior can be expected from common knowledge of programs; related data items (variables, arrays) are usually stored together, and instructions are mostly executed sequentially. Since the cache memory buffers segments of information that have been recently used, the property of locality implies that needed information is also likely to be found in the cache. [Smith82, p. 475]
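To make the connection concrete, here is a minimal sketch of a direct-mapped cache driven by a loop-like access trace (the cache geometry, the trace, and all names are assumptions for illustration, not material from the slides). Sequential accesses within a block exploit spatial locality, and the second pass over the same array exploits temporal locality, so most references hit in the cache:

```python
# Minimal direct-mapped cache sketch (all sizes and the trace are assumed).
BLOCK_SIZE = 16        # bytes per block
NUM_LINES  = 64        # number of cache lines

tags = [None] * NUM_LINES
hits = misses = 0

def access(address):
    global hits, misses
    block = address // BLOCK_SIZE
    line  = block % NUM_LINES          # direct mapping: each block goes to one line
    if tags[line] == block:
        hits += 1                      # block already resident: a hit
    else:
        misses += 1                    # miss: fetch the whole block
        tags[line] = block

# Assumed trace: read a 256-byte array twice, 4 bytes at a time.
for _ in range(2):
    for addr in range(0, 256, 4):
        access(addr)

print(hits, misses)    # -> 112 16: locality turns most references into hits
```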

23 Information in use in the near future is likely to consist of information in current use (locality by time — temporal) and information logically adjacent to that currently in use (locality by space — spatial).

