
1 Hierarchical Memory Systems Prof. Sin-Min Lee Department of Computer Science

8 Implementing JK Flip-Flop using only a T Flip-Flop Note how the areas marked off with a blue box behave like a T flip-flop, while the area within the purple box behaves like a D flip-flop. From this last chart, we can derive the following chart:

9 Implementing JK Flip-Flop using only a T Flip-Flop To derive the next chart, we work in reverse, asking, "What input into the T (toggle) function will produce the output shown in the previous chart?" In this case, the first column of Q is 0 and our circled value is a 0; an input of 0 gives this result. The input that gives us a 1 when Q is 1 is also 0. Refer back to the T flip-flop chart to see that on 0 there is no change, while a 1 toggles the output.

10 Implementing JK Flip-Flop using only a T Flip-Flop This is the final Karnaugh map and the associated equation for T.
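The equation this Karnaugh map yields is not reproduced in the transcript; the standard result of the derivation is T = JQ' + KQ. A minimal Python sketch checking that value, assuming the usual characteristic equations Q+ = JQ' + K'Q (JK) and Q+ = T xor Q (T):

```python
# Exhaustive check that T = JQ' + KQ makes a T flip-flop mimic a JK flip-flop.
# Assumes the standard characteristic equations: JK: Q+ = JQ' + K'Q,  T: Q+ = T xor Q.
for J in (0, 1):
    for K in (0, 1):
        for Q in (0, 1):
            jk_next = (J & (1 - Q)) | ((1 - K) & Q)   # what a real JK flip-flop would do
            T = (J & (1 - Q)) | (K & Q)               # derived toggle input
            assert (T ^ Q) == jk_next                 # T flip-flop next state matches
print("T = JQ' + KQ reproduces JK behaviour for every J, K, Q")
```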

11 Implementing T Flip-Flop using only a JK Flip-Flop This time, we are doing the reverse again, asking what values of J and K will give us the corresponding values in the T chart above. 00 or 01 will give 0, so we enter “0X”. X is our “don't care” value; it can be 0 or 1.

12 Implementing T Flip-Flop using only a JK Flip-Flop Once we derive all the values, we have to split this into two, in order to get one equation that defines J and another that defines K.

13 Implementing T Flip-Flop using only a JK Flip-Flop Here is the final implementation.
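The final equations are not reproduced in the transcript; splitting the chart in the standard way, with the don't-cares chosen for the simplest form, gives J = T and K = T. A small check of that result, assuming the usual JK characteristic equation:

```python
# Check that tying J = K = T makes a JK flip-flop behave as a T flip-flop (Q+ = T xor Q).
for T in (0, 1):
    for Q in (0, 1):
        J = K = T                                   # candidate implementation
        jk_next = (J & (1 - Q)) | ((1 - K) & Q)     # JK characteristic equation
        assert jk_next == (T ^ Q)
print("J = K = T implements a T flip-flop")
```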

14 Implementing this FSM using a T Flip-Flop Using the values from the first chart, we can get this second chart. Then, we apply the same reverse method to determine what input values we would need to arrive at the ones listed in this second chart.

15 Implementing this FSM using a T Flip-Flop T = XQ' + X'Q
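Since a T flip-flop obeys Q+ = T xor Q, the derived excitation T = XQ' + X'Q (which is simply X xor Q) forces the next state to Q+ = X. A quick exhaustive check (the FSM diagram itself appears only in the slide image):

```python
# The excitation from the slide, T = XQ' + X'Q, is simply X xor Q.  Because a
# T flip-flop obeys Q+ = T xor Q, the next state becomes Q+ = (X xor Q) xor Q = X.
for X in (0, 1):
    for Q in (0, 1):
        T = (X & (1 - Q)) | ((1 - X) & Q)   # T = XQ' + X'Q
        assert T == (X ^ Q)
        assert (T ^ Q) == X                 # next state equals the input X
print("T = XQ' + X'Q makes the T flip-flop follow X")
```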

16 Implementing this FSM using a D Flip-Flop This time we use the same FSM and same initial chart, but now derive an equation for D.

17 Implementing this FSM using a D Flip-Flop Since a D flip-flop simply delays its input (Q+ = D), the corresponding chart is the same as the next-state chart.

18 Implementing this FSM using a D Flip-Flop Finally, here is our graph.
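A small sketch of why the chart carries over unchanged: a D flip-flop satisfies Q+ = D, so the excitation column is a copy of the next-state column. The next-state chart used below (Q+ = X) is an assumption carried over from the T flip-flop derivation above, since the actual chart is only in the slide image:

```python
# A D flip-flop satisfies Q+ = D, so the excitation column is a copy of the
# next-state column: D = Q+ for every row.
# Hypothetical next-state chart keyed by (X, Q); Q+ = X as derived for the T version above.
next_state = {(x, q): x for x in (0, 1) for q in (0, 1)}
d_excitation = dict(next_state)        # the D chart is identical to the Q+ chart
assert d_excitation == next_state
print("D = Q+ for every (X, Q) entry")
```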

19 Implementing Flip-Flop

20 How can we create a flip-flop using another flip-flop? • Say we have a flip-flop BG with the following properties: • Let's try to implement this flip-flop using a T flip-flop.

B G | Q+
0 0 | Q'
0 1 | Q
1 0 | 1
1 1 | 0

21 Step 1: Create Table The first step is to draw a table with the created flip-flop first (in this case BG), Q, Q+, and the creator flip-flop (in this case T). - Look at Q and Q+ to determine the value of T.

B G Q | Q+ | T
0 0 0 | 1  | 1
0 0 1 | 0  | 1
0 1 0 | 0  | 0
0 1 1 | 1  | 0
1 0 0 | 1  | 1
1 0 1 | 1  | 0
1 1 0 | 0  | 0
1 1 1 | 0  | 1

22 Step 2: Karnaugh Map • Draw a Karnaugh map based on when T is a 1, using the table from Step 1:

Q \ BG   00   01   11   10
  0       1    0    0    1
  1       1    0    1    0

T = B'G' + BGQ + G'Q'

23 Step 3: Draw Diagram T = B'G' + BGQ + G'Q' [circuit diagram: the logic for T is built from B, G, Q and Q' and drives the T flip-flop, whose outputs are Q and Q']
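A quick way to sanity-check the derivation is to test T = B'G' + BGQ + G'Q' against the BG flip-flop's defining table from slide 20; a minimal Python sketch (names are illustrative):

```python
# Sanity-check the derived excitation against the BG flip-flop's defining table:
# BG=00 -> Q+ = Q',  BG=01 -> Q+ = Q,  BG=10 -> Q+ = 1,  BG=11 -> Q+ = 0.
def bg_next(b, g, q):
    return {(0, 0): 1 - q, (0, 1): q, (1, 0): 1, (1, 1): 0}[(b, g)]

for b in (0, 1):
    for g in (0, 1):
        for q in (0, 1):
            t = ((1 - b) & (1 - g)) | (b & g & q) | ((1 - g) & (1 - q))  # T = B'G' + BGQ + G'Q'
            assert (t ^ q) == bg_next(b, g, q)   # T flip-flop: Q+ = T xor Q
print("T = B'G' + BGQ + G'Q' implements the BG flip-flop")
```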

30 The Root of the Problem: Economics • Fast memory is possible, but to run at full speed it needs to be located on the same chip as the CPU, which is very expensive and limits the size of the memory. • Do we choose a small amount of fast memory, or a large amount of slow memory?

31 Memory Hierarchy Design (2) • It is a tradeoff between size, speed and cost, and it exploits the principle of locality. • Register: fastest memory element, but small storage and very expensive. • Cache: fast and small compared to main memory; acts as a buffer between the CPU and main memory and contains the most recently used memory locations (address and contents are recorded here). • Main memory: the RAM of the system. • Disk storage: HDD.

32 Memory Hierarchy Design (3) • Comparison between different types of memory (larger, slower and cheaper from left to right):

           Register      Cache          Memory      HDD
size:      32 - 256 B    32 KB - 4 MB   128 MB      20 GB
speed:     2 ns          4 ns           60 ns       8 ms
$/Mbyte:                 $100/MB        $1.50/MB    $0.05/MB

39 Memory Hierarchy • Can only do useful work at the top • 90-10 rule: 90% of time is spent on 10% of the program • Take advantage of locality (see the cache sketch after this slide): temporal locality - keep recently accessed memory locations in the cache; spatial locality - keep memory locations near recently accessed ones in the cache
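A toy direct-mapped cache simulator makes the locality point concrete: sequential addresses (good spatial locality) hit far more often than widely strided ones. The line size, cache size, and access patterns below are illustrative, not taken from the slides:

```python
# Toy direct-mapped cache: sequential accesses (good spatial locality) hit far
# more often than large-stride accesses.  Line size, cache size and patterns
# are illustrative only.
LINE = 16    # bytes per cache line
LINES = 64   # number of lines (1 KB cache)

def hit_ratio(addresses):
    cache = {}                              # line index -> tag currently resident
    hits = 0
    for addr in addresses:
        index = (addr // LINE) % LINES
        tag = addr // (LINE * LINES)
        if cache.get(index) == tag:
            hits += 1                       # hit: the line is already in the cache
        else:
            cache[index] = tag              # miss: fetch the line from memory
    return hits / len(addresses)

sequential = list(range(4096))                        # walk every byte in order
strided = [(i * 1024) % 4096 for i in range(4096)]    # jump 1 KB between accesses
print("sequential hit ratio:", hit_ratio(sequential))   # ~0.94 (15 of every 16 hit)
print("strided hit ratio:   ", hit_ratio(strided))      # 0.0 (every access conflicts)
```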

45 The connection between the CPU and cache is very fast; the connection between the CPU and memory is slower

53 The Cache Hit Ratio • How often is a word found in the cache? • Suppose a word is accessed k times in a short interval: 1 reference to main memory and (k-1) references to the cache • The cache hit ratio h is then h = (k - 1) / k
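A tiny illustration of how h grows with the amount of reuse k (the formula follows directly from the counts above):

```python
# h = (k - 1) / k: the first of k accesses misses (main memory), the other k - 1 hit.
def hit_ratio(k):
    return (k - 1) / k

for k in (1, 2, 10, 100):
    print(f"k = {k:3d}  ->  h = {hit_ratio(k):.2f}")
# k = 1 gives h = 0.00 (no reuse); k = 100 gives h = 0.99 (almost every access hits)
```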

54 Reasons why we use cache Cache memory is made of STATIC RAM – a transistor-based RAM that has very low access times (fast). STATIC RAM is, however, very bulky and very expensive. Main memory is made of DYNAMIC RAM – a capacitor-based RAM that has very high access times because it has to be constantly refreshed (slow). DYNAMIC RAM is much smaller and cheaper.

55 Performance (Speed) • Access time: the time between presenting the address and getting the valid data (memory or other storage) • Memory cycle time: some time may be required for the memory to "recover" before the next access; cycle time = access time + recovery time • Transfer rate: the rate at which data can be moved; for random access memory, transfer rate = 1 / cycle time = (cycle time)^-1
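A worked example of those definitions; the numbers are illustrative, not taken from the slide:

```python
# Illustrative numbers only (not from the slide): a memory with 60 ns access time
# and 40 ns recovery time.
access_time = 60e-9
recovery_time = 40e-9
cycle_time = access_time + recovery_time          # time between successive accesses
transfer_rate = 1 / cycle_time                    # random-access transfers per second
print(f"cycle time:    {cycle_time * 1e9:.0f} ns")
print(f"transfer rate: {transfer_rate / 1e6:.0f} million transfers/s")
```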

57 Memory Hierarchy • size? speed? cost? • registers in CPU: smallest, fastest, most expensive, most frequently accessed • internal memory (may include one or more levels of cache): medium size, quick, price varies • external memory (backing store): largest, slowest, cheapest, least frequently accessed
