On-Chip Cache Analysis: A Parameterized Cache Implementation for a System-on-Chip RISC CPU
Presentation Outline
- Informal Introduction
- Underpin Design – xr16
- Cache Design Issues
- Implementation Details
- Results & Conclusion
- Future Work
- Questions
Informal Introduction
- Field Programmable Gate Arrays (FPGAs)
- Verilog HDL
- System-on-Chip (SoC)
- Reduced Instruction Set Computer (RISC)
- Caches
- Project Theme
Underpin Design – xr16
- Classical pipelined RISC
- Big-Endian, von Neumann architecture
- Sixteen 16-bit registers
- Forty-two instructions (16-bit)
- Result Forwarding, Branch Annulment, Interlocked Instructions
Underpin Design – xr16 (cont’d)
- Internal and External Buses (CPU-clocked)
- Pipelined Memory Interface
- Single-cycle read, 3-cycle write
- DMA and Interrupt Handling Support
- Ported Compiler and Assembler
Underpin Design – xr16 (cont’d): Block Diagram (figure)
Underpin Design – xr16 (cont’d): Datapath (figure)
Underpin Design – xr16 (cont’d): Memory Preferences (figure)
Underpin Design – xr16 (cont’d): RAM Interface (figure)
Cache Design Issues
- Cache Size *
- Line Size
- Fetch Algorithm
- Placement Policy *
- Replacement Policy *
- Split vs. Unified Cache
(* = parameters made configurable in this implementation; see Implementation Details)
Cache Design Issues (cont’d)
- Write Back Strategy *
- Write Allocate Policy *
- Blocking vs. Non-Blocking
- Pipelined Transactions
- Virtually Addressed Caches
- Multilevel Caches
Cache Design Issues (cont’d)
- Cache Size: 32 – 256K data bits
- Placement Policy: Direct Mapped, Set Associative, Fully Associative
- Replacement Policy: FIFO, Random*
- Write Back Strategy: Write Back, Write Through
- Write Allocate Policy: Write Allocate, Write No Allocate
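These size and policy choices fix how an address is split into tag, index, and word-offset fields. The following is a minimal C sketch of that arithmetic for a direct-mapped configuration; the 16-bit word and address widths are assumed to match the xr16, and the names and example numbers are illustrative rather than values from the actual design.

```c
#include <stdio.h>

/* Illustrative only: derive tag/index/offset field widths for a direct-mapped
 * cache from its capacity and line size (both in 16-bit words). */
static unsigned log2u(unsigned x)               /* x must be a power of two    */
{
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void)
{
    unsigned cache_words = 2048;                /* 2048 x 16 bits = 32 Kbit    */
    unsigned line_words  = 8;                   /* words per cache line        */
    unsigned addr_bits   = 16;                  /* assuming 16-bit addresses   */

    unsigned offset_bits = log2u(line_words);
    unsigned index_bits  = log2u(cache_words / line_words);
    unsigned tag_bits    = addr_bits - index_bits - offset_bits;

    printf("offset = %u bits, index = %u bits, tag = %u bits\n",
           offset_bits, index_bits, tag_bits);
    return 0;
}
```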
Implementation Details
Configurable Parameters:
- Cache Size
- Placement Strategy
- Write Back Policy
- Write Allocate Policy
- Replacement Policy
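As a sketch only, the configurable parameters above can be mirrored as a small configuration record. The real design presumably exposes them as parameters of the Verilog cache module; all enum and field names below are hypothetical.

```c
/* Hypothetical C mirror of the cache's configuration knobs. */
typedef enum { PLACE_DIRECT, PLACE_SET_ASSOC, PLACE_FULL_ASSOC } placement_t;
typedef enum { WB_WRITE_BACK, WB_WRITE_THROUGH }                 writeback_t;
typedef enum { WA_ALLOCATE, WA_NO_ALLOCATE }                     allocate_t;
typedef enum { REPL_FIFO, REPL_RANDOM }                          replacement_t;

typedef struct {
    unsigned      size_bits;                    /* 32K .. 256K data bits       */
    placement_t   placement;
    writeback_t   write_back;
    allocate_t    write_allocate;
    replacement_t replacement;
} cache_config_t;

/* Example: a 64 Kbit, direct-mapped, write-back, write-allocate cache with
 * random replacement. */
static const cache_config_t example_cfg = {
    64 * 1024, PLACE_DIRECT, WB_WRITE_BACK, WA_ALLOCATE, REPL_RANDOM
};
```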
Implementation Details (cont’d)
1. Miss Read, Replacement NOT Required: Let the memory operation complete and place the fetched data from memory into the cache.
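A minimal C sketch of this case, using hypothetical types (cache_line_t, an assumed 8-word line): the memory read is allowed to finish and the fetched line is installed in the selected slot, which needs no write-back.

```c
#include <stdint.h>
#include <string.h>

#define WORDS_PER_LINE 8                        /* assumed line size           */

typedef struct {                                /* hypothetical line bookkeeping */
    int      valid, dirty;
    uint16_t tag;
    uint16_t data[WORDS_PER_LINE];
} cache_line_t;

/* Case 1: read miss, no write-back needed. The memory read completes and the
 * fetched line simply overwrites the invalid (or clean) slot. */
void read_miss_fill(cache_line_t *slot, uint16_t tag,
                    const uint16_t fetched[WORDS_PER_LINE])
{
    memcpy(slot->data, fetched, sizeof slot->data);
    slot->tag   = tag;
    slot->valid = 1;
    slot->dirty = 0;
}
```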
Implementation Details (cont’d)
2. Miss Read, Replacement Required: Initiate a memory write operation to write back the set to be replaced, then initiate a read operation for the desired data.
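A C sketch of this case under the same assumptions; mem_write_line and mem_read_line are toy stand-ins for the memory interface, not functions from the actual design. The dirty victim is written back first, then the desired line is read and installed.

```c
#include <stdint.h>
#include <string.h>

#define WORDS_PER_LINE 8                        /* assumed line size           */

typedef struct {                                /* hypothetical line bookkeeping */
    int      valid, dirty;
    uint16_t tag;
    uint16_t data[WORDS_PER_LINE];
} cache_line_t;

/* Toy stand-ins for the memory interface, keyed by tag only so the sketch is
 * self-contained; a real controller drives the xr16's pipelined RAM interface
 * and addresses memory with the full line address. */
static uint16_t toy_mem[256][WORDS_PER_LINE];

static void mem_write_line(uint16_t tag, const uint16_t d[WORDS_PER_LINE])
{ memcpy(toy_mem[tag & 0xFFu], d, sizeof toy_mem[0]); }

static void mem_read_line(uint16_t tag, uint16_t d[WORDS_PER_LINE])
{ memcpy(d, toy_mem[tag & 0xFFu], sizeof toy_mem[0]); }

/* Case 2: read miss where the victim line is valid and dirty. Write the old
 * set back to memory first, then read the desired line and install it. */
void read_miss_replace(cache_line_t *slot, uint16_t new_tag)
{
    if (slot->valid && slot->dirty)
        mem_write_line(slot->tag, slot->data);  /* 1) write back the victim    */
    mem_read_line(new_tag, slot->data);         /* 2) fetch the desired data   */
    slot->tag   = new_tag;
    slot->valid = 1;
    slot->dirty = 0;
}
```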
Implementation Details (cont’d)
3. Miss Write, No Allocate: Let the memory operation complete and do nothing else.
Implementation Details (cont’d)
4. Miss Write, Allocate, Write-Through: Let the memory operation complete and place the new data in the cache as well.
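A combined C sketch covering case 3 above and case 4: on a write miss the memory write always completes, and the write-allocate parameter decides whether the line is also brought into the cache. Types, names, and the 8-word line size are assumptions; the sketch assumes the caller has already merged the store data into the refill line.

```c
#include <stdint.h>
#include <string.h>

#define WORDS_PER_LINE 8                        /* assumed line size           */

typedef struct {                                /* hypothetical line bookkeeping */
    int      valid, dirty;
    uint16_t tag;
    uint16_t data[WORDS_PER_LINE];
} cache_line_t;

typedef enum { WA_NO_ALLOCATE, WA_ALLOCATE } allocate_t;

/* Cases 3 and 4: write miss. The memory write completes in both cases. With
 * write-no-allocate (case 3) the cache is left untouched; with write-allocate
 * and write-through (case 4) the merged line is also placed in the cache and
 * stays clean. */
void write_miss_through(cache_line_t *slot, uint16_t tag, allocate_t policy,
                        const uint16_t merged_line[WORDS_PER_LINE])
{
    if (policy == WA_NO_ALLOCATE)
        return;                                 /* case 3: do nothing else     */
    memcpy(slot->data, merged_line, sizeof slot->data);   /* case 4            */
    slot->tag   = tag;
    slot->valid = 1;
    slot->dirty = 0;                            /* write-through: never dirty  */
}
```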
Implementation Details (cont’d)
5. Miss Write, Allocate, Write-Back, Replacement NOT Required: Cancel the memory operation, update only the cache, and mark the data dirty.
Implementation Details (cont’d)
6. Miss Write, Allocate, Write-Back, Replacement Required: Instead of writing the data that caused the miss to memory, write back the set to be replaced, then update the cache with the data that caused the miss.
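A combined C sketch covering case 5 above and case 6: the memory write for the missing data is cancelled, a dirty victim (case 6) is written back first, and the new line is installed and marked dirty. mem_write_line is a toy stand-in for the memory interface; all names are illustrative.

```c
#include <stdint.h>
#include <string.h>

#define WORDS_PER_LINE 8                        /* assumed line size           */

typedef struct {                                /* hypothetical line bookkeeping */
    int      valid, dirty;
    uint16_t tag;
    uint16_t data[WORDS_PER_LINE];
} cache_line_t;

/* Toy stand-in for a line write-back, keyed by tag only so the sketch stays
 * self-contained. */
static uint16_t toy_mem[256][WORDS_PER_LINE];
static void mem_write_line(uint16_t tag, const uint16_t d[WORDS_PER_LINE])
{ memcpy(toy_mem[tag & 0xFFu], d, sizeof toy_mem[0]); }

/* Cases 5 and 6: write miss with write-allocate and write-back. The original
 * memory write is cancelled; a dirty victim is written back first (case 6),
 * then the new line (with the store merged in) is installed and marked dirty. */
void write_miss_back(cache_line_t *slot, uint16_t tag,
                     const uint16_t new_line[WORDS_PER_LINE])
{
    if (slot->valid && slot->dirty)             /* case 6: write back victim   */
        mem_write_line(slot->tag, slot->data);
    memcpy(slot->data, new_line, sizeof slot->data);      /* cases 5 and 6     */
    slot->tag   = tag;
    slot->valid = 1;
    slot->dirty = 1;                            /* cache newer than memory     */
}
```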
Implementation Details (cont’d)
7. Hit Read: Cancel the memory operation and supply the data for the instruction fetch or data load.
Implementation Details (cont’d)
8. Hit Write, Write-Through: Let the memory operation complete and update the cache when it does.
Implementation Details (cont’d)
9. Hit Write, Write-Back: Cancel the memory operation and update only the cache, marking the line dirty.
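A combined C sketch of the three hit cases (7–9): a read hit serves data straight from the cache, while a write hit either lets the memory write proceed alongside the cache update (write-through) or cancels it and marks the line dirty (write-back). Types and names are hypothetical.

```c
#include <stdint.h>

#define WORDS_PER_LINE 8                        /* assumed line size           */

typedef struct {                                /* hypothetical line bookkeeping */
    int      valid, dirty;
    uint16_t tag;
    uint16_t data[WORDS_PER_LINE];
} cache_line_t;

typedef enum { WB_WRITE_BACK, WB_WRITE_THROUGH } writeback_t;

/* Case 7: read hit. The memory operation is cancelled and the word is served
 * from the cache for the instruction fetch or data load. */
uint16_t read_hit(const cache_line_t *slot, unsigned word)
{
    return slot->data[word % WORDS_PER_LINE];
}

/* Cases 8 and 9: write hit. Write-through lets the memory write complete and
 * updates the cached copy too (line stays clean); write-back cancels the
 * memory write, updates only the cache, and marks the line dirty.
 * Returns 1 if the memory write should be allowed to proceed. */
int write_hit(cache_line_t *slot, unsigned word, uint16_t value, writeback_t wb)
{
    slot->data[word % WORDS_PER_LINE] = value;
    if (wb == WB_WRITE_THROUGH)
        return 1;                               /* case 8                      */
    slot->dirty = 1;                            /* case 9                      */
    return 0;
}
```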
Implementation Details (cont’d)
Summary of cache transactions:
1. Read Hit
2. Write Hit
3. Read Miss (replacement required)
4. Read Miss (no replacement)
5. Write Miss (replacement required)
6. Write Miss (no replacement)
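The six transaction types above can be derived from three signals the controller already has: read/write, hit/miss, and whether a refill would evict a dirty line. A small, hypothetical C classifier illustrating that mapping:

```c
#include <stdio.h>

/* The six transaction types from the summary above, as a hypothetical enum. */
typedef enum {
    T_READ_HIT, T_WRITE_HIT,
    T_READ_MISS_REPLACE,  T_READ_MISS_NO_REPLACE,
    T_WRITE_MISS_REPLACE, T_WRITE_MISS_NO_REPLACE
} transaction_t;

/* Classify an access from three observations: read or write, hit or miss,
 * and whether a refill would evict a dirty line. */
static transaction_t classify(int is_write, int hit, int victim_dirty)
{
    if (hit)      return is_write ? T_WRITE_HIT : T_READ_HIT;
    if (is_write) return victim_dirty ? T_WRITE_MISS_REPLACE
                                      : T_WRITE_MISS_NO_REPLACE;
    return victim_dirty ? T_READ_MISS_REPLACE : T_READ_MISS_NO_REPLACE;
}

int main(void)
{
    /* Example: a load that misses and would evict a dirty line
     * -> T_READ_MISS_REPLACE. */
    printf("transaction = %d\n",
           classify(0 /*read*/, 0 /*miss*/, 1 /*dirty victim*/));
    return 0;
}
```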
Results & Conclusion
- Proof of Concept
- Rigid Design Parameters
- R&D Options
- Architecture Innovation
Future Work
- LRU Implementation
- Victim Cache Buffer
- Split Caches
- Level 2 Cache
- Pipeline Enrichment
- Multiprocessor Support
Questions