Cache Data Compaction: Milestone 2
Edward Ma, Siva Penke, Abhijeeth Nuthan
Concept
- Divide super-cache lines into smaller cache lines.
- Larger super-cache lines save space in the tag store, while the smaller cache-line granularity within them keeps the locality and bandwidth benefits.
- A pointer in the tag array stores the location of the first data block of the super cache line.
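As a concrete illustration, here is a minimal sketch of what a compacted tag entry could look like, assuming a 4-line super cache line; the field names and widths are hypothetical, with data_ptr playing the role of the pointer described above:

#include <stdint.h>

#define LINES_PER_SUPER 4   /* N: cache lines per super cache line (assumed) */

/* One tag entry covers a whole super cache line, so the tag array
 * holds C / (N*B) entries instead of C / B. */
struct super_tag_entry {
    uint32_t tag;                    /* tag bits of the super cache line */
    uint16_t data_ptr;               /* data-store index of the first resident line */
    uint8_t  valid[LINES_PER_SUPER]; /* per-line valid bits within the super line */
};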
Progress From Last Milestone
- Implemented the modified tag store and data store in the simulator, with a basic replacement and placement policy.
- Have preliminary results comparing our implementation with conventional caches (in terms of hit rate).

Savings vs. Conventional Cache Tag Store
M = addressable memory space, B = block size, C = cache size, N = number of cache lines per super cache line.
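The savings formula itself did not survive extraction; a plausible reconstruction from the definitions above, assuming a direct-mapped organization and ignoring valid/status bits (the slide's exact formula may differ):

$$\text{Tag store}_{\text{conv}} = \frac{C}{B}\,\log_2\frac{M}{C} \ \text{bits}, \qquad \text{Tag store}_{\text{compact}} \approx \frac{C}{NB}\left(\log_2\frac{M}{C} + \log_2\frac{C}{B}\right) \ \text{bits}$$

The extra $\log_2(C/B)$ term is the per-entry pointer into the data store. With illustrative values (not from the slide) of $M = 2^{32}$ bytes, $B = 64$ B, $C = 1$ MiB, $N = 4$, the conventional tag store is $16384 \times 12 = 192$ Kbit while the compacted one is $4096 \times (12 + 14) = 104$ Kbit, roughly a 1.8x saving.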
Process Flow Chart
Some Numbers (Hit Rate)
Still to be Done
- Latency delays for data movement: the likely method is to multiply a constant by the number of entries we have to reorder (see the cost-model sketch after this list).
- The reordering issue should be alleviated by a more complex placement policy (we may evict even when the data store is not completely full if that means less data has to be reordered in the cache) and by offset bits in the tag store, which determine subsequent cache-line locations as a function of their offset from the pointer address.
- LRU will be replaced with a more realistic, cheaper replacement policy that can be implemented in hardware (pseudo-LRU, perhaps; see the sketch after this list).
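A sketch of that latency model under the stated assumption (cost linear in the number of entries moved); the constant's name and value are placeholders to be calibrated:

/* Hypothetical cost model: data-movement latency is a constant per
 * data-store entry that has to be reordered. */
#define REORDER_CYCLES_PER_ENTRY 4   /* placeholder value, to be calibrated */

static unsigned reorder_latency(unsigned entries_reordered)
{
    return REORDER_CYCLES_PER_ENTRY * entries_reordered;
}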
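And a minimal tree-based pseudo-LRU sketch for a 4-way set, the kind of cheap, hardware-implementable policy the slide alludes to; this is illustrative, not our committed design:

#include <stdint.h>

/* Tree pseudo-LRU for a 4-way set: 3 bits instead of full LRU state.
 * Bit 0 (the root) picks between way pairs {0,1} and {2,3};
 * bits 1 and 2 pick the victim within each pair. */
struct plru4 { uint8_t bits; };

/* On a hit or fill, flip the tree bits to point AWAY from way w. */
static void plru4_touch(struct plru4 *p, unsigned w)
{
    if (w < 2) {
        p->bits |= 1u;                       /* root: victim search goes right */
        p->bits = (w == 0) ? (p->bits | 2u) : (p->bits & ~2u);
    } else {
        p->bits &= ~1u;                      /* root: victim search goes left */
        p->bits = (w == 2) ? (p->bits | 4u) : (p->bits & ~4u);
    }
}

/* Follow the tree bits to the pseudo-least-recently-used way. */
static unsigned plru4_victim(const struct plru4 *p)
{
    if (p->bits & 1u)                        /* right pair {2,3} */
        return (p->bits & 4u) ? 3 : 2;
    else                                     /* left pair {0,1} */
        return (p->bits & 2u) ? 1 : 0;
}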