RANDOM FILL CACHE ARCHITECTURE


RANDOM FILL CACHE ARCHITECTURE Fangfei Liu and Ruby B. Lee Princeton Architecture Laboratory for Multimedia and Security (PALMS) Department of Electrical Engineering, Princeton University PRESENTED BY : SUMUKHI PARVATHARAJAN

OVERVIEW OF CACHE SIDE CHANNEL ATTACKS Cache side channel attacks fall into two classes: contention based attacks and reuse based attacks. These attacks exploit the key-dependent data flow of a program, mostly through the L1 data cache. Reuse based attacks rely on the fact that previously accessed data is cached, so a later memory access that reuses the same data results in a cache hit. There are two types of reuse based attacks: -- Flush-Reload attack: proceeds in three stages: Flush, Idle, Reload. -- Cache collision attack: for a pair of key-dependent accesses xi = f(K) and xj = g(K), a cache collision (xi = xj) lets the attacker establish the relationship f(K) = g(K) for the secret key K.
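The cache collision attack above can be illustrated with a toy timing model. This is a minimal sketch, not the attack from the paper: the index functions f and g, the cycle costs, and the cache size are all hypothetical, chosen only so that a colliding key produces a measurably shorter access time.

```python
# Toy model of a cache-collision timing attack (all parameters hypothetical).
# Two key-dependent lookups x_i = f(K) and x_j = g(K) touch the same cache
# line only when f(K) == g(K), making those keys measurably faster.

HIT_TIME, MISS_TIME = 1, 20   # assumed cycle costs
LINES = 64                    # assumed number of cache lines

def f(k):                     # hypothetical key-dependent index functions
    return k % LINES

def g(k):
    return (k * 3) % LINES

def access_time(key):
    """Total time to perform the access pair on an initially clean cache."""
    cache = set()
    total = 0
    for line in (f(key), g(key)):
        if line in cache:
            total += HIT_TIME
        else:
            total += MISS_TIME
            cache.add(line)
    return total

# A key that causes a collision (f(K) == g(K)) runs faster, leaking f(K) = g(K):
collision_key = next(k for k in range(256) if f(k) == g(k))
no_collision_key = next(k for k in range(256) if f(k) != g(k))
assert access_time(collision_key) < access_time(no_collision_key)
```

The attacker never reads the key directly; the timing gap alone reveals which key guesses satisfy f(K) = g(K).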

PAST WORK TO ELIMINATE CACHE SIDE CHANNEL ATTACKS Two general approaches have been used, but they can eliminate only contention based attacks: partitioning the cache, and randomizing the memory-to-cache mapping. To eliminate reuse based attacks, constant execution time solutions have been proposed: achieving constant execution time for cache accesses could eliminate cache side channel attacks based on the timing difference between cache hits and misses, including reuse based attacks. This is done by "preloading" all the security-critical data into the cache on a cache miss, so that all accesses to it are cache hits. However, "preloading" based approaches still have scalability and performance issues.

WHY RANDOM FILL CACHE ARCHITECTURE? Reuse based attacks are much harder to defend against, so a new general approach for securing caches against them is needed. The idea is to re-design the cache fill strategy so that it is de-correlated from the demand memory access. This new security-aware cache fill strategy is more flexible than the demand fetch strategy, and it admits a simpler hardware-based solution, without the need to frequently preload or reload all the security-critical data into the cache.

RANDOM FILL CACHE ARCHITECTURE Upon a cache miss to line i, the random fill engine generates a random fill request with an address within a neighborhood window [i − a, i + b]. The two bounds a and b are stored in two new range registers, RR1 and RR2, which bound the range of the random number produced by a free-running random number generator (RNG). By default, both range registers are set to zero, and the random fill cache behaves just like a conventional demand-fetch cache. Hardware support for "no demand fill" distinguishes three request types: normal request, no-fill request, and random fill request.
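The fill decision above can be sketched in software. This is an illustrative model only (the real mechanism is hardware): the function name and register values are assumptions, and `random.randint` stands in for the free-running RNG bounded by RR1/RR2.

```python
import random

# Sketch of the random fill decision on a cache miss (names illustrative).
# RR1/RR2 hold the window bounds a and b; with both set to zero the cache
# degenerates to a conventional demand-fetch cache.

RR1, RR2 = 2, 5   # example bounds a and b

def random_fill_address(i, a=RR1, b=RR2):
    """On a demand miss to line i, choose the line actually filled into
    the cache uniformly at random from the window [i - a, i + b]."""
    return i + random.randint(-a, b)

# The demand miss itself is still serviced as a no-fill request: the data
# is returned to the processor without being cached. Only the randomly
# chosen neighbor line is placed in the cache.
addr = random_fill_address(100)
assert 98 <= addr <= 105          # always inside [i - a, i + b]
assert random_fill_address(100, 0, 0) == 100   # zero window = demand fetch
```

Because the filled line is chosen independently of which line in the window was demanded, a later observer of cache contents learns nothing about the demand address beyond the window itself.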

BLOCK DIAGRAM

OPTIMIZATION An optimization constrains the window size to a power of two: a + b + 1 = 2^n. Instead of the set RR system call, a different system call, set window, is implemented: the range registers store the lower bound −a and a mask for the window (i.e., 2^n − 1). Since the bounded random number can be computed ahead of time, the critical path consists of only one adder, which adds the demand miss line address i to the bounded random number.
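The mask-based address computation can be sketched as follows. This is a software model of the hardware datapath under the stated assumption a + b + 1 = 2^n; the variable names and example values are illustrative.

```python
import random

# Sketch of the optimized window computation, assuming a + b + 1 = 2**n.
# The range registers hold the lower bound -a and the mask 2**n - 1, so
# bounding the RNG output needs only an AND and an add.

a, n = 2, 3                    # example: window size 2**3 = 8, so b = 5
LOWER, MASK = -a, (1 << n) - 1

def fill_address(i, rng_value):
    # (rng_value & MASK) lies in [0, 2**n - 1]; adding LOWER shifts it
    # into [-a, b]. This bounded offset can be computed ahead of the
    # miss, leaving a single adder (i + offset) on the critical path.
    bounded = LOWER + (rng_value & MASK)
    return i + bounded

addr = fill_address(100, random.getrandbits(32))
assert 98 <= addr <= 105       # inside [i - a, i + b] = [98, 105]
```

Restricting the window to a power of two trades a little flexibility in choosing a and b for a much cheaper bounding circuit than a general modulo reduction.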

SECURITY EVALUATION Timing channel: data reuse always comes in a pair of memory accesses (xi and xj). Let μ1 and μ2 be the expected execution times under the conditions of cache collision and no collision, respectively. The expected timing difference (μ2 − μ1) is the signal that an attacker wants to extract, and it depends on the probabilities P1 and P2 that the reused line is found in the cache under the two conditions.

SECURITY EVALUATION-continued This means that no information can be extracted from the timing characteristics if P1 − P2 = 0. The number of measurements required for a successful attack is related to P1 − P2, where: -- α = the desired likelihood of discovering the secret key, -- Zα = the quantile of the standard normal distribution for probability α, -- σT = the variance of the execution time. This indicates that when P1 − P2 = 0, the attack cannot succeed.

SECURITY EVALUATION-continued Assume the security-critical data is contained in a contiguous memory region of M cache lines, starting at line M0. Consider two memory accesses xi and xj to security-critical memory lines i and j, where i, j ∈ [M0, M0 + M − 1], and assume the cache state is clean. If j is in the random fill window of i, then P1 − P2 = 0 always holds.
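A quick Monte Carlo sketch makes this condition concrete (parameters are illustrative, not from the paper): with random fill, a demand miss on line i caches each line of the window [i − a, i + b] with probability 1/(a + b + 1), whether or not that line is j = i (the collision case), so the attacker's two conditions are statistically identical.

```python
import random

# Monte Carlo sketch of the security argument: after a demand miss on
# line i, the probability that a given line j inside the window
# [i - a, i + b] ends up cached is 1/(a + b + 1), independent of j.
# Hence collision (j == i) and no-collision cases look the same.

a, b = 2, 5
TRIALS = 200_000

def prob_j_cached(i, j):
    """Estimate the probability that line j is cached after a miss on i."""
    hits = sum(1 for _ in range(TRIALS)
               if i + random.randint(-a, b) == j)
    return hits / TRIALS

p_collision = prob_j_cached(100, 100)      # j == i: cache collision case
p_no_collision = prob_j_cached(100, 103)   # j != i, but still in window
expected = 1 / (a + b + 1)                 # = 0.125 for this window

assert abs(p_collision - expected) < 0.01
assert abs(p_no_collision - expected) < 0.01
```

Since both estimates converge to the same value, P1 − P2 = 0 and the timing signal from the previous slide vanishes.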

PERFORMANCE EVALUATION

PERFORMANCE EVALUATION-continued

EFFECTIVENESS OF THE RANDOM FILL CACHE ARCHITECTURE The effectiveness of random fill is measured by the reference ratio: the number of cache lines that, after being brought into the cache, are referenced before being evicted, divided by the total number of fetched memory lines.
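The reference ratio can be computed from a fill/eviction trace. This is a small sketch with a hypothetical trace representation (a list of fetched lines and the set of those referenced before eviction); the function name is illustrative.

```python
# Sketch of computing the reference ratio from a cache trace
# (hypothetical representation: the lines fetched into the cache, and
# the subset of them that were referenced before being evicted).

def reference_ratio(fetched_lines, referenced_before_eviction):
    """Fraction of fetched lines that were useful, i.e. referenced
    before eviction. Demand fetch achieves 1.0 by construction;
    random fill trades some of this ratio for security."""
    if not fetched_lines:
        return 0.0
    useful = sum(1 for line in fetched_lines
                 if line in referenced_before_eviction)
    return useful / len(fetched_lines)

# Example: 8 lines brought in by random fill, 5 touched before eviction.
ratio = reference_ratio(list(range(8)), {0, 2, 3, 5, 7})
assert ratio == 5 / 8
```

A higher reference ratio means the randomly fetched lines still capture the program's spatial locality, which is why the performance impact stays small.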

CONCLUSIONS Reuse based cache side channel attacks are a serious source of information leakage in microprocessors. A random fill cache architecture, which dynamically de-correlates the cache fill from the demand memory access, provides security against reuse based attacks. The random fill cache incurs very slight performance degradation for cryptographic algorithms and has no performance impact on concurrent non-security-critical programs.

ANY QUESTIONS?