1
Hardware-Software Integrated Approaches to Defend Against Software Cache-based Side Channel Attacks
Jingfei Kong*, University of Central Florida
Onur Acıiçmez, Samsung Electronics
Jean-Pierre Seifert, TU Berlin & Deutsche Telekom Laboratories
Huiyang Zhou, University of Central Florida
2
Why Should We Care about Side Channel Attacks?
- Cryptographic applications are a critical software component in modern computers (e.g., secure online transactions)
- Cryptographic algorithms are designed to impose unreasonable time and resource costs on a successful attack
- Breaking a 128-bit symmetric key by brute force: 2^128 possibilities; even a device that can check 2^60 keys per second would need around 9.4*2^40 years, about 700 times the age of the universe
- By exploiting certain features of modern microprocessors, it may take just a few hours to recover the secret key!
3
What are Software Cache-based Side Channel Attacks?
- Side channel attacks
  - exploit observable information generated as a byproduct of the cryptosystem implementation, e.g. power traces or electromagnetic radiation
  - infer secret information, e.g. the secret key
- Software cache-based side channel attacks
  - exploit the latency difference between cache accesses and memory accesses
  - the source of information leakage: cache misses on critical data whose addresses depend on the secret information
  - mainly access-driven attacks and time-driven attacks
4
An Example: Advanced Encryption Standard (AES)
- one of the most popular symmetric-key cryptographic algorithms
- 16-byte input (plaintext), 16-byte output (ciphertext), 16-byte secret key (for standard 128-bit encryption)
- a performance-efficient software implementation uses several near-identical rounds of 16 XOR operations and 16 table lookups
[Figure: each lookup index byte is derived from a secret key byte and an input/output byte and is used to access a lookup table]
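To make the leak concrete, here is a minimal C sketch of one table-lookup step. It is an illustration only, not the OpenSSL code (which works on 32-bit T-tables); the table name Te is a placeholder.

```c
/* Minimal sketch of one AES table-lookup step; illustrative only, not the
 * OpenSSL code (which uses 32-bit T-tables).  Te is a placeholder table. */
#include <stdint.h>

uint8_t table_lookup_step(uint8_t input_byte, uint8_t key_byte,
                          const uint8_t Te[256])
{
    uint8_t index = input_byte ^ key_byte;  /* index byte = input byte XOR key byte */
    return Te[index];                       /* which cache line is touched depends on the key */
}
```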
5
Access-driven Attacks
[Figure: cache and main memory holding the spy process's data and the victim process's data; after the victim runs, the spy re-times its own accesses a, b, c, d and observes b > (a ≈ c ≈ d) in latency, revealing which cache line the victim touched]
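The figure corresponds to a prime-and-probe style spy loop. The sketch below is schematic only, assuming an x86 rdtsc timer, a hypothetical spy_buffer that maps onto the monitored cache sets, and placeholder NUM_SETS / LINE_SIZE constants.

```c
/* Schematic access-driven (prime-and-probe style) spy loop; a sketch only.
 * spy_buffer, NUM_SETS, LINE_SIZE and the victim trigger are assumptions. */
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc */

#define NUM_SETS  64
#define LINE_SIZE 64

static volatile uint8_t spy_buffer[NUM_SETS * LINE_SIZE];

void prime_and_probe(uint64_t latency[NUM_SETS])
{
    uint8_t sink = 0;

    /* Prime: touch one line per monitored cache set so spy data fills the cache. */
    for (int s = 0; s < NUM_SETS; s++)
        sink ^= spy_buffer[s * LINE_SIZE];

    /* ... victim runs here (e.g. one AES encryption) and may evict some lines ... */

    /* Probe: time each reload; a slow reload (like access 'b' on the slide)
       means the victim touched that set, i.e. a key-dependent table index. */
    for (int s = 0; s < NUM_SETS; s++) {
        uint64_t t0 = __rdtsc();
        sink ^= spy_buffer[s * LINE_SIZE];
        latency[s] = __rdtsc() - t0;
    }
    (void)sink;
}
```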
6
Time-driven Attacks
[Figure: execution alternates between computation and cache hits/misses; each table-lookup index is derived from a secret key byte and an input/output byte]
- Total execution time is affected by cache misses
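A minimal sketch of the measurement side of such an attack, using the standard OpenSSL AES interface (the baseline implementation later in the deck); the rdtsc timer and the sample-collection layout are assumptions.

```c
/* Sketch of collecting time-driven samples: encrypt many random plaintexts
 * and record the cycles of each call.  Assumes an x86 rdtsc timer; the key
 * is expected to have been set up with AES_set_encrypt_key(). */
#include <stdint.h>
#include <stdlib.h>
#include <x86intrin.h>
#include <openssl/aes.h>

void collect_times(const AES_KEY *key, int n,
                   uint8_t pts[][16], uint64_t times[])
{
    uint8_t ct[16];
    for (int i = 0; i < n; i++) {
        for (int b = 0; b < 16; b++)
            pts[i][b] = (uint8_t)rand();   /* random plaintext, recorded for analysis */
        uint64_t t0 = __rdtsc();
        AES_encrypt(pts[i], ct, key);      /* one full encryption */
        times[i] = __rdtsc() - t0;         /* total time reflects the cache misses */
    }
}
```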
7
Cache-collision Time-driven Attacks on AES
[Figure: two table lookups i and j with inputs Xi, Xj and key bytes Ki, Kj, each followed by a cache hit/miss and further computation]
- Case 1: Xj ⊕ Kj ≠ Xi ⊕ Ki — cache access j is a cache miss, assuming no earlier access to the same cache line
- Case 2: Xj ⊕ Kj = Xi ⊕ Ki — cache access j is a cache hit, assuming no conflict miss in between
- Statistically speaking, Case 1 takes a longer execution time than Case 2
- Since Xj ⊕ Kj = Xi ⊕ Ki  =>  Ki ⊕ Kj = Xi ⊕ Xj, AES encryption exhibits its shortest execution time only when Ki ⊕ Kj = Xi ⊕ Xj
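A minimal sketch of the statistic this suggests: bucket the measured encryption times by the observable difference Xi ⊕ Xj and take the bucket with the smallest average time as the guess for Ki ⊕ Kj. The sample layout and its collection are assumptions (in the final-round attack, Xi and Xj are derived from ciphertext bytes).

```c
/* Sketch of the cache-collision statistic: for every candidate byte
 * difference d, average the encryption time of samples with Xi ^ Xj == d;
 * the true Ki ^ Kj should show the lowest average time. */
#include <stdint.h>

struct sample { uint8_t xi, xj; uint64_t cycles; };

uint8_t best_key_diff(const struct sample *s, int n)
{
    double   sum[256]   = {0};
    unsigned count[256] = {0};

    for (int k = 0; k < n; k++) {
        uint8_t d = s[k].xi ^ s[k].xj;   /* collision iff d == Ki ^ Kj */
        sum[d]   += (double)s[k].cycles;
        count[d] += 1;
    }

    uint8_t best = 0;
    double  best_avg = 1e300;
    for (int d = 0; d < 256; d++) {
        if (count[d] == 0) continue;
        double avg = sum[d] / count[d];
        if (avg < best_avg) { best_avg = avg; best = (uint8_t)d; }
    }
    return best;   /* most likely value of Ki ^ Kj */
}
```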
8
The Foundation of Cache-Collision Attacks
[Chart: measured encryption time plotted against the number of collisions in the final round of AES, on one Pentium 4 processor]
- A higher number of collisions means a smaller number of cache misses, and thus a shorter encryption time
9
Currently Proposed Software/Hardware Countermeasures
Software proposals
+ easy to deploy with no hardware changes
− application specific
− substantial performance overhead
− data layout and code have to be changed
− no security guarantee
Hardware proposals
+ generic (not application specific)
+ performance efficient
− still have some security issues
− require hardware changes
− not flexible
10
Hardware-Software Integrated Approaches
- Hardware tackles the source of information leakage: cache misses on critical data
- Software offers flexibility, even against future attacks
- Three approaches enhance the security of various cache designs, with tradeoffs between hardware complexity and performance overhead:
  - preloading to protect PLcache (from ISCA’07)
  - securing RPcache (from ISCA’07) with informing loads
  - securing regular caches with informing loads
11
Informing Loads Approach: Informing Memory Operations
- Informing load instructions
  - work as normal load instructions upon cache hits
  - generate a user-level exception upon cache misses
  - were originally proposed as lightweight support for memory optimization (ISCA’96)
- Leverage the same information exploited by the attacks
- Use informing load instructions to read data from the lookup tables
- Flexible countermeasures are implemented in software in the exception handler (a sketch follows below)
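A sketch of how a protected lookup might be written. Informing loads are an ISA extension, so informing_load_u8 and register_miss_handler below are hypothetical stand-ins, not a real compiler or runtime API.

```c
/* Illustrative only: informing loads are an ISA extension (ISCA'96), so the
 * intrinsic and registration call below are hypothetical, not a real API. */
#include <stdint.h>

/* Hypothetical hardware intrinsic: behaves like a normal load on a cache hit,
   raises a user-level exception (vectored to the registered handler) on a miss. */
extern uint8_t informing_load_u8(const uint8_t *addr);
extern void register_miss_handler(void (*handler)(const void *miss_addr));

uint8_t protected_lookup(const uint8_t *table, uint8_t index)
{
    /* Every access to critical AES table data goes through an informing load,
       so the very event the attacker exploits (a miss) invokes the countermeasures. */
    return informing_load_u8(&table[index]);
}

void install_countermeasures(void (*handler)(const void *miss_addr))
{
    register_miss_handler(handler);   /* the handler implements the defenses on slides 12-16 */
}
```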
12
Defend against Access-driven Attacks
! Even the very first cache miss is security-critical to access-driven attacks
- software random permutation in the AES implementation
  + randomizes the mapping between table indices and cache lines
  + obfuscates the attacker's observation
! A fixed software random permutation is vulnerable
- detect cache-miss events using informing loads and perform a permutation update in the exception handler
  + every time there is a chance (a cache miss) to leak information, the permutation is changed randomly
  + balances the tradeoff between security and performance
Overall: a software random permutation scheme with permutation updates only when necessary (on cache misses)
13
Defend against Time-driven Attacks
! The correlation between the secret key and the number of cache misses
- detect cache-miss events using informing loads
- load all the critical data into the cache in the exception handler
  + avoids cache misses for subsequent cache accesses
  + breaks the correlation
14
The Defense Procedure
0. The AES implementation uses the software random permutation version instead of the original one
1. Informing load instructions are used to load the critical data
2. A cache miss on critical data is detected by an informing load instruction, and program execution is redirected to the user-level exception handler
3. Inside the exception handler, all critical data are preloaded into the cache, and a permutation update is performed between the missing cache line and a randomly chosen cache line
[Figure: cache and main memory holding another process's data and the AES data]
15
The Implementation of Software Random Permutation in AES
[Figure: the original lookup table versus the converted lookup tables, which add a level of indirection]
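The deck does not include the table-conversion code, so the sketch below only guesses at its general shape: an indirection array of per-cache-line pointers T' in front of the original table, with every lookup going through it. The names and line geometry are assumptions.

```c
/* Sketch of a table lookup through an indirection level (an assumption about
 * the general shape of the converted tables, not the authors' exact code). */
#include <stdint.h>

#define LINES      16   /* cache lines covered by one AES table (illustrative) */
#define LINE_BYTES 16   /* entries per 64B line, for 4B entries (illustrative) */

typedef uint32_t entry_t;

extern entry_t *Tprime[LINES];   /* one pointer per cache line, shuffled at run time */

entry_t permuted_lookup(uint8_t index)
{
    unsigned line   = index / LINE_BYTES;   /* which cache line the index falls in */
    unsigned offset = index % LINE_BYTES;   /* position within that line */
    /* In the protected build this load would be an informing load, so a miss
       here triggers the exception handler that reshuffles Tprime. */
    return Tprime[line][offset];
}
```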
16
Countermeasure Implementation in the Exception Handler
- permutation update to defend against access-driven attacks, by swapping both the pointers and the data
- preload of all table data to defend against time-driven attacks, by prefetching from the address pointers T’[0], T’[1], …, T’[K-1]
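A sketch of what that handler could look like, reusing the assumed Tprime indirection array and line geometry from the previous sketch; this is not the authors' implementation, and a real handler would decode the missing line from the faulting address supplied by the informing-load exception.

```c
/* Sketch of the exception-handler countermeasures (assumed names and table
 * geometry, carried over from the previous sketch). */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define LINES      16
#define LINE_BYTES 16
typedef uint32_t entry_t;

extern entry_t *Tprime[LINES];   /* indirection table from the previous sketch */

void handle_table_miss(unsigned missing_line)
{
    /* Preload: touch T'[0], T'[1], ..., T'[K-1] so the whole table is cached
       and later lookups cannot miss (defends against time-driven attacks). */
    volatile entry_t sink = 0;
    for (unsigned i = 0; i < LINES; i++)
        sink += Tprime[i][0];

    /* Permutation update: swap both the data and the pointers of the missing
       line and a randomly chosen line, so table indices map to new cache lines
       while lookups still return the same values (defends against access-driven
       attacks).  rand() stands in for a proper random source. */
    unsigned r = (unsigned)rand() % LINES;
    entry_t tmp[LINE_BYTES];
    memcpy(tmp, Tprime[missing_line], sizeof tmp);
    memcpy(Tprime[missing_line], Tprime[r], sizeof tmp);
    memcpy(Tprime[r], tmp, sizeof tmp);

    entry_t *p = Tprime[missing_line];
    Tprime[missing_line] = Tprime[r];
    Tprime[r] = p;
    (void)sink;
}
```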
17
Experiments
Experimental Setup
- default processor configuration in a MIPS-like SimpleScalar simulator: pipeline bandwidth 4, 128-entry ROB, 64-entry IQ, 64-entry LSQ
- 32KB 2-way I/D L1 caches, 2MB L2 cache, 64B cache block size
- SMT fetch policy: round-robin
- AES software implementation (baseline): OpenSSL 0.9.7c
- AES performance microbenchmark: OpenSSL speed test program
Security Evaluation
- impact of ILP and MLP on cache-collision time-driven attacks
- security analysis of our regular-caches-with-informing-loads approach
Performance Evaluation
- performance impact on AES
- performance impact on an SMT processor
18
Impact of ILP and MLP on Cache-collision Time-driven Attacks
[Chart: execution time plotted against the number of cache collisions in the final round of AES]
- the more ILP and MLP, the weaker the correlation between the number of cache collisions and the execution time, and the less observable the trend
- the weaker the correlation between the key and the execution time, the more samples are required for a successful attack
19
Security Evaluation of Regular Caches with Informing Loads
- Mitigation against access-driven attacks (see the theoretical proof from Wang et al. at ISCA’07)
- Mitigation against cache-collision time-driven attacks
[Chart: results plotted against the number of cache collisions in the final round of AES]
20
Performance Impact on AES
- performance takes a hit because of cache conflict misses between the lookup table data and other data, which cause a lot of exception handling
- performance improves as those conflict misses almost disappear with larger caches / higher associativity
- most of the remaining overhead comes from the indirection table introduced for software randomization
21
Performance Impact on a 2-way SMT Processor
- AES running alongside SPEC2000 INT benchmarks
- with larger caches / higher associativity, the throughput and fairness overheads from exception handling diminish
- the indirection table lookup still imposes some throughput overhead
22
Conclusions
- Software cache-based side channel attacks are emerging threats
- Cache misses are the source of the information leakage
- We proposed hardware-software integrated approaches that provide stronger security protection across various cache designs
- A lightweight hardware mechanism, informing loads, protects regular caches with flexible software countermeasures, at the cost of some performance overhead
- Preloading and informing loads are also proposed to enhance the security of previously proposed secure cache designs
23
Thank you! Questions?