CMSC 414 Computer and Network Security Lecture 12 Jonathan Katz.

1 CMSC 414 Computer and Network Security Lecture 12 Jonathan Katz

2 Announcement  Midterm exam Oct. 28, covering material through Oct. 21

3 Side channel attacks

4 • Power attacks
  –Possible for crypto implemented on hardware in possession of the adversary (smartcards, DRM modules, TPMs)
• Timing attacks
  –Possible for crypto implemented on hardware in possession of the adversary
  –Also possible over a network…

5 Simple timing attack
• Consider the following code for verifying a (16-byte) MAC tag:

Vrfy(k, m, t) {
    correct = MAC(k, m);
    for (i = 0; i < 16; i++)
        if (correct[i] != t[i]) return FALSE;
    return TRUE;
}

• Assume the adversary is able to measure the response time until authentication fails
  –Do you see the attack?
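To make the attack concrete, here is a Python simulation (the key, message, and tag length are illustrative; the number of bytes the early-exit comparison examines stands in for the measured response time). The adversary recovers a valid tag byte by byte, in at most 16 × 256 tries instead of 2^128:

```python
import hmac, hashlib

KEY = b"secret key"        # hypothetical key, unknown to the attacker
MSG = b"transfer $100"

def mac(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def vrfy_steps(msg, tag):
    """Early-exit comparison; returns (accepted, bytes_compared).
    bytes_compared stands in for the measured response time."""
    correct = mac(KEY, msg)
    for i in range(16):
        if correct[i] != tag[i]:
            return False, i + 1
    return True, 16

def forge_tag(msg):
    """Recover the tag one byte at a time via the timing side channel."""
    tag = bytearray(16)
    for i in range(16):
        for guess in range(256):
            tag[i] = guess
            ok, steps = vrfy_steps(msg, bytes(tag))
            if ok or steps > i + 1:   # comparison got past byte i
                break
    return bytes(tag)

forged = forge_tag(MSG)
print(vrfy_steps(MSG, forged)[0])  # → True
```

A wrong guess at position i always fails after exactly i + 1 comparisons, while the correct guess takes longer; that difference is the whole attack.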

6 Power attack  Example of experimental results (DES)  Each of the 16 rounds of DES clearly visible

7 Attacking exponentiation
• Recall that naïve exponentiation does not run in polynomial time
• Fast exponentiation (square-and-multiply), basic idea:

exp(x, d, N) {            // compute x^d mod N
    ans = 1;
    for (i = 0; i < length(d); i++) {   // bits of d, most significant first
        ans = ans * ans mod N;
        if (d[i] == 1)
            ans = ans * x mod N;
    }
    return ans;
}

• Time/power differ in each iteration depending on d[i]!
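The leak can be sketched in Python (the modulus, base, and exponent are illustrative). Each iteration is either a square, or a square followed by an extra multiply; an observer who can tell the two apart from the power/timing trace reads the secret exponent straight off:

```python
def exp_with_trace(x, d, N):
    """Left-to-right square-and-multiply; records one trace entry per bit:
    'S' for square only, 'SM' for square-then-multiply (the leak)."""
    ans, trace = 1, []
    for bit in bin(d)[2:]:            # bits of d, most significant first
        ans = (ans * ans) % N         # always square
        if bit == '1':
            ans = (ans * x) % N       # extra multiply only when the bit is 1
            trace.append('SM')
        else:
            trace.append('S')
    return ans, trace

def recover_exponent(trace):
    """Reconstruct d from the per-iteration observations."""
    return int(''.join('1' if t == 'SM' else '0' for t in trace), 2)

N, x, d = 3233, 7, 0b101101
result, trace = exp_with_trace(x, d, N)
print(result == pow(x, d, N), recover_exponent(trace) == d)  # → True True
```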

8 Power/timing attacks on RSA  Power/timing analysis of RSA implemented on a smartcard can recover d  Attacks via timing analysis (over a network) are more difficult –The attacker only gets the total time of the exponentiation, and cannot see the timing of individual iterations –Must also contend with network effects –Boneh and Brumley showed that the attack is nevertheless possible when exponentiation is implemented in a particular way

9 Fault analysis  Induce (controlled) faults in an attempt to learn information  Simple example (in practice this is more difficult!): –Assume the key is hardwired into a smartcard –Set bits of the key to 0, one by one; observe whether the output changes –This yields all bits of the original key!
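A simulated sketch of the bit-zeroing attack (the 8-bit key and the keyed computation are stand-ins for the smartcard's hardwired key and circuit): if forcing a key bit to 0 leaves the output unchanged, that bit was already 0; if the output changes, it was 1.

```python
import hmac, hashlib

SECRET = 0b10110011  # hypothetical 8-bit hardwired key

def device_output(key):
    """Stand-in for the smartcard's keyed computation."""
    return hmac.new(key.to_bytes(1, 'big'), b"challenge", hashlib.sha256).digest()

def fault_attack(nbits=8):
    """Clear each key bit in turn; an unchanged output means the bit was 0."""
    baseline = device_output(SECRET)
    key = 0
    for i in range(nbits):
        faulted = device_output(SECRET & ~(1 << i))   # induce the fault
        if faulted != baseline:                       # output changed => bit was 1
            key |= 1 << i
    return key

print(fault_attack() == SECRET)  # → True
```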

10 Preventing side-channel attacks
• Difficult to prevent entirely…
• Power analysis
  –Shielding, tamper resistance, etc.
  –"Obfuscating" the algorithm to make power analysis less useful
  –Adding random noise using "empty" instructions
• Timing analysis
  –Make all operations take the same amount of time
  –Randomize the amount of time operations take (essentially a variant of the above)
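For the MAC check from slide 5, "make all operations take the same amount of time" means a constant-time comparison: examine every byte regardless of where the first mismatch occurs. A minimal sketch (in production code, prefer the standard-library routine `hmac.compare_digest`):

```python
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time comparison: no early exit on the first mismatch."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y          # accumulate differences without branching
    return diff == 0

print(ct_equal(b"tag1", b"tag1"), ct_equal(b"tag1", b"tag2"))  # → True False
print(hmac.compare_digest(b"tag1", b"tag1"))                   # → True
```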

11 System Security

12 What is “system security”?  We now know about the crypto… –…but how do we build a secure system?  The existence of crypto doesn’t solve everything: –Tells us how to achieve secrecy/integrity, not when secrecy/integrity are required –Doesn’t tell us how to integrate crypto into the system –Doesn’t address detection/recovery

13 System security -- components
• Policy (specification)
  –What security properties to focus on
  –What is and is not allowed
• Mechanism (implementation)
  –The mechanism enforces the policy
  –Includes procedural controls, not just technical ones
    E.g., who may enter the room where backup tapes are stored
    E.g., how new accounts are established
  –Prevention/detection/recovery
• Assurance
  –Verifying that the mechanism implements the policy

14 Mechanisms for enforcing policy
• The precision of a mechanism measures how closely the mechanism matches the policy
  –E.g., an overly restrictive mechanism may prevent things that are allowed by the policy
• Hard (in general) to develop a maximally-precise mechanism for an arbitrary given policy
• Impossible (in general) to determine whether a given mechanism implements a given policy
• Made worse by the fact that policy is often specified informally to begin with

15 Design considerations
• Where should security mechanism(s) be placed?
  –Applications
  –Services (DBMS*, object reference broker)
  –OS (file/memory management, I/O)
  –Kernel (mediates access to processor/memory)
  –Hardware
* Database management system

16 Design considerations  How to prevent an attacker from getting access to a layer below the protection mechanism? –E.g., recovery tools (Norton utilities) that access raw physical memory without regard for file structure –E.g., memory reuse when multiple processes (owned by different users) run on the same machine –E.g., access to backup tapes or core dumps

17 Design considerations  Simplicity or feature-rich system?  A simpler system will be easier to secure, and it will be easier to provide assurance of security

18 Design considerations  Centralized or decentralized?  A centralized system may be more secure –Policy will always be enforced consistently –No propagation delays if policy changes  A centralized system can lead to performance bottlenecks, and is less flexible

19 Security Principles

20 General principles  Seminal article by Saltzer and Schroeder (1975) –Linked from the course homepage  Eight principles underlying design and implementation of security mechanisms  These are guidelines, not hard and fast rules  Not exhaustive

21 Key point I: Simplicity  Make designs/mechanisms/requirements easy to understand and use  This applies to both the policy and the mechanism!  Less chance of error

22 Key point II: Restriction  Minimize the “power” of an entity –E.g., only allow access to information it needs –E.g., only allow necessary communication; restrict type of communication allowed  Less chance of harm!

23 Principle 1
• "Principle of least privilege"
  –A subject should be given only the privileges it needs to accomplish its task
  –The function of a subject (not its identity) should determine this
    I.e., if a subject needs certain privileges only to complete a specific task, it should relinquish those privileges upon completion of the task
    If reduced privileges are sufficient for a given task, the subject should request only those privileges

24 In practice…
• There is a limit to how much granularity a system can handle
• Systems are often not designed with the necessary granularity
  –E.g., "append" may not be distinct from "write"
  –E.g., in UNIX, nothing between user and root (anyone who can make backup files can also delete those files)

25 Principle 2
• "Principle of Fail-Safe Defaults"
  –Unless a subject is given explicit access to an object, it should be denied access (i.e., the default is no access)
  –More generally, in case of ambiguity the system should default to the more restrictive case
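A minimal default-deny access check might look as follows (the subjects, objects, and rights are illustrative). The key point is that a missing ACL entry yields the empty set of rights, so anything not explicitly granted is denied:

```python
# ACL maps (subject, object) to the set of rights explicitly granted.
acl = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob",   "report.txt"): {"read"},
}

def allowed(subject, obj, right):
    """Fail-safe default: grant only on an explicit matching ACL entry."""
    return right in acl.get((subject, obj), set())  # no entry => empty set => deny

print(allowed("bob", "report.txt", "read"))    # → True
print(allowed("bob", "report.txt", "write"))   # → False
print(allowed("eve", "report.txt", "read"))    # → False
```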

26 Principle 3
• "Economy of Mechanism"
  –Security mechanisms should be as simple as possible…
  –…but no simpler!
  –Simplifies the assurance process
• Offering too much functionality can be dangerous
  –E.g., finger protocol: cap how much information can be returned, or allow an arbitrary amount? (DoS attack by returning an infinite stream of characters)
  –E.g., macros in Excel, Word
  –E.g., PostScript can execute arbitrary code

27 Principle 4
• "Principle of Complete Mediation"
  –All accesses to objects should be checked to ensure they are allowed
  –The OS should mediate every request to read an object --- even a second request by the same subject!
    Don't cache authorization results
    Don't rely on authentication/authorization performed by another module
  –Time-of-check-to-time-of-use (TOCTOU) flaws…
• Good example: re-authentication (on websites, or systems) when performing sensitive tasks
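A classic time-of-check-to-time-of-use instance is checking permissions with one system call and then opening the file with another: the world can change between the two calls (the file path here is illustrative, and the attacker's race window is only indicated by a comment):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as fh:
    fh.write("public data")

# Vulnerable pattern: check, then use — two separate system calls.
if os.access(path, os.R_OK):          # time of check
    # ...an attacker could swap `path` for a symlink right here...
    with open(path) as fh:            # time of use
        contents = fh.read()

# Safer pattern: just attempt the operation and handle failure,
# so the open() itself is the single, mediated check.
try:
    with open(path) as fh:
        contents = fh.read()
except (PermissionError, FileNotFoundError):
    contents = None

print(contents)  # → public data
```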

28 Insecure example…  In UNIX, when a process opens a file, the system checks access rights –If allowed, it gives the process a file descriptor –The file descriptor is presented to the OS on subsequent accesses  If permissions are subsequently revoked, the process still holds a valid file descriptor! –Insufficient mediation

29 Principle 5  “Open Design” –No “security through obscurity” –Security of a system should not depend on the secrecy of its implementation Of course, secret keys do not violate this principle!

30 Principle 6  “Separation of Privilege” –(As much as is feasible…) a system should not grant permission based on a single condition –E.g., require more than one sys admin to issue a critical command, or more than one teller to issue an ATM card
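A hypothetical sketch of the "more than one sys admin" rule (the command, admin names, and the threshold of two are illustrative). Note that the same admin approving twice does not count:

```python
def issue_critical_command(cmd, approvals):
    """Separation of privilege: execute only with sign-off
    from at least two *distinct* administrators."""
    if len(set(approvals)) < 2:
        raise PermissionError("need approval from two different admins")
    return f"executed: {cmd}"

try:
    issue_critical_command("format-disk", ["alice", "alice"])  # one admin twice
except PermissionError as e:
    print(e)   # → need approval from two different admins

print(issue_critical_command("format-disk", ["alice", "bob"]))  # → executed: format-disk
```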

31 Principle 7  “Principle of Least Common Mechanism” –Minimize mechanisms depended upon by all users Minimize effect of a security flaw! –Shared mechanisms are a potential information path, and so may be used to compromise security –Shared mechanisms also expose the system to potential DoS attacks

32 Principle 8  “Psychological Acceptability” –Security mechanisms should not make access to the resource more difficult –If mechanisms are too cumbersome, they will be circumvented! –Even if they are used, they may be used incorrectly

33 OS Security

34 Overview  Traditional OS security motivated by multi-user systems –E.g., UNIX –This is what we focus on in this segment  There is a movement away from such systems –Computing done on a desktop PC, and/or on a remote (shared) server but without "awareness" of other users  Multi-user systems are still used in business and government environments (shared files, shared databases, etc.)

35 Threats in multi-user context  Unauthorized reading/modification of files by legitimate users  Unauthorized use of system resources by legitimate users  Unauthorized access by illegitimate users

