E0-256 Computer Systems Security
Vinod Ganapathy
Lecture 1: Introduction
Course administration
Instructor: Vinod Ganapathy
Course Info
Everything you need for the course is at the class website:
Syllabus
Goal of the course: to get you up to speed on the state-of-the-art in computer security.
The field is very dynamic: attacks and defences continuously evolve. Most textbooks are woefully out of date. So, we will read research papers.
Syllabus
Computer security is a VERY broad area. It is impossible to cover its breadth in a single course:
We will focus on some systems security issues: memory errors and sandboxes, the Web, cloud, and mobile.
Even for these topics, we’ll only see the tip of the iceberg. The realities are vast and deep. The goal is to give you a taste of what’s there and encourage you to start looking.
We will not even touch cryptography or privacy (see courses by Profs. Bhavana, Arpita, Sanjit).
Reading research papers
Research papers are not written in your usual textbook style, and each paper is written in a slightly different style. When you are starting off, it can take 3-4 hours to read a paper and truly understand and appreciate the ideas it presents.
One of the goals of the course is to teach you how to read research papers:
Getting to the core ideas of the paper
Adding these core ideas to your “toolbox”
Reading research papers
Try to read the paper before coming to class. You may not understand everything, and that’s okay. Just get a general feel for the topic that the paper covers.
In class, we will discuss the context, background, and technical details of the paper.
Go home, and study the paper with your new-found understanding.
Reading research papers
The rules are the same, whether you’re in kindergarten or doing your post-graduate studies.
Reading research papers
Most papers that we will read are seminal research results.
Google the names of the authors: these are the big players in computer systems security.
Learn about their research by perusing other papers they have written. This is a good way to come up with project ideas.
Other course-related stuff
Weekly workload and what is expected
Evaluation components: 1 HW + 4 Exams, OR 1 HW + 3 Exams + Project
Grading criteria
Project details
ALL THESE DETAILS ARE ON THE CLASS WEB PAGE!
Grading
What this means:
If all of you do well, you all get A+ grades. The converse holds too :-)
WYSIWYG: no room for uncertainty due to “curving” issues.
No back-and-forth discussions at the end of the semester about the grade you got. Grades are non-negotiable.
Exams
All exams will be open-book. They will test deep understanding of the material rather than mere knowledge of it.
Project details
You are required to commit to Exam+Project or Exam-Only by the first exam. You cannot change your selection once you have committed:
If you choose the project option and later decide you don’t have time, you CANNOT take the final exam. You will be graded on the project that you produce.
So think and make a careful selection!
Project details
The project requires you to conduct original research.
A full conference-paper-style report is expected at the end of the semester, not to mention a working system and a presentation.
Continuous work is expected throughout the semester; last-minute scrambling will be penalized.
If your project is not satisfactory, you will receive an F grade and will be required to complete the project in subsequent semesters.
Attending class
You are encouraged to attend every class. But you’re all grown-ups, so no attendance policy will be enforced.
BUT, in each class I will often cover material that is not in the papers, not in the slides, and not in any textbook. The exams may contain questions based on this material.
Class discipline
Two basic common-courtesy rules
Rules that you would expect me to follow as your instructor
Class discipline: Rule #1
No using laptops and smart phones in class
Facebook, Twitter, email, SMS, WhatsApp, etc., can wait until after class.
Want to take notes? Use paper and pencil.
If I see a violation, I will stop teaching until you shut your device.
Class discipline: Rule #2
Come to class on time.
Class starts at 11am, not 11:10am or 11:15am, unless otherwise noted, e.g., if there is an interesting department seminar before class (which I may ask you to attend as well).
Ambling into class minutes late is unacceptable.
Contacting us
++ Attend and ask questions in class
++ Post your doubts to the class mailing list. I will monitor and answer them there.
This will be done via the “Teams” app in IISc’s Office 365 (you all have accounts on it). I have created a team called “E0-256 Computer Systems Security” and will add students after August 16th. Send me your IISc IDs and I will add you to the team.
Computer security is the study of …
Weaknesses in systems and attacks against them
Defending against such attacks
Pro-actively protecting data against various attacker models
Q1: Can you define a “secure system”?
Q2: Real examples of “secure systems”?
A “cold boot” attack
Photos courtesy of:
Basic Components
Confidentiality: keeping data and resources hidden
Integrity:
Data integrity (integrity)
Origin integrity (authentication)
Availability: enabling access to data and resources

Confidentiality: a good example is cryptography, which traditionally is used to protect secret messages. But cryptography is traditionally used to protect data, not resources. Resources are protected by limiting information, for example by using firewalls or address translation mechanisms.
Integrity: a good example here is that of an interrupted database transaction, leaving the database in an inconsistent state (this foreshadows the Clark-Wilson model). Trustworthiness of both data and origin affects integrity, as noted in the book’s example. That integrity is tied to trustworthiness makes it much harder to quantify than confidentiality. Cryptography provides mechanisms for detecting violations of integrity, but not preventing them (e.g., a digital signature can be used to determine if data has changed).
Availability: this is usually defined in terms of “quality of service,” in which authorized users are expected to receive a specific level of service (stated in terms of a metric). Denial-of-service attacks are attempts to block availability.
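An aside on the integrity note above: a minimal Python sketch, using only the standard hmac and hashlib modules, of how cryptography detects (but does not prevent) a violation of data integrity. The key and the record are made-up values for illustration.

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # made-up key, for illustration only

def tag(data: bytes) -> bytes:
    # Compute a MAC over the data; whoever holds KEY can verify it later.
    return hmac.new(KEY, data, hashlib.sha256).digest()

def verify(data: bytes, mac: bytes) -> bool:
    # Detection, not prevention: recompute the MAC and compare in constant time.
    return hmac.compare_digest(tag(data), mac)

record = b"balance=100"
mac = tag(record)

print(verify(record, mac))           # True: unmodified data verifies
print(verify(b"balance=9999", mac))  # False: tampering is detected after the fact
```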
Goals of Security
Prevention: prevent attackers from violating security policy
Detection: detect attackers’ violations of security policy
Recovery: stop the attack, assess and repair damage; continue to function correctly even if the attack succeeds

Prevention is ideal, because then there are no successful attacks. Detection occurs after someone violates the policy. The mechanism determines that a violation of the policy has occurred (or is underway), and reports it. The system (or system security officer) must then respond appropriately. Recovery means that the system continues to function correctly, possibly after a period during which it fails to function correctly. If the system always functions correctly, but possibly with degraded services, it is said to be intrusion tolerant. This is very difficult to do correctly; usually, recovery means that the attack is stopped, the system is fixed (which may involve shutting down the system for some time, or making it unavailable to all users except the system security officers), and then the system resumes correct operations.
Threat Models
A threat model is a set of assumptions about the power of an adversary. The concept of a “secure system” is defined with respect to a threat model.
Examples of threat models?
Trusted Computing Base
To prove that a system is “secure” against a particular adversary, we often make certain assumptions about portions of the system. Those portions constitute the Trusted Computing Base (TCB) of the system.
If the TCB is compromised, we can no longer guarantee the security of the system.
Trusted Computing Base
Examples of the TCB on real systems?
Given the TCB, is it reasonable for us to place trust in it? Or is our trust misplaced?
How low can you go on a real system?
Policies and Mechanisms
Policy says what is, and is not, allowed
This defines “security” for the site/system/etc.
Mechanisms enforce policies
Real-world example: locks on doors
Composition of policies
If policies conflict, discrepancies may create security vulnerabilities

Policy may be expressed in: natural language, which is usually imprecise but easy to understand; mathematics, which is usually precise but hard to understand; or policy languages, which look like some form of programming language and try to balance precision with ease of understanding.
Mechanisms may be technical, in which controls in the computer enforce the policy (for example, the requirement that a user supply a password to authenticate herself before using the computer), or procedural, in which controls outside the system enforce the policy (for example, firing someone for bringing in a disk containing a game program obtained from an untrusted source).
The composition problem requires checking for inconsistencies among policies. If, for example, one policy allows students and faculty access to all data, and the other allows only faculty access to all the data, then they must be resolved (e.g., partition the data so that students and faculty can access some data, and only faculty access the other data).
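To make the composition problem concrete, a hypothetical sketch: each policy is modeled as a map from objects to the set of subjects allowed to access them, and composing two policies flags the objects on which they disagree. The data and names are illustrative, not from the slides.

```python
# Each policy maps an object to the set of subjects allowed to access it.
policy_a = {"grades.db": {"students", "faculty"}, "syllabus": {"students", "faculty"}}
policy_b = {"grades.db": {"faculty"}}

def conflicts(p: dict, q: dict) -> set:
    # An object is in conflict if both policies cover it but disagree on
    # who may access it; such discrepancies must be resolved, e.g., by
    # partitioning the data.
    return {obj for obj in p.keys() & q.keys() if p[obj] != q[obj]}

print(conflicts(policy_a, policy_b))  # {'grades.db'}
```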
Policies and Mechanisms
Policies:
Unambiguously partition system states
Correctly capture security requirements
Mechanisms:
Assumed to enforce policy
Support mechanisms work correctly

All security policies and mechanisms rest on assumptions; we’ll examine some in later chapters, most notably Chapter 22, Malicious Logic. Here is a taste of the assumptions.
Policies: as these define security, they have to define security correctly for the particular site. For example, a web site has to be available, but if the security policy does not mention availability, the definition of security is inappropriate for the site. Also, a policy may not specify whether a particular state is “secure” or “non-secure.” This ambiguity causes problems.
Mechanisms: as these enforce policy, they must be appropriate. For example, cryptography does not assure availability, so using cryptography in the above situation won’t work. Further, security mechanisms rely on supporting infrastructure, such as compilers, libraries, the hardware, and networks, to work correctly. Ken Thompson’s modified C preprocessor (see the example on p. 615) illustrates this point very well.
Security Design Principles
It is very hard to design systems that are resistant to attack, and even harder to prove that a system is secure. We make several assumptions:
Threat model
Trusted Computing Base
Attackers are clever: they are not subject to these assumptions and constantly improvise.
Nevertheless, can we follow certain design principles that result in “more secure” systems?
1: Least Privilege
A subject should be given only those privileges necessary to complete its task
Function, not identity, controls
Rights added as needed, discarded after use
Minimal protection domain

This is an example of restriction. Key concepts are:
Function: what is the task, and what is the minimal set of rights needed? “Minimal” here means that if the right is not present, the task cannot be performed. A good example is a UNIX network server that needs access to a port below 1024 (this access requires root).
Rights being added, discarded: if the task requires privileges for only one action, then the privileges should be added before the action and then removed. Going back to the UNIX network server, if the server need not act as root thereafter (for example, an SMTP server), then drop the root privileges immediately after the port is opened (see the sketch below).
The protection-domain statement emphasizes the other two.
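A minimal sketch of the SMTP-server example from the notes, assuming a Unix system and a process started as root. The uid/gid value 65534 (“nobody” on many systems) is an illustrative choice, not something the slides prescribe.

```python
import os
import socket

# Acquire the one privileged resource first...
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 25))   # ports below 1024 require root on Unix
sock.listen(5)

# ...then permanently drop root. Group must be dropped before user,
# since an unprivileged user can no longer change its group.
os.setgid(65534)   # 65534 is "nobody" on many systems (illustrative)
os.setuid(65534)

# From here on the process is unprivileged: a compromise of the
# request-handling code below no longer yields root.
```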
2: Fail-Safe Defaults
Default action is to deny access
If an action fails, the system is as secure as when the action began
Blacklists versus whitelists

The first point is well-known: add rights explicitly; set everything to deny, and add back. This follows the Principle of Least Privilege. You see a variation when writing code that has security considerations and takes untrusted data (such as input) that may contain meta-characters. The rule of thumb is to specify the LEGAL characters and discard all others, rather than to specify the ILLEGAL characters and discard them. More on this in chapter 29 …
The second point is often overlooked, but goes to the meaning of “fail safe”: if something fails, the system is still safe. Failure should never change the security state of the system. Hence, if an action fails, the system should be as secure as if the action never took place.
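A small sketch of the “specify the LEGAL characters” rule of thumb, as a hypothetical filename validator; the particular allowlist is an assumption for illustration.

```python
import re

# Default-deny: enumerate the LEGAL characters and reject everything
# else, rather than trying to enumerate the illegal ones.
LEGAL = re.compile(r"[A-Za-z0-9_.-]+")

def safe_filename(name: str) -> str:
    if not LEGAL.fullmatch(name):
        raise ValueError(f"rejected {name!r}: character outside the allowlist")
    return name

print(safe_filename("report-2024.txt"))   # accepted
try:
    safe_filename("a;rm -rf /")           # shell meta-characters
except ValueError as e:
    print(e)                              # rejected by default
```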
3: Economy of Mechanism
Keep it as simple as possible (the KISS principle)
Simpler means less can go wrong, and when errors occur, they are easier to understand and fix
Interfaces and interactions
Applies particularly to TCB design

KISS principle: “Keep It Simple, Silly” (often stronger pejoratives are substituted for “silly”). Simplicity refers to all dimensions: design, implementation, operation, interaction with other components, even specification. The toolkit philosophy of the UNIX system is excellent here; each tool is designed and implemented to perform a single task. The tools are then put together. This allows the checking of each component, and then their interfaces. It is conceptually much less complex than examining the unit as a whole. The key, though, is to define all interfaces completely (for example, environment variables and global variables as well as parameter lists).
4: Complete Mediation
Check every access
Usually done once, on first action
UNIX: access checked on open, not checked thereafter
If permissions change afterwards, one may get unauthorized access

The reason for relaxing this principle is efficiency: if you do lots of accesses, the checks will slow you down substantially. It’s not clear if that is really true, though.
Exercise: Have a process open a UNIX file for reading. From the shell, delete the read permissions that allow the process to read the file. Then have the process read from the open file. The process can do so. This shows the check is done at the open. If you want to be sure, have the process close the file. Then have the process try to reopen the file for reading. This open will fail.
Note that UNIX systems fail to enforce this principle to any degree on a superuser process, where access permissions are not even checked for an open! This is why people create management accounts (more properly, role accounts) like bin or mail: by restricting processes to those accounts, access control checking applies. It also is an application of the principle of least privilege.
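The exercise above, scripted as a Python sketch. It assumes a Unix system and an unprivileged user; as the notes point out, a superuser process would not be stopped by the chmod.

```python
import os

with open("demo.txt", "w") as f:
    f.write("secret\n")

f = open("demo.txt", "r")     # UNIX checks access HERE, at open time
os.chmod("demo.txt", 0o000)   # now revoke all permissions on the file

print(f.read())               # still succeeds: no re-check after open

f.close()
try:
    open("demo.txt", "r")     # a fresh open repeats the check...
except PermissionError as e:
    print("reopen denied:", e)  # ...and is denied

os.chmod("demo.txt", 0o644)   # restore permissions for cleanup
```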
5: Open Design
Security should not depend on secrecy of design or implementation
Popularly misunderstood to mean that source code should be public
“Security through obscurity”
Does not apply to information such as passwords or cryptographic keys

Note that source code need not be available to meet this principle. It simply says that your security cannot depend upon your design being a secret. Secrecy can enhance the security, but if the design becomes exposed, the security of the mechanism cannot be affected. The problem is that people are very good at finding out what secrets protect you. They may figure it out from the way the system works, or from reverse engineering the interface or system, or by more prosaic techniques such as dumpster diving. This principle does not speak to secrets not involving design or implementation. For example, you can keep crypto keys and passwords secret.
6: Separation of Privilege
Require multiple conditions to grant privilege
Separation of duty
Defense in depth

You need to meet more than one condition to gain access. Separation of duty says that the one who signs the checks cannot be the one who prints the checks, because then a single person could steal money. To make that more difficult, the thief must compromise two people, not one. This also provides a finer-grained control over a resource than a single condition. The analogy with non-computer security mechanisms is “defense in depth”: to get into a castle, you need to cross the moat, scale the walls, and drop down over the walls before you can get in. That is three barriers (conditions) that must be overcome (met).
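A hypothetical sketch of the check-signing example: the privileged action goes through only if two distinct authorized approvers sign off, so compromising a single account is not enough. The names and the two-approver threshold are illustrative.

```python
AUTHORIZED = {"alice", "bob", "carol"}

def issue_check(amount: int, approvers: set) -> str:
    # Require more than one condition: at least two *distinct*
    # authorized people must approve.
    valid = approvers & AUTHORIZED
    if len(valid) < 2:
        raise PermissionError("need two distinct authorized approvers")
    return f"check for {amount} issued, approved by {sorted(valid)}"

print(issue_check(500, {"alice", "bob"}))  # granted: two conditions met
try:
    issue_check(500, {"alice"})            # a lone approver is refused
except PermissionError as e:
    print(e)
```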
7: Least Common Mechanism
Mechanisms should not be shared
Information can flow along shared channels
Covert channels
Isolation
Virtual machines
Sandboxes

Isolation prevents communication, and communication with something (another process or a resource) is necessary for a breach of security. Limit the communication, and you limit the damage. So, if two processes share a resource, by coordinating access, they can communicate by modulating access to the entire resource. Example: percent of CPU used. To send a 1 bit, the first process uses 75% of the CPU; to send a 0 bit, it uses 25% of the CPU. The other process sees how much of the CPU it can get, and from that can tell what the first process used, and hence is sending. Variations include filling disks, creating files with fixed names, and so forth. Approaches to implementing this principle: isolate each process, via virtual machines or sandboxes (a sandbox is like a VM, but the isolation is not complete).
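A toy sketch of the “files with fixed names” variation mentioned in the notes: two processes that share only a directory can still communicate by modulating the existence of an agreed-upon file. The path and bit period are assumptions, and send() and receive() would run in two separate processes started at the same time.

```python
import os
import time

CHANNEL = "/tmp/covert_flag"   # agreed-upon fixed name (illustrative)
PERIOD = 0.5                   # seconds per bit; both sides share this clock

def send(bits: str) -> None:
    # Modulate the shared resource: file present = 1, absent = 0.
    for b in bits:
        if b == "1":
            open(CHANNEL, "w").close()
        elif os.path.exists(CHANNEL):
            os.remove(CHANNEL)
        time.sleep(PERIOD)

def receive(n: int) -> str:
    # Observe the shared resource once per period to recover the bits.
    bits = []
    for _ in range(n):
        time.sleep(PERIOD)
        bits.append("1" if os.path.exists(CHANNEL) else "0")
    return "".join(bits)
```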
8: Psychological Acceptability
Security mechanisms should not add to the difficulty of accessing a resource
Hide complexity introduced by security mechanisms
Ease of installation, configuration, use
Human factors are critical here

This recognizes the human element. General rules are:
Be clear in error messages. You don’t need to be detailed (e.g., whether the user mistyped the password or the login name), but you do need to state the rules for using the mechanism (in the example, that the user must supply both password and login name).
Use (and compilation, installation, etc.) must be straightforward. It’s okay to have scripts or other aids to help here. But, for example, data types in the configuration file should either be obvious or explicitly stated. A “duration” field does not indicate whether the period of time is to be expressed in hours, minutes, seconds, or something else, nor whether fractional units are allowed (a float) or not (an integer). The latter is actually a common bug: if duration is in minutes, then does “0.5” mean 30 seconds (as a float) or 0 seconds (as an integer)? If the latter, an error message should be given (“invalid type”).
This principle is usually interpreted as meaning the mechanism must not impose an onerous burden. Strictly speaking, passwords violate this rule (because it’s not as easy to access a resource by giving a password as accessing it without one), but the password is considered a minimal burden.
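A small sketch of the “duration” pitfall from the notes: this hypothetical parser states its unit and type explicitly and fails with a clear message, instead of silently truncating “0.5” to 0.

```python
def parse_duration_minutes(text: str) -> int:
    # The unit (minutes) and type (whole number) are explicit,
    # and a violation produces an "invalid type" style error.
    try:
        return int(text)
    except ValueError:
        raise ValueError(
            f"invalid type for duration {text!r}: expected a whole number of minutes"
        ) from None

print(parse_duration_minutes("30"))    # 30
try:
    parse_duration_minutes("0.5")      # fractional minutes are ambiguous
except ValueError as e:
    print(e)                           # clear error, not a silent 0
```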
Key Points
Principles of secure design underlie all security-related mechanisms
They require:
A good understanding of the goal of the mechanism and the environment in which it is to be used
Careful analysis and design
Careful implementation

To sum up, these principles require: simplicity (and when that is not possible, minimal complexity); restrictiveness; understanding the goals of the proposed security; and understanding the environment in which the mechanism will be developed and deployed.