1
The talk oscillates between deep and high-level views of security
The goal is an engaging keynote that connects with different disciplines and experience levels
2
CVE IE CMshtmlEd UAF. In the above scenario, the use-after-free occurred inside the CMshtmlEd::Exec() function, and the freed object was the one pointed to by the this pointer. CMshtmlEd::Exec() is reachable via the execCommand() JavaScript DOM method. To free the CMshtmlEd object, the attacker invoked execCommand() in a way that triggered a JavaScript event; this allowed an attacker-controlled JavaScript event handler to be invoked in the execution path of CMshtmlEd::Exec(). The CMshtmlEd object was freed when the attacker-controlled event handler invoked document.write(), which in turn freed several objects, one of which was the currently in-use CMshtmlEd object. When execution finally returned to CMshtmlEd::Exec(), the this pointer, now dangling, was dereferenced, resulting in a use-after-free condition. The root cause in this particular example was an incorrect reference count on the CMshtmlEd object, which allowed the attacker to prematurely free it while it was in use. Interestingly, while looking at this zero-day vulnerability, I suspected there might be similar use-after-free scenarios (i.e. an event handler triggering a free of currently in-use objects). So, just to find similar, potentially low-hanging bugs via ad hoc manual fuzzing, I tried different combinations of JavaScript methods and JavaScript events. After several unsuccessful attempts, I eventually stumbled across a similar UAF vulnerability, essentially due to the CPasteCommand object being freed while in use. The root cause of that bug was also an incorrect reference count on an object. We reported the vulnerability to Microsoft and it was patched in February 2013.
3
But did you ever wonder how we ended up here? And where are we going?
4
Sir William Thomson's third tide-predicting machine
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he first worked on his difference engine, designed to aid in navigational calculations; in 1833 he realized that a much more general design, the Analytical Engine, was possible, and with it he conceptualized the first mechanical computer. A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. A computer is a general-purpose device that can be programmed to carry out a set of arithmetic or logical operations automatically. Since the sequence of operations can be readily changed, a computer can solve more than one kind of problem.
5
Samuel Morse was the first to get political backing for his telegraph, and a business model for making it work. In 1843 he built a telegraph system from Washington, D.C., to Baltimore with the financial support of Congress. One obvious attack was simply to cut or destroy the wires. More interesting was to change or tap the messages: derailing trains, corporate espionage, etc.
6
-An Enigma invented by the German engineer Arthur Scherbius
-An Enigma invented by the German engineer Arthur Scherbius. Used most heavily by Nazi Germany during World War II. -Computer science theory (Turing, 1936) provided a formalisation of the concepts of "algorithm" and "computation" with the Turing machine, which can be considered a model of a general-purpose computer. -Turing also worked to improve the bomba invented by Polish cryptologist Marian Rejewski. -Colossus (by engineer Tommy Flowers) was the first electronic digital programmable computing device, and was also used to break German ciphers during World War II.
7
In 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs performed experiments and observed that when two gold point contacts were applied to a crystal of germanium, a signal was produced with the output power greater than the input. A transistor is a semiconductor device used to amplify and switch electronic signals and electrical power. It is composed of semiconductor material with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current through another pair of terminals. Because the controlled (output) power can be higher than the controlling (input) power, a transistor can amplify a signal. The integrated circuit followed in 1958. From 1971 to now: a microprocessor consists of a huge array of transistors on a silicon die that interconnect in a way that provides a set of useful basic functions. These transistors alter their states based on internal changes in voltage, or on transitions between voltage levels. These transitions are triggered by a clock signal, which is actually a square wave that switches between high and low voltage at a high frequency; this is where we get "speed" measurements for CPUs, e.g. 2GHz. Every time a clock cycle switches between low and high voltage, a single internal change is made. This is called a clock tick. In the simplest devices a single clock tick might constitute a whole programmed operation, but such devices are extremely limited in terms of what they're capable of doing.
8
60s-80s: -An operating system (OS) is software that manages computer hardware and software resources and provides common services for computer programs. The operating system is an essential component of the system software in a computer system. Application programs usually require an operating system to function. -Time-sharing operating systems schedule tasks for efficient use of the system and may also include accounting software for cost allocation of processor time, mass storage, printing, and other resources. -For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and will frequently make a system call to an OS function or be interrupted by it. Operating systems can be found on almost any device that contains a computer—from cellular phones and video game consoles to supercomputers and web servers.
9
Throughout the 1960s, Leonard Kleinrock, Paul Baran, and Donald Davies independently developed network systems that used packets to transfer information between computers over a network. In 1969, the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah were connected as the beginning of the ARPANET network using 50 kbit/s circuits. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system that was based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks"[3] and collaborated on several patents received in 1977 and 1978. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices.
10
In symmetric-key schemes, the encryption and decryption keys are the same. Thus communicating parties must have the same key before they can achieve secret communication. In public-key encryption schemes, the encryption key is published for anyone to use to encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; before then, all encryption schemes were symmetric-key. A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann and distributed free of charge with source code; it was purchased by Symantec in 2010 and is regularly updated.
11
Hacking for phun vs. profit/war, 70s to now. 1. 70s-early 80s: hacking for fun.
2. Robert Morris, a Cornell student: the first conviction for computer damage. 3. According to court papers, the original Blaster was created in 2003 after security researchers from the Chinese group Xfocus reverse engineered the original Microsoft patch that allowed for execution of the attack. The worm spread by exploiting a buffer overflow, discovered by the Polish security research group Last Stage of Delirium, in the DCOM RPC service on the affected operating systems, for which a patch had been released one month earlier in MS and later in MS. This allowed the worm to spread without users opening attachments, simply by spamming itself to large numbers of random IP addresses. Four versions have been detected in the wild. These are the most well known exploits of the original flaw in RPC, but there were in fact another 12 different vulnerabilities which never saw very much media attention. A nice timeline of viruses from each decade: 70s, 80s, 90s, 00s, present.
12
Because of such things, our industry was born into the full swing it now enjoys.
A state of computer "security": threat prevention, detection, and response.
13
strcpy(dst, src); Invention, architecture, and algorithms allow for a system that runs user applications. Storing code and data together, in a non-memory-safe programming language, turned out to be a bad thing in terms of security/robustness (though good for other things). We see the typical stack smash. It and other similar memory corruption vulnerabilities have been actively exploited for a couple of decades now.
14
Important memo from Bill Gates in 2002 started modern trustworthy computing movement
2004: SDL internal at MS; 2008: SDL released to the public. Secure design, fuzzing, code audit, and a plan for patching in the field all become part of any reasonably mature software dev process.
15
The NX bit (No-Execute) prohibits the execution of code stored in certain memory pages, to prevent buffer-overflow attacks on x86. The Enhanced Virus Protection and Execute Disable Bit technologies, from AMD and Intel respectively, introduce an additional attribute bit in the paging structures used for address translation. This attribute indicates whether the code stored in the given memory page can be executed.
16
Though rare in major production chips, processor bugs do happen.
In 1994, Dr. Thomas Nicely of Lynchburg College discovered that some Pentium chips don't do math so well. The Pentium FDIV bug is a bug in the Intel P5 Pentium floating-point unit (FPU). Because of the bug, the processor can return incorrect decimal results, an issue troublesome for the precise calculations needed in fields like math and science. Intel blamed the error on missing entries in the lookup table used by the floating-point division circuitry.
17
Gordon Moore's (co-founder of Intel) law (1965) states that the density of transistors in a dense integrated circuit will double every 2 years. This has led to wonderful advances in many fields, including the complexity of CPUs themselves.
18
2006 brought VT-x. In the late 1990s, x86 virtualization was achieved by complex software techniques, necessary to compensate for the processor's lack of virtualization support while attaining reasonable performance. In 2006, both Intel (VT-x) and AMD (AMD-V) introduced limited hardware virtualization support that allowed for simpler virtualization software but offered very little speed benefit. Greater hardware support, which allowed for substantial speed improvements, came with later processor models. In computing, x86 virtualization refers to hardware virtualization for the x86 architecture. It allows multiple operating systems to simultaneously share x86 processor resources in a safe and efficient manner.
19
SMM Vuln: Joanna Rutkowska's BluePill used hardware virtualization to move a running OS into a virtual machine. In 2007 she demonstrated that certain types of hardware-based memory acquisition (e.g. FireWire based) are unreliable and can be defeated. Later in 2007, together with team member Alexander Tereshkin, she presented further research on virtualization malware. In 2008, Rutkowska and her team focused on Xen hypervisor security. In 2009, together with team member Rafal Wojtczuk, she presented an attack against Intel Trusted Execution Technology and Intel System Management Mode. Intel Trusted Execution Technology (Intel TXT) is the name of a computer hardware technology whose primary goals are (a) attestation: attest to the authenticity of a platform and its operating system (OS); (b) assure that an authentic OS starts in a trusted environment and thus can be considered a trusted OS; and (c) provide the trusted OS with additional security capabilities not available to an unproven OS. System Management Mode (SMM) is an operating mode in which all normal execution (including the operating system) is suspended, and special separate software (usually firmware or a hardware-assisted debugger) is executed in high-privilege mode. SysRet Vuln: Some 64-bit operating systems and virtualization software running on Intel CPU hardware are vulnerable to a local privilege escalation attack. The vulnerability may be exploited for local privilege escalation or a guest-to-host virtual machine escape. Intel claims that this vulnerability is a software implementation issue, as their processors function as per their documented specifications. However, software that fails to take the Intel-specific SYSRET behavior into account may be vulnerable. A ring3 attacker may be able to craft a stack frame to be executed by ring0 (the kernel) after a general protection exception (#GP). The fault will be handled before the stack switch, which means the exception handler will run at ring0 with an attacker-chosen RSP, causing a privilege escalation.
20
add ecx, ebx == many micro instructions
Microcode updates should be signed, so this shouldn't be a problem. As processors have gotten more complex, the amount of work that needs to be done at the hardware level to provide even the most basic operations (e.g. an addition of two 32-bit integers) has increased. A single native assembly instruction (e.g. add eax, ebx) might involve quite a lot of internal work, and microcode is what defines that work. Each clock tick performs a single microcode instruction, and a single native instruction might involve hundreds of microcode instructions. Previously, the only way to fix a processor bug was to work around it or replace the chip with one that had the bug fixed. Starting with the Intel P6 and P7 family processors, including the Pentium Pro through Pentium D and Core i7, many bugs in a processor's design can be fixed by altering the microcode in the processor. Microcode is essentially a set of instructions and tables in the processor that control the way the processor operates. These processors incorporate a feature called reprogrammable microcode, which enables certain types of bugs to be worked around via microcode updates. The microcode updates reside in either the motherboard ROM BIOS or Windows updates and are loaded into the processor by the motherboard BIOS during the POST, or by Windows during the boot process. Each time the system is rebooted the updated microcode is reloaded, ensuring that the bug fix is installed anytime the system is operating. The updated microcode for a given processor is provided by Intel to either the motherboard manufacturers or to Microsoft, so the code can be incorporated into the flash ROM BIOS for the board or directly into Windows via Windows Update. This is one reason it is important to keep Windows up to date, as well as to install the most recent motherboard BIOS for your systems.
Because it is easier for most people to update Windows than to update the motherboard BIOS, it seems that more recent microcode updates are being distributed via Microsoft rather than the motherboard manufacturers. Although modern x86 processors allow for runtime microcode upload, the format is model-specific, undocumented, and controlled by checksums and possibly signatures. Also, the scope of microcode is somewhat limited nowadays, because most instructions are hardwired. Modern operating systems upload microcode blocks at boot, but these blocks are provided by the CPU vendors themselves for bugfixing purposes. CPU vendors publish updates for the microcode of their CPUs. Such an update can be uploaded by the operating system using some specific opcodes (this requires kernel-level privileges). Since we are talking about RAM, this is not permanent, and must be performed again after each boot. The contents of these microcode updates are not documented at all; they are very specific to the exact CPU model, and there is no standard. Moreover, there are checksums which are believed to be MACs or possibly even digital signatures: vendors want to keep tight control of what enters the microcode area. It is conceivable that maliciously crafted microcode could damage the CPU by triggering "short circuits" within the CPU. Yet consider one reported case: the postinst script wgets a microcode from and loads it into the CPU, without any checking for authenticity or integrity, and without ever asking the user! Thereafter, the microcode is loaded by /etc/rcS.d/S80microcode.ctl start. On every reboot (warm too?), the microcode is lost. The attack vectors are obvious; although I am not particularly versed in malicious microcode writing, I am sure there are people enjoying this kind of sadomasochism.
21
ASLR, ROP, stack, func, heap spray attack, NOPs + shellcode, SEH
Though attacks against the server came first, the browser wasn't long in getting attacked as well. Now, instead of attacking the server, the bad guys set up malicious servers and force/trick users into visiting them. Heap sprays came first. DEP + ASLR could stop this, but ROP finds a way.
22
Security checks get lower
Security checks get lower: not just at the OS or page level, but also at critical APIs.
23
Benefits of using managed code include programmer convenience (by increasing the level of abstraction, creating smaller models) and enhanced security guarantees, depending on the platform (including the VM implementation). Drawbacks include slower startup speed (the managed code must be JIT compiled by the VM) and generally increased use of system resources on any machine that is executing the code. There are many historical examples of code running on virtual machines, such as the language UCSD Pascal using p-code, and the operating system Inferno from Bell Labs using the Dis virtual machine. Java popularized this approach with its bytecode executed by the Java virtual machine.
24
Not always seeing the security benefits that the JVM promised
Not the least of which was a poorly implemented JVM in C++. Also, deciding good vs. bad applets is really intractable to solve, just as good vs. bad exe is a hard problem.
25
Smart Pointers
1. It always points either to a valid allocated object or is NULL.
2. It deletes the object once there are no more references to it.
3. It can be used with existing code.
4. Programs that don't do low-level stuff can be written exclusively using this pointer. Native pointers don't have to be used at all. Native pointers can be converted into smart pointers via a special construct. It should be easy to search for this construct using grep, since it is unsafe.
5. Thread-safe.
6. Exception safe.
7. Fast. It should have zero dereferencing overhead and minimal manipulation overhead.
8. It shouldn't have problems with circular references.
26
All dynamic Objects in IE
Heap Separation (IE as of June 2014): all dynamic objects in IE are split between the Process Heap, for user-created objects, and an Isolated Heap, for critical IE objects. Isolated Heap: makes it difficult for attackers to find objects whose memory they can control and use to replace the target of a dangling pointer.
27
Delayed Free (IE as of July 2014): with the secure HeapFree(), an allocation is either freed by the allocator right away or put on a list to be freed later, based on heuristics. This delayed-free mechanism helps prevent malicious scripts from occupying the place of a dangling pointer.
28
Hypervise every process
Kernel exploits vs. least privilege: the OS is again focused on least privilege; each user/process should only be able to do what it needs to do. Unfortunately, kernel exploits break out of application sandboxing techniques. Enter uVMs: running each process in a full uVM greatly raises the bar.
29
Robert Cailliau, Jean-François Abramatic and Tim Berners-Lee at the 10th anniversary of the WWW Consortium. By 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9,[11] the HyperText Markup Language (HTML), the first Web browser (named WorldWideWeb, which was also a Web editor), the first HTTP server software (later known as CERN httpd), and the first Web pages, which described the project itself.
31
Sanitizing input is big
The recent Shellshock bug is another great example of command injection.
32
Our information is everywhere thanks to the cloud.
Cloud computing is internet-based computing in which large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Clouds can be classified as public, private, or hybrid. The idea of being able to access our data however/whenever/wherever is powerful.
33
Without a hypervisor, the weakness is a kernel exploit or something like:
/* shocker: docker PoC VMM-container breakout (C) 2014 Sebastian Krahmer
 * Demonstrates that any given docker image someone is asking
 * you to run in your docker setup can access ANY file on your host,
 * e.g. dumping hosts /etc/shadow or other sensitive info, compromising
 * security of the host and any other docker VM's on it. */
And of course, with every new feature that comes out to help with distributed computing, hosting, load balancing, and more, none will be without risk. Docker is an open-source project that automates the deployment of applications inside software containers by providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
34
And the points at which data will be collected and monitored will continue to grow in amazing ways
But will these devices be updated, or left outdated like home routers?
35
And what about privacy? Right now, data about everyone is being collected, mostly for commercial/marketing reasons. But for what other purposes that data will someday be used, we do not yet fully understand.
36
while(this_side_of_heaven) {
    c = catalyst(war || commerce); // currently cyberwar fuels 0day fascination
    x = build(c);
    y = break(x);
    if (y && motivation) attack();
    x = fix(x, y);
    c, x = rebuild_innovate(x, smaller, cheaper, complexity++, security);
}
So what will be the next defense event which will trigger a new age of innovation? Cyberwar is already upon us, whether we know it or not. In fact, that has been a big part of what has fueled our fascination with 0day.
37
Questions? @jareddemott