David Evans
CS655: Programming Languages, University of Virginia Computer Science
Lecture 21: Proof-Carrying Code and ||ism

"I don't think we have found the right programming concepts for parallel computers yet. When we do, they will almost certainly be very different from anything we know today."
– Brinch Hansen, "Concurrent Pascal" (last sentence), HOPL 1993

"My only serious debate with your account is with the very last sentence. I do not believe there is any 'right' collection of programming concepts for parallel (or even sequential) computers. The design of a language is always a compromise, in which the designer must take into account the desired level of abstraction, the target machine architecture, and the proposed range of applications."
– C. A. R. Hoare, comment at HOPL II, 1993

Menu
– INFOSEC Malicious Code talk
– Concurrency

Let's Stop Beating Dead Horses, and Start Beating Trojan Horses!
David Evans, University of Virginia Department of Computer Science, Charlottesville, VA
INFOSEC Malicious Code Workshop, San Antonio, 13 January 2000

Analogy: Security
Cryptography:
– Fun to do research in: lots of cool math problems, opportunities to dazzle people with your brilliance, etc.
But the overwhelming majority of break-ins do not involve an attack on sensible cryptography:
– Guessing passwords and stealing keys
– Back doors, buffer overflows
– Ignorant implementers choosing bad cryptography [Netscape Navigator Mail]

Structure of Argument
Claim: low-level code safety (isolation) is the wrong focus. (Agree or disagree?)
– PCC is not a realistic solution for the real problems in the foreseeable future.
– PCC is not the most promising solution for low-level code safety.
– Lots of useful research and results are coming from PCC, but a realistic solution to malicious code won't be one of them.

Low-Level Code Safety
– Type safety, memory safety, control flow safety [Kozen98]
– All high-level code safety depends on it
– Many known pretty-good solutions: separate processes, SFI, interpreters
– Very few real attacks exploit low-level code safety vulnerabilities
  – One exception: buffer overflows. Many known solutions to this; just need to sue vendors to get them implemented.

High-Level Code Safety
Enforcement is (embarrassingly) easy:
– Reference monitors (since the 1970s)
– Can enforce most useful policies [Schneider98]
– The performance penalty is small
Writing good policies is the hard part. We need:
– Better ways to define policies
– Ways to reason about properties of policies
– Ideas for the right policies for different scenarios
– Ways to develop, reason about, and test distributed policies
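The reference-monitor idea is simple enough to sketch directly. Below is a minimal illustration in Java; the Policy interface and class names are hypothetical, not from the lecture or any particular system. Every sensitive operation is routed through the monitor, which checks the current execution against a consumer-supplied policy:

    // Minimal reference-monitor sketch; all names are hypothetical.
    interface Policy {
        boolean allowsRead(String path);  // consumer-supplied policy decision
    }

    final class MonitoredFileSystem {
        private final Policy policy;

        MonitoredFileSystem(Policy policy) { this.policy = policy; }

        // Every sensitive operation goes through the monitor, which checks
        // this call (the current execution so far) against the policy.
        String read(String path) {
            if (!policy.allowsRead(path)) {
                throw new SecurityException("policy forbids reading " + path);
            }
            return "...contents of " + path + "...";  // stand-in for real I/O
        }
    }

Note that the consumer sets the policy here, which is the reference-monitor column of the comparison on the next slide.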

Proofs vs. Reference Monitors

Proofs                         | Reference Monitors
-------------------------------|---------------------------------
All possible executions        | Current execution so far
No run-time costs              | Monitoring and calling overhead
Checking integrated into code  | Checking separate from code
Excruciatingly difficult       | Trivially easy
Vendor sets policy             | Consumer sets policy

Fortune Cookie
"That which can be proved cannot be worth much."
– fortune cookie quoted on Peter's web page (the slide amends "can" to "must")
True for all users, and for all executions.
Exception: low-level code safety.

Reasons You Might Prefer PCC
Run-time performance?
– Only rarely amortizes the additional download and verification time
– SFI performance penalty is ~5%; if you care, pay $20 more for a better processor or wait 5 weeks
Smaller TCB?
– Not really smaller: twice as big as SFI (Touchstone VCGen + checker: 8300 lines; MisFiT x86 SFI implementation: 4500 lines)
You are a vendor who cares more about quality than time to market (not really PCC)

Concurrency

Sequential Programming
So far, most languages we have seen provide a sequential programming model:
– The language definition specifies a sequential order of execution
– The language implementation may attempt to parallelize programs, but they must behave as though they were sequential
Exceptions: Algol68, Ada, and Java include support for concurrency.

Definitions
Concurrency: any model of computation supporting partially ordered time (a semantic notion).
Parallelism: hardware that can execute multiple threads simultaneously (a pragmatic notion).
Can you have concurrency without parallelism? Can you have parallelism without concurrency?

Concurrent Programming Languages
Expose multiple threads to the programmer. Some problems are clearer to program using explicit parallelism:
– Modularity: no need to explicitly interleave code for different abstractions; high-level interactions (synchronization, communication)
– Modelling: a closer map to real-world problems
They also provide the performance benefits of parallelism when the compiler could not find it automatically.

Fork & Join
Concurrency primitives:
– fork E → ThreadHandle: creates a new thread that evaluates expression E; returns a unique handle identifying that thread.
– join T: waits for the thread identified by ThreadHandle T to complete.
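For comparison only (this is not part of Bjarfk, introduced next), the two primitives map closely onto Java's thread API: starting a Thread plays the role of fork, and Thread.join plays the role of join. A minimal sketch:

    public class ForkJoinSketch {
        public static void main(String[] args) throws InterruptedException {
            // "fork E": create and start a thread that evaluates E;
            // the Thread object itself serves as the handle.
            Thread handle = new Thread(() -> System.out.println("child evaluates E"));
            handle.start();

            // "join T": wait for the thread identified by T to complete.
            handle.join();
            System.out.println("parent continues after the child completes");
        }
    }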

Bjarfk (BARK with Fork & Join)
Program ::= Instruction*
  A program is a sequence of instructions, numbered from 0. Execution begins at instruction 0 and completes when the initial thread halts.
Instruction ::= Loc := Expression
  Loc gets the value of Expression.
| Loc := FORK Expression
  Loc gets the ThreadHandle returned by FORK; starts a new thread at the instruction numbered Expression.
| JOIN Expression
  Waits until the thread associated with ThreadHandle Expression completes.
| HALT
  Stops thread execution.
Expression ::= Literal | Expression + Expression | Expression * Expression

Bjarfk Program

[0]  R0 := 1
[1]  R1 := FORK 10
[2]  R2 := FORK 20
[3]  JOIN R1
[4]  R0 := R0 * 3
[5]  JOIN R2
[6]  HALT        % result in R0
[10] R0 := R0 + 1
[11] HALT
[20] R0 := R0 * 2
[21] HALT

Atomic instructions: a1: R0 := R0 + 1; a2: R0 := R0 * 2; x3: R0 := R0 * 3.
Partial ordering: a1 <= x3. So the possible results are:
(a1, a2, x3) = 12
(a2, a1, x3) = 9
(a1, x3, a2) = 12
What if assignment instructions are not atomic?
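The schedule-dependence is easy to reproduce in Java; here is a sketch mirroring the Bjarfk program above, with a static field r0 standing in for R0 and deliberately left unsynchronized. The join forces a1 to happen before x3, but a2 is unordered, so different runs may print 9 or 12:

    public class BjarfkSketch {
        static int r0 = 1;                              // [0] R0 := 1

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> r0 = r0 + 1);  // [10] a1
            Thread t2 = new Thread(() -> r0 = r0 * 2);  // [20] a2
            t1.start();                                 // [1] R1 := FORK 10
            t2.start();                                 // [2] R2 := FORK 20
            t1.join();                                  // [3] JOIN R1
            r0 = r0 * 3;                                // [4] x3
            t2.join();                                  // [5] JOIN R2
            System.out.println(r0);                     // [6] HALT, result in R0
        }
    }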

What formal tool should we use to understand FORK and JOIN?

Operational Semantics Game
A Program from the real world is mapped by the Input Function to an Initial Configuration of the Abstract Machine. Transition Rules step the machine through Intermediate Configurations until it reaches a Final Configuration, which the Output Function maps to an Answer.

Structured Operational Semantics
SOS for a language is a five-tuple:
  C  set of configurations for an abstract machine
  ⇒  transition relation (a subset of C × C)
  I  Program → C (input function)
  F  set of final configurations
  O  F → Answer (output function)

Sequential Configurations
A configuration is defined by:
– an array of instructions
– a program counter (PC)
– the values in registers (any integer)
C = Instructions × PC × RegisterFile
(Slide diagram: an instruction array indexed by integers, with the PC pointing into it, alongside a register file indexed by integers.)

Concurrent Configurations
A configuration is defined by:
– an array of instructions
– an array of threads, where a Thread = <ThreadHandle, PC>
– the values in registers (any integer)
C = Instructions × Threads × RegisterFile
(Slide diagram: one shared instruction array and register file, with Thread 1 and Thread 2 each holding their own PC.)
Architecture question: is this a SIMD, MIMD, SISD, or MISD model?
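To make the two configuration shapes concrete, here is one possible rendering as Java records; this is a sketch, and the type names and representation choices are mine rather than the lecture's:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Sequential: C = Instructions x PC x RegisterFile
    record SeqConfig(List<String> instructions, int pc,
                     Map<Integer, Integer> registers) {}

    // A thread is a <ThreadHandle, PC> pair.
    record ThreadState(int handle, int pc) {}

    // Concurrent: C = Instructions x Threads x RegisterFile
    // (one shared instruction array and register file; one PC per thread)
    record ConcConfig(List<String> instructions, Set<ThreadState> threads,
                      Map<Integer, Integer> registers) {}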

Input Function
I : Program → C, where C = Instructions × Threads × RegisterFile and, for a Program with n instructions numbered 0 to n − 1:
  Instructions[m] = Program[m] for m >= 0 && m < n
  Instructions[m] = ERROR otherwise
  RegisterFile[n] = 0 for all integers n
  Threads = { <0, 0> }
The top thread (identified by ThreadHandle = 0) starts at PC = 0.

Final Configurations
F = Instructions × Threads × RegisterFile where <0, PC_0> ∈ Threads and Instructions[PC_0] = HALT
A different possibility:
F = Instructions × Threads × RegisterFile where for all <t, PC_t> ∈ Threads, Instructions[PC_t] = HALT

Assignment
<t, PC_t> ∈ Threads & Instructions[PC_t] = Loc := Value
<Instructions, Threads, RegisterFile> ⇒ <Instructions, Threads', RegisterFile'>
where
  Threads' = Threads − {<t, PC_t>} + {<t, PC_t + 1>}
  RegisterFile'[n] = RegisterFile[n] if n ≠ Loc
  RegisterFile'[n] = value of Value if n = Loc
Note: we also need a rule to deal with Loc := Expression; we can rewrite until we have a literal on the RHS.

Fork
<t, PC_t> ∈ Threads & Instructions[PC_t] = Loc := FORK Literal
<Instructions, Threads, RegisterFile> ⇒ <Instructions, Threads', RegisterFile'>
where
  Threads' = Threads − {<t, PC_t>} + {<t, PC_t + 1>} + {<nt, value of Literal>}
    for a fresh ThreadHandle nt: <nt, x> ∉ Threads for all possible x
  RegisterFile'[n] = RegisterFile[n] if n ≠ Loc
  RegisterFile'[n] = value of ThreadHandle nt if n = Loc

Join
<t, PC_t> ∈ Threads & Instructions[PC_t] = JOIN Value
& <v, PC_v> ∈ Threads & Instructions[PC_v] = HALT, where v = value of Value
<Instructions, Threads, RegisterFile> ⇒ <Instructions, Threads', RegisterFile>
where Threads' = Threads − {<t, PC_t>} + {<t, PC_t + 1>}

What Else Is Needed?
Can we build all the useful concurrency primitives we need using FORK and JOIN?
Can we implement a semaphore?
– No: we need an atomic test-and-acquire operation.
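That atomic test-and-acquire is exactly what hardware compare-and-swap provides. As a sketch (using Java's java.util.concurrent.atomic.AtomicBoolean, whose compareAndSet does the test and the acquire in one indivisible step), a spin lock can be built like this:

    import java.util.concurrent.atomic.AtomicBoolean;

    public class SpinLockSketch {
        private final AtomicBoolean held = new AtomicBoolean(false);

        public void acquire() {
            // Atomically: if the lock is free (false), take it (set it true);
            // otherwise another thread holds it, so retry until released.
            while (!held.compareAndSet(false, true)) {
                Thread.onSpinWait();  // hint that we are busy-waiting
            }
        }

        public void release() {
            held.set(false);
        }
    }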

Locking Statements
Program ::= LockDeclaration* Instruction*
LockDeclaration ::= PROTECT LockHandle Loc
  Prohibits reading or writing location Loc in a thread that does not hold the lock LockHandle.
Instruction ::= ACQUIRE LockHandle
  Acquires the lock identified by LockHandle. If another thread has acquired the lock, the thread stalls until the lock is available.
Instruction ::= RELEASE LockHandle
  Releases the lock identified by LockHandle.
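ACQUIRE and RELEASE behave much like an explicit lock in Java. A sketch using java.util.concurrent.locks.ReentrantLock follows; note that nothing in Java enforces the PROTECT association between a lock and a location, so that pairing is only a convention in this sketch:

    import java.util.concurrent.locks.ReentrantLock;

    public class LockingSketch {
        private final ReentrantLock lock = new ReentrantLock();  // the LockHandle
        private int protectedLoc = 0;  // "PROTECT lock protectedLoc", by convention only

        public void increment() {
            lock.lock();            // ACQUIRE: stalls until the lock is available
            try {
                protectedLoc++;     // safe: this thread holds the lock
            } finally {
                lock.unlock();      // RELEASE
            }
        }
    }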

Locking Semantics
C = Instructions × Threads × RegisterFile × Locks
where Locks = { <LockHandle, ThreadHandle | free, Loc> }
I : Program → C is the same as before, with
  Locks = { <LockHandle, free, Loc> | PROTECT LockHandle Loc ∈ LockDeclarations }

Acquire
<t, PC_t> ∈ Threads & Instructions[PC_t] = ACQUIRE LockHandle & <LockHandle, free, Loc> ∈ Locks
<Instructions, Threads, RegisterFile, Locks> ⇒ <Instructions, Threads', RegisterFile, Locks'>
where
  Threads' = Threads − {<t, PC_t>} + {<t, PC_t + 1>}
  Locks' = Locks − {<LockHandle, free, Loc>} + {<LockHandle, t, Loc>}

Release
<t, PC_t> ∈ Threads & Instructions[PC_t] = RELEASE LockHandle & <LockHandle, t, Loc> ∈ Locks
<Instructions, Threads, RegisterFile, Locks> ⇒ <Instructions, Threads', RegisterFile, Locks'>
where
  Threads' = Threads − {<t, PC_t>} + {<t, PC_t + 1>}
  Locks' = Locks − {<LockHandle, t, Loc>} + {<LockHandle, free, Loc>}

New Assignment Rule
<t, PC_t> ∈ Threads & Instructions[PC_t] = Loc := Value
& (<LockHandle, t, Loc> ∈ Locks | <LockHandle, x, Loc> ∉ Locks for all x)
The rest is the same as the old assignment rule.

Abstractions
Can we describe common concurrency abstractions using only our primitives?
– Binary semaphore: equivalent to our ACQUIRE/RELEASE
– Monitor: an abstraction built using a lock
But there is no way to set thread priorities with our mechanisms: the operational semantics gives no guarantees about which rule is used when multiple rules match.
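A monitor in the slide's sense, shared state reachable only through operations that acquire its lock, is essentially what a Java class with synchronized methods provides. A minimal sketch (the class name is mine):

    public class CounterMonitor {
        private int count = 0;  // shared state, reachable only via the methods below

        // Each synchronized method implicitly acquires the object's lock
        // on entry and releases it on exit.
        public synchronized void increment() { count++; }
        public synchronized int get() { return count; }
    }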

Summary
There are hundreds of different concurrent programming languages:
– [Bal, Steiner, Tanenbaum 1989] lists over 200 papers on 100 different concurrent languages!
The primitives are easy (fork, join, acquire, release); finding the right abstractions is hard.

Charge
Linda papers:
– Describe an original approach to concurrent programming
– The basis for Sun's JavaSpaces technology (a framework for distributed computing using Jini)
Project progress:
– Everyone should have received a reply from me about your progress.