Midterm & Concept Review CS 510, Fall 2002 David Walker

Midterm Most people did just fine: greater than 90 = exceptional, 80-90 = good, less than 80 = needs more work. Three main parts: MaybeML (35 points), Objects (35 points), Imperative objects (30 points).

Type-directed Translations What is a type-directed translation?

Type-directed Translations What is a type-directed translation? It is a function (a compiler) that takes a typing derivation for a source-language expression and produces a target-language expression. We can define type-directed translations using judgments: (G |- e : t) ==> e', where e is a source term and e' is a target term. We define the translation using inference rules, e.g.:

  (G |- e1 : t1 -> t2) ==> e1'    (G |- e2 : t1) ==> e2'
  ------------------------------------------------------
            (G |- e1 e2 : t2) ==> e1' e2'
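To make the judgment form concrete, here is a minimal OCaml sketch of a type-directed translation as a recursive function over typing derivations; the datatypes and the function translate are hypothetical names, not code from the course.

  (* Hypothetical sketch: a type-directed translation is a function
     from typing derivations to target-language expressions. *)
  type ty = Bool | Arrow of ty * ty

  type tgt_exp =                        (* target language *)
    | TVar of string
    | TApp of tgt_exp * tgt_exp

  (* A derivation records the rule used and its sub-derivations,
     mirroring (G |- e : t). *)
  type deriv =
    | DVar of string * ty               (* G |- x : t      *)
    | DApp of deriv * deriv * ty        (* G |- e1 e2 : t2 *)

  (* (G |- e : t) ==> e'  becomes an ordinary recursive function. *)
  let rec translate (d : deriv) : tgt_exp =
    match d with
    | DVar (x, _) -> TVar x
    | DApp (d1, d2, _) ->
        let e1' = translate d1 in       (* (G |- e1 : t1 -> t2) ==> e1'  *)
        let e2' = translate d2 in       (* (G |- e2 : t1)       ==> e2'  *)
        TApp (e1', e2')                 (* (G |- e1 e2 : t2) ==> e1' e2' *)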

Type-preserving Translations When is a translation type-preserving?

Type-preserving Translations When is a translation type-preserving? If, given a valid derivation, it produces a well-typed target expression. We often prove a theorem like this: if (G |- e : t) ==> e' then Trans(G) |- e' : Trans(t), where Trans(t) is a type translation function and the conclusion is an ordinary typing judgment in the target language.

MaybeML Syntax:
  t ::= Bool? | t1 ?-> t2
  e ::= x | true | false | null | if e1 then e2 else e3 | if? e1 then e2 else e3 | fun f (x:t1) : t2 = e | e1 e2

MinML (unit,+,->,exn) Syntax:
  t ::= unit | t1 + t2 | t1 -> t2
  e ::= x | () | e1; e2 | inl (t,e1) | inr (t,e2) | ...
Question: Define a type-directed, type-preserving translation from MaybeML to MinML.

Type Translation What I expected:
  Trans (Bool?) = unit + (unit + unit)
  Trans (t1 ?-> t2) = unit + (Trans(t1) -> Trans(t2))
Another possibility:
  Trans (Bool?) = unit + (unit + unit)
  Trans (t1 ?-> t2) = Trans(t1) -> Trans(t2)
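Rendered as a program, the first ("expected") translation might look like the following OCaml sketch; src_ty, tgt_ty, and trans_ty are hypothetical names.

  (* Hypothetical sketch of the type translation. *)
  type src_ty =                        (* MaybeML types *)
    | MaybeBool                        (* Bool?         *)
    | MaybeArrow of src_ty * src_ty    (* t1 ?-> t2     *)

  type tgt_ty =                        (* MinML types *)
    | Unit
    | Sum of tgt_ty * tgt_ty           (* t1 + t2  *)
    | Arrow of tgt_ty * tgt_ty         (* t1 -> t2 *)

  (* Trans(Bool?)     = unit + (unit + unit)
     Trans(t1 ?-> t2) = unit + (Trans(t1) -> Trans(t2)) *)
  let rec trans_ty (t : src_ty) : tgt_ty =
    match t with
    | MaybeBool -> Sum (Unit, Sum (Unit, Unit))
    | MaybeArrow (t1, t2) -> Sum (Unit, Arrow (trans_ty t1, trans_ty t2))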

Type Translation Almost:
  Trans (Bool?) = anyt
  Trans (t1 ?-> t2) = anyt
  where anyt = unit + (unit + unit) + (anyt -> anyt)
What is wrong here?

Type Translation Almost:
  Trans (Bool?) = anyt
  Trans (t1 ?-> t2) = anyt
  where anyt = unit + (unit + unit) + (anyt -> anyt)
What is wrong here? We need a recursive type:
  anyt = rec a. unit + (unit + unit) + (a -> a)
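In ML, the recursive type anyt can be written directly as a recursive datatype; a minimal sketch with invented constructor names:

  (* anyt = rec a. unit + (unit + unit) + (a -> a), as an OCaml datatype.
     Each constructor marks one arm of the sum. *)
  type anyt =
    | Null                      (* unit: the null value       *)
    | Bool of bool              (* unit + unit: false / true  *)
    | Fun of (anyt -> anyt)     (* anyt -> anyt: functions    *)

  (* Example values of type anyt: *)
  let v_null  : anyt = Null
  let v_true  : anyt = Bool true
  let v_ident : anyt = Fun (fun x -> x)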

Term Translation Mirrors the form of the static semantics. Uses judgments with the form (G |- e1 : t) ==> e1'. Invariant: if (G |- e1 : t) ==> e1' then Trans(G) |- e1' : Trans(t). Key: e1' must type check under the fully translated environment.

Term Translation (no opt.)

  (G |- true : Bool?) ==> inr (Trans(Bool?), inl (unit + unit, ()))

  (G |- e1 : t1 ?-> t2) ==> e1'    (G |- e2 : t1) ==> e2'    (x, y not in Dom(G))
  -------------------------------------------------------------------------------
  (G |- e1 e2 : t2) ==> case e1' of ( inl (x) => fail | inr (y) => y e2' )
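To see the application rule in action, here is a small OCaml sketch; it models the target sum type unit + (T(t1) -> T(t2)) with OCaml's built-in option type, and the names apply_maybe, Fail, and tr_true are invented for illustration, not part of the exam solution.

  (* A translated MaybeML function has type unit + (T(t1) -> T(t2));
     we model that sum with the built-in option type. *)
  exception Fail                     (* plays the role of MinML's fail *)

  (* case e1' of ( inl (x) => fail | inr (y) => y e2' ) *)
  let apply_maybe (e1' : ('a -> 'b) option) (e2' : 'a) : 'b =
    match e1' with
    | None -> raise Fail             (* inl: the function was null     *)
    | Some y -> y e2'                (* inr: apply the real function   *)

  (* The translation of true: Bool? = unit + (unit + unit), modeled as
     a bool option; true is the non-null injection. *)
  let tr_true : bool option = Some true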

Term Translation (no opt.) Wrong (but it was pretty tricky, so don't worry):

  (G, f : t1 ?-> t2, x : t1 |- e : t2) ==> e'
  ---------------------------------------------------------------------------------
  (G |- fun f (x : t1) : t2 = e) ==> inr (..., fun f (x : Trans(t1)) : Trans(t2) = e')

What goes wrong? Consider: fun f (x : unit) : unit = f x and its translation:
  inr (..., fun f (x : unit) : unit = case f of ( inl (y) => fail | inr (z) => z x ))
Inside the body, f is bound to a function, NOT a sum value, so the case analysis on f does not type check.

Term Translation (no opt.) The point of doing a proof is to discover mistakes! We must prove that the result of the translation has the right type:

  (by IH)  Trans(G), f : T(t1) -> T(t2), x : T(t1) |- e' : T(t2)
  (fun)    Trans(G) |- fun f (x : T(t1)) : T(t2) = e' : T(t1) -> T(t2)
  (inr)    Trans(G) |- inr (..., ...) : unit + (T(t1) -> T(t2))

Term Translation (no opt.) We can't apply the induction hypothesis! A premise of the form (T(G) |- e : t) ==> e' is necessary:

  (by IH)  Trans(G), f : T(t1) -> T(t2), x : T(t1) |- e' : T(t2)
           (here f : T(t1) -> T(t2) is not the translation of a function type, which is unit + (T(t1) -> T(t2)))
  (fun)    Trans(G) |- fun f (x : T(t1)) : T(t2) = e' : T(t1) -> T(t2)
  (inr)    Trans(G) |- inr (..., ...) : unit + (T(t1) -> T(t2))

Term Translation (no opt.) How do we fix this?
  Create something with type unit + (T(t1) -> T(t2)) to use inside the function.
  Then bind that something to the variable f for use inside e'.
  Our old translation gave us something with the type T(t1) -> T(t2)...

Term Translation (no opt.) How do we fix this?

  (G, f : t1 ?-> t2, x : t1 |- e : t2) ==> e'
  ------------------------------------------------------------------------------------------------------
  (G |- fun f (x : t1) : t2 = e) ==> inr (..., fun f (x : Trans(t1)) : Trans(t2) = let f = inr (..., f) in e')

  where let x = e1 in e2 is ((fun _ (x : ...) : ... = e2) e1)
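Viewed through the same option-based model as before, the fix just re-wraps the recursive variable before the body uses it; a hedged OCaml sketch of the translation of fun f (x:unit):unit = f x, with invented names.

  exception Fail

  (* fun f (x:unit):unit = f x, translated:
     inr (..., fun f (x) = let f = inr (..., f) in
               case f of ( inl => fail | inr g => g x )) *)
  let translated_loop : (unit -> unit) option =
    let rec f (x : unit) : unit =
      let f = Some f in               (* rebind f at the sum type, as in the rule *)
      match f with
      | None -> raise Fail
      | Some g -> g x                 (* the recursive call, now through the sum *)
    in
    Some f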

A useful fact A let expression is normally just “syntactic sugar” for a function application let x = e1 in e2 is the same as (fn x => e2) e1

Optimization Observation: There are only a couple of null checks that appear in our translation. Can we really do substantially better?

Optimization Observation: There are only a couple of null checks that appear in our translation. Can we really do substantially better? YES! The expressions that result from the translation have many, many, many null checks. For example, for
  if true then if false then if true then ...
our translation inserts 3 unnecessary null checks!

Some Possibilities Some people defined a few special-case rules: detect cases where values are directly eliminated and avoid null checks:
  (fun f (x:t1) : t2 = e) e' translated differently
  if true then e1 else e2 translated differently
Others:
  if? x then (if? x then e1 else e2) else e3
These people received some marks.

A Much More General Solution Do lazy injections into the sum type. Keep track of whether or not you have done the injection using the type of the result expression:
  (G |- e : t) ==> (e' : t')
  If t' = (unit + unit) or (trans(t1) -> trans(t2)), then you haven't injected the expression e' into a sum yet.
  You can leave off unnecessary injections around any expression, not just values.

A Much More General Solution New rules for introduction forms:

  (G |- true : Bool?) ==> (inr (..., ()) : unit + unit)

Extra rules for elimination forms:

  (G |- e1 : Bool?) ==> (e1', t)    t notnull
  (G |- e2 : ts) ==> (e2', t2')    (G |- e3 : ts) ==> (e3', t3')
  e2', e3', t2', t3' unifies to e2'', e3'', t''
  ----------------------------------------------------------------
  (G |- if e1 then e2 else e3 : ts) ==> (if e1' then e2'' else e3'' : t'')

New Judgments Natural types:

  (unit + unit) notnull        (t1 -> t2) notnull

"Unification":

  t2 = t3
  --------------------------------------------
  e2, e3, t2, t3 unifies to e2, e3, t2

  t2 = unit + t3
  --------------------------------------------
  e2, e3, t2, t3 unifies to e2, inr(t2, e3), t2

  t3 = unit + t2
  --------------------------------------------
  e2, e3, t2, t3 unifies to inr(t3, e2), e3, t3
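For intuition only, here is one possible way to read the "unifies to" judgment as an algorithm in OCaml; the datatypes tgt_ty and tgt_exp and the function unify are a hypothetical sketch of the idea, not the official grading solution.

  (* Hypothetical sketch: make the two branches of an if agree by
     inserting the missing inr injection on at most one side. *)
  type tgt_ty =
    | Unit
    | Sum of tgt_ty * tgt_ty
    | Arrow of tgt_ty * tgt_ty

  type tgt_exp =
    | Inr of tgt_ty * tgt_exp
    | Other                            (* stands for any other expression form *)

  let unify (e2, t2) (e3, t3) =
    if t2 = t3 then (e2, e3, t2)                               (* already agree *)
    else match (t2, t3) with
      | Sum (Unit, t), _ when t = t3 -> (e2, Inr (t2, e3), t2) (* inject e3 *)
      | _, Sum (Unit, t) when t = t2 -> (Inr (t3, e2), e3, t3) (* inject e2 *)
      | _ -> failwith "branches do not unify"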

Objects Most people did well on the definition of the static and dynamic semantics for objects (if you want to know some details, come see me during my office hours). People did less well on the imperative features.

Terminology What is a closed expression?

Terminology What is a closed expression? An expression containing no free variables.
  ((fun f (x:bool):bool = x x) true) is closed
  ((fun f (x:bool):bool = y x) true) is not closed
Mathematically: if FV is a function that computes the set of free variables of an expression, then e is closed if and only if FV(e) = { }.
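The function FV is easy to write; a minimal OCaml sketch over a tiny invented expression syntax:

  module S = Set.Make (String)

  type exp =
    | Var of string
    | App of exp * exp
    | Fun of string * string * exp     (* fun f (x) = e : both f and x bind *)
    | Bool of bool

  (* FV(e): the set of free variables of e *)
  let rec fv (e : exp) : S.t =
    match e with
    | Var x -> S.singleton x
    | Bool _ -> S.empty
    | App (e1, e2) -> S.union (fv e1) (fv e2)
    | Fun (f, x, body) -> S.diff (fv body) (S.of_list [f; x])

  (* e is closed iff FV(e) = { } *)
  let closed (e : exp) : bool = S.is_empty (fv e)

  (* ((fun f (x:bool):bool = x x) true) is closed: *)
  let _ = closed (App (Fun ("f", "x", App (Var "x", Var "x")), Bool true))  (* true *)
  (* ((fun f (x:bool):bool = y x) true) is not: *)
  let _ = closed (App (Fun ("f", "x", App (Var "y", Var "x")), Bool true))  (* false *)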

Terminology What is a well-formed expression?

Terminology What is a well-formed expression? An expression that type checks under some type context G.
  ((fun f (x:bool):bool = x x) true) is not well-formed
  ((fun f (x:bool):bool = y x) true) is well-formed in the context G = [y : bool -> bool]

Terminology What is (e : t) an abbreviation for?

Terminology What is (e : t) an abbreviation for? The typing judgment . |- e : t, where . is the empty context. If (e : t), then what else do we know?

Terminology What is (e : t) an abbreviation for? The typing judgment . |- e : t, where . is the empty context. If (e : t), then what else do we know? We know that e contains no free variables; in other words, e is closed. (We might know other things if e also happens to be a value.)

Terminology What is a value?

Terminology What is a value? It is an expression that does not need to be further evaluated (and it is not stuck). How do we normally define values?

Terminology How do we normally define values? We declare a new metavariable v and give its form using BNF:
  v ::= x | n | <v1, v2> | fun f (x : t1) : t2 = e
(What is the difference between a metavariable v and an expression variable x?)
Alternatively, we define a value judgment:

  |- n value        |- x value

  |- v1 value    |- v2 value
  --------------------------
      |- <v1, v2> value
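The value judgment transcribes directly into a recursive predicate; a minimal OCaml sketch, assuming the production that was lost from the slide is the pair form <v1, v2>:

  type exp =
    | Var of string
    | Num of int
    | Pair of exp * exp
    | Fun of string * string * exp     (* fun f (x : t1) : t2 = e *)
    | App of exp * exp

  (* |- e value, written as a recursive predicate: variables, numbers,
     functions, and pairs of values are values *)
  let rec is_value (e : exp) : bool =
    match e with
    | Var _ | Num _ | Fun _ -> true
    | Pair (v1, v2) -> is_value v1 && is_value v2
    | App _ -> false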

Terminology Does it matter whether we use BNF or a series of judgments to define the syntax of values and expressions?

Terminology Does it matter whether we use BNF or a series of judgments to define the syntax of values and expressions? No! BNF is just an abbreviation for the inductive definition that we would give using judgments instead. Why don't we define typing rules using BNF if it is so darn convenient?

Terminology Does it matter whether we use BNF or a series of judgments to define the syntax of values and expressions? No! BNF is just an abbreviation for the inductive definition that we would give using judgments instead. Why don't we define typing rules using BNF if it is so darn convenient? Typing rules are context-sensitive; BNF is used for context-insensitive definitions.

Terminology What is strange about the following sentence? "If (v : t) and v is a closed, well-formed value, then the canonical forms lemma can tell us something about the shape of v given the type t."

Terminology What is strange about the following sentence? "If (v : t) and v is a closed, well-formed value, then the canonical forms lemma can tell us something about the shape of v given the type t." The phrase "v is a closed, well-formed value" is totally redundant! If you are using the metavariable v, then you should have already defined it so that it refers to values. (v : t) should also have been defined before; it trivially implies that v is closed, and it defines what it means for v to be well-formed! If you write a sentence like this on the final, you might find yourself losing points...

Back to objects In the future, when I say "write an expression that does..." you should always write a well-formed, closed expression unless I specify otherwise.
  {getloop = fn (x). ({loop = fn(y).y.loop} : {loop : t})} : {getloop : {loop : t}}
isn't really an expression! It contains the metavariable t.

Back to objects
  {getloop = fn (x). ({loop = fn(y).y.loop} : {loop : { }})} : {getloop : {loop : { }}}
is what you want to do.

Imperative objects Syntax:
  t ::= {l : t, ...}
  e ::= x | {l = b, ...} | e.l | e.l <- b | ...
  b ::= fn(x).e
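The grammar transcribes directly into an OCaml datatype; a small sketch with invented constructor names:

  (* Hypothetical AST for the imperative object language *)
  type ty = TObj of (string * ty) list            (* {l1 : t1, ..., ln : tn} *)

  type exp =
    | Var of string                               (* x            *)
    | Obj of (string * meth) list                 (* {l = b, ...} *)
    | Invoke of exp * string                      (* e.l          *)
    | Update of exp * string * meth               (* e.l <- b     *)
  and meth = Meth of string * exp                 (* b ::= fn(x).e *)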

Operational semantics Without imperative features (field update), we can use the ordinary M-machine definitions: e -> e'.

Operational semantics The obvious M-machine definition for object update doesn't work:

  e = {lk = b', l'' = b'', ...}
  ---------------------------------------
  (e.lk <- b) -> {lk = b, l'' = b'', ...}

Operational semantics Here's why: this update only has local effect.

  let x = {n = fn(_).3} in let _ = (x.n <- fn(_).2) in x.n
  -> let _ = ({n = fn(_).3}.n <- fn(_).2) in {n = fn(_).3}.n
  -> let _ = {n = fn(_).2} in {n = fn(_).3}.n
  -> 3

Operational semantics We need to augment our operational semantics with a global store. A store S is a finite partial map from locations (r) to values. (What is a finite partial map?) Run-time expressions include locations r:
  v ::= {l = b, ...} | r
Our semantics now has the form (S, e) -> (S', e').
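To answer the parenthetical question: a finite partial map is a map that is defined on only finitely many locations and undefined everywhere else. A minimal OCaml sketch of a store modeled that way (all names invented):

  (* A store: a finite partial map from locations r to values,
     modeled as an association list. Lookup is partial (option). *)
  type loc = int

  type 'v store = (loc * 'v) list

  let empty : 'v store = []

  let lookup (s : 'v store) (r : loc) : 'v option = List.assoc_opt r s

  (* S[r -> v]: extend or overwrite the binding for r *)
  let update (s : 'v store) (r : loc) (v : 'v) : 'v store =
    (r, v) :: List.remove_assoc r s

  (* pick a location not in Dom(S) *)
  let fresh (s : 'v store) : loc =
    1 + List.fold_left (fun m (r, _) -> max m r) 0 s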

Operational semantics Rules:

  (r not in Dom(S))
  ----------------------------------------------
  (S, {l = b, ...}) -> (S[r -> {l = b, ...}], r)

  S(r) = {l = fn(x).e, ...}
  --------------------------
  (S, r.l) -> (S, e[S(r)/x])

  S(r) = {l = b'', l' = b', ...}
  ---------------------------------------------------
  (S, r.l <- b) -> (S[r -> {l = b, l' = b', ...}], r)
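Putting the store and the three rules together, here is a hedged OCaml sketch of one step of the machine; every name is invented, and substitution is passed in as a parameter rather than implemented, to keep the sketch short.

  type loc = int

  type exp =
    | Var of string
    | Obj of (string * meth) list       (* {l = b, ...}        *)
    | Loc of loc                        (* run-time location r *)
    | Invoke of exp * string            (* e.l                 *)
    | Update of exp * string * meth     (* e.l <- b            *)
  and meth = Meth of string * exp       (* fn(x).e             *)

  type store = (loc * (string * meth) list) list

  let fresh (s : store) : loc = 1 + List.fold_left (fun m (r, _) -> max m r) 0 s
  let put (s : store) (r : loc) (v : (string * meth) list) : store =
    (r, v) :: List.remove_assoc r s

  (* One step of (S, e) -> (S', e') for the three rules on the slide. *)
  let step subst (s : store) (e : exp) : store * exp =
    match e with
    | Obj fields ->                     (* allocation: (S, {l=b,...}) -> (S[r -> {l=b,...}], r) *)
        let r = fresh s in
        (put s r fields, Loc r)
    | Invoke (Loc r, l) ->              (* invocation: (S, r.l) -> (S, e[S(r)/x]) *)
        let fields = List.assoc r s in
        let (Meth (x, body)) = List.assoc l fields in
        (s, subst body x (Obj fields))
    | Update (Loc r, l, b) ->           (* update: (S, r.l <- b) -> (S[r -> {l=b,...}], r) *)
        let fields = List.assoc r s in
        (put s r ((l, b) :: List.remove_assoc l fields), Loc r)
    | _ -> failwith "no rule applies here (evaluate a subexpression first)"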

Operational semantics Our example, starting from the empty store (r is substituted for x everywhere, but the contents of r are kept in one place):

  (., let x = {n = fn(_).3} in let _ = (x.n <- fn(_).2) in x.n)
  -> ([r -> {n = fn(_).3}], let _ = (r.n <- fn(_).2) in r.n)
  -> ([r -> {n = fn(_).2}], r.n)
  -> ([r -> {n = fn(_).2}], 2)

Summary Things to remember:
  how to define type-directed and type-preserving translations
  be able to use and define common terms: values, closed expressions, operational semantics, canonical form, inversion principle, type system, soundness, completeness, subtyping, the subsumption principle, etc.
  proofs are for finding mistakes
  imperative features are tricky