Program Analysis and Verification, Spring 2015. Lecture 17: Research. Roman Manevich, Ben-Gurion University

Syllabus: Semantics (Natural Semantics, Structural Semantics, Axiomatic Verification); Static Analysis (Automating Hoare Logic, Control Flow Graphs, Equation Systems, Collecting Semantics); Abstract Interpretation fundamentals (Lattices, Fixed-Points, Chaotic Iteration, Galois Connections, Domain constructors, Widening/Narrowing); Analysis Techniques (Numerical Domains, Alias analysis, Shape Analysis, Interprocedural Analysis, CEGAR); Crafting your own (Soot, From proofs to abstractions, Systematically developing transformers)

Previously Pointer analysis Shape analysis 3

Agenda Projects Synthesizing parallel graph programs 4

Dynamic Security Analysis of Web Applications. Joint project with Aviv Ron from IBM. Goal: discover security violations in web applications, e.g., banks. Challenge: the search space is too large. Current solution: restrict the number of times each point of manipulation is tried. Idea: guide the search via an abstraction that differentiates interesting moves from non-interesting ones, and use machine learning to automatically obtain the abstraction (e.g., tree automata over HTML pages).

Possible M.Sc. topic Language abstractions and synthesis for Software Defined Networks (SDN) Cooperation with Cisco 6

Elixir: Synthesis of Efficient Parallel Graph Algorithms. Roman Manevich (Ben-Gurion University of the Negev), joint work with Dimitrios Prountzos and Keshav Pingali (The University of Texas at Austin)

Parallel Computing Landscape: ubiquitous parallelism, from data-centers to cellphones; emerging problem domains; sparse graph algorithms

Key Programming Challenge: Correctness + Performance. Parallel programming is hard, and the best solution is input- and platform-dependent. Research question: can synthesis help? Solution: Elixir. Input: an implicitly parallel specification; automatic synchronization; search to find the best solution; easy to explore algorithm/implementation insights

11 Outline Programming Challenge Language Abstractions Synthesis Technique Experimental Evaluation

Example: Single-Source Shortest-Path (SSSP). Problem formulation: compute the shortest distance from a source node S to every other node. Many algorithms: Bellman-Ford (1957), Dijkstra (1959), chaotic relaxation (Miranker 1969), delta-stepping (Meyer et al. 1998). Common structure: each node has a label dist holding the currently known shortest distance from S. Key operation: relax-edge(u,v). For an edge A→C with weight W_AC: if dist(A) + W_AC < dist(C) then dist(C) := dist(A) + W_AC
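
The relax operator itself is tiny; as a concrete illustration (not part of the original slides), here is a Python sketch, assuming dist is a dict of current labels:

    def relax_edge(dist, u, v, w):
        # Apply relax to edge (u, v) with weight w; return True if v's label dropped.
        if dist[u] + w < dist[v]:
            dist[v] = dist[u] + w
            return True
        return False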

Dijkstra's Algorithm. Scheduling of relaxations: use a priority queue of nodes, ordered by label dist; iterate over nodes u in priority order; on each step, relax all neighbors v of u, i.e., apply relax-edge to every edge (u,v).

Chaotic Relaxation. Scheduling of relaxations: use an unordered set of edges; iterate over edges (u,v) in any order; on each step, apply relax-edge to the edge (u,v). (Example worklist: (S,A) (B,C) (C,D) (C,E))

Expressivity Gap. Graph Algorithm = Operators + Schedule.

Dijkstra-style:

    Q = PQueue[Node]
    Q.enqueue(S)
    while Q ≠ ∅ {
      u = Q.pop
      foreach (u,v,w) {
        if d(u) + w < d(v)
          d(v) := d(u) + w
          Q.enqueue(v)
      }
    }

Chaotic-Relaxation:

    W = Set[Edge]
    W ∪= { (S,y) : y ∈ Nbrs(S) }
    while W ≠ ∅ {
      (u,v,w) = W.get
      if d(u) + w < d(v)
        d(v) := d(u) + w
        foreach y ∈ Nbrs(v)
          W.add(v,y)
    }
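
For concreteness, here are runnable Python sketches of the two schedules (not from the slides); the graph is assumed to be a dict mapping each node to a list of (neighbor, weight) pairs:

    import heapq

    INF = float("inf")

    def sssp_dijkstra_style(graph, source):
        # Ordered schedule: a priority queue of nodes keyed by their current label.
        dist = {v: INF for v in graph}
        dist[source] = 0
        pq = [(0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue  # stale queue entry
            for v, w in graph[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    heapq.heappush(pq, (dist[v], v))
        return dist

    def sssp_chaotic(graph, source):
        # Unordered schedule: a worklist of edges, processed in arbitrary order.
        dist = {v: INF for v in graph}
        dist[source] = 0
        work = [(source, v, w) for v, w in graph[source]]
        while work:
            u, v, w = work.pop()
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                work.extend((v, y, wy) for y, wy in graph[v])
        return dist

Both compute the same labels; they differ only in the schedule, which is exactly the gap the Elixir language makes explicit.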

What is the Schedule? Graph Algorithm = Operators + Schedule. The schedule determines how activity processing is ordered and how new activities are identified (an activity is an operator application, cf. "The TAO of Parallelism in Algorithms", PLDI'11). It has a static part, the compile-time binding in the "algorithm body", and a dynamic part; identifying new activities is the role of the operator delta.

Algorithms as Schedules. Graph Algorithm = Operators + Schedule (static and dynamic: order activity processing, identify new activities). The Dijkstra-style and chaotic-relaxation pseudocode from the previous slide are two different schedules over the same relax operator.

The Elixir Approach. Graph Algorithm = Operators + Schedule. Elixir provides: a programming model separating operators, static schedule, and dynamic schedule (a declarative spec); automatic inference of the operator delta, enabling incremental fixpoint computations. Advantages: increased productivity; the refined view of the schedule exposes more concurrency.

Generating Efficient Parallel Programs. From a high-level program to an efficient parallel program. Challenges: synchronization with multiple protocols (ordering + spinning, speculation); multiple implementations for high-level statements (e.g., exploiting the graph ADT API); multiple orderings for high-level statements (e.g., different orders of evaluating operator guards); and subtle interactions among all of these.

Generating Efficient Parallel Programs. Elixir: synthesis via automated planning turns the high-level program into an efficient parallel program, handling synchronization (multiple protocols: ordering + spinning, speculation), multiple implementations for high-level statements (e.g., exploiting the graph ADT API), and multiple orderings for high-level statements (e.g., different orders of evaluating operator guards).

21 Elixir Contributions Language Abstractions Incremental Computations via Operator Delta Inference Efficient Parallel Programs via Planning-based Synthesis

22 Outline Programming Challenge Elixir Methodology Language Abstractions Incremental Computations via Operator Delta Inference Efficient Parallel Programs via Planning-based Synthesis Experimental Evaluation

Elixir Specifications: SSSP

Graph type:

    Graph [ nodes(node : Node, dist : int)
            edges(src : Node, dst : Node, wt : int) ]

Operator:

    relax = [ nodes(node a, dist ad)
              nodes(node b, dist bd)
              edges(src a, dst b, wt w)
              bd > ad + w ]
          ➔ [ bd = ad + w ]

Fixpoint statement:

    sssp = iterate relax ≫ …

Elixir Specifications: Operator. The relax operator is a guarded graph rewrite: its shape binds nodes a and b and an edge (a,b) with weight w; its guard is bd > ad + w; its update is bd = ad + w (figure: node b's label changes from bd to ad + w).

Elixir Specifications: Schedule. sssp = iterate relax ≫ schedule, where the schedule is composed from static scheduling tactics (group, unroll, fuse, split) and dynamic scheduling tactics (metric, approx metric, fifo, lifo).

Unconstrained Schedule. With the same graph type and relax operator, the fixpoint statement is simply sssp = iterate relax, with no scheduling constraints.

Dijkstra-style Variant. sssp = iterate relax ≫ metric ad ≫ group b. Ordering activities by the metric ad and grouping them by the source node yields the priority-queue pseudocode shown earlier: pop a node u, relax all of its outgoing edges, and re-enqueue the updated neighbors.

Chaotic-Relaxation Variant. sssp = iterate relax ≫ approx metric ad ≫ group b ≫ unroll 1. Replacing the exact metric by an approximate one (plus grouping and unrolling) relaxes the strict priority order.

29 Outline Programming Challenge Elixir Methodology Language Abstractions Incremental Computations via Operator Delta Inference Efficient Parallel Programs via Planning-based Synthesis Experimental Evaluation

Problem Statement. Many graph programs have the form:

    until no change do {
      redex := find redex in graph
      apply operator to redex
    }

Naïve implementation: keep scanning for places where the operator can be applied to make a change; problem: too slow. Incremental implementation: after applying an operator, find the smallest set of future active elements and schedule them (add them to the worklist).
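
A minimal Python sketch of the incremental pattern (not from the slides); operator_applies, apply_operator, and delta stand for the operator's guard, its update, and the inferred operator delta:

    from collections import deque

    def incremental_fixpoint(graph, initial_redexes, operator_applies, apply_operator, delta):
        # Worklist-driven fixpoint: only the redexes returned by the operator delta
        # are rescheduled, instead of rescanning the whole graph after every step.
        work = deque(initial_redexes)
        while work:
            redex = work.popleft()
            if operator_applies(graph, redex):
                apply_operator(graph, redex)
                work.extend(delta(graph, redex))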

Challenge: Inferring the Operator Delta. After relax is applied to an edge (a,b), which nearby redexes may have become active? (figure: the updated node b with question marks on its surrounding edges)

Delta Inference Example. Consider a path a → b → c with edge weights w1 and w2, and two operator instances: relax1 on (a,b) and relax2 on (b,c). Query program:

    assume da + w1 < db
    assume ¬(db + w2 < dc)
    db' = da + w1
    assert ¬(db' + w2 < dc)

The query is sent to an SMT solver; here the assertion can be violated (✗), so applying relax1 may enable relax2 and the edge (b,c) must be scheduled.
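
The same query can be posed to an SMT solver directly; below is a sketch using the z3 Python bindings (an illustration, not the tool's actual implementation). It checks whether relax2 can become enabled after relax1 fires:

    from z3 import Ints, Solver, Not, sat

    da, db, dc, w1, w2 = Ints("da db dc w1 w2")
    db_new = da + w1                 # effect of applying relax1 to edge (a, b)

    s = Solver()
    s.add(da + w1 < db)              # relax1 was enabled
    s.add(Not(db + w2 < dc))         # relax2 was disabled before the update
    s.add(db_new + w2 < dc)          # negation of the assertion: relax2 enabled after
    if s.check() == sat:
        print("relax1 may enable relax2: schedule edge (b, c)")
    else:
        print("relax1 can never enable relax2 in this pattern")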

Influence Patterns (figure: the ways the elements written by one relax application can overlap with the elements read by another; each pattern gives rise to one delta-inference query).

From Specification to High-Level Program. The operator, the schedule tactic (sssp = iterate relax), and the inferred delta are combined into a high-level program; the inner loop enumerates the activities that may have become active after the update:

    for (a,b) : (nodes,nodes) do
      if edge(a,b)
        if a ≠ b
          if da + w < db
            dist(b) := da + w
            for (c,d) : (nodes,nodes) do
              if c = b
                if edge(c,d)
                  if c ≠ d
                    if d ≠ a
                      if d ≠ b
                        async relax(c,d)
          fi
      …
    od

36 Outline Programming Challenge Elixir Methodology Language Abstractions Incremental Computations via Operator Delta Inference Efficient Parallel Programs via Planning-based Synthesis Experimental Evaluation

From High-Level Programs to Efficient Parallel Programs. Three transformations are applied: Reordering (T_R), Implementation Selection (T_IS), and Synchronization (T_Sync). The challenge is to get from the high-level program to an efficient parallel program.

Example: Triangles. The high-level triangle-counting program consists of iterators, graph conditions, and scalar conditions:

    for a : nodes do
      for b : nodes do
        for c : nodes do
          if edges(a,b)
            if edges(b,c)
              if edges(c,a)
                if a < b
                  if b < c
                    if a < c
                      triangles++
          fi …

Triangles: Reordering. The reordering phase (T_R) permutes iterators, graph conditions, and scalar conditions, subject to the ordering constraints (≺) among them.

Triangles: Implementation Selection. Reordering plus implementation selection applies the tile "for x : nodes do if edges(x,y) ⇩ for x : Succ(y) do", turning the naive nested loops into:

    for a : nodes do
      for b : Succ(a) do
        for c : Succ(b) do
          if edges(c,a)
            if a < b
              if b < c
                if a < c
                  triangles++
          fi …
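
As an illustration of the effect of this rewrite (not from the slides), in Python, assuming the graph is stored as a dict succ mapping each node to the set of its successors:

    def triangles_naive(nodes, succ):
        # Direct transcription of the high-level program: iterate over all triples.
        count = 0
        for a in nodes:
            for b in nodes:
                for c in nodes:
                    if b in succ[a] and c in succ[b] and a in succ[c] and a < b < c:
                        count += 1
        return count

    def triangles_optimized(nodes, succ):
        # After reordering + implementation selection: graph conditions became iterators.
        count = 0
        for a in nodes:
            for b in succ[a]:
                for c in succ[b]:
                    if a in succ[c] and a < b < c:
                        count += 1
        return count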

Implementing Synchronization (Maximal Independent Set example).

    for a : nodes do
      if status(a) = …
        if ∀ b : Nbrs(a) { … }
          status(a) := …
          map b : Nbrs(a) { … }
      …

The synthesized code locks a (with context ∅) and each b ∈ Nbrs(a) (with context a) as the operator touches them, and unlocks a on the failing branch (else unlock a fi). This novel speculation-based synchronization requires a data-flow analysis to insert the custom lock/unlock operations.

Synthesis Challenge. A staged approach runs T_R, T_IS, and T_Sync as separate phases, which raises a phase-ordering problem; an integrated approach applies T_R, T_IS, and T_Sync simultaneously.

STRIPS-style Automated Planning. Fluents: domain facts, e.g. Hold(x), Clear(x), On(x,y), … State: a fluent valuation. Actions: Stack(x,y), Pickup(x), PutDown(x), Unstack(x,y). Task: find a finite action sequence from the initial state to the goal state, e.g. Unstack(B,A) ⊙ Pickup(B) ⊙ Stack(B,D) ⊙ Pickup(A) ⊙ Stack(A,B) (figure: blocks A, B, C, D before and after).
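
A STRIPS action is just a precondition set plus add and delete lists. The following Python sketch (not from the slides; the exact fluent names are an assumption for illustration) spells out the Pickup(x) and Stack(x,y) schemas used above:

    def pickup(x):
        # Pickup(x): x must be clear, on the table, and the hand must be empty.
        return {
            "pre": {f"clear({x})", f"on-table({x})", "hand-empty"},
            "add": {f"hold({x})"},
            "delete": {f"clear({x})", f"on-table({x})", "hand-empty"},
        }

    def stack(x, y):
        # Stack(x,y): the hand holds x and y is clear; afterwards x sits on y.
        return {
            "pre": {f"hold({x})", f"clear({y})"},
            "add": {f"on({x},{y})", f"clear({x})", "hand-empty"},
            "delete": {f"hold({x})", f"clear({y})"},
        }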

Planning with Temporal Constraints. Specify constraints on state sequences. G p (always): p holds in every state from Init to Goal.

Planning with Temporal Constraints. p ⊏ q (happens before): the state sequence from Init to Goal passes through ¬p,¬q states, then p,¬q states, and only then p,q states.

47 Elixir Architecture

48 Planning Framework Architecture

49 Planning-based Synthesis for a : nodes do if status(a) = … if ∀ b : Nbrs(a) { … } status(a) := … map b : Nbrs(a) { … } else exit fi od begin commit

50 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map else exit fi else exitfi od begin commit

51 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd mapelse exit fi else exit fi od begin commit U

52 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U ∀ u ∈ U: Once u for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od od begin if 2 (a,N(a)) if 1 (a) upd map commit else exit fi else exit fi for…

53 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U ∀ u ∈ U: Once u for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od od begin if 2 (a,N(a)) if 1 (a) upd map commit else exit fi else exit fi for… for… ⊏ od (if 1,fi)(if 2,fi) ✔ ✗ ✔

54 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od for… ⊏ od for… begin if 2 (N(a)) upd if 1 (a) map commit else exit fi else exit fi od (if 1,fi)(if 2,fi) if 1 ⊏ upd ∀ u ∈ U: Once u ✔ ✔ ✗

55 Synchronization Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U for… begin lock a ctx ∅ if 1 (a) lock N(a) ctx a if 2 (N(a)) upd map unlock a,N(a) commit else unlock a,N(a) exit fi else unlock a exit fi od for… ⊏ od (if 1,fi)(if 2,fi) if 1 ⊏ upd lock a ctx ∅ lock a ctx N(a) lock N(a) ctx a lock N(a) ctx ∅ … Locked[r] ⊏ rd[r] ⋁ wr[r] ∀ u ∈ U: Once u if 2 with N(a) ctx rs …

56 Outline Programming Challenge Elixir Methodology Language Abstractions Incremental Computations via Operator Delta Inference Efficient Parallel Programs via Planning-based Synthesis Experimental Evaluation

Methodology. Platform: 40-core Intel Xeon, 128 GB; reported times are the median of 5 runs. Implementation space: plans over tiles, schedules, and synchronization; Elixir scheduling tactics group, unroll, fuse, split; graphs and worklists from the Galois library. Synthesized solutions are compared against solutions from expert programmers.

Triangles. Three synthesized variants: A: foreach count ≫ group b,c; B: foreach count ≫ group a,c; C: foreach count ≫ group a,b; plus a tile mapping "for x : nodes do if edges(x,y) if y < x" to "for x : OrdSucc(y) do" (results chart).

59 MIS

SSSP. 24-core Intel 2 GHz; USA and Florida road networks. Group + unroll improves performance (results chart).

Connected Components and Preflow-Push (results charts).

62 Conclusions Graph Algorithm = Operators + Schedule Elixir System Language Abstractions for increased productivity Imperative Operators + Declarative schedule Delta Inference leads to efficient fixpoint computations Planning-based Synthesis leads to efficient parallel programs Competitive performance with handwritten implementations

Agenda Introduction to automated planning – STRIPS model – Temporal goals – Soft goals Using planning for programming languages – Transformation – Analysis – Synthesis Open problems 63

Planning in blocks world. Given an initial state and a goal state (figure), find a plan. Game rules: no two blocks can be on top of the same block; a block can't be on top of two blocks; the exact location on the table or on a block doesn't matter.

Planning in blocks world. Fluents: on(i,j), on-table(i), clear(i). Initial state: on-table(Y), clear(Y), on-table(G), on(R, G), on(B, R), clear(B). Goal state: on-table(G), clear(G), on-table(Y), clear(Y), clear(R), on(R, B), on-table(B).

Planning in blocks world. Actions are defined with: Move-Block(i)-from-Block(j)-to-Table; Move-Block(i)-from-Table-to-Block(j); Move-Block(i)-from-Block(j)-to-Block(k).

Planning in blocks world (figures: step-by-step execution of a plan from the initial state to the goal state).

STRIPS-style Automated Planning. Fluents: domain facts, e.g. Hold(x), Clear(x), On(x,y), … Literals: positive and negative fluents. State: a set of fluents (fluents not in the set are false). Actions: an action a has pre- and post-conditions given as sets of literals; a(s) is defined when positive-pre(a) ⊆ s and negative-pre(a) ∩ s = ∅, and then s' = (s \ delete(a)) ∪ add(a). Example actions: Stack(x,y), Pickup(x), PutDown(x), Unstack(x,y). Task: find a finite action sequence from the initial state to a state containing the goal, e.g. Unstack(B,A) ⊙ Pickup(B) ⊙ Stack(B,D) ⊙ Pickup(A) ⊙ Stack(A,B).
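
The action-application rule above translates directly into code; a small Python sketch (not from the slides), using the pickup/stack schemas from the earlier sketch:

    def apply_action(state, action):
        # STRIPS semantics: a(s) = (s \ delete(a)) ∪ add(a), defined only when the
        # positive preconditions hold and the negative preconditions are absent.
        if not action["pre"] <= state:
            return None
        if action.get("neg-pre", set()) & state:
            return None
        return (state - action["delete"]) | action["add"]

    state = {"on-table(B)", "clear(B)", "hand-empty", "on-table(A)", "clear(A)"}
    new_state = apply_action(state, pickup("B"))   # picks up block B from the table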

71 Planning with Temporal Goals

Temporal Goals Temporal goals specify properties of entire state sequences Specified in Linear Temporal Logic (LTL) Bad news: – Not well-supported by most planners – Some subtle issues with semantics of LTL when applied to finite sequences Good news – Can sometimes be compiled back to regular planning problems 72

Example: Always Goal. G p (p always holds): every state from Init to Goal satisfies p.

Example: Once Goal. Once p (p occurs exactly once): every state satisfies ¬p except for a single state satisfying p. How can we compile Once p to a planning problem with regular goals?

Example: Happens Before. p ⊏ q (p happens before q): the sequence passes through ¬p,¬q states, then p,¬q states, and only then p,q states. How can we compile p ⊏ q to a planning problem with regular goals?

Soft Goals Planner should attempt to satisfy as many soft goals as possible Can be used as a way to define quality of plans Some planners also allow associating numeric costs with actions – Planner strives for minimum-cost plans 76

77 Synthesis via Planning in Elixir

78 Elixir Architecture

79 Planning Framework Architecture

80 Planning-based Synthesis for a : nodes do if status(a) = … if ∀ b : Nbrs(a) { … } status(a) := … map b : Nbrs(a) { … } else exit fi od begin commit

81 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map else exit fi else exitfi od begin commit

82 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd mapelse exit fi else exit fi od begin commit U

83 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U ∀ u ∈ U: Once u for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od od begin if 2 (a,N(a)) if 1 (a) upd map commit else exit fi else exit fi for…

84 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U ∀ u ∈ U: Once u for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od od begin if 2 (a,N(a)) if 1 (a) upd map commit else exit fi else exit fi for… for… ⊏ od (if 1,fi)(if 2,fi) ✔ ✗ ✔

85 Planning-based Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U for… begin if 1 (a) if 2 (N(a)) upd map commit else exit fi else exit fi od for… begin if 2 (N(a)) if 1 (a) upd map commit else exit fi else exit fi od for… ⊏ od for… begin if 2 (N(a)) upd if 1 (a) map commit else exit fi else exit fi od (if 1,fi)(if 2,fi) if 1 ⊏ upd ∀ u ∈ U: Once u ✔ ✔ ✗

86 Synchronization Synthesis for… if 1 (a) if 2 (N(a)) upd map Planner else exit fi else exit fi od begin commit U for… begin lock a ctx ∅ if 1 (a) lock N(a) ctx a if 2 (N(a)) upd map unlock a,N(a) commit else unlock a,N(a) exit fi else unlock a exit fi od for… ⊏ od (if 1,fi)(if 2,fi) if 1 ⊏ upd lock a ctx ∅ lock a ctx N(a) lock N(a) ctx a lock N(a) ctx ∅ … Locked[r] ⊏ rd[r] ⋁ wr[r] ∀ u ∈ U: Once u if 2 with N(a) ctx rs …

Utilizing Planning for Programming-Language Operations

Project(s). Challenge: how to encode a programming task in terms of planning. Steps: 1. Come up with an algorithm and write it down formally. 2. Download any modern planning tool and learn to use it (learn to program in PDDL). 3. Demonstrate the encoding on a few programs.

Encoding Reordering. Input: a program P. Goal: generate a planning problem whose outputs are programs made from the statements of P, but possibly in different orders. Example input:

    x := 1;
    if y>5
      x := x+1;
      y := y*8;
    fi
    z := x-1;
    if z>6
      x := x+2;
    fi

and one possible reordering:

    if z>6
      if y>5
        y := y*8;
        z := x-1;
      fi
      x := x+1;
      x := x+2;
    fi
    x := 1;

Encoding Reordering. Step 1: name each unit.

    u1: x := 1;
    u2: if y>5
    u3:   x := x+1;
    u4:   y := y*8;
    u5: fi
    u6: z := x-1;
    u7: if z>6
    u8:   x := x+2;
    u9: fi

Encoding Reordering. For the unit-named program, define the planning problem Permutations(P): Fluents = ? Initial = ? Actions = ? Goal = ?

Encoding Reordering. Planning problem Permutations(P): Fluents = {u1,…,u9}; Initial = {}; Actions = { <> print_ui | i = 1…9 }; Temporal goals = {Once u1,…,Once u9}. A plan prints each unit exactly once, but in any order; for example, the plan u5, u9, u1, u2, u3, u4, u6, u7, u8 emits both fi units before their matching if units. What is the problem here and how do we prevent it?

Encoding Reordering. Planning problem: Fluents = {u1,…,u9}; Initial = {}; Actions = { <> print_ui | i = 1…9 }; Temporal goals = {Once u1,…,Once u9} ∪ SyntacticConstraints.

Syntactic Constraints. We want the output program to be in the context-free grammar of the output language. For example: opening and closing of scopes, if b_j ⊏ fi_j and while b_j ⊏ od_j; balanced parentheses, (if b_j, fi_j)(if b_k, fi_k).

Syntactic Constraints Example. Planning problem: Fluents = {u1,…,u9}; Initial = {}; Actions = { <> print_ui | i = 1…9 }; Temporal goals = {Once u1,…,Once u9} ∪ {u2 ⊏ u5, u7 ⊏ u9, (u2,u5)(u7,u9)}.
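
To see what these goals enforce, here is a small Python sketch (not from the slides) that checks the Once, happens-before, and balanced-parentheses constraints over a candidate plan, i.e., a sequence of print_ui actions; a real encoding would hand the same constraints to a planner (e.g., via PDDL):

    def once_ok(plan, unit):
        return plan.count(unit) == 1

    def before_ok(plan, p, q):
        # p ⊏ q: every occurrence of p precedes every occurrence of q
        return max(i for i, u in enumerate(plan) if u == p) < \
               min(i for i, u in enumerate(plan) if u == q)

    def balanced_ok(plan, open1, close1, open2, close2):
        # (open1,close1)(open2,close2): the two scopes must be nested or disjoint
        i1, j1 = plan.index(open1), plan.index(close1)
        i2, j2 = plan.index(open2), plan.index(close2)
        nested = (i1 < i2 and j2 < j1) or (i2 < i1 and j1 < j2)
        disjoint = j1 < i2 or j2 < i1
        return nested or disjoint

    plan = ["u1", "u2", "u3", "u4", "u5", "u6", "u7", "u8", "u9"]
    assert all(once_ok(plan, f"u{i}") for i in range(1, 10))
    assert before_ok(plan, "u2", "u5") and before_ok(plan, "u7", "u9")
    assert balanced_ok(plan, "u2", "u5", "u7", "u9")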

Challenge 1: Encode CFG. How can we efficiently encode arbitrary (or at least substantial classes of) context-free grammars? Input: a program P and a CFG G whose alphabet is units(P). Output: a planning problem Π = Syntax(P,G) such that plans(Π) = Permutations(P) ∩ L(G). Your task: design an algorithm for Syntax(P,G) that produces an efficient (small) Π and demonstrate it on a few input programs.

97 Encoding Dependencies Input: a program P Goal: generate a planning problem whose outputs preserve dependencies of the input program

Dependencies Example 1. Goal: generate a planning problem whose outputs preserve the dependencies of the input program. Input:

    u1: x := 1;
    u2: if y>5
    u3:   x := x+1;
    u4:   y := y*8;
    u5: fi
    u6: z := x-1;
    u7: if z>6
    u8:   y := y+2;
    u9: fi

Two candidate reorderings: u1, u2, u4, u3, u5, u7, u8, u9, u6 and u2, u1, u3, u4, u5, u7, u8, u9, u6. What is the problem here and how do we prevent it?

Dependencies Example 2. Input:

    u1: if y>5
    u2:   if z>5
    u3:     x := x+1;
    u4:     y := y*8;
    u5:   fi
    u6: fi

Output:

    u2: if z>5
    u1:   if y>5
    u3:     x := x+1;
    u4:     y := y*8;
    u6:   fi
    u5: fi

Is this transformation correct?

Encoding Dependencies. Run a (simple) static analysis to identify dependencies. Add appropriate scope constraints: if unit u is in the scope of a statement (if b, fi), add if b ⊏ u ⊏ fi; same for while b … u … od; except for immediately-nested conditions/loops with no dependencies between their conditions (which may be swapped, as in Example 2).
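
As an illustration only (not from the slides), a crude way to compute data dependences between units from their read and write sets; each resulting pair becomes a happens-before constraint in the planning problem:

    def dependences(units):
        # units: list of (name, reads, writes) triples in program order.
        # Returns (u, v) pairs for RAW, WAR, and WAW dependences.
        constraints = set()
        for i, (u, ur, uw) in enumerate(units):
            for v, vr, vw in units[i + 1:]:
                if uw & vr or ur & vw or uw & vw:
                    constraints.add((u, v))
        return constraints

    units = [
        ("u1", set(),  {"x"}),   # x := 1
        ("u3", {"x"},  {"x"}),   # x := x+1
        ("u6", {"x"},  {"z"}),   # z := x-1
        ("u7", {"z"},  set()),   # if z>6
    ]
    print(dependences(units))   # {('u1','u3'), ('u1','u6'), ('u3','u6'), ('u6','u7')}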

Challenge 2: Encode DFG. How can we efficiently encode data and control dependencies? Input: a program P and its dependence graph DFG(P). Output: a planning problem Π = Depends(P,G) such that plans(Π) = Permutations(P) ∩ {P' | P' preserves DFG(P)}. Your task: read a paper on dependence graphs, design an algorithm for Depends(P,G) that produces an efficient (small) Π, and demonstrate it on a few input programs.

Recap. Input: a program P and its dependence graph DFG(P). Output: the planning problem Π = Perms(P) + Syntax(P,G) + Depends(P,G). Then every plan in Plans(Π) represents a program that is equivalent to P.

Lowering. How can we translate from one language to another? Idea: define, for each input-language construct, its possible translations into the low-level language and try to optimize. Input: a program P and tiles T, where a tile is a template of the form t = P' with t a unit and P' a sequence of units. Output: Π = Tiles(P,T) such that plans in Π represent programs built only from tiles.

Tiles example 1. Tiles represent optimization patterns, e.g. z:=z+2 = z:=z+1; z:=z+1. How can we encode them? Given u1: x := x+1; u2: x := x+1; the pair u1,u2 can be emitted as the single statement x := x+2;

Tiles example 1 (continued). Encoding: instantiate macro actions such as <> print_"z:=z+2"; this requires composing the actions of the individual units, an instance of inverse homomorphism on DFAs.
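
A tiny Python sketch (not from the slides, and only a sequential peephole approximation of the macro-action idea): replace two adjacent unit statements matching a tile by the tile's target statement.

    def apply_tiles(stmts, tiles):
        # stmts: list of statement strings; tiles: dict mapping a pair of
        # adjacent statements to the single statement that replaces them.
        out, i = [], 0
        while i < len(stmts):
            pair = tuple(stmts[i:i + 2])
            if pair in tiles:
                out.append(tiles[pair])
                i += 2
            else:
                out.append(stmts[i])
                i += 1
        return out

    tiles = {("z:=z+1", "z:=z+1"): "z:=z+2"}
    print(apply_tiles(["x:=1", "z:=z+1", "z:=z+1"], tiles))   # ['x:=1', 'z:=z+2']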

Tiles example 2: multi-tiles. We sometimes need tiles for non-atomic statements (conditions, loops) and need to account for scopes, e.g. rewriting "if x>0 if y>0 S fi fi" into "if x>0 && y>0 S fi", where the two fi units collapse into one. Such tiles are encoded by adding temporal "tandem" goals.

Encoding Dataflow. We would like to infer dataflow facts for programs in order to enable optimizations that are conditioned on them. Input: a program P and a dataflow problem D = (2^Factoid, ⊆, ∪, {}, F : 2^Factoid → 2^Factoid). Output: plans such that each state includes dataflow information.

Dataflow Example 1: constant propagation. For each unit u, add fluents u{v1=k1,…, vn=kn} for all tuples of variables v1,…,vn in the program and some tuples of constants k1,…,kn; in general, for each unit u and dataflow element d, add u{d} (this works when the number of dataflow facts is finite). For each atomic unit, the corresponding planning action simulates the transfer function. Example (assuming dataflow elements {1, 9, ⊤}):

    u1: x := 1;     {x=1, y=⊤}
    u2: y := x+8;   {x=1, y=9}
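
For illustration (not from the slides), a constant-propagation transfer function over abstract states mapping each variable to either a constant or TOP ("not a constant"); each unit's planning action would update the dataflow fluents in exactly this way:

    TOP = "top"   # "not a (single known) constant"

    def transfer_assign(state, var, rhs_vars, rhs_fn):
        # Kill the old fact about var; generate a new one if all operands are constants.
        operands = [state.get(v, TOP) for v in rhs_vars]
        new = dict(state)
        new[var] = rhs_fn(*operands) if TOP not in operands else TOP
        return new

    s0 = {"x": TOP, "y": TOP}
    s1 = transfer_assign(s0, "x", [], lambda: 1)            # u1: x := 1   -> {x: 1, y: top}
    s2 = transfer_assign(s1, "y", ["x"], lambda x: x + 8)   # u2: y := x+8 -> {x: 1, y: 9}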

Dataflow Example 2. Handle conditions by remembering the dataflow state before the condition: for each "if b" unit and dataflow element d, create a fluent "if b with d"; on the corresponding fi unit, join the current dataflow element with the remembered one.

    u1: x := 1;      {x=1, y=⊤}
    u2: if y>5       {x=1, y=⊤, u2 with x=1}
    u3:   x := x+1;  {x=2, y=⊤, u2 with x=1}
    u4:   y := y*8;  {x=2, y=⊤, u2 with x=1}
    u5: fi           {x=⊤, y=⊤}        (x=2 ⊔ x=1 = x=⊤)
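
The join at the fi unit can be sketched as follows (not from the slides), continuing the constant-propagation example above:

    def join(state_a, state_b):
        # Pointwise join: keep a constant only if both branches agree on it.
        return {v: (state_a[v] if state_a.get(v) == state_b.get(v) else TOP)
                for v in set(state_a) | set(state_b)}

    after_then = {"x": 2, "y": TOP}   # state at u5 coming from the then-branch
    before_if  = {"x": 1, "y": TOP}   # remembered state "u2 with x=1"
    print(join(after_then, before_if))   # x and y are both 'top' after the join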

Challenge 3a: Encoding Dataflow. Input: a program P and a dataflow problem D = (2^Factoid, ⊆, ∪, {}, F : 2^Factoid → 2^Factoid); assume Factoid is a finite set and the transfer functions are kill/gen. Output: a planning problem Π = Flow(P,D) such that plans(Π) are plans augmented with dataflow facts. Task: find an encoding scheme that is sub-exponential in the number of dataflow elements; handle loops via soft goals. Simpler task: handle context-aware locking.

Implementing Synchronization (Maximal Independent Set example, repeated from earlier). Locks on a (ctx ∅) and on b ∈ Nbrs(a) (ctx a) are inserted as the operator touches them and released on failing branches; this speculation-based synchronization requires a data-flow analysis to place the custom lock/unlock operations.

Challenge 3b: Encoding Execution Goals. Input: a program P and a temporal goal EG over executions of the program. Output: a planning problem Π = Execution(P,EG) such that plans(Π) represent programs whose executions satisfy EG. Task: consider execution goals given by finite automata; demonstrate at least for globally, happens-before, happens-once, and balanced parentheses.

113 Challenge 4: Register Allocation Given a program P with liveness information LV, and register set r1,…,rk Encode a planning problem for outputting a program P’ where atomic statements use registers r1,…,rk and memory addresses M1,…,Mn with minimal n

114 Challenge 5: Combining Metrics Use a planner to encode several optimality metrics – Order-relative for conditions – Duration metrics

Good Luck!