Hawkeye: Effective Discovery of Dataflow Impediments to Parallelization
Omer Tripp, John Field, Greta Yorsh, Mooly Sagiv

Presentation transcript:

Hawkeye: Effective Discovery of Dataflow Impediments to Parallelization
Omer Tripp, John Field, Greta Yorsh, Mooly Sagiv

Dataflow Impediments to Parallelization

    public void set(Object o) {
        this.f = calc_f(o);
    }

    public void process() {
        Object o = this.f;
        if (o == null) {
            doA();
        } else {
            doB();
        }
    }

    public void setAndProcess(Object o) {
        set(o);
        process();
    }

set(o) || process()?  RAW dependency
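
To see why the RAW dependency matters, here is a hypothetical sketch (the method setAndProcessParallel and the thread wiring are my own illustration, not part of the talk) of naively running the two calls in parallel inside the same class. Because process() reads this.f, which set(o) writes, the parallel version may branch on either the stale or the updated value of f.

    // Illustrative sketch only: naive parallelization of setAndProcess.
    // Placed in the same class as set() and process() above.
    public void setAndProcessParallel(Object o) throws InterruptedException {
        Thread writer = new Thread(() -> set(o));
        Thread reader = new Thread(() -> process());
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        // process() may observe this.f either before or after set(o) wrote it:
        // the read-after-write (RAW) dependency set(o) -> process() is not preserved.
    }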

Sometimes It's Less Obvious

Simplified version of the JGraphT algorithm for building a block-cutpoint graph:

    for (Vertex cutpoint : this.cutpoints) {
        UndirectedGraph subgraph = new SimpleGraph();
        subgraph.addVertex(cutpoint);                    // iteration-local
        this.cutpointGraphs.put(cutpoint, subgraph);     // shared map; each iteration uses its own key
        this.addVertex(subgraph);                        // mutates the shared graph
        Set blocks = this.vertex2blocks.get(cutpoint);   // shared map, only read here
        for (UndirectedGraph block : blocks) {
            int oldHitCount = this.block2hits.get(block);
            this.block2hits.put(block, oldHitCount + 1); // read-modify-write of a shared counter
            this.addEdge(subgraph, block);               // mutates the shared graph
        }
    }

This code admits a lot of available parallelism, but there are a few impediments that must be addressed before it can be parallelized. How can we pinpoint these dependencies precisely and concisely?

Field-based Dependence Analysis

Static dependence analysis is challenged by dynamic containers, aliasing, etc.
So let's use dynamic dependence analysis instead…
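
As a rough illustration of what dynamic dependence tracking means at the level of fields, here is a small sketch; the class FieldDepTracker, the Location record, and the onRead/onWrite hooks are my own illustrative names, not Hawkeye's API. The idea is to record, per (object, field) location, which task last wrote it, and to flag a dependence when a different task later reads (RAW) or overwrites (WAW) that location.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch of field-based dynamic dependence tracking.
    // A location is an (object identity, field name) pair; lastWriter maps
    // each written location to the task that last wrote it.
    final class FieldDepTracker {
        record Location(Object base, String field) {}

        private final Map<Location, Integer> lastWriter = new ConcurrentHashMap<>();

        void onWrite(int task, Object base, String field) {
            Integer prev = lastWriter.put(new Location(base, field), task);
            if (prev != null && prev != task) {
                System.out.println("WAW dependence on field " + field);
            }
        }

        void onRead(int task, Object base, String field) {
            Integer writer = lastWriter.get(new Location(base, field));
            if (writer != null && writer != task) {
                System.out.println("RAW dependence on field " + field);
            }
        }
    }

Instrumenting each field access with such hooks works even when static analysis is defeated by dynamic containers and aliasing, since it observes the concrete objects at run time; the next slide shows why tracking at this concrete level can still be too noisy.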

[Figure: internal heap state of the ConcurrentHashMap (modcount, table[0..8], entry nodes for keys K and K')]

    m = new ConcurrentHashMap();
    m.put(k,1);
    m.put(k',2);
    m.put(k,2);

The puts on distinct keys touch shared internals (modcount, the table), yielding spurious dependencies which inhibit m.put(k,1) || m.put(k',2)! The second put on key k is a semantic dependency, which gets "lost" in the noise!

Eureka: Let's Use Abstraction

- Abstract locking; Galois
- Leveraging ADT semantics in STM conflict detection
- Using ADT semantics in DB concurrency control (Muth et al., 93)
- Exploiting commutativity in DB transactions (Bernstein, 66)

But…
- We need a predictive tool; our code is still sequential
- We want the tool to pinpoint impediments to parallelization before applying parallelization transformations

The Hawkeye Analysis Tool

[Figure: a representation function maps the concrete Map state (hash-table internals: modcount, table, entry nodes) to the Map ADT state (key/value pairs such as K -> 1, K' -> 2)]

- Dynamic analysis tool
- Uses abstraction while tracking (certain) dependencies
- User specifies a representation function for data structures of choice; the rest is tracked concretely
- Allows concentrating on semantic dependencies while suppressing spurious dependencies

Specification Language

Map:
    foreach key k in m.keySet()
        adtState.add(m -> k);
    foreach entry (k,v) in m.entrySet()
        adtState.add(k -> v);

Graph:
    foreach node n in g.nodes()
        adtState.add(g -> n);
    foreach edge (n1,n2) in g.edges()
        adtState.add(n1 -> n2);
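
As a concrete (and purely hypothetical) Java rendering of such a specification, the representation function below abstracts a java.util.Map into the set of logical facts described above; the names AbstractLocation, RepresentationFunction, and MapRepresentation are my own, not Hawkeye's actual interfaces.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Set;

    // Illustrative sketch: one abstract fact of the ADT state, e.g. m -> k or k -> v.
    record AbstractLocation(Object source, Object target) {}

    // A representation function maps a concrete data structure to its abstract ADT state.
    interface RepresentationFunction<T> {
        Set<AbstractLocation> abstractState(T concrete);
    }

    // Mirrors the Map spec above: one fact per key (m -> k) and one per entry (k -> v).
    final class MapRepresentation implements RepresentationFunction<Map<?, ?>> {
        @Override
        public Set<AbstractLocation> abstractState(Map<?, ?> m) {
            Set<AbstractLocation> adtState = new HashSet<>();
            for (Object k : m.keySet()) {
                adtState.add(new AbstractLocation(m, k));
            }
            for (Map.Entry<?, ?> e : m.entrySet()) {
                adtState.add(new AbstractLocation(e.getKey(), e.getValue()));
            }
            return adtState;
        }
    }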

Specification Language

DistanceFunction:
    foreach instance i1 in instances()
        foreach instance i2 in instances()
            adtState.add((i1,i2) -> distance(i1,i2));
    …

Specification Language

- No need to model ADT operations: the user can refine the approximation (though our experience shows that the default is mostly accurate)
- No need for a commutativity spec: Hawkeye uses heuristics for a (sound) approximation of the footprint of an ADT operation

The Hawkeye Algorithm

[Figure: the concrete heap state of map M (modcount, table, entry nodes for K and K') alongside the logical ADT state, with abstract locations such as (M,K), (M,K,1), (M,K'), (M,K',2)]

    m.put(k,1);    (R: {}, W: {(M,K),(M,K,1)})
    m.put(k',2);   (R: {}, W: {(M,K'),(M,K',2)})
    m.put(k,2);    (R: {}, W: {(M,K),(M,K,1),(M,K,2)})   WAW with m.put(k,1)

Our assumptions:
- linearizability – for trace abstraction
- encapsulation – for state abstraction
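
The dependence classification that these read/write sets feed into can be sketched as follows; OpRecord and DependenceCheck are hypothetical names of mine, and the sets are the logical (ADT-level) read/write sets rather than concrete memory locations.

    import java.util.Set;

    // Illustrative sketch: classify the dependence of a later ADT operation on an
    // earlier one from their logical read/write sets over abstract locations.
    record OpRecord(String name, Set<String> reads, Set<String> writes) {}

    final class DependenceCheck {
        static boolean intersects(Set<String> a, Set<String> b) {
            for (String x : a) {
                if (b.contains(x)) return true;
            }
            return false;
        }

        static String classify(OpRecord earlier, OpRecord later) {
            StringBuilder deps = new StringBuilder();
            if (intersects(earlier.writes(), later.reads()))  deps.append("RAW ");
            if (intersects(earlier.reads(),  later.writes())) deps.append("WAR ");
            if (intersects(earlier.writes(), later.writes())) deps.append("WAW ");
            return deps.length() == 0 ? "independent" : deps.toString().trim();
        }
    }

On the trace above this would report WAW between m.put(k,1) and m.put(k,2), which both write the abstract location (M,K), and no dependence at all between m.put(k,1) and m.put(k',2).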

Challenges

- What is the meaning of dependencies under abstraction?
- How can we track both concrete and abstract dependencies simultaneously?

We've developed a uniform framework for tracking data dependencies…

Best Write Set

The write set of a transition t is the union of
- the locations whose value was changed by t;
- the locations allocated by t; and
- the locations de-allocated by t.

Intuitively, the write set of a transition is its observable effect, i.e., the delta between the entry and exit states.
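
A minimal sketch of this definition, under the assumption that a state is just a map from location names to values (the class name WriteSets and the string-keyed representation are mine), computes the write set as the delta between the entry and exit states; it reproduces the "Simple Example" slide below.

    import java.util.HashSet;
    import java.util.Map;
    import java.util.Objects;
    import java.util.Set;

    // Illustrative sketch: the write set of a transition is every location whose
    // value changed, plus locations allocated (present only in the exit state)
    // and de-allocated (present only in the entry state).
    final class WriteSets {
        static Set<String> writeSet(Map<String, Object> entry, Map<String, Object> exit) {
            Set<String> w = new HashSet<>();
            for (String loc : entry.keySet()) {
                if (!exit.containsKey(loc) || !Objects.equals(exit.get(loc), entry.get(loc))) {
                    w.add(loc);  // changed or de-allocated
                }
            }
            for (String loc : exit.keySet()) {
                if (!entry.containsKey(loc)) {
                    w.add(loc);  // allocated
                }
            }
            return w;
        }

        public static void main(String[] args) {
            Map<String, Object> before = Map.of("y", 3);
            System.out.println(writeSet(before, Map.of("y", 4)));  // [y]
            System.out.println(writeSet(before, Map.of("y", 3)));  // []
        }
    }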

Best Read Set (More Tricky)

A set of locations R is a sufficient read set of a transition t = (σ, p, σ') iff for every transition t' = (σ'', p, σ''') such that σ and σ'' agree on R, write(t) ≡ write(t').

The read set of a transition is the union of all its minimal sufficient read sets.

Intuitively, the read set of a transition is the set of locations whose values determine the observable effect of the transition.

Simple Example

([y=3], set(y,4), [y=4])
    Read set: { y }     Write set: { y }
    (the value of y secures y=4 in the exit state)

([y=3], set(y,3), [y=3])
    Read set: { y }     Write set: { }
    (the value of y secures the empty write set)

Approximating the "Best" Definitions

The good news: the "best" definitions apply both in concrete and in abstract semantics.
The bad news: the definition of the "best" read set is not computable in general.

An approximation (r, w) of (read, write) is sound iff read ⊆ r and write ⊆ w.

Usage Scenario

[Figure: concrete heap view of the hash map (modcount, table, entry nodes for K and K')]

Hmmm… Too many dependencies!

Usage Scenario

[Figure: abstract ADT view of the same map: K -> 1, K' -> 2]

Now I understand what's going on!


[Chart: number of inter-iteration dependencies at the level of ADT operations, with and without abstraction; only the built-in spec (Java collections)]

[Chart: number of inter-iteration dependencies at the level of ADT operations, with and without abstraction; including the user spec (for user types)]

Thank you!

Backup

Preliminaries

A state maps memory locations to values.
A transition is a triple (σ, p, σ'), where p is a program statement and σ, σ' are states, such that executing p in state σ yields state σ'.
A program trace is a sequence of transitions.
We assume an interleaving semantics of concurrency.


Approximate Read Set

Take 1: all the locations reachable from the arguments.
Take 2: all the locations reachable from the arguments that were accessed during the statement's execution.
Take 3: all the locations reachable from the arguments that were accessed during the statement's execution, with user specification of the frame.
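
As a rough illustration of Take 1 (my own sketch; the reflective traversal is an assumed implementation strategy, not a description of how Hawkeye computes footprints), the read set of an operation can be over-approximated by everything reachable from its arguments:

    import java.lang.reflect.Field;
    import java.util.ArrayDeque;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.IdentityHashMap;
    import java.util.Set;

    // Illustrative sketch of "Take 1": over-approximate an operation's footprint
    // by the set of objects reachable (through fields) from its arguments.
    // Array elements are not traversed in this simplified sketch.
    final class ReachableFootprint {
        static Set<Object> reachableFrom(Object... args) {
            Set<Object> seen = Collections.newSetFromMap(new IdentityHashMap<>());
            Deque<Object> work = new ArrayDeque<>();
            for (Object a : args) {
                if (a != null) work.push(a);
            }
            while (!work.isEmpty()) {
                Object o = work.pop();
                if (!seen.add(o)) continue;  // already visited
                for (Class<?> c = o.getClass(); c != null; c = c.getSuperclass()) {
                    for (Field f : c.getDeclaredFields()) {
                        if (f.getType().isPrimitive()) continue;
                        try {
                            f.setAccessible(true);
                            Object v = f.get(o);
                            if (v != null) work.push(v);
                        } catch (ReflectiveOperationException | RuntimeException e) {
                            // Inaccessible field: skip it in this sketch.
                        }
                    }
                }
            }
            return seen;
        }
    }

Take 2 would intersect this set with the locations actually accessed while the statement executes, and Take 3 would further restrict it using a user-specified frame.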