Lecture 11: Code Optimization (CS 540, George Mason University)

Code Optimization (CS 540, Spring 2009, GMU)
REQUIREMENTS:
–Meaning must be preserved (correctness)
–Speedup must occur on average
–Work done must be worth the effort
OPPORTUNITIES:
–Programmer (algorithm, directives)
–Intermediate code
–Target code

Code Optimization: where it fits in the compiler

source language → Scanner (lexical analysis) → tokens → Parser (syntax analysis) → syntactic structure → Semantic Analysis (IC generator) → syntactic/semantic structure → Code Optimizer → Code Generator → target language

The Symbol Table is shared by all phases.

Levels
–Window: peephole optimization
–Basic block
–Procedural: global (control flow graph)
–Program level: interprocedural (program dependence graph)

Peephole Optimizations
Constant folding:
  x := 32         becomes   x := 32
  x := x + 32               x := 64
Unreachable code:
  goto L2
  x := x + 1      ← unreachable, can be removed
Flow-of-control optimizations:
  goto L1         becomes   goto L2
  ...                       ...
  L1: goto L2               L1: goto L2

Peephole Optimizations
Algebraic simplification:
  x := x + 0      ← unneeded, can be removed
Dead code:
  x := 32         ← where x is not used after this statement
  y := x + y      becomes   y := y + 32
Reduction in strength:
  x := x * 2      becomes   x := x + x

Peephole Optimizations
–Local in nature
–Pattern driven
–Limited by the size of the window
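Pattern-driven window rewriting can be sketched as a small pass over three-address code. A minimal illustrative sketch of a few of the patterns above (the tuple representation and opcode names are assumptions, not from the lecture):

```python
def peephole(code):
    """Repeatedly slide a small window over the instruction list,
    applying local rewrite patterns until nothing changes."""
    changed = True
    while changed:
        changed = False
        result = []
        i = 0
        while i < len(code):
            ins = code[i]
            op = ins[0]
            # Algebraic simplification: x := x + 0 is unneeded.
            if op == "add" and ins[1] == ins[2] and ins[3] == 0:
                changed = True
                i += 1
                continue
            # Reduction in strength: x := x * 2  becomes  x := x + x.
            if op == "mul" and ins[1] == ins[2] and ins[3] == 2:
                result.append(("add", ins[1], ins[1], ins[1]))
                changed = True
                i += 1
                continue
            # Unreachable code: the instruction after an unconditional
            # goto can be dropped unless it is a label (repeated passes
            # remove longer unreachable runs).
            if op == "goto" and i + 1 < len(code) and code[i + 1][0] != "label":
                result.append(ins)
                changed = True
                i += 2
                continue
            result.append(ins)
            i += 1
        code = result
    return code
```

Each pattern is local to a two-instruction window, which is exactly why peephole optimization is cheap but limited.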

Basic Block Level
–Common subexpression elimination
–Constant propagation
–Dead code elimination
–Plus many others, such as copy propagation, value numbering, partial redundancy elimination, …

Simple example: a[i+1] = b[i+1]

t1 = i + 1        t1 = i + 1
t2 = b[t1]        t2 = b[t1]
t3 = i + 1        t3 = i + 1   ← no longer live
a[t3] = t2        a[t1] = t2

The common subexpression can be eliminated.

Now, suppose i is a constant:

i = 4           i = 4           i = 4           Final code:
t1 = i + 1      t1 = 5          t1 = 5          i = 4
t2 = b[t1]      t2 = b[t1]      t2 = b[5]       t2 = b[5]
a[t1] = t2      a[t1] = t2      a[5] = t2       a[5] = t2

Control Flow Graph (CFG)
CFG = <V, E, Entry>, where
  V = vertices or nodes, each representing an instruction or basic block (group of statements)
  E ⊆ V × V = edges, representing potential flow of control
  Entry ∈ V, the unique program entry
Two sets used in algorithms:
  Succ(v) = {x in V | there exists e in E, e = v → x}
  Pred(v) = {x in V | there exists e in E, e = x → v}
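The definition above transcribes directly into code. A minimal sketch (the class and attribute names are illustrative, not from the lecture):

```python
class CFG:
    """Control-flow graph <V, E, Entry> with the Succ and Pred sets
    used by the dataflow algorithms later in the lecture."""

    def __init__(self, entry):
        self.entry = entry
        self.succ = {entry: set()}   # Succ(v): targets of edges leaving v
        self.pred = {entry: set()}   # Pred(v): sources of edges entering v

    def add_edge(self, v, w):
        """Record a potential flow-of-control edge v -> w."""
        for x in (v, w):
            self.succ.setdefault(x, set())
            self.pred.setdefault(x, set())
        self.succ[v].add(w)
        self.pred[w].add(v)
```

Keeping both Succ and Pred maps materialized is the usual choice, since forward analyses read Pred and backward analyses read Succ.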

Definitions
A point is any location between adjacent statements, and before and after a basic block.
A path in a CFG from point p1 to pn is a sequence of points p1, …, pn such that for each j, 1 <= j < n, either pj is the point immediately preceding a statement and pj+1 is the point immediately following that statement in the same block, or pj is the end of some block and pj+1 is the start of a successor block.

Example CFG (points and paths)

B1: c = a + b
    d = a * c
    i = 1
B2: f[i] = a + b
    c = c * 2
    if c > d
B3: g = a * c
B4: g = d * d
B5: i = i + 1
    if i > 10

Points lie between adjacent statements; a path is a sequence of such points through the graph.

Optimizations on the CFG
Must take control flow into account:
–Common subexpression elimination
–Constant propagation
–Dead code elimination
–Partial redundancy elimination
–…
Applying one optimization may create opportunities for other optimizations.

Redundant Expressions
An expression x op y is redundant at a point p if it has already been computed at some point(s) and no intervening operations redefine x or y.

m = 2*y*z         t0 = 2*y          t0 = 2*y
                  m = t0*z          m = t0*z
n = 3*y*z         t1 = 3*y          t1 = 3*y
                  n = t1*z          n = t1*z
o = 2*y - z       t2 = 2*y          o = t0 - z
                  o = t2 - z

The recomputation t2 = 2*y is redundant: t0 already holds 2*y.

Redundant Expressions

B1: c = a + b      ← definition site
    d = a * c
    i = 1
B2: f[i] = a + b   ← since a + b is available here, it is redundant
    c = c * 2
    if c > d
B3: g = a * c
B4: g = d * d
B5: i = i + 1
    if i > 10

Candidates: a + b, a * c, d * d, c * 2, i + 1

Redundant Expressions

B1: c = a + b
    d = a * c      ← definition site of a * c
    i = 1
B2: f[i] = a + b
    c = c * 2      ← kill site: c is redefined
    if c > d
B3: g = a * c      ← a * c is not available here, so it is not redundant
B4: g = d * d
B5: i = i + 1
    if i > 10

Candidates: a + b, a * c, d * d, c * 2, i + 1

Redundant Expressions
An expression e is defined at some point p in the CFG if its value is computed at p (a definition site).
An expression e is killed at point p in the CFG if one or more of its operands is defined at p (a kill site).
An expression e is available at point p in the CFG if every path leading to p contains a prior definition of e, and e is not killed between that definition and p.

Removing Redundant Expressions

B1: t1 = a + b
    c = t1
    d = a * c
    i = 1
B2: f[i] = t1
    c = c * 2
    if c > d
B3: g = a * c
B4: g = d * d
B5: i = i + 1
    if i > 10

Candidates: a + b, a * c, d * d, c * 2, i + 1

Constant Propagation

b = 5              b = 5              b = 5
c = 4*b            c = 20             c = 20
if c > b           if c > 5           if 20 > 5
  t: d = b + 2       t: d = 7           t: d = 7
e = a + b          e = a + 5          e = a + 5

(the false edge of the test skips directly to the final statement)

Constant Propagation

b = 5              b = 5
c = 20             c = 20
if 20 > 5          d = 7
  t: d = 7         e = a + 5
e = a + 5

Since 20 > 5 always holds, the test and the never-taken false edge can be removed.

Copy Propagation

b = a              b = a
c = 4*b            c = 4*a
if c > b           if c > a
  t: d = b + 2       t: d = a + 2
e = a + b          e = a + a

Simple Loop Optimizations: Code Motion

while (i <= limit - 2)        becomes        t := limit - 2
                                             while (i <= t)

L1: t1 = limit - 2                           t1 = limit - 2
    if (i > t1) goto L2                  L1: if (i > t1) goto L2
    body of loop                             body of loop
    goto L1                                  goto L1
L2:                                      L2:

Simple Loop Optimizations: Strength Reduction
Induction variables control loop iterations.

(before the loop: t4 = 4*j)

j = j - 1          j = j - 1
t4 = 4 * j         t4 = t4 - 4
t5 = a[t4]         t5 = a[t4]
if t5 > v          if t5 > v

Simple Loop Optimizations
Loop transformations are often used to expose other optimization opportunities:
–Normalization
–Loop Interchange
–Loop Fusion
–Loop Reversal
–…

Consider Matrix Multiplication

for i = 1 to n do
  for j = 1 to n do
    for k = 1 to n do
      C[i,j] = C[i,j] + A[i,k] * B[k,j]
    end

[Figure: C = A × B, with row i of A and column j of B combining into element (i,j) of C]

Memory Usage
For A: elements are accessed across rows, so spatial locality is exploited by the cache (assuming row-major storage).
For B: elements are accessed along columns; unless the cache can hold all of B, the cache will have problems.
For C: a single element is computed per inner loop; use a register to hold it.

Matrix Multiplication Version 2 (loop interchange)

for i = 1 to n do
  for k = 1 to n do
    for j = 1 to n do
      C[i,j] = C[i,j] + A[i,k] * B[k,j]
    end

Memory Usage
For A: a single element is loaded for the whole inner loop body.
For B: elements are accessed along rows, exploiting spatial locality.
For C: extra loading and storing, but the accesses run across rows.
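Both loop orders compute the same product: for each fixed (i,j), the k updates still happen in increasing order, so the interchange only changes the traversal pattern over B and C. A sketch contrasting the two versions (plain nested lists, illustrative only):

```python
def matmul_ijk(A, B, n):
    """Original i-j-k order: column of B swept in the inner loop."""
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

def matmul_ikj(A, B, n):
    """Interchanged i-k-j order: row of B swept in the inner loop,
    and a single element of A reused across the whole inner loop."""
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a_ik = A[i][k]          # one A element per inner loop
            for j in range(n):
                C[i][j] += a_ik * B[k][j]
    return C
```

In a compiled language with row-major arrays, the second version turns the column-wise sweep of B into a cache-friendly row-wise sweep.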

Simple Loop Optimizations
How do we determine safety?
–Does the new multiply give the same answer?
–Can the loop be reversed?
  for (I = 1 to N) a[I] = a[I+1]  – can this loop be safely reversed?

Data Dependencies
Flow dependence (write/read):   x := 4;  y := x + 1
Output dependence (write/write): x := 4;  x := y + 1
Antidependence (read/write):    y := x + 1;  x := 4

Dependence example: which reorderings are legal?

x := 4             x := 4
y := 6             y := 6
p := x + 2         p := x + 2
z := y + p         z := y + p
x := z             y := p
y := p             x := z

Flow, output, and antidependences must all be preserved; here the last two statements may be swapped because neither reads or writes a variable the other touches.
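Given per-statement def/use sets, the three dependence kinds reduce to three set intersections. An illustrative helper (the (defs, uses) pair representation is an assumption, not the lecture's notation):

```python
def dependences(s1, s2):
    """Classify how a later statement s2 depends on an earlier s1.
    Each statement is a pair (defs, uses) of variable-name sets."""
    d1, u1 = s1
    d2, u2 = s2
    kinds = set()
    if d1 & u2:
        kinds.add("flow")    # s1 writes a variable that s2 reads
    if d1 & d2:
        kinds.add("output")  # s1 and s2 write the same variable
    if u1 & d2:
        kinds.add("anti")    # s1 reads a variable that s2 writes
    return kinds
```

Two statements can be reordered only when this set is empty in both directions.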

Global Data Flow Analysis
Collecting information about the way data is used in a program, taking control flow into account.
High-level control constructs:
–Simpler, syntax-driven
–Useful for data flow analysis of source code
General control constructs: arbitrary branching.
Information needed for optimizations such as constant propagation, common subexpression elimination, partial redundancy elimination, …

Dataflow Analysis: Iterative Techniques
First, compute local (block-level) information. Then iterate until no changes:

while change do
  change = false
  for each basic block
    apply equations, updating IN and OUT
    if either IN or OUT changes, set change to true
end

Live Variable Analysis
A variable x is live at a point p if there is some path from p where x is used before it is defined.
We want to determine, for some variable x and point p, whether the value of x could be used along some path starting at p.
Information flows backwards.
This is a "may" analysis: is x live along some path starting at p?

Global Live Variable Analysis
Want to determine, for some variable x and point p, whether the value of x could be used along some path starting at p.
DEF[B] – set of variables assigned values in B prior to any use of that variable
USE[B] – set of variables used in B prior to any definition of that variable
OUT[B] – variables live immediately after the block: OUT[B] = ∪ IN[S] for all S in succ(B)
IN[B] – variables live immediately before the block: IN[B] = USE[B] ∪ (OUT[B] - DEF[B])

[Figure: example CFG with six blocks B1–B6 containing the definitions
d1: a = 1, d2: b = 2, d3: c = a + b, d4: d = c - a, d5: d = b * d,
d6: d = a + b, d7: e = e + 1, d8: b = a + b, d9: e = c - 1,
d10: a = b * d, d22: b = a - d.
Each block is annotated with its DEF and USE sets, which are tabulated below.]

Global Live Variable Analysis
Want to determine, for some variable x and point p, whether the value of x could be used along some path starting at p.
DEF[B] – set of variables assigned values in B prior to any use of that variable
USE[B] – set of variables used in B prior to any definition of that variable
OUT[B] – variables live immediately after the block: OUT[B] = ∪ IN[S] for all S in succ(B)
IN[B] – variables live immediately before the block: IN[B] = USE[B] ∪ (OUT[B] - DEF[B])

Iterating to a fixed point

Block  DEF      USE
B1     {a,b}    { }
B2     {c,d}    {a,b}
B3     { }      {b,d}
B4     {d}      {a,b,e}
B5     {e}      {a,b,c}
B6     {a}      {b,d}

OUT[B] = ∪ IN[S] for all S in succ(B)
IN[B] = USE[B] ∪ (OUT[B] - DEF[B])

Applying the equations repeatedly converges after three passes; for example, the final values include IN[B2] = {a,b,e}, OUT[B2] = {a,b,c,d,e}, IN[B5] = {a,b,c,d}, OUT[B5] = {a,b,d,e}, and IN[B6] = {b,d}, OUT[B6] = { }.

[Figure: the example CFG annotated with the final live-variable sets at each point: { }, {b,d}, {e}, {a,b,e}, {a,b,c,d,e}, {a,b,c,d}, {a,b,d,e}, {a,b,c,e}, {a,b,c,d,e}]
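The backward liveness equations can be iterated to a fixed point exactly as described. A sketch on a small hypothetical diamond-shaped CFG (the blocks, edges, and DEF/USE sets below are made up for illustration; they are not the slide's six-block example):

```python
def liveness(blocks, succ, DEF, USE):
    """Iterate the backward equations until a fixed point:
    OUT[B] = union of IN[S] over successors S,
    IN[B]  = USE[B] | (OUT[B] - DEF[B])."""
    IN = {b: set() for b in blocks}
    OUT = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # union over successors; empty union is the empty set
            OUT[b] = set().union(*(IN[s] for s in succ[b]))
            new_in = USE[b] | (OUT[b] - DEF[b])
            if new_in != IN[b]:
                IN[b], changed = new_in, True
    return IN, OUT
```

Example use on the diamond B1 → {B2, B3} → B4, where B1 defines a, B2 defines b and uses a, B3 uses a, and B4 uses b: the solver reports that b is live into B1 (it is used in B4 and never defined on the B3 path).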

Dataflow Analysis Problem #2: Reachability
A definition of a variable x is a statement that may assign a value to x.
A definition may reach a program point p if there exists some path from the point immediately following the definition to p such that the assignment is not killed along that path.
Concept: the relationship between definitions and uses.

What blocks do definitions d2 and d4 reach?

[Figure: a five-block CFG B1–B5; B1 contains d1: i = m - 1 and d2: j = n, and a later block contains d3: i = i + 1 and d4: j = j - 1; arrows mark the blocks that d2 and d4 reach.]

Reachability Analysis: Unstructured Input
1. Compute GEN and KILL at the block level.
2. Compute IN[B] and OUT[B] for each block B:
   IN[B] = ∪ OUT[P], where P is a predecessor of B
   OUT[B] = GEN[B] ∪ (IN[B] - KILL[B])
3. Repeat step 2 until there are no changes to the OUT sets.

Reachability Analysis: Step 1
For each block, compute local (block-level) information, the GEN/KILL sets:
–GEN[B] = set of definitions generated by B
–KILL[B] = set of definitions that cannot reach the end of B
This information does not take control flow between blocks into account.

Reasoning about Basic Blocks
Effect of a single statement: a = b + c
–Uses variables {b, c}
–Kills all definitions of {a}
–Generates a new definition (i.e. assigns a value) of {a}
Local analysis:
–Analyze the effect of each instruction
–Compose these effects to derive information about the entire block

Example

B1: d1: i = m - 1     Gen = {1,2,3}
    d2: j = n         Kill = {4,5,6,7}
    d3: a = u1
B2: d4: i = i + 1     Gen = {4,5}
    d5: j = j - 1     Kill = {1,2,7}
B3: d6: a = u2        Gen = {6}
                      Kill = {3}
B4: d7: i = u2        Gen = {7}
                      Kill = {1,4}

Reachability Analysis: Step 2
Compute IN/OUT for each block in a forward direction, starting with IN[B] = ∅.
–IN[B] = set of definitions reaching the start of B = ∪ OUT[P] for all predecessor blocks P in the CFG
–OUT[B] = set of definitions reaching the end of B = GEN[B] ∪ (IN[B] - KILL[B])
Keep computing the IN/OUT sets until a fixed point is reached.

Reaching Definitions Algorithm
Input: flow graph with GEN and KILL for each block
Output: in[B] and out[B] for each block

for each block B do
  out[B] := gen[B]   (correct since in[B] = ∅ initially)
change := true
while change do begin
  change := false
  for each block B do begin
    in[B] := ∪ out[P], where P is a predecessor of B
    oldout := out[B]
    out[B] := gen[B] ∪ (in[B] - kill[B])
    if out[B] != oldout then change := true
  end
end
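The algorithm transcribes almost line for line into executable code. Run on the four-block GEN/KILL example from these slides, with the predecessor sets the worked iterations use (pred(B2) = {B1, B4}, pred(B3) = {B2}, pred(B4) = {B2, B3}), it reaches the same fixed point:

```python
def reaching_definitions(blocks, pred, GEN, KILL):
    """Iterate IN[B] = union of OUT[P] over predecessors P and
    OUT[B] = GEN[B] | (IN[B] - KILL[B]) until no OUT set changes."""
    IN = {b: set() for b in blocks}
    OUT = {b: set(GEN[b]) for b in blocks}   # out[B] = gen[B] initially
    changed = True
    while changed:
        changed = False
        for b in blocks:
            # union over predecessors; empty union is the empty set
            IN[b] = set().union(*(OUT[p] for p in pred[b]))
            new_out = GEN[b] | (IN[b] - KILL[b])
            if new_out != OUT[b]:
                OUT[b], changed = new_out, True
    return IN, OUT
```

On this example the loop stabilizes on the third pass, matching the hand-computed tables.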

IN[B] = ∪ OUT[P] for all predecessor blocks P in the CFG
OUT[B] = GEN[B] ∪ (IN[B] - KILL[B])

B1: d1: i = m - 1, d2: j = n, d3: a = u1     Gen = {1,2,3}, Kill = {4,5,6,7}
B2: d4: i = i + 1, d5: j = j - 1             Gen = {4,5}, Kill = {1,2,7}
B3: d6: a = u2                               Gen = {6}, Kill = {3}
B4: d7: i = u2                               Gen = {7}, Kill = {1,4}

Initialization:
Block  IN   OUT
B1     ∅    1,2,3
B2     ∅    4,5
B3     ∅    6
B4     ∅    7

First iteration:

Block  IN                           OUT
B1     ∅                            1,2,3
B2     OUT[1] ∪ OUT[4] = 1,2,3,7    4,5 ∪ (1,2,3,7 - 1,2,7) = 3,4,5
B3     OUT[2] = 3,4,5               6 ∪ (3,4,5 - 3) = 4,5,6
B4     OUT[2] ∪ OUT[3] = 3,4,5,6    7 ∪ (3,4,5,6 - 1,4) = 3,5,6,7

IN[B] = ∪ OUT[P] for all predecessor blocks P; OUT[B] = GEN[B] ∪ (IN[B] - KILL[B])

Second iteration (a fixed point):

Block  IN                                OUT
B1     ∅                                 1,2,3
B2     OUT[1] ∪ OUT[4] = 1,2,3,5,6,7     4,5 ∪ (1,2,3,5,6,7 - 1,2,7) = 3,4,5,6
B3     OUT[2] = 3,4,5,6                  6 ∪ (3,4,5,6 - 3) = 4,5,6
B4     OUT[2] ∪ OUT[3] = 3,4,5,6         7 ∪ (3,4,5,6 - 1,4) = 3,5,6,7

IN[B] = ∪ OUT[P] for all predecessor blocks P; OUT[B] = GEN[B] ∪ (IN[B] - KILL[B])

Forward vs. Backward
Forward: compute OUT from given IN, GEN, KILL.
–Information propagates from the predecessors of a vertex.
–Examples: reachability, available expressions, constant propagation
Backward: compute IN from given OUT, GEN, KILL.
–Information propagates from the successors of a vertex.
–Example: live variable analysis

Forward vs. Backward Equations
–Forward: IN[B] combines OUT[P] for all P in predecessors(B); OUT[B] = local-gen ∪ (IN[B] - local-kill)
–Backward: OUT[B] combines IN[S] for all S in successors(B); IN[B] = local-gen ∪ (OUT[B] - local-kill)

May vs. Must
Must: true on all paths.
–Example: constant propagation – a variable must provably hold the appropriate constant on all paths in order to do a substitution.
May: true on some path.
–Examples: live variable analysis – a variable is live if it could be used on some path; reachability – a definition reaches a point if it can reach it on some path.

May vs. Must Equations
–May: IN[B] = ∪ OUT[P] for all P in pred(B)
–Must: IN[B] = ∩ OUT[P] for all P in pred(B)

Reachability (may, forward)
–IN[B] = ∪ OUT[P] for all P in pred(B)
–OUT[B] = GEN[B] ∪ (IN[B] - KILL[B])
Live Variable Analysis (may, backward)
–OUT[B] = ∪ IN[S] for all S in succ(B)
–IN[B] = USE[B] ∪ (OUT[B] - DEF[B])
Constant Propagation (must, forward)
–IN[B] = ∩ OUT[P] for all P in pred(B)
–OUT[B] = DEF_CONST[B] ∪ (IN[B] - KILL_CONST[B])

Discussion
Why does this work?
–The facts form a finite set and can be represented as bit vectors.
–Theory of lattices.
Is this guaranteed to terminate?
–The sets only grow, and since they are finite in size, the iteration must stop.
Can we find ways to reduce the number of iterations?
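The bit-vector observation is what makes the iteration cheap: each set union, intersection, or difference becomes a word operation per machine word of facts. A sketch using Python integers as bit sets, numbering definitions d1..d7 as in the running example (the helper names are illustrative):

```python
def to_bits(defs, nbits=7):
    """Encode a set of definition numbers d in 1..nbits as a bit vector."""
    v = 0
    for d in defs:
        v |= 1 << (d - 1)
    return v

def out_update(gen, kill, inn):
    """One transfer-function step, OUT = GEN | (IN - KILL),
    expressed as plain integer bit operations."""
    return gen | (inn & ~kill)
```

With this encoding, the entire inner loop of the iterative algorithm is a handful of bitwise or/and-not operations, which is how production compilers traditionally implement these analyses.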

Choosing a Visit Order for Dataflow Analysis
In forward flow analysis, if we visit the blocks in depth-first order, we can reduce the number of iterations.
Suppose definition d follows the block path
3 → 5 → 19 → 35 → 16 → 23 → 45 → 4 → 10 → 17
where the block numbering corresponds to the preorder depth-first numbering. Then we can compute the reach of this definition in 3 iterations of our algorithm.