1
Dynamic Program Analysis
Xiangyu Zhang
2
Introduction
Dynamic program analysis solves problems regarding software dependability and productivity by inspecting software execution.
Program executions vs. programs:
Not all statements are executed, and one statement may be executed many times.
The analysis works on a single path: the executed path.
All variables are instantiated, which solves the aliasing problem.
This results in a relatively low learning curve, precision, applicability, and scalability.
Dynamic program analysis can be constructed from a set of primitives:
Tracing
Checkpointing and replay
Dynamic slicing
Applications:
Dynamic information flow tracking
Abnormal behavior detection
3
Program Tracing
4
Outline
What is tracing.
Why tracing.
How to trace.
Reducing trace size.
5
What is Tracing
Tracing is a process that faithfully records detailed information about a program execution (lossless).
Control flow tracing: the sequence of executed statements.
Dependence tracing: the sequence of exercised dependences.
Value tracing: the sequence of values produced by each instruction.
Memory access tracing: the sequence of memory references during an execution.
Tracing is the most basic primitive.
6
Why Tracing
Malware analysis, abnormal behavior detection, forensic analysis, debugging, optimizations, security, testing.
Debugging
Tracing is most widely used in debugging, because what is most desirable in debugging is the capability of looking back and investigating what happened. With traces, the programmer can time-travel to a specific execution point and inspect the program state at that point. Indeed, the simplest form of tracing is inserting printf calls.
Optimizations
Profiles can be analyzed for hot program paths or traces, which have been exploited for path-sensitive optimizations and path-sensitive prediction techniques. Value profiles have been compressed using value predictors and used for code specialization, data compression, value speculation, and value encoding. Address profiles have been compressed and used to identify hot data streams that exhibit data locality, which helps in finding cache-conscious data layouts and developing data prefetching mechanisms. Dependence profiles have been compressed and used for computing dynamic slices, studying the characteristics of performance-degrading instructions, and studying instruction isomorphism.
Security
Malware stands for malicious software; worms, viruses, and trojans are examples. Modern malware features an encryption/self-mutation engine: malicious code often exists in encrypted form. At runtime, when a piece of malware is activated, it decrypts itself and starts executing; right after that, the code mutates and re-encrypts itself, potentially with a different key. It is therefore almost impossible to see the plain-text malware at compile time. The solution is to monitor its runtime execution through tracing: since each instruction has to execute in plain text, an instruction trace reveals the content of the malware.
Testing
While we will discuss testing later in the semester, tracing is very useful for testing too. Testing criteria such as control flow coverage, path coverage, and def-use coverage entail tracing program execution. Control flow coverage relies on how many instructions in a program have been executed at least once by a test suite. Path coverage relies on how many unique acyclic program paths have been executed at least once. Def-use coverage describes how many unique dependence edges are exercised. All of these require tracing to provide the needed information.
7
Outline
What is tracing.
Why tracing.
How to trace.
Reducing trace size.
Trace accessibility.
8
Tracing by Printf
max = 0;
for (p = head; p; p = p->next) {
    printf("In loop\n");
    if (p->value > max) {
        printf("True branch\n");
        max = p->value;
    }
}
9
The Minimum Set of Places to Instrument
(Figure: two example control flow structures, "if (…) S1 else S2; S3; S4; S5" and "if (…) S1; S2 else S3", used to identify the minimum set of instrumentation points.)
10
Tracing by Source Level Instrumentation
Read a source file and parse it into ASTs.
Annotate the ASTs with instrumentation.
Translate the annotated trees to a new source file.
Compile the new source.
Execute the program; a trace is produced.
Recall that a compiler works by first parsing source files into abstract syntax trees (ASTs); intermediate code is generated from the ASTs, optimizations are applied, and finally code is generated from the optimized intermediate code. The idea of source-level instrumentation is to add an extra step to the compiler that produces a new, annotated AST from the original AST.
11
An Example
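The example on this slide is a figure. Below is a minimal sketch of what the instrumented source could look like for the earlier max-finding loop, assuming a hypothetical trace() helper injected by the instrumenter; the site ids are illustrative:

```cpp
#include <cstdio>

struct node { int value; node* next; };

static FILE* trace_log;
static void trace(int site) {            // hypothetical injected helper
    fprintf(trace_log, "%d\n", site);    // one record per reached site
}

int find_max(node* head) {
    trace(1);                            // site 1: function entry
    int max = 0;
    for (node* p = head; p; p = p->next) {
        trace(2);                        // site 2: loop body entered
        if (p->value > max) {
            trace(3);                    // site 3: true branch taken
            max = p->value;
        }
    }
    return max;
}

int main() {
    trace_log = stdout;
    node c = {3, nullptr}, b = {7, &c}, a = {5, &b};
    printf("max=%d\n", find_max(&a));    // trace: 1 2 3 2 3 2, then max=7
}
```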
12
Limitations of Source Level Instrumentation
Hard to handle libraries. Proprietary libraries: communication (MPI, PVM), linear algebra (NGA), database query (SQL libraries).
Hard to handle multi-lingual programs: source-level instrumentation is heavily language dependent.
Requires source code: worms and viruses are rarely provided with source code.
13
Tracing by Binary Instrumentation
What is binary instrumentation?
Given a binary executable, parse it into an intermediate representation; more advanced representations such as control flow graphs may also be generated.
Tracing instrumentation is added to the intermediate representation.
A lightweight compiler compiles the instrumented representation into a new executable.
Features:
No source code requirement.
Easily handles libraries.
What is an intermediate representation? It can be considered an abstract form of instructions: symbols such as LD, ST, ADD, and JMP represent opcodes, and variables such as op1 and op2 represent operands. Programming languages are often too high level, and assembly code is too low level, with operands being virtual addresses and registers. For example, for x=A[1], the assembly is mov [0x8c….], eax, while the IR is LD A[1], r. A sketch of such an IR instruction follows below.
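To make "intermediate representation" concrete, here is a minimal sketch of an IR instruction as a data structure. All names are illustrative, not any particular tool's IR:

```cpp
#include <cstdio>

// An abstract opcode plus symbolic operands, in place of raw virtual
// addresses and machine registers.
enum class Op { LD, ST, ADD, JMP };

struct Operand {
    enum Kind { VAR, REG, IMM } kind;
    const char* name;    // symbolic name for VAR/REG operands
};

struct IRInstr { Op op; Operand src; Operand dst; };

int main() {
    // x = A[1]  =>  assembly: mov [0x8c...], eax  =>  IR: LD A[1], r
    IRInstr ld = {Op::LD, {Operand::VAR, "A[1]"}, {Operand::REG, "r"}};
    printf("LD %s, %s\n", ld.src.name, ld.dst.name);
}
```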
14
Static vs. Dynamic Instrumentation
Static: takes an executable and generates an instrumented executable that can be executed with many different inputs.
Dynamic: given the original binary and an input, starts executing the binary with the input; during execution, an instrumented binary is generated on the fly, and essentially the instrumented binary is executed.
15
Dynamic Binary Instrumentation - Valgrind
Developed by Julian Seward at Cambridge University.
Google-O'Reilly Open Source Award for "Best Toolmaker", 2006; a merit (bronze) Open Source Award, 2004.
Open source; works on x86 and AMD64.
Easy to execute, e.g.: valgrind --tool=memcheck ls
It has become very popular: one of the two most popular dynamic instrumentation tools (Pin and Valgrind), with very good usability, extensibility, and robustness; used on projects of up to 25 MLOC, at Mozilla, MIT, Berkeley (security), the author's group, and many other places.
Overhead is the problem: 5-10x slowdown without any instrumentation.
Reading assignment: Valgrind: A Framework for Heavyweight Dynamic Binary Instrumentation (PLDI 2007).
16
Valgrind Infrastructure
(Figure: Valgrind architecture. The VALGRIND CORE sits between the binary code and the tools (Tool 1 … Tool n). The dispatcher looks up the current pc; on a miss, the BB decoder fetches the basic block, the tool's instrumenter adds instrumentation, and the BB compiler produces a new BB that is executed through a trampoline against the input and runtime state, yielding a new pc.)
17
Valgrind Infrastructure: An Example
Consider the binary code
1: do {
2:   i=i+1;
3:   s1;
4: } while (i<2)
5: s2;
running under a tool whose instrumenter inserts a print of the leading statement into each basic block.
The dispatcher looks up the pc of statement 1 and misses, so the BB decoder fetches the basic block 1-4.
The tool's instrumenter inserts print("1") at the head of the decoded block.
The BB compiler compiles the instrumented block into a new BB, which executes through the trampoline.
The loop iterates twice before i<2 fails, so the instrumented block runs twice. OUTPUT: 1 1
The dispatcher then looks up pc 5 and misses; the BB decoder fetches the block 5: s2, the instrumenter inserts print("5"), and the compiled block executes. OUTPUT: 1 1 5
24
Instrumentation with Valgrind
UCodeBlock* SK_(instrument)(UCodeBlock* cb_in, …) {
  …
  UCodeBlock* cb = VG_(setup_UCodeBlock)(…);
  for (i = 0; i < VG_(get_num_instrs)(cb_in); i++) {
    u = VG_(get_instr)(cb_in, i);
    switch (u->opcode) {
      case LD: …
      case ST: …
      case MOV: …
      case ADD: …
      case CALL: …
    }
  }
  return cb;
}
Note the difference between performing the analysis inside the instrumenter and inserting calls to handler functions.
The wrong instrumentation runs at instrumentation time:
  case LD: fprintf(log, "load %x\n", u->addr);
The correct instrumentation inserts a call that runs at execution time:
  case LD: VG_(insertCall)(cb, fprintf, …);
To see the difference, consider for (i=1 to 2) { ld [&A], r; ld [&B], r; ld [&C], r; }: the wrong version logs each of the three loads once, when the block is instrumented; the correct version logs a load every time one executes.
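The pitfall generalizes beyond Valgrind's UCode API. A minimal stand-alone sketch (illustrative, not the Valgrind API) of why analysis code must be inserted as calls rather than executed inside the instrumenter:

```cpp
#include <cstdio>
#include <functional>
#include <vector>

struct Instr { const char* op; const char* operand; };

int main() {
    // Static code of the loop body: ld [&A], r; ld [&B], r; ld [&C], r
    std::vector<Instr> block = {{"ld", "&A"}, {"ld", "&B"}, {"ld", "&C"}};
    std::vector<std::function<void()>> instrumented;

    // Instrumentation time: this loop runs ONCE per static instruction.
    for (const Instr& in : block) {
        // WRONG: a printf placed right here would log each load once, at
        // instrumentation time, no matter how often the block executes.
        // CORRECT: insert a call that runs every time the block executes.
        instrumented.push_back([in] { printf("load %s\n", in.operand); });
    }

    // Execution time: the block runs twice (for i = 1 to 2).
    for (int i = 1; i <= 2; i++)
        for (auto& call : instrumented) call();
    // Prints six records: &A &B &C &A &B &C.
}
```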
25
Outline
What is tracing.
Why tracing.
How to trace.
Reducing trace size.
26
Fine-Grained Tracing is Expensive
1: sum=0
2: i=1
3: while (i<N) do
4:   i=i+1
5:   sum=sum+i
   endwhile
6: print(sum)
If a trace is the sequence of PCs (instruction addresses) of executed instructions, then Trace(N=6) = 1 2 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 6.
Space complexity: 4 bytes * execution length (e.g., an execution of one billion instructions yields a 4 GB trace).
27
Basic Block Level Tracing
1: sum=0
2: i=1
3: while (i<N) do
4:   i=i+1
5:   sum=sum+i
   endwhile
6: print(sum)
Trace(N=6): 1 2 3 4 5 3 4 5 3 4 5 3 4 5 3 4 5 3 6
A basic block trace records one entry per executed basic block; the statements inside a block are implied by its leader. With blocks {1,2}, {3}, {4,5}, and {6}, the BB trace is 1 3 4 3 4 3 4 3 4 3 4 3 6 (13 entries instead of 19). See the sketch below.
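A minimal sketch of basic-block-level instrumentation of the program above, assuming a hypothetical trace() call at each block leader:

```cpp
#include <cstdio>

static void trace(int bb_leader) { printf("%d ", bb_leader); }

int main() {
    int N = 6, sum, i;
    trace(1); sum = 0; i = 1;                 // block {1,2}
    while (trace(3), i < N) {                 // block {3}: the loop header
        trace(4); i = i + 1; sum = sum + i;   // block {4,5}
    }
    trace(6); printf("\nsum=%d\n", sum);      // block {6}
}
// Emits 1 3 4 3 4 3 4 3 4 3 4 3 6: 13 records instead of 19.
```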
28
More Ideas
Would function-level tracing work? A trace entry would be a function call with its parameters. Not really: functions may read global variables and the heap, which are not passed in as parameters; it can be too coarse-grained (a function might contain a very computation-intensive loop); and functions may be nested.
Predicate tracing
Record all branch outcomes along the execution so that the control flow trace can be precisely reproduced. For the sum program, a predicate trace containing only "F" indicates that the first predicate took the false branch, so the control flow trace is 1 2 3 6 (a replayer sketch follows this slide). The limitation of this approach is that traces can only be reproduced from the beginning; reproducing a trace segment from an intermediate execution point is not feasible (random accessibility is lost).
Path-based tracing
A trace is divided into acyclic paths, and these paths are assigned ids; a path trace is then a sequence of path ids. More details will be disclosed in the next lecture.
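A minimal sketch of replaying the control flow of the sum program from a predicate trace; the encoding (one bool per predicate instance) is an assumption:

```cpp
#include <cstdio>
#include <vector>

// Regenerate the control flow trace of
//   1: sum=0  2: i=1  3: while (i<N)  4: i=i+1  5: sum=sum+i  6: print(sum)
// from the recorded outcomes of predicate 3.
std::vector<int> replay(const std::vector<bool>& outcomes) {
    std::vector<int> cf = {1, 2};
    size_t k = 0;
    for (;;) {
        cf.push_back(3);                  // the loop predicate executes
        if (!outcomes[k++]) break;        // recorded branch outcome
        cf.push_back(4);
        cf.push_back(5);
    }
    cf.push_back(6);
    return cf;
}

int main() {
    for (int s : replay({false}))         // predicate trace "F"
        printf("%d ", s);                 // prints 1 2 3 6
}
```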
29
Intel PT
(Intel Processor Trace: hardware support that records control flow, i.e., branch outcomes and targets, with low overhead.)
30
Program Slicing
Xiangyu Zhang
31
Outline
What is slicing.
Why slicing.
Static slicing.
Dynamic slicing.
Data dependence detection.
Control dependence detection.
Slicing algorithms (forward vs. backward).
Chopping.
32
What is a Slice?
S: … = f(v)
The slice of v at S is the set of statements involved in computing v's value at S. [Mark Weiser, 1982]
1  void main ( ) {
2    int I = 0;
3    int sum = 0;
4    while (I < N) {
5      sum = add(sum, I);
6      I = add(I, 1);
7    }
8    printf("sum=%d\n", sum);
9    printf("I=%d\n", I);
   }
The value of I at line 9 is computed by statements 2, 4, and 6; statement 4 also contributes to the value, as it controls the execution of 6.
One property is that a slice is an executable part of the program: executing the slice produces the same value. Some slicing techniques do not produce executable code segments; they are out of the scope of this course. Interested students are referred to Frank Tip's 1995 survey on program slicing.
33
Why Slicing?
Limit analysis scope: protocol reverse engineering.
Code reuse: extracting modules for reuse.
Partial execution replay: replay only the part of the execution that is relevant to a failure.
Partial rollback: partially roll back a transaction. In transactional computation, a transaction is rolled back if conflicts happen when it is committed; slicing enables partial rollback.
Information flow: prevent confidential information from being sent out to an untrusted environment.
Data-flow testing: if we are only interested in some variables, we can identify their slices; if the code changes are not in the slices, we don't need to rerun the test cases.
Others.
34
How to Compute Slices?
Build a dependence graph over the statements, with data dependence and control dependence edges.
(Figure: dependence graph of the sum example: I=0; sum=0; I<N; sum=sum+I; I=I+1; print(sum); print(I). For readability, not all data dependences are shown.)
X is data dependent on Y if (1) there is a variable v that is defined at Y and used at X, and (2) there exists a path of nonzero length from Y to X along which v is not re-defined.
Note that data dependence is hard to compute compared to control dependence; aliasing is the major problem.
35
How to Compute Slices? (continued)
(Figure: the same dependence graph, now with control dependence edges.)
Y is control dependent on X iff
X directly determines whether Y executes;
X is not strictly post-dominated by Y;
there exists a path from X to Y such that every node in the path other than X and Y is post-dominated by Y.
For example, in if (P) S1 else S2; S3, both S1 and S2 are control dependent on P, while S3 is not, because S3 post-dominates P.
36
How to Compute Slices? (continued)
Given a slicing criterion, i.e., the starting point, a slice is computed as the set of reachable nodes in the dependence graph.
A slicing criterion consists of a statement and a variable.
1: I=0
2: sum=0
3: I<N
4: sum=sum+I
5: I=I+1
6: print(sum)
7: print(I)
Slice(6) = ?
37
Static Slices are Imprecise
Don't have dynamic control flow information:
1: if (P)
2:   x=f(…);
3: else
4:   x=g(…);
5: …=x;
Use of pointers (static alias analysis is very imprecise):
1: int a,b,c;
2: a=…;
3: b=…;
4: p=&a;
5: …=p[i];
Use of function pointers.
(Recall the previous sum example.)
38
Dynamic Slicing [Korel and Laski, 1988]
The set of executed statement instances that did contribute to the value of the criterion.
Dynamic slicing makes use of all information about a particular execution of a program.
Dynamic slices are often computed by constructing a dynamic program dependence graph (DPDG): each node is an executed statement (instruction) instance, and an edge is present between two nodes if there exists a data/control dependence.
A dynamic slicing criterion is a triple <Var, Execution Point, Input>.
The set of statements reachable in the DPDG from a criterion constitutes the slice.
Dynamic slices are smaller, more precise, and more helpful to the user.
39
An Example
1: I=0
2: sum=0
3: I<N
4: sum=sum+I
5: I=I+1
6: print(sum)
7: print(I)
Trace (N=0): 1₁ 2₁ 3₁ 6₁ 7₁
Slice(I@7) = {1, 7}: the value of I at 7₁ comes from 1₁ only, since statement 5 never executed. Compare the static slice {1, 3, 5, 7}.
40
Another Example
1: I=0
2: sum=0
3: I<N
4: sum=sum+I
5: I=I+1
6: print(sum)
7: print(I)
Trace (N=1): 1₁ 2₁ 3₁ 4₁ 5₁ 3₂ 6₁ 7₁
Slice(I@7) = {1, 3, 5, 7}: 7₁ reads I defined at 5₁; 5₁ is control dependent on 3₁ and reads I defined at 1₁; 3₁ reads I defined at 1₁.
41
Offline Algorithms – Data Dep
Instrument the program to generate the control flow and memory access trace.
    void main ( ) {
1:    int I = 0;
2:    int sum = 0;
3:    while (I < N) {
4:      sum = add(sum, I);
5:      I = add(I, 1);
6:    }
7:    printf("sum=%d\n", sum);
8:    printf("I=%d\n", I);
    }
42
Offline Algorithms – Data Dep
Instrumented program and the resulting trace:
    void main ( ) {
1:    int I = 0;              trace("1 W " + &I);
2:    int sum = 0;            trace("2 W " + &sum);
3:    while ((trace("3 R " + &I + &N), I < N)) {
4:      sum = add(sum, I);    trace("4 R " + &I + &sum + " W " + &sum);
5:      I = add(I, 1);        trace("5 R " + &I + " W " + &I);
6:    }
7:    printf("sum=%d\n", sum);  trace("7 R " + &sum);
8:    printf("I=%d\n", I);      trace("8 R " + &I);
    }
Trace (N=1):
1 W &I
2 W &sum
3 R &I &N
4 R &I &sum W &sum
5 R &I W &I
3 R &I &N
7 R &sum
8 R &I
43
Offline Algorithms – Data Dep
Given the trace, for each "R, addr" entry, traverse backward to find the closest "W, addr" entry and introduce a data dependence edge; then traverse further to find the corresponding writes for the reads performed by the identified write, and so on.
Example: "8 R &I" -> "5 W &I"; then "5 R &I" -> "1 W &I". A sketch follows below.
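A minimal sketch of the backward traversal, assuming the trace has been parsed into (statement, access kind, address) records; all names are illustrative:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct Entry { int stmt; char kind; uintptr_t addr; };   // kind: 'R' or 'W'

// For the read at position pos, scan backward for the closest write to the
// same address: that write is the source of the data dependence.
long findDef(const std::vector<Entry>& trace, size_t pos) {
    for (size_t i = pos; i-- > 0; )
        if (trace[i].kind == 'W' && trace[i].addr == trace[pos].addr)
            return (long)i;
    return -1;   // read of a location never written in the trace
}

int main() {
    uintptr_t I = 1, SUM = 2, N = 3;   // stand-ins for &I, &sum, &N
    std::vector<Entry> trace = {       // the trace from the slide
        {1,'W',I}, {2,'W',SUM}, {3,'R',I}, {3,'R',N},
        {4,'R',I}, {4,'R',SUM}, {4,'W',SUM},
        {5,'R',I}, {5,'W',I},
        {3,'R',I}, {3,'R',N}, {7,'R',SUM}, {8,'R',I}};
    long def = findDef(trace, trace.size() - 1);   // start from "8 R &I"
    if (def >= 0)
        printf("8 -> %d\n", trace[def].stmt);      // prints 8 -> 5
}
```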
44
Offline Algorithms – Control Dep
Assume there are no recursive functions, and let CD(i) be the set of static control dependences of statement i. Traverse backward to find the closest x such that x is in CD(i), and introduce a dynamic control dependence edge from i to x.
This is problematic in the presence of recursion.
45
Efficiently Computing Dynamic Dependences
The previously mentioned graph construction algorithm implies offline traversals of long memory reference and control flow traces.
Efficient online algorithms:
Online data dependence detection.
Online control dependence detection.
46
Efficient Data Dependence Detection
Basic idea:
i: x=…    =>  hashmap[x] = i
j: …=x…   =>  a dependence j -> hashmap[x], i.e., j -> i, is detected
Trace (N=1) and the detection steps:
1₁: I=0          hashmap[I] = 1₁
2₁: sum=0        hashmap[sum] = 2₁
3₁: I<N          dep 3₁ -> hashmap[I] = 1₁
4₁: sum=sum+I    dep 4₁ -> 1₁ and 4₁ -> 2₁; hashmap[sum] = 4₁
5₁: I=I+1        dep 5₁ -> 1₁; hashmap[I] = 5₁
3₂: I<N          dep 3₂ -> hashmap[I] = 5₁
6₁: print(sum)   dep 6₁ -> hashmap[sum] = 4₁
7₁: print(I)     dep 7₁ -> hashmap[I] = 5₁
The algorithm requires a very large hash map; an efficient implementation of such a hash map will be discussed later in case studies. A sketch follows below.
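A minimal sketch of the online detector, assuming each executed instruction instance reports its reads and writes; the hash map replaces the backward scan with an O(1) lookup:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <unordered_map>

// Maps each address (or variable) to the instance that last wrote it.
static std::unordered_map<uintptr_t, std::string> lastWriter;

void onWrite(const std::string& inst, uintptr_t addr) {
    lastWriter[addr] = inst;                    // record the new definition
}

void onRead(const std::string& inst, uintptr_t addr) {
    auto it = lastWriter.find(addr);            // O(1) instead of a scan
    if (it != lastWriter.end())
        printf("dep: %s -> %s\n", inst.c_str(), it->second.c_str());
}

int main() {                                    // replays the slide's trace
    uintptr_t I = 1, SUM = 2;                   // stand-ins for &I, &sum
    onWrite("1_1", I); onWrite("2_1", SUM);     // 1₁: I=0    2₁: sum=0
    onRead("3_1", I);                           // 3₁: I<N    -> 1₁
    onRead("4_1", I); onRead("4_1", SUM); onWrite("4_1", SUM);
    onRead("5_1", I); onWrite("5_1", I);        // 5₁: I=I+1  -> 1₁
    onRead("3_2", I);                           // 3₂: I<N    -> 5₁
    onRead("6_1", SUM);                         // 6₁         -> 4₁
    onRead("7_1", I);                           // 7₁         -> 5₁
}
```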
47
Efficient Dynamic Control Dependence (DCD) Detection
Def: y_j dynamically control depends (DCD) on x_i iff there exists a path from x_i to Exit that does not pass y_j, and no such path exists for the nodes on the executed path between x_i and y_j.
Region: the executed statements between a predicate instance and its immediate post-dominator form a region.
48
Region Examples
1. for (i=0; i<N; i++) {
2.   if (i%2 == 0)
3.     p = &a[i];
4.   foo(p);
5. }
6. a = a+1;
Execution (two loop iterations): 1₁ 2₁ 3₁ 4₁ 1₂ 2₂ 4₂ 1₃ 6₁
Each instance of predicate 1 opens a region that ends at its immediate post-dominator 6; each instance of predicate 2 opens a region that ends at its immediate post-dominator 4. Thus 3₁ lies in the region of 2₁, which lies in the region of 1₁.
A statement instance x_i DCDs on the predicate instance leading x_i's enclosing region.
Regions are either nested or disjoint; they never overlap.
49
DCD Properties
Def: y_j DCD on x_i iff there exists a path from x_i to Exit that does not pass y_j, and no such path exists for the nodes on the executed path between x_i and y_j.
Region: the executed statements between a predicate instance and its immediate post-dominator form a region.
Property One: a statement instance x_i DCDs on the predicate instance leading x_i's enclosing region.
50
Property One
A statement instance x_i DCDs on the predicate instance leading x_i's enclosing region.
Proof: Let the predicate instance be p_j, and assume x_i does not DCD on p_j. Then either there is no path from p_j to Exit that does not pass x_i, which means x_i is a post-dominator of p_j, contradicting the condition that x_i is inside the region delimited by p_j and its immediate post-dominator; or there is a y_k between p_j and x_i such that y_k has a path to Exit that does not pass x_i. In the latter case, since p_j's immediate post-dominator is also a post-dominator of y_k, y_k and p_j's immediate post-dominator delimit a smaller region that includes x_i, contradicting the assumption that p_j leads the enclosing region of x_i.
51
DCD Properties
Def: y_j DCD on x_i iff there exists a path from x_i to Exit that does not pass y_j, and no such path exists for the nodes on the executed path between x_i and y_j.
Region: the executed statements between a predicate instance and its immediate post-dominator form a region.
Property Two: regions are disjoint or nested, never overlapping.
52
Property Two
Regions are either nested or disjoint; they never overlap.
Proof: Assume two regions (x, y) and (m, n) overlap, with m residing in (x, y). Then y resides in (m, n), which implies there is a path P from m to Exit that does not pass y. The executed path from x to m followed by P constitutes a path from x to Exit without passing y, contradicting the condition that y is a post-dominator of x. Note that x, y, m, and n are execution points of the form s_i, where s is a statement and i is the execution instance.
53
Efficient DCD Detection
Observation: regions have the LIFO characteristic; otherwise, some regions would overlap.
Implication: the sequence of nested active regions for the current execution point can be maintained with a stack, called the control dependence stack (CDS).
A region is nested in the region right below it on the stack.
The enclosing region of the current execution point is always the top entry of the stack; therefore, the execution point is control dependent on the predicate that leads the top region.
An entry is pushed onto the CDS when a branching point (predicate, switch statement, etc.) executes.
The top entry is popped when the immediate post-dominator of its branching point executes, denoting the end of the current region.
54
Algorithm
Predicate (x_i) {
    CDS.push(<x_i, IPD(x)>);
}
Merge (t_j) {
    while (CDS.top().second == t)
        CDS.pop();
}
GetCurrentCD () {
    return CDS.top().first;
}
Merge is called upon each statement execution. A simple, elegant, neat algorithm; a C++ sketch follows below.
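A minimal C++ sketch of the CDS, assuming immediate post-dominators are precomputed; the ids are illustrative:

```cpp
#include <stack>
#include <utility>

struct CDS {
    // <predicate instance id, statement id of its immediate post-dominator>
    std::stack<std::pair<int, int>> s;

    void predicate(int inst, int ipd) {     // a branching point executes
        s.push({inst, ipd});
    }
    void merge(int stmt) {                  // called on every statement
        while (!s.empty() && s.top().second == stmt)
            s.pop();                        // pop regions ending at stmt
    }
    int currentCD() const {                 // predicate instance the current
        return s.empty() ? -1 : s.top().first;   // execution point DCDs on
    }
};
```

Replaying the region example: predicate(1₁, 6), then predicate(2₁, 4); when statement 4 executes, merge(4) pops <2₁, 4>, so currentCD() for 4₁ is 1₁.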
55
An Example
(Figure: CDS snapshots during an example execution; entries such as <6₂, 14>, <6₁, 14>, and <5₁, EXIT> pair a predicate instance with its immediate post-dominator.)
56
Interprocedural Control Dependence
if (p1) {
    foo ( );
}
foo ( ) {
    s1;
    if (p2) s2;
    s3;
}
It looks like we can simply extend the CDS across function boundaries. But what if s2 is a recursive call to foo ( ) itself? The fix is to annotate CDS entries with the calling context (e.g., entries of the form <1₁cc1, 4>, <1₂cc2, 4>, <1₃cc3, 4>).
57
Wrap Up
We have introduced the concepts of slicing and dynamic slicing.
Offline dynamic slicing algorithms based on backwards traversal over traces are not efficient.
Online algorithms that detect data and control dependences were discussed.
58
Forward Dynamic Slice Computation
The approaches we have discussed so far are backwards: dependence graphs are traversed backwards from a slicing criterion. The space complexity is O(execution length), because one node is created for each executed instruction.
Forward computation: a slice is represented as a set of statements that are involved in computing the value of the slicing criterion, and a slice is always maintained for each variable.
59
The Algorithm
An assignment statement execution is formulated as
    s_i: x = p_j? op(src1, src2, …);
That is, the statement execution instance s_i is control dependent on p_j and operates on variables src1, src2, etc.
Upon the execution of s_i, the slice of x is updated to
    Slice(x) = {s} ∪ Slice(src1) ∪ Slice(src2) ∪ … ∪ Slice(p_j)
The slice of variable x is the union of the current statement, the slices of all variables that are used, and the slice of the predicate instance that s_i is control dependent on, because they all contribute to the value of x.
Such slices are equivalent to slices computed by backwards algorithms (proof omitted).
Slices are stored in a hash map with variables as keys. The computation of Slice(p_j) is on the next slide; note that p_j is not a variable.
60
The Algorithm (continued)
A predicate execution is formulated as
    s_i: p_j? op(src1, src2, …)
That is, the predicate itself is control dependent on another predicate instance p_j, and the branch outcome is computed from variables src1, src2, etc.
Upon the execution of s_i, a triple is pushed onto the CDS with the format
    <s_i, IPD(s), {s} ∪ Slice(src1) ∪ Slice(src2) ∪ … ∪ Slice(p_j)>
The entry is popped at its immediate post-dominator.
Slice(p_j) can be retrieved from the top element of the CDS. A sketch of the whole algorithm follows below.
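A minimal sketch of the forward algorithm, combining the two rules with the CDS; representing variables by name and slices as std::set is an implementation assumption:

```cpp
#include <map>
#include <set>
#include <stack>
#include <string>
#include <tuple>
#include <vector>

using Slice = std::set<int>;
static std::map<std::string, Slice> slices;          // one slice per variable
static std::stack<std::tuple<int, int, Slice>> cds;  // <inst, IPD, Slice(p)>

static Slice combine(int s, const std::vector<std::string>& srcs) {
    Slice r = {s};                                 // the statement itself
    for (const auto& v : srcs)                     // slices of used variables
        r.insert(slices[v].begin(), slices[v].end());
    if (!cds.empty()) {                            // Slice(p_j): slice of the
        const Slice& p = std::get<2>(cds.top());   // controlling predicate
        r.insert(p.begin(), p.end());
    }
    return r;
}

// s_i: x = op(src1, src2, ...)
void assign(int s, const std::string& x, const std::vector<std::string>& srcs) {
    slices[x] = combine(s, srcs);
}

// s_i: predicate op(src1, src2, ...), with immediate post-dominator ipd
void predicate(int s, int ipd, const std::vector<std::string>& srcs) {
    cds.emplace(s, ipd, combine(s, srcs));
}

void merge(int stmt) {                             // pop regions ending here
    while (!cds.empty() && std::get<1>(cds.top()) == stmt)
        cds.pop();
}
```

Running the example on the next slide (a=1; b=2; c=a+b; if a<b then d=b*c) through assign() and predicate() reproduces Slice(d) = {1,2,3,4,5}.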
61
Example
Statements executed and the dynamic slices computed:
1: a=1            1₁: a=1            Slice(a) = {1}
2: b=2            2₁: b=2            Slice(b) = {2}
3: c=a+b          3₁: c=a+b          Slice(c) = {1,2,3}
4: if a<b then    4₁: if a<b then    push(<4₁, 6, {1,2,4}>)
5: d=b*c          5₁: d=b*c          Slice(d) = {1,2,3,4,5}
6: ……
62
Properties
The slices are equivalent to those computed by backwards algorithms (proof omitted).
The space complexity is bounded by O((# of variables + MAX_CDS_DEPTH) * # of statements).
Efficiency relies on the hash map implementation and the set operations. A cost-effective implementation will be discussed later in CS510.
63
Extending Slicing
Essentially, slicing is an orthogonal approach to isolating part of a program (execution) given a certain criterion.
Mutations of slicing:
Event slicing: intrusion detection, execution fast forwarding, understanding network protocols, malware replay.
Forward slicing.
Chopping.
Probabilistic slicing.
64
Limitations of Dynamic Slicing
Execution omission:
x=input();
y=0;
if (x>10) y=y+1;
output(y);
(If x <= 10, the assignment y=y+1 does not execute, so the dynamic slice of y at output(y) misses the causal effect of x.)
Horrible approximation of causality:
if (x==10) y=1;   vs.   if (x!=10) y=1;
for (i=0;i<n;i++) sum+=i;   vs.   if (i<n) sum+=i;
65
In-class Exercise 1
Construct the CFG and PDG for the following program (shown in the slide figure).
66
In-class Exercise 2
Compute the dynamic control dependences for the following program (shown in the slide figure), assuming the trace is 1, 3, 4, 6, 7, 12, 3, 4, 6, 9, 10, 3, 4, 5, 14, 16, 17.
67
Discussion
(First, introduce yourself and your research.)
What research projects have you done in the past that use dynamic analysis?
What were the challenges in those projects, and how did you address them?
How can we quantify dependences?