Cross Language Clone Analysis
Team 2, October 27, 2010
Agenda
◦ Current Tasks
◦ GOLD Parsing System
◦ Grammar Update
◦ Clone Analysis Demonstration
◦ Team Collaboration
◦ Path Forward
Team Members
◦ Allen Tucker
◦ Patricia Bradford
◦ Greg Rodgers
◦ Brian Bentley
◦ Ashley Chafin
Current Tasks: what we are tackling
Current tasks created for the first user story, "Source Code Load & Translate":
◦ Load & parse C# source code.
◦ Load & parse Java source code.
◦ Load & parse C++ source code.
◦ Translate the parsed C# source code to CodeDOM.
◦ Translate the parsed Java source code to CodeDOM.
◦ Translate the parsed C++ source code to CodeDOM.
◦ Associate the CodeDOM with the original source code.
GOLD Parsing: Populating CodeDOM
What are we doing?
◦ Compiled Grammar Table
◦ Bookkeeping
◦ Testing
The grammar is compiled into a Compiled Grammar Table (*.cgt); the parsing engine then uses that table to turn source code into parsed data. The typical output from the engine is a long nested tree.
The same pipeline (grammar → *.cgt → parsed data) now ends in a CodeDOM conversion step: we need to write a routine to move data from the parsed tree (an AST) into CodeDOM. The parsed data trees from the parser are stored in a consistent data structure, but they are based on the rules defined within each grammar.
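The conversion routine can be sketched as a recursive walk that maps each parse-tree node into a node of the common model. All class and symbol names below are illustrative stand-ins, not the actual GOLD engine or CodeDOM types:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a parse-tree node produced by a GOLD-style engine.
class ParseNode {
    final String symbol;                       // rule or terminal name
    final List<ParseNode> children = new ArrayList<>();
    ParseNode(String symbol) { this.symbol = symbol; }
    ParseNode add(ParseNode c) { children.add(c); return this; }
}

// Minimal stand-in for a language-independent model element (CodeDOM-like).
class ModelNode {
    final String kind;
    final List<ModelNode> members = new ArrayList<>();
    ModelNode(String kind) { this.kind = kind; }
}

public class TreeToModel {
    // Recursively copy the parse tree into the common model,
    // dropping purely syntactic nodes such as punctuation.
    static ModelNode convert(ParseNode p) {
        ModelNode m = new ModelNode(p.symbol);
        for (ParseNode c : p.children) {
            if (!c.symbol.equals("punct")) {
                m.members.add(convert(c));
            }
        }
        return m;
    }

    public static void main(String[] args) {
        ParseNode tree = new ParseNode("class")
            .add(new ParseNode("method"))
            .add(new ParseNode("punct"));
        ModelNode model = convert(tree);
        System.out.println(model.kind + ":" + model.members.size()); // class:1
    }
}
```

In the real routine the mapping is rule-specific (one case per production), which is why the production-rule bookkeeping described below matters.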
For Java, there are:
◦ 359 production rules
◦ 249 distinct symbols (terminal & non-terminal)
For C#, there are:
◦ 415 production rules
◦ 279 distinct symbols (terminal & non-terminal)
Since there are so many production rules, we came up with the following bookkeeping: a spreadsheet of the compiled grammar table (one per language) with each production rule indexed. The spreadsheet covers:
◦ various aspects of the language
◦ what we have and have not handled from the parser
◦ what we have and have not implemented in CodeDOM
◦ percentage complete
Testing
◦ White-box testing: unit testing.
◦ Black-box testing: production rule testing, which lets us test the robustness of our engine because we can force production rule errors.
◦ Regression testing (automated).
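The black-box idea, forcing a rule error and asserting rejection, can be sketched with a toy recognizer; the trivial balanced-parenthesis check below is only a stand-in for the real GOLD-based engine:

```java
public class RuleTests {
    // Toy stand-in for the real parser: accepts only balanced parentheses.
    static boolean parses(String src) {
        int depth = 0;
        for (char ch : src.toCharArray()) {
            if (ch == '(') depth++;
            else if (ch == ')' && --depth < 0) return false;
        }
        return depth == 0;
    }

    public static void main(String[] args) {
        // Well-formed input must be accepted...
        System.out.println(parses("(a(b))"));
        // ...and a forced rule error (unclosed parenthesis) must be rejected.
        System.out.println(parses("(a(b)"));
    }
}
```

Automating a suite of such accept/reject pairs per production rule gives the regression tests described above.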
Three-Step Process
◦ Step 1, Code Translation: source files → translator → common model
◦ Step 2, Clone Detection: common model → inspector → detected clones
◦ Step 3, Visualization: detected clones → UI → clone visualization
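The three steps above compose like function composition, each stage consuming the previous stage's output. A minimal sketch (all names are illustrative, not the project's actual API):

```java
import java.util.List;
import java.util.function.Function;

public class Pipeline {
    // Chain the three stages: source files -> common model -> clones -> UI.
    static <A, B, C, D> Function<A, D> pipeline(Function<A, B> translate,
                                                Function<B, C> detect,
                                                Function<C, D> visualize) {
        return translate.andThen(detect).andThen(visualize);
    }

    public static void main(String[] args) {
        Function<List<String>, String> run = pipeline(
            files -> "model(" + files.size() + " files)",   // Step 1: translation
            model -> "clones in " + model,                  // Step 2: detection
            clones -> "UI: " + clones);                     // Step 3: visualization
        System.out.println(run.apply(List.of("A.java", "B.cs")));
    }
}
```

Keeping the stages behind separate interfaces is what lets Team 2 and Team 3 work on different translators against the same common model.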
Java & C#
The grammars we currently have for the GOLD parser are outdated.
Current GOLD grammars:
◦ C# version 2.0
◦ Java version 1.4
Currently available language versions:
◦ C# version 4.0
◦ Java version 6
Available updated grammars:
◦ ANTLR has grammars updated to more recent versions of both C# and Java:
◦ C# version 4.0 (latest version)
◦ Java version 1.5 (second-to-latest version)
We are currently attempting to transform the ANTLR grammars into GOLD parser grammars.
Grammars for C# and Java are very complex and require a lot of work to build, and ANTLR and GOLD parser grammars use completely different syntax. On a positive note, other development is not halted by using the older grammars.
Overview and Dr. Kraft's Students' Tool
Software clones (definitions from Wikipedia):
◦ Duplicate code: a sequence of source code that occurs more than once, either within a program or across different programs owned or maintained by the same entity.
◦ Clones: sequences of duplicate code.
"Clones are segments of code that are similar according to some definition of similarity." (Ira Baxter, 2002)
How clones are created:
◦ copy-and-paste programming
◦ similar functionality leading to similar code
◦ plagiarism
Three types of clones:
◦ Type 1: an exact copy without modifications (except for whitespace and comments).
◦ Type 2: a syntactically identical copy; only variable, type, or function identifiers have been changed.
◦ Type 3: a copy with further modifications; statements have been changed, added, or removed.
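The type distinctions can be illustrated with a small hypothetical Java fragment. A true Type 1 clone of `sum` would be a character-for-character copy (which the compiler would reject as a duplicate method in one class), so only the Type 2 and Type 3 variants appear as code:

```java
public class CloneTypes {
    // Original fragment.
    static int sum(int[] a) {
        int total = 0;
        for (int v : a) total += v;
        return total;
    }

    // Type 2 clone: syntactically identical, only identifiers renamed.
    static int addAll(int[] xs) {
        int acc = 0;
        for (int x : xs) acc += x;
        return acc;
    }

    // Type 3 clone: statements changed/added (skips negative values).
    static int sumPositive(int[] a) {
        int total = 0;
        for (int v : a) {
            if (v < 0) continue;   // added statement
            total += v;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};
        System.out.println(sum(data) + " " + addAll(data) + " " + sumPositive(data));
    }
}
```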
Per our task, to find clones across different programming languages, we first have to convert the code from each language to a language-independent object model. Some language-independent object models:
◦ Dagstuhl Middle Metamodel (DMM)
◦ Microsoft CodeDOM
Both provide a language-independent object model for representing the structure of source code.
Detecting clones across multiple programming languages is on the cutting edge of research. A preliminary version of this was done by Dr. Kraft and his students for C# and VB:
◦ They compared the Mono C# parser (written in C#) to the Mono VB parser (written in VB).
◦ Publication: Nicholas A. Kraft, Brandon W. Bonds, Randy K. Smith: Cross-language Clone Detection. SEKE 2008: 54-59.
Their approach: token sequences of CodeDOM graphs compared with Levenshtein distance.
◦ The Levenshtein distance between two sequences is the minimum number of edits needed to transform one sequence into the other.
◦ The tool performs pairwise comparisons of code files: each CodeDOM tree is tokenized, and matches are reported based on distances (the percentage of matching tokens in a sequence).
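A minimal sketch of the comparison step, assuming tokens are compared as plain strings:

```java
public class TokenDistance {
    // Classic dynamic-programming Levenshtein distance over token sequences.
    static int levenshtein(String[] a, String[] b) {
        int[][] d = new int[a.length + 1][b.length + 1];
        for (int i = 0; i <= a.length; i++) d[i][0] = i;
        for (int j = 0; j <= b.length; j++) d[0][j] = j;
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++) {
                int cost = a[i - 1].equals(b[j - 1]) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1,    // deletion
                                            d[i][j - 1] + 1),   // insertion
                                   d[i - 1][j - 1] + cost);     // substitution
            }
        return d[a.length][b.length];
    }

    // Similarity as the fraction of matching tokens in the longer sequence.
    static double similarity(String[] a, String[] b) {
        int n = Math.max(a.length, b.length);
        return n == 0 ? 1.0 : 1.0 - (double) levenshtein(a, b) / n;
    }

    public static void main(String[] args) {
        String[] s1 = {"int", "x", "=", "0", ";"};
        String[] s2 = {"int", "y", "=", "0", ";"};
        System.out.println(levenshtein(s1, s2));   // one substitution -> 1
        System.out.println(similarity(s1, s2));    // 1 edit out of 5 tokens
    }
}
```

Two sequences count as a clone candidate when this similarity exceeds a chosen threshold.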
Limitations:
◦ Only does file-to-file comparisons; does not detect clones within the same source file.
◦ Can only detect Type 1 and some Type 2 clones.
◦ Not very efficient (brute force).
Tokens are split into parameter tokens (identifiers and literals) and non-parameter tokens:
◦ Non-parameter tokens are summarized using a hash function.
◦ Parameter tokens are encoded using a position index for their occurrence in the sequence, which abstracts away concrete names and values while maintaining order.
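A simplified sketch of the parameter-token encoding; the set of non-parameter tokens here is hard-coded for illustration, whereas the real scheme summarizes them with a hash function:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TokenEncoding {
    // Illustrative fixed set of non-parameter tokens (keywords/operators).
    static final Set<String> NON_PARAMETER = Set.of("int", "return", "=", "+", ";");

    // Encode parameter tokens by the order of their first occurrence, so
    // concrete names disappear but the pattern of reuse is preserved.
    static List<String> encode(List<String> tokens) {
        Map<String, Integer> seen = new LinkedHashMap<>();
        List<String> out = new ArrayList<>();
        for (String t : tokens) {
            if (NON_PARAMETER.contains(t)) {
                out.add(t);                       // non-parameter token kept as-is
            } else {
                seen.putIfAbsent(t, seen.size()); // position index of first occurrence
                out.add("P" + seen.get(t));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> a = encode(List.of("int", "x", "=", "x", "+", "y", ";"));
        List<String> b = encode(List.of("int", "a", "=", "a", "+", "b", ";"));
        System.out.println(a.equals(b));  // renamed (Type 2) clone encodes identically
    }
}
```

Because both fragments encode to the same sequence, consistently renamed clones become exact matches for the next step.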
All suffixes of the sequence are represented in a suffix tree. Suffixes that share the same set of edges have a common prefix, meaning that prefix occurs more than once in the sequence: a clone.
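A full suffix tree is beyond a short sketch, but the same effect can be shown with sorted suffixes: adjacent sorted suffixes that share a long common prefix mark a token run occurring more than once, i.e. a clone candidate. This is a simplified stand-in, not the tool's actual algorithm:

```java
import java.util.ArrayList;
import java.util.List;

public class SuffixClones {
    // Length of the longest token run that occurs at least twice.
    static int longestRepeat(String[] tokens) {
        List<Integer> starts = new ArrayList<>();
        for (int i = 0; i < tokens.length; i++) starts.add(i);
        // Sort suffix start positions lexicographically by their token sequences.
        starts.sort((p, q) -> {
            for (int i = 0; p + i < tokens.length && q + i < tokens.length; i++) {
                int c = tokens[p + i].compareTo(tokens[q + i]);
                if (c != 0) return c;
            }
            return Integer.compare(tokens.length - p, tokens.length - q);
        });
        // Adjacent sorted suffixes share the longest common prefixes.
        int best = 0;
        for (int k = 1; k < starts.size(); k++) {
            int p = starts.get(k - 1), q = starts.get(k), len = 0;
            while (p + len < tokens.length && q + len < tokens.length
                   && tokens[p + len].equals(tokens[q + len])) len++;
            best = Math.max(best, len);
        }
        return best;
    }

    public static void main(String[] args) {
        String[] seq = {"a", "=", "b", ";", "a", "=", "b", ";", "c"};
        System.out.println(longestRepeat(seq));  // "a = b ;" repeats -> 4
    }
}
```

A real suffix tree finds all such repeats in linear time, which is what makes this approach more efficient than the brute-force pairwise comparison above.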
Demonstration: what’s been done
Team Collaboration: Team 2 & Team 3
Team 2
◦ We plan to start giving Team 3 periodic drops of our source code for Java and C# parsing.
◦ We are researching and working to update the Java and C# grammars.
Team 3
◦ Team 3 is working on C++ parsing and is looking into another parser, Elsa.
Next Iteration & Schedule
Path Forward
◦ Finalize Iteration 1 (C++ to CodeDOM)
◦ Iteration 2 (Code Analysis)
◦ Iteration 3 (Begin GUI)
Schedule