
1 Cross Language Clone Analysis Team 2 November 22, 2010

2  Feasibility Study  Release Plan  Architecture  Parsing  CodeDOM  Clone Analysis  Testing  Demonstration  Team Collaboration  Path Forward 2

3  Allen Tucker  Patricia Bradford  Greg Rodgers  Brian Bentley  Ashley Chafin 3

4 Our evaluation of the project to determine the difficulty in carrying out the task. 4

5  Our Customers: Dr. Etzkorn and Dr. Kraft  Customer Request: ◦ A tool that will abstract programs in C++, C#, Java, and (Python or VB) to the Dagstuhl Middle Metamodel, Microsoft CodeDOM or something similar, and detect cross-language clones.  Areas to Note: ◦ the user interface ◦ easy comparisons of clones ◦ visualization of clones ◦ sub-clones ◦ clone detection for large bodies of code 5

6  Per our task, in order to find clones across different programming languages, we must first convert the code from each language to a language-independent object model.  Some Language Independent Object Models: ◦ Dagstuhl Middle Metamodel (DMM) ◦ Microsoft CodeDOM  Both of these models provide a language-independent object model for representing the structure of source code. 6
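The translation step above can be sketched with a toy common model. This is an illustrative stand-in, not the actual DMM or CodeDOM types: two declarations, one Java-style and one C#-style, map onto the same language-independent node.

```python
# Minimal sketch (hypothetical classes, not the team's real model) of mapping
# language-specific method declarations onto one common model node.
from dataclasses import dataclass

@dataclass(frozen=True)
class MethodNode:
    name: str
    param_types: tuple
    return_type: str

def from_java(decl: str) -> MethodNode:
    # "int add(int a, int b)" -- deliberately naive parsing for illustration
    ret, rest = decl.split(" ", 1)
    name, params = rest.split("(", 1)
    types = tuple(p.strip().split()[0]
                  for p in params.rstrip(")").split(",") if p.strip())
    return MethodNode(name.strip(), types, ret)

def from_csharp(decl: str) -> MethodNode:
    # C# declarations share the same shape here; normalize naming convention
    node = from_java(decl)
    return MethodNode(node.name.lower(), node.param_types, node.return_type)

java_m = from_java("int add(int x, int y)")
cs_m = from_csharp("int Add(int a, int b)")
print(java_m == cs_m)  # prints True: both collapse to the same common node
```

Once both languages live in one model, the clone inspector never has to know which language a node came from.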

7  Three-Step Process ◦ Step 1, Code Translation: Source Files → Translator → Common Model ◦ Step 2, Clone Detection: Common Model → Inspector → Detected Clones ◦ Step 3, Visualization: Detected Clones → UI → Clone Visualization 7

8  Fact: Modularity is a key characteristic in today’s software world  Why? Allows us to decompose software into a separation of concerns ◦ Contributes to maintainability, reusability, testability and reliability  Clone Detection allows us to detect common software spread across large bodies of code ◦ Identify code that is subject to further modularity 8

9  Clone Detection Software Suite ◦ Identifies ◦ Tracks ◦ Manages Software Clones  Multi-language support ◦ C++ ◦ C# ◦ Java 9

10  Provides complete code coverage  Multi-Application Support ◦ Stand-alone ◦ Plug-in based (Eclipse) ◦ Backend service (Ant task)  Extendible ◦ Built on a Plug-in Framework ◦ Add new languages  Easy to Navigate between Clones  Persists Clones for easy Retrieval 10

11  Complexity of problem proves more difficult than initial estimates.  Technology to be applied is neither well-established nor fully developed.  Unable to complete defined project scope within schedule.  Volatile user requirements leading to redefinition of project objectives. 11

12 Release Plan and User Stories 12

13  Came out with original Release Plan on 9/15/10  Due to customer wants/needs, we had to re-tool our user stories.  Dr. Etzkorn’s main concerns:  Load source code and translate to a language independent model  Analyze the translated source code for clones ◦ Results from meeting:  Created two new user stories (see next two slides)  These two user stories have been pushed to the front of our card stack 13

14 Phase I

15 Story ID: 017  Priority: 1  Estimate: 14 Days  As an analyst I want the tool to load and translate my source code projects so I can analyze the source for clones. 15

16 Story ID: 018  Priority: 1  Estimate: 14 Days  As an analyst I want the tool to analyze my source code projects so I can see the clones. 16

17 Story ID: 002  Priority: 1  Estimate: 14 Days  As an analyst I want the capability to have the source code associated with clones highlighted within source files so that they are easy to identify. 17

18 Requirements & Models 18

19  Requirements modeling for the first user story “Source Code Load & Translate”: ◦ Load & parse C#, Java, C++ source code. ◦ Translate the parsed C#, Java, C++ source code to CodeDOM. ◦ Associate the CodeDOM with the original source code.  Requirements modeling for the second user story “Source Code Analyze”: ◦ Analyze CodeDOM for clones. 19


24 Design and Architecture 24

25  Multilanguage support  Configurable for different platforms ◦ Stand-alone application ◦ Plug-in ◦ Backend service  Extendable 25

26 Architecture layers: ◦ Front ends: Application User Interface, Web Interface, Eclipse Plug-in, Service, Etc… ◦ Core: Core API, Code Model, Clone Detection Algorithms ◦ Language Support (Interface): C# Service, Java Service, C++ Service 26

27  Code Model ◦ Stores the code in common format  Application Programming Interface ◦ Used to embed clone detection in applications  Language Service Interface ◦ Communication layer between the core and the specific language services 27
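As a rough illustration of the language-service-interface idea, here is a sketch of a plug-in contract in Python. The class and method names are hypothetical, not the project's real ILanguageService API: the point is that the core talks only to the interface, so adding a language means adding one class.

```python
# Hedged sketch of a language-service plug-in contract (illustrative names).
from abc import ABC, abstractmethod

class LanguageService(ABC):
    """Contract each language plug-in fulfils for the core."""

    language: str  # e.g. "Java", "C#", "C++"

    @abstractmethod
    def parse(self, source: str):
        """Parse source text into that language's token tree."""

    @abstractmethod
    def to_common_model(self, tree):
        """Translate the token tree into the shared code model."""

class JavaService(LanguageService):
    language = "Java"

    def parse(self, source):
        return ("tree", source)  # placeholder token tree

    def to_common_model(self, tree):
        return {"lang": self.language, "tree": tree}

# The core discovers services and never touches language specifics directly.
services = {s.language: s for s in [JavaService()]}
svc = services["Java"]
print(svc.to_common_model(svc.parse("class A {}")))
```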

29 Class Responsibility Collaboration Cards 29

30 Java Parser ◦ Responsibilities: Parse Java source code; Construct Java token tree ◦ Collaborators: LALRParser (Gold Parser) 30

31 C# Parser ◦ Responsibilities: Parse C# source code; Construct C# token tree ◦ Collaborators: LALRParser (Gold Parser) 31

32 LanguageService ◦ Responsibilities: Defines standard interface for all language providers. ◦ Collaborators: ILanguageService 32

33 JavaService (ILanguageService) ◦ Responsibilities: Reads Java source code; Understands Java grammar production rules; Constructs CodeDOM compilation unit ◦ Collaborators: Java Parser, CloneDetection, JavaCodeProvider 33

34 CsService (ILanguageService) ◦ Responsibilities: Reads C# source code; Understands C# grammar production rules; Constructs CodeDOM compilation unit ◦ Collaborators: C# Parser, CloneDetection, CsCodeProvider 34

35 CloneDetection ◦ Responsibilities: Loads and manages language services; Controls parsing; Establishes CodeDOM compilation unit to source code file associations; Compares code segments; Provides bookkeeping for code segments ◦ Collaborators: ILanguageService, CodeDomComparer, CodeDomSummary 35

36 Our struggles and our successes. 36

37  We explored and conducted spikes on CSParser and CS CodeDOM Parser. ◦ They both had advantages and disadvantages. ◦ We came to the conclusion that neither of them was going to fit our needs.  We explored and conducted a spike on GOLD Parser. ◦ We ultimately chose the GOLD Parser because it best fit our needs.  This gave us a way to manage multiple language grammars with one engine. 37

38 GOLD Parsing & Populating CodeDOM 38

39 GOLD parsing flow: Grammar → Compiled Grammar Table (*.cgt); Compiled Grammar Table + Source Code → Parsed Data 39

40 GOLD parsing flow: Grammar → Compiled Grammar Table (*.cgt); Compiled Grammar Table + Source Code → Parsed Data ◦ Typical output from engine: a long nested tree 40

41 CodeDOM Conversion: Compiled Grammar Table (*.cgt) + Source Code → Parsed Data (AST) → CodeDOM ◦ Need to write a routine to move data from the parsed tree to CodeDOM. ◦ Parsed data trees from the parser are stored in a consistent data structure, but are based on rules defined within the grammars. 41
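A minimal sketch of the kind of tree-to-CodeDOM routine this slide calls for, assuming hypothetical rule names and dictionaries as stand-ins for CodeDOM node types: walk the rule-based parse tree and emit a model node per handled production, leaving unhandled productions visible.

```python
# Illustrative tree-walk converter (hypothetical rule names and model).
def to_model(node):
    """Recursively convert (rule_name, children...) tuples into model dicts."""
    if isinstance(node, str):          # terminal symbol: keep its text as-is
        return node
    rule, *children = node
    handlers = {                       # grammar production -> model node kind
        "CompilationUnit": "CodeCompileUnit",
        "ClassDecl": "CodeTypeDeclaration",
        "MethodDecl": "CodeMemberMethod",
    }
    # Unknown rules pass through tagged, so unhandled productions stand out --
    # this mirrors the per-rule bookkeeping spreadsheet described later.
    kind = handlers.get(rule, f"Unhandled:{rule}")
    return {"kind": kind, "children": [to_model(c) for c in children]}

tree = ("CompilationUnit", ("ClassDecl", "A", ("MethodDecl", "main")))
print(to_model(tree)["kind"])  # CodeCompileUnit
```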

42 Bookkeeping for parsing the multiple grammars. 42

43  Currently the grammars we have for the Gold parser are outdated.  Current Gold Grammars ◦ C# version 2.0 ◦ Java version 1.4  Currently available software versions ◦ C# version 4.0 ◦ Java version 6 43

44  Grammars for C# and Java are very complex and require a lot of work to build.  ANTLR and Gold Parser grammars use completely different syntax.  Positive note: Other development is not halted by use of the older grammars. 44

45 Bookkeeping for parsing the multiple grammars 45

46  For Java, there are… ◦ 359 production rules ◦ 249 distinctive symbols (terminal & non-terminal)  For C#, there are… ◦ 415 production rules ◦ 279 distinctive symbols (terminal & non-terminal) 46

48 Since there are so many production rules, we came up with the following bookkeeping:  A spreadsheet of the compiled grammar table (for each language) with each production rule indexed. ◦ This spreadsheet covers:  various aspects of language  what we have/have not handled from the parser  what we have/have not implemented into CodeDOM  percentage complete 48

50  Parsing Handlers’ Status: ◦ C# = 100% complete ◦ Java = 100% complete 50

51 Language Independent Object Model 51

52  Document Object Model for Source Code  API - [System.CodeDom]  Only supports certain aspects of the language since it’s language agnostic ◦ Good Enough  What Does it Do? ◦ Programmatically Constructs Code  What Doesn’t it Do? ◦ Does NOT parse 52

53  CodeCompileUnit ◦ CodeNamespace  Imports  Types  Members  Event  Field  Method  Statements  Expression  Property 53

54 Clones & Dr. Kraft’s Tool 54

55  3 Types of Clones (Definition of Similarity): ◦ Type 1: An exact copy without modifications (except for whitespace and comments) ◦ Type 2: A syntactically identical copy  Only variable, type, or function identifiers have been changed ◦ Type 3: A copy with further modifications  Statements have been changed, reordered, added, or removed 55
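The three clone types can be illustrated with toy Python fragments (stand-ins for the C#/Java code the tool actually compares):

```python
# Toy illustrations of the clone taxonomy above.

def total(prices):                 # original fragment
    s = 0
    for p in prices:
        s += p
    return s

def total_type1(prices):           # Type 1: identical up to whitespace/comments
    s = 0
    for p in prices:               # copied verbatim; only this comment differs
        s += p
    return s

def sum_type2(values):             # Type 2: only identifiers renamed
    acc = 0
    for v in values:
        acc += v
    return acc

def sum_type3(values):             # Type 3: statements changed/added
    acc = 0
    for v in values:
        if v > 0:                  # extra filtering statement
            acc += v
    return acc

print(total([1, 2, 3]), total_type1([1, 2, 3]), sum_type2([1, 2, 3]))  # 6 6 6
```

Types 1 and 2 survive tokenization with identifier normalization; Type 3 requires tolerating edits, which is why it is the hard case.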

56  Multi-Language Clone Detection ◦ Cutting Edge of Research  Preliminary Research ◦ Dr. Kraft and Students at UAB  C# and VB.  Publication  Nicholas A. Kraft, Brandon W. Bonds, Randy K. Smith: Cross-language Clone Detection. SEKE 2008: 54-59 ◦ Utilizes Mono Parsers  C#  VB 56

57  Performs Comparisons of Code Files  For each File, a CodeDOM tree is tokenized  Uses Levenshtein Distance Calculation ◦ Minimum number of edits needed to transform one sequence into the other  Distances Calculated ◦ Distance determines Probability of a Clone 57
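A compact sketch of this comparison: the standard Levenshtein dynamic program over token sequences, turned into a similarity score. The normalization into a clone probability is an assumption here, not necessarily the published tool's exact formula.

```python
# Levenshtein distance over token sequences, as used for clone comparison.
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def clone_similarity(tokens_a, tokens_b):
    # Assumed normalization: 1.0 means identical token streams.
    d = levenshtein(tokens_a, tokens_b)
    return 1.0 - d / max(len(tokens_a), len(tokens_b), 1)

a = ["int", "id", "=", "id", "+", "num"]   # tokenized fragment A
b = ["int", "id", "=", "id", "-", "num"]   # tokenized fragment B
print(levenshtein(a, b))       # 1
print(clone_similarity(a, b))  # ≈ 0.83
```

Comparing every file pair this way is the brute-force cost the next slide calls out: O(n·m) per pair, over all pairs.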

59  Only does file-to-file comparisons ◦ Does not detect clones in same source file  Can only detect Type 1 and some Type 2 clones  Not very efficient (brute force) 59

60  Add Support for Same File Clone Detection  Add Support for Type 3 Clone Detection ◦ Requires more Research  Provide a more efficient clone analysis algorithm 60

61 White Box & Black Box Testing 61

62  White Box Testing: ◦ Unit Testing  Black Box Testing: ◦ Production Rule Testing  Allows us to test the robustness of our engine because we can force production rule errors.  Regression Testing  Automated ◦ Functional Testing 62

66 Project Metrics 66

67  As of Nov 22, 2010  SLOC: ◦ CS666_Client = 1746 lines ◦ CS666_Core = 2653 lines ◦ CS666_CppParser = 155 lines ◦ CS666_CsParser = 3259 lines ◦ CS666_JavaParser = 3378 lines ◦ CS666_LanguageSupport = 84 lines ◦ CS666_UnitTests = 2162 lines  Total = 13437 lines (including unit tests) 67

68 Demonstration of our progress. 68

69  These are the things we would like to show you today: ◦ GUI work ◦ Project setup  Save project  Load project ◦ Loading of source code ◦ Parsing of source code ◦ Translation of source code 69

70 Team 2 & Team 3 70

71  Due to Team 3’s team size, we have taken responsibility for gathering & sharing grammars.  Team 3 has the responsibility of the C++ parsing.  Both Teams will… ◦ Use the same grammars & engines  We will both have limitations based on this.  Ex: the Java grammar is based on 1.4, so we are limited to using Java 1.4 ◦ Test the same grammars & engines  We will have two test beds. 71

72  Both teams met Monday (11-8-10) after class and performed the required Pair Programming.  Current Status: ◦ Team 2  All project source code has been made available.  We are researching and working to update the Java and C# grammars. ◦ Team 3  Team 3 is working on C++ parsing.  Looking into another parser, ELSA. 72

73 Current Status & Path Forward for Next Semester 73

74 Where we stand…  Iteration 1: Parsing -> 85% ◦ Completed parsing for Java & C# ◦ No parsing for C++  But we have a foundation and design to start from.  Iteration 2: Translation to CodeDOM -> 60% ◦ We have the foundation and design completed. ◦ Now, it is a matter of turning the crank for the languages.  Iteration 3: Clone Analysis -> 30% ◦ Ported majority of Dr. Kraft’s student project code. ◦ Started focusing on the GUI 74

75  Three-Step Process ◦ Step 1, Code Translation: Source Files → Translator → Common Model ◦ Step 2, Clone Detection: Common Model → Inspector → Detected Clones ◦ Step 3, Visualization: Detected Clones → UI → Clone Visualization 75

76 Schedule 76

77 Path Forward  Our next step is to re-evaluate where we currently stand. ◦ Revisit Release Plan  Pull in Software Studio I work that was not completed. ◦ Revisit User Stories ◦ Start off strong with the unit tests that were not completed. 77

