Identification of Distributed Features in SOA
Anis Yousefi, PhD Candidate
Department of Computing and Software, McMaster University
July 30, 2010
Feature Identification
(slide diagram: a user exercises a feature of the software; the software engineer recovers the list of classes implementing the feature being used)
Legacy Software Engineering!
Feature Identification in the Literature
Identifying the source code constructs activated when exercising a feature
– Feature: a functionality offered by a system
Techniques
– Textual analysis
– Static analysis
– Dynamic analysis
– Hybrid approaches
Trace-based Dynamic Analysis
Instrumentation
– TPTP (Eclipse Test and Performance Tools Platform)
Scenario execution
Trace analysis
– Pattern mining
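The thesis instruments services with Eclipse TPTP; purely as an illustration of what such instrumentation records, the hypothetical Python decorator below emits the same Enter/Leave events a tracer would write. The function names and the logical clock are invented for the example and are not part of TPTP.

```python
import functools, itertools

_clock = itertools.count()  # invented logical timestamp shared by all calls

def traced(fn):
    """Emit an Enter event before the call and a Leave event after it."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"Enter {fn.__name__}, Timestamp: {next(_clock)}")
        try:
            return fn(*args, **kwargs)
        finally:
            print(f"Leave {fn.__name__}, Timestamp: {next(_clock)}")
    return wrapper

@traced
def m2():
    pass

@traced
def m1():
    m2()

m1()  # prints Enter m1, Enter m2, Leave m2, Leave m1 with timestamps 0..3
```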
Challenges
Trace collection
– Scattered implementation of features
– Concurrency of events
Feature location
– Non-deterministic behavior of features
Summary
What?
– Identify the code associated with distributed features in SOA
– Identify dynamic feature behavior
How?
– Trace-based dynamic analysis of services in SOA
– Pattern mining to identify feature behavior
Challenges?
– Scattered implementation of features, concurrency of events, non-deterministic behavior of features
Steps of the Proposed Approach
1. Run feature-oriented scenarios in SOA
2. Collect and merge execution traces of services
3. Mine traces to extract patterns
4. Analyze the patterns to identify feature-specific code and behavior
The Proposed Framework
(slide diagram: the four steps above arranged as a pipeline, numbered 1-4)
Step 1 – Running Scenarios
Step 2 – Merging Distributed Traces
Trace structure: each service trace is a sequence of timestamped Enter/Leave events, e.g.
Enter m1, Timestamp: 0
Enter m2, Timestamp: 1
Leave m2, Timestamp: 3
Enter m3, Timestamp: 4
Enter m4, Timestamp: 5
Leave m4, Timestamp: 6
Leave m3, Timestamp: 7
…
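A minimal sketch of rebuilding one such trace into a call tree, assuming every line has exactly the "Enter/Leave m, Timestamp: t" shape shown above; the dict-based node layout is an illustrative choice, not the thesis's data structure:

```python
import re

LINE = re.compile(r"(Enter|Leave) (\w+), Timestamp: (\d+)")

def parse_trace(lines):
    """Turn a balanced Enter/Leave event list into a nested call tree."""
    root = {"method": "<root>", "children": []}
    stack = [root]
    for line in lines:
        kind, method, ts = LINE.match(line).groups()
        if kind == "Enter":
            node = {"method": method, "enter": int(ts), "children": []}
            stack[-1]["children"].append(node)  # record caller -> callee
            stack.append(node)
        else:  # Leave closes the call on top of the stack
            assert stack[-1]["method"] == method, "unbalanced trace"
            stack.pop()["leave"] = int(ts)
    return root

tree = parse_trace(["Enter m1, Timestamp: 0", "Enter m2, Timestamp: 1",
                    "Leave m2, Timestamp: 3", "Leave m1, Timestamp: 4"])
```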
Step 2 – Merging (Contd.)
Problems
– Distributed data
– Interweaved data
Solution
– Building a "block execution tree"
– Resolving uncertainties: before-after analysis, textual analysis, frequency analysis
– Merging the traces (see the sketch below)
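As a rough sketch of the merging step itself, the following merges sorted per-service event streams into one stream ordered by timestamp; the service names are hypothetical, and the uncertainty-resolution analyses listed above are deliberately left out:

```python
import heapq

def merge_traces(service_traces):
    """service_traces: {service: [(timestamp, event), ...]}, each list sorted."""
    streams = (
        ((ts, service, event) for ts, event in events)
        for service, events in service_traces.items()
    )
    # heapq.merge keeps the already-sorted streams globally ordered by timestamp
    return list(heapq.merge(*streams))

merged = merge_traces({
    "OrderService":   [(0, "Enter placeOrder"), (7, "Leave placeOrder")],
    "PaymentService": [(2, "Enter charge"), (5, "Leave charge")],
})
```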
Step 3 – Mining Frequent Patterns
Traces represent "call graphs"
Mining frequent sub-graphs across traces; each mined pattern is annotated with the set of traces supporting it, e.g. a pattern occurring in traces i and j is recorded as (m1, {i,j}), while one seen only in trace i is recorded as (Z, {i})
(slide diagram: two example call trees, Trace i and Trace j, with the shared and trace-specific patterns listed beside them)
A simplified mining sketch follows.
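Full frequent sub-graph mining (e.g., an algorithm such as gSpan) does not fit in a slide-sized example; the simplified sketch below captures the idea by counting one-level call patterns (a method together with its ordered callees) over the call trees from the earlier parsing sketch, keeping those supported by at least minsup traces and annotating each with its supporting trace set:

```python
from collections import defaultdict

def one_level_patterns(node, found):
    """Collect (method, ordered callees) pairs from one call tree."""
    if node["children"]:
        found.add((node["method"], tuple(c["method"] for c in node["children"])))
    for child in node["children"]:
        one_level_patterns(child, found)

def frequent_patterns(trees, minsup=2):
    support = defaultdict(set)
    for i, tree in enumerate(trees):
        found = set()
        one_level_patterns(tree, found)
        for p in found:
            support[p].add(i)  # remember which traces contain this pattern
    return {p: ids for p, ids in support.items() if len(ids) >= minsup}

# Usage: trees = [parse_trace(t) for t in traces]; frequent_patterns(trees, 2)
```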
Step 4 – Analyzing Patterns
Distinguishing "feature-specific patterns" from "omnipresent patterns" and "noise patterns"

          Pattern 1  Pattern 2  Pattern 3  Pattern 4
Trace 1       *          *          -          *
Trace 2       *          *          -          -
Trace 3       *          -          *          -

(* = the pattern occurs in the trace; a pattern such as Pattern 1 that occurs in every trace is omnipresent, while one that occurs exactly in the traces exercising a feature is feature-specific)
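One plausible reading of the classification, sketched below: a pattern occurring in every trace is omnipresent, one occurring exactly in the traces that exercised the feature is feature-specific, and anything in between is noise. This rule is an assumption for illustration, not necessarily the thesis's exact criterion.

```python
def classify(support, feature_traces, all_traces):
    """support: {pattern: set of trace ids containing it} (as mined above)."""
    feature_specific, omnipresent, noise = [], [], []
    for pattern, traces in support.items():
        if traces == all_traces:
            omnipresent.append(pattern)       # e.g., logging, dispatch code
        elif traces == feature_traces:
            feature_specific.append(pattern)  # fires only with the feature
        else:
            noise.append(pattern)             # partial overlap: inconclusive
    return feature_specific, omnipresent, noise
```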
Metrics: Feature Distribution
f: feature; p: pattern
FD(f,p): distribution of feature f over the SOA with regard to pattern p
S_p: services contributing in pattern p
M_s: methods defined in service s
M_p: methods contributing in pattern p
FD(f): distribution of feature f over all its patterns
S_f: services contributing in the execution of feature f
M_f: methods contributing in the execution of feature f
P_f: patterns for feature f
Worked example (slide figure): feature f exercises 50%, 100%, and 25% of the methods of the three services S_n, S_m, and S_k that it touches, so FD(f) = 1 − (50% + 100% + 25%)/3 = 1 − 58.3% = 41.7%
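Reading the worked example as "average the fraction of each touched service's methods that the feature exercises, then subtract from 1", FD can be sketched as below; this reading, and the method and service names, are assumptions chosen to reproduce the slide's 50%/100%/25% figures:

```python
def feature_distribution(feature_methods, service_methods):
    """feature_methods: methods the feature executes;
       service_methods: {service: set of methods it defines}."""
    coverages = [
        len(feature_methods & methods) / len(methods)
        for methods in service_methods.values()
        if feature_methods & methods  # only services the feature touches
    ]
    return 1 - sum(coverages) / len(coverages)

fd = feature_distribution(
    {"a1", "b1", "b2", "c1"},
    {"S_n": {"a1", "a2"},                # 50% covered
     "S_m": {"b1", "b2"},                # 100% covered
     "S_k": {"c1", "c2", "c3", "c4"}},   # 25% covered
)
assert abs(fd - 0.417) < 1e-3  # matches the slide's FD(f) = 41.7%
```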
Metrics: Call Frequency
f: feature; p: pattern
CF(f,p): call frequency of feature f with regard to pattern p
S_p: services contributing in pattern p
OP_s: interface operations defined in service s
CF(f): call frequency of feature f over all its patterns
P_f: patterns for feature f
Worked example (slide figure): summing the feature's calls to the interface operations of the services it touches gives CF(f) = 12
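Under a similarly hedged reading, CF(f) totals the feature's calls to service interface operations; the operation names and counts below are illustrative and chosen only to reproduce the slide's CF(f) = 12:

```python
def call_frequency(op_calls):
    """op_calls: {interface operation: number of calls made by the feature}."""
    return sum(op_calls.values())

cf = call_frequency({"OrderService.placeOrder": 2,
                     "PaymentService.charge": 10})  # CF(f) = 12
```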
Metrics: Accuracy
Acc(f): accuracy regarding feature f
P_f: patterns for feature f
C_f: cases defined on the scenarios
The three outcomes below suggest Acc(f) relates the number of mined patterns to the number of scenario cases (plausibly Acc(f) = |P_f| / |C_f|):
= 1: the analysis works as we expected
> 1: it considers additional cases
< 1: it treats some cases equally
Future Work
Abstract feature behavior
– Normal vs. alternative behavior
– Augment published service descriptions
Improve/define metrics
Thank You!
Anis Yousefi
yousea2@mcmaster.ca