Doug Rosenberg, ICONIX and USC; Barry Boehm, Bo Wang, Kan Qi, USC




1 Compressing Schedules by Leveraging Parallelism in Development with Resilient Agile
Doug Rosenberg, ICONIX and USC Barry Boehm, Bo Wang, Kan Qi, USC ICSSP 10, July 2017

2 We all know that it’s impossible to build a two-bedroom house in less than 3 hours with over 700 workers.

3 Actually it’s not impossible if you plan the project Very Carefully
Actually it’s not impossible if you plan the project Very Carefully. And if you have quick-drying concrete.

4 The Key: Organize, Manage for Parallel Development
Start by identifying and resolving high-risk elements
- Users’ satisfaction: prototyping, negotiating
- Reuse: compatibility analysis
- Performance: algorithm scalability analysis
Develop an architecture enabling parallel development
- Mobile apps: Model-View-Controller architecture
- Large-scale systems: architectural integrity analysis
Monitor architecture compliance, evolve the architecture
Keeper of the Holy Vision (KOTHV)
- Single expert for smaller systems (Doug Rosenberg)
- Sustained systems engineering team for large systems

5 Small-scale: experimenting with student projects
We’ve been looking for repeatable results, piloting code generation, and testing cost models
- Location Based Advertising (47 students initially, around 75 students total)
- 2015: Picture Sharing (12 students)
- Crowdsourced Bad Driver Reporting (15 students initially, 30 students total)
- 2017: Augmented Reality Game project (5 students)

6 What kinds of systems did the students build?
- Location Based Advertising: cross-platform, location-aware mobile app connected to a cloud-based transaction processing and billing system (Cordova/PhoneGap, Cassandra, JQuery Mobile, Node JS). Adopted by the Santa Monica Chamber of Commerce.
- Picture Sharing: cloud-connected iPhone app built with the Parse Framework.
- Crowdsourced Bad Driver Reporting: native Android and iOS “dashboard-cam” mobile apps and web apps connected to a cloud-hosted video database (MongoDB, Angular JS, Node JS, iOS/Swift, Android/Java).
- Augmented Reality Game: cloud-connected mobile app game (Unity 3D, Blender, Kudan).

7 Project 1: Location Based Advertising 47 students each got a use case for homework

8 LBA – Transaction Management and Billing

9 LBA – Geofenced Coupon Delivery

10 LBA - students averaged 4 days per use case

11 Project 2: PicShare

12 PicShare: 40 days to code and 50 days to test an iPhone app (students worked 10-15 hours/week)

13 Project 3: Crowdsourced Bad Driver Reporting
A dashboard camera continuously records looping video. A voice command to the mobile app (“report a #$%& bad driver”) triggers the upload of a short video clip to a cloud database. Video/report metadata is filed by license plate number. Bad driver reports are independently reviewed for accuracy before being added to the insurance database. When issuing policies, insurance companies can query by license plate number to see whether any bad driver reports are logged against a vehicle.
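The review gate described above (reports become visible to insurers only after independent review) can be sketched in a few lines. This is an illustration only, assuming a simple in-memory store; the actual student system used MongoDB and Node JS, and all names below are hypothetical.

```python
# Minimal sketch of the BDR report flow: file -> review -> insurance query.
# All class/function names are illustrative, not from the actual project.

class BadDriverReports:
    """Files video-report metadata by license plate; reports must pass
    independent review before insurance queries can see them."""

    def __init__(self):
        self._reports = {}  # license plate -> list of report records

    def file_report(self, plate, clip_url, timestamp):
        report = {"clip": clip_url, "time": timestamp, "reviewed": False}
        self._reports.setdefault(plate, []).append(report)
        return report

    def approve(self, report):
        # An independent reviewer confirms the report's accuracy
        report["reviewed"] = True

    def insurance_query(self, plate):
        # Insurers see only reviewed reports for the queried plate
        return [r for r in self._reports.get(plate, []) if r["reviewed"]]

db = BadDriverReports()
r = db.file_report("7ABC123", "https://example.com/clip1.mp4", "2017-07-01T10:00")
assert db.insurance_query("7ABC123") == []   # unreviewed: invisible to insurers
db.approve(r)
assert len(db.insurance_query("7ABC123")) == 1
```

The key design point is that filing and visibility are decoupled: a report exists as soon as the clip is uploaded, but only the review step makes it queryable.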

14 BDR – dashcam app uploads video to the cloud, then reports are filed and reviewed for accuracy

15 BDR – Build The Right System then Build The System Right
It’s déjà vu all over again

16 Project 4: AR Game
Extensive prototyping, combined with modeling, to learn which requirements are feasible: Can we put a 3D tiki man in the middle of a spherical video? Can we blow up an animated tiki man with a lava fireball? Can we knock a tiki man down and make him flop around with a fireball that trails molten lava?

17 Student research – accelerating development and improving estimation
- Bo Wang: automatic code generation of NoSQL databases and REST APIs; executable domain models shorten the development cycle.
- Kan Qi: cost estimation models for Resilient Agile; extracting size and complexity metrics (EUCP, EXUCP, AFP) from models improves estimation ability.

18 Code Generation for Executable Domain Models
From UML domain diagrams, generate cloud-targeted RESTful APIs and a NoSQL database.
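To make the idea concrete, here is a hypothetical sketch of the kind of mapping such a generator performs: each domain class becomes a NoSQL collection plus a set of CRUD REST routes. This is not the actual USC generator, just an illustration of the concept; the naming convention is assumed.

```python
# Illustrative sketch: map each UML domain class to a collection name and
# a standard set of CRUD REST endpoints. Not the actual USC tool.

def generate_rest_routes(domain_classes):
    """Map each domain class to a NoSQL collection and CRUD routes."""
    spec = {}
    for cls in domain_classes:
        collection = cls.lower() + "s"        # e.g. "Coupon" -> "coupons"
        spec[cls] = {
            "collection": collection,
            "routes": [
                ("GET",    f"/{collection}"),         # list all
                ("POST",   f"/{collection}"),         # create
                ("GET",    f"/{collection}/<id>"),    # read one
                ("PUT",    f"/{collection}/<id>"),    # update
                ("DELETE", f"/{collection}/<id>"),    # delete
            ],
        }
    return spec

api = generate_rest_routes(["Coupon", "Merchant"])
print(api["Coupon"]["routes"][0])  # ('GET', '/coupons')
```

Because the routes are derived mechanically from the domain model, regenerating after a model change keeps the API and database schema in sync, which is what lets prototype code hit a live database early.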

19 Code generator saved time on student projects
It also enables prototype code to connect to a live database early in a project.

20 Size metrics & counting methods
- Early Use Case Point (EUCP): structured scenarios are identified from use case narratives; the cyclomatic complexity of each structured scenario is calculated to weight its use case.
- Extended Use Case Point (EXUCP): transactions are identified from robustness diagrams, UI elements from storyboards, and domain objects from domain models; each transaction is weighted by the number of domain objects and UI elements it interacts with.
- Function Points directly from class and sequence diagrams: count ILFs and EIFs from class diagrams, and count transactional functions from sequence diagrams. Originated by Takuya, Shinji, and Katsuro.

21 Results are encouraging
None of the student projects has taken more than 2 person-years of aggregate effort (1 person-year = 40 hours × 50 weeks = 2,000 person-hours)

22 Resilient Agile and ICSM

23 Previous successes with large-scale parallel development

24 Another previous success with parallel development

25 The Key: Organize, Manage for Parallel Development
Start by identifying and resolving high-risk elements
- Users’ satisfaction: prototyping, negotiating
- Reuse: compatibility analysis
- Performance: algorithm scalability analysis
Develop an architecture enabling parallel development
- Mobile apps: Model-View-Controller architecture
- Large-scale systems: architectural integrity analysis
Monitor architecture compliance, evolve the architecture
Keeper of the Holy Vision (KOTHV)
- Single expert for smaller systems (Doug Rosenberg)
- Sustained systems engineering team for large systems

26 Backup charts

27 Project 2: PicShare 2 student teams on the same CS577 project

28 Project 2: PicShare – Build The Right System
Storyboards help to pinch the cone of uncertainty

29 PicShare: disambiguation via conceptual MVC
MVC decompositions pinch the cone of uncertainty even more while providing the basis for estimation and testing

30 PicShare: Build The System Right
Design of each use case considers sunny- and rainy-day scenarios, reducing “technical debt” and the number of bug reports added to the backlog

31 BDR – Each student gets a use case

32 Project 4: AR Game project
Plan for parallelism by partitioning along scenario boundaries

33 Project 4: AR Game project Augmented reality tiki men on the USC campus

34 Effort Estimation Models for Resilient Agile
Kan Qi


36 Challenges and Approaches
Keep the integrity of the Resilient Agile process:
- Lifecycle: phase-based effort estimation models provide multiple estimates at different phases of the process.
- Design methodology: size metrics are defined on the structural and behavioral aspects of the system.
- Agility: size measurements are directly countable from process artifacts, so little extra effort is spent collecting data for estimation; an automated counting procedure is used.

37 Size metrics & counting methods
Early Use Case Point (EUCP): structured scenarios are identified from use case narratives, and the cyclomatic complexity of each structured scenario is calculated to weight its use case.
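The counting idea can be sketched as follows. For a single-entry, single-exit scenario, cyclomatic complexity is the number of decision points plus one. The step format and keyword convention below are assumptions for illustration; the authors' exact EUCP weighting scheme is their own.

```python
# Illustrative sketch: weight a use case by the cyclomatic complexity of
# its structured scenario, approximated as (decision points + 1).
# The "IF"/"ELSE IF" step convention is a hypothetical notation.

def cyclomatic_complexity(scenario_steps):
    """V(G) = D + 1 for a single-entry/single-exit flow with D decisions."""
    decisions = sum(
        1 for step in scenario_steps if step.startswith(("IF", "ELSE IF"))
    )
    return decisions + 1

login_scenario = [
    "User enters credentials",
    "IF credentials are valid",
    "System shows home screen",
    "ELSE IF account is locked",
    "System shows lockout message",
    "System shows error message",
]
print(cyclomatic_complexity(login_scenario))  # 3
```

A straight-line scenario with no branches gets weight 1, so use cases with many alternate and exception paths are weighted more heavily, which matches the intuition that branching scenarios cost more to build.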

38 Size metrics & counting methods
Extended Use Case Point (EXUCP): transactions are identified from robustness diagrams, UI elements from storyboards, and domain objects from domain models. Each transaction is weighted by the number of domain objects and UI elements it interacts with.
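As a sketch of the counting step only: once the robustness diagrams, storyboards, and domain models identify what a transaction touches, the weight is a function of those counts. The additive weighting and the example transaction below are assumptions for illustration, not the authors' calibrated scheme.

```python
# Illustrative sketch of EXUCP transaction weighting: count the domain
# objects and UI elements a transaction interacts with. The additive
# weight and the example names are hypothetical.

def transaction_weight(domain_objects, ui_elements):
    """Weight a transaction by how many model and view elements it touches."""
    return len(domain_objects) + len(ui_elements)

# A hypothetical "deliver geofenced coupon" transaction touching
# 2 domain objects (from the domain model) and 2 UI elements (from storyboards):
w = transaction_weight({"Coupon", "Merchant"}, {"MapView", "CouponPopup"})
print(w)  # 4
```

The point of deriving the counts from robustness diagrams, storyboards, and domain models is that the measurement is directly countable from artifacts the process already produces, so no extra estimation-only work is needed.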

39 Size metrics & counting methods
Function Points directly from class and sequence diagrams: count ILFs and EIFs from class diagrams, and count transactional functions from sequence diagrams. Originated by Takuya, Shinji, and Katsuro.

40 Model evaluation
Observations:
- High R-squared values suggest good fits of the linear models to the data set.
- The p-values for the slopes and intercepts do not suggest statistical significance of the calibrated parameters, so there is uncertainty in the conclusions, possibly due to the limited sample size.
- Since MMRE and PRED are calculated on the training dataset, they are not indices of estimation accuracy but another representation of goodness of fit.
- Model III fails the common-sense test of having each source of effort contribute positively to the total effort.
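For reference, MMRE and PRED have standard definitions in the effort-estimation literature, computed from per-project magnitudes of relative error. The data values below are made up purely for illustration; the slide's caveat is that computing these on training data measures fit, not predictive accuracy.

```python
# Standard MMRE and PRED(25) definitions for effort estimation models.
# The actual/estimated values below are hypothetical example data.

def mmre(actual, estimated):
    """Mean Magnitude of Relative Error: mean of |actual - est| / actual."""
    mres = [abs(a - e) / a for a, e in zip(actual, estimated)]
    return sum(mres) / len(mres)

def pred(actual, estimated, threshold=0.25):
    """PRED(25): fraction of estimates whose relative error is <= 25%."""
    mres = [abs(a - e) / a for a, e in zip(actual, estimated)]
    return sum(1 for m in mres if m <= threshold) / len(mres)

actual    = [100.0, 200.0, 400.0]   # hypothetical person-hours
estimated = [110.0, 130.0, 390.0]
print(round(mmre(actual, estimated), 3))  # 0.158
print(round(pred(actual, estimated), 2))  # 0.67
```

Evaluated on held-out projects rather than the training set, the same two functions would measure estimation accuracy, which is what the slide says cannot yet be claimed.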

41 Conclusions
1. The preliminary calibration results show good fits of the models to the dataset.
2. More data points need to be collected before drawing conclusions about estimation accuracy.
3. Supporting software tools need to be developed to streamline and standardize the process of training and testing.

