Compressing Schedules by Leveraging Parallelism in Development with Resilient Agile. Doug Rosenberg, ICONIX and USC; Barry Boehm, Bo Wang, Kan Qi, USC. resilientagile@iconixsw.com. ICSSP 10, July 2017
We all know that it’s impossible to build a 2-bedroom house in less than 3 hours with over 700 workers.
Actually it’s not impossible if you plan the project Very Carefully. And if you have quick-drying concrete.
The Key: Organize, Manage for Parallel Development
- Start by identifying and resolving high-risk elements
  - Users’ satisfaction: prototyping, negotiating
  - Reuse: compatibility analysis
  - Performance: algorithm scalability analysis
- Develop an architecture enabling parallel development
  - Mobile apps: Model-View-Controller architecture
  - Large-scale systems: architectural integrity analysis
- Monitor architecture compliance, evolve the architecture
  - Keeper of the Holy Vision (KOTHV)
  - Single expert for smaller systems (Doug Rosenberg)
  - Sustained system engineering team for large systems
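The MVC point above is what makes the parallelism possible: once the interfaces between model, view, and controller are agreed on, different developers can build each part independently. A minimal sketch in Python, with hypothetical coupon-themed names chosen to echo the LBA project:

```python
# Minimal MVC sketch (hypothetical names): after agreeing on these three
# interfaces, separate developers can build the Model, View, and Controller
# in parallel and integrate later.

class CouponModel:
    """Model: owns the data; contains no UI or control logic."""
    def __init__(self):
        self._coupons = []

    def add(self, merchant, discount):
        self._coupons.append({"merchant": merchant, "discount": discount})

    def all(self):
        return list(self._coupons)


class CouponView:
    """View: renders model data; contains no business logic."""
    def render(self, coupons):
        return [f"{c['merchant']}: {c['discount']}% off" for c in coupons]


class CouponController:
    """Controller: mediates user actions between Model and View."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def create_coupon(self, merchant, discount):
        self.model.add(merchant, discount)
        return self.view.render(self.model.all())


controller = CouponController(CouponModel(), CouponView())
print(controller.create_coupon("Santa Monica Cafe", 20))
```

Because neither the model nor the view knows about the other, each can be assigned to a different student and stubbed out until integration.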
Small-scale: experimenting with student projects. We’ve been looking for repeatable results, piloting code generation, and testing cost models.
- 2014-15: Location Based Advertising (47 students initially, around 75 students total)
- 2015: Picture Sharing (12 students)
- 2016-17: Crowdsourced Bad Driver Reporting (15 students initially, 30 students total)
- 2017: Augmented Reality Game project (5 students)
What kinds of systems did the students build?
- Location Based Advertising: cross-platform, location-aware mobile app connected to a cloud-based transaction processing and billing system (Cordova/PhoneGap, Cassandra, jQuery Mobile, Node.js). Adopted by the Santa Monica Chamber of Commerce.
- Picture Sharing: cloud-connected iPhone app built with the Parse framework.
- Crowdsourced Bad Driver Reporting: native Android and iOS “dashboard-cam” mobile apps and web apps connected to a cloud-hosted video database (MongoDB, AngularJS, Node.js, iOS/Swift, Android/Java).
- Augmented Reality Game: cloud-connected mobile game (Unity 3D, Blender, Kudan).
Project 1: Location Based Advertising 47 students each got a use case for homework
LBA – Transaction Management and Billing
LBA – Geofenced Coupon Delivery
LBA - students averaged 4 days per use case
Project 2: PicShare
PicShare: 40 days to code and 50 days to test an iPhone app (students worked 10-15 hours/week)
Project 3: Crowdsourced Bad Driver Reporting
- A dashboard camera continuously records looping video.
- A voice command to the mobile app (“report a #$%& bad driver”) triggers the upload of a short video clip to a cloud database.
- Video and report metadata are filed by license plate number.
- Bad driver reports are independently reviewed for accuracy before being added to the insurance database.
- When issuing policies, insurance companies can query by license plate number to see whether any bad driver reports have been logged against a vehicle.
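The file-review-query pipeline above can be sketched as a small in-memory registry; all names below are hypothetical, and a real deployment would use the cloud-hosted video database instead of Python lists:

```python
# Sketch of the BDR review pipeline (hypothetical names): reports are filed
# by license plate, held for independent review, and only approved reports
# become visible to insurance-company queries.

class BadDriverRegistry:
    def __init__(self):
        self._pending = []    # reports awaiting independent review
        self._approved = {}   # plate -> list of approved reports

    def file_report(self, plate, video_clip_url):
        """Voice command handler: file a clip under a license plate."""
        self._pending.append({"plate": plate, "clip": video_clip_url})

    def review(self, reviewer_accepts):
        """Independent review: only accepted reports enter the insurance DB."""
        for report in self._pending:
            if reviewer_accepts(report):
                self._approved.setdefault(report["plate"], []).append(report)
        self._pending = []

    def query(self, plate):
        """Insurance query: returns only reviewed-and-approved reports."""
        return self._approved.get(plate, [])


registry = BadDriverRegistry()
registry.file_report("7ABC123", "https://example.com/clip1.mp4")
registry.file_report("7ABC123", "https://example.com/blurry.mp4")
registry.review(lambda r: "blurry" not in r["clip"])
print(len(registry.query("7ABC123")))  # only the clear clip was approved
```

The separation between `file_report`, `review`, and `query` mirrors the slide’s accuracy requirement: nothing reaches an insurer’s query results without passing review.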
BDR – dashcam app uploads video to the cloud, then reports are filed and reviewed for accuracy
BDR – Build The Right System, then Build The System Right. It’s déjà vu all over again.
Project 4: AR Game Extensive prototyping to learn what requirements are feasible, combined with modeling Can we put a 3D tiki man in the middle of a spherical video? Can we blow up an animated tiki man with a lava fireball? Can we knock a tiki man down and make him flop around with a fireball that trails molten lava?
Student research – accelerating development and improving estimation
- Bo Wang – automatic code generation of NoSQL databases and REST APIs: executable domain models shorten the development cycle.
- Kan Qi – cost estimation models for Resilient Agile: extracting size and complexity metrics (EUCP, EXUCP, AFP) from models improves estimation ability.
Code Generation for Executable Domain Models: from UML diagrams to cloud-targeted RESTful APIs and a NoSQL database.
Code generator saved time on student projects It also enables prototype code to connect to a live database early in a project.
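To make the idea concrete, here is a toy illustration of domain-model-driven generation. This is not Bo Wang’s actual generator; the domain model, entity names, and Express-style route strings are all assumptions for illustration:

```python
# Toy code-generation sketch: each class in a domain model yields CRUD route
# stubs for a RESTful API, emitted here as Express-style route strings.
# The PicShare-flavored entities below are hypothetical examples.

DOMAIN_MODEL = {
    "Picture": ["owner", "url", "caption"],
    "Comment": ["author", "text"],
}

def generate_routes(domain_model):
    """Derive one CRUD route set per domain entity."""
    routes = []
    for entity in domain_model:
        path = "/" + entity.lower() + "s"
        routes.append(f"GET {path}")          # list all
        routes.append(f"POST {path}")         # create
        routes.append(f"GET {path}/:id")      # read one
        routes.append(f"PUT {path}/:id")      # update
        routes.append(f"DELETE {path}/:id")   # delete
    return routes

for route in generate_routes(DOMAIN_MODEL):
    print(route)
```

Even this toy version shows why generation helps early prototyping: as soon as the domain model stabilizes, every entity has a live API surface to code against.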
Size metrics & counting methods
- Early Use Case Point (EUCP): structured scenarios are identified from use case narratives; the cyclomatic complexity of each structured scenario is calculated to weight its use case.
- Extended Use Case Point (EXUCP): transactions are identified from robustness diagrams, UI elements from storyboards, and domain objects from domain models; each transaction is weighted by the number of domain objects and UI elements it interacts with.
- Function Points directly from class and sequence diagrams: count ILFs and EIFs based on class diagrams, and transactional functions from sequence diagrams. Originated by Takuya Uemura, Shinji Kusumoto, and Katsuro Inoue.
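The EUCP weighting can be sketched as follows. This is a simplification of the counting method above: cyclomatic complexity is taken as branch points plus one, and the keywords used to detect alternate flows are an assumption for illustration:

```python
# Simplified EUCP sketch: treat each structured scenario as a small
# control-flow graph and use its cyclomatic complexity (branch points + 1)
# to weight the use case. BRANCH_KEYWORDS is a hypothetical heuristic.

BRANCH_KEYWORDS = ("if ", "alternate", "invalid", "otherwise")

def cyclomatic_complexity(scenario_steps):
    """Count steps that introduce a branch, then add one."""
    branches = sum(
        1 for step in scenario_steps
        if any(kw in step.lower() for kw in BRANCH_KEYWORDS)
    )
    return branches + 1

def eucp_size(use_cases):
    """Total early size = sum of scenario complexities over all use cases."""
    return sum(cyclomatic_complexity(steps) for steps in use_cases.values())

use_cases = {
    "Deliver Coupon": [
        "User enters geofence",
        "If user has opted in, system sends coupon",
        "If coupon is invalid, system logs an error",
    ],
    "Redeem Coupon": ["User shows coupon", "Merchant scans code"],
}
print(eucp_size(use_cases))
```

Here “Deliver Coupon” has two branching steps (complexity 3) and “Redeem Coupon” has none (complexity 1), so the branch-heavy use case carries three times the weight.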
Results are encouraging. None of the student projects has taken more than 2 person-years of aggregate effort (at 40 hours × 50 weeks = 2,000 person-hours per person-year).
Resilient Agile and ICSM
Previous successes with large-scale parallel development
Another previous success with parallel development
Backup charts
Project 2: PicShare 2 student teams on the same CS577 project
Project 2: PicShare – Build The Right System Storyboards help to pinch the cone of uncertainty
PicShare: disambiguation via conceptual MVC. MVC decompositions pinch the cone of uncertainty even more while providing the basis for estimation and testing. [Figure: conceptual Model-View-Controller decomposition]
PicShare: Build The System Right – the design of each use case considers both sunny-day and rainy-day scenarios, reducing technical debt and the number of bug reports added to the backlog.
BDR – Each student gets a use case
Project 4: AR Game project Plan for parallelism by partitioning along scenario boundaries
Project 4: AR Game project Augmented reality tiki men on the USC campus
Effort Estimation Models for Resilient Agile Kan Qi
Challenges and Approaches: keep the integrity of the Resilient Agile process.
- Lifecycle: phase-based effort estimation models provide multiple estimates at different phases of the process.
- Design methodology: size metrics are defined based on the structural and behavioral aspects of the system.
- Agility: size measurements are directly countable from artifacts of the process, to avoid investing too much effort in collecting information for effort estimation; an automated counting procedure is utilized.
Size metrics & counting methods Early Use Case Point (EUCP) Structured scenarios are identified from use case narratives. Calculate the cyclomatic complexity of structured scenarios to weight each use case.
Size metrics & counting methods Extended Use Case Point (EXUCP) Transactions are identified from robustness diagrams UI Elements are identified from storyboards. Domain objects are identified from domain models Use the number of domain objects and UI elements that each identified transaction interacts with to weight the transaction.
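A simplified sketch of the EXUCP weighting described above; the transactions, object names, and UI element names below are hypothetical, loosely modeled on PicShare:

```python
# Simplified EXUCP sketch: each transaction identified from the robustness
# diagrams is weighted by the number of domain objects and UI elements it
# interacts with; total size is the sum of transaction weights.

transactions = [
    {"name": "Upload picture",
     "domain_objects": ["Picture", "User"],          # from domain model
     "ui_elements": ["CameraScreen", "UploadButton"]},  # from storyboards
    {"name": "Add comment",
     "domain_objects": ["Comment", "Picture"],
     "ui_elements": ["CommentBox"]},
]

def transaction_weight(tx):
    """Weight a transaction by the artifacts it touches."""
    return len(tx["domain_objects"]) + len(tx["ui_elements"])

def exucp_size(txs):
    return sum(transaction_weight(tx) for tx in txs)

print(exucp_size(transactions))
```

Because every count comes straight from existing diagrams and storyboards, the measurement adds no extra data-collection overhead, matching the agility goal stated earlier.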
Size metrics & counting methods. Function Points directly from class and sequence diagrams: count ILFs and EIFs based on class diagrams; count transactional functions from sequence diagrams. Originated by Takuya Uemura, Shinji Kusumoto, and Katsuro Inoue.
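A rough sketch of the final step of function-point counting, once the data functions (ILF/EIF) and transactional functions (EI/EO/EQ) have been identified from the diagrams. The weights below are the standard IFPUG average-complexity weights; the example counts are invented, and this is not the exact procedure from the talk:

```python
# Simplified function-point totaling: classes identified from class diagrams
# become ILFs or EIFs; transactions from sequence diagrams become EI/EO/EQ.
# Weights are the standard IFPUG average-complexity values.

AVG_WEIGHTS = {"ILF": 10, "EIF": 7, "EI": 4, "EO": 5, "EQ": 4}

def unadjusted_fp(counts):
    """counts maps each function type to its number of occurrences."""
    return sum(AVG_WEIGHTS[ftype] * n for ftype, n in counts.items())

# Hypothetical example: 3 internal data classes, 1 external interface file,
# and the transactions identified from the sequence diagrams.
counts = {"ILF": 3, "EIF": 1, "EI": 5, "EO": 2, "EQ": 4}
print(unadjusted_fp(counts))
```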
Model evaluation. Observations:
- High R-squared values suggest good fits of the linear models to the data set.
- The p-values for the slopes and intercepts do not suggest statistical significance of the calibrated parameters; there is uncertainty in the conclusions, which may be due to the limited sample size.
- Since MMRE and PRED are calculated on the training dataset, they are not indices of estimation accuracy, but another representation of goodness of fit.
- Model III fails the common-sense test of having each source of effort contribute positively to the total effort.
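For reference, MMRE and PRED are computed as follows (these are the standard definitions; the effort numbers below are invented, not data from the talk):

```python
# Standard accuracy metrics for effort estimation:
# MRE_i = |actual_i - estimate_i| / actual_i
# MMRE  = mean of MRE over all projects
# PRED(l) = fraction of projects with MRE <= l (commonly l = 0.25)

def mmre(actuals, estimates):
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(mres) / len(mres)

def pred(actuals, estimates, level=0.25):
    mres = [abs(a - e) / a for a, e in zip(actuals, estimates)]
    return sum(1 for m in mres if m <= level) / len(mres)

actual_effort = [100.0, 200.0, 150.0]      # hypothetical person-hours
estimated_effort = [110.0, 180.0, 190.0]

print(round(mmre(actual_effort, estimated_effort), 3))
print(round(pred(actual_effort, estimated_effort), 3))
```

Note that when these are computed on the same data used to calibrate the model, as the slide points out, they measure goodness of fit rather than predictive accuracy; holdout data or cross-validation is needed for the latter.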
Conclusions
1. The preliminary calibration results show good fits of the models to the dataset.
2. More data points need to be collected before conclusions can be drawn about estimation accuracy.
3. Supporting software tools need to be developed to streamline and standardize the process of training and testing.