PMT-352 Systems Engineering Seminar


PMT-352 Systems Engineering Seminar: DragonFly JRATS Simulation (Version 2.2, 3-26-15)

Introduction and Objectives

Using aspects of the DragonFly simulation, this seminar applies the systems engineering technical management processes, the technical processes, and the overall system acquisition process. It is designed to keep teams from making cost/performance tradeoffs in an ad hoc manner, without appreciating the rigor of the SE process model, and to provide an appreciation for how the SE process would actually be applied to a system like the JUGV.

The objectives of this lesson are:
1. Given a systems development scenario, relate the systems engineering process to the scenario by describing the techniques used in executing the steps of the SE technical processes, and describe the artifacts that result from those processes.
2. Given practical examples of the application of the SE technical processes to a system development scenario, describe the application of the SE technical management processes at various points in the scenario and relate these processes to overall program management.

Technical Management Processes

Instructor guidance: This slide is included for easy student reference and reflects the process descriptions in the current DAG (Chapter 4, dated 15 May 2013). The instructor has the option of briefly talking to each process, or simply telling the students that they should be very familiar with each of these process elements and how they relate to each other and to the technical processes on the next slide. Discussion questions: None.

Technical Planning: Ensures that the SE processes are applied properly throughout a system's life cycle. Includes defining the scope of the technical effort required to develop, field, and sustain the system, as well as providing critical quantitative inputs to program planning and life-cycle cost estimates.

Requirements Management: Ensures bi-directional traceability, from the high-level requirements down through the system elements to the lowest level of the design (top-down), and from any derived lower-level requirement up to the applicable source from which it originates (bottom-up).

Configuration Management: Establishes and maintains the consistency of a system's functional, performance, and physical attributes with its requirements, design, and operational documentation throughout the system's life cycle.

Interface Management: Ensures interface definition and compliance among the system elements, as well as with other systems. Documents all internal and external interface requirements and requirements changes in accordance with the program's Configuration Management Plan.

Technical Data Management: Identifies, acquires, manages, maintains, and ensures access to the technical data and computer software required to manage and support a system throughout the acquisition life cycle.

Decision Analysis: Transforms a broadly stated decision opportunity into a traceable, defendable, and actionable plan. Employs procedures, methods, and tools, such as trade studies, for identifying, representing, and formally assessing the important aspects of alternative decisions in order to select an optimum decision.

Technical Assessment: Compares achieved results with defined criteria to provide a fact-based understanding of the current level of product knowledge, technical maturity, program status, and technical risk. Includes methods such as technical reviews and the use of technical performance measures (TPMs).

Risk Management: Involves the mitigation of program uncertainties that are critical to achieving cost, schedule, and performance goals at every stage of the life cycle. Encompasses identification, analysis, mitigation, and monitoring of program risks.

Technical Processes

Instructor guidance: This slide is included for easy student reference and reflects the process descriptions in the current DAG (Chapter 4, dated 15 May 2013). The instructor has the option of briefly talking to each process, or simply telling the students that they should be very familiar with each of these process elements and how they relate to each other and to the technical management processes on the previous slide. Discussion questions: None.

Stakeholder Requirements Definition: Involves the translation of requirements from relevant stakeholders into a set of top-level technical requirements. The process helps ensure each individual stakeholder's requirements, expectations, and perceived constraints are understood from the acquisition perspective.

Requirements Analysis: Involves the decomposition of top-level requirements captured by the Stakeholder Requirements Definition process into a clear, achievable, verifiable, and complete set of system requirements.

Architecture Design: Involves a trade and synthesis process that translates the outputs of the Stakeholder Requirements Definition and Requirements Analysis processes into a system allocated baseline. The allocated baseline describes the physical architecture of the system and the specifications that describe the functional and performance requirements for each configuration item, along with the interfaces that compose the system.

Implementation: Involves two primary efforts: detailed design and realization. Outputs include the detailed design down to the lowest system elements in the system architecture, and the fabrication/production procedures.

Integration: Incorporates the lower-level system elements into higher-level system elements in the physical architecture.

Verification: Provides evidence that the system or system element performs its intended functions and meets all performance requirements listed in the system performance specification and the functional and allocated baselines. Verification answers the question, "Did you build the system correctly?"

Validation: Provides objective evidence that the capability provided by the system complies with stakeholder performance requirements, achieving its use in its intended operational environment. Validation answers the question, "Is it the right solution to the problem?"

Transition: Moves any system element to the next level in the physical architecture. For the end-item system, it is the process to install and field the system to the user in the operational environment.

Defense Acquisition Guide Systems Engineering Process Model

Instructor guidance: Briefly explain that the DAG Chapter 4 revision (dated 15 May 2013) continues to present the SE process as a combination of eight technical processes driven by eight technical management processes, and that the eight technical processes are divided into design processes and realization processes. Discussion question: How do these processes help in formulating the structure of a program?

Technical Processes: Stakeholder Requirements Definition, Requirements Analysis, Architecture Design, Implementation, Integration, Verification (DT), Validation (OT), Transition.

Technical Management Processes (always ongoing): Technical Planning, Requirements Management, Risk Management, Decision Analysis, Data Management, Technical Assessment, Configuration Management, Interface Management.

Amplifying information: The practice of systems engineering (SE) is composed of 16 processes, eight technical processes and eight technical management processes, as shown in the slide and described in DAG section 4.3. The 16 processes provide a structured approach to increasing the technical maturity of a system and increasing the likelihood that the capability being developed balances mission performance with cost, schedule, risk, and design constraints. The eight technical management processes are implemented across the acquisition life cycle and provide the insight and control that assist the Program Manager and Systems Engineer in meeting performance, schedule, and cost goals. The eight technical processes closely align with the acquisition life-cycle phases and include the top-down design processes and bottom-up realization processes that support transformation of operational needs into operational capabilities. The ultimate purpose of the SE processes is to provide a framework that allows the SE team to efficiently and effectively deliver a capability that satisfies a validated operational need. To fulfill that purpose, a program implements the SE technical processes in an integrated and overlapping manner to support the iterative maturation of the system solution.

JRATS Architectural Views: Start at the Beginning

Stakeholder Requirements Definition begins with the JRATS capabilities documents (the ICD and draft CDD); JCIDS documents provide the primary input to the SE process. Requirements immediately start to evolve when we add the JRATS CONOPS (Requirements Management process). The JRATS architectural views are an important part of the SE process: they identify the "context," identify interfaces, and clarify "what's required" (Interface Management process).

The bottom line for program managers, and for personnel in the program management career field, is that the terminology on the chart is what the reviewers at OSD expect in conjunction with program documentation, such as the Systems Engineering Plan, so that is what needs to be absorbed during PMT 352B.

Stakeholder Requirements Definition (JUGV)

Detailed stakeholder capability needs are turned into good technical requirements. System requirements originate from multiple sources. JCIDS documents (the draft CDD) detail operational requirements and identify Key Performance Parameters (KPPs). Other requirements are derived from statutory, regulatory, and certification constraints (e.g., cybersecurity, spectrum supportability), from design considerations (e.g., Human Systems Integration, Electro-Magnetic Environmental Effects), and from interface requirements (e.g., the DragonFly Unmanned Air Vehicle (UAV)). These are both functional and non-functional in nature and are based on the system operational context described in the Concept of Operations (CONOPS).

ID / System Requirement / Traces to:
SR1 - The JUGV shall be capable of autonomously attacking enemy targets. (Draft CDD 1)
SR2 - The JUGV shall be capable of attacking enemy targets with a probability of kill of 0.75 (T), 0.90 (O). (Draft CDD 1)
SR3 - The JUGV shall be capable of carrying and launching anti-armor guided missiles. (Draft CDD 1, CONOPS)
SR4 - The JUGV shall be capable of conducting operations by remote control using a line-of-sight communication data-link. (Draft CDD 2)
SR6 - The JUGV shall be capable of identifying friendly targets with single-target accuracy of 0.90 (T), 0.99 (O). (Draft CDD 6)
SR8 - The JUGV system shall be certified to operate in accordance with DIACAP. (Draft CDD (Regulatory))
SR9 - The JUGV shall be certified for electro-magnetic spectrum supportability in accordance with DoDI 4650.01. (Draft CDD Para. 10 (Regulatory))
SR10 - The JUGV system shall be capable of being physically maintained by 90% of both the male and female population. (Para. 14.c(4) (Consideration))
SR11 - The JUGV system shall comply with MIL-STD-464, Interface Standard for Electro-magnetic Environmental Effects for Systems. (Para (Consideration))
(ID not shown) - The JUGV shall be capable of LASER designation of targets for the DragonFly UAV. (Draft CDD, CONOPS (Interface))

(A traceability sketch follows below.)
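To make the bottom-up traceability idea concrete, here is a minimal Python sketch of how a team might record each system requirement with its source trace and flag any requirement that lacks one. The identifiers and data are illustrative, loosely following the slide; this is not part of the seminar materials.

# Minimal sketch: system requirements with their source traces, plus a
# check that flags untraceable ("orphan") requirements.
requirements = {
    "SR1": {"text": "Autonomously attack enemy targets",
            "traces_to": ["Draft CDD 1"]},
    "SR3": {"text": "Carry and launch anti-armor guided missiles",
            "traces_to": ["Draft CDD 1", "CONOPS"]},
    "SR9": {"text": "Spectrum supportability per DoDI 4650.01",
            "traces_to": ["Draft CDD Para. 10 (Regulatory)"]},
    "SR99": {"text": "Requirement added in a meeting",  # illustrative orphan
             "traces_to": []},
}

# Bottom-up check: every requirement must trace to at least one source,
# otherwise it is a candidate for requirements creep.
orphans = [rid for rid, r in requirements.items() if not r["traces_to"]]
for rid in orphans:
    print(f"WARNING: {rid} has no source trace -- possible requirements creep")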

Stakeholder Requirements Definition: Technical Assessment Process (JUGV)

The goal of Stakeholder Requirements Definition is to convert operational requirements and capability needs into preliminary system technical requirements. As part of the Technical Assessment process, the program office will plan and hold technical reviews. One of the most important is the System Requirements Review (SRR). The SRR ensures that the PMO, user, and contractor all have a common understanding of, and agreement on, the system-level technical requirements and the costs, schedule, and risks associated with realizing a system that meets those requirements.

Requirements Analysis (JUGV)

As an example, consider three top-level requirements that were part of the output of the Stakeholder Requirements Definition process:
SR1 - The JUGV shall be capable of autonomously attacking active or passive enemy targets.
SR2 - The JUGV shall be capable of attacking targets with a probability of kill of 0.75 (T), 0.90 (O).
SR3 - The JUGV shall be capable of carrying and launching anti-armor guided missiles.

We extract the top-level functions (the verbs) and determine their performance requirements and constraints. This may cause us to reassess how the requirements are stated.

Function (what the system must do) / Performance (how well) / Constraints:
- Attack enemy targets / Pk = 0.75 (T), 0.90 (O) / Autonomously; active or passive targets
- Carry missiles / How many? / Anti-armor guided missiles
- Launch missiles / How quickly?

What are the characteristics of a good performance requirement statement?
- Verifiable: it can be tested.
- Clear and concise: addresses a single requirement and is unambiguous.
- Complete: contains all information needed to define the function.
- Consistent: does not conflict with or duplicate other requirements.
- Traceable: has a unique identity and can be linked to higher- and lower-level requirements.
- Feasible: can be met using existing technology and within resource constraints.
- Necessary: must be present to meet system-level objectives; without it there will be a deficiency.
- Implementation-free: defines what the system needs to do, not how.

The requirements analysis process is highly iterative (good requirements analysis will drive changes to stakeholder requirements, which will in turn drive further requirements analysis until the process converges and requirements are stable) and involves close coordination with other stakeholders. With a good set of system-level requirements defined, requirements analysis proceeds by breaking out exactly what the system is supposed to do: what are the functions (indicated by the verbs in the written requirements); how well are those functions to be done (indicated by numbers); and under what specific conditions (constraints). During the initial cut at this functional analysis, the systems engineer will likely come up with more questions than answers, such as how many missiles the system needs to carry.

Requirements Analysis (JUGV), continued

Another way to "see" this process is with a Functional Flow Block Diagram (FFBD), the primary functional analysis technique. The FFBD:
- Indicates the logical and sequential relationships among functions;
- Shows the entire "network of actions" and their "logical sequence";
- Does NOT prescribe a time duration to or between functions (however, a timeline analysis will be done based on the FFBD);
- Does NOT show "how" a function is to be performed (however, the "how" will have to be identified for each block).

JUGV Requirements Analysis Using a Functional Flow Block Diagram

Top level: divide all functions into logical groups.
1.0 Load -> 2.0 Start -> 3.0 Transit to Op Area -> 4.0 Conduct Mission Operations -> 5.0 Transit to Base Area -> 6.0 Shutdown

Second level (decomposing 4.0 Conduct Mission Operations): 4.1 Communicate, 4.2 Conduct R&S, 4.3 Target & Attack, 4.4 Detect Mines (connected by AND/OR gates in the diagram).

Third level (decomposing 4.3 Target & Attack): 4.3.1 Detect -> 4.3.2 Locate -> 4.3.3 Track -> 4.3.4 Identify -> 4.3.5 Decide -> 4.3.6 Designate -> 4.3.7 Arm Weapon -> 4.3.8 Launch Weapon -> 4.3.9 Guide Weapon -> 4.3.10 Kill Target -> 4.3.11 Safe Launcher.

Note: the slide builds. If we start with a top-level function, we can break that one function down into several functions, and we can then select just one of those functions and break it down even further.

The purpose of the FFBD is to indicate the logical and sequential relationship of all functions that must be accomplished by a system. When completed, these diagrams show the entire network of actions that lead to the fulfillment of a function. The FFBD network shows the logical sequence of "what" must happen; it does not ascribe a time duration to functions or between functions, and it does not show how a function is to be performed. To understand time-critical requirements, a timeline analysis is done based on the FFBD. The "how" is then defined for each block at a given level by defining the "what" functions at the next lower level necessary to accomplish that block. FFBDs are used to develop, analyze, and flow down requirements, as well as to identify profitable trade studies by identifying alternative approaches to performing each function. We will follow this thread in subsequent slides. (A data-structure sketch of this decomposition follows below.)
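Since an FFBD is fundamentally a numbered decomposition, the hierarchy on this slide can be captured as data. Here is a minimal Python sketch, using the function numbers and names from the slide; it records the hierarchy only, and the AND/OR sequencing between blocks would need a richer structure.

# Sketch: the JUGV functional decomposition as a nested dict keyed by
# FFBD function number. Structure only; no sequencing logic.
ffbd = {
    "1.0 Load": {},
    "2.0 Start": {},
    "3.0 Transit to Op Area": {},
    "4.0 Conduct Mission Operations": {
        "4.1 Communicate": {},
        "4.2 Conduct R&S": {},
        "4.3 Target & Attack": {
            "4.3.1 Detect": {}, "4.3.2 Locate": {}, "4.3.3 Track": {},
            "4.3.4 Identify": {}, "4.3.5 Decide": {}, "4.3.6 Designate": {},
            "4.3.7 Arm Weapon": {}, "4.3.8 Launch Weapon": {},
            "4.3.9 Guide Weapon": {}, "4.3.10 Kill Target": {},
            "4.3.11 Safe Launcher": {},
        },
        "4.4 Detect Mines": {},
    },
    "5.0 Transit to Base Area": {},
    "6.0 Shutdown": {},
}

def leaves(tree):
    """Yield the lowest-level functions -- the ones that must later be
    allocated to physical components during Architecture Design."""
    for name, children in tree.items():
        yield from leaves(children) if children else [name]

print(list(leaves(ffbd)))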

JUGV Logical Decomposition

Group functions in a way that they can be realized by a physical component or sub-system; that is, "allocate" them into logical groups. The Technical Management Processes of Decision Analysis, Interface Management, and Risk Management are in play throughout Architecture Design.

This is a first cut at a design: decide which function groups can be COTS/NDI/GFE and which must be developed. The first step in transforming the functional architecture into a physical architecture is to divide all the derived functions into logical groups. These functional groups can then be assigned to potential physical solutions. Some solutions will be straightforward selections of an appropriate off-the-shelf component, non-developmental item, or GFE, but other decisions will be more complicated and will start with a range of options, especially for those system functions that drive overall cost, schedule, and/or performance. For functions that could be realized in a number of ways, conduct a trade-off analysis to decide on the best solution.

JUGV Architecture Design: Logical Decomposition Example, Target & Attack Function

An initial cut at a physical architecture:
- Sense Target (Detect, Locate, Track): COTS RADAR system -- OPTIONS, trade-off analysis required
- Identify Target (Identify): GFE IFF system -- SPECIFIC
- Designate Target (Designate Target, Guide Weapon): NDI LASER designator -- SPECIFIC
- Store and Launch Weapon (Arm Weapon, Launch Weapon, Safe Launcher): GFE launcher -- SPECIFIC
- Target and Control Weapon (Decide on Attack, Guide Weapon): GFE targeting computer H/W -- OPTIONS, with targeting S/W to be developed
- Kill Target: GFE missiles -- SPECIFIC

In this example, the initial cut at a physical architecture for the Target & Attack function designates specific solutions for the Identify Target, Designate Target, Store and Launch Weapon, and Kill Target functional groups. For the Sense Target and Target & Control functional groups, the initial cut designates three options each, COTS solutions in the case of the RADAR and GFE solutions in the case of the targeting computer, and also designates the targeting software to be developed from scratch (so here the Target & Control functional group has been allocated to distinct hardware and software items that will need to be integrated later). The next step is to perform a trade study that optimizes the selection of the COTS RADAR and GFE targeting computer over a selected set of criteria. Note: though the DragonFly simulation offers five options each for RADARs and targeting computers, for simplicity this example uses only three.

JUGV Architecture Design: Functional Allocation Table, Targeting and Attack Sub-system

The table maps the functional architecture (the verbs: Detect, Locate, Track, Identify, Designate, Decide, Arm, Launch, Guide, Kill, Safe) to the physical architecture (the nouns: RADAR, Targeting Computer & S/W, LASER Designator, IFF, Missile Launcher, Missile). On the slide, for example, Detect is allocated ("X") to the RADAR. The allocation "loops" back to the requirements, giving Requirements Management its traceability, and helps shape the WBS.

Part of the design process is to ensure that all functions (as derived down to the lowest level necessary) are assigned to a physical item. That item can be hardware, software, or the user (human). Note: functions can be allocated to more than one physical item, and physical items can perform more than one function. The functional architecture should have a clear mapping to the physical architecture.

Why is this important to good Requirements Management and, ultimately, good design? Applying an adequate level of rigor to functional allocation will help ensure traceability of required capabilities down to the physical architecture. In other words, this is an important check that helps engineers make sure that development of item performance specifications starts with a complete set of allocated functions. (See the allocation sketch below.)

What key technical/program planning artifact should be informed by the results of functional allocation and architecture design? Since the physical architecture represents a breakdown of the system into logical segments, it should help to shape the overall program work breakdown structure (WBS).
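Here is a minimal Python sketch of the functional allocation table as data, with the rigor check the slide describes. The component and function names follow the slide, but the full allocation pattern is invented for illustration (the slide only shows Detect allocated to the RADAR), so treat it as an assumption rather than the school solution.

# Sketch: function (verb) -> allocated physical items, plus two checks:
# every function must be allocated, and the inverse view (item ->
# functions) helps shape item specs and the WBS.
from collections import defaultdict

allocation = {
    "Detect":    ["RADAR"],
    "Locate":    ["RADAR"],
    "Track":     ["RADAR", "Targeting Computer & SW"],
    "Identify":  ["IFF"],
    "Designate": ["LASER Designator"],
    "Decide":    ["Targeting Computer & SW"],
    "Arm":       ["Missile Launcher"],
    "Launch":    ["Missile Launcher"],
    "Guide":     ["LASER Designator", "Missile"],
    "Kill":      ["Missile"],
    "Safe":      ["Missile Launcher"],
}

# Rigor check: every derived function must land on at least one physical
# item, or the physical architecture cannot realize the functional one.
unallocated = [f for f, items in allocation.items() if not items]
assert not unallocated, f"Unallocated functions: {unallocated}"

# Inverse view: which functions each physical item must perform.
by_item = defaultdict(list)
for func, items in allocation.items():
    for item in items:
        by_item[item].append(func)
print(dict(by_item))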

JUGV Architecture Design: N2 Diagram, to Analyze Interfaces and Interactions

Problems occur at interfaces; identifying them in advance is crucial to effective Risk Management. N2 diagrams are used to identify and analyze interface requirements between physical components or functions. The components (or functions) are shown on the diagonal, and the interfaces between them are then identified. Interface types and requirements are identified for each component; they could include:
- Electrical
- Mechanical
- Hydraulic
- Heating/cooling
- User interface

TMPs in play throughout Architecture Design: Interface Management and Risk Management.

JUGV N2 Diagram: Analyzing Interfaces and Interactions

[Diagram: components on the diagonal (RADAR, Targeting Computer, IFF, LASER Designator, Missile Launcher, Missile), with the interface signals between them, including Target Location, RADAR Mode Command, Interrogation Command, Target Identification, LASER Mode Command, LASER Mode Status, LASER Lock Status, Encoded LASER Signal, Guidance Data, Arm/Safe Command, Launch Command, Launcher Status, and Missile Status.]

This chart shows a notional N2 diagram that can be used to identify and analyze interface requirements between physical components, which are listed on the diagonal. N2 diagrams (or matrices) can be used to analyze interfaces and interactions from any number of perspectives; for example, system functions can be placed on the diagonal instead of physical components in order to analyze interfaces from a functional perspective. In this example, the logical/informational interface requirements within the example subsystem are identified. Other types of interfaces and interface requirements would have to be identified for each component, including electrical, mechanical, hydraulic, heating and cooling, and user interface requirements.

How does this type of analysis support Risk Management? By revealing highly complex component-to-component interfaces, it can help identify areas of technical risk or, at the system level, an overall large number of interfaces that could pose a big integration challenge. How would these types of risks be mitigated? Mitigation might start with proper technical staffing, such as bringing in domain expertise in targeting and fire control systems, and could also include allocating adequate time and resources to integration testing. (A counting sketch follows below.)
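As one way to make the risk argument concrete, here is a minimal Python sketch that stores the N2 diagram as directed interfaces and tallies signals per component to spot integration hot spots. The signal names come from the slide, but the direction assigned to each signal is an assumption for illustration.

# Sketch: N2 diagram as (source, destination) -> list of signals.
# Direction of each signal is assumed, not taken from the slide.
from collections import Counter

interfaces = {
    ("RADAR", "Targeting Cmptr"): ["Target Location"],
    ("Targeting Cmptr", "RADAR"): ["RADAR Mode Command"],
    ("Targeting Cmptr", "IFF"): ["Interrogation Command"],
    ("IFF", "Targeting Cmptr"): ["Target Identification"],
    ("Targeting Cmptr", "LASER Designator"): ["LASER Mode Command"],
    ("LASER Designator", "Targeting Cmptr"): ["LASER Mode Status",
                                              "LASER Lock Status"],
    ("LASER Designator", "Missile"): ["Encoded LASER Signal",
                                      "Guidance Data"],
    ("Targeting Cmptr", "Missile Launcher"): ["Arm/Safe Command",
                                              "Launch Command"],
    ("Missile Launcher", "Targeting Cmptr"): ["Launcher Status",
                                              "Missile Status"],
}

# Tally how many signals touch each component; a component with many
# interfaces is a likely driver of integration risk and test time.
load = Counter()
for (src, dst), signals in interfaces.items():
    load[src] += len(signals)
    load[dst] += len(signals)
for component, n in load.most_common():
    print(f"{component}: {n} interface signals")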

JUGV Architecture Design: Design Trade Study / Trade-off Analysis

Decision Analysis starts here. The steps are:
1. Define candidate solutions.
2. Define assessment criteria.
3. Assign weights to the criteria.
4. Assign an MOE or MOP score to each candidate solution.
5. Perform a sensitivity analysis on the results.

Example: a trade study to choose the combination of COTS RADAR and GFE targeting computer that optimizes our system according to the following selection criteria: weight, range, power requirements, and life cycle cost. Note: the DragonFly simulation software provides the component data to use as input for the selection criteria for each combination of RADAR and targeting computer.

Decision Analysis plays a central role in Architecture Design. Each trade study conducted in the design process should be carefully documented and should be linked and traceable to specific requirements. The DragonFly simulation is set up to allow the selection of off-the-shelf components to complete a particular design for the JUGV. For this example, we will assume there are three options to select from for the RADAR and three for the targeting computer (the simulation actually offers five options for each, but for the sake of simplicity, we will only consider three here). A small scoring sketch follows below.
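The weighted-sum scoring behind such a trade study is easy to show in a few lines of Python. The weights and normalized scores below are invented for illustration; in the seminar the DragonFly simulation supplies the component data.

# Sketch: a weighted-sum trade study over RADAR / Targeting Computer
# combinations. All numbers are illustrative placeholders.
criteria_weights = {"weight": 0.2, "range": 0.4, "power": 0.2, "lcc": 0.2}

# Normalized scores (0-10) per candidate combination, per criterion.
candidates = {
    "RADAR-A + TC-1": {"weight": 7, "range": 5, "power": 6, "lcc": 8},
    "RADAR-B + TC-2": {"weight": 5, "range": 9, "power": 4, "lcc": 5},
    "RADAR-C + TC-3": {"weight": 8, "range": 6, "power": 7, "lcc": 6},
}

def total_score(scores, weights):
    """Weighted sum of normalized criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank candidates from best to worst total score.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: -total_score(kv[1], criteria_weights)):
    print(f"{name}: {total_score(scores, criteria_weights):.2f}")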

Architecture Design: Trade Study Results for RADAR/Targeting Computer Selection (Decision Analysis)

Here we see the results of the trade study that compares the various combinations of RADARs and targeting computers based on the performance criteria (weight, power, range) and life cycle cost. Nine different combinations were considered, based on the available components in the DragonFly simulation. Part of comparing "apples to apples" is selecting a way to normalize the performance data, by assigning a score to each physical performance characteristic based on a pre-defined scale.

Architecture Design: Trade Study Results, Sensitivity Analysis (Decision Analysis)

Sensitivity analysis asks: what is the impact on the scores if we make a change in one of the weights? In this example, options 3 and 6 fall within our cost and performance criteria, inside the trade space bounded by the threshold (T) and objective (O) "Total Performance Score" and the life cycle cost (LCC) constraint. What is the best choice? Would it still be the best choice if the weights were changed?

When conducting trade studies or applying other analytical methods to perform Decision Analysis, it is vitally important that you know your trade space. For this simple example, the trade space is boiled down to the region bounded by the threshold and objective Total Performance Score and the LCC constraint.

What is a sensitivity analysis and why is it important? In any trade study or trade-off analysis, you must understand how the selection of criteria weightings and performance scoring scales affects your final results. For example, you need to know whether slight changes in weightings cause significant changes in the results. A sketch of this check follows below.
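A one-at-a-time sensitivity check can be sketched directly on top of the weighted-sum scoring. The option names echo the slide's "options 3 and 6," but all weights and scores here are invented for illustration.

# Sketch: perturb each criterion weight up and down, renormalize so the
# weights still sum to 1, and report any perturbation that changes the
# top-ranked option. All numbers are illustrative.
criteria_weights = {"weight": 0.2, "range": 0.4, "power": 0.2, "lcc": 0.2}
candidates = {
    "Option 3": {"weight": 7, "range": 8, "power": 6, "lcc": 7},
    "Option 6": {"weight": 8, "range": 7, "power": 7, "lcc": 6},
}

def total_score(scores, weights):
    return sum(weights[c] * scores[c] for c in weights)

def best(weights):
    return max(candidates, key=lambda n: total_score(candidates[n], weights))

baseline_winner = best(criteria_weights)
print("Baseline winner:", baseline_winner)

for crit in criteria_weights:
    for delta in (-0.1, +0.1):
        w = dict(criteria_weights)
        w[crit] = max(0.0, w[crit] + delta)
        norm = sum(w.values())
        w = {c: v / norm for c, v in w.items()}  # renormalize to sum to 1
        if best(w) != baseline_winner:
            print(f"Winner flips to {best(w)} if {crit} weight shifts {delta:+.1f}")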

Architecture Design: JUGV Evolving Technical Baseline
(Example of requirements traceability down to the JUGV Targeting Computer configuration item. Technical Planning drives the technical reviews that establish each baseline.)

Functional Baseline (established at the System Functional Review, SFR): Establishes in detail what the system must do (functions) and how well it must do it (performance) at the system level, and defines the interfaces/dependencies among the different functions or functional groups and the external environment. Main artifacts: System Performance Specification and Subsystem/Segment Specifications.

Allocated Baseline (established at the Preliminary Design Review, PDR): Describes how system-level functional and performance requirements are allocated to physical components (hardware items, software items, and users), and describes the interfaces among system components and with external systems/environment. Main artifacts: Hardware Configuration Item Performance Specifications, Computer Software Configuration Item Requirements Specifications, and Interface Requirements Specifications. This is the "design to" baseline.

Product Baseline (established at the Critical Design Review, CDR): Describes in detail how to fabricate components and code software; how to manufacture, operate, and maintain the system and its components; and how to train the various users. Main artifacts: detailed item specifications, material specifications, process specifications, and various drawings and manuals. This is the "build to" baseline.

JUGV Evolving Technical Baseline (continued)

The Technical Management Processes in play throughout Architecture Design as the baselines evolve: Technical Assessment, Configuration Management, Requirements Management, and Technical Planning.

Tracing requirements in the technical reviews identifies requirements "creep" and increases confidence in meeting stakeholder expectations. Baselines provide the common reference point for Configuration Management and the basis for verification activities.

Why is tracing requirements important? It helps identify requirements creep (the addition of new, unsubstantiated requirements) and ensures the top-level requirements are properly flowed down (allocated) to system components. It gives us confidence that the solution meets stakeholder expectations. Why do we establish baselines? They provide a common reference point for configuration management and the basis for verification activities during product realization. How do the technical reviews relate to the three baselines? SFR = Functional Baseline, PDR = Allocated Baseline, CDR = Initial Product Baseline.

Decomposition of Requirements and Traceability from Baseline to Baseline Completes the "Design"
Example: JUGV Targeting Software configuration item.

Functional Baseline, "system" performance requirement (JUGV system):
SR1 - The JUGV shall be capable of autonomously attacking enemy targets.

Allocated Baseline, "item" performance specification (Targeting Software CI; performance of the CIs that make up the system):
IPS 2.1 - The Targeting Software shall cyclically update the JUGV track database with the combat identification of all targets.
IPS 2.2 - The Targeting Software shall fuse IFF track data with RADAR track data.
IPS 2.3 - The Targeting Software shall cyclically apply a rules-of-engagement algorithm to each track in the track database.

Product Baseline, item detail specification (Targeting Software, ROE module; details of the components and modules that make up the CIs):
Pseudo-code, flow charts, use case diagrams, sequence diagrams, state diagrams, structure diagrams, database structure, data definitions, data storage requirements, etc.

This slide complements the previous one: it illustrates requirement allocation and decomposition down to the product baseline level. In an actual defense system, hundreds of system-level requirements will flow down to thousands of item-level performance requirements, which will then flow down to tens of thousands (or more) of detailed requirements. How is this managed? For small projects and products, the requirements can usually be managed using a spreadsheet; larger programs and projects require one of the available requirements management tools, such as DOORS (Dynamic Object Oriented Requirements System). What are the consequences if requirements are not managed effectively? One major problem resulting from poor requirements management is the inability to cope with changing requirements, especially in the early phases of the life cycle: if requirements are not properly tracked and traced up through the architecture back to the capabilities document, there is no basis on which to analyze and discuss changes. A traceability-check sketch follows below.
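The parent/child links a tool like DOORS maintains can be sketched in a few lines of Python. The IDs follow the slide's example; the orphan entry and the coverage check are illustrative additions, not part of the seminar materials.

# Sketch: requirement links across baselines, with checks in both
# directions: bottom-up traceability and top-down coverage.
parents = {            # child (item-level) requirement -> parent requirement
    "IPS 2.1": "SR1",
    "IPS 2.2": "SR1",
    "IPS 2.3": "SR1",
    "IPS 9.9": None,   # illustrative orphan: no parent, likely creep
}
system_reqs = {"SR1", "SR2", "SR3"}

# Bottom-up: every item-level requirement must trace to a system requirement.
for child, parent in parents.items():
    if parent not in system_reqs:
        print(f"{child} does not trace to any system requirement")

# Top-down: every system requirement should be covered by at least one child.
covered = {p for p in parents.values() if p}
for sr in sorted(system_reqs - covered):
    print(f"{sr} has no allocated item-level requirements yet")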

Crafting Technical Performance Measures (TPMs): A Critical Step for Systems Engineers
(Technical Assessment and Risk Management, within Architecture Design.)

TPMs are selected attributes that are measurable through analysis from the early stages of design and development. They:
- Allow systems engineers and PMs to track progress over time;
- Are an integral part of the logical decomposition and architecture design process;
- Provide a mechanism for facilitating early awareness of problems;
- Should be based on parameters that drive costs, are on the critical path, or represent high-risk factors;
- Add a third dimension, "technical achievement," to the cost and schedule strengths of EVM.

Example: the Probability of Kill KPP (next slide).

Example JUGV TPM

The measurement hierarchy, from system-level effectiveness down to a trackable TPM, each with threshold and objective values:
- Measure of Effectiveness (KPP): Probability of Kill (Pk), the percentage of time that an attack on a single target renders the target incapable of performing its mission.
- Measures of Performance: Target Track Accuracy (meters); Targeting Data Update Rate (milliseconds); Max Weapon Range (meters).
- Technical Performance Measures: Targeting Algorithm Running Time (milliseconds); Targeting Software Memory Utilization (% of total RAM capacity).

[Chart: Targeting Algorithm Running Time (ms) by quarter, 1QFY1 through 3QFY2, showing planned progress over time with tolerance bands, actual measured progress, and the threshold and objective values.]

Crafting TPMs is a critical step for systems engineers and is essential for effective technical management of the development of a complex system. TPMs are a set of selected technical attributes or parameters that are either directly measurable or can be accurately estimated through analysis, starting at the early stages of design and development. TPMs allow systems engineers and program managers to track actual progress against planned progress over time in priority technical areas. TPM selection should be an integral part of the logical decomposition and architecture design process, and should be based on those technical parameters that drive program costs, are on the critical path schedule, or represent a high risk factor for your program. Typical measurement priorities are activities with little time/cost flexibility and those that require the introduction of new technology.

In this JUGV example, the TPM is derived starting with the system-level measure of effectiveness (MOE)/KPP, Probability of Kill (Pk). Since the Pk requirement has been allocated to the Targeting & Attack subsystem, and many of the key functions of this subsystem are further allocated to the Targeting Software configuration item, which is a new development item in this example, the determination was made that the performance of the targeting software must be closely tracked. The measure selected in this case is targeting algorithm running time. Analysis would need to be done to develop threshold and objective values for the selected TPM that strongly correlate with the threshold and objective values of the system-level MOE/KPP.

How does the use of TPMs complement other program management areas? TPMs build on the two traditional strengths of Earned Value Management (cost and schedule performance indicators) by adding a third dimension: the status of technical achievement. By combining cost, schedule, and technical progress into one comprehensive management tool, program managers are able to assess the progress of their entire program. TPMs can be directly applied to risk management: conditions threatening the health of program development must be identified early enough to allow managers to mitigate areas of technical, cost, and schedule risk. The cost avoidance window of opportunity is typically only in the early phases of a program or contract execution, and TPMs provide a rigorous mechanism for facilitating early awareness of problems. A tracking sketch follows below.
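The planned-versus-actual tracking the chart depicts can be sketched as data plus a tolerance-band check. All numbers below are invented for illustration; only the quarter labels and the measure (targeting algorithm running time, where lower is better) come from the slide.

# Sketch: tracking a TPM against planned progress with tolerance bands.
threshold, objective = 200.0, 120.0   # ms; (T) and (O) values, assumed
plan = {  # quarter -> (planned value in ms, +/- tolerance in ms)
    "1QFY1": (260.0, 20.0), "2QFY1": (240.0, 20.0), "3QFY1": (220.0, 15.0),
    "4QFY1": (200.0, 15.0), "1QFY2": (180.0, 10.0), "2QFY2": (160.0, 10.0),
}
actuals = {"1QFY1": 255.0, "2QFY1": 250.0, "3QFY1": 238.0}

for quarter, measured in actuals.items():
    planned, tol = plan[quarter]
    if measured > planned + tol:
        # Outside the tolerance band: early warning for Risk Management.
        print(f"{quarter}: {measured} ms is above plan "
              f"({planned} +/- {tol} ms); flag as technical risk")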

Implementation: "Design" and "Realization"

Implementation involves two primary efforts: detailed design down to the lowest system elements, and the realization of fabrication/production into actual products. Plans, designs, analyses, requirements development, and drawings are realized into actual products. Key activities:
- Ensure the detail design is properly captured in the design-phase artifacts;
- Make, buy, or reuse system components;
- Verify that each element, whether bought, made, or reused, meets its specification.

The acquisition strategy drives the implementation strategy, applied to the JUGV WBS. Example, Targeting & Attack segment: RADAR, Targeting H/W, Targeting S/W, IFF, LASER, Launcher, Missile -- make, buy, or reuse? The SEP documents how these decisions are to be made. (A make/buy/reuse sketch follows below.)

Product implementation is the first process encountered in the SE process that begins the movement from the bottom of the product hierarchy up toward the Product Transition process. This is where the plans, designs, analyses, requirements development, and drawings are realized into actual products.

How does overall technical planning come into play during implementation? The acquisition strategy will influence the implementation strategy, which will in turn influence and depend on the Architecture Design process. The Systems Engineering Plan should document how implementation decisions are to be made and clearly connect this process to the overall acquisition strategy. The implementation strategy must also consider long-lead procurement requirements for certain components and materials. Technical baseline planning within the SEP must be consistent with the implementation strategy, in that the artifacts that capture the design are clearly defined and provide an adequate level of detail to fabricate or procure individual system elements.

In this example, the implementation strategy calls for "making" the Targeting Computer Software configuration item. What should be in place to support software coding? In addition to the detailed software design artifacts, you must ensure that all the enabling products are in place, such as software coding tools, compilers, debuggers, and test benches; that trained software programmers are in place; and that there is a process in place to measure progress and quality.

What configuration management concerns stem from your implementation strategy? For GFE and NDI items, your program will need to establish good communication channels with the "owners" of those systems in order to keep abreast of the impact of future ECPs and deviations. For COTS items, a big concern is the likely requirement for a future technology insertion, because the item may no longer be supported by the commercial vendor, who has moved on to the next-generation system because of the demands of the commercial market (the major player).
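A make/buy/reuse strategy over the WBS elements can also be captured as data. The element names follow the slide; the specific decisions assigned below are assumptions consistent with the earlier architecture example (COTS RADAR, GFE IFF and launcher, new targeting software), not the school solution.

# Sketch: implementation decisions tagged onto WBS elements. "Make"
# items need coding/fabrication infrastructure; "buy"/"reuse" items
# need supplier and configuration-management channels.
wbs = {
    "RADAR": "buy",             # COTS
    "Targeting H/W": "reuse",   # GFE
    "Targeting S/W": "make",    # new development
    "IFF": "reuse",             # GFE
    "LASER Designator": "buy",  # NDI
    "Launcher": "reuse",        # GFE
    "Missile": "reuse",         # GFE
}

for decision in ("make", "buy", "reuse"):
    items = [element for element, d in wbs.items() if d == decision]
    print(f"{decision}: {items}")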

Integration: Putting the Pieces Together (JUGV schematic block diagram)

[Diagram: the Targeting & Attack sub-system, showing the IFF, LASER designator, targeting computer, RADAR, launcher, and missile connected by MIL-STD-1553B, MIL-STD-1760, Ethernet, RS-422, and encoded LASER energy interfaces.]

TMPs in play: Interface Management and Configuration Management.

Continuing with our example, here we show a schematic block diagram of the notional JUGV Targeting & Attack subsystem architecture. The architecture defines the interfaces within the sub-system and with other JUGV subsystems. Integration happens at each and every level in the system architecture, starting with basic parts, such as resistors, and proceeding up through components (e.g., printed circuit boards), subassemblies (e.g., actuators), assemblies (e.g., antennas), subsystems (e.g., the targeting & attack subsystem), segments (e.g., the mission system segment), and finally the full system (e.g., the JUGV). This example shows how the assemblies that make up our notional Targeting & Attack subsystem will be integrated and how the sub-system itself will be integrated into the overall system.

What Technical Management Processes play a key role at this stage, and what challenges must they deal with?
- Interface Management: the impact of changing requirements on interfaces. We need to understand how changes propagate through the system; well-defined interfaces are necessary to manage change.
- Configuration Management: as components are brought together from multiple sources (COTS, GFE, NDI, internally developed), each item must match the relevant item specification. Detailed accounting of product data at the prime, sub-contractor, and supplier level is required to ensure compliance with specifications at each level of the architecture as integration proceeds.

Integration: Putting the Pieces Together (JUGV schematic block diagram, continued)

[Diagram: the Targeting & Attack sub-system integrated with the other JUGV subsystems: Vehicle Control & Navigation, Remote Operator Comms, Electrical and Mechanical Subsystems, Vehicle Chassis, and Engine and Drive Train.]

Integration within a sub-system is a major task; integrating sub-system to sub-system is a huge task, with major challenges in Interface Management and Configuration Management.

Verification (JUGV): Across Three Dimensions

Think "DT": confirming system elements meet the design-to or build-to spec. Did you build it right? Verification requires early and continuous planning, and is conducted across three dimensions:

1. Life cycle dimension, throughout all phases: Development (Does the system function as designed?), Qualification (Does the system perform adequately in its intended environment?), Acceptance (Am I getting what I paid for?), and Operations & Maintenance (Has the system been maintained correctly, and is it ready to operate?).

2. System dimension, top to bottom: each level of the architecture must be adequately verified, down to the lowest piece-part, across the whole JUGV WBS (system; drive train; chassis; Target & Attack with RADAR, H/W, S/W, S&R, IFF, LASER, and launcher; survivability; missile; whether make, buy, or reuse). This might involve anything from a simple inspection of a basic electronic component from a supplier all the way up to full system testing.

3. Method dimension: one or more verification methods (test, inspection, analysis, demonstration) can be applied to a requirement, depending on the nature of the required verification (see next slide).

How does the perspective illustrated in this slide help us understand how verification affects technical planning? First, even for a moderately complex system, verification efforts will be a large part of the development and production process. Second, without early and continuous planning for verification activities, it will be impossible to collect the evidence needed to make major program decisions; verification activities provide the "ground truth" for a program. Thus, verification can also be looked at in the context of what is needed for major decisions (Milestones B and C, FRP DR, DD 250 of an end item, etc.). Technical planning should account for the personnel, resources, facilities, and engineering needed to perform the length, width, and depth of verification activities.

JUGV Verification (Technical Planning)

Verification matrices are developed for each level of the system. Example system-level entry:

System/Item Requirement: SR1. The JUGV shall be capable of autonomously attacking active or passive enemy targets.
Method (I / A / D / T): D (Demonstration).
Verification Requirement: VSR1D. Provide evidence that the JUGV is capable of autonomous attack by demonstrating, in a representative operational environment with simulated friendly and hostile forces, an attack against simulated hostile targets. The demonstration will show that the JUGV system is capable of distinguishing between friendly and non-friendly units and of launching weapons to engage only non-friendly units.

Verification requirements provide the basis for the TEMP (long-lead facilities, laboratory design, range coordination, software analysis tools, test articles, etc.). As they are developed and refined, the SEP should be updated to describe how test results will feed back into the system design. The SEP should also describe the tools used for tracking and maintaining traceability of verification requirements. Verification planning should happen as early as possible; plan for early verification activities to help avoid cost. What early verification activities might be done on the JUGV system to avoid future costs? Perhaps early analyses and demonstrations of the autonomous attack capability using prototype hardware and software.

Note: verification requirements matrices are developed for each level of the system. The excerpt shown above is for the "system" level; there would also be "segment" or "subsystem" VRMs, and "item" VRMs. You never really have a "good" requirement until you have verification requirements to go with it. A coverage-check sketch follows below.
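That last point, no requirement is "good" without verification coverage, lends itself to a simple check. Here is a Python sketch of a verification cross-reference matrix as data; only the SR1/VSR1D entry follows the slide, and the rest is illustrative.

# Sketch: each requirement lists its verification method(s) -- I
# (Inspection), A (Analysis), D (Demonstration), T (Test) -- and the
# IDs of its verification requirements.
vcrm = {
    "SR1": {"methods": {"D"}, "verif_reqs": ["VSR1D"]},
    "SR2": {"methods": {"A", "T"}, "verif_reqs": ["VSR2A", "VSR2T"]},  # assumed
    "SR3": {"methods": set(), "verif_reqs": []},  # not yet planned
}

# Coverage check: flag requirements with no verification method or
# verification requirement defined.
for rid, entry in vcrm.items():
    if not entry["methods"] or not entry["verif_reqs"]:
        print(f"{rid}: no verification method/requirement defined yet")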

JUGV Validation - Operational Test Design Example

"Confirming system elements meet stakeholder requirements." Think "OT": Did you get the requirements right?

Developed by the JCIDS process (JUGV CDD, JUGV CONOPS; KPPs such as Probability of Kill and Ao):
- Critical Operational Issues: Can the JUGV kill its intended targets? Can the JUGV be maintained in an operational environment?

Developed by the T&E WIPT (documented in the TEMP):
- Mission Task Analysis: Attack by fire an enemy force or position (identify the enemy, fix the enemy location, engage the enemy with weapons); provide maintenance support (identify failed components, remove and replace failed components in the field).
- Measures of Effectiveness: target detection range. Measures of Suitability: mean time to fault locate.
- Measures of Performance: RADAR range; Built-In Test (BIT) false alarm rate.

Developed by operational testers: operational scenarios, OT framework/OT plan, OT data requirements, and OT resource requirements.

Here we show how planning for JUGV validation would notionally evolve. Validation is accomplished through operational test, and planning for operational test involves all key stakeholders in a system. The PMO and OTA interface with each other primarily through the T&E WIPT. The T&E WIPT, using JCIDS artifacts as input, leads the high-level analysis and planning that goes into mission analysis, identification of Critical Operational Issues, Measures of Effectiveness and Suitability, and, ultimately, the operational test section of the Test and Evaluation Master Plan (TEMP). This high-level planning provides the basis for detailed operational test planning, which is accomplished solely by the designated Operational Test Agency or unit.
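The OT design flows top-down: each Critical Operational Issue is answered by Measures of Effectiveness or Suitability, which in turn rest on concrete Measures of Performance. A rough sketch of that hierarchy, using only the examples from this slide:

    # Illustrative nesting of COI -> MOE/MOS -> MOP; labels come from the slide.
    ot_design = {
        "COI: Can the JUGV kill its intended targets?": {
            "MOE: Target detection range": ["MOP: RADAR range"],
        },
        "COI: Can the JUGV be maintained in an operational environment?": {
            "MOS: Mean time to fault locate": ["MOP: BIT false alarm rate"],
        },
    }

    for coi, measures in ot_design.items():
        print(coi)
        for measure, mops in measures.items():
            print("  " + measure)
            for mop in mops:
                print("    " + mop)

Reading the printout bottom-up shows why MOPs matter: a RADAR-range measurement is only meaningful because it rolls up into a detection-range MOE that answers a kill-chain COI.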

Verification vs. Validation

What's the difference between verification and validation? From a process perspective, system verification and system validation may be similar in nature, but the objectives are fundamentally different:

    Development Tests (Verification)                             Operational Tests (Validation)
    Controlled by program manager                                Controlled by independent agency
    One-on-one tests                                             Many-on-many tests
    Controlled environment                                       Realistic/tactical environment with operational scenario
    Contractor environment                                       No system contractor involvement
    Trained, experienced operators                               User troops recently trained
    Precise performance objectives and threshold measurements    Performance measures of operational effectiveness and suitability
    Test to specification                                        Test to operational requirements
    Developmental, engineering, or production-                   Production-representative test article
    representative test article

Validation is from the customer's point of view: will the end product do what the customer intends within the environment of use? Verification testing relates back to the approved technical requirements and can be performed at different stages in the product life cycle. Verification testing includes: (1) any testing used to assist in the development and maturation of products or support processes; and/or (2) any test used to understand the status of technical progress, to verify that design risks are minimized, to substantiate achievement of technical performance, and to certify readiness for validation testing. Verification tests use instrumentation and measurements, and are generally accomplished by engineers, technicians, or operator/maintainer test personnel in a controlled environment to facilitate failure analysis.

Why do both verification and validation, rather than just validation? VERIFICATION REDUCES VALIDATION RISK. It is essential to confirm that the realized product conforms to its specifications and design description documentation, because these specifications and documents establish the configuration baseline of the product, which may have to be modified at a later time. Without a verified baseline and appropriate configuration controls, such later modifications could be costly or cause major performance problems. When cost-effective and warranted by analysis, various combined tests are used. The expense of validation testing alone can be mitigated by ensuring that each end product in the system structure was correctly realized in accordance with its specified requirements before conducting validation.
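A toy contrast makes the two questions concrete: verification compares a measurement against the specification ("Did you build it right?"), while validation judges mission outcome in the intended environment ("Did you build the right thing?"). The functions and numbers below are invented purely for illustration:

    # Hypothetical example -- not a real test framework or real JUGV values.
    spec = {"radar_range_km": 20.0}   # design-to / build-to requirement

    def verify(measured_radar_range_km: float) -> bool:
        """Verification: compare an instrumented measurement to the spec threshold."""
        return measured_radar_range_km >= spec["radar_range_km"]

    def validate(detected_targets_first_in_operational_trial: bool) -> bool:
        """Validation: judge the stakeholder's intended outcome in the environment of use."""
        return detected_targets_first_in_operational_trial

    print(verify(22.5))     # True: the specification is met
    print(validate(False))  # False: spec met, yet the operational need was not

The second print line is exactly the risk this slide warns about: a product can verify cleanly against its baseline and still fail validation, which is why both are needed and why verification reduces, but does not eliminate, validation risk.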

Transition & Integration

JUGV Targeting & Attack subsystem to JUGV system: transition a system element to the next level of the physical architecture. Focus on integration, interface management, and technical data management.

JUGV system to user: transition an end item to the user in the operational environment. Focus on operational integration and the Integrated Logistics Support elements (training and maintenance plans, supply provisions, technical publications, support equipment, PHS&T, etc.).

Product transition occurs during all phases of the life cycle. In the early phases, the technical team's products are documents, models, studies, and reports. As the project moves through the life cycle, these paper or soft products are transformed through the implementation and integration processes into verified and validated hardware and software solutions that meet the stakeholder requirements. The Product Transition Process includes product transitions from one level of the system architecture upward. It is the last of the product realization processes: the bridge from one level of the system to the next higher level, and the key to bridging from one activity, subsystem, or element to the overall engineered system. As system development nears completion, the Product Transition Process is applied again for the end product, but with much more rigor, since the transition objective is now delivery of the system-level end product to the actual end user.

How does technical data management facilitate transition? Technical data management ensures that technical data is properly stored, easily accessible, and well protected. Technical data includes areas such as Logistics Management Information (supply support information, technical publications, etc.) that help support transition to the field. Additionally, technical data management should provide for how product definition data (drawings, interface control documents, etc.) is stored, accessed, and protected, thus helping to facilitate the transition of components and subsystems to higher levels of the system architecture.

Recursive and Iterative Systems Engineering

Recursive: the repeated application of processes to design next-lower-layer system products or to realize next-upper-layer end products within the system structure.

Iterative: the application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements.

[Figure: system structure levels - system, subsystem, components.]

A key takeaway from any discussion of SE is an understanding of the iterative and recursive nature of the process. But what does this mean? "Recursive" is defined as adding value to the system by the repeated application of processes to design next-lower-layer system products or to realize next-upper-layer end products within the system structure. It also covers repeating the same processes on the system structure in the next life-cycle phase to mature the system definition and satisfy phase success criteria. "Iterative" is the application of a process to the same product or set of products to correct a discovered discrepancy or other variation from requirements.

In more practical terms, the systems engineer takes a big problem and, through the SE process, breaks it down into smaller problems (e.g., item performance specifications); each of those smaller problems then becomes an input to the SE process at a lower level. The process is reversed on the realization side of the Vee. This is recursion. Recursion also happens across life-cycle phases: the outputs of the MSA phase (validated concepts, technologies, capability needs) become the inputs to the systems engineering process of the TD phase, and the outputs of the TD phase (validated system-level and allocated requirements, matured technologies) become inputs to the EMD phase.

Iteration is exemplified by documenting performance requirements at each level of decomposition on the downward side of the Vee and then verifying those requirements at each level on the upward side. Discrepancies found during this upward verification drive changes to the documented performance requirements developed on the downward side. Also, as a design becomes more detailed and engineers "impose more reality" on it, mistakes, gaps, and shortcomings originating in a previous step will inevitably be identified and require that step to be repeated in order to refine the solution. Steps in the Vee process are intended to move forward in parallel, but be completed in order!
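The software analogy is direct: decomposing down the system structure and realizing back up can be written as a recursive function, while a failed check on the way up is what forces iteration. A deliberately simplified sketch (the structure and names are invented for illustration):

    # Illustrative only: recurse down to "specify", then check on the way back up.
    structure = {
        "Targeting & Attack Subsystem": {"RADAR": {}, "LASER": {}},
        "Drive Train": {"Chassis": {}},
    }

    def specify_and_verify(name: str, children: dict, depth: int = 0) -> bool:
        print("  " * depth + "specify " + name)   # downward leg of the Vee
        ok = all(specify_and_verify(child, grandchildren, depth + 1)
                 for child, grandchildren in children.items())  # same process, next lower layer
        print("  " * depth + "verify  " + name)   # upward leg of the Vee
        return ok  # a False here would trigger iteration: rework the earlier step

    specify_and_verify("JUGV System", structure)

Note the output order: every element is specified before its children, and verified only after all of its children are, which is exactly the "down one leg, up the other" shape of the Vee.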

Recursive and Iterative Systems Engineering - Vee Model

Downward (definition) leg:
- Stakeholder Requirements Definition: stakeholder requirements, CONOPS, and validation planning.
- Requirements Analysis: system performance specification and verification planning.
- Architecture Design: configuration item performance specification and verification planning.
- Configuration item detail specification and verification procedures.

Bottom of the Vee - Implementation: fabricate, code, buy, or reuse.

Upward (realization) leg:
- Verification: inspect and test to the detail specification.
- Integration and verification: assemble configuration items and verify to the CI performance specification.
- Verification: integrate the system and verify to the system specification.
- Validation and transition: validate the system to the stakeholder requirements and CONOPS.

The slide builds to show the eight technical processes (the Vee steps above) and the eight technical management processes: Technical Planning, Requirements Management, Configuration Management, Interface Management, Decision Analysis, Risk Management, Data Management, and Technical Assessment. Emphasize that the technical management processes are in play throughout the entire SE process.
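Each artifact on the left leg of the Vee has a matching check on the right leg, which is what makes the verification planning done on the way down pay off on the way up. A minimal sketch of that pairing, using only the labels from this slide:

    # Left-leg artifact -> right-leg check, from the top of the Vee down.
    vee_pairs = [
        ("Stakeholder requirements / CONOPS",
         "Validate system to stakeholder requirements and CONOPS"),
        ("System performance specification",
         "Integrate system and verify to system specification"),
        ("CI performance specification",
         "Assemble CIs and verify to CI performance specification"),
        ("CI detail specification",
         "Inspect and test to detail specification"),
    ]

    for spec, check in vee_pairs:
        print(f"{spec:40} <-> {check}")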