TPF-C Architecture Trade
A route map for the next few years
Charley Noecker
Ball Aerospace & Technologies Corp
28 August 2006
Goals of this presentation
- Describe a process and documentation practice to organize the decisions we need to make
  – Telescope size
  – Starlight Suppression System (SSS)
  – Wavefront sensing and control approach
- Begin the list of specific candidates
  – Examples of how specific they should be
- Begin the list of evaluation criteria
  – Identify useful metrics
This week we will
- Approve a process (like this one?)
- Agree on the list of criteria
- Agree on the list of candidates
  – As complete as possible
  – Later additions and modifications are expected
- Assign action items to begin assessing metrics
This week we will not
- Complete the analysis of metrics or begin the scoring
- Make any actual decision among possible architectures
- Take potshots at each other’s concepts
- Perform detailed design (except on your own time)
- Hoard innovations that could benefit another architecture
  – “Mix and match” will benefit planet finding
Trade matrix features
- Decision statement: clear, concise, complete.
  – Identifies the full scope of the question; gets everyone thinking at the same level
- Options: brief identifier for each candidate. Details provided elsewhere.
- Musts: all of the pass/fail criteria. (Expect all realistic candidates to “pass”.)
  – Metrics may be shown for support
- Discriminators: all of the better/worse criteria
  – All the ways we can compare the merits of each option
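The four elements above map naturally onto a small data structure. The sketch below is illustrative only; the names (Option, Must, Discriminator, TradeMatrix) are hypothetical and it is not code from the TPF-C study.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Option:
    name: str                                   # brief identifier; details documented elsewhere
    metrics: dict[str, float] = field(default_factory=dict)

@dataclass
class Must:
    name: str
    passes: Callable[["Option"], bool]          # pass/fail test applied to each option

@dataclass
class Discriminator:
    name: str
    weight: float                               # how important this criterion is to us
    subweights: dict[str, float]                # relative weighting of contributing metrics

@dataclass
class TradeMatrix:
    decision_statement: str                     # clear, concise, complete
    options: list[Option]
    musts: list[Must]
    discriminators: list[Discriminator]

    def surviving_options(self) -> list[Option]:
        """Options that pass every must; we expect all realistic candidates to survive."""
        return [o for o in self.options if all(m.passes(o) for m in self.musts)]
```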
Trade matrix scoring
- Metrics
  – Quantify important characteristics of candidates: the things that we “value”
- Scores
  – Subjective (numeric) ratings based on those metrics, range 0-10
- Weights
  – Declare how important each discriminator is to us
- Subweights
  – Relative weighting of the metrics contributing to a single discriminator
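The deck does not spell out the arithmetic that turns several metric scores into one discriminator score; a subweight-weighted average is one common convention and is assumed in this sketch, with made-up metric names.

```python
def discriminator_score(metric_scores: dict[str, float],
                        subweights: dict[str, float]) -> float:
    """Combine several 0-10 metric scores into one discriminator score using a
    subweight-weighted average (an assumed convention; the actual spreadsheet
    algorithm is one of the choices revisited during final negotiation)."""
    total = sum(subweights.values())
    return sum(subweights[m] * metric_scores[m] for m in subweights) / total

# Hypothetical example: two metrics feeding a single discriminator
print(discriminator_score({"star_count": 7.0, "inner_working_angle": 4.0},
                          {"star_count": 2.0, "inner_working_angle": 1.0}))  # -> 6.0
```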
Combining scores
- Totals show a numeric rollup of all our judgments
- This arithmetic is “truthy”
  – Conveys a false sense of truth or authority
- Really it’s only a tool we use by choice
- Authority comes from our choices and how we defend them
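The slide only says the totals roll up all of the judgments; a weighted sum of discriminator scores per option is the assumed form in the sketch below, with made-up discriminators and numbers.

```python
def total_score(disc_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted rollup of one option's discriminator scores (assumed form)."""
    return sum(weights[d] * disc_scores[d] for d in weights)

# Hypothetical candidates and weights, for illustration only
weights     = {"science_yield": 5, "technical_risk": 3, "cost": 2}
candidate_a = {"science_yield": 8, "technical_risk": 5, "cost": 4}
candidate_b = {"science_yield": 7, "technical_risk": 6, "cost": 5}
print(total_score(candidate_a, weights))  # 5*8 + 3*5 + 2*4 = 63
print(total_score(candidate_b, weights))  # 5*7 + 3*6 + 2*5 = 63
```

That the two made-up candidates tie here underlines the point above: the arithmetic carries no authority of its own; the judgments behind the scores and weights do.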
Final negotiation
- The real meat of the decision is captured in our choices for
  – Scores
  – Weights
  – Algorithms in the spreadsheet
- So now we reassess:
  – Does each discriminator have the right importance in the result?
  – Could reasonable tweaks in weights and scores change the answer?
  – Did we leave out something important?
- Do we all believe the answer we’re getting?
- Adjust scores and weights until we reach a consensus view
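One concrete way to ask “could reasonable tweaks in weights change the answer?” is a simple sensitivity sweep. This is an illustrative check, not a procedure from the deck, and the ±20% tweak size is an arbitrary placeholder.

```python
import itertools

def ranking_is_stable(options: dict[str, dict[str, float]],
                      weights: dict[str, float],
                      tweak: float = 0.2) -> bool:
    """Return True if the top-ranked option survives a +/- `tweak` fractional
    change to each discriminator weight, applied one at a time."""
    def winner(w: dict[str, float]) -> str:
        return max(options, key=lambda o: sum(w[d] * options[o][d] for d in w))

    baseline = winner(weights)
    for disc, sign in itertools.product(weights, (-1.0, 1.0)):
        perturbed = dict(weights)
        perturbed[disc] = max(0.0, perturbed[disc] * (1.0 + sign * tweak))
        if winner(perturbed) != baseline:
            return False   # a reasonable tweak changes the answer
    return True
```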
Common scoring practices
- Example from a similarly large-scale TPF-I architecture trade
  – Scoring meeting: 9-10 December 2004 (alpha-lib:Collection-24885)
- A linear relationship was used for 55 of 56 discriminators
  – Choose a linear relation between scores and the metric
  – Define the top score to be 10
  – Choose the lowest score by mean, median, or mode of a vote
- A nonlinear relationship was chosen once
  – Curve gives score vs. star count
  – Score = 0 for <100 stars
  – Next 60 stars have a high value
  – Lower value per star beyond that
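Both rules can be written down directly. In the sketch below the linear rule follows the description above; for the star-count curve the slide gives only the qualitative shape, so the per-star slopes are placeholders rather than the values used in the 2004 scoring.

```python
def linear_score(metric: float, worst: float, best: float, lowest_score: float) -> float:
    """Linear map from metric to score: `best` maps to 10, `worst` maps to the
    lowest score chosen by vote (mean, median, or mode)."""
    frac = (metric - worst) / (best - worst)
    return lowest_score + frac * (10.0 - lowest_score)

def star_count_score(n_stars: int) -> float:
    """Piecewise score vs. star count with the qualitative shape described above;
    the slopes here are illustrative placeholders."""
    if n_stars < 100:
        return 0.0
    high_value = min(n_stars - 100, 60) * 0.12   # next 60 stars are worth a lot
    low_value  = max(n_stars - 160, 0) * 0.02    # diminishing value per star beyond that
    return min(10.0, high_value + low_value)
```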
Features / benefits
- Acknowledges the subjectivity of decision making, but keeps it grounded in analysis
  – Numbers and arithmetic reflect our judgments, or we change them
- Scoring by a group: balance many opinions, differing expertise
- Transparently documents the decision
  – Factors considered
  – Metrics used
  – Value judgments
  – Importance judgments
- Robustness of the result
  – Decision stands on all judgments taken together
- Simplifies re-evaluation with new concepts / data