1
Comparative Evaluation of a Novel Concept Design Method
Presented by: Damian Rogers, PhD Candidate, Ryerson University, Toronto, Canada
2
Background
- Part of doctoral research
- Research grounded in engineering design theory
- Meta-level: covers all disciplines of engineering
- Based on previous work creating new methods for the “early” stages: problem analysis through concept evaluation
3
Creativity in Design
- Agreement that creative designs are good
- Current creativity tools do not show “how”
  - Stimuli to promote creative thinking
  - Random chance, no guarantee
- No consensus on measuring creativity
  - Novelty
  - Usefulness
  - Cohesion (complexity, elaboration, abstraction, flexibility, and robustness)
4
Design Experiments
- Few empirical experiments conducted
  - Verify the components of creativity
  - Give designers a “way” to be creative
- Study to assess Design by DNA (DbD) vs. other conventional methods
  - DbD is the author’s own method
  - Concept design only
5
Is DbD Effective?
- Conducted an experiment to test the hypothesis
- 2 experimental runs
- Both groups given the same task: design an urban bicycle
- Both groups given the same info package
- Analyze the final product of each participant
  - Creativity measures
  - NASA TLX
  - Short questionnaire
6
Evaluating Creativity
- Measured by: novelty, usefulness, cohesion
- Participants evaluated themselves
- A committee evaluated each participant
- Evaluated on a 5-point Likert scale
- Mean for each population taken in each category
- Means across methods compared
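The scoring procedure on this slide can be illustrated with a short sketch. This is not the author's analysis code; the ratings and the pandas-based grouping below are invented for illustration only.

```python
# Minimal sketch of the scoring step: average 5-point Likert ratings
# per design method and creativity category. All ratings below are
# invented for illustration, not the experiment's data.
import pandas as pd

ratings = pd.DataFrame([
    # one row per evaluation of one participant's concept
    {"method": "DbD",     "novelty": 3, "usefulness": 4, "cohesion": 3},
    {"method": "DbD",     "novelty": 4, "usefulness": 3, "cohesion": 4},
    {"method": "TRIZ",    "novelty": 5, "usefulness": 3, "cohesion": 2},
    {"method": "TRIZ",    "novelty": 4, "usefulness": 4, "cohesion": 3},
    {"method": "Systems", "novelty": 4, "usefulness": 4, "cohesion": 4},
])

# Mean score for each population (method) in each category;
# these means are then compared across methods.
means = ratings.groupby("method")[["novelty", "usefulness", "cohesion"]].mean()
print(means.round(2))
```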
7
Creativity Assessments
Experimental Run 1
- Results vary
- Systems appears to come out on top

| Method         | Eval.     | Novelty | Usefulness | Cohesion |
|----------------|-----------|---------|------------|----------|
| Design by DNA  | Committee | 3.06    | 3.44       | 3.31     |
| Design by DNA  | Comm+self | 3.13    | 3.96       | N/A      |
| Design by TRIZ | Committee | 4.08    | 3.50       | 2.67     |
| Design by TRIZ | Comm+self | 3.72    | 3.94       | N/A      |
| Systems Design | Committee | 3.83    | 3.75       |          |
| Systems Design | Comm+self | 3.78    |            | N/A      |
8
Creativity Assessments cont’d
Experimental Run 2
- DbD category winner in 2 of 3

| Method        | Eval.     | Novelty | Usefulness | Cohesion |
|---------------|-----------|---------|------------|----------|
| Design by DNA | Committee | 3.42    | 3.83       | 3.25     |
| Design by DNA | Comm+self | 3.5     | 3.78       | N/A      |
| Common Design | Committee | 4.38    | 3.13       |          |
| Common Design | Comm+self | 3.75    | 3.42       | N/A      |
9
NASA TLX
- Task Load Index
- Widely used as a measure of how tasks affect the people doing them
- Uses 6 measures of task loading
- Higher numbers mean more load on the individual

| Method | Mental Demand | Physical Demand | Temporal Demand | Perf. | Effort | Frustration |
|--------|---------------|-----------------|-----------------|-------|--------|-------------|
| DbD    | 7.17          | 1.17            | 4.00            | 3.33  | 6.83   | 2.33        |
| Common | 7.5           | 2.5             | 4.75            | 5.25  | 6.00   | 3.00        |
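A raw (unweighted) NASA TLX score is commonly taken as the mean of the six subscale ratings; the standard instrument also has a weighted variant based on pairwise comparisons, which the slides do not mention. The sketch below assumes the raw variant and uses invented ratings.

```python
# Sketch of a raw (unweighted) NASA TLX overall workload score:
# the mean of the six subscale ratings. Higher = more load on the
# individual. Ratings here are invented for illustration.
from statistics import mean

subscales = {
    "mental_demand":   7.2,
    "physical_demand": 1.2,
    "temporal_demand": 4.0,
    "performance":     3.3,
    "effort":          6.8,
    "frustration":     2.3,
}

raw_tlx = mean(subscales.values())
print(f"Raw TLX: {raw_tlx:.2f}")
```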
10
Statistical Analysis
- Statistical analysis was run on the experiment results
- “T-test” run on each parameter in both the “creativity” and “NASA TLX” assessments
- Confidence intervals recorded for each
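One plausible implementation of this step is an independent-samples t-test per parameter, with 1 - p reported as a confidence percentage (which would match the per-parameter figures on the next slide). The slides do not specify the exact test variant, so the Welch's t-test and the made-up scores below are assumptions.

```python
# Sketch of one per-parameter comparison: a two-sample Welch's t-test
# between two methods' scores on a single parameter, reporting 1 - p
# as a confidence percentage. The test variant is an assumption and
# the scores are invented for illustration.
from scipy import stats

dbd_novelty    = [3.0, 3.5, 4.0, 2.5, 3.5, 4.0]   # one group's ratings
common_novelty = [4.5, 4.0, 4.5, 4.0, 4.5, 4.5]   # the other group's ratings

t_stat, p_value = stats.ttest_ind(dbd_novelty, common_novelty, equal_var=False)
confidence = (1.0 - p_value) * 100.0
print(f"t = {t_stat:.2f}, confidence = {confidence:.2f}%")
```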
11
Confidence Intervals

Run 1
| Method         | Eval.     | Novelty | Usefulness | Cohesion |
|----------------|-----------|---------|------------|----------|
| Systems vs DbD | Committee | 93.19%  | 84.44%     | 83.19%   |
| Systems vs DbD | Comm+self | 89.55%  | 67.34%     | N/A      |
| TRIZ vs DbD    | Committee | 99.49%  | 58.65%     | 87.44%   |
| TRIZ vs DbD    | Comm+self | 93.05%  | 51.47%     | N/A      |

Run 2
| Eval.     | Novelty | Usefulness | Cohesion | Cohesion´ |
|-----------|---------|------------|----------|-----------|
| Committee | 88.05%  | 84.07%     | 55.44%   | 96.83%    |
| Comm+self | 65.80%  | 77.28%     | N/A      |           |

TLX
| Mental | Physical | Temporal | Perf.  | Effort | Frust. |
|--------|----------|----------|--------|--------|--------|
| 75.25% | 87.49%   | 64.49%   | 77.69% | 70.82% | 75.25% |
12
Temporal Influence
- Large variability of time spent in run 1
  - 6 to 17 hours
- Time a factor in results
  - Actual value unknown
- Temporal influence as a way to account for time variability
  - 0% TI = original data
  - 100% TI = normalized to time spent
- Investigate interesting thresholds
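The slides do not give the exact Temporal Influence (TI) formula. One reading consistent with “0% TI = original data” and “100% TI = normalized to time spent” is a linear blend between each raw score and a time-normalized score, as sketched below; both the normalization (rescaling by group-mean hours over participant hours) and the blend are assumptions for illustration, not the author's published formula.

```python
# Hypothetical sketch of a Temporal Influence (TI) adjustment:
# blend a participant's raw score with a time-normalized score.
# ti = 0.0 returns the raw data; ti = 1.0 returns the fully
# time-normalized score. The normalization used here is an
# assumption, not the author's published formula.
def ti_adjusted(raw_score: float, hours: float, mean_hours: float, ti: float) -> float:
    normalized = raw_score * (mean_hours / hours)      # discounts longer-than-average effort
    return (1.0 - ti) * raw_score + ti * normalized

# Example: a concept rated 4.0 that took 17 h, where the group mean was 10 h.
for ti in (0.0, 0.083, 0.125, 0.20, 1.0):
    print(f"TI={ti:>6.1%}: {ti_adjusted(4.0, 17.0, 10.0, ti):.3f}")
```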
13
100% Temporal Influence
| Method         | Eval.  | Nov.  | Use.  | Coh.  |
|----------------|--------|-------|-------|-------|
| Systems Design | Comm   | 1.502 | 1.516 |       |
| Systems Design | C+self | 1.506 | 1.543 | N/A   |
| Design by TRIZ | Comm   | 2.213 | 1.888 | 1.499 |
| Design by TRIZ | C+self | 2.015 | 2.137 | N/A   |
| Design by DNA  | Comm   | 2.217 | 2.883 | 2.817 |
| Design by DNA  | C+self | 2.300 | 3.256 | N/A   |

8.3% Temporal Influence
| Method         | Eval.  | Nov.  | Use.  | Coh.  |
|----------------|--------|-------|-------|-------|
| Systems Design | Comm   | 3.388 | 3.331 |       |
| Systems Design | C+self | 3.348 | 3.361 | N/A   |
| Design by TRIZ | Comm   | 3.794 | 3.249 | 2.494 |
| Design by TRIZ | C+self | 3.457 | 3.665 | N/A   |
| Design by DNA  | Comm   | 2.954 | 3.329 | 3.209 |
| Design by DNA  | C+self | 3.017 | 3.831 | N/A   |

12.5% Temporal Influence
| Method         | Eval.  | Nov.  | Use.  | Coh.  |
|----------------|--------|-------|-------|-------|
| Systems Design | Comm   | 3.211 | 3.164 |       |
| Systems Design | C+self | 3.176 | 3.194 | N/A   |
| Design by TRIZ | Comm   | 3.670 | 3.143 | 2.419 |
| Design by TRIZ | C+self | 3.344 | 3.545 | N/A   |
| Design by DNA  | Comm   | 2.905 | 3.280 | 3.162 |
| Design by DNA  | C+self | 2.968 | 3.773 | N/A   |

20% Temporal Influence
| Method         | Eval.  | Nov.  | Use.  | Coh.  |
|----------------|--------|-------|-------|-------|
| Systems Design | Comm   | 2.914 | 2.883 |       |
| Systems Design | C+self | 2.889 | 2.914 | N/A   |
| Design by TRIZ | Comm   | 3.459 | 2.960 | 2.291 |
| Design by TRIZ | C+self | 3.150 | 3.341 | N/A   |
| Design by DNA  | Comm   | 2.820 | 3.194 | 3.080 |
| Design by DNA  | C+self | 2.883 | 3.673 | N/A   |
14
Conclusions
- Experimental results show DbD could be a favourable method
- DbD outperforms common design in most categories, with TI
- Reasonable to explain how DbD could win out in all categories
- Statistics show that the results are promising
- More experiments should be run to verify results