Beth Blickensderfer, Ph.D. Robert Thomas, M.S., ATP, CFII


1 Beth Blickensderfer, Ph.D. Robert Thomas, M.S., ATP, CFII
Assessing Aviation Weather Knowledge in General Aviation Pilots: Overview and Initial Results

Session: General Aviation Pilots' Aviation Weather Knowledge: Research Results and Implications for Training and Assessment
- A Proposed Aviation Weather Knowledge Taxonomy for GA Pilots (John Lanicci, Ph.D., Meteorologist)
- Assessing Aviation Weather Knowledge in General Aviation Pilots: Overview and Initial Results (Beth Blickensderfer, Ph.D., Human Factors Psychologist)
- GA Pilot Aviation Weather Resources: Challenges and Suggestions for Improvement (Tom Guinn, Ph.D., Meteorologist, and Bob Thomas, M.S.A., CFII, Aviation Professor)
- Weather Salience in GA Pilots: An Initial Study (Jennifer Thropp, Ph.D., Human Factors Psychologist)
- Using a Scenario-Based Approach to Assessing Aviation Weather Knowledge in GA Pilots (Jessica Cruit, M.S., Ph.D., recent Human Factors Ph.D. graduate)

Authors: Beth Blickensderfer, Ph.D.; John Lanicci, Ph.D.; Thomas Guinn, Ph.D.; Robert Thomas, M.S., ATP, CFII; Jayde King, M.S.; Yolanda Ortiz, M.S.

Presentation given at the Friends and Partners of Aviation Weather (FPAW) Meeting; Nov 2-3, 2016; Orlando, FL

2 Background
Research indicates:
- FAA knowledge exams for private and commercial pilots are out of date and too easy.
- GA pilots may lack adequate aviation weather knowledge.
- Are knowledge gaps a contributing factor to the accident rate?
Aviation weather knowledge assessment tools have both:
- Practical use (e.g., FAA exams prompt better instruction)
- Research use (e.g., identifying aviation weather training needs; validating aviation weather training strategies)

3 Purpose
Develop and validate aviation weather knowledge questions for use in subsequent general aviation weather research.

4 Knowledge questions
95 aviation weather questions ("items")
- Team: 2 meteorologists, 1 flight instructor, 1 I-O/HF psychologist, 2 HF graduate students
- Item content: driven by task analysis, FAA documents, ACS codes, and AFS-630 content guidelines
- Item format: driven by the AFS-630 item writing guide
- Item level of learning: driven by research guidelines and AFS-630 item difficulty level guidelines
- Content validation: FAA personnel
Levels of learning:
- Rote (verbal information; memory)
- Understanding (verbal information)
- Application (cognitive skill)
- Correlation (cognitive skill)
Content validation questions:
- Is the question content relevant to the private/commercial pilot certificate?
- Is it pertinent to safety and relevant to safe operations?
- Is it more effective to introduce a new question or revise an existing one?
- Is the subject matter relevant to the airman every day or only occasionally?
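The internal-consistency reliability reported later in the deck (Cronbach's alpha) can be computed directly from a 0/1 item response matrix. A minimal pure-Python sketch; the small response matrix here is hypothetical illustration data, not the study's:

```python
# Cronbach's alpha: internal-consistency reliability of a test.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# Rows = examinees, columns = items (1 = correct, 0 = incorrect).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # column-wise item scores
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses: 6 examinees answering 5 items.
responses = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
]
print(round(cronbach_alpha(responses), 3))
```

Values near 1 indicate that items measure a common construct; an alpha of .92 over 95 items, as reported for this instrument, is high.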

5 Method As shown in AC

6 Participants
N = 204 (June to September 2016)
- ERAU affiliated = 133; non-ERAU = 71
- Part 61 = 60; Part 141/142 = 143
Flight hours: mean = 201.4; median = 131
Pilot certificate and/or rating:
- Student pilots = 41
- Private pilots = 72
- Instrument = 50
- Commercial = 41
Years flying: mean = 3.6
Flight training:
- Part 61 = 52
- Part 141 collegiate = 118
- Part 141 non-collegiate = 9
- Mixture = 3
Region in which the majority of flight hours were flown:
- NW = 2; SW = 10; NE = 12; SC = 4; EC = 36; NE = 18; SE = 99

7 Procedure
- Informed consent
- Completed demographic information and attitudinal measures: self-efficacy (confidence) and weather salience
- Completed knowledge questions: computer-based at ERAU (randomized order) or paper-based (Oshkosh)
- Paid $20 + $0.31 per correctly answered question (ERAU students)
- Debriefed by experimenter (graduate research assistant)

8 RESULTS

9 Overall Aviation Weather Knowledge Score (% Correct)
95 questions (Cronbach's alpha = .92)

Group | n | M (SD)
Private-in-training | 41 | 47.65 (13.61)
Private | 72 | 56.62 (15.67)
Private with instrument | 50 | 61.77 (12.93)
Commercial with instrument | 41 | 65.62 (14.50)

One-way ANOVA: significant between-groups effect. A Tukey post hoc test revealed that the overall percent correct of student pilots (M = 47.65, SD = 13.61) was significantly lower than that of private pilots (M = 56.62, SD = 15.67, p = .009), private pilots with an instrument rating (M = 61.77, SD = 12.93, p < .0005), and commercial pilots with an instrument rating (M = 65.62, SD = 14.50, p < .0005). The post hoc test also revealed that commercial pilots with an instrument rating had significantly higher composite test scores than private pilots (p = .009). No other between-groups differences appeared.
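The between-groups comparison above is a one-way ANOVA on percent-correct scores across certificate/rating levels. A minimal pure-Python sketch of the F statistic; the group score lists are hypothetical, not the study's data:

```python
# One-way ANOVA: F = MS_between / MS_within.
# Each list holds hypothetical percent-correct scores for one
# certificate/rating group; they are not the study's data.

def one_way_anova_f(groups):
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares: group sizes times squared mean offsets.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

student    = [45, 50, 38, 55, 47]
private    = [52, 60, 58, 49, 64]
instrument = [66, 59, 70, 62, 61]
commercial = [71, 64, 68, 73, 60]
f = one_way_anova_f([student, private, instrument, commercial])
print(round(f, 2))
```

A large F relative to its F(df_between, df_within) distribution licenses the post hoc pairwise tests (e.g., Tukey) that locate which group pairs differ.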

10 Scores on Aviation Weather Knowledge Categories (Lanicci et al.)
- Weather phenomena: 31 questions; alpha = .76
- Weather hazard products: 80 questions; alpha = .91
- Weather hazard product sources: 10 questions; alpha = .66
3 x 4 mixed ANOVA: two significant main effects; no significant interaction effect.
A main effect did occur for knowledge categories, Wilks' Lambda = .62, F(2, 199) = 62.19, p < .001, partial eta squared = .39. Post hoc paired-samples t-tests with a Bonferroni correction across the three knowledge categories revealed no significant difference between scores on knowledge of weather phenomena and weather hazard products, t(203) = 1.82, p = .07. Weather hazard product source scores (M = 68.73, SD = 20.64) were significantly higher than both weather phenomena scores (M = 58.98, SD = 16.26), t(203) = 9.74, p < .001, and weather hazard product scores (M = 58.05, SD = 16.05), t(203) = 11.45, p < .001. Higher scores on the weather hazard product source questions may indicate that those questions were easier than the questions about the phenomena and/or the weather products themselves. Alternatively, pilots may be better trained in weather product sources than in the other two categories of knowledge.
The main effect comparing the four pilot ratings was also significant, F(3, 200) = 11.07, p < .001, partial eta squared = .14, suggesting a difference between the ratings on knowledge scores. Post hoc analysis showed that student pilots (M = 51.82, SD = 2.38) scored significantly lower than private (M = 60.41, SD = 1.80), private with instrument (M = 65.79, SD = 2.16), and commercial with instrument pilots (M = 69.95, SD = 2.38). However, private pilots did not differ significantly from private pilots with instrument ratings (p = .23), and private pilots with instrument ratings did not differ significantly from commercial pilots with instrument ratings (p = .57).
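The category comparisons above are paired-samples t-tests (each pilot has a score in every knowledge category) with a Bonferroni correction for the three pairwise tests. A minimal pure-Python sketch; the paired score lists are hypothetical, not the study's data:

```python
import math

# Paired-samples t: t = mean(d) / (sd(d) / sqrt(n)), where d is the
# within-pilot score difference between two knowledge categories.

def paired_t(a, b):
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (sd_d / math.sqrt(n))

phenomena = [60, 55, 72, 48, 66, 58]   # hypothetical category scores
sources   = [70, 68, 75, 60, 71, 65]   # same pilots, second category
t = paired_t(sources, phenomena)

# Bonferroni: with 3 pairwise tests, each p is compared against .05 / 3.
alpha_corrected = 0.05 / 3
print(round(t, 2), round(alpha_corrected, 4))
```

The Bonferroni correction keeps the family-wise error rate at .05 across the three category comparisons by tightening the per-test threshold.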

11 Weather Phenomena Subcategories
4 x 7 mixed ANOVA: impact of pilot certificate/rating and weather phenomena subcategories on score. Both main effects were significant; no interaction.
Main effect for weather phenomena subcategories:
- Icing and turbulence (≈ 70%)
- Definitions of LIFR, IFR, MVFR, and VFR (≈ 65%)
- Thunderstorm, satellite, radar, and lightning concepts (≈ 60% and below)
A 4 x 7 mixed analysis of variance was conducted to evaluate the impact of pilot rating (student, private, instrument, commercial) and weather phenomena subcategory (1201, 1202, 1204, 1206, 1003, 1011, 1013) on knowledge score. There was a significant main effect of weather phenomena subcategory on score, Wilks' Lambda = 0.43, F(6, 195) = 43.14, p < 0.05, partial eta squared = 0.57. Post hoc pairwise comparisons were performed on the weather phenomena subcategory levels to investigate differences in scores. Participants' scores on icing and turbulence (1206 and 1202) were significantly higher than their scores on definitions of LIFR, IFR, MVFR, and VFR (1201e), which, in turn, were significantly higher than their scores on thunderstorm, satellite, radar, and lightning concepts (1204, , 1013). Regardless of weather phenomena subcategory, there was a significant main effect of pilot rating on score, F(3, 200) = 12.35, p < 0.05. Similar to the analysis reported previously, Bonferroni post hoc comparisons were performed to evaluate differences in scores between pilot rating levels. Student pilots performed significantly lower on weather phenomena questions than private, instrument, and commercial rated pilots, p < 0.05. Private rated pilots' scores were significantly lower than commercial rated pilots' scores, p < 0.05, but not lower than instrument rated pilots' scores, p > 0.05. There was also no significant difference between instrument and commercial rated pilots' scores, p > 0.05.

12 Weather Hazard Products Subcategories
4 x 7 mixed ANOVA: impact of pilot certificate/rating and weather hazard product subcategories on knowledge score. Two significant main effects; no interaction.
Main effect for weather hazard product subcategories:
- Interpreting upper-level charts (≈ 75%)
- Interpreting convective SIGMETs and surface charts (≈ 65%)
- Interpreting surface weather and PIREPs, AIRMETs, satellite data (infrared, visible, water vapor), and radar (≈ 55%)
A 4 x 7 mixed ANOVA was conducted to evaluate the impact of pilot certificate/rating (student, private, instrument, commercial) and weather hazard product subcategory (2001, 2002, 2003a, 2005, 2022, 2023, 2026) on knowledge score. Regardless of pilot certificate/rating, there was a significant main effect of weather hazard product subcategory on score, Wilks' Lambda = 0.274, F(6, 195) = 86.31, p < 0.05. Post hoc pairwise comparisons were performed on the weather hazard product subcategories to investigate differences in scores. Participants' scores were significantly higher on interpreting upper-level charts (2002) than on interpreting convective SIGMETs and surface charts (2003 and 2026), which, in turn, were significantly higher than scores on interpreting surface weather and PIREPs, interpreting AIRMETs, interpreting satellite data (infrared, visible, and water vapor), and interpreting weather radar (2001, 2005, 2022, 2023). Regardless of weather hazard product subcategory, there was a significant main effect of pilot certificate/rating on score, F(3, 200) = 11.85, p < 0.05. Bonferroni post hoc comparisons were performed to evaluate differences in scores between pilot rating levels.
Student pilots performed significantly lower than private, instrument, and commercial rated pilots, p < 0.05. Private rated pilots' scores were significantly lower than commercial rated pilots' scores, p < 0.05, but not significantly lower than instrument rated pilots' scores, p > 0.05. Scores of instrument rated pilots did not differ significantly from those of commercial rated pilots, p > 0.05. No significant interaction occurred between weather hazard product subcategory and pilot rating, F(18, 552) = 0.826, p = .67, indicating that there is no combined effect of pilot rating and weather hazard product subcategory on score.

13 Weather Hazard Product Source Subcategories
4 x 3 mixed ANOVA: impact of pilot certificate/rating and weather hazard product source subcategories on knowledge score. Both main effects were significant; no interaction.
Main effect for weather hazard product source subcategories:
- When to use weather product sources (≈ 72%)
- Weather issues in flight planning in general (≈ 70%)
- How flight plan weather products are constructed (≈ 70%)
A 4 x 3 mixed ANOVA was conducted to evaluate the impact of pilot certificate/rating (student, private, instrument, commercial) and weather hazard product source subcategory (3007, 3007a, 3201) on knowledge score. Regardless of pilot certificate/rating, there was a significant main effect of weather product source subcategory on scores, Wilks' Lambda = .732, F(1, 200) = 73.27, p < 0.05, partial eta squared = .268. Post hoc pairwise comparisons were performed on the weather product source subcategories to investigate differences. Participants scored significantly higher on knowledge of when to use flight planning product sources (3201) than on knowledge of how flight planning products are constructed and on flight planning in general (3007a and 3007). A closer inspection of the means in Table 15, however, indicates that 3201 scores were higher because the pilots with both commercial and instrument ratings scored highly on that subcategory. Regardless of weather hazard product source subcategory, there was also a significant main effect of pilot rating, F(3, 200) = 6.428, p < 0.05. Bonferroni post hoc comparisons were performed to evaluate differences in scores between pilot rating levels.
Student pilots performed significantly lower on weather product source questions than instrument and commercial rated pilots, p < 0.05. However, there was no significant difference between student pilots' scores and private pilots' scores, p > 0.05. Private rated pilots' scores were significantly lower than commercial rated pilots' scores, p < 0.05. However, there was no significant difference between instrument and commercial with instrument scores, p > 0.05. No significant interaction occurred between pilot rating and weather hazard product source subcategories, Wilks' Lambda = .975, F(3, 200) = 1.73, p = .162, indicating that there is no combined effect of pilot rating and weather hazard product source subcategories on score.
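The effect sizes quoted throughout these analyses are partial eta squared values: the proportion of variance attributable to an effect after accounting for error, SS_effect / (SS_effect + SS_error). For the single-effect Wilks' Lambda tests reported here, the quoted values follow eta_p^2 = 1 - Lambda (e.g., Lambda = .732 pairs with .268 above). A minimal sketch using hypothetical sums of squares:

```python
# Partial eta squared: SS_effect / (SS_effect + SS_error).
# The sums of squares passed below are hypothetical, not the study's values.

def partial_eta_squared(ss_effect, ss_error):
    return ss_effect / (ss_effect + ss_error)

# For the single-effect Wilks' Lambda tests reported on these slides,
# the quoted effect sizes follow eta_p^2 = 1 - Lambda.
def eta_sq_from_wilks(wilks_lambda):
    return 1 - wilks_lambda

print(round(partial_eta_squared(1187.6, 493.2), 3))   # hypothetical SS values
print(round(eta_sq_from_wilks(0.732), 3))             # .268, as reported above
```

Conventionally, partial eta squared values around .01, .06, and .14 are read as small, medium, and large effects, so the .268 and .57 reported here are large.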

14 Good news: training helps!
One-way ANOVA: significant between-groups effect.

15 Training Experience: Estimated Months Since Last Weather Training

Group | M (SD) | Median
Private-in-training | 4.53 (7.81) | 2.00
Private | 12.55 (29.46) | 5.50
Instrument | 12.53 (27.51) |

Note: commercial only, mean (SD) = (39.71), median = 7.5; private with instrument, mean (SD) = 7.69 (8.28), median = 4.0. (Private with instrument reported 8 months, and commercial pilots reported 19+ months.)

16 DISCUSSION and CONCLUSION

17 Discussion
Test questions/instrument:
- Used a systematic approach that followed FAA (2015) guidelines in assessment instrument development.
- The measure has content validity and initial evidence that scores discriminate between pilots of differing levels of training as well as between different types of aviation weather information.
- The instrument generated a spread of scores reflecting both high and low aviation weather knowledge.
GA pilots' knowledge:
- Results indicate gaps in aviation weather knowledge!
Limitations/future research:
- Need to assess the criterion validity of the questions
- Need older GA pilots to take the questions
The current study provides an instrument that can assess GA pilot weather knowledge and, in turn, assess future weather training programs.

18 Questions?

