FAA EDR Standards Project


1 FAA EDR Standards Project
RTCA Plenary Sub-Group 4 & 6 September 16, 2014 Presented by: Sal Catapano - Exelis

2 Outline
- Background / FAA EDR Standards Project Report
- Current In Situ EDR Algorithms / Aircraft Equipped
- FAA EDR Standards Project Overview
- 1-Minute Mean In Situ EDR Report Performance and Recommendations
- 1-Minute Peak In Situ EDR Report Performance and Recommendations
- Variability Analysis Findings
- Follow-on Recommendations
- Open Question / Discussion Period

3 Current EDR Algorithms / Aircraft Equipped
Background & Current EDR Algorithms / Aircraft Equipped

4 Background
- In 2001, ICAO made EDR the turbulence metric standard
- In 2011, the ADS-B ARC recommended the FAA "establish performance standards for EDR"
- In 2012, RTCA SC-206 developed an Operational Services and Environmental Definition (OSED) identifying the necessity for an international effort to develop performance standards for aircraft (in situ) EDR values, independent of computation approach
- In response, the FAA sponsored an EDR Standards Project from July 2012 to September 2014 that provides the analysis, inputs, and recommendations required to adopt in situ EDR performance standards
- Title of the OSED: "Aircraft Derived Meteorological Data via ADS-B Data Link for Wake Vortex, Air Traffic Management and Weather"

5 FAA EDR Standards Final Report
- Delivered to the FAA August 29, 2014
- Documents the research, analysis, and findings of the project
- Includes performance tolerance recommendations for 1-minute mean and peak in situ EDR reports
- Report and appendices submitted to RTCA on September 10, 2014

6 Current Operational In Situ EDR Algorithms

NCAR Vertical Acceleration-Based
- Input: TAS, Altitude, Vertical Acceleration, Weight, Frequency Response, Mach, Flap Angle, Autopilot Status, QC Parameters
- Users: United Airlines
- Windowing: 10-second window every 5 seconds
- Average Calc: Arithmetic mean over 1 minute
- Peak Calc: 95th percentile over 1 minute

ATR Accelerometer-Based
- Input: TAS, Altitude, Vertical Acceleration, Weight, Frequency Response
- Users: American Airlines, others
- Windowing: 5-second running window
- Average Calc: N/A
- Peak Calc: Largest EDR in 30 seconds

NCAR Vertical Wind-Based
- Input: TAS, Altitude, Inertial Vertical Velocity, Body Axis AoA, Pitch Rate, Pitch, Roll Angle, QC, Filter Parameters
- Users: Delta and Southwest Airlines
- Windowing: 10-second running window
- Average Calc: Median over 1 minute
- Peak Calc: Largest EDR in 1 minute

Panasonic Longitudinal Wind-Based
- Input: TAS, Roll Angle for QC, TAMDAR Icing for QC (if using TAMDAR sensor)
- Users: TAMDAR - Regional Airlines
- Windowing: 9-second window
- Average Calc: 1, 3, 7 min; 300, 1500 ft
- Peak Calc: Largest EDR in 1, 3, 7 min; 300, 1500 ft
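The per-minute report aggregation described for the NCAR vertical-acceleration algorithm (10 s window every 5 s, arithmetic mean for the 1-minute mean, 95th percentile for the 1-minute peak) can be sketched as follows; the function name and example values are illustrative, not taken from the algorithm's actual implementation.

```python
import numpy as np

def one_minute_reports(edr_estimates):
    """Aggregate per-window EDR estimates into 1-minute mean and peak
    reports, per the NCAR vertical-acceleration scheme described above:
    a 10 s window every 5 s yields ~12 estimates per minute."""
    x = np.asarray(edr_estimates, dtype=float)
    mean_report = x.mean()              # arithmetic mean over 1 minute
    peak_report = np.percentile(x, 95)  # 95th percentile over 1 minute
    return mean_report, peak_report

# Example: one minute of hypothetical per-window EDR estimates
mean_edr, peak_edr = one_minute_reports(
    [0.08, 0.10, 0.09, 0.12, 0.11, 0.10, 0.09, 0.13, 0.10, 0.11, 0.09, 0.10])
```

The 95th percentile makes the peak report robust to a single spurious window, unlike the "largest EDR" peak used by the other implementations listed here.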

7 Aircraft Equipped

| Airline | Type | Method | Count |
| American and others | B , B B , A320, A321, A | Vertical Acceleration | 500+ |
| Delta | B737NG, B767 | Vertical Wind | 167 |
| Southwest | B , B737NG | Vertical Wind | 156 |
| United | B757 (EDR-equipped B737 no longer in fleet) | Vertical Acceleration | 54 (reducing to 15 by Dec 31, 2015) |
| Regional Airlines via TAMDAR | SAAB 340, ERJ-145, ERJ-190, ERJ-195, Beech 1900C, Dash 8 (Q-100, Q-300, Q-400) | Longitudinal Wind (via TAS) | 256 |

Total: 1133

8 FAA EDR Standards Project Overview

9 Project Team and Key Stakeholders

10 Standards Research Process
- Graph shows vertical wind gust spectra (x-axis: frequency; y-axis: power)
- Brown line: raw vertical wind component that drives the simulator
- Black line: vertical component of wind generated by the simulator
- Green line: vertical component of wind calculated by the research-quality aircraft
- Blue line: vertical component of wind calculated by the commercial aircraft
- Project team members deemed that the simulation provides a representative source of simulated algorithm input data for the project's analysis

11 Input Winds Development
- Homogeneous – exercises mean EDR: maintains a single EDR on average throughout the wind dataset (e.g., 0.5 EDR)
- Non-homogeneous – exercises peak EDR: simulates a "burst" of turbulence embedded in a background field of ambient turbulence

12 Input Wind Datasets

| Case Title | Case Description | Specifications | Turbulence Intensity or Modulation Shape | Length Scales |
| Random Phenomena (Homogeneous) | Turbulence is a random process, therefore datasets were created to replicate it | 5 von Karman | 0.1, 0.2, 0.3, 0.5, 0.7 EDR | 500 meters |
| Nonconformance w/ Algorithm Assumptions (Varying Length Scale) | Operational in situ EDR algorithms all assume a 500-meter length scale; in real-world conditions the length scale varies, therefore datasets were created with varying length scales | 20 von Karman (including the 5 for random) | … 0.3, 0.5, 0.7 EDR | 200, 750, and 1000 meters |
| Noise Floor | Aircraft sensors and avionics introduce noise into EDR calculations, so datasets at low EDR severity levels were generated to determine impacts on EDR calculations | 15 von Karman | 0.01, 0.02, 0.03, 0.04, 0.06 EDR | |
| Nonconformance w/ Algorithm Assumptions (Non-Homogeneous) | In real-world conditions not all turbulence is homogeneous, so modulation was applied to von Karman datasets to simulate convective and mountain-wave turbulence | von Karman at 500 meters with a 0.06 EDR background | Narrow/tall pulse (SPIKE): A1 = 20, L_sigma = 500 m; mid/mid pulse (MID): A1 = 15, L_sigma = 1000 m; wide/short pulse (FLAT): A1 = 5, L_sigma = 1500 m | |
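The non-homogeneous "burst" construction above can be sketched as a pulse modulation of a homogeneous background series. The Gaussian pulse form, the stand-in white-noise background, and the exact use of A1 and L_sigma are assumptions here; only the parameter values for SPIKE/MID/FLAT come from the table.

```python
import numpy as np

def modulate(background, x, a1, l_sigma, x0):
    """Scale a background wind series (sampled at positions x, in meters)
    by a Gaussian pulse of amplitude a1 and width l_sigma centered at x0.
    NOTE: the pulse shape is an assumed illustration, not the project's
    actual modulation function."""
    pulse = 1.0 + (a1 - 1.0) * np.exp(-0.5 * ((x - x0) / l_sigma) ** 2)
    return background * pulse

x = np.linspace(0.0, 20000.0, 4001)              # 20 km of 5 m samples
rng = np.random.default_rng(0)
background = 0.06 * rng.standard_normal(x.size)  # stand-in for the 0.06-EDR field
spike = modulate(background, x, a1=20.0, l_sigma=500.0, x0=10000.0)   # SPIKE-like
flat = modulate(background, x, a1=5.0, l_sigma=1500.0, x0=10000.0)    # FLAT-like
```

A tall narrow pulse (SPIKE) concentrates the burst energy, while a short wide pulse (FLAT) spreads it out, which is what drives the window-length sensitivity examined in the peak EDR slides.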

13 ‘Expected’ EDR Value
- Homogeneous wind datasets – mean EDR: a single ‘expected’ EDR value corresponds to each dataset (i.e., the 0.5 EDR dataset has an ‘expected’ EDR value of 0.5 EDR)
- Non-homogeneous wind datasets – peak EDR: the Spectral Scaling Method developed by the project was used to calculate ‘expected’ EDR values based on individual implementation window length and modulation characteristics

14 Expected Mean vs. Sample Mean
- The reference value (red dot) at the center of the target shows the expected mean value
- Red tolerance bands quantify the range relative to the reference value that contains a specified percentage of the distribution (70% and 99%)
- The sample mean (blue dot) of a distribution is usually offset some distance from the reference value due to bias
- Blue tolerance bands quantify the range relative to the sample mean that contains a specified percentage of the distribution
- If bias = 0, the red and blue dots and range circles coincide
- The project chose to calculate tolerance bands relative to the sample mean because, when the bias is nonzero, tolerance bands calculated relative to the reference value also incorporate the bias and so skew the dispersion measurement
- In this manner the tolerance bands measure the dispersion of the data distributions independent of any bias

15 Statistical Sets
- Bias characterizes how centered an implementation is relative to the EDR expected value
- 70%-band characterizes the spread or tightness of an implementation relative to the sample mean (or root mean square)
- 99%-band characterizes how bounded an implementation is, ignoring only the most severe outliers
- All three statistics are normalized, allowing easy comparison across varying levels of EDR intensity, window length, etc.
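The three statistics can be sketched as below. Normalization by the expected value follows the slides' note; computing the bands as percentiles of absolute deviation from the sample mean is an assumption about the exact formulas.

```python
import numpy as np

def edr_statistics(reports, expected):
    """Bias, 70%-band, and 99%-band for a sample of 1-minute EDR reports
    against an 'expected' reference value. Bias is measured against the
    expected value; the bands are measured around the sample mean, then
    all three are normalized by the expected value."""
    x = np.asarray(reports, dtype=float)
    sample_mean = x.mean()
    bias = (sample_mean - expected) / expected   # centering vs expected value
    dev = np.abs(x - sample_mean)                # spread vs sample mean
    band70 = np.percentile(dev, 70) / expected   # 70%-band
    band99 = np.percentile(dev, 99) / expected   # 99%-band
    return bias, band70, band99

bias, band70, band99 = edr_statistics([0.09, 0.10, 0.11, 0.10], expected=0.10)
```

Because the bands are anchored to the sample mean rather than the expected value, they measure dispersion independent of bias, matching the rationale on the previous slide.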

16 1-Minute Mean EDR

17 In Situ EDR Mean Report Findings
- Implementations performed well compared with the EDR ‘expected’ value
- Consistency across turbulence severity levels and length scales, as well as across implementations
- The noise floor for the project’s simulation was determined to be at or below 0.02 EDR
  - Below the noise floor, bias is high for all implementations
  - Above the noise floor, bias quickly improves as the signal-to-noise ratio increases
- Real-world operational noise floors are dependent on avionics and sensor characteristics

18 1-Minute Mean EDR Performance & Recommendations

| Performance Category | EDR Range | Bias | 70%-band | 99%-band |
| Supplemental | | ±5% | ±10% | ±20% |
| Minimum | >0.02 – 0.20 | ±10% | | |
| Minimum | >0.20 – 0.70 | ±25% | | |

Note: Statistics normalized to expected value

19 1-Minute Mean EDR Performance Tolerance Threshold Example

Example: 0.1 mean EDR performance tolerance thresholds
- Bias (±5%): expected value = 0.1 EDR; the sample mean must fall between 0.095 and 0.105 EDR
- 70%-band (±10%): at least 70% of results must fall within ±10% of the sample mean
- 99%-band (±20%): at least 99% of results must fall within ±20% of the sample mean

Note: Statistics normalized to expected value
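A minimal sketch of checking these thresholds, assuming the ±5%/±10%/±20% (Supplemental) tolerances for a 0.1 EDR expected value; the function and variable names are illustrative only.

```python
import numpy as np

expected = 0.10                                   # expected mean EDR
bias_tol, band70_tol, band99_tol = 0.05, 0.10, 0.20

def passes(reports):
    """True if a set of 1-minute mean EDR reports meets all three
    tolerances. Band widths are normalized to the expected value but
    centered on the sample mean, per the slides."""
    x = np.asarray(reports, dtype=float)
    sample_mean = x.mean()
    # Bias: sample mean must fall between 0.095 and 0.105 EDR
    ok_bias = expected * (1 - bias_tol) <= sample_mean <= expected * (1 + bias_tol)
    # Fractions of reports inside the 70% and 99% tolerance bands
    in70 = np.mean(np.abs(x - sample_mean) <= band70_tol * expected) >= 0.70
    in99 = np.mean(np.abs(x - sample_mean) <= band99_tol * expected) >= 0.99
    return ok_bias and in70 and in99
```

For a sample mean of 0.1 EDR this reproduces the worked ranges: 0.09–0.11 EDR for the 70%-band and 0.08–0.12 EDR for the 99%-band.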

20 Wake Vortex Decay Modeling Performance Objectives (99%-Band)
Note: Wake decay modeling community has performance objectives for null to light turbulence

21 1-Minute Peak EDR

22 1-Minute Peak EDR Expected Value

| Window Length | FLAT | MID | SPIKE |
| 5-second window | 0.32 | 0.51 | 0.70 |
| 8-second window | 0.28 | 0.42 | 0.56 |
| 10-second window | 0.26 | 0.38 | 0.50 |

As window length increases, the expected value decreases while standard deviation and bias increase; a ‘representative’ expected value is assigned to each dataset.

23 Performance Recommendations for 1-Minute Peak In Situ EDR Reports

Observed performance:
| Statistic Set | FLT | MID | SPIKE |
| Bias (1) | +15.6% | +18.3% | +23.2% |
| 70%-band (2) | +23.8% | +26.9% | +35.7% |
| 99%-band (2) | +65.6% | +69.2% | +82.9% |

Recommended peak standards:
| Statistic Set | FLT | MID | SPIKE |
| Bias (1) | +20% | +20% | +25% |
| 70%-band (2) | +30% | +30% | +40% |
| 99%-band (2) | +70% | +70% | +85% |

(1) Bias is normalized to the “representative” expected value
(2) 70% and 99% bands are normalized to the “window length specific” expected value

24 1-Minute Peak EDR Example

Example: MID dataset peak EDR performance tolerance thresholds (9-second window implementation)
- Bias (+20%): ‘representative’ expected value is 0.445 EDR; the bias must fall within 20% of the ‘representative’ expected value
- 70%-band (+30%): the 9-second window-specific expected value is 0.40 EDR; at least 70% of results must fall within 30% of the window-specific expected value
- 99%-band (+70%): at least 99% of results must fall within 70% of the window-specific expected value

For MID, the window-specific expected values are 0.51 EDR (5-second window) and 0.38 EDR (10-second window); the ‘representative’ expected value of 0.445 EDR lies between them. Bias increases with window length.
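The MID-dataset arithmetic above can be sketched as follows. Taking the ‘representative’ expected value as the midpoint of the 5-second (0.51 EDR) and 10-second (0.38 EDR) window-specific values matches the 0.445 EDR shown, and symmetric tolerance windows are an assumption here.

```python
# 'Representative' expected value: assumed midpoint of the window-specific
# values, consistent with the 0.445 EDR on the slide
rep_expected = (0.51 + 0.38) / 2        # 0.445 EDR
win_expected = 0.40                     # 9-second window-specific value (MID)

# Assumed-symmetric tolerance windows from the recommended MID standards
bias_window = (rep_expected * 0.80, rep_expected * 1.20)   # bias +20%
band70 = (win_expected * 0.70, win_expected * 1.30)        # 70%-band +30%
band99 = (win_expected * 0.30, win_expected * 1.70)        # 99%-band +70%
```

Note how the bias threshold is anchored to the representative value while the bands are anchored to the window-specific value, mirroring the two normalizations footnoted on the recommendations slide.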

25 Peak EDR Performance Objectives

Today:
- Current user applications employ peak EDR data only for strategic forecasting and planning
- Users are interested in receiving numerous peak EDR reports
- User objectives for peak EDR are subjective and require further sensitivity analysis

Future:
- EDR used in tactical decision making (e.g., cross-linking between aircraft)
- Improved inter-implementation consistency may be required
- Today’s peak EDR performance may not meet future-use performance requirements

26 In Situ EDR Peak Report Findings
- All implementations performed well compared with the EDR ‘expected’ value
- However, all existing in situ EDR implementations employ different window lengths, parameter settings, and calculation methodologies
- Some of these component differences lead to inconsistent peak EDR values across implementations in non-homogeneous turbulence (i.e., convective, mountain wave)
- The project elected to perform a variability analysis on a set of EDR algorithm components to discover sources of variability

27 Variability Analysis

Scatter plot of results:
- Algorithm 1 along the x-axis, Algorithm 2 along the y-axis, Algorithm 3 along the z-axis
- The black dot in the center is the reference value, the 3-D sample mean

Consistency performance curve:
- Iterating from 0 to 100% in 1% increments, the project calculated the minimum distance (in EDR) from the reference value that contains each percentage of data points

Algorithm parameters included in the variability analysis: input type, window length, window function, window overlap, lower cutoff wavenumber, upper cutoff wavenumber
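The consistency-performance-curve construction described above can be sketched as below; using Euclidean distance in the three-algorithm space is an assumption, and the input arrays are illustrative.

```python
import numpy as np

def consistency_curve(a1, a2, a3):
    """For each turbulence case, treat the three algorithms' EDR outputs as
    one 3-D point; take the 3-D sample mean as the reference value; then,
    for each percentage 0..100, find the smallest distance (in EDR) from
    the reference containing that fraction of the points."""
    pts = np.column_stack([a1, a2, a3])      # one row per case
    ref = pts.mean(axis=0)                   # 3-D sample mean (black dot)
    dist = np.linalg.norm(pts - ref, axis=1) # distance of each case from ref
    return np.percentile(dist, np.arange(101))

curve = consistency_curve([0.1, 0.3], [0.1, 0.3], [0.1, 0.3])
```

A curve that rises slowly means the three algorithms agree closely for most cases; reruns with harmonized parameter settings (next slide) shift the curve downward.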

28 Consistency Improvement Potential
- Red line: algorithms with their native parameter settings, serving as a baseline
- Blue line: algorithms with an 8-second window length; all other parameters native
- Green line: algorithms with an 8-second window length and a common window function where applicable
- Black line: algorithms with a common upper cutoff and native lower cutoffs, which proved to be the best settings for consistency
- Algorithm input data has a level of variability in itself, which contributes to variability between algorithms

29 Follow-on Recommendations
- Performance standard adoption
  - Validate in situ recommendations
  - Determine how compliance will be enforced
- Define operational requirements
  - Pursue a broad ConOps for EDR
  - Perform application-specific sensitivity analyses
- Continue variability analyses
  - Research additional algorithm components
  - Define parameter values for all components
- Pursue additional research into the science of EDR
  - Analyze the impact of distorting assumptions
  - Define an approach to develop vertical EDR profiles
- Consider non-in situ EDR performance standards
- Leverage the momentum of the Project Team’s success
- Follow-on activities MUST have operational significance and benefit

30 Questions / Discussions
Contacts: Michael Emanuel, Joe Sherry, Sal Catapano

31 Back-up Slides

32 Inertial Sub-Range

33 Work Element Relationship

34 EDR Standards Process

35 Algorithm Input Data Development

36 Modulation Pulse

37 Notional Depiction of Relationships Associated with Differing Window Length

38 Spike, Mid, and Flat Expected Values

| Window Length (Seconds) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| SPIKE Dataset Expected Values | 1.19 | 1.05 | 0.90 | 0.79 | 0.70 | 0.64 | 0.60 | 0.56 | 0.53 | 0.50 |
| MID Dataset Expected Values | 0.65 | 0.63 | 0.59 | 0.55 | 0.51 | 0.48 | 0.45 | 0.42 | 0.40 | 0.38 |
| FLT Dataset Expected Values | – | – | – | – | 0.32 | – | – | 0.28 | – | 0.26 |

39 ISE Comparison of Project Generated Non-homogenous Wind Data Sets

40 Example: 0.1 Mean EDR (Expected Value = 0.1 EDR)
- Bias: sample mean = 0.1 EDR
- 70%-band: with sample mean = 0.1 EDR, the left and right tolerance bands fall at 0.09 and 0.11 EDR; performance standard: at least 70% of results must be between 0.09 EDR and 0.11 EDR
- 99%-band: the left and right tolerance bands fall at 0.08 and 0.12 EDR; performance standard: at least 99% of results must be between 0.08 EDR and 0.12 EDR

