
1 Variability of Temperature Measurement in the Canadian Surface Weather and Reference Climate Networks By Gary Beaney, Tomasz Stapf, Brian Sheppard Meteorological Service of Canada

2 Background – Regional Procurement
• When automation of weather stations began in Canada in the late 1980s, there was no specifically designated “climate” network
• Had a network of “primary” stations recording various meteorological parameters

3 Background – Regional Procurement
• When automation of weather stations began in Canada in the late 1980s, there was no specifically designated “climate” network
• Had a network of “primary” stations recording various meteorological parameters
• When sensors were procured for this network, procurement was handled by five distinct Environment Canada Regions: Pacific & Yukon; Prairie and Northern; Ontario; Quebec; Atlantic
• This resulted in a wide variety of instruments throughout the country, all measuring the same parameter
• Many of these “primary” stations are today part of Environment Canada’s Surface Weather and Climate Networks

4 Background – Sensors in Use
• A national survey was undertaken to catalogue the various sensors being used to measure temperature in what are now considered Canada’s Surface Weather and Climate Networks
• Seven different sensors were found to be the predominant source of temperature data

5 Background – Sensors in Use
• A national survey was undertaken to catalogue the various sensors being used to measure temperature in what are now considered Canada’s Surface Weather and Climate Networks
• Seven different sensors were found to be the predominant source of temperature data
• In addition to sensor type, differences were reported with respect to shield type and shield aspiration

6 Background – Sensors in Use
• Eleven predominant sensor types/configurations were found to be in use in the Canadian Surface Weather and Climate Networks:
   1) CSI 44002A      Wooden Screen (WS)          Non-Aspirated (NA)
   2) CSI 44212       Wooden Screen (WS)          Aspirated (A)
   3) CSI HMP35C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   4) CSI HMP45C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   5) CSI HMP45C212   Wooden Screen (WS)          Aspirated (A)
   6) CSI HMP45C212   Wooden Screen (WS)          Non-Aspirated (NA)
   7) CSI HMP45C212   Gill Screen (G)             Aspirated (A)
   8) CSI HMP45C212   Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   9) CSI HMP45CF     Wooden Screen (WS)          Non-Aspirated (NA)
  10) CSI HMP45CF     Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
  11) CSI PRT1000     Wooden Screen (WS)          Non-Aspirated (NA)

7 Purpose: Attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks.
• Is a sensor’s reading of temperature close to the truth? (Operational Comparability; see below)
  X_ai = ith measurement made by one system
  X_bi = ith simultaneous measurement made by another system
  N = number of samples used
• Is a sensor model consistent in its ability to measure temperature from one identical sensor to another?
• Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?
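The operational comparability formula appears only as a graphic in the original slides and did not survive the transcript. Consistent with the variable definitions above, the conventional form is the root-mean-square of the simultaneous paired differences; this is a reconstruction, not text taken from the slide:

```latex
OC = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(X_{ai} - X_{bi}\right)^{2}}
```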

8 Data – Establishing a “True” Temperature Reference
• Three reference temperature sensors (YSI SP20048) were installed in a triangle formation in aspirated Stevenson Screens (Reference Sensor 1, Reference Sensor 2, Reference Sensor 3)
• The average of the three was taken to represent the “true” temperature in the middle of the triangle
• Each sensor was calibrated and the associated corrections were applied
• Only instances in which all three reference temperature sensors agreed to within 0.5 °C were used in the analysis (a sketch of this screening follows)
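A minimal sketch of the reference screening described above, assuming the minutely data sit in a pandas DataFrame with one column per reference sensor; the column names and DataFrame layout are illustrative, not taken from the study:

```python
import pandas as pd

def build_reference(df: pd.DataFrame, tolerance: float = 0.5) -> pd.Series:
    """Average the three reference sensors, keeping only minutes where all
    three agree to within `tolerance` degrees Celsius (0.5 °C in the study)."""
    refs = df[["ref_1", "ref_2", "ref_3"]]        # illustrative column names
    spread = refs.max(axis=1) - refs.min(axis=1)  # largest disagreement among the three
    agreed = refs[spread <= tolerance]            # drop minutes where they disagree
    return agreed.mean(axis=1)                    # estimate of the "true" temperature

# usage (illustrative): reference = build_reference(minutely_data)
```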

9 Purpose: Attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks.
• Is a sensor’s reading of temperature close to the truth? (Operational Comparability)
  X_ai = ith measurement made by one system
  X_bi = ith simultaneous measurement made by another system
  N = number of samples used
• Is a sensor model consistent in its ability to measure temperature from one identical sensor to another? (Functional Precision; see the sketch below)
  X_ai = ith measurement made by one system
  X_bi = ith simultaneous measurement made by an identical system
  N = number of samples used
• Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?
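A sketch of how both scores could be computed from paired, simultaneous measurements. The root-mean-square form is assumed here (the slide shows the formula only as a graphic); for operational comparability the second series comes from a different system, for functional precision it comes from an identical sensor in the identical configuration:

```python
import numpy as np

def rms_difference(x_a, x_b) -> float:
    """Root-mean-square of simultaneous paired differences between two systems.

    Operational comparability: x_b comes from a different system (e.g. the reference).
    Functional precision:      x_b comes from an identical sensor/configuration.
    """
    diff = np.asarray(x_a, dtype=float) - np.asarray(x_b, dtype=float)
    diff = diff[~np.isnan(diff)]              # ignore minutes with missing values
    return float(np.sqrt(np.mean(diff ** 2)))
```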

10 Data – Sensors Under Test
   1) CSI 44002A      Wooden Screen (WS)          Non-Aspirated (NA)
   2) CSI 44212       Wooden Screen (WS)          Aspirated (A)
   3) CSI HMP35C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   4) CSI HMP45C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   5) CSI HMP45C212   Wooden Screen (WS)          Aspirated (A)
   6) CSI HMP45C212   Wooden Screen (WS)          Non-Aspirated (NA)
   7) CSI HMP45C212   Gill Screen (G)             Aspirated (A)
   8) CSI HMP45C212   Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
   9) CSI HMP45CF     Wooden Screen (WS)          Non-Aspirated (NA)
  10) CSI HMP45CF     Gill 12-Plate Screen (G12)  Non-Aspirated (NA)
  11) CSI PRT1000     Wooden Screen (WS)          Non-Aspirated (NA)

11 Data – Sensors Under Test
   1) CSI 44002A      Wooden Screen (WS)          Non-Aspirated (NA)   A
   2) CSI 44002A      Wooden Screen (WS)          Non-Aspirated (NA)   B
   3) CSI 44212       Wooden Screen (WS)          Aspirated (A)        A
   4) CSI 44212       Wooden Screen (WS)          Aspirated (A)        B
   5) CSI HMP35C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   A
   6) CSI HMP35C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   B
   7) CSI HMP45C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   A
   8) CSI HMP45C      Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   B
   9) CSI HMP45C212   Wooden Screen (WS)          Aspirated (A)        A
  10) CSI HMP45C212   Wooden Screen (WS)          Aspirated (A)        B
  11) CSI HMP45C212   Wooden Screen (WS)          Non-Aspirated (NA)   A
  12) CSI HMP45C212   Wooden Screen (WS)          Non-Aspirated (NA)   B
  13) CSI HMP45C212   Gill Screen (G)             Aspirated (A)        A
  14) CSI HMP45C212   Gill Screen (G)             Aspirated (A)        B
  15) CSI HMP45C212   Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   A
  16) CSI HMP45C212   Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   B
  17) CSI HMP45CF     Wooden Screen (WS)          Non-Aspirated (NA)   A
  18) CSI HMP45CF     Wooden Screen (WS)          Non-Aspirated (NA)   B
  19) CSI HMP45CF     Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   A
  20) CSI HMP45CF     Gill 12-Plate Screen (G12)  Non-Aspirated (NA)   B
  21) CSI PRT1000     Wooden Screen (WS)          Non-Aspirated (NA)   A
  22) CSI PRT1000     Wooden Screen (WS)          Non-Aspirated (NA)   B

12 Purpose: Attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks.
• Is a sensor’s reading of temperature close to the truth? (Operational Comparability)
• Is a sensor model consistent in its ability to measure temperature from one identical sensor to another? (Functional Precision)
• Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?
• Test data was divided into three categories based on reference temperature (see the sketch below):
    Reference temperature ≤ -5 °C
    Reference temperature > -5 °C and ≤ 5 °C
    Reference temperature > 5 °C
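A minimal sketch of the three-way split on reference temperature, assuming a pandas DataFrame with a reference-temperature column; the column name and band labels are illustrative:

```python
import pandas as pd

def split_by_reference(df: pd.DataFrame, ref_col: str = "reference") -> dict:
    """Divide the test data into the study's three reference-temperature bands."""
    return {
        "<= -5 C":           df[df[ref_col] <= -5.0],
        "> -5 C and <= 5 C": df[(df[ref_col] > -5.0) & (df[ref_col] <= 5.0)],
        "> 5 C":             df[df[ref_col] > 5.0],
    }
```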

13 Data – Test Site
• Instruments were installed at Environment Canada’s Centre for Atmospheric Research Experiments
• Located approximately 70 km NW of Toronto, Ontario

14 Data – Sensors Under Test
• The experiment was run from December 2002 to June 2003
• Minutely data was collected from all three reference sensors and all 22 sensors under test
• In order to maintain a consistent dataset for analysis, if any sensor under test was missing a minutely value, the values for that minute were removed for all other sensors under test (see the sketch below)
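A sketch of the screening rule above, assuming the minutely data are in a pandas DataFrame; `sut_columns` stands in for the 22 test-sensor column names and is illustrative:

```python
import pandas as pd

def keep_complete_minutes(df: pd.DataFrame, sut_columns: list) -> pd.DataFrame:
    """Retain only the minutes for which every sensor under test reported a value."""
    # A single missing value in any sensor-under-test column drops the whole minute.
    return df.dropna(subset=sut_columns)
```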

15 Results

16 Results – Operational Comparability Scores (°C)
[Chart: operational comparability scores by sensor for the three reference-temperature categories: ≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C]

17 Results – Operational Comparability Scores (°C)
[Chart: operational comparability scores by sensor, with summary statistics by category:]
           ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
  Best       0.03            0.03             0.07
  Worst      0.23            0.15             0.29
  Avg.       0.14            0.11             0.15
  Range      0.20            0.12             0.22

18 Results – Operational Comparability Scores (°C)
[Chart repeated with the same summary statistics:]
           ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
  Best       0.03            0.03             0.07
  Worst      0.23            0.15             0.29
  Avg.       0.14            0.11             0.15
  Range      0.20            0.12             0.22

19 Results – Percentage Frequency of Differences from Reference
[Charts: percentage frequency of difference (%) from the reference for the sensors with the best and the worst operational comparability scores, in each category: ≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C]

20 Results – Percentage Frequency of Differences from Reference
[Charts as on the previous slide, with annotated frequency values: 0.05%, 0.02%, 15.79%, 7.38%, 12.43%]

21 Results – Differences from Reference
• Time series represent hourly differences from the reference temperature over the period of test
[Charts: sensors with the highest and lowest operational comparability scores; difference between means = 0.34 °C, 0.21 °C, and 0.37 °C]

22 Results – Functional Precision Scores (°C)
[Chart: functional precision scores by sensor for the three reference-temperature categories: ≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C]

23 Results – Functional Precision Scores (°C)
[Chart: functional precision scores by sensor, with summary statistics by category:]
           ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
  Best       0.04            0.03             0.06
  Worst      0.16            0.12             0.19
  Avg.       0.07            0.06             0.10
  Range      0.12            0.09             0.13

24 Results – Functional Precision Scores (°C)
[Chart repeated with the same summary statistics:]
           ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
  Best       0.04            0.03             0.06
  Worst      0.16            0.12             0.19
  Avg.       0.07            0.06             0.10
  Range      0.12            0.09             0.13

25 Results – Difference from Reference
[Time-series charts: sensors with the highest and lowest functional precision scores in each category (≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C); differences between means = 0.05 °C, 0.002 °C, 0.15 °C, and 0.3 °C]

26 Conclusions
• Purpose of study: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks
    – Closeness to the “truth”
    – Consistency from one identical sensor to another
    – Temperature dependence
• Wide range of Operational Comparability scores observed
    – Highest: 0.23 °C; Lowest: 0.03 °C
    – In the worst case, over 15% of minutely differences from the reference exceeded 0.5 °C
• Wide range of Functional Precision scores observed
    – Highest: 0.19 °C; Lowest: 0.03 °C
• PRT1000 WS NA A: best operational comparability score in the ≤ -5 °C category
• HMP45C212 G A A: best operational comparability score in the > -5 °C and ≤ 5 °C category
• 44002A WS NA A: best operational comparability score in the > 5 °C category

27 Final Note – Future Instrument Procurement
• To avoid such variability in the future, a single temperature sensor model will be procured by a central body and used at all stations throughout Canada
• It has been proposed that the analysis methodology used in this study be used to select the best instruments for future procurements
• The analysis will be undertaken at three different test sites representing significantly different climatologies
• This should result in a more uniform measurement of temperature and other parameters across Canada

28 Questions?

29

30 [Charts: Worst Operational Comparability Score; Best Functional Precision Score]

31 Results – Difference from Reference Mean (°C)
• Values represent differences between the means of each sensor under test and the reference (SUT - Reference)
• A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (all sensors highlighted in red); a sketch of the test follows
[Table: Difference Between Sensor Under Test and Reference (°C) for the three categories: ≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C]
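The slides do not state which form of t-test was applied; one plausible sketch is a paired t-test on the simultaneous minutely values of a sensor under test and the reference (scipy assumed; function and variable names are illustrative):

```python
import numpy as np
from scipy import stats

def mean_difference_significant(sut: np.ndarray, ref: np.ndarray, alpha: float = 0.05) -> bool:
    """Test whether the mean (SUT - reference) difference is non-zero
    at the 95% confidence level (alpha = 0.05)."""
    mask = ~(np.isnan(sut) | np.isnan(ref))           # keep only complete pairs
    t_stat, p_value = stats.ttest_rel(sut[mask], ref[mask])
    return bool(p_value < alpha)
```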

32 Results – Difference from Reference Mean (°C)
• Values represent the absolute value of the differences between the means of identical sensors in identical configurations
• A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (all sensors highlighted in red)
[Table: Difference Between Identical Sensors Under Test (°C) for the three categories: ≤ -5 °C, > -5 °C and ≤ 5 °C, > 5 °C]

