Variability of Temperature Measurement in the Canadian Surface Weather and Reference Climate Networks
Gary Beaney, Tomasz Stapf, Brian Sheppard, Meteorological Service of Canada




Background – Regional Procurement
- When automation of weather stations began in Canada in the late 1980s, there was no specifically designated "climate" network; there was a network of "primary" stations recording various meteorological parameters
- Sensors for this network were procured by five distinct Environment Canada Regions: Pacific & Yukon; Prairie and Northern; Ontario; Quebec; Atlantic
- This resulted in a wide variety of instruments throughout the country all measuring the same parameter
- Many of these "primary" stations today are part of Environment Canada's Surface Weather and Climate Networks


Background – Sensors in Use
- A national survey was undertaken to catalogue the various sensors being used to measure temperature in what are now considered Canada's Surface Weather and Climate Networks
- Seven different sensors were found to be the predominant source of temperature data
- In addition to sensor type, differences were reported with respect to shield type and shield aspiration

Background – Sensors in Use
- Eleven predominant sensor types/configurations were found to be in use in the Canadian Surface Weather and Climate Networks:
1) CSI 44002A, Wooden Screen (WS), Non-Aspirated (NA)
2) CSI 44212, Wooden Screen (WS), Aspirated (A)
3) CSI HMP35C, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
4) CSI HMP45C, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
5) CSI HMP45C212, Wooden Screen (WS), Aspirated (A)
6) CSI HMP45C212, Wooden Screen (WS), Non-Aspirated (NA)
7) CSI HMP45C212, Gill Screen (G), Aspirated (A)
8) CSI HMP45C212, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
9) CSI HMP45CF, Wooden Screen (WS), Non-Aspirated (NA)
10) CSI HMP45CF, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
11) CSI PRT1000, Wooden Screen (WS), Non-Aspirated (NA)

Purpose
Attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks:
- Is a sensor's reading of temperature close to the truth?
- Is a sensor model consistent in its ability to measure temperature from one identical sensor to another?
- Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?
Operational Comparability = sqrt( (1/N) * sum_i (X_ai - X_bi)^2 ), where
X_ai = ith measurement made by one system
X_bi = ith simultaneous measurement made by another system
N = number of samples used

Data - Establishing a "true" Temperature Reference
- Three reference temperature sensors (YSI SP20048) installed in a triangle formation in Aspirated Stevenson Screens
- Each sensor was calibrated and associated corrections were applied
- The average of the three was taken to represent the "true" temperature in the middle of the triangle
- Only instances in which all three reference temperature sensors agreed to within 0.5 °C were used in the analysis
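The screening rule above can be sketched in a few lines of Python; the function name is illustrative, and only the three-sensor average and the 0.5 °C agreement tolerance come from the slide:

```python
# Sketch of the reference-temperature logic: average three co-located
# sensors, but keep a minute only if all three agree to within 0.5 °C.

def reference_temperature(t1, t2, t3, tolerance=0.5):
    """Return the mean of three readings, or None if they disagree."""
    readings = (t1, t2, t3)
    if max(readings) - min(readings) > tolerance:
        return None  # sensors disagree; exclude this minute from the analysis
    return sum(readings) / 3.0

print(reference_temperature(10.1, 10.2, 10.3))  # spread 0.2 °C: returns the mean
print(reference_temperature(10.0, 10.2, 10.8))  # spread 0.8 °C: returns None
```

Returning None (rather than a best guess) matches the study's choice to drop disagreeing minutes entirely instead of patching them.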

Purpose – Functional Precision
- Is a sensor model consistent in its ability to measure temperature from one identical sensor to another?
Functional Precision = sqrt( (1/N) * sum_i (X_ai - X_bi)^2 ), where
X_ai = ith measurement made by one system
X_bi = ith simultaneous measurement made by an identical system
N = number of samples used
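Following the variable definitions above, Operational Comparability and Functional Precision reduce to the same computation, a root-mean-square of simultaneous paired differences, applied to different pairs of series (sensor vs. reference, and identical unit A vs. unit B). A minimal sketch, with illustrative names and data:

```python
import math

def rms_difference(xa, xb):
    """sqrt( (1/N) * sum_i (xa_i - xb_i)^2 ) over paired simultaneous samples."""
    if len(xa) != len(xb) or not xa:
        raise ValueError("need two equal-length, non-empty series")
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(xa, xb)) / len(xa))

# Operational Comparability: sensor under test vs. the reference series.
sensor = [10.1, 9.8, 10.3, 10.0]
reference = [10.0, 10.0, 10.0, 10.0]
print(round(rms_difference(sensor, reference), 3))  # 0.187

# Functional Precision: the same function, fed two identical sensors instead.
```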

Data - Sensors Under Test
1) CSI 44002A, Wooden Screen (WS), Non-Aspirated (NA)
2) CSI 44212, Wooden Screen (WS), Aspirated (A)
3) CSI HMP35C, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
4) CSI HMP45C, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
5) CSI HMP45C212, Wooden Screen (WS), Aspirated (A)
6) CSI HMP45C212, Wooden Screen (WS), Non-Aspirated (NA)
7) CSI HMP45C212, Gill Screen (G), Aspirated (A)
8) CSI HMP45C212, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
9) CSI HMP45CF, Wooden Screen (WS), Non-Aspirated (NA)
10) CSI HMP45CF, Gill 12-Plate Screen (G12), Non-Aspirated (NA)
11) CSI PRT1000, Wooden Screen (WS), Non-Aspirated (NA)

Data - Sensors Under Test
Two identical units (A and B) of each of the eleven configurations were tested, for 22 sensors under test in total:
1) CSI 44002A, Wooden Screen (WS), Non-Aspirated (NA), units A and B
2) CSI 44212, Wooden Screen (WS), Aspirated (A), units A and B
3) CSI HMP35C, Gill 12-Plate Screen (G12), Non-Aspirated (NA), units A and B
4) CSI HMP45C, Gill 12-Plate Screen (G12), Non-Aspirated (NA), units A and B
5) CSI HMP45C212, Wooden Screen (WS), Aspirated (A), units A and B
6) CSI HMP45C212, Wooden Screen (WS), Non-Aspirated (NA), units A and B
7) CSI HMP45C212, Gill Screen (G), Aspirated (A), units A and B
8) CSI HMP45C212, Gill 12-Plate Screen (G12), Non-Aspirated (NA), units A and B
9) CSI HMP45CF, Wooden Screen (WS), Non-Aspirated (NA), units A and B
10) CSI HMP45CF, Gill 12-Plate Screen (G12), Non-Aspirated (NA), units A and B
11) CSI PRT1000, Wooden Screen (WS), Non-Aspirated (NA), units A and B

Data - Temperature Categories
- Test data was divided into three categories based on reference temperature:
- Reference temperature ≤ -5 °C
- Reference temperature > -5 °C and ≤ 5 °C
- Reference temperature > 5 °C
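The three-way split by reference temperature can be expressed as a small helper (hypothetical function name, ASCII category labels):

```python
# Bin a reference temperature into the three analysis categories used
# in the study: <= -5 °C, > -5 °C and <= 5 °C, and > 5 °C.

def temperature_category(ref_temp):
    if ref_temp <= -5.0:
        return "<= -5 C"
    elif ref_temp <= 5.0:
        return "> -5 C and <= 5 C"
    return "> 5 C"

print(temperature_category(-12.4))  # <= -5 C
print(temperature_category(0.3))    # > -5 C and <= 5 C
print(temperature_category(21.0))   # > 5 C
```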

Data - Test Site
- Instruments installed at Environment Canada's Centre for Atmospheric Research Experiments
- Located approximately 70 km NW of Toronto, Ontario

Data - Sensors Under Test
- The experiment was run from December 2002 to June
- One-minute ("minutely") data was collected from all three reference sensors and all 22 sensors under test
- To maintain a consistent dataset for analysis, if any sensor under test was missing a minutely value, the values for that minute were removed for all other sensors under test
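The completeness rule above (drop the whole minute if any sensor under test is missing) can be sketched as follows; the data structure, a dict of sensor name to reading per minute with None marking a missing value, is illustrative, not from the study:

```python
# Keep only minutes in which every sensor under test reported a value,
# so that all per-sensor statistics are computed over the same samples.

def consistent_minutes(minutes):
    return [m for m in minutes if all(v is not None for v in m.values())]

data = [
    {"HMP45C_A": 10.1, "HMP45C_B": 10.2},   # kept: complete minute
    {"HMP45C_A": None, "HMP45C_B": 10.0},   # dropped: one sensor missing
]
print(len(consistent_minutes(data)))  # 1
```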

Results

Results – Operational Comparability Scores (°C)

          ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
Best       0.03            0.03             0.07
Worst      0.23            0.15             0.29
Avg.       0.14            0.11             0.15
Range      0.20            0.12             0.22

Results – Percentage Frequency of Differences from Reference
[Histograms, not reproduced in the transcript: percentage frequency of differences from the reference, per temperature category (≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C), for the sensors with the best and the worst Operational Comparability scores. Annotated values: 0.05 % and 0.02 % for the best-scoring sensors; 15.79 %, 7.38 % and 12.43 % for the worst-scoring sensors.]

Results – Differences from Reference
Sensors with the highest and lowest Operational Comparability scores
- Time series represent hourly differences from the reference temperature over the period of test
- Difference between means: 0.34 °C (≤ -5 °C), 0.21 °C (> -5 °C and ≤ 5 °C), 0.37 °C (> 5 °C)

Results – Functional Precision Scores (°C)

          ≤ -5 °C    > -5 °C and ≤ 5 °C    > 5 °C
Best       0.04            0.03             0.06
Worst      0.16            0.12             0.19
Avg.       0.07            0.06             0.10
Range      0.12            0.09             0.13

Results – Difference from Reference
Sensors with the highest and lowest Functional Precision scores (per temperature category: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C)
- Differences between means, as annotated on the original time-series panels: 0.05 °C, [value missing], 0.15 °C, 0.3 °C

Conclusions
- Wide range of Operational Comparability scores observed
- Highest: 0.23 °C; Lowest: 0.03 °C
- In the worst case, over 15% of minutely differences from the reference exceeded 0.5 °C
- Wide range of Functional Precision scores observed
- Highest: 0.19 °C; Lowest: 0.03 °C
- PRT 1000 WS NA A: best Operational Comparability score in the ≤ -5 °C category
- HMP45C212 G A A: best Operational Comparability score in the > -5 °C and ≤ 5 °C category
- 44002A WS NA A: best Operational Comparability score in the > 5 °C category
- Purpose of study: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks
- Closeness to the "truth"
- Consistency from one identical sensor to another
- Temperature dependence

Final Note – Future Instrument Procurement
- To avoid such variability in the future, a single temperature sensor model will be procured by a central body and used at all stations throughout Canada
- It has been proposed that the analysis methodology used in this study be used to select the best instruments for future procurements
- Analysis will be undertaken at three different test sites representing significantly different climatologies
- This should result in a more uniform measurement of temperature and other parameters across Canada

Questions?

Worst Operational Comparability Score / Best Functional Precision Score

Results – Difference from Reference Mean (°C)
Difference Between Sensor Under Test and Reference (°C), per temperature category (≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C)
- Values represent differences between the means of each sensor under test and the reference (SUT - Reference)
- A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (all sensors highlighted in red)
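A paired test of this kind can be sketched with the standard library alone. This is an illustrative one-sample t statistic on the minute-by-minute differences (sensor minus reference), using the large-sample two-sided critical value of 1.96 at the 95 % level; the transcript does not specify the exact test variant the study used:

```python
import math

def paired_t_statistic(diffs):
    """t = mean(d) / ( s_d / sqrt(n) ) for paired differences d."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def significant_at_95(diffs):
    # Large-N normal approximation of the two-sided 95 % critical value.
    return abs(paired_t_statistic(diffs)) > 1.96

print(significant_at_95([0.1, 0.2, 0.15, 0.05, 0.1, 0.12]))  # True
print(significant_at_95([-0.1, 0.1, -0.05, 0.05]))           # False
```

With the study's very large minute-level samples the normal approximation is reasonable; for small n the exact Student-t critical value would be needed instead.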

Results – Difference from Reference Mean (°C)
Difference Between Identical Sensors Under Test (°C), per temperature category (≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C)
- Values represent the absolute value of differences between the means of identical sensors in identical configurations
- A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (all sensors highlighted in red)