
Original Graph Set This is the first set of studies that we made after re-rating the stations using the updated Leroy (2010) system. We compare compliant (Class 1&2) stations with non-compliant (Class 3, 4, and 5) stations, using the different classes as baselines for comparison. We also ran the data without baselines, both for purposes of comparison and to ensure that the results were not a methodological artifact.

Part 1 Baseline Comparisons How do stations compare with nearby stations of a different class? This uses a different approach from our (more recent) “nine regions” method, but we wanted to be certain that, however we addressed the problem, the basic results would be the same.

Methodology We cut the US into 26 grid boxes. We then compare well sited and poorly sited stations within each grid box, using each Class as a separate touchstone. This gives us four baselines of comparison: Class 1&2, Class 3, Class 4, and Class 5. Note: We combine Classes 1 and 2 and treat them as a single class, both to ensure a robust sample and because, according to Leroy (2010), neither Class 1 nor Class 2 stations are temperature-biased; they are therefore equivalent for our purposes.

Grid Boxes 1.) Comparisons are made within each box to establish a baseline. 2.) All boxes are averaged to produce the gridded results.
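The two-step comparison described above can be sketched roughly as follows. The station records, grid-box count, and trend values here are illustrative assumptions, not the study data.

```python
from statistics import mean

# Rough sketch (with made-up numbers) of the gridded baseline comparison:
# within each grid box, compare the mean trend of compliant (Class 1&2)
# stations against the chosen baseline class, then average across boxes.
# Records: (grid_box, siting_class, trend in degC/decade) -- all illustrative.
stations = [
    (1, "1&2", 0.20), (1, "4", 0.31),
    (2, "1&2", 0.18), (2, "4", 0.27),
]

def gridded_difference(stations, baseline_class="4"):
    """Mean over grid boxes of the (compliant minus baseline) trend difference."""
    diffs = []
    for box in {g for g, _, _ in stations}:
        in_box = [(c, t) for g, c, t in stations if g == box]
        compliant = [t for c, t in in_box if c == "1&2"]
        baseline = [t for c, t in in_box if c == baseline_class]
        if compliant and baseline:  # skip boxes lacking either group
            diffs.append(mean(compliant) - mean(baseline))
    return mean(diffs)

print(round(gridded_difference(stations), 3))  # negative: compliant trend is lower
```

Averaging the within-box differences, rather than pooling all stations nationally, is what keeps an uneven geographic distribution of classes from contaminating the comparison.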

Class 4 stations are the most numerous (36%), so this comparison is the most robust. Compliant (Class 1&2) stations show a trend 0.095°C/decade lower than non-compliant (Class 3, 4, and 5) stations.

Class 3 stations are the second most numerous (33%). Compliant (Class 1&2) stations show a trend 0.102°C/decade lower than non-compliant (Class 3, 4, and 5) stations.

Class 1&2 stations comprise only 20% of the total number. Compliant (Class 1&2) stations show a trend 0.082°C/decade lower than non-compliant (Class 3, 4, and 5) stations.

Class 5 stations comprise only 12% of the total number, so results cannot be considered robust. Yet the same pattern emerges: compliant (Class 1&2) stations show a trend 0.076°C/decade lower than non-compliant (Class 3, 4, and 5) stations.
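For reference, a per-station trend in °C/decade of the kind quoted in these slides is conventionally obtained from an ordinary least-squares fit of temperature against time. This sketch uses a synthetic series with an assumed exact 0.02°C/yr warming, not USHCN data.

```python
from statistics import mean

# Sketch: estimate a trend in degC/decade as the ordinary least-squares
# slope of annual mean temperature against year, multiplied by 10.
def ols_slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic 30-year record warming at exactly 0.02 degC/yr (an assumption).
years = list(range(1979, 2009))
temps = [10.0 + 0.02 * (y - 1979) for y in years]

trend_per_decade = ols_slope(years, temps) * 10.0
print(round(trend_per_decade, 3))  # 0.2
```

Differences such as the 0.095°C/decade figure above are then differences between such fitted slopes for the two groups of stations.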

Part 2 Equipment This is a look at how the different equipment affects the data. CRS: Cotton Region Shelters (a/k/a “Stevenson screens”); MMTS: Maximum-Minimum Temperature Sensors; ASOS: Automated Surface Observing Systems.

This compares the different equipment. Note that the modern MMTS shows a significantly lower trend than the obsolete CRS and the notoriously unreliable ASOS. Yet rather than adjusting CRS and ASOS trends downward to match MMTS, MMTS trends are adjusted upward to conform to the older, less reliable equipment.

CRS equipment shows a higher overall trend than MMTS and somewhat less difference between compliant and non-compliant stations (0.064). Part of this is due to poor distribution of stations and is addressed by gridding (see next slide).

After gridding and baselining to Class 4, CRS equipment shows a difference between compliant and non-compliant stations of 0.073.

Modern MMTS equipment shows a much larger difference between compliant and non-compliant stations (0.173). (ASOS comparisons cannot be made, as there are too few ASOS stations for a robust internal comparison. They tend to be better sited, located almost exclusively at airports, yet their trends are higher owing to an equipment failure issue (the HO-83) and other factors pertaining to the unique situations at airports.)

After gridding and baselining to Class 4, MMTS equipment shows a slightly smaller, yet still very large difference between compliant and non-compliant stations (0.164).

Part 3 Urban vs. Rural This section confirms that urbanization increases not only the readings but also the trends. In addition, urbanization is found to dampen, though not eliminate, the differences between compliant (Class 1&2) and non-compliant (Class 3, 4, and 5) stations; rural stations show the greatest disparity. This is significant because 10% of the rated sites are urban and 25% are semi-urban, a far greater proportion than their share of the actual CONUS surface area. To that extent, therefore, the trends are exaggerated.

We now turn our attention to urban vs. rural trends. Urban trends are much higher than rural trends (by 0.099), with semi-urban trends squarely in between, at 0.056 higher than rural.

The difference between compliant and non-compliant rural stations is much greater (0.095) than for urban stations.

The difference between compliant and non-compliant semi-urban stations is also much greater (0.114) than for urban stations.

The difference between compliant and non-compliant urban stations is much smaller (0.037), as urban waste heat overwhelms nominally compliant and non-compliant stations alike. Class 4 stations in urban areas show the same tendencies as Class 5 stations in rural areas.

This chart demonstrates the large effect of urban areas on (otherwise) compliant stations. It also tells us how NOAA deals with this by way of adjustment: namely, non-urban trends appear to be adjusted upward to match urban trends, rather than urban trends being adjusted downward to match rural trends.

Non-compliant (Class 3,4,5) stations show somewhat less urban-rural difference than the compliant (Class 1,2) stations.

Part 4 Gridded, but with no Baseline We now examine the data without any baseline.

Without a baseline, the data show a trend 0.077 cooler for compliant (Class 1&2) stations than for non-compliant (Class 3, 4, and 5) stations. This is consistent with our overall findings.

And this shows how NOAA adjusts for the differences: Not by adjusting the non-compliant stations downward to match the compliant stations, but by adjusting the compliant stations upward to match those stations that are out of compliance.

Finally, we showcase the best equipment, with urban and semi-urban stations excluded (a handful of rural airports are included, however). These data are not gridded or baselined, but are a simple national average. The warming effects of poor siting are obvious, as are the effects of the NOAA adjustment procedure: compliant trends are fully 0.190 higher after NOAA adjustment.

Baseline Comparisons It is, of course, important to provide a simple, ungridded nationwide average of all well sited and poorly sited stations, and indeed we provide those figures. But a nationwide average can be skewed by poor station distribution, that is, if well (or poorly) sited stations are concentrated in some areas but not in others. Furthermore, it is not very revealing to compare a well sited station in Northern Virginia with a poorly sited station in Arizona. One would want to compare well sited (Class 1&2) stations with nearby poorly sited (Class 3, 4, and 5) stations, and vice versa. Therefore, gridding and baselining are desirable.
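As a toy illustration of this distribution problem (all numbers are assumptions, not study data): if the well sited stations happen to cluster in a slow-warming region, a naive nationwide average inflates the compliant/non-compliant gap with a regional climate difference, while a within-region comparison isolates the siting effect alone.

```python
from statistics import mean

# Toy numbers (assumed): region A warms faster than region B, and the
# well sited stations happen to sit only in region B.
well_sited_B   = [0.10, 0.11]   # trends of well sited stations, region B
poorly_sited_A = [0.30, 0.31]   # poorly sited stations, fast-warming region A
poorly_sited_B = [0.12]         # poorly sited station, region B

# Naive nationwide comparison mixes the regional difference into the gap:
naive_gap = mean(poorly_sited_A + poorly_sited_B) - mean(well_sited_B)

# A within-region comparison (the point of gridding) isolates siting alone:
within_B_gap = mean(poorly_sited_B) - mean(well_sited_B)

print(round(naive_gap, 3), round(within_B_gap, 3))
```

In this contrived case the naive gap is nearly ten times the within-region gap, which is why the slides report gridded, baselined comparisons alongside the simple national average.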