Dec. 13, 2003: Quality Control of Weather Radar Data. National Severe Storms Laboratory & University of Oklahoma.


Quality Control of Weather Radar Data
National Severe Storms Laboratory & University of Oklahoma, Norman OK, USA

Weather Radar
- Weather forecasting relies on observations from remote sensors: models are initialized using observations, and severe weather warnings rely on real-time observations.
- Weather radars provide the highest resolution:
  - In time: a complete 3D scan every 5-15 minutes.
  - In space: 1-degree x 1-km tilts.
  - Vertically: elevation angles spaced 0.5 to 2 degrees apart.

NEXRAD / WSR-88D
- Weather radars in the United States are 10-cm Doppler radars.
- They measure both reflectivity and velocity; spectrum width information is also provided.
- Very little attenuation with range: they can "see" through thunderstorms.
- Horizontal resolution: 0.95 degrees (365 radials); 1 km for reflectivity, 0.25 km for velocity.
- Horizontal range: 460 km for the surveillance (reflectivity-only) scan; 230 km for scans at higher tilts, and for velocity at the lowest tilt.

NEXRAD Volume Coverage Pattern
- The radar sweeps a tilt, then moves up and sweeps another tilt.
- It typically collects all the moments at once, except at the lowest scan.
- The 3-dB beam width is about 1 degree.

Beam Path
- The path of the radar beam is slightly refracted relative to the earth's curvature; under a standard atmosphere this is modeled with a 4/3 effective earth radius.
- Anomalous propagation: the beam is heavily refracted under non-standard atmospheric conditions and senses the ground, producing ground clutter.
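The 4/3 effective earth radius model for the beam path can be sketched as follows (a minimal illustration, not the operational code; the function name and antenna-height argument are mine):

```python
import math

EARTH_RADIUS_KM = 6371.0
EFFECTIVE_RADIUS_KM = (4.0 / 3.0) * EARTH_RADIUS_KM  # 4/3 model for standard refraction

def beam_height_km(range_km, elev_deg, antenna_height_km=0.0):
    """Height of the beam center above the radar site, assuming standard
    refraction (4/3 effective earth radius)."""
    theta = math.radians(elev_deg)
    return (math.sqrt(range_km ** 2 + EFFECTIVE_RADIUS_KM ** 2
                      + 2.0 * range_km * EFFECTIVE_RADIUS_KM * math.sin(theta))
            - EFFECTIVE_RADIUS_KM + antenna_height_km)
```

For example, at the 230-km velocity range limit, a 0.5-degree beam center is already roughly 5 km above the radar, which is why shallow, low-level echoes far from the radar are missed.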

Anomalous Propagation
- Buildings near the radar return reflectivity values typical of hail.
- Automated algorithms are severely affected.

AP + Biological
- North of the radar is some ground clutter.
- The light green echo probably corresponds to migrating birds.
- The sky is actually clear.

AP + Precipitation
- AP north of the radar.
- A line of thunderstorms to the east of the radar.
- Some clear-air return around the radar.

Small Cells Embedded in Rain
- The strong echoes here are really precipitation.
- Notice the smooth green area.

Not Rain
- This green area, however, is not rain; it is probably biological.

Clear-Air Return
- Clear-air return near the radar: mostly insects and debris after the thunderstorm passed through.

Chaff
- The high-reflectivity lines are not storms: they are metallic strips released by the military.

Terrain
- The high-reflectivity region is actually due to ice on the mountains; the beam has been refracted downward.

Radar Data Quality
- Radar data is high resolution and very useful, but it is subject to many contaminants.
- Human users can usually tell good data from bad; automated algorithms find it difficult to do so.

Motivation: Why Improve Radar Data Quality?
- McGrath et al. (2002) showed that the mesocyclone detection algorithm (Stumpf et al., Weather and Forecasting, 1999) produces the majority of its false detections in clear air.
- The presence of AP degrades the performance of a storm identification and motion estimation algorithm (Lakshmanan et al., J. Atmos. Research, 2003).

Quality Control of Radar Data
- An extensively studied problem.
- Simplistic approaches:
  - Thresholding the data (low = bad): but high values are bad in AP, terrain, and chaff, while low values are good in mesocyclones, hurricane eyes, etc.
  - Vertical tilt tests: work for AP, but fail farther from the radar and in shallow precipitation.

Image Processing Techniques
- Typically based on median filtering of the reflectivity data: removes clear-air return, but fails for AP and for spatially smooth clear-air return, and smoothes the data.
- Insufficiently tested techniques: fractal techniques, neural network approaches.
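The median-filtering approach can be sketched as follows (an illustrative pure-Python version; operational filters differ in window size and edge handling):

```python
def median_filter(field, size=3):
    """Median filter over a 2D list-of-lists (azimuth x range) reflectivity
    field. Edge gates keep their original values."""
    half = size // 2
    nrow, ncol = len(field), len(field[0])
    out = [row[:] for row in field]
    for i in range(half, nrow - half):
        for j in range(half, ncol - half):
            window = [field[i + di][j + dj]
                      for di in range(-half, half + 1)
                      for dj in range(-half, half + 1)]
            window.sort()
            out[i][j] = window[len(window) // 2]  # median of the window
    return out
```

An isolated speckle is removed, but a spatially smooth patch of clear-air return survives, which is exactly the failure mode noted above.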

Steiner and Smith (Journal of Applied Meteorology, 2002)
- A simple rule base that introduced more sophisticated measures:
  - Echo top: the highest tilt that has at least 5 dBZ. Works mostly, but fails in heavy AP and shallow precipitation.
  - Inflections: a measure of variability within a local neighborhood of a pixel; a texture measure suited to scalar data.
- Their hard thresholds are not reliable.
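One way to read the "inflections" measure is as a count of sign changes in the gate-to-gate differences along a radial; the sketch below is my interpretation, not the paper's exact definition:

```python
def radial_inflections(radial):
    """Count sign changes in the gate-to-gate reflectivity difference along
    one radial: a rough texture measure (high for noisy AP, low for smooth
    precipitation)."""
    diffs = [b - a for a, b in zip(radial, radial[1:])]
    signs = [1 if d > 0 else -1 for d in diffs if d != 0]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)
```

A jagged radial scores high while a monotonically varying one scores zero, which is the behavior the texture measure relies on.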

Radar Echo Classifier
- Operationally implemented on US radar product generators.
- A fuzzy-logic technique (Kessinger, AMS 2002) that uses all three moments of the radar data.
- Insight: targets that are not moving have zero velocity and low spectrum width. High reflectivity values are usually good; high values that are not moving are probably AP.
- Also makes use of the Steiner-Smith measures, but not the vertical (echo-top) features, to retain tilt-by-tilt ability.
- Good for human users, but not for automated use.

Radar Echo Classifier (cont.)
- Finds the good data and the AP, but cannot be used to reliably discriminate between the two on a pixel-by-pixel basis.

Quality Control Neural Network
- Compute texture features on the three moments.
- Compute vertical features on the latest ("virtual") volume: this way, tilts can be cleaned up as they arrive while still utilizing vertical features.
- Train a neural network off-line on these features to classify pixels as precipitation or non-precipitation at every scan of the radar.
- Use the classification results to clean up the data field in real time.

The Set of Input Features
- Computed in a 5x5 polar neighborhood around each pixel.
- For velocity and spectrum width: mean, variance (Kessinger), and value minus mean.

Reflectivity Features
- On the lowest two tilts of reflectivity:
  - Mean, variance, value minus mean.
  - Squared difference of pixel values (Kessinger).
  - Homogeneity.
  - Radial inflections (Steiner-Smith).
  - Echo size, found through region growing.
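The basic neighborhood statistics shared by the velocity, spectrum-width, and reflectivity features (mean, variance, value minus mean) can be sketched as follows (illustrative only; boundary handling and missing-data masking are omitted):

```python
def local_features(field, i, j, half=2):
    """Mean, variance, and value-minus-mean in the (2*half+1)-square polar
    neighborhood (5x5 by default) around pixel (i, j)."""
    vals = [field[i + di][j + dj]
            for di in range(-half, half + 1)
            for dj in range(-half, half + 1)]
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    return mean, var, field[i][j] - mean
```

The variance and value-minus-mean terms are the texture cues: smooth precipitation has low variance, while AP and clutter are noisy.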

Vertical Features
- Vertical profile of reflectivity:
  - Maximum value across tilts.
  - Weighted average, with the tilt angle as the weight.
  - Difference between the data values at the two lowest scans (Fulton).
  - Echo-top height at a 5-dBZ threshold (Steiner-Smith).
- Compute these on a "virtual volume."
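A sketch of the vertical-profile features (column maximum, tilt-angle-weighted average, and echo top at a 5-dBZ threshold); the operational definitions, especially of echo-top height, may differ:

```python
def vertical_features(profile, tilt_angles, threshold=5.0):
    """Features from the reflectivity values at one (azimuth, range) location
    across the tilts of a virtual volume. Returns (column maximum,
    tilt-angle-weighted average, highest tilt angle with at least
    `threshold` dBZ, or None if the threshold is never reached)."""
    col_max = max(profile)
    weighted = (sum(z * a for z, a in zip(profile, tilt_angles))
                / sum(tilt_angles))
    echo_top = max((a for z, a in zip(profile, tilt_angles) if z >= threshold),
                   default=None)
    return col_max, weighted, echo_top
```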

Training the Network: How Many Patterns?
- Cornelius et al. (1995) used a neural network to do radar quality control.
- They used fewer than 500 user-selected pixels to train the network: this does not capture the diversity of the data and yields a skewed distribution.
- The resulting classifier was not useful and was discarded in favor of the fuzzy-logic Radar Echo Classifier.

Diversity of Data?
- Need data cases that cover:
  - Shallow precipitation.
  - Ice in the atmosphere.
  - AP and ground clutter (high data values that are bad).
  - Clear-air return.
  - Mesocyclones (low data values that are good).

Distribution of Data
- Not a climatological distribution: most days there is no weather, so low (non-precipitating) reflectivities predominate, but we need good performance in weather situations.
- Need to avoid bias in selecting pixels: choose all pixels in a storm echo, for example, not just the storm core.
- Neural networks perform best when trained with equally likely classes: at any value of reflectivity, both classes should be equally likely. Need to find data cases that meet this criterion.
- This is another reason why previous neural network attempts failed.
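The "equally likely at any reflectivity" criterion amounts to balancing the two classes within each reflectivity bin; a sketch (the function and parameter names are mine, not from the slides):

```python
import random

def balance_by_bin(samples, bin_width=5.0, seed=0):
    """Subsample (reflectivity, label) pairs so that within each reflectivity
    bin the two classes (1 = precipitation, 0 = non-precipitation) are
    equally represented."""
    rng = random.Random(seed)
    bins = {}
    for z, label in samples:
        bins.setdefault((int(z // bin_width), label), []).append((z, label))
    balanced = []
    for b in {key[0] for key in bins}:
        good = bins.get((b, 1), [])
        bad = bins.get((b, 0), [])
        n = min(len(good), len(bad))  # keep equal counts per bin
        balanced += rng.sample(good, n) + rng.sample(bad, n)
    return balanced
```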

Distribution of Training Data by Reflectivity Values

Training the Network
- Human experts classified the training data by marking bad echoes; they had access to the time sequence and knowledge of the event.
- The training data consisted of 8 different volume scans that captured the diversity of the data: 1 million patterns.

The Neural Network
- A feed-forward neural network.
- Trained using resilient propagation with weight decay.
- The error measure was a modified cross-entropy, modified to weight different patterns differently.
- A separate validation set of 3 volume scans was used to choose the number of hidden nodes and to stop the training.
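The pattern-weighted cross-entropy can be sketched as follows (an assumed form; the slides do not give the exact expression):

```python
import math

def weighted_cross_entropy(targets, outputs, weights):
    """Cross-entropy where each pattern's contribution is scaled by an
    emphasis weight; normalized by the total weight."""
    eps = 1e-12
    total = 0.0
    for t, y, w in zip(targets, outputs, weights):
        y = min(max(y, eps), 1.0 - eps)  # clip to avoid log(0)
        total -= w * (t * math.log(y) + (1 - t) * math.log(1 - y))
    return total / sum(weights)
```

Raising a pattern's weight makes its misclassification cost more, which is how the emphasis scheme on the next slide is applied.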

Emphasis
- The patterns are weighted differently because:
  - Not all patterns are equally useful.
  - Given a choice, we would like to make our mistakes on low reflectivities.
  - We do not have enough "contrary" examples.
  - Texture features are inconsistent near the boundaries of storms.
  - Vertical features are unusable at far ranges.
- The weighting does not change the overall distribution to a large extent.

Histograms of Different Features
- The best discriminants: homogeneity, height of the maximum, inflections, and variance of spectrum width.

Generalization
- There is no way to guarantee generalization; some ways we avoided overfitting:
  - Used the validation set (not the training set) to decide the number of hidden nodes and when to stop the training.
  - Weight decay.
  - Limited network complexity.
  - 500,000 patterns.
  - Emphasis on certain patterns.

Untrainable Data Case
- None of the features we have can discriminate this clear-air return from good precipitation.
- We essentially removed the migrating birds from the training set.

Velocity
- We do not always have velocity data: in the US weather radars, reflectivity data is available out to 460 km, while velocity data is available only out to 230 km (but at higher resolution).
- Velocity data can also be range-folded, a function of the Nyquist frequency.
- Hence, two different networks: one with velocity (and spectrum width) data, the other without.

Choosing the Network
- Training the with-velocity and without-velocity networks.
- Shown is the validation error as training progresses, for different numbers of hidden nodes.
- Chose 5 hidden nodes for the with-velocity network (210th epoch) and 4 for the without-velocity network (310th epoch).

Behavior of Training Error
- The training error keeps decreasing, but the validation error starts to increase after a while.
- We assume that the point where this happens is where the network starts to overfit.

Performance Measure
- Use a testing data set that is completely independent of the training and validation data sets.
- Compare against classification by human experts.

Receiver Operating Characteristic
- A perfect classifier would be flush top and flush left.
- If you need to retain 90% of the good data, you will have to live with 20% of the bad data when using the QCNN; the existing NWS technique forces you to live with 55% of the bad data.
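An ROC curve like the one described is built by sweeping a threshold over the classifier's output and recording the fraction of good data retained against the fraction of bad data kept; a minimal sketch:

```python
def roc_points(scores, labels):
    """(false-positive rate, true-positive rate) pairs from sweeping a
    threshold over classifier scores. labels: 1 = good data, 0 = bad data."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thresh in sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= thresh and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= thresh and l == 0)
        points.append((fp / neg, tp / pos))
    return points
```

Reading off the point nearest (0.2, 0.9) on such a curve gives the "retain 90% of good data, keep 20% of bad data" trade-off quoted above.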

Performance (AP test case)

Performance (strong convection)

Test case (ground clutter)

Test case (small cells)

Summary
- A radar-only quality control algorithm:
  - Uses texture features derived from the 3 radar moments.
  - Removes bad-data pixels corresponding to AP, ground clutter, and clear-air returns.
  - Does not reliably remove biological targets such as migrating birds.
  - Works in all sorts of precipitation regimes.
  - Does not remove good data, except toward the edges of storms.

Multi-sensor Aspect
- There are other sensors observing the same weather phenomena: if there are no clouds in the satellite imagery, it is likely that there is no precipitation either.
- However, the visible channel of the satellite cannot be used at night.

Surface Temperature
- Use the infrared channel of weather satellite images; a radiance-to-temperature relationship exists.
- If the ground is being sensed, the temperature will be the ground temperature.
- If the satellite "cloud-top" temperature is less than the surface temperature, cloud cover exists.
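The cloud-cover test described above reduces to a per-pixel temperature difference; a sketch using the conservative 5 K threshold from the slides (the function name is mine):

```python
def cloud_cover_mask(cloud_top_temp, surface_temp, threshold_k=5.0):
    """Flag cloud cover wherever the satellite cloud-top temperature is more
    than `threshold_k` kelvin below the surface temperature. Both inputs are
    2D lists of temperatures in kelvin on the same grid."""
    return [[(ts - tc) > threshold_k
             for tc, ts in zip(row_cloud, row_surface)]
            for row_cloud, row_surface in zip(cloud_top_temp, surface_temp)]
```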

Spatial and Temporal Considerations
- Spatial and temporal resolution:
  - Radar tilts arrive every 20-30 s, at high spatial resolution (1 km x 1 degree).
  - Satellite data arrives every 30 min, at 4-km resolution.
  - Surface temperature is 2 hours old, at 20-km resolution.
- Fast-moving storms and small cells can pose problems.

Spatial
- For reasonably sized complexes, both the satellite infrared temperature and the surface temperature are smooth fields, so bilinear interpolation is effective.
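Bilinear interpolation of a coarse temperature grid at fractional coordinates can be sketched as:

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a 2D grid at fractional coordinates
    (x along rows, y along columns); x and y must stay within the grid."""
    i, j = int(x), int(y)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * grid[i][j]
            + fx * (1 - fy) * grid[i + 1][j]
            + (1 - fx) * fy * grid[i][j + 1]
            + fx * fy * grid[i + 1][j + 1])
```

Because both temperature fields are smooth at the scales of interest, this simple scheme is enough to bring the 4-km and 20-km grids onto the radar grid.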

Temporal
- Estimate motion: use the high-resolution radar to estimate motion.
- Advect the cloud-top temperature based on the movement estimated from radar; advection has high skill under 30 min.
- Assume the surface temperature does not change: a 1-2 hr model forecast has no skill above a persistence forecast.
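Advecting the cloud-top temperature field by a radar-derived motion vector can be sketched as a whole-pixel shift (real advection would handle sub-pixel motion and spatially varying vectors):

```python
def advect(field, u, v):
    """Shift a 2D field by a uniform motion of u rows and v columns;
    vacated pixels are filled with None (no data)."""
    nrow, ncol = len(field), len(field[0])
    out = [[None] * ncol for _ in range(nrow)]
    for i in range(nrow):
        for j in range(ncol):
            si, sj = i - u, j - v  # where this pixel's value came from
            if 0 <= si < nrow and 0 <= sj < ncol:
                out[i][j] = field[si][sj]
    return out
```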

Cloud-cover: Step 1
- The satellite infrared temperature field. Blue is colder: typically higher storms.
- A thin line of fast-moving storms, and a large thunderstorm complex.

Cloud-cover: Step 4
- Forecast to move east and decrease in intensity. This forecast is made based on radar data.

Cloud-cover: Step 2
- Combined data from 4 different radars.
- Two "views" of the same phenomenon: the different sensors measure different things and have different constraints.

Cloud-cover: Step 3
- Estimates of motion and growth-and-decay made using K-Means texture segmentation and tracking. Red indicates eastward motion.

Cloud-cover: Step 4
- The forecast is for 43 minutes: the time difference between the satellite image and the radar tilt.

Cloud-cover: Step 5
- Surface temperature: 20 km x 20 km spatial resolution, 2 hours old, interpolated from data from weather stations around the country. The best we have.

Cloud-cover: Step 6
- The difference field. White indicates a temperature difference of more than 20 K; 5 K is a very conservative threshold.

Distribution of Cloud-cover
- Two precipitation cases: May 8, 2003 and July 30, 2003.
- They indicate cloud-cover (temperature-difference) values of more than 15 K at minimum.

Multi-sensor QC: Step 1
- The original data, from July 11, 2003 (KTLX).
- A large amount of contamination: clear air, probably biological.

Multi-sensor QC: Step 2
- The result of applying the radar-only neural network: most of the clear-air contamination is gone.
- Possible precipitation northwest of the radar.

Multi-sensor QC: Step 3
- The cloud-cover field: some cloud cover northwest of the radar, nothing to the south of the radar.
- The 5 K threshold corresponds to the light blues.

Multi-sensor QC: Step 4
- The result of applying the cloud-cover field to the neural network output: small cells are retained, but the biological contamination is removed.

Conclusion
- The radar-only neural network outperforms the currently operational quality-control technique.
- It can be improved even further using data from other sensors; this needs more systematic examination.