
1 Observation Targeting Andy Lawrence Predictability and Diagnostics Section, ECMWF Acknowledgements: Martin Leutbecher, Carla Cardinali, Alexis Doerenbecher, Roberto Buizza ECMWF Predictability Training Course - April 2006

2 Contents
– Introduction
– What is Observation Targeting?
– Targeting Methodology
– Example using a simple model
– Kalman Filter techniques
– Singular Vector techniques
– Summary of research issues
– Operational targeting principles
– Previous targeting campaigns
– Operational structure and schedules
– Results from ATReC 2003
– Verification of forecast impacts
– Future of observation targeting

3 What is observation targeting? Techniques that optimize a flexible component of the observing network on a day-to-day basis, with the aim of achieving specific forecast improvements. Estimated ROUTINE observations = 10^6; estimated TARGETED observations = 10^3.

4 The concept of observation targeting
– Forecast an event.
– Use an adjoint model or transform algorithm to derive data-sensitive areas.
– Add extra observations in those areas.
– Improve the forecast?
– Verify with the corresponding analysis.

5 The concept of observation targeting
Q: If we have the capability to add observations in data-sensitive areas to improve the forecast of a specific event, can these locations be determined using objective (model-based) methods?
A: This is an optimization problem with two constraints:
i. the probability of making an analysis error at a particular location;
ii. the intrinsic ability of the flow at that location to amplify such errors (i.e. sensitivity).

6 Where are observations needed to improve an 18-hour forecast? [Figure: sensitive areas and verification region]

7 Where are observations needed to improve a 42-hour forecast?

8 Where are observations needed to improve a 66-hour forecast?

9 Methodology. The observation targeting question: How do we identify optimal sites for additional observations? OR How do we predict changes in forecast uncertainty due to the assimilation of additional observations?

10 Methodology. Information needed to answer this question:
1. Knowledge of the statistics of initial-condition errors, and how they change due to the assimilation of additional observations.
– Gaussian error statistics
– Kalman filter techniques
For linear perturbation dynamics and Gaussian error statistics, optimal state estimation can be approximated by the Extended Kalman Filter, which can also be used to select optimal sites for additional observations.
2. Knowledge of the perturbation dynamics from the observation time to the forecast verification time.
– NWP studies suggest that perturbation dynamics are approximated by a linear propagator defined by ensemble-based techniques or tangent-linear/adjoint techniques.

11 Targeted observations in the framework of an Extended Kalman Filter (illustration using the Lorenz-95 system; Leutbecher, 2003). The Lorenz-95 model is a chaotic system in which a time unit of 1 represents 5 days:
dx_i/dt = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F,   i = 1, 2, …, 40,
with cyclic boundary conditions x_0 = x_40, x_{-1} = x_39, x_41 = x_1 and forcing F = 8. A perturbation placed at i = 10 propagates ‘eastward’ at a speed of 25 degrees/day.
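As a concrete illustration, here is a minimal sketch, not code from the course, of integrating the L95 system with a fourth-order Runge-Kutta scheme and tracking how a perturbation at i = 10 drifts ‘eastward’; NumPy, the 0.05 time-unit (6-hour) step, and the spin-up length are assumptions made for this example.

```python
# Minimal sketch (not the original course code) of the Lorenz-95 system:
# integrate with a 4th-order Runge-Kutta step and watch a perturbation at i=10
# drift 'eastward'.  Step size and spin-up length are choices made here.
import numpy as np

F = 8.0            # forcing
N = 40             # number of grid points
DT = 0.05          # 0.05 time units = 6 "hours" (1 time unit = 5 days)
STEPS_PER_DAY = 4  # 1 day = 0.2 time units = 4 steps of DT

def l95_tendency(x):
    """dx_i/dt = -x_{i-2} x_{i-1} + x_{i-1} x_{i+1} - x_i + F, cyclic in i."""
    return (-np.roll(x, 2) * np.roll(x, 1)
            + np.roll(x, 1) * np.roll(x, -1)
            - x + F)

def rk4_step(x, dt=DT):
    k1 = l95_tendency(x)
    k2 = l95_tendency(x + 0.5 * dt * k1)
    k3 = l95_tendency(x + 0.5 * dt * k2)
    k4 = l95_tendency(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Spin up a reference ("truth") state, then perturb grid point i=10.
rng = np.random.default_rng(1)
truth = F + 0.01 * rng.standard_normal(N)
for _ in range(1000):
    truth = rk4_step(truth)

perturbed = truth.copy()
perturbed[9] += 0.05                     # i=10 in the slide's 1-based numbering
for day in range(1, 6):
    for _ in range(STEPS_PER_DAY):
        truth, perturbed = rk4_step(truth), rk4_step(perturbed)
    imax = int(np.argmax(np.abs(perturbed - truth))) + 1
    print(f"day {day}: largest |difference| at grid point i = {imax}")
```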

12 Planet L95: routine observing network. Routine observations are constructed by adding noise (representing unbiased, uncorrelated, normally distributed errors) to values taken from a ‘truth’ run, and become available every 6 hours.
– Over land (positions 21-40) we have observations at all locations, with σ_o = 0.05 σ_clim.
– Over the ocean (positions 1-20) we have observations at ‘cloud-free’ locations, with σ_o = 0.15 σ_clim.
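A possible way to mimic this observing network in code, again only a sketch: sigma_clim and the cloud_free_mask argument are placeholders of mine, not values given on the slide.

```python
# Sketch of the routine observing network on "Planet L95": noisy observations of
# a truth state every 6 hours, with larger errors over the ocean.  sigma_clim is
# a stand-in climatological standard deviation, not a value from the slide.
import numpy as np

rng = np.random.default_rng(0)
sigma_clim = 3.6   # assumed climatological spread of the L95 state

def routine_observations(truth, cloud_free_mask):
    """Return one 6-hourly batch of (values, positions, error std devs).

    truth            : state vector of length 40
    cloud_free_mask  : boolean array of length 20 for ocean positions 1-20
    """
    land = np.arange(20, 40)                       # positions 21-40 (0-based)
    ocean = np.arange(0, 20)[cloud_free_mask]      # 'cloud-free' ocean points
    idx = np.concatenate([land, ocean])
    err = np.where(idx >= 20, 0.05 * sigma_clim, 0.15 * sigma_clim)
    values = truth[idx] + err * rng.standard_normal(idx.size)
    return values, idx, err
```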

13 Planet L95: forecast errors. [Figure: 2-day forecast errors for Europe]

14 Planet L95: targeted observation. A single observation over the ocean (i.e. at i = 1, …, 20), with the error characteristics of a land observation, is considered. The aim is to provide a better forecast over ‘Europe’ on Planet L95.

15 Covariance prediction with the Kalman filter (routine + additional observations)
Routine observations only:
– Analysis step at time t_j: (P_r^a)^{-1} = (P_r^f)^{-1} + H_r^T R_r^{-1} H_r
– Forecast step t_j → t_{j+1}: P_r^f = M P_r^a M^T
Routine + additional observation at position i (i = 1, …, 20):
– Analysis step at time t_j: (P_i^a)^{-1} = (P_r^f)^{-1} + H_r^T R_r^{-1} H_r + H_i^T R_i^{-1} H_i
– Forecast step t_j → t_{j+1}: P_i^f = M P_i^a M^T
Optimal position i* (the position giving the maximum reduction of forecast error variance):
i* = argmax_{i=1..20} trace( L_Eu (P_r^f - P_i^f) L_Eu^T )
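The covariance bookkeeping above translates almost line by line into code. The sketch below assumes dense matrices, a point observation operator for each candidate site, and a tangent-linear propagator M from observation time to verification time; it illustrates the formulas and is not the course implementation.

```python
# Hedged sketch of the covariance bookkeeping on this slide: Kalman filter
# analysis and forecast steps, and selection of the ocean point whose extra
# observation most reduces the forecast error variance over 'Europe'.
import numpy as np

def analysis_covariance(Pf, H, R):
    """(P^a)^-1 = (P^f)^-1 + H^T R^-1 H"""
    return np.linalg.inv(np.linalg.inv(Pf) + H.T @ np.linalg.inv(R) @ H)

def forecast_covariance(Pa, M):
    """P^f = M P^a M^T"""
    return M @ Pa @ M.T

def optimal_additional_site(Pf_prior, H_r, R_r, M, L_eu, candidates, sigma_i):
    """Return i* maximising trace(L_Eu (P_r^f - P_i^f) L_Eu^T)."""
    Pf_r = forecast_covariance(analysis_covariance(Pf_prior, H_r, R_r), M)
    base = np.trace(L_eu @ Pf_r @ L_eu.T)

    n = Pf_prior.shape[0]
    best_site, best_reduction = None, -np.inf
    for i in candidates:
        Hi = np.zeros((1, n))
        Hi[0, i] = 1.0                                   # one extra point obs
        Ri = np.array([[sigma_i ** 2]])
        Pa_i = np.linalg.inv(np.linalg.inv(Pf_prior)
                             + H_r.T @ np.linalg.inv(R_r) @ H_r
                             + Hi.T @ np.linalg.inv(Ri) @ Hi)
        Pf_i = forecast_covariance(Pa_i, M)
        reduction = base - np.trace(L_eu @ Pf_i @ L_eu.T)
        if reduction > best_reduction:
            best_site, best_reduction = i, reduction
    return best_site, best_reduction
```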

16 Optimal position for an additional observation. [Figure: distribution of the optimal position for an additional observation in the L95 ‘Atlantic’, to improve the 2-day forecast over Europe]

17 Planet L95: forecast impacts. How to measure the improvement in forecast skill:
– Compare with randomly placed observations.
– Compare with the impact obtained with observations that actually reduce the forecast error the most (never achievable in practice, as it requires information at verification time; the position depends on the realization of the actual errors).
– Compare with the impact of an observation added at a fixed site optimized for the forecast goal (a hard test). No such test exists yet with an NWP system and real observations. How does this last test look in L95?

18 Fixed location v. adaptive day-to-day location

19 Approximations of the Kalman Filter. The method works well for a simple model, BUT an extended Kalman filter is too expensive for a full NWP model: targeting would require running the Kalman filter several times (each configuration of the additional observations considered in the planning process is one among several feasible ones).
Solution: calculate the forecast error variance in a small, relevant subspace.
1. Perturbations of ensemble members about the mean, as a proxy for the data assimilation scheme (ETKF).
2. Singular vector schemes computed with an estimate of the inverse of the routine analysis error covariance matrix as the initial-time metric.

20 Ensemble Transform Kalman Filter (ETKF): reduction of forecast error variance using the ETKF (Majumdar, 2001).
– Signal = forecast error (routine + additional) - forecast error (routine).
– Used in targeting for the Winter Storms Reconnaissance Program (WSRP) (Bishop, Etherton and Majumdar, 2001).
– Assumes linearity: ‘linear combination of ensemble perturbations + ensemble mean = model trajectory’.
– Assumes optimal data assimilation.
– No covariance localisation.
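A schematic of the ETKF variance-reduction ("signal") calculation, written under the idealisations listed above (linearity, optimal assimilation, no localisation); the scaling convention for the ensemble perturbation matrices is an assumption of this sketch, not something stated on the slide.

```python
# Schematic ETKF signal-variance calculation (hedged sketch).
# Xt : ensemble perturbations about the mean at targeting time, scaled by
#      1/sqrt(k-1), shape (state_dim, k)    <- scaling convention assumed here
# Xv : the same perturbations valid at verification time, shape (state_dim, k)
# H, R : candidate observation operator and observation error covariance
# L_eu : projection onto the verification region
import numpy as np

def etkf_signal_variance(Xt, Xv, H, R, L_eu):
    k = Xt.shape[1]
    HX = H @ Xt
    C = HX.T @ np.linalg.solve(R, HX)                     # k x k
    gain = np.eye(k) - np.linalg.inv(np.eye(k) + C)       # I - (I + C)^-1
    Zv = L_eu @ Xv
    return np.trace(Zv @ gain @ Zv.T)                     # predicted variance reduction
```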

21 Singular Vector Schemes. Singular vectors identify the directions (in phase space) that provide maximum growth over a finite period of time.
– They depend on the model characteristics and on the optimization time.
– Growth is measured by an inner product (metric, or norm).
– If the correct norm is used, the resulting ensemble captures the largest amount of forecast error variance at optimization time (assuming that the forecast error evolves linearly).
– For targeting: the forecast error variance prediction from t_0 to t_v is replaced by variance predictions in a singular vector subspace. The data assimilation can use either a full Kalman filter or Optimal Interpolation.
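For a small linear propagator the singular vectors can be obtained directly from an SVD once the initial-time and final-time metrics are chosen; in an NWP system this is done with a Lanczos iteration on the tangent-linear and adjoint models instead. The following is a hedged sketch for the low-dimensional case.

```python
# Hedged sketch: singular vectors of a small linear propagator M, maximising
# ||M x||_E^2 / ||x||_C0^2 for a chosen initial-time metric C0 (e.g. total
# energy or an inverse analysis error covariance) and final-time metric E.
import numpy as np

def _sym_sqrt(A):
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

def _sym_inv_sqrt(A):
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T

def singular_vectors(M, C0, E, n_sv):
    """Leading initial-time SVs (columns) and their growth factors."""
    C0_is = _sym_inv_sqrt(C0)
    U, s, Vt = np.linalg.svd(_sym_sqrt(E) @ M @ C0_is)
    init_svs = C0_is @ Vt[:n_sv].T          # back to state space
    return init_svs, s[:n_sv]
```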

22 Singular vector targeting method 1: SV-based reduced-rank estimate.
– The initial-time metric is the inverse of the routine analysis error covariance matrix, (P_r^a)^{-1}. SVs computed with this metric evolve into the leading eigenvectors of the routine forecast error covariance matrix in the verification region, L_Eu P^f L_Eu^T.
– Compute the variance of forecast errors only in a subspace of leading singular vectors: trace( Π_n L_Eu P^f L_Eu^T Π_n^T ) instead of trace( L_Eu P^f L_Eu^T ), where Π_n denotes the projection onto the subspace of the leading n (left) singular vectors of L_Eu M.
– The data assimilation uses a full Kalman filter.

23 Reduced-rank approximation of the covariance forecast step.
Analysis error covariances in the subspace spanned by the leading n SVs are represented by:
– Routine network: V_n V_n^T, where V_n^T (P_r^a)^{-1} V_n = I
– Modified network: V_n Γ_i Γ_i^T V_n^T, where (V_n Γ_i)^T (P_i^a)^{-1} (V_n Γ_i) = I
The transformation matrix Γ_i is the inverse square root of the n x n matrix C_i that expresses the modified analysis error covariance metric in the basis of the singular vectors:
C_i = V_n^T (P_i^a)^{-1} V_n = I_n + V_n^T H_i^T R^{-1} H_i V_n.
Using these representations of the analysis error covariance matrix, the forecast error variance in the verification region becomes:
– Routine network: trace( Π_n L_Eu P^f_{j+1} L_Eu^T Π_n^T ) = Σ_{j=1}^n σ_j^2
– Modified network: trace( Γ_i^T diag(σ_1^2, …, σ_n^2) Γ_i )
where σ_j denotes the singular value of the j-th SV v_j.
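These formulas transcribe directly into a small routine, assuming V_n and the singular values σ_j are already available from the SV computation; a sketch only, with my variable names.

```python
# Sketch of the reduced-rank variance prediction above.  Vn holds the leading n
# initial-time SVs as columns, normalised so that Vn^T (P_r^a)^-1 Vn = I; sigma
# are their singular values; Hi, Ri describe the candidate extra observations.
import numpy as np

def subspace_variances(Vn, sigma, Hi, Ri):
    n = Vn.shape[1]
    HV = Hi @ Vn
    C = np.eye(n) + HV.T @ np.linalg.solve(Ri, HV)   # C_i = I_n + (Hi Vn)^T Ri^-1 Hi Vn
    w, Q = np.linalg.eigh(C)
    Gamma = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T      # Gamma_i = C_i^{-1/2}
    D = np.diag(sigma ** 2)
    var_routine = float(np.sum(sigma ** 2))          # trace of D
    var_modified = float(np.trace(Gamma.T @ D @ Gamma))
    return var_routine, var_modified
```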

24 Singular vector targeting method 2. Approximate the full Kalman filter by replacing the routine forecast error covariance matrix P_r^f by a static background error covariance matrix B (an Optimal Interpolation scheme):
– in the variance prediction (for targeting), and
– in the assimilation algorithm.
The analysis error covariance matrix is then given by A^{-1} = B^{-1} + H^T R^{-1} H.
In the L95 system, the static background error covariance matrix B = ⟨ (x^f - x^t)(x^f - x^t)^T ⟩ is a sample covariance matrix computed from (forecast - truth) differences over a 1000-day sample → combine with the reduced-rank technique.
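A trivial sketch of how such a static B could be formed from a sample of (forecast, truth) pairs; the sample handling (e.g. any bias removal) is an implementation choice not specified on the slide.

```python
# Static B of the slide as a sample second-moment matrix of (forecast - truth)
# differences; centering/bias removal is left as an implementation choice.
import numpy as np

def static_background_covariance(forecasts, truths):
    """forecasts, truths: arrays of shape (n_samples, state_dim)."""
    d = forecasts - truths
    return d.T @ d / d.shape[0]
```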

25 L95 comparisons: SV reduced-rank schemes. [Figure. Method 1: full KF in the SV subspace; Method 2: Optimal Interpolation in the SV subspace]

26 L95 comparisons: SV full-rank vs. reduced-rank schemes. [Figure: distribution of 2-day forecast errors over Europe. Full KF vs. Method 1 (reduced-rank Kalman filter); full KF vs. Method 2 (reduced-rank OI)]

27 SV dependence on the initial-time metric. Targeted observations should be directed to ‘sensitive’ regions of the atmosphere.
– For predictability studies, an appropriate metric is based on energy. Total energy provides a dynamical basis (it has no knowledge of error statistics):
||x||_E^2 = x^T E x = ½ ∫∫ [ u^2 + v^2 + (c_p/T_r) T^2 ] dp dS + ½ R_d T_r p_r ∫ (ln p_sfc)^2 dS
– The correct metric depends on the purpose of the targeted observations (to study precursor developments or to improve the forecast initial conditions).
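A rough discrete version of this dry total-energy norm, for a perturbation given on a simple (level, point) grid; the reference values T_r and p_r and the integration weights are assumptions of this sketch, not values from the slide.

```python
# Discrete dry total-energy norm (sketch).
# u, v, T : perturbation fields of shape (n_levels, n_points)
# lnps    : log surface-pressure perturbation, shape (n_points,)
# dp, dS  : vertical and horizontal integration weights
import numpy as np

CP, RD = 1004.0, 287.0      # J kg^-1 K^-1
TR, PR = 300.0, 1.0e5       # assumed reference temperature (K) and pressure (Pa)

def total_energy_norm(u, v, T, lnps, dp, dS):
    col = 0.5 * (u ** 2 + v ** 2 + (CP / TR) * T ** 2)       # per level, per point
    e_3d = np.sum(col * dp[:, None] * dS[None, :])
    e_sfc = 0.5 * RD * TR * PR * np.sum(lnps ** 2 * dS)
    return e_3d + e_sfc
```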

28 Hessian Singular Vectors. The Hessian of the cost function provides an estimate of the inverse of the analysis error covariance matrix:
J(x) = J_b(x) + J_o(x),   ∇²J = B^{-1} + H^T R^{-1} H = A^{-1}
Initial error estimates are then consistent with the covariance estimates of the variational data assimilation scheme (i.e. they incorporate error statistics).

29 Total energy SV vs. Hessian SV. [Figure panels: TESV; ‘Full’ Hessian SV (∇²J = ∇²J_b + ∇²J_o); ‘Partial’ Hessian SV (∇²J = ∇²J_b)]

30 Hessian reduced-rank estimate. Similar to the Kalman filter / OI reduced-rank estimate, but based on a subspace of Hessian singular vectors v_i computed with the metric ∇²J_routine, using only observations from the routine network in J_o (Leutbecher, 2003).
Efficient computation of the Hessian metric for the modified observation network (routine + additional) in the subspace:
C_ij = v_i^T ∇²J_mod v_j = v_i^T ( ∇²J_routine + H_a^T R_a^{-1} H_a ) v_j = δ_ij + (H_a v_i)^T R_a^{-1} H_a v_j
Estimate of the forecast error variance reduction due to the additional observations:
trace( [I - C^{-1}] diag(σ_1^2, …, σ_n^2) )
where σ_j denotes the singular value of the routine Hessian SV v_j.
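The variance-reduction estimate and the efficient computation of C can be written compactly; the sketch below assumes the routine Hessian SVs are already normalised with respect to the routine Hessian metric, as stated on the slide.

```python
# Sketch of the Hessian reduced-rank estimate.  V holds the leading routine
# Hessian SVs as columns (normalised so that v_i^T (Hessian_routine) v_j =
# delta_ij); sigma are their singular values; Ha, Ra describe the candidate
# additional observations.
import numpy as np

def hessian_variance_reduction(V, sigma, Ha, Ra):
    n = V.shape[1]
    HV = Ha @ V                                       # m x n
    C = np.eye(n) + HV.T @ np.linalg.solve(Ra, HV)    # C_ij = delta_ij + (Ha v_i)^T Ra^-1 (Ha v_j)
    return float(np.trace((np.eye(n) - np.linalg.inv(C)) @ np.diag(sigma ** 2)))
```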

31 Comparison of flight track ranking (Winter Storms Reconnaissance Program 2003). Additional observations on 4th Feb 00UT for forecast verification time 6th Feb 00UT. [Figure: Hessian reduced-rank estimate vs. ETKF (Doerenbecher et al., 2003)]

32 Summary & Research Issues. Aim of observation targeting: prediction of forecast error variance due to modifications of the observing network. Factors that affect the skill of forecast error variance predictions in operational NWP:
Covariance estimates
– The skill of forecast error variance predictions depends on the quality of the background error covariance estimate… incorporate a flow-dependent wavelet approach.
– Account for correlations between observation errors in current schemes (important for satellite data with observation error correlations in space and between channels → optimal thinning; Liu & Rabier, 2003).
– Predict the spatial resolution of routine observations. This is harder to do with the day-to-day variability of targeted observations, particularly for satellite data affected by cloud.

33 Summary & Research Issues
Error dynamics
– The TL/AD model is simplified in resolution and physical process parameterisation. Advances in formulation (moist processes, sensitivity of observation targeting guidance to spatial resolution) should improve variance predictions.
– Validity of the tangent-linear assumption: Gilmour et al. (2001): probably not useful beyond 24 h, but the measure of nonlinearity is dominated by small scales. Reynolds & Rosmond (2003): SVs useful up to 72 h (diagnostics in SV space and scale-dependent diagnostics).
Reduced-rank SVs (subspace): how many SVs are needed to reliably predict forecast error variance reductions?
– L95: 1 SV is sufficient; the leading SV explains a large fraction of the total forecast error variance in the verification region (rank-1 KF ≈ full KF).
– NWP: rank(L_Eu M) ≈ number of grid points in the verification region times the number of variables.

34 Summary & Research Issues
Observation types
– Forecast error variance reductions can be determined for different observation types in sensitive areas.
– Will the abundance of satellite data eliminate the need for in-situ measurements?
– Satellite sampling is limited through cloud layers → in-situ measurements are useful if dynamically sensitive areas are beneath clouds.
Targeting methodology
– Perhaps a combination of SV and ensemble-based approaches? (Expensive, as it requires a dedicated ensemble.)
– Does the validity of the linear transformation in the ETKF technique extend further than the validity of the TL approximation?
Contribution of model error?
– To the initial condition error at t_0.
– To the growth of errors from t_0 to t_v.

35 Previous targeting campaigns. Targeted observation techniques and methods were tested during numerous operational campaigns:
– FASTEX (Fronts and Atlantic Storm-Track Experiment, 1997): improving forecasts of atmospheric cyclone depressions forming in the North Atlantic and reaching the west coast of Europe.
– NORPEX (North Pacific Experiment, 1998): North Pacific winter-season storms that affect the United States, Canada, and Mexico.
– WSRP (Winter Storms Reconnaissance Program, 1999-2006): north-eastern Pacific storms affecting the west coast of the United States. Use of targeted observations has a positive effect on forecast skill (Majumdar et al. 2001).
– ATReC (Atlantic THORPEX Regional Campaign, 2003): North Atlantic storms affecting the east coast of the United States and Europe.

36 Operational Structure for Observation Targeting. [Schematic: timeline t_c, t_d, t_0, t_v with stages 1-5; NWP centres ECMWF, UK MO, NCEP, NRL, MF; Operations Centre]
1. Case selection: a decision is made at t_c on whether or not to commit additional observing resources at t_0, and for which forecast verification time t_v and which region R adaptive observations should be taken.

37 Operational Structure for Observation Targeting. [Same timeline schematic]
2. Sensitive area prediction: compute which configuration of adaptive observations (to be taken at t_0) is likely to best constrain the error of the forecast for (t_v, R).

38 Operational Structure for Observation Targeting. [Same schematic, now including the observation control centre]
3. Select and request additional observations at t_d.

39 Operational Structure for Observation Targeting. [Same schematic, now including the observing platforms (AMDAR, radiosonde, research aircraft, ASAP) and data monitoring]
4. Observing platforms deployed at t_0 and observations taken.

40 Operational Structure for Observation Targeting. [Same schematic]
5. Forecast verification at time t_v.

41 Atlantic THORPEX Regional Campaign 2003. The first field campaign in which multiple observing systems were used: dropsondes from research aircraft, ASAP ships, AMDAR, and land radiosonde sites. Observation targeting guidance to predict the sensitive areas was based on:
– UKMO: ETKF based on the ECMWF ensemble
– Meteo-France: total energy SVs run on a (possibly perturbed) trajectory
– NRL: SVs and sensitivity to observations
– NCEP: ETKF based on combined ECMWF and NCEP ensembles
– ECMWF: two flavours of total energy SVs and Hessian SVs

Config.   Initial norm   TLM res.   TLM physics
TE-d42    Total Energy   T42        dry
TE-m95    Total Energy   TL95       moist
H-d42     Hessian        T42        dry

42 ATReC_029_1: obs. time 20031208 18UT; ver. time 20031211 00UT (54 h optimisation). [Figure panels: Dry TESV (ECMWF); Moist TESV (ECMWF); Hessian SV (ECMWF); ETKF (Met Office, using the ECMWF ensemble)]

43 Sensitive area prediction. How do we determine where to send the observations? In ATReC 2003, the level of agreement between the sensitive area predictions can be quantified in terms of geographical overlap. For sensitive area 1, S_j of area a, and sensitive area 2, S_k of area a, the geographical overlap is O_jk = area(S_j ∩ S_k) / a.
ECMWF calculated overlap ratios for a = 4 x 10^6 km^2 (Leutbecher et al. 2004):

SAP1        SAP2        Number of cases with overlap > 0.5
SV TE-d42   SV TE-m95   42/43 (98%)
SV TE-d42   SV H-d42    35/42 (83%)
SV TE-d42   ETKF        31/67 (46%)
SV H-d42    ETKF        32/67 (48%)
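The overlap diagnostic itself is nearly a one-liner once the two sensitive areas are available as masks on a common grid; a sketch, with grid-cell areas supplied by the caller.

```python
# Overlap diagnostic: both masks are boolean fields on a common grid, each
# covering (approximately) the same total area a; cell_area holds the area of
# each grid cell.
import numpy as np

def overlap_ratio(mask_j, mask_k, cell_area):
    a = np.sum(cell_area[mask_j])
    return np.sum(cell_area[mask_j & mask_k]) / a
```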

44 TESV vs Hessian SV vs ETKF. [Figure: designated target area] ETKF sensitivity across the Atlantic, main area 40-55N, 20-30W; secondary max 45N 60W (this is also a secondary area for the Hessian SVs). The main HSV and dry TESV centre is the southern tip of Greenland (and down to 50N), and back into Canada at 60N. Moist TESVs have a significant area at 35N 65W (Odette).

45 Case 43: obs. time 8th Dec 18UT, verif. time 11th Dec 00UT. Observations: radiosonde, satellite rapid-scan winds, AMDAR flights.

46 Case 47: obs. time 11th Dec 18UT, verif. time 13th Dec 12UT. Observations: radiosondes, ASAP, satellite, AMDAR.

47 Case 36: obs. time 4th Dec 18UT, verif. time 6th Dec 12UT. Observations: radiosonde, ASAP, AMDAR.

48 Case 37: obs. time 5th Dec 18UT, verif. time 7th Dec 12UT. Observations: radiosonde, ASAP, satellite, AMDAR and dropsondes.

49 East Coast USA storm, 5-7th December 2003. A major winter storm impacted parts of the Mid-Atlantic and Northeast United States during the 5th-7th. Snowfall accumulations of one to two feet were common across areas of Pennsylvania northward into New England. Boston, MA received 16.2 inches, while Providence, RI had the greatest single snowstorm on record with 17 inches, beating the previous record of 12 inches set December 5-6, 1981 (from http://www.met.rdg.ac.uk/~brugge/world2003.html). [Image: NASA-GSFC, data from NOAA GOES]

50 Summary of ATReC forecast impacts. [Figure panels: Control (routine observations); ATReC (routine + additional observations); ATReC (routine + additional observations in target area)]

51 Case 37: obs. time 5th Dec 18UT, verif. time 7th Dec 12UT. [Figure panels: Control; ATReC (routine + additional observations); ATReC (routine + additional observations in target area)]

52 Some ATReC 2003 conclusions
Statistical verification of forecast impacts:
– A large sample is needed to estimate forecast skill differences in verification regions.
– Data denial experiments may provide a larger sample size.
Sensitive area prediction method:
– Sample similar observational coverage for both SV and ETKF sensitive regions… not possible during ATReC.
ATReC can be used to plan future adaptive observation campaigns:
– Use observation targeting for the medium range.
– Improve efficiency so as to shorten the warning time for observation providers.
– Identify cost-effective observation platforms (i.e. dropsondes).
– Cancel cases that show a sharp reduction in uncertainty as the observation time approaches.

53 Forecast sensitivity with respect to observations (from Cardinali & Buizza, 2003). J measures the forecast error; its gradient with respect to the observation vector y gives the forecast error sensitivity with respect to the observations used in the initial condition. It combines the forecast sensitivity with respect to the initial condition x_a with the analysis sensitivity with respect to the observations.
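Written out, the chain rule behind this slide takes the following form; the symbols K (gain matrix of the assimilation) and M (tangent-linear forecast model) are standard notation assumed here, not taken from the slide.

```latex
% Hedged reconstruction of the observation-sensitivity chain rule:
\frac{\partial J}{\partial \mathbf{x}_a} = \mathbf{M}^{T}\,\frac{\partial J}{\partial \mathbf{x}_f},
\qquad
\frac{\partial J}{\partial \mathbf{y}} = \mathbf{K}^{T}\,\frac{\partial J}{\partial \mathbf{x}_a}
                                       = \mathbf{K}^{T}\mathbf{M}^{T}\,\frac{\partial J}{\partial \mathbf{x}_f}
```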

54 Observation contribution to forecast. [Figure panels: total contribution; mean contribution]

55 Forecast and Analysis Sensitivity

56 Forecast impact of observations. Forecast sensitivity to observations has been computed for the cases showing a forecast impact (ATReC - Control)/Control of at least ±10%: 13 cases out of 38, 9 positive and 4 negative.
– Targeted observations decrease the forecast error in the verification area in 60% of cases (although the control forecasts already have high skill).
– Differences in forecast impact also come from the continuous assimilation cycling, which provides different model trajectories.
For the case of 5th Dec at 18 UTC, targeted observations improved the forecast of a cyclone moving along the east coast of North America for which a severe weather impact was forecast.

57 Future of Observation Targeting. Studies of the value of targeted observations: Buizza et al. (2005) performed experiments on 100+ cases to:
(a) assess the impact of observations taken in the Pacific and verified over North America;
(b) assess the impact of observations taken in the Atlantic and verified over Europe;
(c) compare the impact of observations taken in SV-target areas to observations taken in random areas;
(d) estimate the average impact of targeted observations.
Findings:
– Adding targeted observations in the Pacific is expected to have a rather small impact (5% on z500/z1000 forecast errors over North America).
– Adding targeted observations in the Atlantic is expected to have an even smaller impact (3% on z500/z1000 forecast errors over Europe).
– The value of observations taken over the oceans is regionally dependent and depends on the underlying observation system.
– Up to the optimization time, the value of observations taken in SV-target areas is higher than the value of observations taken in random areas of similar size.

58 Future of Observation Targeting
Data denial studies (Kelly, Buizza, Thepaut, Cardinali):
– Relatively inexpensive and incorporate a large number of cases (~6 months).
– Assess the value and impacts of specific types of observation platforms.
Hurricane targeting (Majumdar et al. 2005, 2006):
– Improve short-range forecasts of cyclone tracks.
– Provides a useful comparison of ETKF and SV targeting techniques.
Satellite targeting:
– Targeting enables the use of ‘dynamical thinning’ techniques.
– Improved channel selection can reduce problems in cloudy conditions.

59 Future of Observation Targeting
Targeted observation campaigns:
– Winter Storms Reconnaissance Program (ongoing) (NCEP/NRL)
– European THORPEX Regional Campaign 2007: a ‘virtual’ targeting experiment
– African Monsoon Multidisciplinary Analyses (AMMA) observing system test, 2005-2010
– Pacific-Asian Regional Campaign (PARC) 2008
Improve the adaptive parts of the observing network:
– New platforms (e.g. driftsondes) are being developed
– Rocketsondes
– Aerosondes

