Ensemble hindcasting of “Superstorm ‘93”


1 Ensemble hindcasting of “Superstorm ‘93”
Robert Fovell, University of California, Los Angeles; Peter Dailey, AIR Worldwide. 14th Cyclone Workshop, Ste.-Adele, Quebec, Canada.

2 Motivation
Superstorm ‘93: an example of an extreme winter storm; its APE far exceeded the climatological mean (Bosart et al. 2000). Risk assessment for the insurance industry requires assessing the recurrence interval, leveraging the ~50-year record of cases (reanalysis) into a 10,000-year catalogue. Ensemble hindcasting of historical cases: how much more intense could this storm have been?

3 Animation of SLP field (ECMWF reanalysis)
[Animation: SLP field and storm track, 03/13/93 06Z to 03/14/93 06Z.]

4 Kocin-Uccellini Snow Footprint
NESIS (Kocin and Uccellini 2004): Northeast Snowfall Impact Scale, area-integrated and population-weighted. Superstorm ‘93: NESIS = 12.52, ranked #1 among the top-30 events of the ~100-year period.
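
A minimal sketch of an area-integrated, population-weighted score in the spirit of NESIS is given below; the thresholds, normalization constants, and function names are illustrative assumptions, not the published Kocin-Uccellini coefficients.

```python
import numpy as np

def nesis_like_score(snow_in, cell_area_km2, population,
                     thresholds=(4, 10, 20, 30),
                     area_norm=1.0, pop_norm=1.0):
    """Area-integrated, population-weighted snowfall index in the spirit
    of NESIS.  `snow_in` is gridded snowfall (inches); `cell_area_km2`
    and `population` are per-grid-cell fields.  Thresholds and the
    normalization constants are placeholders, not the published values."""
    score = 0.0
    for t in thresholds:
        mask = snow_in >= t
        area = (cell_area_km2 * mask).sum()   # area exceeding the threshold
        pop = (population * mask).sum()       # population inside that area
        score += area / area_norm + pop / pop_norm
    return score
```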

5 Hypothesized distribution of all East Coast winter storms
Need not be Gaussian

6 Superstorm ‘93’s position among all East Coast winter storms

7 Superstorm ‘93’s own distribution
The storm itself could have been weaker or stronger; even at its weakest, it would likely still have been a very significant event. How much more intense could it have been? Implications for the recurrence interval. The distribution is drawn with the expectation that (1) it is normal and (2) the event sat in the middle of its distribution.

8 Ensemble design
MM5 model, ECMWF initialization. Perturbed initial conditions; > 400 members. Purpose: retrieve the event’s own frequency distribution. Overarching goal: do not bias the outcome. [MM5 model domain configuration: outer domain (90 km), inner domain (30 km).]

9 Perturbation strategy
Goal: be unbiased. Random perturbations to T and Td. Columns perturbed so as to preserve lapse rate and relative humidity. Objective analysis with a large influence radius.
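
A minimal sketch of one way to realize such column perturbations, assuming a single Gaussian offset per column and the Bolton (1980) saturation vapor pressure formula; the amplitude and function names are illustrative, and the objective-analysis spreading with a large influence radius is omitted.

```python
import numpy as np

def saturation_vapor_pressure(t_c):
    """Bolton (1980) approximation: t_c in deg C, result in hPa."""
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

def dewpoint_from_rh(t_c, rh):
    """Invert the Bolton formula to recover dewpoint (deg C) from RH (0-1)."""
    log_term = np.log(rh * saturation_vapor_pressure(t_c) / 6.112)
    return 243.5 * log_term / (17.67 - log_term)

def perturb_column(t_c, td_c, sigma=1.0, rng=None):
    """Add one random offset to an entire T column (lapse rate unchanged)
    and recompute Td so that relative humidity is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    rh = saturation_vapor_pressure(td_c) / saturation_vapor_pressure(t_c)
    t_new = t_c + rng.normal(0.0, sigma)   # single offset for the whole column
    return t_new, dewpoint_from_rh(t_new, rh)
```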

10 Sample 300 mb T perturbation fields

11 Lead time strategy
Six-hourly starting times ranging from 0- to 90-hour leads. All statistics limited to a common event period.
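
A small sketch of the lead-time bookkeeping, assuming an illustrative event window; the dates and names are placeholders, not the study’s exact verification period.

```python
from datetime import datetime, timedelta

# Illustrative event window; these dates are placeholders.
EVENT_START = datetime(1993, 3, 13, 6)
EVENT_END = datetime(1993, 3, 14, 6)

# Six-hourly model start times spanning 0- to 90-hour leads ahead of the event.
start_times = [EVENT_START - timedelta(hours=lead) for lead in range(0, 96, 6)]

def in_common_period(valid_time):
    """True if a forecast valid time falls inside the common event window,
    so that statistics from different lead times remain comparable."""
    return EVENT_START <= valid_time <= EVENT_END
```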

12 Storm’s own distribution
Control/seed/benchmark case. Default scenario: perturbations are as likely to weaken as to strengthen the storm, and all outcomes are quite intense. What if the storm was already at its optimal intensity? Then most perturbations should weaken it. Is it possible to retrieve this from a control run that sits in the tail? If only the top distribution is possible, that is not very interesting; the width may be important, but the shape is a given.

14 Results from the 465-member ensemble

15 Kocin & Uccellini’s estimate: 12.52
NESIS - control runs perfect model & data Values range from Kocin & Uccellini’s estimate: 12.52 NCDC’s estimate: 13.2 Both use 2000 census

16 NESIS - perturbed runs
Max 14.1; min 6.1 (the weakest would rank 10th among 20th-century events).

17 Large and small snowfalls
[Two snowfall footprints: a 12-h lead case, rank 1 of 455, and a second case with NESIS = 7.7, rank 450 of 455.]

18 NESIS distribution for 465 cases
Which case best represents the actual event?

19 Selecting the benchmark case

20 RMS SLP error metric (with respect to ECMWF reanalysis)
Independent of snow information; more accurate and objective than snow metrics. The minimum-error case is the one most similar to the reanalysis with respect to location, timing, and intensity. Note that the coarseness of the ECMWF analysis underrepresents the actual cyclone intensity.
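
A minimal sketch of the error metric as described here, assuming gridded SLP fields on a common grid over the event period; the function names and data layout are illustrative.

```python
import numpy as np

def rms_slp_error(slp_member, slp_reanalysis):
    """RMS difference (hPa) between a member's SLP field(s) and the ECMWF
    reanalysis over the common grid and event period."""
    return float(np.sqrt(np.mean((slp_member - slp_reanalysis) ** 2)))

def select_benchmark(slp_by_member, slp_reanalysis):
    """Return the member id whose SLP evolution is closest to the reanalysis."""
    return min(slp_by_member,
               key=lambda m: rms_slp_error(slp_by_member[m], slp_reanalysis))
```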

21 SLP error
0-h lead control run (nominal benchmark): NESIS = 12.5. Minimum SLP-error case (actual benchmark): NESIS = 13.7.

22 Where benchmarks ranked (465 cases)
NESIS: actual benchmark ranked 3rd of 465; nominal benchmark ranked 33rd of 465 (exceeded by 7% of cases). Total snow over land: actual benchmark ranked 11th; nominal benchmark ranked 35th. Other metrics…
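
A small helper illustrating how such ranks and exceedance fractions can be computed; names are illustrative.

```python
import numpy as np

def rank_and_exceedance(scores, target):
    """Rank of `target` among `scores` when sorted strongest-first
    (1 = largest value) and the fraction of cases that exceed it."""
    scores = np.asarray(scores, dtype=float)
    n_above = int((scores > target).sum())
    return n_above + 1, n_above / scores.size
```

With 465 cases, a score exceeded by 32 of them comes out as rank 33 and an exceedance fraction of about 7%, consistent with the nominal benchmark above.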

23 NESIS revisited
Max 14.1; min 6.1 (the weakest would rank 10th among 20th-century events).

24 0-24 h lead subset
310 cases. Little temporal trend: the variance within a lead time is much larger than the variance among lead times. NESIS ranks: actual benchmark 3/310; nominal benchmark 33/310.
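
A sketch of the within-lead versus among-lead comparison, using a simple unweighted variance decomposition; this formulation is an assumption, not necessarily the authors’ exact statistic.

```python
import numpy as np

def variance_decomposition(nesis_by_lead):
    """Compare spread within each lead time to spread among lead-time means.
    `nesis_by_lead` maps a lead time (hours) to the NESIS values of the
    perturbed runs started at that lead."""
    groups = [np.asarray(v, dtype=float) for v in nesis_by_lead.values()]
    within = np.mean([g.var() for g in groups])    # mean of per-lead variances
    among = np.var([g.mean() for g in groups])     # variance of per-lead means
    return within, among
```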

25 0-24 h lead subset

26 0-24 h lead subset
Conclusion: the actual storm was nearly as strong as it could have been.

27 0-24 h lead subset: NESIS relative to control
76% of perturbed cases were weaker than their control/seed run.
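
A minimal sketch of the control-relative tally, assuming each control/seed run is paired with the scores of its perturbed members; the names and data layout are illustrative.

```python
import numpy as np

def fraction_weaker_than_control(member_scores, control_scores):
    """Fraction of perturbed members scoring below the control/seed run
    they were spawned from.  `member_scores` maps a control id to an array
    of its members' scores; `control_scores` maps the same id to the
    seed's own score."""
    flags = [np.asarray(member_scores[c]) < control_scores[c]
             for c in member_scores]
    return float(np.concatenate(flags).mean())
```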

28 0-24 h lead subset: total snow relative to control
Again, 76% of perturbed cases were weaker than their control/seed run.

29 0-24 h lead subset: vorticity relative to control
56% of perturbed cases were stronger than their control/seed run.

30 Another case: December 1992 (20th-century rank: #42)

31 10-12 December 1992
Ranked 42nd among 20th-century snowstorms. Its NESIS value was influenced by isolated snowbands.

32 NESIS distribution for Dec 92 event

33 Future work
Reconsider the perturbation strategy and benchmark selection. Is the ensemble diversity sufficient? Use ensemble output to identify the best place(s) to perturb. More cases. Build a statistical model to create the 10,000-year catalogue: more from less…
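
A hedged sketch of one way a statistical model could extend the ensemble into a long synthetic catalogue; the lognormal fit is purely an illustrative assumption, since the slide leaves the model unspecified.

```python
import numpy as np

def synthetic_catalogue(nesis_values, n_years=10_000, seed=0):
    """Fit a simple lognormal to the ensemble's NESIS values and draw one
    synthetic severity per catalogue year.  The lognormal choice is an
    illustrative assumption, not the authors' method."""
    logs = np.log(np.asarray(nesis_values, dtype=float))
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=logs.mean(), sigma=logs.std(), size=n_years)
```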

34 end

