When and Why Does Space Weather Forecasting Fail?

Space weather forecasting has made tremendous strides in recent years. Nevertheless, there are frequent mismatches between predicted and measured impacts. This session will discuss such failures and assess whether they are due to limitations in observations, modeling, forecasting methods, or our understanding of the physics involved. We encourage contributions from the forecasting and research communities on all aspects of space weather predictions, including flares, energetic particle events, coronal mass ejections, high speed streams, and impacts to spacecraft and planetary environments.

Questions:
- How do you define and quantify "failure" (or "success")?
- What are appropriate (or inappropriate) metrics for quantifying success?
- What would be appropriate definitions of "success/failure" for the many different events and SWx forecast products (flares, SEPs, CMEs, storms, shock arrival, etc.)?

If your research is forecast-relevant, please come with a one-slide example of how you define/verify success/failure. We would like to evaluate a number of potential approaches and strategies and identify which ones are most applicable to which types of situations.
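As an illustration of the "metrics for quantifying success" question, the sketch below computes standard categorical skill scores from a 2x2 contingency table, as commonly used in forecast verification. The function name and the event counts are hypothetical; this is a minimal example, not a metric endorsed by the session.

```python
# Illustrative forecast-verification metrics for a yes/no forecast,
# computed from a 2x2 contingency table. The counts used in the usage
# example are hypothetical.

def verification_scores(hits, misses, false_alarms, correct_negatives):
    """Return common categorical skill scores for a binary forecast."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    tss = pod - pofd                                # True Skill Statistic (Hanssen-Kuipers)
    # Heidke Skill Score: fraction of correct forecasts beyond random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return {"POD": pod, "FAR": far, "TSS": tss, "HSS": hss}

# Hypothetical verification sample: 200 forecast/event pairs.
scores = verification_scores(hits=30, misses=10,
                             false_alarms=20, correct_negatives=140)
```

Different scores reward different behavior (e.g., TSS is insensitive to the event/non-event ratio, while HSS penalizes forecasts of rare events differently), which is exactly why the choice of metric matters for declaring a forecast a "success" or "failure".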
Questions:
- Why do "good" research results (e.g., ones that imply a reliable correlation) sometimes fail to improve forecasts?
- What are examples of research results that were unable to transition to a real-time predictive environment? Why did they fail?
- How do researchers benefit from transitioning their results into forecasting? How can a prediction environment help verify results and find the "ground truth"?
- What are key differences between a forecast and a non-forecasting research product?
- What are the key obstacles in moving a principle into practice?
Scene Setters:
- Nariaki Nitta (LMSAL): Eruptive event assessment and tracking, and the limitations posed by data availability.
- KD Leka (NWRA): Skill Scores, Statistics, and Sample Size Issues in Forecast Verification.

Some candidate events for future SHINE sessions on forecast failures have been identified, including:
- the large active region 12192 in Oct 2014, which produced many X-class flares but was unexpectedly CME- and SEP-poor;
- the CMEs of 7 Jan 2014, 10 Sep 2014, and 15 March 2015, where the resulting geomagnetic storm strength measured at L1 was far different from predictions.

However, contributions focusing on other events are welcome! All aspects of forecasting and forecast-related research are welcome. This session is not just for forecasters – all kinds of prognosticators welcome. : )