Transfer to Ops: Requirements at the Canadian Meteorological Centre
David Anselmo
Air Quality Modelling Applications Section
Meteorological Service of Canada, Montréal, Québec
Data Assimilation Fusion Meeting, Downsview, January 16-17, 2012
Page 2
Outline
Requirements for an operational implementation
–Make the case (identify the need)
–Data readiness (observations)
  ▪Top 4
–System readiness
Common challenges to ops transfers
Advantages of going operational
Page 3
Identify Need from Program Perspective
What? ... products are to be generated in ops
Who? ... are the (potential) clients of the products
–SPCs/forecasters, Weatheroffice/general public, other operational systems
Why? ...
–Identify the benefits of the products
–Does it have to be operational to realize full benefit?
–What is the importance of near real-time?
How/Where? ... will users access the products
–Is development necessary?
–Are other groups involved?
Page 4
Data Readiness – Top 4
Data availability
–What is the source of the data?
  ▪Are transfers to CMC already established? Can they be?
  ▪Would data transfer make use of existing links to CMC?
  ▪What are the protocols for data transfer from the provider?
  ▪Are they reasonable/acceptable to CMC?
    –Bandwidth, security concerns
–What is the format of the data?
  ▪Is it new to CMC operational systems? Is there a precedent?
  ▪Is software in place to decode this format? (see the sketch after this list)
–What are the long-term prospects w.r.t. data availability?
  ▪Longevity, continuity of observing programs
  ▪Dependence on other countries (changing budgets, priorities)
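One way to make the "is software in place to decode this format?" question concrete is a decoder registry consulted before a new feed is accepted. This is a minimal sketch only; the format names and the decoder stub are hypothetical, not actual CMC software.

```python
# Minimal sketch: a registry mapping incoming data formats to decoders,
# so a proposed feed can be checked for decode support up front.
# Format names and the decoder stub are illustrative placeholders.

DECODERS = {}

def register(fmt):
    """Decorator that records a decoder for a given format name."""
    def wrap(func):
        DECODERS[fmt] = func
        return func
    return wrap

@register("BUFR")
def decode_bufr(blob: bytes) -> list:
    """Placeholder decoder; a real one would return decoded reports."""
    return []

def can_ingest(fmt: str) -> bool:
    """A new feed is ingestible only if a decoder is already in place."""
    return fmt in DECODERS

for fmt in ("BUFR", "AQCSV"):
    status = "decoder available" if can_ingest(fmt) else "new to the system"
    print(f"{fmt}: {status}")
```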
Page 5
Data Readiness – Top 4
Data reliability
–Is upstream data processing supported by the provider?
  ▪Is it supported 24/7?
–How are unexpected outages or routine downtimes addressed?
–What are the normal frequency and duration of outages & downtimes?
–What is the overall percentage of data availability? (see the sketch after this list)
  ▪Is it acceptable for an operational system?
  ▪Is it acceptable for clients (assuming a dependency develops)?
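To quantify the "overall percentage of data availability", one can compare received hourly reports against the expected count over a window. A minimal sketch, assuming a hypothetical log of arrival times and an illustrative 90% acceptance threshold:

```python
# Minimal sketch: estimate data availability for an hourly station feed.
# The arrival log and the 90% threshold are illustrative assumptions.
from datetime import datetime, timedelta

def availability(received_times, start, end, threshold=0.90):
    """Fraction of expected hourly reports actually received in [start, end]."""
    expected = int((end - start).total_seconds() // 3600) + 1
    hours = {t.replace(minute=0, second=0, microsecond=0) for t in received_times}
    frac = len(hours) / expected
    return frac, frac >= threshold

if __name__ == "__main__":
    start = datetime(2012, 1, 1, 0)
    end = datetime(2012, 1, 1, 23)
    # Simulate one day with a two-hour outage (05Z and 06Z missing).
    received = [start + timedelta(hours=h) for h in range(24) if h not in (5, 6)]
    frac, ok = availability(received, start, end)
    print(f"availability = {frac:.1%}, acceptable: {ok}")
```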
Page 6
Data Readiness – Top 4
Data quality
–What is the usability of the data?
–What quality measures are in place at the source?
  ▪Quality-assured data
  ▪Quality-controlled data
–Does the data arrive with pre-applied flags?
–What additional measures must be applied before the data can be used operationally? (a filtering sketch follows this list)
  ▪Must assess the negative impact on downstream users of poor-quality data
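A sketch of the kind of screening implied here: honour the provider's pre-applied flags, then add a local range check before operational use. The flag vocabulary and valid ranges are assumptions for illustration, not any network's actual conventions.

```python
# Minimal sketch: screen incoming observations using pre-applied quality
# flags plus a local sanity (range) check. Flag values and valid ranges
# are hypothetical placeholders.

VALID_RANGES = {"O3": (0.0, 500.0), "PM2.5": (0.0, 1000.0)}  # ppb, ug/m3 (illustrative)

def usable(obs):
    """Return True if an observation passes provider flags and a range check."""
    if obs.get("qc_flag") not in ("assured", "controlled"):  # assumed flag vocabulary
        return False
    lo, hi = VALID_RANGES.get(obs["species"], (float("-inf"), float("inf")))
    return lo <= obs["value"] <= hi

obs_stream = [
    {"species": "O3", "value": 42.0, "qc_flag": "controlled"},
    {"species": "O3", "value": -5.0, "qc_flag": "controlled"},  # fails range check
    {"species": "PM2.5", "value": 12.0, "qc_flag": "raw"},      # fails flag check
]
accepted = [o for o in obs_stream if usable(o)]
print(f"accepted {len(accepted)} of {len(obs_stream)} observations")
```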
Page 7
Data Readiness – Top 4
Data timeliness
–"Latency, latency, latency."
–For many applications, if the data does not arrive in time, it is essentially useless
–Define what is "late" for the intended application (a cut-off check is sketched after this list)
  ▪The concept of a cut-off
  ▪For some programs T+9h, for others T+30min
Operational vs. near real-time
–Is the entire transmission system "operationally capable"?
  ▪Though it need not be operational!! (Ex. satellite)
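The cut-off concept reduces to a comparison between arrival time and the valid time plus the program's allowance. A minimal sketch using the T+9h and T+30min examples from the slide; the program names are hypothetical:

```python
# Minimal sketch: decide whether an observation beats an assimilation
# cut-off. Cut-off values mirror the slide's examples; program names
# are illustrative placeholders.
from datetime import datetime, timedelta

CUTOFFS = {"global_final": timedelta(hours=9), "regional": timedelta(minutes=30)}

def on_time(valid_time, arrival_time, program):
    """True if data for synoptic time `valid_time` arrived before the cut-off."""
    return arrival_time <= valid_time + CUTOFFS[program]

synoptic = datetime(2012, 1, 16, 12)        # 12Z analysis time
arrival = datetime(2012, 1, 16, 12, 45)     # report lands at T+45min
print("regional:", on_time(synoptic, arrival, "regional"))          # False: missed T+30min
print("global_final:", on_time(synoptic, arrival, "global_final"))  # True: well inside T+9h
```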
Page 8
Assimilation cycles at the CMC
[Diagram of the global and regional assimilation cycles, showing for each run when the analysis is generated, when the trial field is generated, when the analysis is transmitted, and the data cut-offs (ranging from T+1:50 for regional runs to T+9 for the final global cycle). Image courtesy CMDA/CMC.]
Page 9
System Readiness
Applies to the applicant system as well as the host environment
CPOP considerations (Comité des passes opérationnelles et parallèles, the committee for operational and parallel runs)
–Advance planning
  ▪Resource allocations (human & computer)
  ▪Balance/coordination with other implementation requests
  ▪Initial proposal months in advance
–Coordination with existing operational components
  ▪Impacts & dependencies between upstream & downstream systems
    –Ex. global model, regional model, AQ model, UMOS, OA, etc.
  ▪Regional SPCs (forecast schedules), Weatheroffice, etc.
Commonality of working environment (tools)
–Research → Development → Operations
–To reduce duplication of AMAP work; streamline implementations
–Ex. job sequencer (OCM/Maestro)
Page 10
System Readiness
System diagnostics
–Monitoring of the reliability, quality, and timeliness of input (a monitoring sketch follows this list)
–Performance measures
  ▪Routine verification of the quality of final products
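Input monitoring of this kind can be as simple as reconciling an expected station list against what actually arrived, and flagging lateness. A sketch with hypothetical station IDs and an assumed 40-minute lateness limit:

```python
# Minimal sketch: a diagnostics pass over one cycle's input feed,
# flagging missing or late reports. Station list and lateness limit
# are hypothetical placeholders.
from datetime import datetime, timedelta

EXPECTED_STATIONS = {"STN001", "STN002", "STN003"}
MAX_LATENESS = timedelta(minutes=40)

def diagnose(cycle_time, reports):
    """Return a list of issue strings for missing or late station reports."""
    issues = []
    seen = {r["station"] for r in reports}
    for stn in sorted(EXPECTED_STATIONS - seen):
        issues.append(f"{stn}: missing")
    for r in reports:
        if r["arrival"] - cycle_time > MAX_LATENESS:
            issues.append(f"{r['station']}: late by {r['arrival'] - cycle_time}")
    return issues

cycle = datetime(2012, 1, 16, 12)
reports = [
    {"station": "STN001", "arrival": cycle + timedelta(minutes=35)},
    {"station": "STN002", "arrival": cycle + timedelta(minutes=55)},  # late
]
for issue in diagnose(cycle, reports):
    print("ALERT:", issue)
```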
Page 11
System Readiness
Documentation
–Creation of standards for evaluation and future upgrades
  ▪What are the conditions for implementation?
    –Define procedures for future parallel runs (seasons, length of time, etc.)
    –Verification scores & thresholds (a scoring sketch follows this list)
    –Against observations/analyses
    –Subjective evaluations by A&P
  ▪Identify dependent systems that must undergo impact assessments with every implementation
–Support documentation
  ▪Assist 24/7 support teams (NetOps, CMOI, A&P)
  ▪Problem scenarios & remedy procedures
  ▪Contingencies for data or system outages
–GENOT, technical note, CMC product guide
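For the "verification scores & thresholds" item, a routine score-versus-threshold check might look like the following sketch; the bias/RMSE acceptance limits are illustrative, not documented CMC criteria:

```python
# Minimal sketch: routine verification scores (bias, RMSE) for
# forecast-vs-observation pairs, compared against go/no-go thresholds.
# Threshold values are illustrative assumptions.
import math

THRESHOLDS = {"bias": 2.0, "rmse": 10.0}  # assumed acceptance limits

def verify(pairs):
    """pairs: list of (forecast, observed). Returns a dict of scores."""
    n = len(pairs)
    bias = sum(f - o for f, o in pairs) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in pairs) / n)
    return {"bias": bias, "rmse": rmse}

scores = verify([(42.0, 40.0), (55.0, 61.0), (30.0, 28.0)])
for name, value in scores.items():
    verdict = "PASS" if abs(value) <= THRESHOLDS[name] else "FAIL"
    print(f"{name}: {value:+.2f} ({verdict})")
```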
Page 12
System Readiness
Outreach
–Presentation to the CMC building prior to the formal CPOP proposal
  ▪Present the science and implementation plans in detail
  ▪Present future directions
  ▪50 minutes
–Formal CPOP proposal for a parallel run
  ▪Brief summary of the science and implementation plan
  ▪15-20 minutes
  ▪Voted on by CPOP members
Page 13
Common Challenges to Ops Transfers
Each implementation = additional cost
–Competition for limited resources
The first implementation is resource-intensive
–Often requires significant adaptation to conform to operational expectations
  ▪New data types, formats & paradigms
–Tests communication links between R, D, and O
Maturity, or lack thereof, of components
–Observation infrastructure, robustness of methodology, etc.
Increased complexity for assimilation systems
–Marriage of 3 components: observations, model, methodology
Adaptation to the continual evolution of...
–Computing environment
–Upstream/downstream systems
Page 14
Advantages of Ops Status
Demonstrates the value and purpose of the system
Provides continuous monitoring to identify issues with data
–Quality, timeliness, etc.
–In turn, opportunities to improve the data stream (feedback to data providers)
Improves product availability & visibility
Can support other operational systems
–Ex. the sensitivity of GEM-MACH has proven an effective means of debugging dynamics & physics libraries shared by other models
Page 15
Thanks!
David Anselmo
Air Quality Modelling Applications Section
Meteorological Service of Canada, Montréal, Québec
Page 16
Extras
Page 17
Operational Observation Data Streams
Page 18
Surface Obs Data Transfer – Canada
Source networks for surface data:
–Metro Vancouver (DRDAS)
–BC MoE (DRDAS)
–Alberta Env (9 air sheds, CASA server)
–Saskatchewan Env (DRDAS)
–Manitoba Conservation (moving to DRDAS)
–Ontario MoE (DRDAS)
–Ville de Montréal & Québec MDDEP (via Québec Region)
–New Brunswick, PEI, Nova Scotia, Newfoundland (via Atlantic Region)
–CAPMoN
Hourly observations
Species: O3, PM2.5, PM10, NO2, SO2, H2S, TRS, CO, NO
Stns: 175, 165, 35, 135, 70, 5, 20, 30, 75
Page 19
Surface Obs Data Transfer – Canada
Format: AIRNow 'OBS' ASCII (a parsing sketch follows this slide)
Processed in near real-time at 40 mins past the hour
Used to feed:
–AQHI national forecast program
–UMOS
–Model verification
–Objective analysis system for surface pollutants
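A sketch of decoding such a feed. The pipe-delimited field layout shown here is an assumption for illustration; the authoritative column order comes from the AIRNow format documentation:

```python
# Minimal sketch: parse hourly surface obs from an AIRNow-style 'OBS'
# ASCII feed. The field layout below (date|time|station id|site name|
# GMT offset|species|units|value|source) is an assumed example record.
import csv
import io

SAMPLE = "01/16/12|13:00|000010102|EXAMPLE SITE|-5|OZONE|PPB|34|Example Agency\n"

def parse_obs(text):
    """Yield dicts of the fields of interest from an OBS-style feed."""
    for row in csv.reader(io.StringIO(text), delimiter="|"):
        yield {
            "valid": f"{row[0]} {row[1]}",
            "station": row[2],
            "species": row[5],
            "value": float(row[7]),
            "units": row[6],
        }

for rec in parse_obs(SAMPLE):
    print(rec)
```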
Page 20
AQHI availability – Pacific Region
Mean 6-month availability, Nov 2010: 78%
Mean 6-month availability, Jan 2012: ??
[Chart label: DRDAS]
Page 21
AQHI availability – Prairie Region
Mean 6-month availability, Nov 2010: 88%
Mean 6-month availability, Jan 2012: ??
Page 22
AQHI availability – Ontario Region
Mean 6-month availability, Nov 2010: 97%
Mean 6-month availability, Jan 2012: ??
Page 23
AQHI availability – Quebec Region
Mean 6-month availability, Nov 2010: 93%
Mean 6-month availability, Jan 2012: ??
Page 24
AQHI availability – Atlantic Region
Mean 6-month availability, Nov 2010: 84%
Mean 6-month availability, Jan 2012: ??
Page 25
Surface Obs Data Transfer – US
US obs retrieved from the AIRNow Gateway
–Data in 'AQCSV' ASCII format
–An improvement over the previous 'OBS' format
Hourly observations
Species:
–Primarily O3 and PM2.5
–Includes other pollutants and meteorology for select stations
Availability of data in near real-time (simulated in the sketch after this slide):
–~80% after 1 hour
–~95% after 2 hours
Used to feed:
–Model verification
–Objective analysis system for surface pollutants
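The availability-versus-latency figures above can be reproduced in monitoring by counting reports whose latency falls under each threshold. A sketch with simulated arrival latencies (exponential with a 45-minute mean, chosen only to roughly match the ~80%/~95% numbers):

```python
# Minimal sketch: measure how data availability grows with latency,
# as in the ~80% after 1 h / ~95% after 2 h figures above.
# Arrival latencies here are simulated placeholders, not real data.
import random

random.seed(0)
n_expected = 500
# Simulated latencies in minutes after the valid hour (mean 45 min).
latencies = [random.expovariate(1 / 45) for _ in range(n_expected)]

for hours in (1, 2, 3):
    frac = sum(lat <= hours * 60 for lat in latencies) / n_expected
    print(f"available after {hours} h: {frac:.0%}")
```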