MSC Computing Update
Angèle Simard
Canadian Meteorological Centre, Meteorological Service of Canada
2 Canadian Meteorological Centre
National Meteorological Centre for Canada
Provides services to National Defence and emergency organizations
International WMO commitments (emergency response, telecommunications, data, etc.)
Supercomputer used for NWP, climate, air quality, etc.; operations and R&D
3 Outline
Current Status
Supercomputer Procurement
Conversion Experience
Scientific Directions
Current Status
5 MSC’s Supercomputing History
7 Supercomputers: NEC
10 nodes, 80 PEs, 640 GB memory, 2 TB disks, 640 Gflops (peak)
8 Archive Statistics
9 Automated Archiving System
22,000 tape mounts/month
11 TB growth/month
Maximum of 214 TB
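A quick back-of-envelope reading of these figures; the interpretation of 214 TB as the total archive holding is an assumption, since the slide does not say what the maximum refers to.

```python
# Back-of-envelope reading of the archive statistics on this slide.
# Assumption (not stated on the slide): 214 TB is the total archive holding.

tape_mounts_per_month = 22_000
growth_tb_per_month = 11
total_tb = 214

mounts_per_day = tape_mounts_per_month / 30          # ~730 mounts/day
mounts_per_hour = mounts_per_day / 24                # ~30 mounts/hour
months_of_growth = total_tb / growth_tb_per_month    # ~19 months at the current rate

print(f"~{mounts_per_day:.0f} tape mounts/day (~{mounts_per_hour:.0f}/hour)")
print(f"214 TB corresponds to ~{months_of_growth:.0f} months of growth at 11 TB/month")
```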
Supercomputer Procurement
11 RFP/Contract Process
Competitive process
Formal RFP released in mid-2002
Bidders had to meet all mandatory requirements to advance to the next phase
Bids above the funding envelope were rejected
Bids were rated on performance (90%) and price (10%); IBM came first
Live preliminary benchmark at an IBM facility
Contract awarded to IBM in November 2002
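To make the 90/10 weighting concrete, a minimal scoring sketch; the normalized 0-1 scores and the example values are illustrative assumptions, not the actual evaluation formula from the RFP.

```python
# Illustrative weighted scoring, assuming normalized 0-1 scores for each criterion.
# The slide only gives the 90% performance / 10% price weighting; everything else
# here (score scale, example values) is assumed for illustration.

def weighted_score(performance_score: float, price_score: float) -> float:
    """Combine normalized (0-1) performance and price scores with 90/10 weights."""
    return 0.9 * performance_score + 0.1 * price_score

# Example: a bid with strong benchmark performance but a weaker price score
print(weighted_score(performance_score=0.95, price_score=0.70))  # 0.925
```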
12 IBM Committed Sustained Performance in MSC TFlops
13 On-going Support
The systems must achieve a minimum of 99% availability.
Remedial maintenance 24/7: 30-minute response time between 8 A.M. and 5 P.M. on weekdays; one-hour response outside those periods.
Preventive maintenance or engineering changes: maximum of eight (8) hours a month, in blocks of time not exceeding two (2) or four (4) hours per day (subject to certain conditions).
Software support: provision of emergency and non-emergency assistance.
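For context, a short sketch of how much downtime a 99% availability floor actually permits; the 30-day month and 365-day year are assumptions made only for the arithmetic.

```python
# What a 99% minimum availability target permits in downtime, for context.
# Assumes a 30-day month and a 365-day year purely for illustration.

availability = 0.99

hours_per_month = 30 * 24
hours_per_year = 365 * 24

max_downtime_month = (1 - availability) * hours_per_month  # ~7.2 hours/month
max_downtime_year = (1 - availability) * hours_per_year    # ~87.6 hours/year

print(f"Allowed downtime: ~{max_downtime_month:.1f} h/month, ~{max_downtime_year:.1f} h/year")
```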
15 Supercomputers: Now and Then…
NEC: 10 nodes, 80 PEs, 640 GB memory, 2 TB disks, 640 Gflops (peak)
IBM: 112 nodes, 928 PEs, 2.19 TB memory, 15 TB disks, 4.825 Tflops (peak)
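A quick comparison of the two machines, using only the figures on this slide (simple ratios, no other assumptions):

```python
# Ratios between the IBM and NEC systems, using only the figures on this slide.

nec = {"nodes": 10, "pes": 80, "memory_tb": 0.64, "disk_tb": 2, "peak_tflops": 0.640}
ibm = {"nodes": 112, "pes": 928, "memory_tb": 2.19, "disk_tb": 15, "peak_tflops": 4.825}

for key in nec:
    print(f"{key}: {ibm[key] / nec[key]:.1f}x")
# nodes ~11.2x, PEs ~11.6x, memory ~3.4x, disk ~7.5x, peak flops ~7.5x
```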
Conversion Experience
17 Transition to IBM
Prior to conversion:
Developed and tested code on multiple platforms (MPP & vector)
Where possible, avoided proprietary tools
When required, hid proprietary routines under a local API to centralize changes (see the sketch below)
Following contract award, obtained early access to the IBM (thanks to NCAR)
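A minimal sketch of the "local API over proprietary routines" idea; the routine names and the pure-Python stand-ins are hypothetical, since the slide does not name the actual vendor libraries involved.

```python
# Minimal sketch of hiding platform-specific code behind a local API so that a
# port only touches this one module. The "vendor" implementations here are
# stand-ins; real NWP code would wrap proprietary vector/MPP library calls.

import math

def _sum_of_squares_vendor_a(values):
    # Stand-in for a vendor-optimized routine on platform A (e.g. the vector machine).
    return math.fsum(v * v for v in values)

def _sum_of_squares_vendor_b(values):
    # Stand-in for the equivalent routine on platform B (e.g. the MPP machine).
    return sum(v * v for v in values)

# Application code imports only this name; switching platforms means changing
# this single binding (or the build configuration), not every call site.
sum_of_squares = _sum_of_squares_vendor_a

if __name__ == "__main__":
    print(sum_of_squares([1.0, 2.0, 3.0]))  # 14.0
```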
18 Transition to IBM
Application Conversion Plan:
Plan was made to comply with a tight schedule
Key element: rapid implementation of an operational "core" (to test the mechanics)
Phased approach for optimized versions of the models
End-to-end testing scheduled to begin mid-September
19 Scalability Rule of Thumb
To obtain the required wall-clock timings, roughly four times more IBM processors are needed to match the performance of the NEC SX-6.
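A minimal sketch of this rule of thumb applied to a processor count; the 4x factor is the slide's approximation, and the actual per-application configurations appear in the preliminary timings on the next slide.

```python
# The slide's rule of thumb: ~4x more IBM processors to match NEC SX-6 performance.
# This is an approximation; the timings on the next slide show the configurations
# actually used for each application.

SCALE_FACTOR = 4  # IBM PEs per NEC SX-6 PE, per the rule of thumb

def ibm_pes_needed(nec_pes: int) -> int:
    """Estimate the IBM processor count needed to match a given NEC SX-6 count."""
    return SCALE_FACTOR * nec_pes

# Example: the regional GEMDM run used 8 NEC MPI processes; the rule of thumb
# predicts 8 * 4 = 32 IBM PEs (the 8 MPI x 4 OpenMP configuration on the next slide).
print(ibm_pes_needed(8))  # 32
```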
20 Preliminary Results
Timing (min) and configuration, NEC vs. IBM:
GEMDM regional (48-hour forecast): NEC 38 min (8 MPI); IBM 34 min (8 MPI x 4 OpenMP, 32 total)
GEMDM global (240-hour forecast): NEC 35 min (4 MPI); IBM 36 min (4 MPI x 4 OpenMP, 16 total)
3D-Var (R1), without AMSU-B: NEC 11 min (4 CPUs); IBM 9 min (6 CPUs, OpenMP)
3D-Var (G2), with AMSU-B: NEC 16 min (4 CPUs); IBM 12 min (6 CPUs, OpenMP)
Scientific Directions
22 Global System
Now:
Uniform resolution of 100 km (400 X 200 X 28)
3D-Var at T108 on model levels, 6-hr cycle
Use of raw radiances from AMSU-A, AMSU-B and GOES
2004:
Resolution to 35 km (800 X 600 X 80)
Top at 0.1 hPa (instead of 10 hPa), with additional AMSU-A and AMSU-B channels
4D-Var assimilation, 6-hr time window with 3 outer loops at full model resolution and inner loops at T108 (CPU equivalent of a 5-day forecast with the full-resolution model)
New datasets: profilers, MODIS winds, QuikScat
2005+:
Additional datasets (AIRS, MSG, MTSAT, IASI, GIFTS, COSMIC)
Improved data assimilation
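To make the 2004 upgrade concrete, a quick grid-size calculation from the dimensions on this slide; the same arithmetic applies to the regional, ensemble and mesoscale grids on the following slides.

```python
# Grid-point counts for the global system, using the dimensions on this slide.

def grid_points(nx: int, ny: int, nlev: int) -> int:
    """Total number of grid points for an nx x ny x nlev configuration."""
    return nx * ny * nlev

now_2003 = grid_points(400, 200, 28)   # 100 km grid:  2,240,000 points
plan_2004 = grid_points(800, 600, 80)  #  35 km grid: 38,400,000 points

print(f"2004 grid has ~{plan_2004 / now_2003:.0f}x more points than the current one")
# ~17x more grid points (and the compute cost grows faster still, since a finer
# grid also requires a shorter model time step)
```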
23 Regional System
Now:
Variable resolution, uniform region at 24 km (354 X 415 X 28)
3D-Var assimilation on model levels at T108, 12-hr spin-up
2004:
Resolution to 15 km in the uniform region (576 X 641 X 58)
Inclusion of AMSU-B and GOES data in the assimilation cycle
Four model runs a day (instead of two)
New datasets: profilers, MODIS winds, QuikScat
2006:
Limited-area model at 10 km resolution (800 X 800 X 60)
LAM 4D-Var data assimilation
Assimilation of radar data
24 Ensemble Prediction System
Now:
16-member global system (300 X 150 X 28)
Forecasts up to 10 days, once a day at 00Z
Optimal Interpolation assimilation system, 6-hr cycle, use of derived radiance data (SATEMs)
2004:
Ensemble Kalman Filter assimilation system, 6-hr cycle, use of raw radiances from AMSU-A, AMSU-B and GOES
Two forecast runs per day (12Z run added)
Forecasts extended to 15 days (instead of 10)
25 …Ensemble Prediction System
2005:
Increased resolution to 100 km (400 X 200 X 58)
Increased membership to 32
Additional datasets, as in the global deterministic system
2007:
Prototype regional ensemble: 10-member LAM (500 X 500 X 58)
No distinct data assimilation; initial and boundary conditions from the global EPS
26 Mesoscale System
Now:
Variable resolution, uniform region at 10 km (290 X 371 X 35)
Two windows; no data assimilation
2004:
Prototype limited-area model at 2.5 km (500 X 500 X 45) over one area
2005:
Five limited-area model windows at 2.5 km (500 X 500 X 45)
2008:
4D data assimilation
27 Coupled Models
Today:
In R&D: coupled atmosphere, ocean, ice, wave, hydrology, biology & chemistry
In production: storm surge, wave
Future:
Global coupled model for both prediction & data assimilation
28 Estuary & Ice Circulation
29 Merchant Ship Routing
(Chart: engine power vs. time over roughly 2 days, comparing a southern route and a northern route)
30 Challenges
IT security
Accelerated storage needs
Replacement of front-end & storage infrastructure