Computing at the Norwegian Meteorological Institute
Roar Skålin, Director of Information Technology, Norwegian Meteorological Institute
CAS 2003, Annecy

Norwegian Meteorological Institute met.no

Norwegian Meteorological Institute
– Main office in Oslo
– Regional offices in Bergen and Tromsø
– Aviation offices at four military airports and at Spitsbergen
– Three Arctic stations: Jan Mayen, Bear Island and Hopen
– 430 employees

met.no Computing Infrastructure (x/y/z = processors / GB memory / TB disk)
– NTNU - Trondheim: SGI O3800 512/512/7.2 and SGI O3800 384/304/7.2 sharing a CXFS filesystem, Backup Server SGI O200/2, DLT 20 TB
– Climate Storage 2/8/20 with S-AIT 33 TB
– met.no - Oslo: Production Dell 4/8 Linux, Production Dell 2/4 Linux, NetApp 790 GB, Scali Cluster 20/5/0.3, XX Cluster y/y/y, STA 5 TB, Storage Server SGI O2000/2, DLT 20 TB
– Sites connected through a switch and router; links of 2.5 GBit/s, 1 GBit/s, 155 MBit/s and 100 MBit/s

met.no Local Production Servers
Production Environment, November 2003:
– Dell PowerEdge servers with two and four CPUs
– NetApp NAS
– Linux
– ECMWF Supervisor Monitor Scheduler (SMS)
– Perl, shell, Fortran, C++, XML, MySQL, PostgreSQL
– Cfengine

Linux replaces proprietary UNIX at met.no
Advantages:
– Off-the-shelf hardware replaces proprietary hardware: reduced cost of new servers and reduced operational costs
– Overall increased stability
– Easier to fix OS problems
– Changing hardware vendor becomes feasible
– met.no becomes an attractive IT employer with highly motivated employees
Disadvantages:
– Cost of porting software
– High degree of freedom: a Linux distribution is as many systems as there are users

Data storage: A critical resource
– We may lose N-1 production servers and still be up and running, but data must be available everywhere, all the time
– We used to duplicate data files, but increased use of databases reduces the value of this strategy
– met.no replaced a SAN with a NetApp NAS because of:
  – availability
  – Linux support
  – ”sufficient” IO-bandwidth (40-50 MB/s per server)
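
A simple way to check the quoted per-server IO-bandwidth on a given machine is to time a large sequential write to the NAS mount. A minimal sketch, assuming a hypothetical mount point /mnt/netapp and a 1 GB test file (neither is a met.no path):

```python
# Sequential-write throughput check against an NFS/NAS mount.
# /mnt/netapp and the 1 GB test size are illustrative assumptions.
import os
import time

path = "/mnt/netapp/throughput_test.dat"
chunks, chunk = 1024, b"\0" * (1 << 20)     # 1024 x 1 MB = 1 GB

start = time.time()
with open(path, "wb") as f:
    for _ in range(chunks):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())                    # force the data out to the filer
elapsed = time.time() - start

print(f"{chunks / elapsed:.1f} MB/s")       # ~40-50 MB/s according to the slide
os.remove(path)
```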

Norwegian Meteorological Institute met.no HPC in Norway – A national collaboration

Performance available to met.no over time (chart): CRAY X-MP, CRAY Y-MP, CRAY T3E, SGI O3000

met.no Production Compute Servers
SGI Origin 3800 systems:
– Embla: 512 MIPS R14K processors, 614 Gflops peak, 512 GB memory
– Gridur: 384 MIPS R14K processors, 384 Gflops peak, 304 GB memory
– IRIX OS / LSF batch system
– 7.2 TB CXFS filesystem
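
The quoted peaks correspond to roughly 1.0-1.2 Gflops per R14K processor. A small sanity check, assuming 2 floating-point operations per clock cycle and clock rates of 600 MHz for Embla and 500 MHz for Gridur (the clock rates are assumptions, not stated on the slide):

```python
# Rough check of the quoted peak Gflops figures. The clock rates and
# the 2 flops/cycle value are illustrative assumptions.
FLOPS_PER_CYCLE = 2

def peak_gflops(processors: int, clock_mhz: float) -> float:
    """Peak Gflops = processors * clock in GHz * flops per cycle."""
    return processors * (clock_mhz / 1000.0) * FLOPS_PER_CYCLE

print(peak_gflops(512, 600))   # 614.4 -> matches Embla's 614 Gflops
print(peak_gflops(384, 500))   # 384.0 -> matches Gridur's 384 Gflops
```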

Atmospheric models
– HIRLAM 20 (operational): 20 km, 40 layers, 360 s timestep; 468x378 grid points; +60 h forecast; 8 GB result data per 24 h
– HIRLAM 10 (operational): 10 km, 40 layers, 240 s timestep; 248x341 grid points; +48 h forecast; 1.2 GB result data per 24 h
– HIRLAM 5 (operational): 5 km, 40 layers, 120 s timestep; 152x150 grid points; +48 h forecast; 0.3 GB result data per 24 h
– UM (experimental): 3 km, 38 layers, 75 s timestep; 280x276 grid points; +48 h forecast; 2.1 GB result data per 24 h
– MM5 (air pollution): 3 km, 17 layers, 9 s and 1 km, 17 layers, 3 s (nested); 61x76 grid points; +48 h forecast; 0.1 GB result data per 24 h
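
The resolution and forecast-length figures determine how many timesteps each run has to integrate. An illustrative calculation (the helper below is hypothetical, not part of the operational suite):

```python
# Illustrative: number of model timesteps per forecast, from the
# timestep and forecast length listed above.
def timesteps(forecast_hours: int, timestep_seconds: int) -> int:
    return forecast_hours * 3600 // timestep_seconds

print(timesteps(60, 360))   # HIRLAM 20: 600 steps
print(timesteps(48, 240))   # HIRLAM 10: 720 steps
print(timesteps(48, 120))   # HIRLAM 5: 1440 steps
```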

Oceanographic models: MIPOM22, ECOM3D, WAVE, ECOM3D
– Purpose: Operation / Exp
– Resolution: 4 km, 17 layers, 150 s; … km, 21/5 layers, 600 s; 45/8 km, 4/300 m; … km, 17 layers, 360/50 s
– Grid points: 1022x578; 208x120; 142x…; …x250
– Forecast time: 60 h
– Result data per 24 h: 1 GB

Production timeline (chart): HIRLAM20, HIRLAM10, HIRLAM5, UM, MM5, ECOM3D/WAVE, ECOM3D, MIPOM22

HIRLAM scales, or …?
– The forecast model without I/O and support programs scales reasonably well up to 512 processors on an SGI O3800
– In real life:
  – data transfer, support programs and I/O have very limited scaling
  – there are other users of the system
  – machine-dependent modifications to increase scaling have a high maintenance cost for a shared code such as HIRLAM
– For cost-efficient operational use, 256 processors seems to be the limit
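
The limit described above is essentially Amdahl's law: once a small non-scaling fraction (I/O, data transfer, support programs) is included, doubling the processor count beyond ~256 buys little. A minimal sketch, with assumed serial fractions of 1% and 2% that are purely illustrative:

```python
# Amdahl's-law sketch. The serial fractions are illustrative
# assumptions, not met.no measurements.
def speedup(processors: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for f in (0.01, 0.02):
    for p in (128, 256, 512):
        print(f"serial={f:.2f}  p={p:3d}  speedup={speedup(p, f):5.1f}")
# With a 1-2% serial fraction, going from 256 to 512 processors
# improves the speedup by only about 10-15%.
```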

How to utilise 898 processors operationally?
– Split into two systems of 512 and 384 processors, used as primary and backup systems
– Will test a system to overlap model runs based on dependencies (HIRLAM 20, HIRLAM 10, ECOM3D, HIRLAM 5, WAVE)
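
A toy illustration of the "overlap model runs based on dependencies" idea (operationally this is handled by ECMWF SMS; the dependency edges below are hypothetical, loosely inspired by the slide): resolve the dependency table into waves of runs that may start concurrently.

```python
# Toy dependency-driven overlap of model runs. The graph is a
# hypothetical example, not the actual met.no suite definition.
deps = {
    "HIRLAM 20": [],
    "HIRLAM 10": ["HIRLAM 20"],
    "ECOM3D":    ["HIRLAM 20"],
    "HIRLAM 5":  ["HIRLAM 10"],
    "WAVE":      ["HIRLAM 10"],
}

def waves(deps):
    """Group runs into waves; each wave depends only on earlier waves."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = [m for m, d in deps.items()
                 if m not in done and all(x in done for x in d)]
        if not ready:
            raise ValueError("cyclic dependencies")
        order.append(ready)
        done.update(ready)
    return order

print(waves(deps))
# [['HIRLAM 20'], ['HIRLAM 10', 'ECOM3D'], ['HIRLAM 5', 'WAVE']]
```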

Overlapping Production Timeline (chart): HIRLAM20, HIRLAM10, HIRLAM5, UM, MM5, ECOM3D/WAVE, ECOM3D, MIPOM22

RegClim: Regional Climate Development Under Global Warming
Overall aim:
– Produce scenarios for regional climate change suitable for impact assessment
– Quantify uncertainties
Some keywords:
– Involves personnel from met.no, universities and research organisations
– Based on global climate scenarios
– Dynamical and empirical downscaling
– Regional and global coupled models
– Atmosphere, ocean and sea ice

Climate Computing Infrastructure (x/y/z = processors / GB memory / TB disk)
– NTNU - Trondheim: SGI O3800 512/512/7.2 and SGI O3800 384/304/7.2 sharing a CXFS filesystem
– Para//ab - Bergen: IBM Cluster 64/64/0.58, IBM p690 Regatta 96/320/7, IBM … TB
– Climate Storage 2/8/20 with S-AIT 33 TB
– Sites connected by router; links of 2.5 GBit/s and 155 MBit/s

Climate Storage Server
Low-cost solution:
– Linux server
– Brocade switch
– Nexsan AtaBoy/AtaBeast RAID, 19.7 TB
– 34 TB Super-AIT library, tape capacity 0.5 TB uncompressed

GRID in Norway
– Test grid comprising experimental computers at the four universities
– Globus 2.4 -> 3.0
– Two experimental portals: Bioinformatics and Gaussian
– Testing of large datasets (Storage Resource Broker) and metascheduling planned for autumn 2003
– Plan to phase in production supercomputers in 2004