ARGONNE NATIONAL LABORATORY Climate Modeling on the Jazz Linux Cluster at ANL. John Taylor, Mathematics and Computer Science & Environmental Research Divisions, Argonne National Laboratory, and Computation Institute, University of Chicago

ARGONNE NATIONAL LABORATORY Outline
- A description of the Jazz Linux cluster at Argonne National Laboratory
- Porting and performance of climate codes on the Jazz Linux cluster
  - Community Climate System Model (CCSM)
  - Community Atmosphere Model (CAM)
  - Mesoscale Meteorological Model (MM5v3.4)
- Regional climate modeling studies at ANL
  - Long climate simulations on Jazz using MM5v3.6
  - Tools for regional climate modeling

ARGONNE NATIONAL LABORATORY ANL Jazz Linux Cluster
- Compute - 350 nodes, each with a 2.4 GHz Pentium Xeon
- Memory - 175 nodes with 2 GB of RAM, 175 nodes with 1 GB of RAM
- Storage - 20 TB of cluster-wide disk: 10 TB GFS and 10 TB PVFS
- Network - Myrinet 2000
- Linpack benchmark - ~1 TFLOP

ARGONNE NATIONAL LABORATORY Community Climate System Model - CCSM
- Downloaded the standard release of CCSM from the NCAR web site
- The current MPICH release is compatible with the multiple-executable, multiple-data (MPMD) model used by CCSM
- The build process needs modification, e.g. to use the mpif90 and mpicc compiler wrappers (a minimal wrapper check is sketched below)
- Environment variables must be set in the shell
- Uses the pgf90 compiler
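Before attempting the full CCSM build it is worth confirming that the MPICH wrappers and shell environment are consistent. The following minimal MPI test is illustrative only (it is not part of CCSM or the original slides); compiled with the mpicc wrapper and launched across a few Jazz nodes, it verifies that wrapper, libraries, and node environment agree:

    #include <stdio.h>
    #include "mpi.h"

    /* Build with the wrapper under test, e.g.: mpicc wrapper_check.c -o wrapper_check */
    int main(int argc, char *argv[])
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
        MPI_Get_processor_name(name, &namelen); /* which node we landed on */

        printf("rank %d of %d running on %s\n", rank, size, name);

        MPI_Finalize();
        return 0;
    }

A matching Fortran check built with mpif90 would exercise the pgf90 side of the toolchain in the same way.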

ARGONNE NATIONAL LABORATORY Community Climate System Model - CCSM
- CCSM runs well on Jazz - now at 3 simulated years per wall-clock day
- Load balance could be further optimized on Jazz
- CCSM 2.1 will include the build modifications used to run CCSM on Jazz
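For scale, an illustrative conversion (not a figure from the slides): at this rate a century-long run would occupy roughly a month of wall-clock time,

    \frac{100\ \text{model years}}{3\ \text{model years per wall-clock day}} \approx 33\ \text{wall-clock days}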

ARGONNE NATIONAL LABORATORY Community Atmosphere Model - CAM
- Downloaded the standard release of CAM from the NCAR web site
- The Makefile needs modification to use the mpif90 and mpicc wrappers
- Switched on the 2-D finite-volume dynamics
- Assessed performance using 64, 92, 128, and 184 processors
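The processor-count sweep is easiest to interpret in terms of speedup and parallel efficiency relative to the smallest run. These are standard definitions, added here for context rather than taken from the slides:

    S(p) = \frac{T(p_0)}{T(p)}, \qquad E(p) = \frac{p_0\,T(p_0)}{p\,T(p)}

where T(p) is the measured wall-clock time on p processors and p_0 = 64 is the baseline processor count in the runs above.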

ARGONNE NATIONAL LABORATORY CAM with 2-D FV Dynamics (performance figure). Acknowledgement: IBM Pwr3 data from Art Mirin, LLNL

ARGONNE NATIONAL LABORATORY Performance and Scaling
- Performed the standard MM5 benchmark on the Jazz Linux cluster at ANL
- Ported MM5 to the Intel compilers on IA-32 and IA-64
- Added MPE calls to facilitate profiling on Jazz, the TeraGrid, etc. (see the sketch below)
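As an illustration of the kind of MPE instrumentation referred to here, the sketch below uses MPE's manual logging API around a dummy time-step loop. The event names, loop structure, and output file name are assumptions chosen for illustration; they are not taken from the MM5 source.

    #include <stdio.h>
    #include "mpi.h"
    #include "mpe.h"

    /* Typical build: mpicc mpe_timestep_demo.c -o mpe_timestep_demo -lmpe */
    int main(int argc, char *argv[])
    {
        int rank, step, ev_step_start, ev_step_end;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPE_Init_log();                                   /* start MPE logging */

        /* Reserve event numbers and describe the state they bracket. */
        ev_step_start = MPE_Log_get_event_number();
        ev_step_end   = MPE_Log_get_event_number();
        if (rank == 0)
            MPE_Describe_state(ev_step_start, ev_step_end, "timestep", "blue");

        for (step = 0; step < 10; step++) {
            MPE_Log_event(ev_step_start, step, "step begin");
            /* ... model time step (dynamics, physics, halo exchange) would go here ... */
            MPE_Log_event(ev_step_end, step, "step end");
        }

        MPE_Finish_log("mm5_profile");    /* write the log file for later viewing */
        MPI_Finalize();
        return 0;
    }

The resulting log can then be examined with a viewer such as Jumpshot to see where time is spent on Jazz or the TeraGrid.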

ARGONNE NATIONAL LABORATORY MM5 Benchmark on Jazz at ANL Source: John Michalakes, NCAR

ARGONNE NATIONAL LABORATORY MM5 over Myrinet on Jazz

ARGONNE NATIONAL LABORATORY MM5 over Ethernet on Jazz

ARGONNE NATIONAL LABORATORY Regional Climate Modeling
- Parallel regional climate model development and testing based on MM5v3.6 -> WRF
- Contributing to PIRCS experiments
  - PIRCS 1b and, currently, the PIRCS 1c 15-year run
- Downscaling using boundary and initial conditions derived from high-resolution CCM runs made at LLNL

ARGONNE NATIONAL LABORATORY Regional Climate Modeling
- Testbed for a regional climate simulation laboratory - the Espresso interface
- Delivering regional climate data using interactive web-based tools
- Performance testing and porting to the NSF TeraGrid

ARGONNE NATIONAL LABORATORY PIRCS 1-B Experiment
- We are using Version 3 of the Penn State / NCAR MM5, with the OSU land surface model
- Total precipitation results for the period June 1 - July 31, 1993 are shown in the center panel
- Note the agreement with both the NCEP reanalysis forcing data (left panel) and the NCDC half-degree Cressman analysis of observations (right panel)
- We plan to use this experiment and PIRCS 1a (the 1988 US drought) as primary test beds for further enhancements of model physics

ARGONNE NATIONAL LABORATORY PIRCS-1c 1987 Precipitation

ARGONNE NATIONAL LABORATORY PIRCS-1c June 1988 Temperatures
- We are using Version 3 of the Penn State / NCAR MM5 at 52 km grid resolution, with the OSU land surface model and NCEP I boundary and initial condition data

ARGONNE NATIONAL LABORATORY PIRCS-1c 1988 Precipitation

ARGONNE NATIONAL LABORATORY PIRCS-1c 1989 Precipitation

ARGONNE NATIONAL LABORATORY Espresso Motivation
- Large modeling systems are difficult to configure and run
- Running complex scientific models can require substantial computing skills
- Managing the computer science reduces the time available for doing science and limits what is possible, e.g. MM5 requires many jobs to be submitted to set up and perform a one-year run
- Current approaches are prone to error (especially where the build process is complex)

ARGONNE NATIONAL LABORATORY Motivation (Cont.)
- Contemporary software tools are not being exploited, e.g. Java, XML, the Globus Toolkit for distributed computing, etc.
- Provide secure access to remote supercomputing resources

ARGONNE NATIONAL LABORATORY Approach
- Develop a flexible graphical user interface (GUI) with low maintenance and development costs
- Incorporate modern software tools in order to dramatically increase flexibility and efficiency while reducing the chance of operator error = Espresso!

ARGONNE NATIONAL LABORATORY Espresso Modeling Interface

ARGONNE NATIONAL LABORATORY CO2 Sensitivity (ppm), July 1986

ARGONNE NATIONAL LABORATORY CO2 Sensitivity (ppm), December 1986

ARGONNE NATIONAL LABORATORY Interactive Data Analysis

ARGONNE NATIONAL LABORATORY Interactive Data Analysis

ARGONNE NATIONAL LABORATORY Conclusions
- Key climate modeling codes (CCSM, CAM, MM5v3) are performing well on the Jazz Linux cluster
  - Multi-year regional climate simulations can be achieved on existing IA-32 Linux supercomputers
- Future work
  - NSF TeraGrid (IA-64)
  - WACCM model with atmospheric chemistry code
  - Downscaling using high-resolution global GCM data

ARGONNE NATIONAL LABORATORY Argonne Climate Modeling Group