1
MS698: Implementing an Ocean Model
Benchmark tests.
Dealing with big data files (specifically big NetCDF files).
Signell paper.
Activity: nctoolbox in MATLAB.
April 4, 2014
2
Benchmark Tests (model grid is 50x70)

Name      NtileI,NtileJ   No. of Processors   No. of Nodes   PPN   Type of Node   CPU Time (hr)   Wall Time (hr)
Julia     1,1             1                   1              1     score          1.8             --
Daniel    1,2             2                   1              2     score          1.7             0.8
Danielle  4,1             4                   1              4     qcore          2.5             0.6
Danny     1,8             8                   1              8     qcore          2.4             0.3
Jiabi     4,2             8                   1              8     qcore          2.8             0.4
Jiabi     2,4             8                   1              8     qcore          2.6             0.3
Britt     2,1             2                   1              2     score          3.1             1.5
Fei       2,2             4                   1              4     dcore          2.7             0.7

(Julia's 1x1 run reported a single value, 1.8; it is unclear from the source whether this is CPU or wall time.)
3
Activity Today, Part I: Analyze Benchmark Tests
Plot the wall time vs. the number of processors.
Make another figure of the speedup vs. the number of processors.
Discuss the model performance when run in parallel (a MATLAB sketch follows this slide):
– Do you see a difference in speedup depending on how the model tiles were configured?
– Does it seem worthwhile to run the model in parallel up to the number of processors tested (8)?
– Based on these benchmarks, how would you set up a parallel run if you wanted to represent a long period of time?
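A minimal MATLAB sketch of the Part I analysis, using the wall times from the benchmark table on slide 2. Treating 1.8 hr as the serial wall-time baseline is an assumption, since the table reports only one number for Julia's 1x1 run.

% Wall times from the benchmark table (hours). The 1-processor baseline of
% 1.8 hr is an assumption -- the table gives a single number for the serial
% run, and it is unclear whether it is CPU or wall time.
nproc    = [1    2    4    8    8    8    2    4  ];
walltime = [1.8  0.8  0.6  0.3  0.4  0.3  1.5  0.7];
speedup  = walltime(1) ./ walltime;   % speedup relative to the serial run

figure
subplot(2,1,1)
plot(nproc, walltime, 'o'), grid on
xlabel('Number of processors'), ylabel('Wall time (hr)')
subplot(2,1,2)
plot(nproc, speedup, 'o'), hold on, grid on
plot(1:8, 1:8, '--')                  % ideal linear speedup, for reference
xlabel('Number of processors'), ylabel('Speedup')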
5
Themes of the paper
Model output makes big data, at roughly the terabyte scale.
Data access is limited by bandwidth in many cases. But often you don't want or need the entire file or dataset; you just need part of it.
This matters especially in collaborative settings where different models are being combined or compared: comparing models can be a pain (different grids, different timesteps, different variable names, different units...).
Because the output data is "big," it may be spread across multiple files so that each file stays under 2 GB.
6
Section 2 gives 5 pieces of advice
1. Store data in a machine-independent, self-describing format (like NetCDF).
2. Use CF (Climate and Forecast) conventions. This makes it easier for processing scripts to figure out the model grid, variables, etc. It is especially important for users who do not know the details of the model.
3. Use and develop generic tools that work with CF-compliant data.
4. Use OPeNDAP to distribute data. This lets the data be served over the internet so that subsets of the data can be accessed one request at a time.
5. Use a THREDDS catalog. This lets you string a lot of data files together into one dataset.
(A brief nctoolbox sketch of advice 3-5 follows this slide.)
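To make advice 3-5 concrete, here is a minimal nctoolbox sketch in MATLAB: opening an OPeNDAP/THREDDS URL works just like opening a local NetCDF file, and only the subset you index is transferred over the network. The URL and the variable name 'salt' are hypothetical placeholders; substitute your own endpoint and variable.

setup_nctoolbox                       % add nctoolbox to the MATLAB path
url = 'http://example.edu/thredds/dodsC/roms/his_agg.nc';   % hypothetical URL
nc  = ncgeodataset(url);              % same call works for local files or OPeNDAP

nc.variables                          % list the variables the dataset provides
nc.attributes('salt')                 % CF attributes: units, standard_name, ...

% Request only the surface layer at the last time step -- a small transfer
% even when the full dataset is terabyte scale.
sz   = nc.size('salt');               % for ROMS output: [time, s_rho, eta, xi]
surf = squeeze(double(nc{'salt'}(sz(1), sz(2), :, :)));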
7
Now for our example: How can we deal with big datasets? Use the MATLAB script in /export/home/ckharris/MODELS/ROMS/RIVERPLUME2/MS698…
8
We are going to use nctoolbox in MATLAB to analyze big NetCDF data on the vlab computers.
Why are we going to use the vlab computers?
– They have new enough versions of MATLAB and Java, and they can access our model output.
– They avoid the step of logging onto the cluster or poverty. (You might be able to do this from poverty as well. You can use these tools on pacific, but running jobs interactively on the cluster requires some extra steps.)
Why do we want to use nctoolbox?
– It gives us some useful tools for concatenating across history files (see the sketch after this slide).
– It has some useful tools for analyzing ocean model data.
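As a sketch of the concatenation point above: assuming the run wrote a sequence of history files matching ocean_his_*.nc (a hypothetical name pattern), and that we want the free-surface timeseries at one hypothetical grid point, a simple loop over files does the job.

setup_nctoolbox
files = dir('ocean_his_*.nc');        % hypothetical history-file pattern
i0 = 10; j0 = 20;                     % hypothetical (xi, eta) grid point

t = []; zeta = [];
for k = 1:numel(files)
    nc   = ncgeodataset(files(k).name);
    t    = [t;    nc.time('ocean_time')];                    % MATLAB datenums
    zeta = [zeta; squeeze(double(nc{'zeta'}(:, j0, i0)))];   % free surface (m)
end

An alternative, in the spirit of the Signell paper's advice, is to aggregate the files on the server side (NcML or a THREDDS aggregation) and open the single aggregated URL with ncgeodataset.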
9
Now for our example: How can we deal with big datasets? Use the MATLAB script in /export/home/ckharris/MODELS/ROMS/RIVERPLUME2/MS698…
10
Activity for today
Plot the wall time vs. the number of processors.
Make another figure of the speedup vs. the number of processors.
Discuss the model performance when run in parallel:
– Do you see a difference in speedup depending on how the model tiles were configured?
– Does it seem worthwhile to run the model in parallel up to the number of processors tested (8)?
– Based on these benchmarks, how would you set up a parallel run if you wanted to represent a long period of time?
Use nctoolbox to plot a timeseries of the data from the RIVERPLUME2 test case (a sketch follows): /export/home/ckharris/MODELS/ROMS/RIVERPLUME2/MS698…
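A minimal sketch of the nctoolbox timeseries task, assuming a single history file from the RIVERPLUME2 case. The file name and the station indices are hypothetical placeholders, and the full path is abbreviated on the slide, so fill both in on your account.

setup_nctoolbox
nc = ncgeodataset('ocean_his.nc');    % hypothetical RIVERPLUME2 history file
t  = nc.time('ocean_time');           % model time as MATLAB datenums

sz  = nc.size('salt');                % [time, s_rho, eta, xi]
i0 = 10; j0 = 20;                     % hypothetical point in the river plume
sss = squeeze(double(nc{'salt'}(:, sz(2), j0, i0)));   % surface salinity

figure
plot(t, sss), datetick('x'), grid on
ylabel('Surface salinity'), title('RIVERPLUME2 surface salinity timeseries')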