1
Computational Physics Lecture 10 Dr. Guy Tel-Zur
2
Agenda: Quantum Monte Carlo, Star-HPC, Parallel Matlab, GPGPU Computing, Hybrid Parallel Computing (MPI + OpenMP), OpenFOAM
3
Administrative notes: The last class has been moved. Instead of Thursday, 6/1/11, the class will take place on Sunday, 2/1/11, 17:00-20:00, in Building 90, Room 141. Final projects: some students have not yet chosen a topic or have not emailed me their topic. *** It is urgent to settle the project arrangements ***
5
News: AMD will ship 16-core processors in 2011. How can a scientist continue to program in the old-fashioned serial way? Do you want to use only 1/16 of your computer's power?
6
Quantum Monte Carlo (MHJ, Chapter 11)
7
Star-HPC http://web.mit.edu/star/hpc/index.html StarHPC provides an on-demand computing cluster configured for parallel programming with both OpenMP and OpenMPI. StarHPC uses Amazon's EC2 web service to completely virtualize the parallel programming experience, allowing anyone to quickly get started learning MPI and OpenMP programming.
9
Username: mpiuser Password: starhpc08
13
Parallel Matlab
14
Unfortunately, Star-P is dead.
15
MatlabMPI http://www.ll.mit.edu/mission/isr/matlabmpi/matlabmpi.html#introduction
16
MatlabMPI Demo Installed on the vdwarf machines
17
Add to the Matlab path:
vdwarf2.ee.bgu.ac.il> cat startup.m
addpath /usr/local/PP/MatlabMPI/src
addpath /usr/local/PP/MatlabMPI/examples
addpath ./MatMPI
18
xbasic
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Basic Matlab MPI script that
% prints out a rank.
%
% To run, start Matlab and type:
%
%   eval( MPI_Run('xbasic',2,{}) );
%
% Or, to run on different machines, type:
%
%   eval( MPI_Run('xbasic',2,{'machine1' 'machine2'}) );
%
% Output will be piped into two files:
%
%   MatMPI/xbasic.0.out
%   MatMPI/xbasic.1.out
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% MatlabMPI
% Dr. Jeremy Kepner
% MIT Lincoln Laboratory
% kepner@ll.mit.edu
%%%%%%%%%%%%%%%%%%%%%%%%%%%%
19
% Initialize MPI.
MPI_Init;
% Create communicator.
comm = MPI_COMM_WORLD;
% Modify common directory from default for better performance.
% comm = MatMPI_Comm_dir(comm,'/tmp');
% Get size and rank.
comm_size = MPI_Comm_size(comm);
my_rank = MPI_Comm_rank(comm);
% Print rank.
disp(['my_rank: ',num2str(my_rank)]);
% Wait momentarily.
pause(2.0);
% Finalize Matlab MPI.
MPI_Finalize;
disp('SUCCESS');
if (my_rank ~= MatMPI_Host_rank(comm))
  exit;
end
20
Demo folder: ~/matlab/. Watch top on the other machine.
21
GPGPU
22
An interesting new article: http://www.computer.org/portal/c/document_library/get_file?uuid=2790298b-dbe4-4dc7-b550-b030ae2ac7e1&groupId=808735
23
GPGPU and Matlab http://www.accelereyes.com
24
GP-you.org
25
>> GPUstart
Copyright gp-you.org. GPUmat is distributed as Freeware.
By using GPUmat, you accept all the terms and conditions specified in the license.txt file.
Please send any suggestion or bug report to gp-you@gp-you.org.

Starting GPU
- GPUmat version: 0.270
- Required CUDA version: 3.2
There is 1 device supporting CUDA
CUDA Driver Version: 3.20
CUDA Runtime Version: 3.20
Device 0: "GeForce 310M"
  CUDA Capability Major revision number: 1
  CUDA Capability Minor revision number: 2
  Total amount of global memory: 455475200 bytes
- CUDA compute capability 1.2 ...done
- Loading module EXAMPLES_CODEOPT
- Loading module EXAMPLES_NUMERICS -> numerics12.cubin
- Loading module NUMERICS -> numerics12.cubin
- Loading module RAND
26
Executed on GPU:
A = rand(100, GPUsingle);  % A is on GPU memory
B = rand(100, GPUsingle);  % B is on GPU memory
C = A+B;                   % executed on GPU
D = fft(C);                % executed on GPU

Executed on CPU:
A = single(rand(100));     % A is on CPU memory
B = double(rand(100));     % B is on CPU memory
C = A+B;                   % executed on CPU
D = fft(C);                % executed on CPU

Let’s try this
27
GPGPU Demos
OpenCL demos are here:
C:\Users\telzur\AppData\Local\NVIDIA Corporation\NVIDIA GPU Computing SDK\OpenCL\bin\Win64\Release
and:
C:\Users\telzur\AppData\Local\NVIDIA Corporation\NVIDIA GPU Computing SDK\SDK Browser
My laptop has an Nvidia GeForce 310M with 16 CUDA cores.
28
oclParticles.exe
32
Hybrid MPI + OpenMP Demo
Machine file:
hobbit1
hobbit2
hobbit3
hobbit4
Each hobbit has 8 cores.
Compile with: mpicc -o mpi_out mpi_test.c -fopenmp
(MPI distributes the work between nodes; OpenMP uses the cores within each node.)
An idea for a final project!
cd ~/mpi; program name: hybridpi.c (a sketch of such a program is shown below)
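The slide refers to hybridpi.c but does not list it, so the following is a minimal, hypothetical sketch of how a hybrid MPI + OpenMP pi integration can look; it is not the course's file, and the file name hybridpi_sketch.c, the step count N, and the mpirun arguments are illustrative assumptions.

/* Hypothetical sketch of a hybrid MPI + OpenMP midpoint-rule pi integration.
 * Build (assumed):  mpicc -o hybridpi hybridpi_sketch.c -fopenmp
 * Run   (assumed):  mpirun -np 4 -machinefile machines ./hybridpi
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const long N = 100000000;        /* total number of integration steps (assumed) */
    const double h = 1.0 / N;        /* width of one integration step */
    int rank, size;
    long i;
    double local_sum = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI rank takes a strided subset of the steps; inside a rank,
       the OpenMP threads share that loop via a reduction. */
    #pragma omp parallel for reduction(+:local_sum)
    for (i = rank; i < N; i += size) {
        double x = (i + 0.5) * h;
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Combine the partial sums from all ranks on rank 0. */
    MPI_Reduce(&local_sum, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f  (MPI ranks: %d, OpenMP threads per rank: %d)\n",
               pi, size, omp_get_max_threads());

    MPI_Finalize();
    return 0;
}

With four hobbits of 8 cores each, one would typically start 4 MPI ranks (one per node) and let OpenMP use the 8 cores within each node, e.g. by setting OMP_NUM_THREADS=8.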
33
MPI is not yet installed on the hobbits; in the meantime use: vdwarf5, vdwarf6, vdwarf7, vdwarf8
34
top -u tel-zur -H -d 0.05
(-H: show threads, -d: delay between refreshes, -u: user)
35
Hybrid MPI+OpenMP continued
36
OpenFOAM