Using HPC for Ansys Mechanical
John Zaitseff and Jay Sul, May 2016
High Performance Computing
The problem in computer labs
Multiple computers locked for long periods of time
Often just a handful of students
All computers running Ansys CFX, Fluent or Mechanical
Often randomly rebooted by other students and/or staff
Cannot get a computer when you need it
Can lose results when you do
Image credit: John Zaitseff, UNSW
The solution: High Performance Computing
“High performance computing is used to solve real-world problems of significant scale or detail across a diverse range of disciplines including physics, biology, chemistry, geosciences, climate sciences, engineering and many others.”
— Intersect Australia, http://www.intersect.org.au/content/time-fast-computing
Image credit: IBM Blue Gene P supercomputer, Argonne National Laboratory
High Performance Computing architecture
Massively parallel distributed computational clusters
Many individual servers (“nodes”): dozens to thousands
Multiple processors per node: between 8 and 64 cores
Interconnected by fast networks
Almost always run Linux
– In our case: Rocks Linux Distribution on top of CentOS 6.x
The Leonardi cluster
Image credit: John Zaitseff, UNSW
High Performance Computing architecture
[Diagram] A head node and a storage node connect to the Internet and to an internal network switch, which links all the compute nodes:
Compute Node 1, Compute Node 2, Compute Node 3, Compute Node 4, …, Compute Node n
In a bladed system, the nodes are grouped into chassis:
Chassis 1: Compute Node 1-1, 1-2, 1-3, 1-4, …, 1-n
Chassis m: Compute Node m-1, m-2, m-3, m-4, …, m-n
Facilities for MECH students and staff
The Newton cluster
– For undergraduate students, postgraduates and staff
– MECH4620, MECH4100, MMAN4010, MMAN4020, MMAN4410, AERO4110 and AERO4120 students already have an account!
The Trentino cluster
– For postgraduate students and staff
– By application
The Leonardi cluster
– For postgraduate students and staff
– By application
UNSW R1 Data Centre
Image credit: John Zaitseff, UNSW
The Newton cluster: newton.mech.unsw.edu.au
10 × Dell R415 server nodes
– Head node: newton
– Compute nodes: newton01 to newton09
160 × AMD Opteron 4386 3.1 GHz processor cores
– Two physical processors per node
– Eight CPU cores per processor
– Only four floating-point units per processor
320 GB of main memory (32 GB per node)
12 TB of storage: 6 × 3 TB drives in RAID 6
1 Gb Ethernet network interconnect
http://cfdlab.unsw.wikispaces.net/
The Newton cluster
Image credit: John Zaitseff, UNSW
The Trentino cluster: trentino.mech.unsw.edu.au
16 × Dell R815 server nodes
– Head node: trentino
– Compute nodes: trentino01 to trentino15
1024 × AMD Opteron 6272 2.1 GHz processor cores
– Four physical processors per node
– Sixteen CPU cores per processor
– Only eight floating-point units per processor
2048 GB of main memory (128 GB per node)
30 TB of storage: 12 × 3 TB drives in RAID 6
4 × 1 Gb Ethernet network interconnect
http://cfdlab.unsw.wikispaces.net/
The Trentino cluster
Image credit: John Zaitseff, UNSW
The Leonardi cluster: leonardi.eng.unsw.edu.au
7 × HP BladeSystem c7000 blade enclosures
1 × HP ProLiant DL385 G7 server: leonardi
56 × HP BL685c G7 compute nodes
– Compute nodes: ec01b01 to ec07b08
2944 × AMD Opteron 6174 2.2 GHz and Opteron 6276 2.3 GHz processor cores
– Four physical processors per node
– Twelve or sixteen CPU cores per processor
8448 GB of main memory (96–512 GB per node)
93.5 TB of storage: 70 × 2 TB drives in RAID 6+0
2 × 10 Gb Ethernet network interconnect
http://leonardi.unsw.wikispaces.net/
Nodes in the Leonardi cluster
Image credit: John Zaitseff, UNSW
The Raijin cluster: raijin.nci.org.au
3592 × Fujitsu blade server nodes
Multiple login nodes
Multiple management nodes
57,472 × Intel Xeon E5-2670 2.60 GHz processor cores
160 TB of main memory
10 PB of storage using the Lustre distributed file system
56 Gb InfiniBand FDR network interconnect
http://nci.org.au/nci-systems/national-facility/peak-system/raijin/
Image credit: National Computational Infrastructure
Connecting to an HPC system
Use the Secure Shell protocol (SSH)
– Under Linux or Mac OS X: ssh username@hostname (for example, ssh z9693022@newton.mech.unsw.edu.au)
– Under Windows: PuTTY (Start » All Programs » PuTTY » PuTTY)
– Can install Cygwin: “that Linux feeling under Windows”
To connect to the Newton cluster:
– Hostname: newton.mech.unsw.edu.au
– Check the RSA2 fingerprint: 69:7e:64:75:57:67:ad:4c:21:8e:90:7d:8e:97:70:ce
– User name: your zID
– Password: your zPass
You will get a command line prompt, something like:
z9693022@newton:~ $
To exit, type exit and press ENTER.
Simple Linux commands
List files in a directory: ls [options] [pathname...]
– [ ] indicates optional parameters, ... indicates one or more parameters
– Italic fixed-width font indicates replaceable parameters
– Options include “-l” (letter L) for a long (detailed) listing
To show the current directory: pwd
To change directories: cd directory
– ~ is the home directory
– . is the current directory
– .. is the directory above the current one
– ~user is the home directory of user user
– Subdirectories are separated by “/”, e.g., /home/z9693022/src
To create directories: mkdir directory
To remove an empty directory: rmdir directory
To get help for a command: man command
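The directory commands above can be tried safely in a throwaway location. This is a minimal sketch; the names “src” and “project1” are made up for illustration:

```shell
# Try the basic commands in a scratch directory so nothing real is touched.
scratch=$(mktemp -d)     # create a throwaway directory
cd "$scratch"
mkdir src                # create a directory
mkdir src/project1       # and a subdirectory inside it
cd src/project1          # change into it
pwd                      # prints the full path, ending in src/project1
cd ..                    # go up one level, into src
rmdir project1           # remove the now-empty subdirectory
ls                       # src is now empty, so ls prints nothing
```

Running `man mkdir` (and so on) on the cluster shows the full set of options for each of these commands.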
More simple Linux commands
To output one or more files’ contents: cat filename...
To view one or more files page by page: less filename...
To copy one file: cp source destination
To copy one or more files to a directory: cp filename... dir
To preserve the “last modified” time-stamp: cp -p
To copy recursively: cp -pr source destination
To move one or more files to a different directory: mv filename... dir
To rename a file or directory: mv oldname newname
To remove files: rm filename...
Recommendation: use “ls filename...” before rm or mv: what happens if you accidentally type “rm *”? Or “rm * .c” instead of “rm *.c”? (note the space!)
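A short sketch of cp, mv and the “ls before rm” habit from the list above, again using throwaway files (all the file names here are hypothetical):

```shell
scratch=$(mktemp -d)             # work in a scratch directory
cd "$scratch"
echo "some results" > notes.txt  # create a small test file
cp -p notes.txt backup.txt       # copy it, preserving the time-stamp
mkdir archive
mv backup.txt archive            # move the copy into a directory
mv notes.txt report.txt          # rename the original
ls *.txt                         # check what the glob matches BEFORE...
rm *.txt                         # ...removing: only report.txt is deleted
```

The copy in archive/ survives the rm, because `*.txt` only matches files in the current directory.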
Transferring files
To copy files to a Linux or Mac OS X system: use scp, rsync or insync
To copy files to and from a Windows machine: use WinSCP (Start » All Programs » WinSCP » WinSCP), or scp or rsync under Cygwin
To copy files to and from the Newton cluster:
– Host name: newton.mech.unsw.edu.au
– Check the RSA2 fingerprint: 69:7e:64:75:57:67:ad:4c:21:8e:90:7d:8e:97:70:ce
– User name: your zID
– Password: your zPass
Using WinSCP, simply drag and drop files from one pane to the other.
Editing files
Use an editor to edit text files
Many choices, leading to “religious wars”!
Some options: GNU Emacs, Vim, Nano
Nano is very simple to use: nano filename
– CTRL-X to exit (you will be asked to save any changes)
GNU Emacs and Vim are highly customisable and programmable
– For example, see the file ~z9693022/.emacs
– Debra Cameron et al., Learning GNU Emacs, 3rd Edition, O’Reilly Media, December 2004. ISBN 9780596006488, 9780596104184
– Arnold Robbins et al., Learning the vi and Vim Editors, 7th Edition, O’Reilly Media, July 2008. ISBN 9780596529833, 9780596159351
Running Ansys Mechanical jobs
1. Set up your job using Ansys Mechanical as per normal
2. Connect to the Newton cluster using PuTTY
3. Create a directory for this particular job
4. Transfer the required files to that directory using WinSCP
5. Create an appropriate script file
6. Submit the job to the Newton queue
7. Periodically check the status of the job
8. Once finished, transfer the output files to your desktop computer
9. Check the results using the standard Ansys Mechanical tools
Step 1a: Setting up the job: Use Workbench
a. Start Ansys Workbench
b. Open a project file
c. Choose a relevant item under Static Structural (B5)
d. Select the Tools menu item, then Write Input File…
e. Choose a filename with an extension of .inp (APDL Input Files)
Note: Avoid spaces in all file and directory names!
Step 1b: Setting up the job: Define the job
a. Run the Ansys Mechanical APDL Product Launcher (same version as Workbench)
b. Create a working directory folder using the usual Windows Explorer (WIN and E keys pressed together)
c. Define the Working Directory and Job Name
Step 1c: Setting up the job: Import the input file
a. In APDL, select the File menu item, then Read Input from…
b. Select the .inp file that you saved from Workbench
Step 1d: Setting up the job: Create a database file
a. Select the File » Save as… menu item
b. Save the .db database file
c. Select File » Resume from… and select the saved .db database file
Step 1e: Setting up the job: Create a log file
a. Select Solution » Solve » Current LS from the Main Menu (on the left)
b. Once the job starts, stop it
c. Select File » Exit
d. Choose Save Everything, then click OK
Step 2a: Connect to the Newton cluster
a. Connect to the Newton cluster using PuTTY: Start » All Programs » PuTTY » PuTTY
b. Enter newton.mech.unsw.edu.au for the Host Name and click Open
c. The first time you connect, a Security Alert will appear. Check the RSA2 fingerprint 69:7e:64:75:57:67:ad:4c:21:8e:90:7d:8e:97:70:ce and click Yes
Step 2b: Connect to the Newton cluster
a. Type in your zID for “login as:”, then press the ENTER key
b. Type in your zPass (nothing will be shown) and press ENTER
c. You will get a command line prompt, something like:
z9693022@newton:~ $
Step 3: Create a directory for the job
a. Create a directory for this particular job
– Use the mkdir command: mkdir directoryname
– Come up with a consistent naming scheme
– Structure your directories; use subdirectories as required
– Directories are separated using “/”, not “\”
– You can use any characters in filenames except “/” and NUL
– Just because you can does not mean you should!
– Avoid using spaces in file and directory names
– Recommendation: use “a” to “z”, “A” to “Z”, “0” to “9”, “-”, “_” and “.” only
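One possible naming scheme following the recommendations above, sketched in a scratch directory (the project name, date and run number are all hypothetical, chosen to use only the safe characters):

```shell
scratch=$(mktemp -d)                  # scratch area for the demonstration
cd "$scratch"
# Hypothetical scheme: project name, then a dated, numbered run directory.
mkdir -p bracket-sim/2016-05-10_run01 # -p also creates the parent directory
ls bracket-sim                        # shows: 2016-05-10_run01
```

Dated, zero-padded run names like this sort correctly under `ls`, which makes it easy to find the latest run later.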
Step 4: Transfer data files to the cluster
a. Connect to the Newton cluster using WinSCP: Start » All Programs » WinSCP » WinSCP
b. Enter newton.mech.unsw.edu.au as the Host name, your zID as the User name and your zPass as the Password
c. Check the RSA2 fingerprint
d. The left pane (side) shows your local directory; the right pane shows the (remote) cluster
e. Transfer the .db and .log files to the remote directory using drag and drop
Step 5: Create the script file
a. Change to the project directory: cd jobdirectory
b. Invoke the text editor to create a script file: nano jobfilename.sh
c. Add the following text, replacing parameters as required:
#!/bin/bash
#SBATCH --time=0-02:00:00                  # for 0 days 2 hours
#SBATCH --mem=14336                        # 14 GB memory
#SBATCH --ntasks=1                         # a single job
#SBATCH --cpus-per-task=8                  # 8 processor cores
#SBATCH --mail-user=emailaddr@unsw.edu.au  # or @student.unsw.edu.au
#SBATCH --mail-type=ALL
cd $SLURM_SUBMIT_DIR
module load ansys/16.2                     # or ansys/17.0 as appropriate
ansys162 -np 8 -i jobfilename.log -o jobfilename.out
# (use ansys170 if “module load ansys/17.0” was specified)
d. Save the file by pressing CTRL-X and following the prompts
Steps 6 and 7: Submit and check on the job
6. Once you have created the jobfilename.sh script file, submit it into the Newton queue:
– Make sure you are in the correct directory
– Submit the job: sbatch jobfilename.sh
– Take note of the job number: “Submitted batch job jobid”
– Once submitted, you do not need to be connected to the cluster
7. Periodically check on the job status:
– The job will start as soon as resources are available for it to run
– Emails will be sent to you on job start and completion
– Show queue status: squeue or squeue -l (letter L)
– Show node status: sinfo
– Cancel a running or queued job: scancel jobid
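The job number in sbatch’s “Submitted batch job jobid” message can be captured in a shell variable for later use with squeue and scancel. A sketch, with the sbatch output simulated since this is not running on the cluster (the job id 12345 is made up):

```shell
# What sbatch would print on submission (simulated here):
msg="Submitted batch job 12345"
jobid=${msg##* }              # strip everything up to the last space,
echo "$jobid"                 # leaving just the job id: 12345
# On the cluster you would then run, for example:
#   squeue -j "$jobid"        # check the status of this particular job
#   scancel "$jobid"          # cancel it if something has gone wrong
```

The same pattern works inside a wrapper script: `jobid=$(sbatch myjob.sh | awk '{print $4}')` captures the id directly at submission time.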
Step 8: Transfer the output file from the cluster
a. Once you have received an email stating your job is complete, connect back to the Newton cluster using WinSCP
b. Enter newton.mech.unsw.edu.au as the Host name, your zID as the User name and your zPass as the Password
c. You may need to click the Refresh icon (CTRL-R) to update the list of files
d. Transfer the .rst result file from the remote directory (right-hand pane) to your local drive (left-hand pane) using drag and drop
Step 9: Check the result
a. Open the project in Ansys Mechanical
b. Choose the Solution tree
c. Select File » Read Result Files…
d. Select the .rst results file
e. You have now finished!
Getting help with HPC
Whom to ask for help?
1. Your colleagues
2. Your supervisor/lecturer
3. HPC administrators: John Zaitseff and/or Mark Minchenko, eng.rsch.mec.cfdlab.grpmgr@unsw.edu.au
Available for consultations on Tuesdays 9:30am–4pm, by appointment only.
Image credit: John Zaitseff, UNSW