LS-DYNA ENVIRONMENT Oasys Shell 9.3 October 2008.

Contents

New features in Shell 9.3:
–Parallel Options
–Queue Options
–oasys_queue file
–dyna_versions file
–Command line submission

Parallel Options

If MPP is selected together with Online, Background or Batch submission, the nodes and CPUs to run the analysis on can be specified in one of three ways:
–Local Host: use the machine the Shell is being run on
–Node File: specify a file containing a list of nodes and CPUs to use
–Node List: specify a string containing a list of nodes and CPUs to use
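The slide does not define the Node File or Node List format. As an illustration only, the sketch below assumes a simple "host:cpus" convention (similar to an MPICH machine file); the node names, the file name machines.txt, and the format itself are assumptions, not part of the Oasys Shell documentation. It shows how a node list string could be expanded into a node file while totalling the CPUs:

```shell
# Illustrative sketch only: the "host:cpus" format, node names and
# machines.txt are assumptions, not documented Oasys Shell behaviour.
NODE_LIST="node1:2 node2:2"            # example Node List string
: > machines.txt                       # example Node File to generate
total=0
for entry in $NODE_LIST; do
    echo "$entry" >> machines.txt      # one "host:cpus" line per node
    total=$(( total + ${entry##*:} ))  # add this node's CPU count
done
echo "total CPUs: $total"              # -> total CPUs: 4
```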

Queue Options

If Queue submission is selected, the number of CPUs and nodes to use can be selected. The available options are defined in the 'oasys_queue' file; if no options are defined there, the default commands produced by the Shell are used.

oasys_queue file

The 'oasys_queue' file lists queues and queue directives. It contains three distinct blocks of data:
–Block 1 defines queue names, their CPU limit and directives specific to that queue
–Block 2 defines queue directives for different CPU and node combinations
–Block 3 defines queue directives that apply to all queues

Example file containing one queue definition, "dyna", with submit commands for 1, 2, 4 and 8 CPUs to a PBS queuing system (Block 1, then Block 2, then Block 3):

    $ Contains 1 queue definition "dyna"
    $ submit commands for 1, 2, 4 and 8 CPUs to a PBS queuing system
    $
    cpu_limit="none"
    $
    display_string="1 CPU x 1 Node" command="#PBS -l nodes=1:ppn=1:dyna"
    display_string="2 CPU x 1 Node" command="#PBS -l nodes=1:ppn=2:dyna"
    mpp_only display_string="1 CPU x 2 Nodes" command="#PBS -l nodes=1:ppn=2:dyna"
    mpp_only display_string="2 CPU x 2 Nodes" command="#PBS -l nodes=2:ppn=2:dyna"
    mpp_only display_string="2 CPU x 4 Nodes" command="#PBS -l nodes=4:ppn=2:dyna"
    $
    command="#PBS -m abe"
    </all_queue_commands
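For example, if the user picks the "2 CPU x 2 Nodes" MPP option, the Shell would combine the matching Block 2 directive with the Block 3 directive in the job it submits. A minimal sketch of what such a PBS job script could look like follows; only the #PBS directives come from the example file above, while the script body, executable name and input file are assumptions:

```shell
#!/bin/sh
#PBS -l nodes=2:ppn=2:dyna    # from Block 2: 2 CPU x 2 Nodes
#PBS -m abe                   # from Block 3: applies to all queues
# Body below is an assumption -- the real commands are generated by the Shell.
cd "$PBS_O_WORKDIR"
mpirun -np 4 mpp970 i=analysis.key
```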

dyna_versions file

The 'dyna_versions' file has an extra column to identify which MPI library the MPP versions have been compiled with:

    $
    SMP Double "P:\Dyna_executables\Dyna_970_6763\ls970_d_6763_win32_p.exe" Win32 DP LS970v6763
    SMP Single "P:\Dyna_executables\Dyna_970_6763\ls970_s_6763_win32_p.exe" Win32 SP LS970v6763
    MPP Double "P:\Dyna_executables\Dyna_970_6763\mpp970_d_6763_Intelsse_win32_mpich125.exe" MPICH Win32 DP LS970v6763 (MPICH)
    MPP Single "P:\Dyna_executables\Dyna_970_6763\mpp970_s_6763_Intelsse_win32_mpich125.exe" MPICH Win32 SP LS970v6763 (MPICH)
    $ a
    SMP Double "P:\Dyna_executables\Dyna_970_5434a\ls970_d_5434a_win32_p.exe" Win32 DP LS970v5434(a)
    SMP Single "P:\Dyna_executables\Dyna_970_5434a\ls970_s_5434a_win32_p.exe" Win32 SP LS970v5434(a)
    MPP Double "P:\Dyna_executables\Dyna_970_5434a\mpp970_d_5434a_win32.exe" MPICH Win32 DP LS970v5434(a) (MPICH)
    MPP Single "P:\Dyna_executables\Dyna_970_5434a\mpp970_s_5434a_win32.exe" MPICH Win32 SP LS970v5434(a) (MPICH)
    $
    SMP Double "P:\Dyna_executables\Dyna_970_5434\ls970_d_5434_win32_p.exe" Win32 DP LS970v5434
    SMP Single "P:\Dyna_executables\Dyna_970_5434\ls970_s_5434_win32_p.exe" Win32 SP LS970v5434
    MPP Double "P:\Dyna_executables\Dyna_970_5434\mpp970_d_5434_win32.exe" MPICH Win32 DP LS970v5434 (MPICH)
    MPP Single "P:\Dyna_executables\Dyna_970_5434\mpp970_s_5434_win32.exe" MPICH Win32 SP LS970v5434 (MPICH)

Command line submission

On UNIX systems a command-line version of the submission shell is available. It starts automatically if an X connection cannot be established, and it can be started manually using the command 'oasys_93 cmd'.
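The two ways of reaching the command-line shell described above look like this in a terminal. The 'oasys_93 cmd' invocation comes from the slide; that plain 'oasys_93' is also the normal launcher is an assumption, as is the fallback behaviour described in the comments:

```shell
oasys_93        # assumed normal launcher: opens the GUI shell, dropping to
                # the command-line version if no X connection can be made
oasys_93 cmd    # start the command-line submission shell directly
```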

Contact Information

For more information please contact the following, or contact your local Oasys distributor:

UK:
Arup
The Arup Campus
Blythe Valley Park
Solihull, West Midlands
B90 8AE
UK
T +44 (0)
F +44 (0)

China:
Arup
39/F-41/F Huai Hai Plaza
Huai Hai Road (M)
Shanghai
China
T
F

India:
nHance Engineering Solutions Pvt. Ltd (Arup)
Plot No. 39, Ananth Info Park
Opposite Oracle Campus
HiTec City-Phase II
Madhapur
Hyderabad
India
T +91 (0) / 8