
Application of ab initio Calculations in Zr-alloys for Nuclear Power Stations
Xushan Zhao, Yang Chen
General Research Institute for Non-Ferrous Metals, Beijing
September 2010

Zr-alloys: the Safety Wall of a Nuclear Station
Characteristics: low neutron absorption cross section, high strength, good ductility, low corrosion rate.
Main purpose: nuclear reactor fuel cladding.

Our Application and Expectation: predict PROPERTY from COMPOSITION and STRUCTURE.

Software package: VASP, a package for performing ab-initio quantum-mechanical molecular dynamics (MD).

Installation of the Program
1. Fortran Compiler: used to compile the VASP software
2. Math Kernel Library (MKL): used during the calculation
3. MPICH2: used for parallel calculation

First Step: Install the Fortran Compiler
Unpack the download into a writeable directory of your choice: tar -xzvf name-of-downloaded-file
Change directory (cd) into the directory containing the unpacked files and begin the installation: ./install.sh
Establish the compiler environment: source /opt/intel/Compiler/11.1/xxx/bin/ifortvars.sh (the default for a system-wide installation is /opt/intel)
Intel offers these tools free for non-commercial software development.
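Put together, the installation looks roughly like the following sketch; the archive name, the unpacked directory name, and the exact ifortvars.sh path depend on the compiler version downloaded:

tar -xzvf name-of-downloaded-file                      # unpack the compiler archive
cd <unpacked-directory>                                # placeholder: the directory created by unpacking
./install.sh                                           # run the interactive installer
source /opt/intel/Compiler/11.1/xxx/bin/ifortvars.sh   # some versions also expect an architecture argument (ia32 or intel64)
ifort --version                                        # check that the compiler is now on the PATH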

Second Step: Install the Math Kernel Library
Intel Math Kernel Library (Intel MKL) is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, vector math, and more. It offers performance optimizations for current and next-generation Intel processors and improved integration with Microsoft Visual Studio, Eclipse, and Xcode. Intel offers it free for non-commercial software development.
As with the compiler, change directory (cd) into the directory containing the unpacked files and begin the installation: ./install.sh
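Programs are later linked against MKL at build time. As an illustration only (myprog.f90 is a hypothetical source file, and the library names and the em64t subdirectory vary with the MKL version and architecture), a typical sequential link line with ifort looks like:

# MKLROOT is assumed to point at the MKL installation directory (set it by hand or via MKL's environment script)
ifort myprog.f90 -L$MKLROOT/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread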

Third Step: Install MPICH2
Unpack the tar file and go to the top-level directory:
tar xzf mpich2-1.3b1.tar.gz
cd mpich2-1.3b1
Configure MPICH2, specifying the installation directory:
./configure --prefix=/home/ /mpich2-install
Then build and install:
make
make install
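After installation it helps to add the MPICH2 bin directory to the PATH and check that the wrappers are found; a sketch, using the install prefix chosen above with a placeholder user directory:

export PATH=/home/<user>/mpich2-install/bin:$PATH   # <user> is a placeholder for your own home directory
which mpif90 mpirun                                 # both should now resolve to the new MPICH2 installation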

Final Step: Install Our Program (VASP)
There are two directories in which VASP resides:
…/vasp.5.lib holds files which change rarely, but might require considerable changes to support new machines.
…/vasp.5.2 contains the VASP code itself and changes with every update.
cd vasp.5.lib
cp makefile.machine makefile
Choose makefile.machine from the following list:
makefile.cray makefile.dec makefile.hp makefile.linux_abs makefile.linux_alpha makefile.linux_ifc_P4 makefile.linux_ifc_ath makefile.linux_pg makefile.nec makefile.rs6000 makefile.sgi makefile.sp2 makefile.sun makefile.t3d makefile.t3e makefile.vpp
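For a Linux cluster using the Intel Fortran compiler installed above, makefile.linux_ifc_P4 is the natural template to start from, as in this sketch:

cd vasp.5.lib
cp makefile.linux_ifc_P4 makefile   # start from the Intel Fortran / Linux template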

In the vasp.5.lib directory, modify the makefile so that the Fortran compiler is the MPI wrapper mpif90, then run make; this produces the library file libdmy.a.
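A minimal sketch of the relevant change and the build step (the exact variable layout depends on the makefile.machine template chosen above):

# in vasp.5.lib/makefile, set the Fortran compiler to the MPI wrapper, e.g.:
#   FC = mpif90
cd vasp.5.lib
make                                # builds the support library libdmy.a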

In the vasp.5.2 directory: modify the makefile so that it points to the MKL library directory, then run make; this produces the vasp executable.
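A sketch of the kind of change involved, assuming a 64-bit MKL installation under /opt/intel; the variable names and exact library list follow the chosen makefile template and the MKL version:

# in vasp.5.2/makefile, point the linear-algebra settings at the MKL directory, for example:
#   BLAS = -L/opt/intel/mkl/lib/em64t -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread
# (LAPACK can be taken from MKL as well, or from the routines shipped with vasp.5.lib)
cd vasp.5.2
make                                # produces the vasp executable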

How does VASP run? The MPI version of VASP starts several MPI processes on the compute cores, and parallel execution across nodes is performed through MPI communication between those processes.
(Diagram: the job is submitted to worker nodes WN 1, WN 2, … WN 7, each of which runs its share of the MPI processes.)
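With MPICH2 this maps onto a machine file listing the worker nodes; a sketch with hypothetical hostnames:

# machines: a plain text file listing one worker node per line (hypothetical hostnames), e.g.
#   wn1
#   wn2
#   wn3
#   wn4
mpirun -np 4 -machinefile machines ./vasp   # start one MPI process per requested slot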

Four input files are required (INCAR, POSCAR, KPOINTS, POTCAR). Start the MPD daemons with mpdboot, then launch the calculation: mpirun -np 2 vasp >& runlog
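Interactively, the whole run looks roughly like this sketch; /path/to/job is a placeholder, vasp is assumed to be on the PATH, and the mpdboot options depend on how many nodes are listed in the mpd hosts file:

cd /path/to/job                  # directory containing INCAR, POSCAR, KPOINTS and POTCAR
mpdboot                          # start the MPICH2 process-manager daemons
mpirun -np 2 vasp >& runlog      # run VASP on 2 processes, collecting all output in runlog
mpdallexit                       # shut the daemons down once the run has finished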

Example of the runlog file produced by the run (screenshot).

Input and output file sizes: input files < 1 MB; output files < 100 MB.

.pbs FILE (excerpt)
…
##########################################
# Output some useful job information.
##########################################
JOBINFOR=$PBS_JOBID
MASTERNODE=`hostname`
SCRATCHDIR=$PBS_JOBID
NCPU=`wc -l < $PBS_NODEFILE`
SERVER=$PBS_O_HOST
WORKDIR=$PBS_O_WORKDIR
MKDIR=/bin/mkdir
RSH=/usr/bin/rsh
CP=/bin/cp
LAUNCH="/disk6/xlxy50123k/copy/mpich-1.2.7p1/bin/mpirun -np $NCPU -machinefile "   # location of mpirun and the number of CPUs we need
PROGRAMEXEC="/disk6/xlxy50123k/copy/bin/vasp.neb"                                  # the VASP program to call
…
To run the job: qsub vasp.pbs -l nodes=20:ppn=1 -N job
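For reference, a minimal self-contained vasp.pbs along these lines might look as follows. This is a sketch only: the resource request, job name, and the mpirun and vasp.neb paths (copied from the excerpt above) are assumptions that must match your own cluster.

#!/bin/bash
#PBS -N job                        # job name (can also be given on the qsub command line)
#PBS -l nodes=20:ppn=1             # 20 nodes, 1 process per node

cd $PBS_O_WORKDIR                  # run in the directory the job was submitted from
NCPU=`wc -l < $PBS_NODEFILE`       # number of CPUs allocated by PBS

# launch VASP with the MPI runtime it was built with
/disk6/xlxy50123k/copy/mpich-1.2.7p1/bin/mpirun -np $NCPU -machinefile $PBS_NODEFILE \
    /disk6/xlxy50123k/copy/bin/vasp.neb >& runlog

Submit it with qsub vasp.pbs; the resource request and job name can equally be supplied on the qsub command line, as in the excerpt above.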

Some information about the test job (screenshot):

Our Demand on CPUs
A single run: 20~30 Intel Xeon 3.06 GHz CPUs with 2 GB memory
Cost: 2-7 days, depending on the accuracy we set
One simulation always involves >10 jobs
Therefore ~100 CPUs should be enough for our work; more CPUs would let us reach higher accuracy.