First-principles Calculation of Zr-alloys Based on e-Infrastructure
Yang Chen (1), Xushan Zhao (1), Yuqin Liu (2), Maoyou Chu (1), Jianyun Shen (1)
(1) General Research Institute for Nonferrous Metals (2) China University of Geosciences, Beijing

Introduction of GRINM
1. The largest R&D institution in the nonferrous metals industry in China
2. More than 5,000 projects have been carried out at GRINM since its establishment
3. Research areas: microelectronic and photoelectronic materials, rare and precious metals, rare-earth materials, energy technology and materials

Our group
Research fields:
1. Thermodynamics of phase diagrams, diffusion and interfacial reactions, computational simulation of materials
2. Mainly titanium and zirconium alloys
Conditions for computation:
1. CPUs can be used for calculations
2. Software packages: VASP, Wien2k

Outline
1. Background
2. The software package
3. Deploying the application on the EUChinaGRID
4. Testing our application on the grid
5. Conclusions

1. Background
Nuclear power stations in operation and to be constructed:
Current: 9,000 MW
2020: … MW

Zr-alloys: the safety wall of a nuclear power station
Characteristics:
Low neutron absorption cross section
High strength
Good ductility
Low corrosion rate
Main purpose: nuclear reactor fuel cladding

Shortage of domestic Zr-alloys
1. Most nuclear reactors are constructed by ANP (France) and Westinghouse (USA)
2. The fuel cladding must be replaced every 18 months
3. Demand: 1,000 t/year
Current situation: domestic Zr-alloys cannot satisfy the high quality requirements of nuclear reactors

Design new zirconium alloys
(Diagram: structure, characterized by XRD and TEM investigation, is used to predict properties, which guide the design of new Zr alloys)

First-principles methodology
First-principles methodology in materials design: based on the structure of the material, we can:
1. Calculate the structural and electronic properties
2. Obtain the thermodynamic properties
3. Simulate material transformation and failure in service
…

Part 1: Thermodynamic properties

Quasiharmonic approach
The Helmholtz free energy is decomposed into the 0 K total energy, the lattice vibration contribution, and the contribution from thermal excitation of electrons (see the expression below).
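Written out, this is the standard quasiharmonic decomposition; the explicit phonon term below is the usual textbook expression and is not taken from the slides:

```latex
F(V,T) = E_{0}(V) + F_{\mathrm{ph}}(V,T) + F_{\mathrm{el}}(V,T),
\qquad
F_{\mathrm{ph}}(V,T) = k_{B} T \sum_{\mathbf{q},j}
  \ln\!\left[ 2\sinh\frac{\hbar\omega_{j}(\mathbf{q},V)}{2 k_{B} T} \right]
```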

First-principles calculation flowchart
VASP: 0 K total energy
ATAT: supercell method
VASP: calculate interatomic forces and the electronic DOS
ATAT: calculate the force-constant tensor, phonon frequencies, and phonon DOS
ATAT: F_ph(V,T)
ATAT: F_el(V,T)
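A minimal driver sketch of that chain in Python; the wrapper script names (run_vasp.sh, run_atat_supercell.sh, run_atat_fitfc.sh, run_atat_fel.sh) and the volume scaling factors are hypothetical placeholders, not commands shipped with VASP or ATAT:

```python
# Hypothetical workflow driver illustrating the VASP + ATAT chain above.
# The "./run_*.sh" wrapper scripts are placeholders for the real commands.
import subprocess
from pathlib import Path

VOLUMES = [0.96, 0.98, 1.00, 1.02, 1.04]   # example volume scaling factors

def run(cmd, cwd):
    """Run one step of the chain and stop immediately if it fails."""
    subprocess.run(cmd, cwd=cwd, check=True)

for scale in VOLUMES:
    workdir = Path(f"vol_{scale:.2f}")
    workdir.mkdir(exist_ok=True)
    # 1. VASP: 0 K total energy at this volume
    run(["./run_vasp.sh", "--task", "static", "--scale", str(scale)], workdir)
    # 2. ATAT: build supercells with displaced atoms
    run(["./run_atat_supercell.sh"], workdir)
    # 3. VASP: interatomic forces on the supercells + electronic DOS
    run(["./run_vasp.sh", "--task", "forces_dos"], workdir)
    # 4. ATAT: force constants, phonon frequencies/DOS -> F_ph(V,T)
    run(["./run_atat_fitfc.sh"], workdir)
    # 5. ATAT: electronic free energy F_el(V,T) from the electronic DOS
    run(["./run_atat_fel.sh"], workdir)
```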

Example: bcc Cr (figure)

Part 2: First-principles elastic constants

T (6×n) = C (6×6) · S (6×n)
Solved by SVD (singular value decomposition) or by direct averaging over the n sets of S vs. T relations.
S: the strains we set; T: the stresses from first-principles calculations.
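A minimal sketch of that solve using NumPy; the strain sets and the diagonal "material" below are synthetic placeholders, meant only to show how C is recovered from n stress-strain pairs by an SVD-based least-squares fit:

```python
# Recover the 6x6 elastic-constant matrix C from n strain/stress pairs
# by solving T = C @ S in the least-squares sense (lstsq uses SVD internally).
import numpy as np

n = 12                                              # number of applied strain sets (example)
S = np.random.uniform(-0.01, 0.01, (6, n))          # strains we set (Voigt notation)
C_true = np.diag([300.0, 300.0, 300.0, 120.0, 120.0, 120.0])  # synthetic "material" (GPa)
T = C_true @ S                                      # stresses (here synthetic, normally from VASP)

# Solve S^T C^T = T^T for C^T; the over-determined system is handled via SVD.
C_fit, *_ = np.linalg.lstsq(S.T, T.T, rcond=None)
C = C_fit.T
print(np.round(C, 2))                               # reproduces C_true for noise-free data
```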

Cij matrices for α- and γ-ZrO2
Cubic γ-ZrO2: 3 independent c_ij
Monoclinic α-ZrO2: 13 independent c_ij
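For reference, the cubic case has the familiar form below (a standard symmetry result, not reproduced from the slides); the monoclinic matrix adds further independent couplings to reach its 13 constants:

```latex
C_{\text{cubic}} =
\begin{pmatrix}
C_{11} & C_{12} & C_{12} & 0 & 0 & 0\\
C_{12} & C_{11} & C_{12} & 0 & 0 & 0\\
C_{12} & C_{12} & C_{11} & 0 & 0 & 0\\
0 & 0 & 0 & C_{44} & 0 & 0\\
0 & 0 & 0 & 0 & C_{44} & 0\\
0 & 0 & 0 & 0 & 0 & C_{44}
\end{pmatrix}
```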

(Table: calculated c_ij, in GPa, for monoclinic α-ZrO2 and cubic γ-ZrO2)

2. The software package: VASP, a package for performing ab initio quantum-mechanical molecular dynamics (MD).

Installation of VASP
Minimum requirements:
1. Fortran 90 compiler (Intel Fortran, PGI Fortran)
2. Math library (Intel MKL, ATLAS, …)
3. MPI library for parallel work (MPICH, OpenMPI, …)

How does VASP run?
MPI version of VASP: the job is submitted to the worker nodes (WN 1, WN 2, WN 3, …), MPI processes are started on the cores of those nodes, and parallel execution across nodes is performed through MPI communication between the processes.
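This is the generic MPI picture rather than anything specific to VASP's internals; a tiny mpi4py sketch (assuming mpi4py is installed) shows the same pattern of independent processes combining partial results over MPI:

```python
# Generic illustration of the MPI process model (not VASP code):
# each process learns its rank, does part of the work, and the partial
# results are combined with MPI communication.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # index of this process
size = comm.Get_size()        # total number of MPI processes across all nodes

# Each process handles a slice of a global workload (here: a trivial sum).
local = sum(range(rank, 1000, size))
total = comm.reduce(local, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes, combined result = {total}")
```

Launched with, e.g., mpirun -np 8 python sketch.py, the eight processes may sit on different worker nodes yet still cooperate through MPI, which is the same launch pattern used for the parallel VASP binary.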

Important hardware parameters
CPU: CPU throughput is very important
Memory: VASP requires 1-2 GB per CPU; the computational speed of VASP depends mainly on the memory
Hard disk: not critical

Hardware demand
One single job: 20-30 CPUs with 2 GB of memory each
Cost: 2-7 days, depending on the accuracy required
One calculation usually consists of more than 10 jobs
Some remarks:
1. Clusters are suitable for such parallel applications (100-200 CPUs)
2. When a big system and high accuracy are required, more CPUs are needed

3. Deploying the application on the EUChinaGRID
EUChinaGRID partners:
Beihang University, Beijing (China)
CNIC (China)
IHEP, Beijing (China)
Peking University, Beijing (China)
GRNET (Greece)
Consortium GARR (Italy)
Department of Biology, Università di Roma3 (Italy)
INFN (Italy)
Jagiellonian University in Krakow (Poland)
CERN (Switzerland)

We deployed our application on the UI (lcg003) provided by IHEP:
Install the Fortran compiler for Linux
Set up the Math Kernel Library
Compile and install the VASP program

4. Testing our application on the grid

Input and output files (screenshots of the input and output files)

Extra required files: JDL, wrapper and hook
The JDL (Job Description Language):
A fully extensible language
Supports a certain set of attributes
Used to schedule and submit our job (a sketch of such a file is shown below)
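A sketch of what such a JDL might contain, generated from Python for illustration; the attribute names are the usual gLite JDL ones, while all values, node counts and sandbox file names are hypothetical:

```python
# Sketch: generate a gLite JDL file for an MPI VASP job.
# Attribute names (Type, JobType, NodeNumber, InputSandbox, ...) are standard
# gLite JDL attributes; the values and file names here are hypothetical.
jdl = """\
Type            = "Job";
JobType         = "MPICH";
NodeNumber      = 16;
Executable      = "vasp_wrapper.sh";
StdOutput       = "std.out";
StdError        = "std.err";
InputSandbox    = {"vasp_wrapper.sh", "INCAR", "POSCAR", "KPOINTS", "POTCAR"};
OutputSandbox   = {"std.out", "std.err", "OUTCAR", "CONTCAR"};
"""

with open("vasp_job.jdl", "w") as f:
    f.write(jdl)

# The file would then be submitted with the gLite WMS command-line tools,
# e.g. glite-wms-job-submit -a vasp_job.jdl
print(jdl)
```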

Test operation results: list the available resources

Submit our test job

See job status

Retrieve job output

5. Conclusions

1. Our application runs successfully on the EUChinaGRID
2. We can use more than 200 CPUs with 2 GB of memory each
Tips:
1. The efficiency is low when more than 10 CPUs are used (MPI communication efficiency)

Special thanks to:
Prof. SiJin Qian (PKU)
Ma Lanxin (马兰馨, teacher)
Dr. Marco Fargetta (INFN)
Xu Dong (许冬, teacher)
Dr. Fabrizio Pistagna (INFN)
Wu Wenjing (伍文静, teacher)
Dr. Andre Cortelleses (INFN)
Zhu Wei (朱威, teacher)
Thanks to:
the EPIKH Project
the Institute of High Energy Physics