Tutorial 1: NEMO5 Technical Overview

Tutorial 1: NEMO5 Technical Overview
Jim Fonseca, Tillmann Kubis, Michael Povolotskyi, Jean Michel Sellier, Gerhard Klimeck
Network for Computational Nanotechnology (NCN), Electrical and Computer Engineering

Overview
- Licensing
- Getting NEMO5
- Getting Help
- Documentation
- Compiling
- Workspace
- Parallel Computing
- Run a job on workspace

NEMO5 Licensing
The license provides access to the NEMO5 source code and the ability to execute NEMO5.
License types:
- Commercial/Academic/Governmental
- Non-commercial (Academic): free, with restrictions; you cannot redistribute the code, must cite the specific NEMO5 paper, and must give us back a copy of your changes
A commercial license is used for the Summer School week.

Getting NEMO5: how to run NEMO5
1. Download the source code and compile (a configuration file for Ubuntu is provided)
2. Download the static binary (runs on the x86 64-bit version of Linux); see the sketch below
3. Execute through a nanoHUB workspace
4. Future: a nanoHUB app with a GUI (NEMO5 already powers some nanoHUB tools, but they are for specific uses)
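For option 2, a minimal sketch of running the static binary directly on a 64-bit Linux machine is shown here. The binary name nemo-r8028 is a placeholder (it echoes the nemo-rXXXX naming used with submit later in this tutorial), the path to all.mat is hypothetical, and the assumption that NEMO5 picks up a materials database linked into the working directory is based on the ln -s step used in the cluster examples below.

chmod +x ./nemo-r8028                 # make the downloaded binary executable
ln -s /path/to/materials/all.mat .    # make the materials database reachable from the working directory
./nemo-r8028 bulkGaAs_parallel.in     # run NEMO5 on an input deck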

Getting Help https://nanohub.org/groups/nemo5distribution/

For Developers
- The source can be downloaded from the NEMO Distribution and Support group (the NEMO5 source code resource for developers)
- Configure, build the libraries, compile NEMO5, link
- NEMO5 is designed to rely on several libraries
- Requires good knowledge of Makefiles and of building software

Libraries
- PETSc: solution of linear and nonlinear equation systems (both real and complex versions employed simultaneously)
- SLEPc: eigenvalue problems
- MUMPS: parallel LU factorization; interfaces with PETSc/SLEPc
- PARPACK: sparse eigenvalue solver
- libmesh: finite element solution of the Poisson equation
- Qhull: computation of Brillouin zones
- Tensor3D: small matrix-vector manipulations
- Boost::spirit: input/output (will be covered later)
- Output in Silo (for spatial parallelization) or VTK format
Others will be discussed in the input/output tutorial.

For Developers (see the build sketch below)
- NEMO5/configure.sh configuration_script
- NEMO5/mkfiles: contains the configuration scripts
- NEMO5/prototype/libs/make: builds packages like PETSc, SLEPc, ARPACK, libmesh, Silo
- NEMO5/prototype/make: builds the binary NEMO5/prototype/nemo
- Examples: NEMO5/prototype/public_examples
- Materials database: NEMO5/prototype/materials/all.mat
- Doxygen documentation: NEMO5/prototype/doc (make)
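A rough illustration of the build sequence implied by these paths is given below. This is a sketch, not the official procedure: the configuration script name ubuntu.cfg is hypothetical (pick whichever script in NEMO5/mkfiles matches your system), and the exact make invocations may differ.

cd NEMO5
./configure.sh mkfiles/ubuntu.cfg    # hypothetical script name; mkfiles/ holds the per-system configuration scripts
cd prototype/libs && make            # build the bundled third-party packages (PETSc, SLEPc, ARPACK, libmesh, Silo, ...)
cd .. && make                        # build NEMO5 itself; the binary appears as NEMO5/prototype/nemo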

Doxygen
- Generates an online documentation browser (also PDF, etc.)
- Extracts the information directly from the source code
- doxygen.org
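To regenerate the documentation locally, a typical Doxygen invocation looks like the sketch below. The assumption that a Doxyfile sits in NEMO5/prototype/doc is mine, based only on the directory listed on the previous slide; the project may instead drive Doxygen through its Makefile.

cd NEMO5/prototype/doc
doxygen Doxyfile    # reads the configuration and writes the HTML (and optionally LaTeX/PDF) documentation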

Workspace

Interacting with the Workspace
- nanoHUB menu
- Pop-out window
- Multiple desktops

nanoHUB Menu and Editors

File Transfer to/from the Workspace
How to copy files to/from the workspace:
- Use the file transfer capability in the workspace 'start' menu: nanoHUB menu -> Import/Export File
- Or ssh/sftp/scp to nanohub.org:
  sftp username@nanohub.org
  ssh -Y -l username nanohub.org
- See https://nanohub.org/tools/storage
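For example, copying files with scp from your own machine might look like the lines below. The directory and file names (test, qd.log, bulkGaAs_parallel.in) are taken from the run example later in this tutorial; that scp to nanohub.org lands in your workspace home directory is my assumption, based on the ssh/sftp access noted above.

scp username@nanohub.org:test/qd.log .                    # pull a log file from the workspace to the current local directory
scp ./bulkGaAs_parallel.in username@nanohub.org:test/     # push an input file up to the workspace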

Parallel Computing
- NEMO5 scales to 100,000 cores
- In the past, a processor had only one "central processing unit"; now a processor has multiple "central processing units", aka "cores"
- Counting "processors" is outdated; count "cores"
- All memory on a node is shared
- NEMO5 uses MPI (Message Passing Interface), a standardized and portable message-passing system
(Diagram: a node N1 containing processors P1-P4 with cores C1-C8)
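A quick way to see how MPI ranks map onto nodes and cores, independent of NEMO5, is to launch a trivial command under MPI. This assumes an MPI runtime providing mpirun is available (true on the clusters, not necessarily inside a workspace, where submit handles job launch for you).

mpirun -np 8 hostname    # start 8 MPI ranks; each prints the name of the node it runs on
# ranks printing the same hostname share that node's memory; ranks on different nodes communicate by message passing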

Rosen Center for Advanced Computing (RCAC) Infrastructure
Two main cluster computer systems we will use (http://www.rcac.purdue.edu/):
- Rossmann (#345): ~400 nodes; each node has dual 12-core AMD Opteron 6172 processors (24 cores/node) and 48 GB RAM
- Coates: ~1,000 nodes; each node has two 2.5 GHz quad-core AMD 2380 processors (8 cores/node) and 32 GB RAM
Newer systems available soon:
- Carter (#88): Intel Xeon E5, NVIDIA GPUs

nanoHUB/RCAC Interaction via 'submit'
- A workspace is a Linux desktop in your browser (a virtual machine)
- NEMO5 is installed on clusters at the Rosen Center for Advanced Computing (RCAC)
(Diagram: the 'submit' command sends a job from the nanoHUB workspace to the RCAC PBS queuing system, and the output is returned to the workspace)

bulkGaAs_parallel.in
- A small cuboid of GaAs before and after the Keating strain model is applied
- Before: qd_structure_test.silo; after: GaAs_20nm.silo (you can set these names in the input file; this will be covered later)
- Log file: qd.log

Parallel Run (1 node, 8 cores)
NEMO5 revision: XXXX = 8028 (used in nemo-rXXXX below)

mkdir test    (make a directory)
cd test       (change into it)
ln -s /apps/share64/nemo/examples/current/materials/all.mat    (make a link to the materials database)
submit -v [venue] -i ./all.mat -n 8 -w 0:05:00 nemo-rXXXX /apps/share64/nemo/examples/current/public_examples/NCN_summer_school_2012/Technical_Overview/bulkGaAs_parallel.in

Submit command options:
- -v: venue where the job will run; the venue can be ncn-hub@coates OR ncn-hub@rossmann (ncn-hub is a queue for the Summer School only; after the summer school, use coates or rossmann and the job will go to the 'standby' queue)
- -i: file to pass to the venue (here the materials database all.mat)
- -n: number of cores (must be correct for the machine)
- -w: walltime (5 minutes)
See submit --help for the full option list.
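Once the job returns, a quick sanity check (a sketch; the file names are the ones listed for this example) is:

ls             # expect qd_structure_test.silo, GaAs_20nm.silo, and qd.log next to all.mat
tail qd.log    # inspect the end of the log file from the run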

MPI Parallelization
- Real-space partitioning, determined in advance by the user
- Two different methods (the bulkGaAs_parallel.in example uses method 2); try changing num_geom_CPUs
Currently, 1-D and 3-D regular grid partitioning is supported.

Method 1: the user chooses the exact positions of the partition boundaries.
Partitioning {
  x_partition_nodes = (-0.1, 3.0, 6.1)
  y_partition_nodes = (-0.1, 3.0, 6.1)
  z_partition_nodes = (-0.1, 3.0, 6.1)
}

Method 2: the user specifies only the total extension of the device and the desired total number of processes, leaving the partitioning to NEMO5; the partition boundaries are then determined by minimizing the interfaces between partitions.
Partitioning {
  x_extension = (-0.1, 6.1)
  y_extension = (-0.1, 6.1)
  z_extension = (-0.1, 6.1)
  num_geom_CPUs = 8
}

(Diagram: the cuboid split into eight partitions, CPU 1 through CPU 8)
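As a small worked check of the method 1 numbers (my own illustration, not part of the example input): splitting an axis that runs from -0.1 to 6.1 into two equal pieces puts the interior boundary at 3.0, which is exactly what x_partition_nodes lists.

# print evenly spaced partition boundaries for an axis from -0.1 to 6.1 split into 2 pieces
awk 'BEGIN { lo = -0.1; hi = 6.1; n = 2; for (i = 0; i <= n; i++) printf "%.1f ", lo + i*(hi-lo)/n; print "" }'
# output: -0.1 3.0 6.1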

Multi-Node Run
Copy the example to your present working directory:
cp /apps/share64/nemo/examples/current/public_examples/NCN_summer_school_2012/Technical_Overview/bulkGaAs_parallel.in .
submit -v [venue] -i ./all.mat -n X -N Y -w 0:05:00 nemo-rXXXX bulkGaAs_parallel.in

argument   meaning            Rossmann   Coates
X          -n (cores)         48         16
Y          -N (cores/node)    24         8

submit -v ncn-hub@rossmann -i ./all.mat -n 48 -N 24 -w 0:05:00 nemo-rXXXX bulkGaAs_parallel.in
submit -v ncn-hub@coates -i ./all.mat -n 16 -N 8 -w 0:05:00 nemo-rXXXX bulkGaAs_parallel.in
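Reading the table: -n gives the total number of cores and -N the cores per node, so the number of nodes requested works out to n/N. A quick arithmetic check of the two examples:

echo $(( 48 / 24 ))    # Rossmann: 48 cores at 24 cores/node = 2 nodes
echo $(( 16 / 8 ))     # Coates: 16 cores at 8 cores/node = 2 nodes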

Submit defaults and copying the examples
- Walltime: 1 hour default, 4 hour maximum
- Cores: default is 1 core
Copy all of the examples to your home directory:
cd ~
cp -r /apps/share64/nemo/examples/current/public_examples/ .

Schedule for the NEMO5 Tutorial: Device Modeling with NEMO5
10:00  Break
10:30  Lecture 14 (NEMO5 Team): "NEMO5 Introduction"
12:00  Lunch
1:30   Tutorial 1 (NEMO5 Team): "NEMO5 Technical Overview"
3:00   Break
3:30   Tutorial 2 (NEMO5 Team): "NEMO5 Input and Visualization"
4:30   Tutorial 3, first part (NEMO5 Team): "Models"
5:00   Adjourn

Thanks!