Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers. Chapter 2: Message-Passing Computing. LAM/MPI at the University of Akron.


Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers
Chapter 2: Message-Passing Computing. LAM/MPI at the University of Akron
Barry Wilkinson and Michael Allen
(c) Prentice-Hall Inc., All rights reserved.
Based on the 1998 Notre Dame MPI Tutorial Sessions and the Fall 2002 OSC "Introduction to Cluster Ohio" Workshop at Kent State.
Modified 1/10/09 by T. O'Neil for 3460:4/577, Spring 2009, Univ. of Akron. Previous revisions: 1/16/03, 1/18/03, 3/16/05, 10/7/05, 1/16/07.

University of Akron Cluster Configurations: C.S. Cluster
- 22 nodes, 48 processors, mostly 3.0 GHz Pentium 4
- 4 GB memory per node (88 GB total memory)
- Dual gigabit Ethernet interconnects, with separate overhead and MPI subnets
- Diskless; no swap space
- Further details can be found in the /root/CLUSTER.HARDWARE file; FAQs can be found in the /root/cluster.notes file

Accessing the Clusters
The only way in is to remotely access the front-end node of the cluster with ssh. ssh sends your commands over an encrypted stream, so your passwords and so forth can't be sniffed off the network. The batch nodes are not connected to the external network.

Setting Up LAM/MPI on the Cluster
The first time you access the cluster, establish your identity before attempting to use the other nodes:
- Enter the command ssh-keygen and hit return until no longer prompted.
- Then issue the commands
  cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
  cat /root/.ssh/known_hosts >> ~/.ssh/known_hosts
- Now issue the command cp /root/lamboot.full . to copy the full boot file to your home directory.
- Assuming you don't need all nodes for your programs, edit this file down to your job size and save it as hostfile (see the sketch below).
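A LAM boot schema (hostfile) is plain text: one host per line, optionally followed by a CPU count. The node names below are hypothetical placeholders, not the actual Akron node names; a trimmed three-node hostfile might look roughly like this:

    # hostfile - hypothetical nodes kept for this job
    node01 cpu=2
    node02 cpu=2
    node03 cpu=2

With a file like this, lamboot starts the LAM daemons on only those three nodes rather than the whole cluster.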

Using LAM/MPI
To begin using MPI, ssh into the cluster and issue the command lamboot -v hostfile. Now:
- To compile C code: mpicc -o file file.c
- To compile C++ code: mpiCC -o file file.cpp
- To execute the compiled program: mpirun -v -np no_procs file
- To terminate processes before a reboot: lamclean -v
- To terminate LAM: lamhalt
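As a quick check of this toolchain, here is a minimal MPI "hello world" in C; it is a sketch added for illustration (the file and executable names are placeholders, not part of the original slides):

    /* hello.c - minimal MPI test program */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);                  /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                          /* shut down MPI */
        return 0;
    }

Compile it with mpicc -o hello hello.c and run it with mpirun -v -np 4 hello; each of the four processes should print one line.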

Other Points of Note
- If you have to change your password, change it just prior to logging out; otherwise you may have problems with the password script.
- Use nice values. Acceptable values are 1-20, with a lower number corresponding to a higher priority; keeping this low shouldn't impact anyone else.
- In your code, #include <unistd.h> and then, as soon as you can in main(), call nice( x );
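A minimal sketch of how that call might sit in an MPI program (the nice value 15 here is an arbitrary placeholder, not a value prescribed by the course):

    #include <unistd.h>   /* for nice() */
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        nice(15);                     /* lower this process's priority as early as possible */
        MPI_Init(&argc, &argv);
        /* ... rest of the program ... */
        MPI_Finalize();
        return 0;
    }

nice() adds its argument to the process's current nice value, so calling it once near the top of main() is enough.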