What is MCSR? Who is MCSR? What Does MCSR Do? Who Does MCSR Serve?


What is MCSR? Who is MCSR? What Does MCSR Do? Who Does MCSR Serve? What Kinds of Accounts? Why Does Mississippi Need Supercomputers? What Kinds of Research? What Kinds of Instruction? What Kinds of Workshops? How Much Does it Cost? What Kinds of Software? What Supercomputers and Clusters?

What is MCSR?
Mississippi Center for Supercomputing Research
- Established in 1987 by the Mississippi Legislature
- Mission: Enhance the computational research climate at Mississippi's 8 public universities
- Also: Support High Performance Computing (HPC) education in Mississippi

What Does MCSR Do?
We make Mississippi scientists:
- more competitive for federal grants
- more productive in research
We provide extraordinary learning opportunities for Mississippi college students:
- instructional accounts
- computing workshops
- helpdesk support

Who Does MCSR Serve?
MCSR serves faculty, researchers, and students at all of Mississippi's 8 public universities:
- Alcorn State University
- Jackson State University
- Mississippi State University
- Mississippi Valley State University
- The University of Southern Mississippi
- Delta State University
- The University of Mississippi
- Mississippi University for Women

Who Uses More MCSR Computing Time?

What Types of Computing Access Are Available?
Research Accounts
- provided for faculty and student "researchers"
- good for the duration of employment or enrollment
Instructional Accounts
- provided at an instructor's request
- for all students enrolled in a semester course
- valid for the duration of the semester
- can be "converted" to a research account

Research vs. Class Accounts?

Who Uses Research Accounts?

Who Uses Instructional Accounts?

What Types of Courses Use MCSR?
Over 82 university courses supported since 2000:
C/C++, Fortran, MPI, OpenMP, MySQL, HTML, JavaScript, Matlab, PHP, Perl, …
http://www.mcsr.olemiss.edu/education.php

Why Do Mississippi Researchers Need Supercomputers? Economics: researchers in a poor state like Mississippi can still make a big splash. Computational simulations are faster, cheaper, and safer than laboratory experiments alone.

What Kinds of Research @ MCSR?
90% of MCSR calculations are computational chemistry:
- Cleanup of high-explosive materials
- Design of high energy density rocket fuels
- The chemical underpinnings of high-powered lasers
- Mutation studies of enzyme activity
- Designing weather-proofing coatings for machinery
Other areas:
- Hurricane forecasting
- Blast-resistant coatings
- Better 3-D imaging for diagnosing brain tumors

What Types of Workshops by MCSR?
- MCSR consultants taught over 140 free seminars in FY08
- Over 60 training topics available, and growing
- Fixed schedule or on demand
- Unix/programming, math software, statistics software, computational chemistry software

Do Researchers and Students Pay to Use MCSR? No. MCSR services are provided at no cost to the individual, department, or institution. Funded researchers may ask for priority access. Mississippi researchers may claim the value of MCSR computing services received as an in-kind contribution from their institution when seeking federal grants.

How Much Does MCSR Cost Mississippi?

What Is the Value of MCSR to Mississippi? RETURN ON INVESTMENT = GRANTS SUPPORTED / MCSR BUDGET For FY 2008: $32,832,097 / $845,535 ≈ $38.83 returned per dollar spent

What Software Environments @ MCSR?
- Programming: C/C++, FORTRAN, Java, Perl, PHP, MPI, …
- Science/Engineering: PV-Wave, IMSL, GSL, math libraries, Abaqus
- Math/Statistics: SAS, SPSS, Matlab, Mathematica
- Chemistry: Gaussian, Amber, NWChem, GAMESS, CPMD, MPQC, MolPro, GROMACS

What Supercomputers @ MCSR?

Supercomputers at MCSR: sweetgum
- SGI Origin 2800 128-CPU supercomputer
- 64 GB of shared memory

Supercomputers at MCSR: redwood
(NSF expiration Sep 30, 2004; HPVCI expiration date Dec 31, 2002)
- 224-CPU SGI Altix 3700 supercomputer
- 224 GB of shared memory

Supercomputers at MCSR: mimosa
- 253-CPU Intel Linux cluster (Pentium 4)
- Distributed memory: 500 MB – 1 GB per node
- Gigabit Ethernet

Supercomputers at MCSR: mimosa

Supercomputers at MCSR: sequoia
- 22 nodes, 176 cores
- 352 GB memory
- 20 TB storage
- InfiniBand interconnect

Supercomputers at MCSR: sequoia

Introduction to Parallel Programming at MCSR
Message Passing Computing
- Processes communicate via calls to message passing library routines
- Programmers "parallelize" the algorithm and add message calls
- At MCSR, this is via MPI programming with C or Fortran:
  - Sweetgum – Origin 2800 supercomputer (128 CPUs)
  - Mimosa – Beowulf cluster with 253 nodes
  - Redwood – Altix 3700 supercomputer (224 CPUs)
  - Sequoia – Altix XE 310 InfiniBand cluster (176 cores)
Shared Memory Computing
- Threads coordinate/communicate results via shared memory variables
- Care must be taken not to modify the wrong memory areas
- At MCSR, this is via OpenMP programming with C or Fortran on sweetgum, redwood, or sequoia

Speed-Up http://www.mcsr.olemiss.edu/Engr692_TimingWorshkeet.xls

What MCSR Systems for USM Class Accounts: Sweetgum
- MPI or OpenMP, 1 to 16 CPUs
- Up to 900 MB per CPU
- PBS scripts preferred: #PBS -l ncpus=4
- Interactive computations will be killed after 30 minutes
- Queues: SM-4P, SM-8P, MM-8P, MM-16P
- Processors: mix of 195 MHz and 300 MHz
- O/S: Irix (Unix-like)
- Compilers: SGI Fortran and C/C++, GNU C/C++, with SGI MPT

What MCSR Systems for USM Class Accounts: Mimosa
- MPI, 1 to 18 nodes
- 400 MB memory per node
- PBS scripts only (no interactive jobs allowed): #PBS -l nodes=4
- Queue: MCSR-CA
- Processors: single 1.4 GHz Pentium 4 per node
- O/S: SUSE Linux 10.3
- Compilers: Portland Group (PGI) Fortran and C/C++ with MPICH
- qstat -f (to find out which nodes your job is running on)

What MCSR Systems for USM Class Accounts: Sequoia
- OpenMP (multiple processors on the same node)
- MPI (multiple processors on the same or different nodes)
- Hybrid (OpenMP within a node, MPI across nodes)
- 1 to 4 nodes, 1 to 8 CPUs per node
- PBS queues:
  - SM-4P (for up to 4 CPUs on 1 node)
  - MCSR-Test (up to 8 CPUs on each of 4 nodes)
- PBS scripts only (no interactive jobs allowed)
- #PBS -l nodes=4:ppn=8 (to run on all 8 CPUs of all 4 nodes)
- #PBS -l ncpus=8:select=
- 16 GB memory per node (2 GB per CPU)
- qstat -f (to find out which nodes your job is running on)

Sequoia for USM Class Accounts
- To run on 4 nodes, 8 processors per node (32 processes): #PBS -l nodes=4:ppn=8
- To run on 2 nodes, 4 processors per node (8 processes): #PBS -l nodes=2:ppn=4
- To run on 1 node, up to 8 processors (OpenMP): #PBS -l nodes=1:ppn=8
- To run on 8 processors, regardless of the number of nodes: #PBS -l ncpus=8
- To run 8 processes with preferences about node placement:
  - #PBS -l place=scatter (distribute across as many nodes as possible)
  - #PBS -l place=pack (pack processes onto as few nodes as possible)
  - #PBS -l place=free (place processes on the first available processors)
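Put together, a complete sequoia job script might look like the sketch below. The #PBS resource requests are the ones listed above; the job name, executable name (`./my_mpi_prog`), and mpirun invocation are hypothetical placeholders for a user's own program.

```shell
#!/bin/bash
# Hypothetical PBS job script for the MCSR-Test queue on sequoia.
# The executable name below is a placeholder, not a real MCSR program.

#PBS -N example_job          # job name (hypothetical)
#PBS -q MCSR-Test            # queue from the list above
#PBS -l nodes=4:ppn=8        # all 8 CPUs on each of 4 nodes (32 processes)
#PBS -l place=scatter        # spread processes across the allocated nodes

# PBS starts jobs in $HOME; change to the directory the job was
# submitted from so relative paths work.
cd "$PBS_O_WORKDIR"

# Launch one MPI process per allocated CPU.
mpirun -np 32 ./my_mpi_prog
```

Submitted with `qsub script.pbs`; `qstat -f` then shows which nodes the job landed on.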

Parallel Efficiency