PET Summer Institute Kim Kido | Univ. Hawaii Manoa.


Thursday, July 27
● Intro to parallel computing
● Intro to MHPCC
● Intro to IBM SMP hardware
● Intro to SQUALL

Introduction to Parallel Computing
Useful for analyzing complex problems that require massive amounts of computation and can be divided into independent tasks

Introduction to Parallel Computing
● Use of multiple processors to complete a task
● Classification systems:
– How the processor executes instructions: SIMD, MIMD
– Memory type (shared, distributed)
– Number of processors
– Symmetric/asymmetric

IBM SMP Hardware Overview
● Symmetric Multi-Processors (SMP)
● 2+ identical processors connected to a single shared memory
● Workload efficiently balanced by moving tasks between processors

IBM SMP Hardware Overview
● Only one processor can access memory at a time
● Alternatives:
– NUMA (non-uniform memory access)
– Asymmetric multiprocessing
– Beowulf (computer cluster)

Introduction to MHPCC
● Maui High Performance Computing Center
● An Air Force Research Laboratory center managed by UH
● Provides >10,000,000 hours of computing time annually

Introduction to MHPCC
● Engineering
● Meteorology
● Biology
● Computer science
[Figure: Fluid pathlines about a Predator B fuselage. General Atomics Aeronautical Systems, Inc.]

Introduction to SQUALL
● 2-node, 32-processor IBM SP system
● Available to users who don't meet DoD HPCMP access requirements:
– Unclassified
– Non-sensitive
● Designed for government, commercial, and academic users
● Used in the institute to run sample programs

Thank you! Next: The Parallel Operating Environment!