

1 SIAC 2000 Program

2 SIAC 2000 at a Glance (AM / Lunch / PM / Dinner)
Sun: Condor
Mon: NOW, HPC, Globus, Clusters
Tue: PVM, MPI, Clusters, HPVM
Wed: Condor, HPVM

3 SIAC 2000 Materials: Book 1
David Spector, “Building Linux Clusters: Scaling Linux for Scientific and Enterprise Applications,” O'Reilly & Associates, August 2000. ISBN:
Includes CD-ROM: Red Hat Linux, PVM, …

4 SIAC 2000 Materials: Book 2
Rajkumar Buyya (editor), “High Performance Cluster Computing: Programming and Applications,” Volume 2, Prentice Hall, June 1999. ISBN:

5 SIAC 2000 Materials: Book 3
Selection of papers
–Linux clusters
–Beowulf HOWTO
–PVM, MPI FAQs
Handouts from speakers

6 Introduction to Cluster Computing Prabhaker Mateti Wright State University Dayton, Ohio

7 Overview
High performance computing
High throughput computing
NOW, HPC, and HTC
Parallel algorithms
Software technologies

8 “High Performance” Computing
CPU clock frequency
Parallel computers
Alternate technologies
–Optical
–Bio
–Molecular

9 “Parallel” Computing
Traditional supercomputers
–SIMD, MIMD, pipelines
–Tightly coupled shared memory
–Bus-level connections
–Expensive to buy and to maintain
Cooperating networks of computers

10 “NOW” Computing
Workstation
Network
Operating System
Cooperation
Distributed (Application) Programs

11 Traditional Supercomputers
Very high starting cost
–Expensive hardware
–Expensive software
High maintenance
Expensive to upgrade

12 Traditional Supercomputers No one is predicting their demise, but …

13 Computational Grids are the future

14 Computational Grids “Grids are persistent environments that enable software applications to integrate instruments, displays, computational and information resources that are managed by diverse organizations in widespread locations.”

15 Computational Grids
Individual nodes can be supercomputers or NOWs
High availability
Accommodate peak usage
LAN : Internet :: NOW : Grid

16 Globus: A Computational Grid
Lee Liming, Argonne
Monday, Aug 21, 2000, 1:00 – 5:30 PM

17 “NOW” Computing
Workstation
Network
Operating System
Cooperation
Distributed+Parallel Programs

18 Workstation? PC? Mac?

19 “Workstation Operating System”
Authenticated users
Protection of resources
Multiple processes
Preemptive scheduling
Virtual memory
Hierarchical file systems
Network centric

20 Network
Ethernet
–10 Mbps obsolete
–100 Mbps common
–1000 Mbps desired
Protocols
–TCP/IP

21 Cooperation
Workstations are “personal”
Others' use slows you down …
Willing to share
Willing to trust

22 Distributed Programs
Spatially distributed programs
–A part here, a part there, …
–Parallel
–Synergy
Temporally distributed programs
–Compute half today, half tomorrow
–Combine the results at the end
Migratory programs
–Have computation, will travel

23 Technological Bases of Distributed+Parallel Programs
Spatially distributed programs
–Message passing
Temporally distributed programs
–Shared memory
Migratory programs
–Serialization of data and programs

24 Distributed Shared Memory
“Simultaneous” read/write access by spatially distributed processors
Abstraction layer of an implementation built from message passing primitives
Semantics not so clean

25 Technological Bases for Migratory Programs
Same CPU architecture
–x86, PowerPC, MIPS, SPARC, …, JVM
Same OS + environment
Be able to “checkpoint”
–suspend, and
–then resume computation
–without loss of progress

26 Development of Distributed+Parallel Programs
New code + algorithms
Old programs rewritten in new languages that have distributed and parallel primitives
Parallelize legacy code

27 New Programming Languages
With distributed and parallel primitives
Functional languages
Logic languages
Data-flow languages

28 Parallel Programming Languages
Based on the shared-memory model
Based on the distributed-memory model
Parallel object-oriented languages
Parallel functional programming languages
Concurrent logic languages

29 PVM and MPI
Message passing primitives
Can be embedded in many existing programming languages
Architecturally portable
Open-source implementations

30 PVM and MPI
Prabhaker Mateti, WSU
Tuesday Aug 22, :30 – 12:00

31 OpenMP
Distributed shared memory API
Implementations: Real soon now

32 SPMD
Single program, multiple data
Contrast with SIMD
Same program runs on multiple nodes
May or may not be lock-step
Nodes may be of different speeds
Barrier synchronization

33 Condor
Cooperating workstations
Migratory programs
–Checkpointing
–Remote I/O
Resource matching

34 Condor
Prabhaker Mateti, WSU
Wednesday Aug 23, :30 – 12:00

35 Clusters of Workstations
Inexpensive alternative to traditional supercomputers
High availability
–Lower down time
–Easier access
Development platform, with production runs on traditional supercomputers

36 Linux Clusters
Kumaran Kalyanasundaram, SGI
Monday Aug 21, :30 – 9:00 PM
Tuesday Aug 22, :00 – 5:30 PM

37 HPVM Clusters
Mario Lauria
Wednesday Aug 23, :00 – 4:00 PM