
SHARCNET

Multicomputer Systems

- A multicomputer system comprises a number of independent machines linked by an interconnection network.
- Each computer executes its own program, which may access its local memory and may send and receive messages over the network.
- The nature of the interconnection network has been a major research topic in both academia and industry.
- A distributed system is one type of multicomputer system. What about others?
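The send/receive model above can be sketched in plain Python, with separate processes standing in for the independent machines and queues standing in for the interconnection network (the worker function and chunk sizes here are invented for illustration, not part of SHARCNET):

```python
from multiprocessing import Process, Queue

def worker(rank, inbox, outbox):
    # Each "machine" runs its own program with only local memory;
    # all sharing happens through explicit messages.
    chunk = inbox.get()              # receive a message over the "network"
    outbox.put((rank, sum(chunk)))   # send a result back

def run_cluster(nprocs=4):
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(r, inbox, outbox))
             for r in range(nprocs)]
    for p in procs:
        p.start()
    for r in range(nprocs):          # distribute independent chunks of work
        inbox.put(list(range(r * 10, (r + 1) * 10)))
    total = sum(outbox.get()[1] for _ in range(nprocs))
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(run_cluster())             # sums 0..39 across four "machines"
```

On a real multicomputer the same pattern would use a message-passing library such as MPI, with the network replacing the in-memory queues.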

Multiprocessor Usage

Scientific and engineering applications often require loops over large vectors, e.g., matrix elements or points in a grid or 3D mesh. Applications include:
- Computational fluid dynamics
- Scheduling (e.g., airline scheduling)
- Health and biological modelling
- Economics and financial modelling (e.g., option pricing)
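Such loop-heavy workloads parallelize naturally by splitting the index range among workers, each handling one contiguous slice. A minimal sketch using only the Python standard library (the "vector" and the function being summed are invented for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    # Each worker processes one slice of the vector independently.
    lo, hi = bounds
    return sum((0.001 * i) ** 2 for i in range(lo, hi))

def squared_norm(n, nworkers=4):
    # Split indices 0..n-1 into contiguous chunks, one per worker,
    # then combine the partial results.
    step = (n + nworkers - 1) // nworkers
    ranges = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ProcessPoolExecutor(max_workers=nworkers) as pool:
        return sum(pool.map(partial_sum, ranges))
```

The same split-compute-combine structure underlies most grid and mesh codes run on clusters; only the per-slice computation changes.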

Multiprocessor Usage

- People have been developing "clusters" of machines connected by Ethernet for parallel applications.
- The first such cluster, developed by two researchers at NASA, had 16 machines connected by 10 Mb/s Ethernet.
- This is known as the Beowulf approach to building a parallel computer, and such clusters are sometimes called Beowulf clusters.

Sharcnet

- UWO has taken a leading role in Canada in exploiting the ideas behind the Beowulf cluster.
- High-performance clusters: "Beowulf on steroids"
  - Powerful off-the-shelf computational elements
  - Advanced communications
- Geographical separation (local use)
- Connect clusters using emerging optical communications
- The result is the Shared Hierarchical Academic Research Computing Network, or SHARCNET.

Sharcnet

- One cluster is called "Great White"
- Processors:
  - 4 Alpha processors per node at 833 MHz (4-processor SMP)
  - 4 GB of memory per node
  - 38 SMP nodes: 152 processors in total
- Communications:
  - 1 Gb/s Ethernet
  - 1.6 Gb/s Quadrics interconnect
- November 2001: #183 in the world
  - Fastest academic computer in Canada
  - 6th fastest academic computer in North America

Sharcnet

[Photo: Great White, in the Western Science Building]

Sharcnet

- Extend the Beowulf approach to clusters of high-performance clusters
- Connect clusters: "clusters of clusters"
  - Built on emerging optical communications
  - The initial configuration used optical equipment from the telecommunications industry
- Collectively, a supercomputer!

Sharcnet

[Diagram: clusters at Guelph, McMaster (MAC), and UWO linked by optical communication; clusters across universities (initial configuration)]

Sharcnet

- In 2004, UWO received an investment of $56 million from government and private industry (HP) to expand SHARCNET.
- With the new capabilities, SHARCNET could rank among the top 100 to 150 supercomputers.
- It will be the fastest supercomputer of its kind, i.e., a distributed system whose nodes are clusters.

Sharcnet

Sharcnet Applications

Applications running on SHARCNET come from many domains, including:
- Chemistry
- Bioinformatics
- Economics
- Astrophysics
- Materials Science and Engineering