

PARALLEL COMPUTING overview What is Parallel Computing? Traditionally, software has been written for serial computation: it is run on a single computer having a single Central Processing Unit (CPU); a problem is broken into a discrete series of instructions; instructions are executed one after another; and only one instruction may execute at any moment in time.

PARALLEL COMPUTING overview

In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem: it is run using multiple CPUs; a problem is broken into discrete parts that can be solved concurrently; each part is further broken down into a series of instructions; and instructions from each part execute simultaneously on different CPUs.
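The decomposition just described can be sketched with Python's standard multiprocessing module. This is a minimal illustration, not part of the original slides; the chunk size, worker count, and the summation task itself are arbitrary choices made for the example.

```python
from multiprocessing import Pool

def part_sum(chunk):
    """One discrete part of the problem: sum a single chunk of numbers."""
    return sum(chunk)

def parallel_sum(numbers, workers=4):
    # Break the problem into discrete parts that can be solved concurrently.
    size = max(1, len(numbers) // workers)
    chunks = [numbers[i:i + size] for i in range(0, len(numbers), size)]
    # Instructions from each part execute simultaneously on different CPUs.
    with Pool(workers) as pool:
        partials = pool.map(part_sum, chunks)
    # Combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum(list(range(1, 101))))  # same answer as the serial sum: 5050
```

The result is identical to the serial computation; only the execution strategy changes.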


Cluster Computing - Introduction Definition: Cluster computing is the technique of linking two or more computers into a network (usually through a local area network) in order to take advantage of the parallel processing power of those computers.


Types of Clusters * High Availability Clusters HA clusters are designed to ensure constant access to service applications. They maintain redundant nodes that can act as backup systems in the event of failure. The minimum number of nodes in an HA cluster is two (one active and one redundant) though most HA clusters will use considerably more nodes.

* Benefits of Computer Clusters Computer clusters offer a number of benefits over mainframe computers, including:

Reduced Cost: The price of off-the-shelf consumer desktops has plummeted in recent years, and this drop in price has corresponded with a vast increase in their processing power and performance. The average desktop PC today is many times more powerful than the first mainframe computers.

Processing Power: The parallel processing power of a high-performance cluster can, in many cases, prove more cost-effective than a mainframe with similar power. This lower price per unit of processing power enables enterprises to get a greater return on their IT budget.

Improved Network Technology: Vast improvements in networking technology, along with a reduction in its price, have driven the development of computer clusters. Computer clusters are typically connected via a single virtual local area network (VLAN), and the network treats each computer as a separate node. Information can be passed through these networks with very little lag, ensuring that data doesn't bottleneck between nodes.

Scalability: Perhaps the greatest advantage of computer clusters is the scalability they offer. While mainframe computers have a fixed processing capacity, computer clusters can easily be expanded as requirements change by adding additional nodes to the network.

Availability: When a mainframe computer fails, the entire system fails. However, if a node in a computer cluster fails, its operations can simply be transferred to another node within the cluster, ensuring that there is no interruption in service.
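The active/redundant failover behaviour of an HA cluster can be sketched in a few lines of plain Python. This is a toy model; the class names, the `alive` flag, and the swap-on-failure policy are all invented for illustration and say nothing about how any real HA product detects or handles failure.

```python
class Node:
    """A cluster node that can serve requests while it is alive."""
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} served {request}"

class HACluster:
    """Minimum HA configuration: one active node plus one redundant backup."""
    def __init__(self, active, backup):
        self.active = active
        self.backup = backup

    def serve(self, request):
        try:
            return self.active.handle(request)
        except RuntimeError:
            # Failover: transfer operations to the redundant node,
            # so there is no interruption in service.
            self.active, self.backup = self.backup, self.active
            return self.active.handle(request)

cluster = HACluster(Node("node-a"), Node("node-b"))
print(cluster.serve("req-1"))   # node-a serves while healthy
cluster.active.alive = False    # simulate a node failure
print(cluster.serve("req-2"))   # node-b transparently takes over
```

The caller never sees the failure: the second request is answered by the backup node.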

Types of Clusters HA clusters aim to solve the problems that arise from mainframe failure in an enterprise. Rather than lose all access to IT systems, HA clusters ensure 24/7 access to computational power. This feature is especially important in business, where data processing is usually time-sensitive.

Types of Clusters Load-balancing Clusters Load-balancing clusters operate by routing all work through one or more load-balancing front-end nodes, which then distribute the workload efficiently between the remaining active nodes. Load-balancing clusters are extremely useful for those working with limited IT budgets. Devoting a few nodes to managing the workflow of a cluster ensures that limited processing power can be optimised.
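The front-end's distribution role can be illustrated with a small sketch. The slides do not name a scheduling algorithm, so round-robin is assumed here purely as one common, simple policy; the class and method names are invented for the example.

```python
from itertools import cycle

class LoadBalancer:
    """Front-end node that distributes work across the remaining active nodes."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rr = cycle(self.nodes)  # simple round-robin policy (an assumption)

    def dispatch(self, task):
        # Route each incoming task to the next node in rotation.
        node = next(self._rr)
        return node, task

lb = LoadBalancer(["node-1", "node-2", "node-3"])
assignments = [lb.dispatch(f"task-{i}") for i in range(6)]
# Each back-end node ends up with an equal share of the six tasks.
```

Real load balancers typically use richer policies (least-connections, weighted, health-aware), but the front-end/back-end split is the same.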

Types of Clusters High-performance Clusters HPC clusters are designed to exploit the parallel processing power of multiple nodes. They are most commonly used to perform functions that require nodes to communicate as they perform their tasks – for instance, when calculation results from one node will affect future results from another.
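The "one node's result affects another's" pattern can be sketched as a two-stage pipeline using Python processes and pipes standing in for cluster nodes and the interconnect. The stage functions and the doubling computation are invented for the example.

```python
from multiprocessing import Process, Pipe

def node1(conn):
    # This node computes a partial result and sends it to a peer node.
    partial = sum(range(10))  # 45
    conn.send(partial)
    conn.close()

def node2(conn, result_conn):
    # This node's result depends on the value received from node1.
    upstream = conn.recv()
    result_conn.send(upstream * 2)
    result_conn.close()

def run_pipeline():
    left, right = Pipe()
    res_left, res_right = Pipe()
    p1 = Process(target=node1, args=(left,))
    p2 = Process(target=node2, args=(right, res_left))
    p1.start()
    p2.start()
    result = res_right.recv()
    p1.join()
    p2.join()
    return result

if __name__ == "__main__":
    print(run_pipeline())  # 90
```

In a real HPC cluster the pipes would be network links and the processes would run on separate machines, but the communication dependency is the same.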

Cluster Architecture A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone computers working together as a single integrated computing resource. A computer node can be a single- or multi-processor system with memory, I/O facilities, and an operating system.

Cluster Architecture A cluster generally refers to two or more computers connected together. The nodes can exist in a single cabinet or be physically separated and connected via a LAN. An interconnected cluster of computers can appear as a single system to users and applications.

Cluster Architecture (figure)

Cluster Architecture The following are some prominent components of cluster computers:
Multiple high-performance computers (PCs, workstations, or SMPs)
State-of-the-art operating systems (layered or microkernel based)
High-performance networks/switches (such as Gigabit Ethernet and Myrinet)
Network interface cards (NICs)

Cluster Architecture
Fast communication protocols and services (such as Active Messages and Fast Messages)
Cluster middleware (single system image and system availability infrastructure)
Hardware (such as Digital (DEC) Memory Channel, hardware DSM, and SMP techniques)
Operating system kernel or gluing layer (such as Solaris MC and GLUnix)

Cluster Architecture Applications and subsystems:
Applications (such as system management tools and electronic forms)
Runtime systems (such as software DSM and parallel file systems)
Resource management and scheduling software (such as LSF (Load Sharing Facility) and CODINE (Computing in Distributed Networked Environment))

Cluster Architecture
Parallel programming environments and tools (such as compilers, PVM (Parallel Virtual Machine), and MPI (Message Passing Interface))
Applications (sequential, parallel, or distributed)

Cluster Architecture The network interface hardware acts as a communication processor and is responsible for transmitting and receiving packets of data between cluster nodes via a network switch.
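The transmit/receive role described here can be demonstrated with an ordinary TCP socket pair on one machine, with the loopback interface standing in for the cluster interconnect. The payload and the single-message protocol are invented for the sketch.

```python
import socket
import threading

def node_server(sock, received):
    # Receiving side: accept a connection and read the peer's data
    # until the connection closes.
    conn, _ = sock.accept()
    with conn:
        data = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        received.append(data)

def send_packet(port, payload):
    # Sending side: transmit data to a peer node over the network.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(payload)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # bind to any free port
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=node_server, args=(listener, received))
t.start()
send_packet(port, b"hello, cluster")
t.join()
listener.close()
print(received[0])  # b'hello, cluster'
```

In a real cluster the two endpoints would be NICs on separate nodes joined by a switch, but the send/receive mechanics are the same.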

Cluster Architecture Communication software offers a means of fast and reliable data communication among cluster nodes and to the outside world. The cluster nodes can work collectively, as an integrated computing resource, or they can operate as individual computers.

Cluster Architecture The cluster middleware is responsible for offering the illusion of a unified system image and availability out of a collection of independent but interconnected computers. Programming environments can offer portable, efficient, and easy-to-use tools for the development of applications. They include message-passing libraries, debuggers, and profilers.
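Message-passing libraries such as MPI expose rank-addressed send and receive primitives. The toy communicator below mimics that style with in-process queues; the class and method names are invented for illustration and are not MPI's actual API.

```python
import queue

class Comm:
    """Toy rank-addressed communicator in the style of message-passing
    libraries such as MPI (this API is invented, not MPI's)."""
    def __init__(self, size):
        self.size = size
        self._mailbox = [queue.Queue() for _ in range(size)]

    def send(self, data, dest):
        # Deliver a message to the destination rank's mailbox.
        self._mailbox[dest].put(data)

    def recv(self, rank):
        # Block until a message arrives for this rank.
        return self._mailbox[rank].get()

comm = Comm(size=2)
comm.send({"partial_sum": 45}, dest=1)
msg = comm.recv(rank=1)
print(msg["partial_sum"])  # 45
```

Real MPI adds tags, collectives, and network transport, but the programming model (explicit sends and receives between numbered ranks) is what this sketch captures.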

Parallel Programming Models

Applications of Clusters