Review of Lab Manual for Parallel Processing Systems
Emina I. Milovanović, Vladimir M. Ćirić

Chapters:
- Parallaxis
- Synchronization and communication in MIMD systems
- BACI concurrency simulator

Chapter 1: Parallaxis
- Parallaxis is a structured programming language for data-parallel programming (SIMD systems)
- Developed by Thomas Bräunl in 1989
- Based on Modula-2, extended with machine-independent parallel constructs
- The Parallaxis simulation environment allows studying data-parallel fundamentals on single-processor systems and developing parallel programs that can later run on real SIMD systems

In Parallaxis one can define a "virtual machine" with:
- an arbitrary number of PEs
- an arrangement of the PEs
- connections between the PEs
(a sketch of such a definition follows below)

The manual contains a full description of the Parallaxis language:
- data types
- specification of virtual processors and connections
- control statements
- parallel data transfer
- working with subroutines
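To give the flavor of these definitions, below is a minimal sketch of a Parallaxis program that declares an 8-PE ring, shifts each PE's value to its neighbor, and sums the result on the host. The sketch is not taken from the manual: the identifiers (ring, next, prev) are illustrative, and the spelling of ID, PROPAGATE, and REDUCE follows Parallaxis-III conventions from memory, so it should be checked against the manual's dialect.

  (* Illustrative sketch only, not from the manual: an 8-PE ring.
     Syntax follows Parallaxis-III conventions and may differ in
     detail from the dialect described in the manual. *)
  SYSTEM ring_demo;

  CONFIGURATION ring [1..8];                          (* 8 virtual PEs *)
  CONNECTION next: ring[i] -> ring[i MOD 8 + 1].prev;

  VAR x     : ring OF INTEGER;    (* vector: one component per PE *)
      total : INTEGER;            (* scalar, lives on the host *)

  BEGIN
    x := ID(ring);                (* each PE loads its own position 1..8 *)
    PROPAGATE.next(x);            (* shift values one step along the ring *)
    total := REDUCE.SUM(x);       (* global sum of all components: 36 *)
    WriteInt(total, 5);
  END ring_demo.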

The manual contains many examples for the most common interconnection networks (linear array, mesh, hexagonal, binary tree, torus, hypercube). At the end of the chapter, 7 groups of assignments for students are given. The students have to:
- define the required network topology
- define the arrangement of processors in the system
- solve the problem on the defined system

Chapter 2: Synchronization and communication in MIMD systems
- critical sections
- semaphores (see the example below)
- monitors
- message passing
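As a concrete illustration of semaphore-based mutual exclusion, and in the spirit of the BACI simulator presented in Chapter 3, here is a small program in BACI's Pascal dialect. The program itself is a minimal sketch: names like raceDemo and increment are ours, not the manual's.

  program raceDemo;
  (* Two concurrent processes increment a shared counter.
     The binary semaphore makes each increment atomic; without
     wait/signal, interleaved updates could be lost. *)
  var mutex : semaphore;
      total : integer;

  procedure increment;
  var i : integer;
  begin
    for i := 1 to 100 do
    begin
      wait(mutex);            (* enter critical section *)
      total := total + 1;
      signal(mutex)           (* leave critical section *)
    end
  end;

  begin
    initialsem(mutex, 1);     (* 1 = semaphore initially open *)
    total := 0;
    cobegin
      increment; increment    (* run two instances concurrently *)
    coend;
    writeln('total = ', total)   (* always 200 with the semaphore *)
  end.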

Chapter 3: BACI concurrency simulator (Ben-Ari Concurrency Interpreter)
- Developed at the College of William and Mary by Bill Bynum and Tracy Camp
- David Strite from the University of Pennsylvania developed a Java GUI for BACI (jBACI), which is portable across all platforms
- The BACI simulator consists of a Pascal (or C) compiler and an interpreter
- The Pascal and C compilers support binary and counting semaphores and Hoare monitors

The manual contains descriptions of:
- the concurrency constructs (cobegin-coend, semaphores, monitors, built-in functions, …), with a monitor sketch below
- instructions for using the BACI compiler and interpreter
- three groups of assignments for students
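To complement the semaphore example above, here is a hedged sketch of a Hoare-style monitor in BACI Pascal guarding a single shared resource. The monitor name, entry procedures, and condition variable are illustrative assumptions, and the declaration syntax (procedure entry, waitc, signalc) should be verified against the BACI documentation.

  program monitorDemo;
  (* Sketch: a monitor that serializes access to one resource.
     waitc/signalc are BACI's condition-variable operations. *)
  monitor resource;
  var busy : boolean;
      free : condition;

    procedure entry acquire;
    begin
      if busy then waitc(free);   (* block until the holder releases *)
      busy := true
    end;

    procedure entry release;
    begin
      busy := false;
      signalc(free)               (* wake one waiting process, if any *)
    end;

  begin
    busy := false                 (* monitor initialization code *)
  end;

  procedure user(id : integer);
  begin
    acquire;
    writeln('process ', id, ' holds the resource');
    release
  end;

  begin
    cobegin
      user(1); user(2)
    coend
  end.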

jBACI edit mode (screenshot)

jBACI run mode (screenshot)

When executing a program, the window is divided into two areas:
- the left area contains the process table, with each process's number and name and its status (active or suspended)
- the right area contains the process windows, one for each process

After this cycle of lab exercises, the students should deeply understand the mechanisms of interprocessor synchronization and communication in multiprocessor systems. At the end of the lab exercises, students take a colloquium, which contributes 20% of the exam grade.