CS 221 – May 13

Review chapter 1

Lab
– Show me your C programs
– Black spaghetti – connect remaining machines
– Be able to ping, ssh, and transfer files among nodes

For tomorrow, please read sections 2.4, 3.1, and 3.2 to get ready to write real parallel programs!
– Chapter 3 contains the nuts and bolts
– Chapter 2 has background material

Chapter 1 ideas

Why is parallel computing necessary in the long run?
– Moore's law
– The speed of light limits how large (and how fast) a single chip can be
It's a problem of hardware as well as software: how?
How can we tell whether a problem would benefit from a parallelized solution?

Problem solving

Generally, we begin by writing an ordinary "serial" program.
– Benefits?
Then, we think of ways to redo the program to take advantage of parallelism.
– There is no automatic tool like a compiler to do all the work for us!
– Often, the data needs to be partitioned among the individual processors.
– We need some background software (e.g. MPI) to handle low-level communication details.
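To make "background software" concrete, here is a minimal MPI skeleton in C. This is a sketch, not code from the textbook; it assumes an MPI implementation is installed, with a typical build-and-run of mpicc hello.c followed by mpiexec -n 5 ./a.out.

    /* Minimal MPI skeleton (sketch). Every MPI program starts and ends
       this way; "rank" identifies which copy of the program this is. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                  /* start up MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my process number */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                          /* shut down MPI */
        return 0;
    }

The same executable runs on every machine; each copy learns its own rank and branches on it, which is exactly the "design 1 program shared among all machines" idea on the next slide.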

Simple example

Suppose we wanted to sum a large set of values.
– You are an investment firm, and your fund owns shares in 10,000 securities. What is today's total asset value?
– The serial solution is straightforward, but a bit slow.
Suppose we have 5 computers working on this task at once.
– Design 1 program that can be shared among all 5 machines.
– What information is passed to each machine?
– What happens when they are finished?
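Here is a sketch of how the 5-machine version might look with MPI. The array name price, the dummy data, and the even split of the 10,000 values are assumptions for illustration: each process sums its own slice, then the workers send their partial sums to process 0.

    /* Sketch: parallel sum of N share values across "size" processes.
       price[] holds dummy data here; a real program would read in
       today's prices instead. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 10000

    int main(int argc, char *argv[])
    {
        static double price[N];
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        for (int i = 0; i < N; i++)
            price[i] = 1.0;                      /* dummy data */

        /* Partition: each process takes a contiguous slice. */
        int chunk = N / size;
        int lo = rank * chunk;
        int hi = (rank == size - 1) ? N : lo + chunk;  /* last rank takes leftovers */

        double partial = 0.0;
        for (int i = lo; i < hi; i++)
            partial += price[i];

        if (rank == 0) {                         /* master collects results */
            double total = partial, incoming;
            for (int src = 1; src < size; src++) {
                MPI_Recv(&incoming, 1, MPI_DOUBLE, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                total += incoming;
            }
            printf("Total asset value: %.2f\n", total);
        } else {
            MPI_Send(&partial, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

This answers the slide's questions directly: each machine learns (via its rank) which slice to handle, and when the workers finish, they send and the master receives. Note that the master collects the partial sums one at a time, which is exactly the bottleneck the next slide addresses.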

Delegate data collection It’s tempting to just have the “master” node add up all the data. If you have many cores, this is not efficient – Suppose you have 1 million securities to look up and 1,000 cores sharing the workload. Then, the master node is receiving messages from 999 nodes (in series!), and must add up the 1,000 values. – As much as possible, we want results to be communicated simultaneously. – Let’s work out a simple scenario with 8 cores. Each contains a number that needs to be added. How would you do it? How would the pseudocode look?
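One answer to the 8-core question is a tree: in each round, half the remaining cores send their partial sums to a partner, so the whole collection takes log2(8) = 3 rounds instead of 7 sequential receives. Here is a sketch in MPI-flavored C, assuming the number of cores is a power of two and that partial already holds each core's local sum:

    /* Tree-structured sum (sketch; power-of-two core counts only).
       Round 1: 1->0, 3->2, 5->4, 7->6
       Round 2: 2->0, 6->4
       Round 3: 4->0           ...and rank 0 holds the grand total. */
    for (int step = 1; step < size; step *= 2) {
        if (rank % (2 * step) == 0) {            /* I receive this round */
            double incoming;
            MPI_Recv(&incoming, 1, MPI_DOUBLE, rank + step, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            partial += incoming;
        } else if (rank % step == 0) {           /* I send, then I'm done */
            MPI_Send(&partial, 1, MPI_DOUBLE, rank - step, 0, MPI_COMM_WORLD);
            break;
        }
    }

In practice, MPI's MPI_Reduce collective performs exactly this kind of tree-structured collection for you, but working it out by hand is the point of the exercise.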

Major principles

A good parallel solution pays close attention to 3 concepts. What do they mean? What could go wrong?
– Communication
– Load balancing
– Synchronization – each core is working at its own pace, for example reading input…
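For the synchronization bullet, a tiny sketch: a barrier is the simplest tool for keeping cores that run at their own pace in step. The two helper functions here are hypothetical placeholders, not real MPI calls.

    read_my_input(rank);             /* hypothetical: each core reads at its own pace */
    MPI_Barrier(MPI_COMM_WORLD);     /* no core passes until every core has arrived */
    compute_my_share(rank);          /* hypothetical: now safe to assume all input is ready */

As for what could go wrong: one slow reader stalls everyone at the barrier (a load-balancing problem), and a core that never reaches the barrier deadlocks the whole job.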