Ch. 1 Introduction
EE692 Parallel and Distributed Computation | Prof. Song Chong
Network Systems Lab., Korea Advanced Institute of Science and Technology

Issues in Parallelization
- Task allocation to processors
  - The breakdown of the total workload into small tasks assigned to different processors
- Proper sequencing of the tasks when some of them are interdependent and cannot be executed simultaneously
- Communication of interim computation results between processors
- Synchronization of the computations of the processors (a sketch follows this list)
  - Synchronous: predetermined points for the completion of computations or for the arrival of data
  - Asynchronous: no such points
- Performance measures
- Effects of the system's architecture on performance
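To picture the synchronous mode concretely, here is a minimal sketch (my own illustration, not from the course material) using Python's multiprocessing: each worker owns one block of a shared vector, and a barrier marks the predetermined point at which every block must be updated before the next iteration starts. The averaging update rule is an arbitrary placeholder chosen only to give the workers something to compute.

```python
from multiprocessing import Process, Barrier, Array

def worker(pid, n_workers, x, barrier, iters):
    n = len(x)
    lo, hi = pid * n // n_workers, (pid + 1) * n // n_workers
    for _ in range(iters):
        mean = sum(x) / n          # read everyone's values from the previous iteration
        barrier.wait()             # all workers have finished reading
        for i in range(lo, hi):    # update only the block this worker owns
            x[i] = 0.5 * (x[i] + mean)
        barrier.wait()             # predetermined point: every block has been updated

if __name__ == "__main__":
    n_workers = 4
    x = Array("d", [float(i) for i in range(8)], lock=False)   # shared vector
    barrier = Barrier(n_workers)
    procs = [Process(target=worker, args=(p, n_workers, x, barrier, 10))
             for p in range(n_workers)]
    for p in procs: p.start()
    for p in procs: p.join()
    print(list(x))                 # all entries drift toward the common mean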

Need for Parallel & Distributed Computation
- This lecture restricts attention to numerical computation
- Large-scale, fast computation
  - Partial differential equations (PDEs), e.g., fluid dynamics, weather prediction, image processing
    - Can be decomposed along a spatial dimension
    - Each processor manipulates the variables associated with a small region in space
    - Interactions between variables are local in nature (a domain-decomposition sketch follows this list)
  - Systems of equations, mathematical programming (optimization)
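To make the spatial decomposition concrete, the following is a minimal, serially simulated sketch (my own illustration, not from the slides) for a 1-D heat equation discretized by explicit finite differences. The grid is split into P contiguous blocks, one per notional processor; each block carries two ghost cells, and because the stencil is local, only those ghost values would ever need to be exchanged between neighbouring processors.

```python
import numpy as np

P, N, steps, alpha = 4, 32, 50, 0.25            # "processors", grid points, time steps, dt/dx^2

grid = np.sin(np.linspace(0.0, np.pi, N))        # initial temperature profile
local = [np.concatenate(([0.0], b, [0.0]))       # each block plus two ghost cells
         for b in np.array_split(grid, P)]

for _ in range(steps):
    # "communication" step: copy only the neighbours' boundary values into the ghost cells
    # (zero temperature is assumed just outside the grid)
    for p in range(P):
        local[p][0]  = local[p - 1][-2] if p > 0     else 0.0
        local[p][-1] = local[p + 1][1]  if p < P - 1 else 0.0
    # computation step: every block is updated independently, one block per processor
    for p in range(P):
        u = local[p]
        u[1:-1] += alpha * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print(np.concatenate([u[1:-1] for u in local]).round(3))
```

In a real parallel implementation the two ghost-cell assignments would become send/receive operations between neighbouring processors, which is the only communication the scheme requires per time step.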

Need for Parallel & Distributed Computation (Cont'd)
- Analysis, simulation and optimization of large-scale interconnected systems, e.g., queueing systems
- Information acquisition, extraction and control in geographically distributed systems, e.g., sensor networks, communication networks, wireless networks
  - Challenges: synchronization, unreliable communication, absence of a central control mechanism

Distinction between Parallel and Distributed Computing Systems
- Parallel computing systems
  - Processors are located within a small distance of each other
  - Designed so that the processors jointly execute a computational task
  - Communication between processors is reliable and predictable
- Distributed computing systems
  - Processors may be far apart (geographically distributed)
  - Communication delays may be unpredictable
  - Communication links may be unreliable
  - The topology may change during operation
  - Usually loosely coupled; there is very little central coordination and control

Parameters to Classify Parallel and/or Distributed Computing Systems
- Type and number of processors
- Presence or absence of a global control mechanism
- Synchronous vs. asynchronous operation
- Processor interconnections: two extremes (contrasted in the sketch below)
  - Shared-memory systems: solve the inter-processor communication problem by providing a global memory, but introduce the problem of several processors simultaneously accessing the memory
  - Message-passing systems: each processor has its own local memory, and processors communicate over an interconnection network
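As a rough illustration of the two extremes (my own sketch, not from the course material), the fragment below computes the same sum twice: once in a shared-memory style, where threads update a single global accumulator and need a lock to guard simultaneous access, and once in a message-passing style, where each process keeps its chunk in local memory and sends only its partial result over a queue.

```python
import threading
import multiprocessing as mp

DATA = list(range(100))

def shared_memory_demo():
    # Shared-memory extreme: threads read/write one common accumulator directly;
    # the lock guards the simultaneous-access problem mentioned above.
    total = [0.0]
    lock = threading.Lock()

    def worker(chunk):
        s = sum(chunk)
        with lock:
            total[0] += s

    threads = [threading.Thread(target=worker, args=(DATA[i::4],)) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    return total[0]

def mp_worker(chunk, out):
    # Message-passing extreme: each process sees only its local chunk and
    # sends its partial sum over a channel; no memory is shared.
    out.put(sum(chunk))

def message_passing_demo():
    out = mp.Queue()
    procs = [mp.Process(target=mp_worker, args=(DATA[i::4], out)) for i in range(4)]
    for p in procs: p.start()
    result = sum(out.get() for _ in procs)
    for p in procs: p.join()
    return result

if __name__ == "__main__":
    print("shared-memory sum:   ", shared_memory_demo())
    print("message-passing sum: ", message_passing_demo())
```

The worker count and the summing task are arbitrary; the point is only where the data lives and how partial results reach the coordinator.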