
Improving Quick Sort Algorithm Performance By Using Parallel Algorithms

Motivation: Sorting is among the fundamental problems of computer science. Sorting of datasets, whether database records or scientific data, appears in most applications, from simple user programs to complex software systems. Today the amount of data to be sorted is often so large that even the most efficient sequential sorting algorithms become the bottleneck of the application. Parallel computing opens new possibilities to remove this bottleneck and improve the performance of known sorting algorithms by adapting them for parallel execution.

AIM: To find the trade-off point after which the performance of Parallel Quick Sort and Hyper Quick Sort starts to decrease as the number of processors increases.

Observations: N = number of processors; all times are in milliseconds.
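The per-run times in the observations can be collected with a simple wall-clock harness around each sort. The sketch below is illustrative (Python rather than the C/MPI code the measurements were actually taken with; function names and the random data set are our own):

```python
import random
import time

def quicksort(a):
    """Plain recursive quicksort -- the sequential baseline being measured."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

def time_ms(sort_fn, data):
    """Run sort_fn(data) once and return (result, elapsed milliseconds)."""
    start = time.perf_counter()
    result = sort_fn(data)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

data = [random.randint(0, 10**6) for _ in range(10**4)]
sorted_data, ms = time_ms(quicksort, data)
print(f"sorted {len(data)} keys in {ms:.1f} ms")
```

In the parallel runs the same idea applies, except the clock is read on every rank and the maximum elapsed time across ranks is reported.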

Speed up achieved by Parallel Quick Sort over Sequential Sort:
Speed up (N=8) = (running time of sequential sort) / (best-case running time of Parallel Quick Sort) = 2896 / … = …

Speed up achieved by Hyper Quick Sort over Sequential Sort:
Speed up (N=8) = (running time of sequential sort) / (best-case running time of Hyper Quick Sort) = 2896 / … = …

Speed up calculated over a data set of size 10^6.
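The ratios above can be computed mechanically. In the sketch below, the 2896 ms sequential time is taken from the slide; the parallel time is a placeholder, not the measured value:

```python
def speedup(t_seq_ms, t_par_ms):
    """Speed-up = sequential running time / parallel running time."""
    return t_seq_ms / t_par_ms

def efficiency(t_seq_ms, t_par_ms, n_procs):
    """Efficiency = speed-up / number of processors (1.0 = ideal scaling)."""
    return speedup(t_seq_ms, t_par_ms) / n_procs

t_sequential = 2896.0  # ms, from the slide (data set of size 10^6)
t_parallel = 400.0     # ms, placeholder value, NOT the measured time
print(f"speed-up (N=8):  {speedup(t_sequential, t_parallel):.2f}")
print(f"efficiency (N=8): {efficiency(t_sequential, t_parallel, 8):.2f}")
```

Efficiency is worth tracking alongside speed-up: it makes the trade-off point visible, since efficiency drops as communication overhead grows with N.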

Speed up achieved by Hyper Quick Sort over Parallel Quick Sort:
Speed up (N=8) = (running time of Parallel Quick Sort) / (running time of Hyper Quick Sort) = 195 / … = …

Conclusion: The bases of comparison are running time, number of comparisons, and speed up achieved. 1) The graphs show that Hyper Quick Sort and Parallel Quick Sort perform much better than sequential sort, since they exploit parallelism to reduce the overall running time. 2) Parallel Quick Sort performs worse than Hyper Quick Sort because of its poorer load balancing. 3) When the number of processors grows beyond a certain point, the performance of both Parallel and Hyper Quick Sort decreases because of MPI communication overheads.
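The load-balancing difference in point 2 comes from how Hyper Quick Sort picks pivots: after each local sort, the group leader broadcasts the median of its sorted block, so partner exchanges tend to split the data evenly. The following is a single-process simulation of that scheme, not the MPI implementation behind the measurements; list indices stand in for MPI ranks, and plain list moves stand in for hypercube sends:

```python
def hyperquicksort_sim(blocks):
    """Single-process simulation of hyperquicksort on 2^d 'processors'.

    Each entry of `blocks` stands in for one rank's local data. Real
    implementations exchange halves over hypercube links with MPI; here
    we just move Python lists around.
    """
    for b in blocks:
        b.sort()  # step 1: every rank sorts its local block
    group = len(blocks)           # current hypercube sub-group size
    while group > 1:
        half = group // 2
        for base in range(0, len(blocks), group):
            # the group leader broadcasts the median of its block as pivot
            pivot_block = blocks[base]
            pivot = pivot_block[len(pivot_block) // 2] if pivot_block else 0
            for i in range(half):
                lo_rank, hi_rank = base + i, base + i + half
                combined = blocks[lo_rank] + blocks[hi_rank]
                # partner exchange: low rank keeps keys <= pivot,
                # high rank keeps keys > pivot
                blocks[lo_rank] = sorted(x for x in combined if x <= pivot)
                blocks[hi_rank] = sorted(x for x in combined if x > pivot)
        group = half
    return blocks  # concatenating blocks in rank order yields sorted data
```

Correctness holds for any pivot choice, but a bad pivot leaves some ranks with far more keys than others; median pivots keep blocks close to balanced on uniform data, which is exactly why Hyper Quick Sort outperforms plain Parallel Quick Sort here.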