Independent Study of Parallel Programming Languages
An Independent Study by Haris Ribic, Computer Science - Theoretical
Independent Study Advisor: Professor Daniel Bennett

Everybody has seen these advertisements, but what do they mean? Moreover, we have seen an increase in DUAL-CORE processors, but still, what does it all mean?

[Figure: simple schematics of a DUAL-CORE processor as built by Intel and AMD.]

Why DUAL-CORE?
- Ability to parallelize programs
- CPU uses less energy and delivers more performance
- Better system responsiveness and multi-tasking capability

"For example you could have your Internet browser open along with a virus scanner running in the background, while using Media Player to stream your favorite radio station and the dual-core processor will handle the multiple tasks without the decrease of performance and efficiency."

WHY THIS STUDY?
"In 2007, the degree of parallelism for personal computing in current desktop systems such as Linux and Windows Vista is nil, which either indicates the impossibility of the task or the inadequacy of our creativity." - Gordon Bell in Communications of the ACM

Preliminary results: the Monte Carlo method for calculating pi (π) was implemented in the MPI and OpenMP languages and run on a computer cluster and on a symmetric multiprocessing (SMP) computer.

Monte Carlo Method
- Generate random numbers x and y ranging from 0 to 1
- Count the number of hits inside the circle, i.e. points with x^2 + y^2 < 1
- Probability(hit) = surface of the quarter circle / surface of the square = π/4
- Therefore π ≈ 4 × (hits inside the circle / total darts thrown)
- Generating more numbers increases the accuracy of the estimate of π
- However, more numbers slow down the computer
(A minimal serial sketch of this estimator appears at the end of this transcript.)

MPI on the cluster
- Efficiency increases when using more nodes
- Language independent: could use C++ or FORTRAN
- Difficult to use
[Diagram: the master node distributes the program among processes 1 through K.]

OpenMP on the SMP computer
- Easy to write
- Depends on operating system scheduling

#pragma omp parallel private(i, trdID) shared(rndAry, hits, darts)
{
    trdID = omp_get_thread_num();   /* this thread's ID */
    srand(rndAry[trdID]);           /* seed this thread's generator */
    #pragma omp for reduction(+:hits)
    for (i = 0; i < darts; i++) {
        x_value = drand48();
        y_value = drand48();
        if (((x_value * x_value) + (y_value * y_value)) <= 1)
            hits = hits + 1;
    }
}

(Note: drand48 is seeded with srand48 or seed48 rather than srand, and it keeps one hidden state shared by all threads; a corrected, runnable sketch appears at the end of this transcript.)

[Diagram: threads running on CPU 1 and CPU 2, which share memory over the system bus.]

[Figure: simple schematics of a computer cluster like the one used by the Computer Science Department.]

Why Computer Cluster?
- Ability to parallelize programs
- More cost-effective than a traditional supercomputer
- Used in scientific research

A computer cluster is a group of loosely coupled computers that work together so closely that in many respects they can be viewed as a single computer. The components of a cluster are commonly connected to each other through fast local area networks.

The languages studied:
- MPI (Message Passing Interface): a standards-based library that allows many computers to communicate with one another.
- UPC (Unified Parallel C): an extension of the C programming language designed for high-performance computing on large-scale parallel machines.
- OpenMP (Open Multi-Processing): an application programming interface that supports multi-platform shared-memory multiprocessing programming.
- Charm++: a parallel object-oriented programming language based on C++.

The MPI version, as coordinated by process 0 (the master, src):

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &p);
MPI_Barrier(MPI_COMM_WORLD);
/* get the number of darts and broadcast it to every process */
MPI_Bcast(&totalDarts, 1, MPI_INT, src, MPI_COMM_WORLD);
/* each node performs its share of the calculation */
MPI_Gather(&time, 1, MPI_INT, aryTime, 1, MPI_INT, src, MPI_COMM_WORLD);
MPI_Reduce(&hits, &allHits, 1, MPI_INT, MPI_SUM, src, MPI_COMM_WORLD);
/* print results */

(A complete, runnable version of this sequence appears at the end of this transcript.)
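Appendix: runnable sketches. First, a minimal serial C sketch of the Monte Carlo estimator described above. It is an illustration, not the code used in the study; the fixed seed and the default dart count are assumptions chosen here for convenience.

#include <stdio.h>
#include <stdlib.h>

/* Throw `darts` random points into the unit square and count how many
   land inside the quarter circle x^2 + y^2 <= 1.  Since that region
   covers pi/4 of the square, pi is estimated as 4 * hits / darts. */
int main(int argc, char *argv[])
{
    long darts = (argc > 1) ? atol(argv[1]) : 10000000L;  /* assumed default */
    long hits = 0;

    srand48(12345);                     /* fixed seed for repeatable runs */
    for (long i = 0; i < darts; i++) {
        double x = drand48();
        double y = drand48();
        if (x * x + y * y <= 1.0)
            hits++;
    }
    printf("darts = %ld  pi ~= %f\n", darts, 4.0 * hits / (double)darts);
    return 0;
}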
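Second, a self-contained variant of the OpenMP fragment. Because drand48 keeps a single hidden state shared by every thread, this sketch substitutes erand48 with a per-thread state buffer; the name rndAry is kept from the slide, while the seeding scheme, dart count, and thread limit are assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    const long darts = 10000000L;   /* assumed dart count */
    long hits = 0;
    unsigned short rndAry[64][3];   /* per-thread erand48 state; assumes <= 64 threads */

    for (int t = 0; t < 64; t++) {  /* give each thread a distinct seed */
        rndAry[t][0] = 0x330E;
        rndAry[t][1] = (unsigned short)t;
        rndAry[t][2] = (unsigned short)(t * 7919);
    }

    #pragma omp parallel
    {
        int trdID = omp_get_thread_num();        /* this thread's ID */
        #pragma omp for reduction(+:hits)
        for (long i = 0; i < darts; i++) {
            double x = erand48(rndAry[trdID]);   /* thread-private generator */
            double y = erand48(rndAry[trdID]);
            if (x * x + y * y <= 1.0)
                hits++;
        }
    }
    printf("pi ~= %f\n", 4.0 * hits / (double)darts);
    return 0;
}

Compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp.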
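Finally, a runnable expansion of the MPI call sequence, filling in the steps the transcript only names ("get the number of darts", "each node performs calculation", "print results"). The variables my_rank, p, src, totalDarts, time, aryTime, hits, and allHits follow the slide; the dart count, seeding, and even work split are assumptions.

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int my_rank, p;
    int totalDarts = 10000000;      /* assumed dart count */
    int hits = 0, allHits = 0;
    const int src = 0;              /* rank 0 acts as the master */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);
    MPI_Barrier(MPI_COMM_WORLD);    /* line everyone up before timing */

    /* master announces the total number of darts to every process */
    MPI_Bcast(&totalDarts, 1, MPI_INT, src, MPI_COMM_WORLD);

    /* each node throws its share of the darts with its own seed */
    double t0 = MPI_Wtime();
    srand48(my_rank + 1);
    int myDarts = totalDarts / p;
    for (int i = 0; i < myDarts; i++) {
        double x = drand48();
        double y = drand48();
        if (x * x + y * y <= 1.0)
            hits++;
    }
    int time = (int)(MPI_Wtime() - t0);  /* whole seconds, matching the slide's MPI_INT gather */

    /* collect per-node times and sum per-node hit counts on the master */
    int *aryTime = (my_rank == src) ? malloc(p * sizeof(int)) : NULL;
    MPI_Gather(&time, 1, MPI_INT, aryTime, 1, MPI_INT, src, MPI_COMM_WORLD);
    MPI_Reduce(&hits, &allHits, 1, MPI_INT, MPI_SUM, src, MPI_COMM_WORLD);

    if (my_rank == src) {
        printf("pi ~= %f\n", 4.0 * allHits / (double)(myDarts * p));
        free(aryTime);
    }
    MPI_Finalize();
    return 0;
}

Run with, e.g., mpicc pi_mpi.c -o pi_mpi and mpirun -np 4 ./pi_mpi.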