Parallel Computing in the Multicore Era

Parallel Computing in the Multicore Era
Graham Riley & Mikel Luján, 20th September 2017

MSc in Advanced Computer Science
Theme on “Parallel Computing in the Multicore Era”
Routine increases in clock speed for uni-core microprocessors – which drove computational performance increases for more than 40 years, doubling roughly every two years – are over. Microprocessor chips are now multi-core: each CPU core has much the same performance as a uni-core microprocessor from around 2005, and it is the number of cores that is now increasing at an exponential rate (still doubling every ~2 years). Accelerators, e.g. GPGPUs, have also become mainstream. This means that parallel computing is now everyone's concern – parallel computational activities need to be handled as the norm, rather than the exception. Mainstream programmers of the future will need skills that are currently possessed by very few (owing to the small number of, and inherent complexity of, parallel systems).
17 July 2018

Multicore “Roadmap” (year, cores per chip, feature size)
2006: 2 cores, 65 nm
2008: 4 cores, 45 nm
2010: ~8 cores, 33 nm
2012: ~16 cores, ~23 nm
2014: ~32 cores, ~16 nm  (sharing discontinuity)
2016: ~64 cores, ~12 nm
2018: ~128 cores, ~9 nm
2020: ~256 cores, ~6 nm
2022: ~512 cores, ~4 nm
2024: ~1024 cores, ~3 nm
2026: ~2048 cores, ~2 nm  (scale discontinuity?)
2028: ~4096 cores, ~1.5 nm
2030: ~8192 cores, ~1 nm
2032: ~16384 cores, <1 nm

Dell PowerEdge, quad-core

MSc in Advanced Computer Science
Theme on “Parallel Computing in the Multicore Era”
“This theme introduces students to the complexities of parallel computing by reviewing hardware developments and by providing programming techniques and tools that can alleviate the ensuing problems of correctness, reliability and performance in modern parallel systems.”
Energy use is increasingly an issue, both for low-power devices and for extreme-scale supercomputers.

Challenge and Opportunity
Parallel programming is undoubtedly more difficult than sequential programming, especially when high performance is needed; and the larger the number of processors (cores), the more difficult it becomes. So the downside is that programming is becoming progressively harder (e.g. parallel C plus CUDA/OpenCL). But this also presents an opportunity for anyone who can develop the appropriate skills – there will be no shortage of jobs!

“Parallel Computing in the Multicore Era”
The theme comprises the following course units:
COMP60611 – Period 1 – “Parallel Programs and their Performance” – Graham Riley
COMP60621 – Period 2 – “Advanced Topics in Multicore Computing” – Mikel Luján
Thursdays (commencing 28th September), room 2.15, Kilburn Building

“Parallel Computing in the Multicore Era”
COMP60611: Parallel Programs and their Performance
Explores a methodology for the development, analysis and improvement of high-performance parallel programs for the solution of predominantly numerical problems – the focus is on the performance of numerical kernels. Primary learning is via “hands-on” experience of a (small-ish) state-of-the-art multicore parallel computer: a quad 12-core (up to 48 cores in total) AMD Opteron-based server. Group-based laboratory work, including a mini-project. C-based (or Fortran-based) programming.

“Parallel Computing in the Multicore Era”
COMP60611: Parallel Programs and their Performance
Two main lessons emerge. First, parallel programming is intellectually challenging (i.e. hard!). New parallel programs frequently do not compute what the programmer was expecting; and even when they do, getting them to perform well can be challenging (and counter-intuitive). Second, it will become apparent that parallel computing is not a static discipline – things are changing all the time in hardware, software, algorithms and applications.

“Parallel Computing in the Multicore Era”
COMP60621: Advanced Topics in Multicore Computing
Studies the technological issues that will determine both future hardware architectures and the programming techniques and design methodologies needed to extract high performance from multicore processors. Reviews the limitations of current approaches and studies in detail those areas of research most likely to provide solutions. Lots of directed reading. Group-based laboratory with projects investigating specific topics via experiments. Large-scale, individually tailored mini-projects. Java-based programming.

“Parallel Computing in the Multicore Era”
Summary
The “Uni-core Era” has gone; we are now entering the “Heterogeneous Multicore Era”! Parallel computers are now mainstream, so parallel programming is pervasive. Parallel programming is inherently more difficult than sequential programming; the more parallel, the more difficult. The theme starts by studying parallel programming: what it involves, how it differs from sequential programming, and how high performance can be achieved (or why it is not). It follows this up by studying how parallel computers and software may evolve in the future to deal with important practical problems.

“Parallel Computing in the Multicore Era”
Links with other themes
Big Computations → Parallel Computing. Many other themes involve at least some big computations. “Big Data” is an emerging phenomenon that will provide many new employment opportunities, and Big Data → Big Computations. This suggests that students studying the themes on “managing data”, “learning from data” or “making sense of complex data” will be well placed to tackle future “Big Data” challenges if they also study this theme. Machine Learning (e.g. neural networks) and Computer Vision are also emerging key areas for parallel computing.