Declarative Programming Languages for Multicore Architectures Workshop 15 January 2006.


Declarative Programming Languages for Multicore Architectures Workshop 15 January 2006

Declarative Programming Languages for Multicore Architectures Welcome… …to the first workshop on Declarative Programming Languages for Multicore Architectures! Multicores are coming. We need to program them – correctly! Implicit parallelism is promising, but can it be made effective?

Declarative Programming Languages for Multicore Architectures Logistics Workshop, breakfast, and breaks here in Cypress Lunch in suite J2 – buffet style Informal style –Ask questions –Discuss –Mixture of retrospective, work in progress, and speculation on the future Name tags at the back if you don't have one No wireless

Declarative Programming Languages for Multicore Architectures Programme
8:40 Keynote: The Evolution of Computing Architectures
9:40 Break
9:55 pH: Lessons Learned
10:20 Nesl
10:45 Break
11:00 The Next Generation of Logic Languages
11:25 A Look Back and Forward at Parallel Logic Programming
11:50 Lunch
1:00 Panel
2:00 Break
2:20 Now you C it. Now you Don't.
2:45 Hume and Multicore Architectures
3:10 Automatic Parallelization and Granularity Control of Logic and Constraint Programs
3:35 Break
3:55 Design and Implementation Issues for Atomicity
4:20 Stabilizers
4:45 Nested Data Parallelism in Haskell
5:10 Wrap Up

Declarative Programming Languages for Multicore Architectures Panel: Lessons from the Past and What it Means for Multicores Questions to ponder: What worked and what did not work in the research of the 80s and 90s? What are the remaining hard problems from previous research? Which hard problems are relevant to multicores, and which are mitigated? What does the experience of the 80s and 90s suggest for multicore design/architecture?

Declarative Programming Languages for Multicore Architectures Thanks To all of you for attending To all the presenters To Anwar Ghuloum, Leaf Petersen, and Jesse Fang for helping me to organise the workshop To Intel's PSL for sponsoring the workshop To Margarida Strickland for administrative support

Declarative Programming Languages for Multicore Architectures Keynote The Evolution of Computing Architectures Douglas Carmean Senior Principal Architect Intel Corporation

Declarative Programming Languages for Multicore Architectures Panel: Lessons from the Past and What it Means for Multicores What worked and what did not work in the research of the 80s and 90s? What are the remaining hard problems from previous research? Which hard problems are relevant to multicores, and which are mitigated? What does the experience of the 80s and 90s suggest for multicore design/architecture?

Declarative Programming Languages for Multicore Architectures Summary Hardware Implicit parallelism Language design Key problems Cross fertilisation Hardware ideas Intel Next steps

Declarative Programming Languages for Multicore Architectures Hardware Summary Three types of parallelism: Vector units – floating point, but integer too SMT – multiple register sets in single core SMP – multiple cores on chip
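
Of the three, the SMP layer maps most directly onto software threads. As an illustrative sketch only (the slides give no code; `partial_sum` and `parallel_sum` are invented names, and a Python thread pool stands in for "multiple cores on chip"), SMP-style work decomposition looks roughly like this:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker sums its own slice of the data.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the data into chunks, one unit of work per worker (SMP-style
    # decomposition); the pool distributes chunks across the workers.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(list(range(1000))))  # prints 499500, same as sum(range(1000))
```

The same split-map-combine shape carries over to the vector and SMT layers; only the granularity of the "chunk" changes.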

Declarative Programming Languages for Multicore Architectures Implicit Parallelism Summary Over-promised, under-delivered Easy to generate lots of parallelism – the problem is to schedule it Cactus stacks and thunks worked well Sequential (deterministic) semantics is important – easier to debug Cost models are important Communication was a problem – not with multicores

Declarative Programming Languages for Multicore Architectures Language Design Thoughts Don't commit to specific hardware Don't base the language on the implementation Exact reduction operations are important (see Nesl) Domain-specific languages – bad idea, use libraries instead Atomic is great when you have effects

Declarative Programming Languages for Multicore Architectures More Language Design Thoughts Control of effects is very beneficial – fewer writes, smaller TM logs Three layers of language: Explicit – transactional memory, atomicity Implicit – declarative and sequential Data parallelism – especially vector
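
The "explicit" layer's atomic blocks can be made concrete with a toy sketch. This is not real transactional memory – there is no optimistic logging or retry, just a single lock modelling the *interface* of atomic blocks – and `TVar`, `atomic`, and `transfer` are invented names:

```python
import threading

class TVar:
    # A toy transactional variable: one global lock stands in for a TM
    # runtime, so "transactions" are simply serialised.
    _lock = threading.RLock()

    def __init__(self, value):
        self.value = value

def atomic(action):
    # Run `action` so that its reads and writes appear indivisible
    # to all other threads.
    with TVar._lock:
        return action()

# A classic example of "atomic for effects": transferring between accounts.
a, b = TVar(100), TVar(0)

def transfer(amount):
    def txn():
        a.value -= amount
        b.value += amount
    atomic(txn)

threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a.value, b.value)  # the invariant a + b == 100 is preserved: 0 100
```

The point of controlling effects is visible here: only the writes inside `txn` need protection, so a language that tracks effects can keep the transactional footprint (and the TM log) small.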

Declarative Programming Languages for Multicore Architectures Controversy Are annotations needed? No Minimal (and- vs or-parallelism) Strategies Manuel's ideas Message passing – good or bad? MPI bad Perhaps a good programming abstraction exists for certain applications Cost models

Declarative Programming Languages for Multicore Architectures Key Problems Memory management Concurrent GC is an important problem Data locality How to place data Shared vs non-shared memory Memory Models Cost Models Debugging Put the models together Thread packages and good low-level libraries

Declarative Programming Languages for Multicore Architectures Cross Fertilisation Or-parallelism for search What is the equivalent in FP? Nesl good at Vector, SMT, and SMP Can LP do this too? What are the similarities between the communities and can they exchange results?
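
The or-parallelism question can be made concrete with a small sketch (the toy `explore` function and its branch names are invented): race several alternative branches of a search and keep the first non-failing answer, which is roughly the behaviour an FP analogue would need to provide:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def explore(branch):
    # Hypothetical stand-in for exploring one branch of a search tree;
    # returns a solution, or None if the branch fails.
    outcomes = {"left": None, "middle": 42, "right": 42}
    return outcomes[branch]

def or_parallel(branches):
    # Explore every branch concurrently and keep the first success,
    # mirroring or-parallelism in logic programming.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(explore, b) for b in branches]
        for fut in as_completed(futures):
            result = fut.result()
            if result is not None:
                return result
    return None

print(or_parallel(["left", "middle", "right"]))  # prints 42
```

Note that a faithful analogue would also cancel the losing branches once a winner is found; speculative evaluation with cancellation is exactly where the FP equivalent gets hard.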

Declarative Programming Languages for Multicore Architectures Hardware Ideas - General General-purpose mechanisms good, language-specific ones bad Memory bandwidth is the fundamental issue Latency solved? Dataflow? Programmable hardware?

Declarative Programming Languages for Multicore Architectures Hardware Ideas - Specific Hardware support for Transactional Memory Read and/or write barriers for GC, TM, I-structures Fast context switching, lightweight spawn & scheduling No glass jaws in synchronisation instructions Non-shared memory NUMA caches Allocate in cache

Declarative Programming Languages for Multicore Architectures Intel Feel free to talk to us: Anwar, David, Leaf, Neal, and Rob PSL is working on: McRT – threading, scheduling, and synchronisation Pillar – target language (a la C--) for parallel languages Intel Research: Jekyll, Autolocker, Lockbend, …

Declarative Programming Languages for Multicore Architectures This Workshop Follow On Feel free to give me feedback at: A website will be constructed if presenters are willing to give me their slides I'll send a message when ready

Declarative Programming Languages for Multicore Architectures The Future A one-off? Repeat next year and then see? –Open? –Associated with what? Or by itself? Something else? Follow-on for the hardware ideas?