Embedded real-time systems: A multi-core salvation (Inbyggda realtidssystem: En flerkärning frälsning). Thomas Nolte, 2013-04-25.

1997–2000 undergraduate student, 2000–2006 graduate/PhD student, 2006–2009 assistant professor, 2009–2012 associate professor. ABB CRC, 2006.

Embedded systems

Embedded systems: a control program executes as a task on a CPU, reading sensors and driving actuators in a closed loop with the environment. Key properties of interest: performance and predictability.


Too EARLY inflation. Too LATE inflation. Perfect TIMING.

When software is executing, it executes as tasks:

    void foo() {
        …
        while (status) {
            in = read_sensor();
            action = take_action(in);
            …
            status = perform_action(action);
            sleep(1000);
        }
    }

Instruction and data memory (1)

Instruction and data memory (2): the task's DATA and INSTRUCTIONs (the code of void foo()) both reside in memory.

Instruction and data memory (3): cache memory.

Instruction and data memory (4): cache memory. Key properties of interest: performance and predictability.

A simple model: each task is characterized by a period and a worst-case execution time (WCET).

What to run and when: tasks are assigned priorities (low, high), and the scheduler decides which task to execute at each point in time.

Determining the response time: a running task can be preempted by a higher-priority task, later resumed, and eventually finished. A task's response time is the time from its release until it completes, including all such preemptions.

Predictability wrt embedded control systems ●Program, or task: instructions for how the computer should work, i.e., what the computer shall do ●Execution time: how long it takes for the program/task to execute ●Response time: how long it takes for the program/task to execute to completion, also when running together with other programs/tasks ●Research conducted over the past 30 years has given a pretty good understanding of how predictable embedded systems should be constructed

The multi-core arrived! ●Isn’t more better?

"The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software", by Herb Sutter.

The multi-core revolution: single core, dual core, quad core, multi-core. Parallel computer = many cores.

Multi-core memory hierarchy (1)

Multi-core memory hierarchy (2)

Challenges with many cores ●The sequential guarantee is no longer valid ●Previously, programs could appear to execute in parallel, but at any given time only one program was executing ●Now, at any given point in time, as many programs as there are cores can be executing

Performance on a parallel machine (1) ●The way we construct programs becomes different: 100 chefs do not bake 1 cake 100x faster than 1 chef

Performance on a parallel machine (2) ●The way we execute programs becomes different

Performance on a parallel machine (3) ●The way we execute programs becomes different

Performance on a parallel machine (4) ●The way we execute programs becomes different

Predictability under parallelism (1) ●New techniques are required to resolve conflicts

Predictability under parallelism (2) ●New techniques are required to resolve conflicts over shared memory

Predictability under parallelism (3) ●New techniques are required to resolve conflicts over shared memory

…and the many-core is here! ●More is better?

Epiphany™ multi-core solution

New challenges (assume 7 tasks): on a single core, task scheduling; on a dual core, task partitioning plus local task scheduling, or global task scheduling; on a many-core, task partitioning and task scheduling, with some cores possibly left empty/idle.

(tasks 0–9 placed on the many-core) ●Less shared state and data among cores ●Locality important when sharing data ●More message passing (via powerful interconnects) ●Sleep cores when idle to save power

Our efforts wrt. multi/many-core ●New ways to program and analyze: Meng Liu (PhD student), Hamid Reza Faragardi (PhD student) ●New ways to partition and design: Moris Behnam (senior lecturer), Daniel Hallmans (PhD student) ●New ways to execute: Mikael Åsberg (PhD student), Nima M. Khalilzad (PhD student) ●New ways to communicate and synchronize: Mohammad Ashjaei (PhD student), Sara Afshar (PhD student)

The Complex Real-Time Embedded (CORE) research group. Key CORE main research areas (2013):
Multi-core and many-core real-time systems
Real-time systems scheduling and synchronization
Predictable execution of real-time systems
Compositional execution and analysis of real-time systems
Simulation-based analysis of embedded systems
Stochastic and statistical analysis of real-time systems
Real-time communications
Adaptive and reconfigurable real-time systems
Members: Mikael Åsberg, Nima M. Khalilzad, Daniel Hallmans, Mohammad Ashjaei, Sara Afshar, Meng Liu, Kristian Sandström, Moris Behnam, Thomas Nolte, Hamid R. Faragardi, TBD (post-doc), Insik Shin
CORE alumni: Holger Kienle, Johan Kraft, Farhang Nemati, Yue Lu, Anders Wall

Embedded real-time systems: A multi-core salvation Thomas Nolte,