Actor Model, Software Transactional Memory, Data Flow Programming

Presentation transcript:

• Actor Model
• Software Transactional Memory
• Data Flow Programming

• Actor: an autonomous, concurrent entity that communicates by sending "messages"
• In response to a message, an actor may:
  › send a finite set of messages to known actors
  › create a finite set of new actors
  › define how it will respond to future messages
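To make the message loop concrete, here is a minimal Haskell sketch of an actor, not tied to any particular framework: one lightweight thread owns a mailbox (a Chan) and reacts to each message it receives. The Msg type, the greeting behaviour, and the final threadDelay are assumptions made only for this illustration.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)

    -- Hypothetical message type, just for this sketch.
    data Msg = Greet String | Stop

    -- A minimal "actor": a thread that owns a mailbox and handles one message at a time.
    actor :: Chan Msg -> IO ()
    actor mailbox = loop
      where
        loop = do
          msg <- readChan mailbox
          case msg of
            Greet name -> putStrLn ("hello, " ++ name) >> loop
            Stop       -> putStrLn "actor stopping"

    main :: IO ()
    main = do
      mailbox <- newChan
      _ <- forkIO (actor mailbox)        -- spawn the actor
      writeChan mailbox (Greet "world")  -- asynchronous sends into its mailbox
      writeChan mailbox Stop
      threadDelay 100000                 -- crude: give the actor time to drain its mailbox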

• Translates sequential object implementations into concurrent, non-blocking ones
• Transaction: a finite sequence of local and shared-memory machine instructions
• The illusion: isolation and atomicity
• A shared object plays the role of an STM
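Haskell's Control.Concurrent.STM (the STM referred to on the GPars/Haskell slide below) gives a compact illustration of that illusion: the transfer below reads and writes two shared TVars, yet no other thread can ever observe the intermediate state. The account names and amounts are made up for the sketch.

    import Control.Concurrent.STM (TVar, atomically, newTVarIO, readTVar, writeTVar)

    -- Move an amount between two shared accounts; the block runs as one atomic,
    -- isolated transaction and is retried automatically on conflict.
    transfer :: TVar Int -> TVar Int -> Int -> IO ()
    transfer from to amount = atomically $ do
      a <- readTVar from
      b <- readTVar to
      writeTVar from (a - amount)
      writeTVar to   (b + amount)

    main :: IO ()
    main = do
      acctA <- newTVarIO 100
      acctB <- newTVarIO 0
      transfer acctA acctB 30
      balances <- atomically ((,) <$> readTVar acctA <*> readTVar acctB)
      print balances   -- (70,30)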

• Divide, conquer, and merge the data being processed
• MapReduce:
  › Map: map incoming data to intermediate results
  › Reduce: merge intermediate results into final results
  › Specialized file system
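A toy word-count sketch of the two phases in plain Haskell, with no distributed runtime or specialized file system behind it: the map phase emits (word, 1) pairs and the reduce phase merges them by key. The input documents are invented for the example.

    import qualified Data.Map.Strict as Map

    -- Map phase: turn one document into intermediate (word, count) pairs.
    mapPhase :: String -> [(String, Int)]
    mapPhase doc = [(w, 1) | w <- words doc]

    -- Reduce phase: merge all intermediate pairs that share a key.
    reducePhase :: [(String, Int)] -> Map.Map String Int
    reducePhase = Map.fromListWith (+)

    main :: IO ()
    main = do
      let docs = ["to be or not to be", "be happy"]   -- made-up input documents
      print (reducePhase (concatMap mapPhase docs))
      -- fromList [("be",3),("happy",1),("not",1),("or",1),("to",2)]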

Relevant: Kilim, Clojure

Similar: Erlang, ActorFoundry, Jetlang

STM on Groovy/Java

Map/Reduce

GPars:
• Based on Groovy and Java
• Parallel Collection Functions
• Data Flows and MapReduce
• Actors

Haskell:
• forkIO and MVars
• STM
• Foreign Function Interface
• Nested Data Parallelism
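As a small illustration of the "forkIO and MVars" entry in the Haskell column: a forked thread hands its result back to the main thread through an MVar. The summed range is an arbitrary stand-in for real work.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
      result <- newEmptyMVar
      -- Worker thread: compute something and put the answer into the MVar.
      _ <- forkIO (putMVar result (sum [1 .. 1000000 :: Int]))
      -- The main thread blocks on takeMVar until the worker has delivered.
      total <- takeMVar result
      print total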

Akka:
• Actors and Remote Actors
• STM
• Transactors: Transactional Actors

MPI:
• Message Passing Interface
• Distributed Memory
• C, C++, Fortran, Boost Library