High Performance Embedded Computing © 2007 Elsevier
Lecture 4: Models of Computation
Embedded Computing Systems
Mikko Lipasti, adapted from M. Schulte
Based on slides and textbook from Wayne Wolf

Topics
- Why models of computation?
- Structural models.
- Finite-state machines.
- Turing machines.
- Petri nets.
- Control flow graphs.
- Data flow models.
- Task graphs.
- Control flow models.

Models of computation
- Models of computation:
  - Define the basic capabilities of an abstract computer.
  - Affect programming style.
- No one model of computation is good for all algorithms.
- Systems that use multiple programming languages are called heterogeneously programmed.

Aspects of models of computation
- Large systems may require different models of computation for different parts.
  - The models must communicate with one another.
- Models of computation can be distinguished by:
  - Finite state vs. infinite state.
  - Control vs. data.
  - Sequential vs. parallel.
- Models are introduced now and covered in more detail later.

Processor graph
[Figure: processors M1–M4 connected by links L1–L3]

Finite state machines
- State transition graph and table are equivalent (a dictionary-based sketch follows).
[Figure: a three-state transition graph (s1, s2, s3) with edges labeled input/output, and the equivalent transition table with columns I (input), S (state), Δ (next state), O (output)]
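
A transition table like the one in the figure maps directly onto a dictionary. A minimal Mealy-style sketch in Python; the table below is illustrative, since the exact transitions in the figure are not recoverable here:

```python
# A minimal Mealy FSM: each (state, input) pair maps to (next_state, output).
# This table is made up for illustration, not the exact machine in the figure.
TRANS = {
    ("s1", 0): ("s2", 0),
    ("s1", 1): ("s1", 0),
    ("s2", 0): ("s2", 1),
    ("s2", 1): ("s3", 0),
    ("s3", 0): ("s1", 1),
    ("s3", 1): ("s1", 0),
}

def run_fsm(inputs, state="s1"):
    """Consume an input sequence, yielding one output per input."""
    for i in inputs:
        state, out = TRANS[(state, i)]
        yield out

print(list(run_fsm([0, 1, 0, 0, 1])))   # [0, 0, 1, 0, 0]
```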

Nondeterministic FSM
- Several transitions out of a state for a given input.
- Equivalent to executing all alternatives in parallel.
  - Future inputs may cause some paths to be pruned.
- Can allow ε moves: the machine goes to the next state without consuming input.
- Also known as a nondeterministic finite automaton (NFA).
[Figure: states s1 and s2, with two transitions labeled a leaving s1]

Deterministic FSM from NFA
- Any NFA can be transformed into an equivalent deterministic FSM.
  - Add states for the various combinations of nondeterminism (sketched below).
- So why use NFAs?
[Figure: nondeterministic machine with two a-transitions from s1 into s1 and s2, then s2 --b--> s3 and s2 --c--> s4; the deterministic equivalent merges them into a state s12 reached on a, with b to s3 and c to s4]
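
One way to see the transformation: each deterministic state is a *set* of NFA states. A minimal subset-construction sketch; the transition relation below is hypothetical (and has no ε moves, so no closure step is needed):

```python
# Subset construction. nfa[(state, symbol)] is the set of possible successors;
# the two entries for "a" from s1 below would be the nondeterministic choice.
nfa = {
    ("s1", "a"): {"s1", "s2"},
    ("s2", "b"): {"s3"},
    ("s2", "c"): {"s4"},
}

def subset_construction(start):
    """Build DFA transitions; each DFA state is a frozenset of NFA states."""
    start_set = frozenset([start])
    dfa, seen, worklist = {}, {start_set}, [start_set]
    symbols = {sym for (_, sym) in nfa}
    while worklist:
        current = worklist.pop()
        for sym in symbols:
            nxt = frozenset().union(*(nfa.get((q, sym), set()) for q in current))
            if nxt:
                dfa[(current, sym)] = nxt
                if nxt not in seen:
                    seen.add(nxt)
                    worklist.append(nxt)
    return dfa

for (src, sym), dst in subset_construction("s1").items():
    print(set(src), f"--{sym}-->", set(dst))   # {'s1','s2'} plays the role of s12
```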

Turing machine
- A general model of computing.
[Figure: a tape of binary cells (e.g., 1 0 1 0 1 0 1 0 1 1 0 1 1 0) with a read/write head, controlled by a program and a machine state]

Turing machine step
1. Read the current square.
2. Erase the current square.
3. Take a state-dependent action:
   a. Print a new value.
   b. Move to an adjacent cell.
   c. Set the machine to its next state.

Turing machine properties
- Example program (simulated in the sketch below):
  - If (state = 2 and cell = 0): print 0, move left, state = 4.
  - If (state = 2 and cell = 1): print 1, move left, state = 3.
- Can be implemented on many physical devices.
- The Turing machine is a general model of computability.
- Can be extended to probabilistic behavior.
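
The step loop above is small enough to sketch directly. A toy simulator: the first two rules are the ones from the slide, and the remaining rules are invented so the run has somewhere to go before halting:

```python
from collections import defaultdict

# program[(state, symbol)] = (symbol_to_print, head_move, next_state).
# Rules for state 2 follow the slide; the rules for states 3 and 4 are made up.
program = {
    (2, 0): (0, -1, 4),   # state 2, cell 0: print 0, move left, state 4
    (2, 1): (1, -1, 3),   # state 2, cell 1: print 1, move left, state 3
    (3, 1): (0, -1, 2),
    (4, 0): (1, -1, 2),
}

def run(tape, head=0, state=2, max_steps=100):
    cells = defaultdict(int, enumerate(tape))    # unbounded tape, blank = 0
    for _ in range(max_steps):
        key = (state, cells[head])
        if key not in program:                   # no applicable rule: halt
            break
        cells[head], move, state = program[key]  # print, then move and switch
        head += move
    return [cells[i] for i in sorted(cells)]

print(run([1, 0, 1]))   # [0, 1, 0, 1]: one rule fires, then state 3 on a 0 halts
```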

Turing machine properties
- Infinite tape = infinite-state machine.
- The basic model of computability.
  - The lambda calculus is an alternative model.
- Other models of computing can be shown to be equivalent to, or a proper subset of, the Turing machine.
- What advantages and disadvantages do Turing machines have compared to finite state machines?

Control flow graph (CFG)
- Commonly used to model program structure (see the sketch below).
- Nodes are unconditionally or conditionally executed.
- Is this a finite-state or infinite-state model of computation?
[Figure: CFG with nodes x = a - b, y = c + d, a conditional test i = 0?, and x = a]
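
A CFG is easy to represent as an adjacency structure over nodes. A sketch whose labels loosely follow the figure (the edge targets are an assumption, since the figure's edges are not recoverable here):

```python
# A CFG as a dict from node to its successors; the conditional node has two
# out-edges. Node names and edges loosely follow the figure.
cfg = {
    "x = a - b": ["y = c + d"],
    "y = c + d": ["i == 0?"],
    "i == 0?":   ["x = a", "exit"],   # taken / not-taken edges
    "x = a":     ["exit"],
    "exit":      [],
}

# The node set is fixed, so the control structure itself is finite-state,
# even though the data values the program manipulates may not be.
print(cfg["i == 0?"])
```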

Data flow graph (DFG)
- Models partially ordered computations.
[Figure: DFG with +, -, and * nodes; several total orderings, such as (+, -, *) and (-, +, *), are consistent with the partial order]

Data flow streams
- Captures sequence but not time.
- A stream is a totally ordered set of values.
  - New values are appended at the end as they appear.
  - May be infinite.
[Figure: an adder consuming an input stream (88, -23, 7, 44, 9) and producing an output stream]

Firing rules
- A node may have one or more firing rules.
- Firing rules determine when tokens are consumed and produced.
  - Firing consumes a set of tokens at the inputs and generates a token at the output.

Example firing rules
- Basic rule: fires when tokens are available at all inputs.
- Conditional firing rule: depends on a control input (both rules are sketched below).
[Figure: an adder with inputs a, b and output c; a conditional node whose control token T selects between inputs a and b]
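
Both rules can be sketched as functions over token queues. Plain Python lists stand in for FIFOs; the conditional node below is a made-up select in the style of the figure:

```python
def fire_add(a, b, c):
    """Basic rule: fire only when a token is available on every input."""
    if a and b:
        c.append(a.pop(0) + b.pop(0))
        return True
    return False

def fire_select(a, b, ctrl, out):
    """Conditional rule: the control token chooses which input to consume."""
    if ctrl and (a if ctrl[0] else b):
        take_a = ctrl.pop(0)
        out.append(a.pop(0) if take_a else b.pop(0))
        return True
    return False

a, b, c = [1, 2], [10], []
while fire_add(a, b, c):
    pass
print(c)   # [11] -- the second token in a waits for a matching token on b
```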

Data flow graph properties
- Finite-state model.
- The basic data flow graph is acyclic.
- Scheduling provides a total ordering of operations.

Synchronous data flow (SDF)
- Lee/Messerschmitt: relate data flow graph properties to schedulability.
- Synchronous communication between data flow nodes.
- Nodes may communicate at multiple rates.

SDF notation
- Nodes may have rates at which data are produced or consumed.
- Edges may have delays.
[Figure: + and - nodes connected by an edge annotated with production/consumption rates (e.g., 1 and 2) and a delay (e.g., 5)]

SDF example
- What are the input and output rates of each node? (A balance-equation sketch follows.)
[Figure: three adders connected with rate annotations such as 2, 1, and 1, producing separate outputs]
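
Rate consistency in SDF is checkable before execution: for each edge from u to v producing p tokens and consuming c tokens per firing, the balance equation p·r[u] = c·r[v] must hold, and a schedulable graph has a nonzero integer solution r (the repetition vector). A sketch over a hypothetical two-edge graph; it assumes the graph is connected:

```python
from fractions import Fraction
from math import lcm

# Hypothetical SDF graph: (producer, production rate, consumer, consumption rate).
edges = [("A", 2, "B", 1), ("B", 1, "C", 2)]

def repetition_vector(edges):
    """Solve the balance equations p*r[u] == c*r[v] by propagating ratios."""
    r = {edges[0][0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for u, p, v, c in edges:
            if u in r and v not in r:
                r[v] = r[u] * p / c
                changed = True
            elif v in r and u not in r:
                r[u] = r[v] * c / p
                changed = True
            elif u in r and v in r and r[u] * p != r[v] * c:
                raise ValueError("inconsistent rates: no valid schedule")
    scale = lcm(*(f.denominator for f in r.values()))   # smallest integer solution
    return {n: int(f * scale) for n, f in r.items()}

print(repetition_vector(edges))   # {'A': 1, 'B': 2, 'C': 1}
```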

Delays in SDF graphs
- Delays do not change rates, only the amount of data stored in the system.
- Delays change system start-up behavior.
[Figure: the +/- graph from the SDF notation slide with an initial delay token on the edge]

Kahn process network
- A process has an unbounded FIFO at each input.
- Each channel carries a possibly infinite sequence or stream.
- A process maps one or more input sequences to one or more output sequences (a generator-based sketch follows).
[Figure: processes connected by channels]
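
Python generators give a rough feel for Kahn semantics: a process performs blocking reads on its inputs in a fixed order and can never test a channel for emptiness. A sketch; the adder process and its input values are made up, and generators only approximate unbounded FIFOs:

```python
def source(values):
    """A process with no inputs: emits a fixed stream of tokens."""
    yield from values

def adder(xs, ys):
    """Reads one token from each input channel in turn, emits their sum.
    The read blocks (the generator suspends) until the token is available."""
    for x, y in zip(xs, ys):
        yield x + y

out = adder(source([88, -23, 7]), source([1, 2, 3]))
print(list(out))   # [89, -21, 10]
```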

Monotonicity of Kahn processes
- If F(X) is the output of a process for input stream X, the process is monotonic if:
  X ⊑ X′ ⇒ F(X) ⊑ F(X′)
  where ⊑ is the prefix order on streams: extending the input can only extend the output.

Networks of processes
- A network of processes relates the streams of several processes.
- If I = input sequences and X = internal sequences plus outputs, then the network's behavior is the fixed point of X = F(X, I).
- A network of monotonic processes is itself a monotonic process, even in the presence of feedback loops.

Task graph
- Used to model parallelism.
[Figure: two task graphs, τ1 and τ2, over processes P1–P5 with dependency edges]

Task graph properties
- Not a Turing machine: no branching behavior.
  - May be extended to provide conditionals, but the result is still finite-state.
- Possible models of execution time (see the sketch below):
  - Constant.
  - Min-max bounds.
  - Statistical.
- Can model late arrivals and early departures by adding dummy processes.
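
Under the constant execution-time model, a longest-path computation over a topological order gives each task's earliest finish time. A sketch; the dependency graph and times below are hypothetical:

```python
import graphlib  # Python 3.9+

# deps maps each task to its predecessors; exec_time is the constant model.
deps = {"P2": {"P1"}, "P3": {"P1"}, "P4": {"P2", "P3"}, "P5": set()}
exec_time = {"P1": 3, "P2": 2, "P3": 4, "P4": 1, "P5": 2}

finish = {}
for task in graphlib.TopologicalSorter(deps).static_order():
    # A task starts once all predecessors have finished.
    start = max((finish[d] for d in deps.get(task, ())), default=0)
    finish[task] = start + exec_time[task]

print(finish)   # e.g. {'P1': 3, 'P5': 2, 'P2': 5, 'P3': 7, 'P4': 8}
```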

Petri net
- A parallel model of computation.
[Figure: places (circles) and transitions (bars) connected by arcs, with tokens in the places]

Firing rule
- A transition is enabled if each place at its inputs has at least one token.
- An enabled transition does not have to fire right away.
- Firing a transition removes tokens from its input places and adds a token to each output place.
- In general, enabling may require multiple tokens at a place (see the sketch below).
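
The rule maps directly onto a marking vector. A sketch with a made-up one-transition net; multi-token enabling falls out of the per-place counts:

```python
# A marking maps places to token counts; a transition lists how many tokens
# it needs and produces per place. Place/transition names are hypothetical.
marking = {"p1": 1, "p2": 2, "p3": 0}
transition = {"inputs": {"p1": 1, "p2": 1}, "outputs": {"p3": 1}}

def enabled(marking, t):
    """True if every input place holds at least the required tokens."""
    return all(marking[p] >= n for p, n in t["inputs"].items())

def fire(marking, t):
    """Remove input tokens, add output tokens; caller checks enabled() first."""
    for p, n in t["inputs"].items():
        marking[p] -= n
    for p, n in t["outputs"].items():
        marking[p] += n

if enabled(marking, transition):   # an enabled transition need not fire at once
    fire(marking, transition)
print(marking)                     # {'p1': 0, 'p2': 1, 'p3': 1}
```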

Properties of Petri nets
- Turing complete when extended (e.g., with inhibitor arcs).
- Arbitrary number of tokens.
- Nondeterministic behavior.
- Naturally model parallelism.

Communication models
- Buffered communication: assumes memory is available to store a value if the receiver is not ready to receive it.
- Unbuffered communication: assumes no memory between the sender and receiver.
- Blocking communication: the sender waits until the receiver has the data.
- Nonblocking communication: does not require the sender to wait until the receiver has the data.
(A queue-based sketch of these options follows.)
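
queue.Queue can illustrate three of the four options; a truly unbuffered rendezvous has no direct queue.Queue analogue, so it is omitted here. An unbounded queue models buffered communication, put() on a full bounded queue models a blocking send, and put_nowait() models a nonblocking send that fails rather than waits:

```python
import queue

buffered = queue.Queue()          # unbounded buffer: a sender never waits
bounded = queue.Queue(maxsize=1)  # once full, a blocking sender must wait

bounded.put("x")                  # blocking send: would wait here if full
try:
    bounded.put_nowait("y")       # nonblocking send: raises instead of waiting
except queue.Full:
    print("receiver not ready; sender continues without waiting")
```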

Sources of parallelism
- Instruction-level parallelism: from executing independent instructions in parallel.
- Data-level parallelism: from executing the same operation on multiple pieces of data in parallel.
- Task-level parallelism: from performing multiple tasks in parallel.
- What mechanisms are used to exploit each type of parallelism?