Complex Systems Engineering SwE 488 Artificial Complex Systems Prof. Dr. Mohamed Batouche Department of Software Engineering CCIS – King Saud University.


1 Complex Systems Engineering SwE 488 Artificial Complex Systems Prof. Dr. Mohamed Batouche Department of Software Engineering CCIS – King Saud University Riyadh, Kingdom of Saudi Arabia batouche@ccis.edu.sa

2 Outline Introduction Cellular Automata Artificial Neural Networks Swarm Intelligence Evolutionary Computing Quantum Computing DNA/Molecular Computing Artificial Life Artificial Immune System Conclusion 2

3 Introduction 3

4 Complex systems The fundamental characteristic of a complex system is that it exhibits emergent properties: local interaction rules between simple agents give rise to complex patterns and global behavior

5 Complex Systems Complex systems are systems that exhibit emergent behavior: anthills, human societies, financial markets, climate, nervous systems, immune systems, cities, galaxies, modern telecommunication infrastructures

6 Characteristics of a complex system A complex system displays some or all of the following characteristics: Agent-based: the basic building blocks are the characteristics and activities of individual agents. Heterogeneous: the agents differ in important characteristics. Dynamic: characteristics change over time, usually in a nonlinear way (adaptation). Feedback: changes are often the result of feedback from the environment. Organization: agents are organized into groups or hierarchies. Emergence: macro-level behaviors emerge from agent actions and interactions.

7 Attributes of a complex system: interdependent, independent, distributed, cooperative, competitive, adaptive

8 8 Complex Systems perspectives Complex Systems as a Science to Understand Nature Complex Systems as a New Form of Engineering

9 Complex System Engineering How can we understand and make use of these emergent phenomena to develop new ways of generating computer programs? Can we build self-adapting, self-organizing, and evolving computer systems and programs? 9 See Demo – Video 3D Creatures

10 Complex System Engineering Most biological systems do not forecast or schedule. They respond to their environment: quickly, robustly, and adaptively. As engineers, let us not try to control the system… Design the system so that it controls and adapts itself to the environment

11 Artificial Complex Systems Cellular Automata Artificial Neural Networks Swarm Intelligence Evolutionary Computing Quantum Computing DNA/Molecular Computing Artificial Life Artificial Immune System 11

12 Cellular Automata 12

13 Purpose In theory: computation of all computable functions; construction of (also non-homogeneous) automata by other automata, the offspring being at least as powerful (in some well-defined sense) as the parent. In practice: exploring how complex systems with emergent patterns seem to evolve from purely local interactions of agents, i.e., without a “master plan!”

14 Cellular Automata A cellular automaton is a family of simple, finite-state machines that exhibit interesting, emergent behaviors through their interactions in a population

15 The famous BOIDS model shows how flocking behavior can emerge from a collection of agents following a few simple rules. Emergent Behavior 15

16 The concept of CA is most strongly associated with John von Neumann, who was interested in the connections between biology and the new study of automata theory. Stanislaw Ulam suggested to von Neumann the use of cellular automata as a framework for researching these connections. The original concept of CA can thus be credited to Ulam, while the early development of the concept is credited to von Neumann. Ironically, although von Neumann made many contributions and developments in CA, they are commonly referred to as “non-von Neumann style”, while the standard model of computation (CPU, globally addressable memory, serial processing) is known as “von Neumann style”.

17 Cellular Automata (CAs) Have been used as: massively parallel computer architectures; models of physical phenomena (Fredkin, Wolfram); VLSI testing; data encryption; error-correcting codes; testable synthesis; generation of hashing functions. Currently being investigated as a model of quantum computation (QCAs)

18 Grid: a mesh of cells; the simplest mesh is one-dimensional. Cell: the basic element of a CA; cells can be thought of as memory elements that store state information. Rules: all cells are updated synchronously according to the transition rules.

19 Local interaction leads to global dynamics. One can think of the behavior of a cellular automaton like that of a “wave” at a sports event: each person reacts to the state of his neighbors (if they stand, he stands).

20 Rule Application The next state of the core cell depends on the states of the neighboring cells and its current state. An example rule for a one-dimensional CA: 011 -> x0x, i.e., a core cell whose neighborhood is 011 becomes 0. All possible neighborhoods must be described. Alternatively (a totalistic rule), the next state of the core cell depends only on the sum of the states of the neighboring cells: for example, if the sum of the adjacent cells is 4, the state of the core cell is 1; in all other cases the state of the core cell is 0.
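A single synchronous update of this kind can be sketched in Python (an illustrative sketch, not from the slides; the wrap-around boundary and the particular rule table are assumptions):

```python
# One update step of a 1D binary CA. rule_table maps each 3-cell
# neighborhood (left, core, right) to the core cell's next state;
# the lattice wraps around at the edges.
def step(cells, rule_table):
    n = len(cells)
    return [rule_table[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# A full rule must cover all 8 neighborhoods; here (0,1,1) -> 0 matches
# the slide's "011 -> x0x" example, the other entries are arbitrary choices.
rule = {(0,0,0): 0, (0,0,1): 1, (0,1,0): 1, (0,1,1): 0,
        (1,0,0): 1, (1,0,1): 0, (1,1,0): 0, (1,1,1): 0}
print(step([0, 1, 1, 0, 1], rule))   # -> [0, 0, 0, 0, 1]
```

Because every cell is recomputed from the same snapshot of the lattice, the update is synchronous, as the slides require.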

21 Structure Discrete space (lattice) of regular cells: 1D, 2D, 3D, …; rectangular, hexagonal, …. Time advances in discrete steps: at each unit of time a cell changes state in response to its own previous state and the states of neighbors (within some “radius”). All cells obey the same state update rule (an FSA), depending only on local relations. Synchronous updating (parallel processing)

22 Structure: Neighborhoods 22

23 1-DIMENSIONAL AUTOMATA 23

24 One-Dimensional CAs Game of Life is 2-D; many simpler 1-D CAs have been studied. For a given rule-set and a given starting setup, the deterministic evolution of a two-state (on/off) CA can be pictured as successive lines of colored squares, each line drawn under the previous one

25 Neighborhoods 25

26 3 Black = White 2 Black = Black 1 Black = Black 3 White = White Now make your own CA 26

27 “A New Kind of Science” Stephen Wolfram ISBN 1-57955-008-8 www.wolframscience.com

28 1-D CA Example 28 Rules 0 0 1 1 0 1 1 0 Rule# = 2 + 4 + 16 + 32 = 54

29 Wolfram Model 0 1 1 1 1 1 1 0 Rule #126 = 0 + 64 + 32 + 16 + 8 + 4 + 2 + 0 = 126 Most of the rules are degenerate, meaning they create repetitive patterns of no interest. However, there are a few rules which produce surprisingly complex patterns that do not repeat themselves. 0 1 1 1 1 1 0 0 Rule #124 = 0 + 64 + 32 + 16 + 8 + 4 + 0 + 0 = 124

30 Wolfram Model We can view the state of the model at any time in the future, as long as we step through all the previous states.

31 One hundred generations of Rule 30

32 CA Example: Rule 30 111 110 101 100 011 010 001 000 0 0 0 1 1 1 1 0 Rule (0 + 0 + 0 + 16 + 8 + 4 + 2 +0 ) = 30 32

33 Conus Textile pattern 33

34 The pattern is neither regular nor completely random. It appears to have some order, but is never predictable. 34

35 See Demo - NetLogo. Rule #45 = 32 + 8 + 4 + 1 = 0·2^7 + 0·2^6 + 1·2^5 + 0·2^4 + 1·2^3 + 1·2^2 + 0·2^1 + 1·2^0 = 00101101. Rule #30 = 16 + 8 + 4 + 2 = 0·2^7 + 0·2^6 + 0·2^5 + 1·2^4 + 1·2^3 + 1·2^2 + 1·2^1 + 0·2^0 = 00011110. This naming convention for the 256 distinct update rules is due to Stephen Wolfram. He is one of the pioneers of cellular automata and author of the book A New Kind of Science, which argues that discoveries about cellular automata are not isolated facts but have significance for all disciplines of science. Wolfram Model
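Wolfram's numbering scheme can be sketched in Python (an illustrative helper, not from the slides): the 8 binary digits of the rule number give the next state for neighborhoods 111, 110, …, 000, most significant bit first.

```python
# Decode a Wolfram rule number (0..255) into its neighborhood lookup table.
def wolfram_table(rule_number):
    bits = f"{rule_number:08b}"              # e.g. 30 -> '00011110'
    neighborhoods = [(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return {n: int(bit) for n, bit in zip(neighborhoods, bits)}

# Rule 30 sets exactly the neighborhoods 100, 011, 010, 001 to 1
# (16 + 8 + 4 + 2 = 30), matching the slides.
table = wolfram_table(30)
assert [table[n] for n in [(1,1,1), (1,1,0), (1,0,1), (1,0,0),
                           (0,1,1), (0,1,0), (0,0,1), (0,0,0)]] == [0, 0, 0, 1, 1, 1, 1, 0]
```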

36 0 1 0 1 1 0 1 0 Rule (0 + 64 + 0 + 16 + 8 + 0 + 2 + 0) = 90 Wolfram Rule 90

37 Wolfram Rule 110 Proven to be Turing complete, i.e., rich enough for universal computation. An interesting result, because Rule 110 is an extremely simple system, simple enough to suggest that naturally occurring physical systems may also be capable of universality

38 Wolfram Rule 99 Rule #99 = 0·2^7 + 1·2^6 + 1·2^5 + 0·2^4 + 0·2^3 + 0·2^2 + 1·2^1 + 1·2^0 = 0 + 64 + 32 + 0 + 0 + 0 + 2 + 1


40 Mollusc Pigmentation Patterns 40

41 Wolfram’s CA classes 1, 2 From observation, initially of 1-D CA spacetime patterns, Wolfram noticed 4 different classes of rule-sets. Any particular rule-set falls into one of these: CLASS 1: From any starting setup, the pattern converges to all blank -- fixed attractor. CLASS 2: From any start, goes to a limit cycle, repeating the same sequence of patterns for ever -- cyclic attractors.

42 Wolfram’s CA classes 3, 4 CLASS 3: Turbulent mess, chaos, no patterns to be seen. CLASS 4: From any start, patterns emerge and continue without repetition for a very long time (could only be 'forever' on an infinite grid). Classes 1 and 2 are boring, Class 3 is messy; Class 4 is 'at the edge of chaos', at the transition between order and chaos -- which is where the Game of Life is!

43 2-DIMENSIONAL AUTOMATA 43

44 A 2-dimensional cellular automaton consists of an infinite (or finite) grid of cells, each in one of a finite number of states. Time is discrete, and the state of a cell at time t is a function of the states of its neighbors at time t-1.

45 Neighborhoods: von Neumann, Moore, Margolus

46 Uses: cryptography (Rule 30); simulations: gas behaviour, forest fire propagation, urban development, traffic flow, air flow, crystallization processes; an alternative to differential equations

47 47 Snowflakes

48 Bead-sort Bead-Sort is a method of ordering a set of positive integers by mimicking the natural process of objects falling to the ground, as beads on an abacus slide down vertical rods. The number of beads on each horizontal row represents one of the numbers of the set to be sorted, and it is clear that the final state will represent the sorted set. [10 8 6 2 4 12] → [2 4 6 8 10 12]

49 Bead-sort 49

50 Bead-sort - CA implementation [10 8 6 2 4 12] → [2 4 6 8 10 12] See Demo - NetLogo: the natural process of objects falling to the ground
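Outside the CA framework, the falling-beads process can be sketched directly in Python (an illustrative model of the physical analogy, not the slides' CA rule):

```python
# Bead sort for positive integers: drop each number as a row of beads onto
# vertical rods, then read the settled rows back out.
def bead_sort(nums):
    if not nums:
        return []
    rods = [0] * max(nums)
    for n in nums:                 # each of the n beads falls onto one rod
        for i in range(n):
            rods[i] += 1
    # After settling, row k (counting from the bottom) has one bead on every
    # rod holding more than k beads, so row lengths come out descending.
    rows = [sum(1 for r in rods if r > k) for k in range(len(nums))]
    return rows[::-1]              # ascending order

print(bead_sort([10, 8, 6, 2, 4, 12]))   # -> [2, 4, 6, 8, 10, 12]
```

Like the abacus picture, this only works for positive integers; the "antigravity" extension on the next slide is what admits negative values.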

51 Bead-sort Extended CA implementation The Bead-Sort can be modeled by a two-dimensional cellular automaton (CA) rule. For the Antigravity Bead-Sort, a second rule is added.

52 Bead-sort Extended 52 The "extended" (anti-gravity) mode allows the inclusion of all integers, with "negative beads" rising while "positive beads" fall.

53 Example: Conway’s Game of Life Invented by Conway in the late 1960s. A simple CA capable of universal computation. Structure: 2D space, rectangular lattice of cells, binary states (alive/dead), neighborhood of the 8 surrounding cells (& self), simple population-oriented rule

54 54 Example: Conway’s Game of Life Cell State = dead/off/0 State = alive/on/1

55 A cell dies or lives according to some transition rule. The world is round (wraps around the edges). How many rules for Life? 20, 40, 100, 1000? T = 0 → T = 1 via the transition rules. Example: Conway’s Game of Life

56 56 State Transition Rule Live cell has 2 or 3 live neighbors  stays as is (stasis) Live cell has < 2 live neighbors  dies (loneliness) Live cell has > 3 live neighbors  dies (overcrowding) Empty cell has 3 live neighbors  comes to life (reproduction)

57 57 State Transition Rule Survive with 2 or 3 live neighbors Live cell  stays as is (stasis) otherwise dies from loneliness or overcrowding Generate with 3 live neighbors Empty cell  comes to life (reproduction)

58 State Transition Rule Three simple rules: a cell dies if the number of live neighbor cells <= 1 (loneliness); a cell dies if the number of live neighbor cells >= 4 (overcrowding); a dead cell comes to life if the number of live neighbor cells = 3 (procreation)

59 Examples of the rules: loneliness (dies if #alive <= 1), overcrowding (dies if #alive >= 4), procreation (lives if #alive = 3). State Transition Rule
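The rules above can be sketched as one synchronous update step (an illustrative Python sketch; the wrapping grid follows the earlier "the world is round" slide):

```python
# One Game of Life step on a small wrapping grid: live cells survive with
# 2 or 3 live neighbors, dead cells are born with exactly 3.
def life_step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
             for c in range(cols)] for r in range(rows)]

# The blinker (a later slide) oscillates with period 2:
blinker = [[0,0,0,0,0],
           [0,0,1,0,0],
           [0,0,1,0,0],
           [0,0,1,0,0],
           [0,0,0,0,0]]
once = life_step(blinker)
assert once != blinker and life_step(once) == blinker
```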

60 CA: Discrete Time, Discrete Space Initial Setup Number of Neighbors After Pass 1 After Pass 2 60

61 T = 0 neighboring values T = 1 Game of Life: 2D Cellular Automata using simple rules Emergent pattern: Blinker 61

62 Emergent patterns The Conway automaton can simulate a number of different effects that can be found in the evolution of a living population: equilibria, oscillation, movement. Examples: square: 2 steps, diagonal; beehive: 2 steps, horizontal; boat: 3 steps, instability (all the space is filled up by horizontal lines); ship: 15 steps, instability; toast: chaos?

63 Game of Life: emergent patterns gliders: patterns that move constantly across the grid. Conway’s Rules: Game of Life: survive with 2 – 3 living neighbors; generate with 3 living neighbors. Gosper’s glider gun: emits a glider stream

64 Emergent Patterns 64

65 Emergent Patterns Oscillators: objects that change from step to step but eventually repeat themselves. These include, but are not limited to, period-2 oscillators such as the blinker and the toad.

66 Emergent Patterns: A Clock 66 See Demo: Game of Life

67 Emergent Patterns: 67 See Demo: Game of Life Oscillator Barber’s Pole Pulsar SpaceShip Beacon Glider

68 Emergent Patterns: Gosper’s Glider Gun 68 See Demo: Game of Life

69 Emergent Patterns: Puffer train 69

70 Emergent Patterns: Double-Barreled Gun 70

71 Emergent Patterns: Edge Shooter 71

72 Emergent Patterns: Evolution of a breeder... 72

73 Game of Life - implications Typical Artificial Life, or Non-Symbolic AI, computational paradigm: bottom-up parallel locally-determined Complex behavior from (... emergent from...) simple rules. Gliders, blocks, traffic lights, blinkers, glider-guns, eaters, puffer-trains... 73

74 Game of Life as a Computer? Higher-level units in the Game of Life can in principle be assembled into complex 'machines' -- even into a full computer, or Universal Turing Machine. 'Computer memory' is held as 'bits' denoted by 'blocks' laid out in a row stretching out as a potentially infinite 'tape'. Bits can be turned on/off by well-aimed gliders.

75 Game of Life as a Computer ? Bits can be turned on/off by well-aimed gliders: 75

76 This is a Turing Machine implemented in Conway's Game of Life. http://rendell-attic.org/gol/tm.htm 76

77 This is a Universal Turing Machine (UTM) implemented in Conway's Game of Life. Designed by Paul Rendell 10/February/2010. http://rendell-attic.org/gol/utm/index.htm 77

78 Foundations Turing Machine -> Computer The theory of computation, and the practical application it made possible (the computer), was developed by the English mathematician Alan Turing.

79 Turing machines Turing machines (TMs) were introduced by Alan Turing in 1936. They are more powerful than both finite automata and pushdown automata; in fact, they are as powerful as any computer we have ever built. The main improvement over PDAs is that they have infinite accessible memory (in the form of a tape) which can be read and written to.

80 Basic design of a Turing machine Control Infinite tape Tape head Usual finite-state control The basic improvement over the previous automata is the infinite tape, which can be read and written to. The input data begins on the tape, so no separate input mechanism is needed (unlike a DFA or PDA).

81 Turing Machine... Infinite tape: Γ*. Tape head: reads the current square on the tape, writes into the current square, moves one square left or right. FSM: like a PDA (pushdown automaton), except: transitions also include a direction (left/right); final accepting and rejecting states. FSM

82 A Turing Machine...... Tape Read-Write head Control Unit (FSM: Finite State Machine)

83 83 The Tape...... Read-Write head No boundaries -- infinite length The head moves Left or Right

84 84...... Read-Write head The head at each time step: 1. Reads a symbol 2. Writes a symbol 3. Moves Left or Right

85 Turing Machine EG Successor Program Sample Rules: If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! Let’s see how they are carried out on a piece of paper that contains the reverse binary representation of 47:

86 86 Turing Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 111101

87 87 Turing Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 011101

88 88 Turing Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 001101

89 89 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 000101

90 90 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 000001

91 91 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 000011

92 Turing Machine EG Successor Program So the successor’s output on 111101 was 000011, which is the reverse binary representation of 48. 111101 = 1 + 2 + 4 + 8 + 0 + 32 = 47. 000011 = 0 + 0 + 0 + 0 + 16 + 32 = 48
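The three rules above can be simulated directly (an illustrative Python sketch under the slides' conventions: reverse binary on the tape, blank cells past the written portion):

```python
# Successor "program": flip 1s to 0s moving right until a 0 or a blank is
# found, then write 1 and halt.
def successor(tape):
    tape = list(tape)
    pos = 0
    while True:
        symbol = tape[pos] if pos < len(tape) else ' '  # blank past the end
        if symbol == '1':
            tape[pos] = '0'      # if read 1: write 0, go right, repeat
            pos += 1
        else:                    # if read 0 or blank: write 1, HALT
            if pos < len(tape):
                tape[pos] = '1'
            else:
                tape.append('1')
            return ''.join(tape)

print(successor('111101'))    # -> '000011'   (47 -> 48)
print(successor('1111111'))   # -> '00000001' (127 -> 128)
```

The second call reproduces the carry-past-the-end case shown on the following slides, where the head runs off the written tape and writes into a blank cell.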

93 93 Turing Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 1111111

94 94 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0111111

95 95 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0011111

96 96 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0001111

97 97 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0000111

98 98 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0000011

99 99 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0000001

100 100 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 0000000

101 101 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! 00000001 The successor of 127 is 128 !!!

102 03/03/2000 FLAC: Reductions Turing machines as functions TM: input word → output word f. A function f is a computable function if a TM with word w as input halts with f(w) as output: w → f(w)

103 First computers: custom computing machines 1946 -- ENIAC: the control is hardwired manually for each problem. Control Input tape (read only) Output tape (write only) Work tape (memory)

104 TMs can be described using text Input tape (read only) Control Output tape (write only) Work tape (memory) Program: n states, s letters for the alphabet, transition function: d(q0,a) = (q1, b, L); d(q0,b) = (q1, a, R); …. Consequence: there is a countable number of Turing Machines

105 The Universal Turing Machine We can build a Turing Machine that can do the job of any other Turing Machine. That is, it can tell you what the output from the second TM would be for any input! This is known as a universal Turing machine (UTM). A UTM acts as a completely general-purpose computer: it stores a program and data on the tape and then executes the program.

106 There is a TM which can simulate any other TM (Alan Turing, 1936) Input tape Universal TM Output tape Work tape (memory) Interpreter Program

107 The Digital Computer: a UTM Machine code Universal TM Output Memory + disk Machine-code Interpreter Input

108 Palindrome Example - UTM Figure 2 – Palindrome Example of a TM computation


119 119 Can Machines Think? In “Computing machinery and intelligence,” written in 1950, Turing asks whether machines can think. He claims that this question is too vague, and proposes, instead, to replace it with a different one. That question is: Can machines pass the “imitation game” (now called the Turing test)? If they can, they are intelligent. Turing is thus the first to have offered a rigorous test for the determination of intelligence quite generally.

120 Self-reproducing CAs von Neumann saw CAs as a good framework for studying the necessary and sufficient conditions for self-replication of structures. von Neumann's approach: self-replication of abstract structures, in the sense that gliders are abstract structures. His CA had 29 possible states per cell (compare with the Game of Life's two: black and white), and his minimum self-replicating structure had some 200,000 cells.

121 Self-rep and DNA This was the early 1950s, pre-discovery of DNA, but von Neumann's machine had a clear analogue of DNA, which is both: interpreted, to determine the pattern of the 'body'; and copied directly, without interpretation, as a symbol string. Simplest general logical form of reproduction (?) How simple can you get?

122 Langton’s Loops Chris Langton formulated a much simpler form of self-replicating structure - Langton's loops - with only a few different states and only small starting structures. [Further developments -- e.g. 'Wireworld'.]

123 3-Dimensional CA 123 One difficulty with three-dimensional cellular automata is the graphical representation (on two- dimensional paper or screen)

124 Artificial Neural Networks 124

125 Artificial Neural Networks (ANN) What is an Artificial Neural Network? Artificial Neural Networks are crude attempts to model the highly massive parallel and distributed processing we believe takes place in the brain.

126 Developing Intelligent Program Systems Neural Nets Two main areas of activity: Biological: Try to model biological neural systems. Computational: develop powerful applications. 126

127 Biological Motivation: Brain Networks of processing units (neurons) with connections (synapses) between them. Large number of neurons: 10^11. Large connectivity: each neuron connected, on average, to 10^4 others. Parallel processing. Distributed computation/memory: processing is done by neurons, and the memory is in the synapses. Robust to noise and failures. ANNs attempt to capture this mode of computation

128 128 The Brain as a Complex System The brain uses the outside world to shape itself. (Self-organization) It goes through crucial periods in which brain cells must have certain kinds of stimulation to develop such powers as vision, language, smell, muscle control, and reasoning. (Learning, evolution, emergent properties)

129 129 Main Features of the Brain Robust – fault tolerant and degrade gracefully Flexible -- can learn without being explicitly programmed Can deal with fuzzy, probabilistic information Is highly parallel

130 130 Characteristic of Biological Computation Massive Parallelism Locality of Computation → Scalability Adaptive (Self Organizing) Representation is Distributed

131 Artificial Neural Networks History of ANNs

132 History of Artificial Neural Networks 1943: McCulloch and Pitts proposed a model of a neuron --> Perceptron. 1960s: Widrow and Hoff explored Perceptron networks (which they called “Adalines”) and the delta rule. 1962: Rosenblatt proved the convergence of the perceptron training rule. 1969: Minsky and Papert showed that the Perceptron cannot deal with nonlinearly-separable data sets -- even those that represent simple functions such as XOR. 1970-1985: Very little research on Neural Nets. 1986: Invention of Backpropagation [Rumelhart and McClelland, but also Parker and, earlier on, Werbos], which can learn from nonlinearly-separable data sets. Since 1985: A lot of research in Neural Nets -> Complex Systems
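The perceptron training rule whose convergence Rosenblatt proved can be sketched in Python (an illustrative sketch; the learning rate, epoch count, and the AND data set are assumptions for the example):

```python
# Perceptron training rule: w <- w + eta * (t - o) * x, with the threshold
# handled as a bias weight on an extra always-on input (as in later slides).
def train_perceptron(samples, epochs=20, eta=0.1):
    w = [0.0, 0.0, 0.0]                      # [bias, w1, w2]
    for _ in range(epochs):
        for x, t in samples:
            xs = [1.0] + list(x)             # prepend the bias input
            o = 1 if sum(wi * xi for wi, xi in zip(w, xs)) >= 0 else 0
            w = [wi + eta * (t - o) * xi for wi, xi in zip(w, xs)]
    return w

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND: separable
w = train_perceptron(data)
predict = lambda x: 1 if w[0] + w[1] * x[0] + w[2] * x[1] >= 0 else 0
print([predict(x) for x, _ in data])   # -> [0, 0, 0, 1]
```

On the XOR data set of Minsky and Papert, no weight vector exists, so the same loop never converges; that is exactly the limitation the 1969 result describes.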

133 Developing Intelligent Program Systems Neural Nets Applications Neural nets can be used to answer the following: Pattern recognition: Does that image contain a face? Classification problems: Is this cell defective? Prediction: Given these symptoms, the patient has disease X Forecasting: predicting behavior of stock market Handwriting: is character recognized? Optimization: Find the shortest path for the TSP. 133

134 Artificial Neural Networks Biological Neuron

135 Typical Biological Neuron 135

136 The Neuron The neuron receives nerve impulses through its dendrites. It then sends the nerve impulses through its axon to the terminal buttons, where neurotransmitters are released to stimulate other neurons.

137 137 The neuron The unique components are: Cell body or soma which contains the nucleus The dendrites The axon The synapses

138 138 The neuron - dendrites The dendrites are short fibers (surrounding the cell body) that receive messages The dendrites are very receptive to connections from other neurons. The dendrites carry signals from the synapses to the soma.

139 139 The neuron - axon The axon is a long extension from the soma that transmits messages Each neuron has only one axon. The axon carries action potentials from the soma to the synapses.

140 140 The neuron - synapses The synapses are the connections made by an axon to another neuron. They are tiny gaps between axons and dendrites (with chemical bridges) that transmit messages A synapse is called excitatory if it raises the local membrane potential of the post synaptic cell. Inhibitory if the potential is lowered.

141 Artificial Neural Networks artificial Neurons

142 142 Typical Artificial Neuron inputs connection weights threshold output

143 143 Typical Artificial Neuron linear combination net input (local field) activation function

144 Equations Net input: net = Σ_i w_i x_i (a linear combination of the inputs). Neuron output: o = f(net), where f is the activation function.

145 Artificial Neuron Incoming signals to a unit are combined by summing their weighted values: Σ = Σ_i w_i x_i over inputs x_1 … x_p with weights w_1 … w_p (plus a bias weight w_0). Output = f(Σ). Activation functions include the step function, linear function, sigmoid function, …

146 Activation functions Step function: step(x) = 1 if x >= threshold, 0 if x < threshold (in the picture above, threshold = 0). Sign function: sign(x) = +1 if x >= 0, -1 if x < 0. Sigmoid (logistic) function: sigmoid(x) = 1/(1 + e^(-x)). Linear function: pl(x) = x. Adding an extra input with activation a_0 = -1 and weight W_0,j = t (called the bias weight) is equivalent to having a threshold at t. This way we can always assume a zero threshold.
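A single neuron with these activation functions can be sketched in Python (an illustrative sketch using the slides' bias convention: weight w0 on a fixed input of -1 plays the role of the threshold t):

```python
import math

# net = sum_i w_i * x_i + w0 * (-1); the unit then applies an activation.
def neuron(inputs, weights, w0, activation):
    net = sum(w * x for w, x in zip(weights, inputs)) + w0 * (-1)
    return activation(net)

step = lambda net: 1 if net >= 0 else 0
sigmoid = lambda net: 1.0 / (1.0 + math.exp(-net))

# With weights (1, 1) and bias weight 1.5, the step unit fires only when
# both inputs are on (this is the AND unit of the next slides):
print(neuron([1, 1], [1, 1], 1.5, step))   # -> 1
print(neuron([1, 0], [1, 1], 1.5, step))   # -> 0
```

Swapping `step` for `sigmoid` turns the same unit into the smooth neuron that backpropagation requires.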

147 Real vs. Artificial Neurons Biological: axon, dendrites, synapse, cell. Artificial: inputs x_0 … x_n with weights w_0 … w_n, output o; threshold units.

148 148 Neurons as Universal computing machine In 1943, McCulloch and Pitts showed that a synchronous assembly of such neurons is a universal computing machine. That is, any Boolean function can be implemented with threshold (step function) units.

149 Implementing AND Inputs x_1 and x_2, both with weight 1, threshold W = 1.5, output o(x_1, x_2).

150 Implementing OR Inputs x_1 and x_2, both with weight 1, threshold W = 0.5: o(x1, x2) = 1 if –0.5 + x1 + x2 > 0, and 0 otherwise.

151 Implementing NOT Input x_1, output o(x_1), W = -0.5.

152 Implementing more complex Boolean functions A unit with inputs x_1, x_2 (weights 1, 1) and threshold 0.5 computes x_1 or x_2; feeding its output together with x_3 (weights 1, 1) into a unit with threshold 1.5 computes (x_1 or x_2) and x_3.
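The threshold units of the last few slides can be sketched and composed in Python (an illustrative sketch; the NOT unit's input weight of -1 is an assumption, as the slide shows only its threshold):

```python
# A threshold (step-function) unit: fires when the weighted sum of its
# inputs exceeds the threshold.
def threshold_unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > threshold else 0

AND = lambda x1, x2: threshold_unit([x1, x2], [1, 1], 1.5)   # slide's AND
OR  = lambda x1, x2: threshold_unit([x1, x2], [1, 1], 0.5)   # slide's OR
NOT = lambda x1:     threshold_unit([x1], [-1], -0.5)        # assumed weight -1

# Composing units, as on this slide: (x1 or x2) and x3
composed = lambda x1, x2, x3: threshold_unit([OR(x1, x2), x3], [1, 1], 1.5)
print(composed(1, 0, 1))   # -> 1
print(composed(1, 1, 0))   # -> 0
```

Chaining such units is exactly the McCulloch-Pitts observation from the earlier slide: any Boolean function can be built this way.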

153 153 Using Artificial Neural Networks When using ANN, we have to define: Artificial Neuron Model ANN Architecture Learning mode

154 Artificial Neural Networks ANN Architecture

155 155 ANN Architecture Feedforward: Links are unidirectional, and there are no cycles, i.e., the network is a directed acyclic graph (DAG). Units are arranged in layers, and each unit is linked only to units in the next layer. There is no internal state other than the weights. Recurrent: Links can form arbitrary topologies, which can implement memory. Behavior can become unstable, oscillatory, or chaotic.

156 156 Artificial Neural Network Feedforward Network Output layer Input layer Hidden layers fully connected sparsely connected

157 Artificial Neural Network FeedForward Architecture Information flow is unidirectional: Multi-Layer Perceptron (MLP), Radial Basis Function (RBF), Kohonen Self-Organising Map (SOM)

158 158 Artificial Neural Network Recurrent Architecture Feedback connections Hopfield Neural Networks: Associative memory Adaptive Resonance Theory (ART)

159 159 Artificial Neural Network Learning paradigms Supervised learning: Teacher presents ANN input-output pairs, ANN weights adjusted according to error Classification Control Function approximation Associative memory Unsupervised learning: no teacher Clustering

160 160 ANN capabilities Learning Approximate reasoning Generalisation capability Noise filtering Parallel processing Distributed knowledge base Fault tolerance

161 Main Problems with ANNs Contrary to expert systems, with ANNs the knowledge base is not transparent (black box). Learning is sometimes difficult/slow. Limited storage capability

162 Some applications of ANNs Pronunciation: the NETtalk program (Sejnowski & Rosenberg 1987) is a neural network that learns to pronounce written text: it maps character strings into phonemes (basic sound elements), learning speech from text. Speech recognition. Handwritten character recognition: a network designed to read zip codes on hand-addressed envelopes. ALVINN (Pomerleau) is a neural network used to control a vehicle's steering direction so as to follow the road by staying in the middle of its lane. Face recognition. Backgammon learning program. Forecasting, e.g., predicting behavior of the stock market

163 163 Application of ANNs Network StimulusResponse 0 1 0 1 1 1 0 0 1 1 0 0 1 0 1 0 Input Pattern Output Pattern encoding decoding The general scheme when using ANNs is as follows:

164 164 Application: Digit Recognition

165 165 Matlab Demo Learning XOR function Function approximation Digit Recognition

166 Learning XOR Operation: Matlab Code
P = [0 0 1 1; 0 1 0 1];
T = [0 1 1 0];
net = newff([0 1; 0 1], [6 1], {'tansig' 'tansig'});
net.trainParam.epochs = 4850;
net = train(net,P,T);
X = [0; 1];   % one sample is a column vector (the network has 2 inputs)
Y = sim(net,X);
display(Y);

167 Function Approximation: Learning the Sine Function
P = 0:0.1:10;
T = sin(P)*10.0;
net = newff([0.0 10.0], [8 1], {'tansig' 'purelin'});
plot(P,T); pause;
Y = sim(net,P);          % untrained network, for comparison
plot(P,T,P,Y,'o'); pause;
net.trainParam.epochs = 4850;
net = train(net,P,T);
Y = sim(net,P);
plot(P,T,P,Y,'o');

168 168 Digit Recognition: P = [ 1 0 1 1 1 1 1 1 1 1 ; 1 1 1 1 0 1 1 1 1 1 ; 1 0 1 1 1 1 1 1 1 1 ; 1 0 0 0 1 1 1 0 1 1 ; 0 1 0 0 0 0 0 0 0 0 ; 1 0 1 1 1 0 0 1 1 1 ; 1 0 1 1 1 1 1 0 1 1 ; 0 1 1 1 1 1 1 0 1 1 ; 1 0 1 1 1 1 1 1 1 1 ; 1 0 1 0 0 0 1 0 1 0 ; 0 1 0 0 0 0 0 0 0 0 ; 1 0 0 1 1 1 1 1 1 1 ; 1 0 1 1 0 1 1 0 1 1 ; 1 1 1 1 0 1 1 0 1 1 ; 1 0 1 1 1 1 1 1 1 1 ]; T = [ 1 0 0 0 0 0 0 0 0 0 ; 0 1 0 0 0 0 0 0 0 0 ; 0 0 1 0 0 0 0 0 0 0 ; 0 0 0 1 0 0 0 0 0 0 ; 0 0 0 0 1 0 0 0 0 0 ; 0 0 0 0 0 1 0 0 0 0 ; 0 0 0 0 0 0 1 0 0 0 ; 0 0 0 0 0 0 0 1 0 0 ; 0 0 0 0 0 0 0 0 1 0 ; 0 0 0 0 0 0 0 0 0 1 ];

169 169 Digit Recognition: net = newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1; 0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1], [20 10],{'tansig' 'tansig'}); net.trainParam.epochs = 4850; net = train(net,P,T);

170 When to use ANNs? Input is high-dimensional, discrete or real-valued (e.g. raw sensor input). Inputs can be highly correlated or independent. Output is discrete or real-valued, possibly a vector of values. Possibly noisy data; data may contain errors. The form of the target function is unknown. Long training times are acceptable. Fast evaluation of the target function is required. Human readability of the learned target function is unimportant ⇒ an ANN is much like a black box

171 171 Conclusions

172 172 Conclusions This topic is very hot and has widespread implications Biology Chemistry Computer science Complexity We’ve seen the basic concepts … But we’ve only scratched the surface!  From now on, Think Biology, Emergence, Complex Systems …

173 References 173

174 References Jay Xiong, New Software Engineering Paradigm Based on Complexity Science, Springer, 2011. Claudius Gros, Complex and Adaptive Dynamical Systems, Second Edition, Springer, 2011. Blanchard, B. S., Fabrycky, W. J., Systems Engineering and Analysis, Fourth Edition, Pearson Education, Inc., 2006. Braha, D., Minai, A. A., Bar-Yam, Y. (Editors), Complex Engineered Systems, Springer, 2006. Gibson, J. E., Scherer, W. T., How to Do Systems Analysis, John Wiley & Sons, Inc., 2007. International Council on Systems Engineering (INCOSE) website (www.incose.org). New England Complex Systems Institute (NECSI) website (www.necsi.org). Rouse, W. B., Complex Engineered, Organizational and Natural Systems: Issues Underlying the Complexity of Systems and Fundamental Research Needed to Address These Issues, Systems Engineering, Vol. 10, No. 3, 2007.

175 References Wilner, M., Bio-inspired and Nanoscale Integrated Computing, Wiley, 2009. Yoshida, Z., Nonlinear Science: the Challenge of Complex Systems, Springer, 2010. Gardner, M., The Fantastic Combinations of John Conway’s New Solitaire Game “Life”, Scientific American 223, 120–123 (1970). Nielsen, M. A. & Chuang, I. L., Quantum Computation and Quantum Information, Cambridge University Press, UK, 2000.


