Complex Systems Engineering SwE 488 Artificial Complex Systems Prof. Dr. Mohamed Batouche Department of Software Engineering CCIS – King Saud University.


Complex Systems Engineering SwE 488 Artificial Complex Systems Prof. Dr. Mohamed Batouche Department of Software Engineering CCIS – King Saud University Riyadh, Kingdom of Saudi Arabia

Outline Introduction Cellular Automata Artificial Neural Networks Swarm Intelligence Evolutionary Computing Quantum Computing DNA/Molecular Computing Artificial Life Artificial Immune System Conclusion 2

Introduction 3

4 Complex systems The fundamental characteristic of a complex system is that it exhibits emergent properties: local interaction rules between simple agents give rise to complex patterns and global behavior

Complex Systems Complex systems are systems that exhibit emergent behavior: Anthills Human societies Financial Markets Climate Nervous systems Immune systems Cities Galaxies Modern telecommunication infrastructures 5

6 Characteristics of a complex system A complex system displays some or all of the following characteristics: Agent-based: basic building blocks are the characteristics and activities of individual agents. Heterogeneous: the agents differ in important characteristics. Dynamic: characteristics change over time, usually in a nonlinear way; adaptation. Feedback: changes are often the result of feedback from the environment. Organization: agents are organized into groups or hierarchies. Emergence: macro-level behaviors emerge from agent actions and interactions.

7 Attributes of a complex system Interdependent Independent Distributed Cooperative Competitive Adaptive

8 Complex Systems perspectives Complex Systems as a Science to Understand Nature Complex Systems as a New Form of Engineering

Complex System Engineering How can we understand and make use of these emergent phenomena to develop new ways of generating computer programs? Can we build self-adapting, self-organizing, and evolving computer systems and programs? 9 See Demo – Video 3D Creatures

Complex System Engineering 10 Most biological systems do not forecast or schedule. They respond to their environment quickly, robustly, and adaptively. As engineers, let us not try to control the system: design the system so that it controls and adapts itself to its environment

Artificial Complex Systems Cellular Automata Artificial Neural Networks Swarm Intelligence Evolutionary Computing Quantum Computing DNA/Molecular Computing Artificial Life Artificial Immune System 11

Cellular Automata 12

Purpose 13 In Theory: Computation of all computable functions Construction of (also non-homogeneous) automata by other automata, the offspring being at least as powerful (in some well-defined sense) as the parent In Practice: Exploring how complex systems with emergent patterns seem to evolve from purely local interactions of agents, i.e. without a “master plan”!

Cellular Automata A cellular automaton is one of a family of simple, finite-state machines that exhibit interesting, emergent behaviors through their interactions in a population 14

The famous BOIDS model shows how flocking behavior can emerge from a collection of agents following a few simple rules. Emergent Behavior 15

The original concept of CA is most strongly associated with John von Neumann, who was interested in the connections between biology and the new study of automata theory. Stanislaw Ulam suggested to von Neumann the use of cellular automata as a framework for researching these connections. The original concept of CA can be credited to Ulam, while the early development of the concept is credited to von Neumann. Ironically, although von Neumann made many contributions and developments in CA, they are commonly referred to as “non-von Neumann style”, while the standard model of computation (CPU, globally addressable memory, serial processing) is known as “von Neumann style”. 16

17 Cellular Automata (CAs) Have been used as: massively parallel computer architecture model of physical phenomena (Fredkin, Wolfram) VLSI testing data encryption error-correcting codes testable synthesis generation of hashing functions Currently being investigated as a model of quantum computation (QCAs)

Grid: mesh of cells; the simplest mesh is one-dimensional. Cell: basic element of a CA; cells can be thought of as memory elements that store state information. Rules: all cells are updated synchronously according to the transition rules. 18

Local interaction leads to global dynamics. One can think of the behavior of a cellular automaton like that of a “wave” at a sports event. Each person reacts to the state of his neighbors (if they stand, he stands). 19

Rule Application Next state of the core cell is related to the states of the neighboring cells and its current state. An example rule for a one-dimensional CA: 011 -> x0x. All possible states must be described. Alternatively, the next state of the core cell may depend only upon the sum of the states of the neighboring cells. For example, if the sum of the adjacent cells is 4, the state of the core cell is 1; in all other cases the state of the core cell is 0. 20

21 Structure Discrete space (lattice) of regular cells: 1D, 2D, 3D, …; rectangular, hexagonal, … Time advances in discrete steps: at each step a cell changes state in response to its own previous state and the states of its neighbors (within some “radius”) All cells obey the same state update rule (an FSA), depending only on local relations Synchronous updating (parallel processing)
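The structure above (a lattice of cells, all updated synchronously by the same local rule) can be sketched in a few lines of Python. The rule table used here is an arbitrary illustrative choice, not one from the slides:

```python
def step(cells, rule):
    """One synchronous update of a 1-D binary CA with wrap-around edges.
    `rule` maps each (left, centre, right) neighborhood to the next state."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Illustrative rule table: a cell becomes 1 iff exactly one cell in its
# neighborhood (itself included) is 1.
rule = {(a, b, c): int(a + b + c == 1)
        for a in (0, 1) for b in (0, 1) for c in (0, 1)}

cells = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    cells = step(cells, rule)
    print(cells)
```

Every cell is updated from the same snapshot of the lattice, which is exactly the synchronous (parallel) updating described above.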

Structure: Neighborhoods 22

1-DIMENSIONAL AUTOMATA 23

One-Dimensional CA’s Game of Life is 2-D. Many simpler 1-D CAs have been studied For a given rule-set, and a given starting setup, the deterministic evolution of a CA with two states (on/off) can be pictured as successive lines of colored squares, each line drawn under the previous one 24

Neighborhoods 25

Now make your own CA: 3 Black = White, 2 Black = Black, 1 Black = Black, 3 White = White 26

“A New Kind of Science” Stephen Wolfram ISBN

1-D CA Example 28 Rules Rule# = 00110110 (binary) = 54

Wolfram Model Rule #126 = 01111110 (binary) = 126 Most of the rules are degenerate, meaning they create repetitive patterns of no interest. However, there are a few rules which produce surprisingly complex patterns that do not repeat themselves Rule #124 = 01111100 (binary) = 124

Wolfram Model We can view the state of the model at any time in the future only by stepping through all the previous states. 30

31 One hundred generations of Rule 30

CA Example: Rule 30 Rule (00011110) = 30 32

Conus Textile pattern 33

The pattern is neither regular nor completely random. It appears to have some order, but is never predictable. 34

See Demo - NetLogo Rule #45 = 00101101 (binary) = 45 Rule #30 = 00011110 (binary) = 30 This naming convention for the 256 distinct update rules is due to Stephen Wolfram. He is one of the pioneers of Cellular Automata and author of the book A New Kind of Science, which argues that discoveries about cellular automata are not isolated facts but have significance for all disciplines of science. Wolfram Model 35
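Wolfram's naming convention can be made concrete in a short Python sketch: bit n of the rule number is the next state for the neighborhood whose three cells, read as a binary number, equal n.

```python
def wolfram_table(rule_number):
    """Expand a Wolfram rule number (0-255) into an update table.
    Bit n of the number is the next state for the neighborhood whose
    three cells, read as a binary number, equal n."""
    return {((n >> 2) & 1, (n >> 1) & 1, n & 1): (rule_number >> n) & 1
            for n in range(8)}

table = wolfram_table(30)          # 30 = 00011110 in binary
print(table[(0, 0, 1)], table[(1, 1, 0)])
```

For example, under Rule 30 the neighborhood 001 (n = 1) maps to bit 1 of 30, which is 1, while 110 (n = 6) maps to bit 6, which is 0.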

Rule (01011010) = 90 Wolfram Rule 90 36

Wolfram Rule 110 Proven to be Turing complete -- rich enough for universal computation. An interesting result, because Rule 110 is an extremely simple system, simple enough to suggest that naturally occurring physical systems may also be capable of universality 37

Wolfram Rule 99 Rule# 99 = 01100011 (binary) 38


Mollusc Pigmentation Patterns 40

Wolfram’s CA classes 1,2 From observation, initially of 1-D CA spacetime patterns, Wolfram noticed 4 different classes of rule-sets. Any particular rule-set falls into one of these: CLASS 1: From any starting setup, the pattern converges to all blank -- fixed attractor CLASS 2: From any start, it goes to a limit cycle, repeating the same sequence of patterns for ever -- cyclic attractors 41

Wolfram’s CA classes 3,4 CLASS 3: Turbulent mess, chaos, no patterns to be seen. CLASS 4: From any start, patterns emerge and continue without repetition for a very long time (could only be 'forever' in an infinite grid). Classes 1 and 2 are boring, Class 3 is messy, Class 4 is 'at the edge of chaos' -- at the transition between order and chaos -- where the Game of Life is! 42

2-DIMENSIONAL AUTOMATA 43

A 2-dimensional cellular automaton consists of an infinite (or finite) grid of cells, each in one of a finite number of states. Time is discrete, and the state of a cell at time t is a function of the states of its neighbors at time t-1. 44

Neighborhoods 45 Von Neumann, Moore, Margolus

Cryptography: Rule 30 Simulations: gas behaviour, forest fire propagation, urban development, traffic flow, air flow, crystallization processes An alternative to differential equations 46

47 Snowflakes

Bead-sort Bead-Sort is a method of ordering a set of positive integers by mimicking the natural process of objects falling to the ground, as beads on an abacus slide down vertical rods. The number of beads on each horizontal row represents one of the numbers of the set to be sorted, and it is clear that the final state will represent the sorted set. 48
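The falling-beads idea can be simulated directly. The sketch below is plain Python rather than the CA rule used in the slides: each number drops a row of beads onto vertical rods, and re-reading the row widths afterwards gives the sorted order.

```python
def bead_sort(nums):
    """Sort positive integers by simulating beads falling on vertical rods."""
    if not nums:
        return []
    rods = [0] * max(nums)           # beads accumulated on each vertical rod
    for n in nums:                   # drop a row of n beads onto rods 0..n-1
        for r in range(n):
            rods[r] += 1
    # Row i (counted from the bottom) holds one bead per rod with > i beads;
    # reading the row widths from top to bottom yields ascending order.
    return [sum(1 for beads in rods if beads > i)
            for i in reversed(range(len(nums)))]

print(bead_sort([5, 2, 4, 1, 3]))   # -> [1, 2, 3, 4, 5]
```

This runs in O(n x max) time on a serial machine; the point of the CA formulation is that all beads fall in parallel.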

Bead-sort 49

Bead-sort - CA implementation 50 See Demo - NetLogo: the natural process of objects falling to the ground

Bead-sort Extended CA implementation The Bead-Sort can be modeled by a two-dimensional cellular automaton (CA) rule. For the Antigravity Bead-Sort, we add a second rule. 51

Bead-sort Extended 52 The "extended" (anti-gravity) mode allows the inclusion of all integers, with "negative beads" rising while "positive beads" fall.

53 Example: Conway’s Game of Life Invented by Conway in the late 1960s A simple CA capable of universal computation Structure: 2D space rectangular lattice of cells binary states (alive/dead) neighborhood of 8 surrounding cells (& self) simple population-oriented rule

54 Example: Conway’s Game of Life Cell State = dead/off/0 State = alive/on/1

A cell dies or lives according to some transition rule The world is round (wraps around at the edges) How many rules for Life? 20, 40, 100, 1000? T = 0 → T = 1 via transition rules Example: Conway’s Game of Life 55

56 State Transition Rule Live cell has 2 or 3 live neighbors  stays as is (stasis) Live cell has < 2 live neighbors  dies (loneliness) Live cell has > 3 live neighbors  dies (overcrowding) Empty cell has 3 live neighbors  comes to life (reproduction)

57 State Transition Rule Survive with 2 or 3 live neighbors Live cell  stays as is (stasis) otherwise dies from loneliness or overcrowding Generate with 3 live neighbors Empty cell  comes to life (reproduction)

58 State Transition Rule Three simple rules: dies if the number of alive neighbor cells <= 1 (loneliness) dies if the number of alive neighbor cells >= 4 (overcrowding) generate an alive cell if the number of alive neighbor cells = 3 (procreation)

Examples of the rules loneliness (dies if #alive <= 1) overcrowding (dies if #alive >= 4) procreation (lives if #alive = 3) State Transition Rule 59
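The transition rule above can be written down in a few lines. A minimal Python sketch, keeping the live cells in a set on a wrapping (toroidal) grid; the grid size is an illustrative choice:

```python
from collections import Counter

def life_step(alive, width, height):
    """One generation of the Game of Life on a wrapping grid.
    `alive` is a set of (x, y) cells; a live cell survives with 2 or 3
    live neighbors, an empty cell comes to life with exactly 3."""
    counts = Counter(((x + dx) % width, (y + dy) % height)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

# A blinker: a row of three live cells flips between horizontal and vertical
blinker = {(1, 2), (2, 2), (3, 2)}
print(life_step(blinker, 5, 5))
```

Counting neighbor contributions with a `Counter` only touches cells near live ones, so empty regions of the grid cost nothing.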

CA: Discrete Time, Discrete Space Initial Setup Number of Neighbors After Pass 1 After Pass 2 60

T = 0 neighboring values T = 1 Game of Life: 2D Cellular Automata using simple rules Emergent pattern: Blinker 61

Emergent patterns The Conway automaton can simulate a number of different effects that can be found in the evolution of a living population: equilibria, oscillation, movement. square 2 steps diagonal beehive 2 steps horizontal boat 3 steps instability (all the space is filled up by horizontal lines) ship 15 steps instability toast chaos? 62

Game of Life: emergent patterns 63 gliders: patterns that move constantly across the grid Conway’s Rules: Game of Life Survive with 2 – 3 living neighbors Generate with 3 living neighbors Gosper’s glider gun: emits a glider stream

Emergent Patterns 64

Emergent Patterns Oscillators: objects that change from step to step, but eventually repeat themselves. These include, but are not limited to, period-2 oscillators such as the blinker and the toad. Blinker Toad 65

Emergent Patterns: A Clock 66 See Demo: Game of Life

Emergent Patterns: 67 See Demo: Game of Life Oscillator Barber’s Pole Pulsar SpaceShip Beacon Glider

Emergent Patterns: Gosper’s Glider Gun 68 See Demo: Game of Life

Emergent Patterns: Puffer train 69

Emergent Patterns: Double-Barreled Gun 70

Emergent Patterns: Edge Shooter 71

Emergent Patterns: Evolution of a breeder... 72

Game of Life - implications Typical Artificial Life, or Non-Symbolic AI, computational paradigm: bottom-up parallel locally-determined Complex behavior from (... emergent from...) simple rules. Gliders, blocks, traffic lights, blinkers, glider-guns, eaters, puffer-trains... 73

Game of Life as a Computer? Higher-level units in the Game of Life can in principle be assembled into complex 'machines' -- even into a full computer, or Universal Turing Machine. 'Computer memory' is held as 'bits' denoted by 'blocks' laid out in a row stretching out as a potentially infinite 'tape'. Bits can be turned on/off by well-aimed gliders. 74

Game of Life as a Computer ? Bits can be turned on/off by well-aimed gliders: 75

This is a Turing Machine implemented in Conway's Game of Life. 76

This is a Universal Turing Machine (UTM) implemented in Conway's Game of Life. Designed by Paul Rendell 10/February/

78 Foundations Turing Machine -> Computer The theory of computation, and the practical application it made possible (the computer), was developed by an Englishman called Alan Turing.

Turing machines Turing machines (TMs) were introduced by Alan Turing in 1936. They are more powerful than both finite automata and pushdown automata. In fact, they are as powerful as any computer we have ever built. The main improvement over PDAs is that they have infinite accessible memory (in the form of a tape) which can be read and written.

Basic design of Turing machine Control Infinite tape Tape head Usual finite-state control The basic improvement over the previous automata is the infinite tape, which can be read and written. The input data begins on the tape, so no separate input mechanism is needed (unlike a DFA or PDA).

Turing Machine... Infinite tape: Γ* Tape head: read the current square on the tape, write into the current square, move one square left or right FSM: like a PDA (PushDown Automaton), except: transitions also include a direction (left/right) final accepting and rejecting states

82 A Turing Machine Tape Read-Write head Control Unit (FSM: Finite State Machine)

83 The Tape Read-Write head No boundaries -- infinite length The head moves Left or Right

Read-Write head The head at each time step: 1. Reads a symbol 2. Writes a symbol 3. Moves Left or Right

85 Turing Machine EG Successor Program Sample Rules: If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! Let’s see how they are carried out on a piece of paper that contains the reverse binary representation of 47: 111101


92 Turing Machine EG Successor Program So the successor’s output on 111101 was 000011, which is the reverse binary representation of 110000 = 48
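The three rules of the successor program can be executed mechanically. A minimal Python sketch, assuming the blank symbol is what the head reads past the end of the tape:

```python
def successor(tape):
    """Run the successor program on a reverse-binary tape (a string of
    '0'/'1' digits, least significant bit first).
    Rules: read 1 -> write 0, go right; read 0 -> write 1, HALT;
           read blank -> write 1, HALT."""
    cells = list(tape)
    head = 0
    while True:
        symbol = cells[head] if head < len(cells) else ' '  # ' ' = blank
        if symbol == '1':
            cells[head] = '0'      # the carry propagates to the right
            head += 1
        else:                      # '0' or blank: write 1 and halt
            if head < len(cells):
                cells[head] = '1'
            else:
                cells.append('1')
            return ''.join(cells)

print(successor('111101'))  # 47 in reverse binary -> '000011' (48 reversed)
```

Running it on '1111111' (127 in reverse binary) propagates the carry off the end of the input and appends a fresh 1, giving 128, just as on the slide.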


101 A Thinking Machine EG Successor Program If read 1, write 0, go right, repeat. If read 0, write 1, HALT! If read blank, write 1, HALT! The successor of 127 is 128 !!!

03/03/2000 FLAC: Reductions 102 Turing machines as functions TM: input word w in, output word f(w) out. A function f is a computable function if a TM with word w as input halts with f(w) as output.

103 First computers: custom computing machines ENIAC: the control is hardwired manually for each problem. Control Input tape (read only) Output tape (write only) Work tape (memory)

104 TMs can be described using text Input tape (read only) Control Output tape (write only) Work tape (memory) Program: n states, s letters for the alphabet, transition function: d(q0,a) = (q1, b, L); d(q0,b) = (q1, a, R); … Consequence: there is a countable number of Turing Machines

The Universal Turing Machine We can build a Turing Machine that can do the job of any other Turing Machine. That is, it can tell you what the output from the second TM would be for any input! This is known as a universal Turing machine (UTM). A UTM acts as a completely general-purpose computer: it stores a program and data on the tape and then executes the program.

106 There is a TM which can simulate any other TM (Alan Turing, 1936) Input tape Universal TM Output tape Work tape (memory) Interpreter Program

107 The Digital Computer: a UTM Machine code Universal TM Output Memory + disk Machine-code Interpreter Input

Palindrome Example - UTM Figure 2 – Palindrome Example of a TM computation


119 Can Machines Think? In “Computing machinery and intelligence,” written in 1950, Turing asks whether machines can think. He claims that this question is too vague, and proposes, instead, to replace it with a different one. That question is: Can machines pass the “imitation game” (now called the Turing test)? If they can, they are intelligent. Turing is thus the first to have offered a rigorous test for the determination of intelligence quite generally.

120 Self-reproducing CAs von Neumann saw CAs as a good framework for studying the necessary and sufficient conditions for self-replication of structures. von Neumann's approach: self-replication of abstract structures, in the sense that gliders are abstract structures. His CA had 29 possible states for each cell (compare with the Game of Life's 2, black and white), and his minimum self-replicating structure had some 200,000 cells.

121 Self-rep and DNA This was the early 1950s, before the discovery of the structure of DNA, but von Neumann's machine had a clear analogue of DNA, which is both: interpreted, to determine the pattern of the 'body'; and copied directly, without interpretation, as a symbol string. The simplest general logical form of reproduction (?) How simple can you get?

122 Langton’s Loops Chris Langton formulated a much simpler form of self-rep structure - Langton's loops - with only a few different states, and only small starting structures. [Further developments -- e.g. 'Wireworld'.]

3-Dimensional CA 123 One difficulty with three-dimensional cellular automata is the graphical representation (on two-dimensional paper or screen)

Artificial Neural Networks 124

125 Artificial Neural Networks (ANN) What is an Artificial Neural Network? Artificial Neural Networks are crude attempts to model the massively parallel and distributed processing we believe takes place in the brain.

Developing Intelligent Program Systems Neural Nets Two main areas of activity: Biological: Try to model biological neural systems. Computational: develop powerful applications. 126

127 Biological Motivation: Brain Networks of processing units (neurons) with connections (synapses) between them Large number of neurons Large connectivity: each connected to, on average, 10^4 others Parallel processing Distributed computation/memory Processing is done by neurons and the memory is in the synapses Robust to noise, failures  ANNs attempt to capture this mode of computation

128 The Brain as a Complex System The brain uses the outside world to shape itself. (Self-organization) It goes through crucial periods in which brain cells must have certain kinds of stimulation to develop such powers as vision, language, smell, muscle control, and reasoning. (Learning, evolution, emergent properties)

129 Main Features of the Brain Robust – fault tolerant and degrade gracefully Flexible -- can learn without being explicitly programmed Can deal with fuzzy, probabilistic information Is highly parallel

130 Characteristic of Biological Computation Massive Parallelism Locality of Computation → Scalability Adaptive (Self Organizing) Representation is Distributed

Artificial Neural Networks History of ANNs

132 History of Artificial Neural Networks 1943: McCulloch and Pitts proposed a model of a neuron --> Perceptron 1960s: Widrow and Hoff explored Perceptron networks (which they called “Adalines”) and the delta rule. 1962: Rosenblatt proved the convergence of the perceptron training rule. 1969: Minsky and Papert showed that the Perceptron cannot deal with nonlinearly-separable data sets, even those that represent simple functions such as XOR. Afterwards: very little research on Neural Nets. 1986: Invention of Backpropagation [Rumelhart and McClelland, but also Parker and, earlier, Werbos], which can learn from nonlinearly-separable data sets. Since 1985: a lot of research in Neural Nets -> Complex Systems

Developing Intelligent Program Systems Neural Nets Applications Neural nets can be used to answer the following: Pattern recognition: does that image contain a face? Classification problems: is this cell defective? Prediction: given these symptoms, the patient has disease X Forecasting: predicting the behavior of the stock market Handwriting: which character is this? Optimization: find the shortest path for the TSP. 133

Artificial Neural Networks Biological Neuron

Typical Biological Neuron 135

136 The Neuron The neuron receives nerve impulses through its dendrites. It then sends nerve impulses through its axon to the terminal buttons, where neurotransmitters are released to stimulate other neurons.

137 The neuron The unique components are: Cell body or soma which contains the nucleus The dendrites The axon The synapses

138 The neuron - dendrites The dendrites are short fibers (surrounding the cell body) that receive messages The dendrites are very receptive to connections from other neurons. The dendrites carry signals from the synapses to the soma.

139 The neuron - axon The axon is a long extension from the soma that transmits messages Each neuron has only one axon. The axon carries action potentials from the soma to the synapses.

140 The neuron - synapses The synapses are the connections made by an axon to another neuron. They are tiny gaps between axons and dendrites (with chemical bridges) that transmit messages. A synapse is called excitatory if it raises the local membrane potential of the postsynaptic cell, and inhibitory if the potential is lowered.

Artificial Neural Networks artificial Neurons

142 Typical Artificial Neuron inputs connection weights threshold output

143 Typical Artificial Neuron linear combination net input (local field) activation function

144 Equations
Net input: net = Σᵢ wᵢ xᵢ
Neuron output: o = f(net)
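Although the deck's later code examples use MATLAB, the two equations above can be illustrated with a minimal Python sketch; the function name `neuron_output` and the example weights are assumptions made here for illustration.

```python
def neuron_output(weights, inputs, threshold=0.0):
    """Net input: weighted sum of the inputs; output: step activation."""
    net = sum(w * x for w, x in zip(weights, inputs))  # net = sum_i w_i * x_i
    return 1 if net >= threshold else 0                # o = f(net)

# Example: two inputs, equal weights, threshold 1.5
print(neuron_output([1.0, 1.0], [1, 1], threshold=1.5))  # -> 1
print(neuron_output([1.0, 1.0], [1, 0], threshold=1.5))  # -> 0
```

With these particular weights and threshold the unit happens to compute the AND of its two binary inputs, anticipating the Boolean-function slides below.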

145 Artificial Neuron
Incoming signals to a unit are combined by summing their weighted values.
Output function: output = f(Σᵢ wᵢ xᵢ)
Activation functions include the step function, linear function, sigmoid function, ...
[Figure: inputs x1..xp with weights w0..wp feeding a summation node and an activation f]

146 Activation functions
Step function: step(x) = 1 if x >= threshold, 0 if x < threshold (in the picture above, threshold = 0)
Sign function: sign(x) = +1 if x >= 0, -1 if x < 0
Sigmoid (logistic) function: sigmoid(x) = 1/(1 + e^(-x))
Linear function: pl(x) = x
Adding an extra input with activation a0 = -1 and weight W0,j = t (called the bias weight) is equivalent to having a threshold at t. This way we can always assume a threshold of 0.
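The three activation functions and the bias trick can be sketched in a few lines of Python (the deck's own code is MATLAB; this is an illustrative translation, with the variable names `t` and `net` chosen arbitrarily):

```python
import math

def step(x, threshold=0.0):
    return 1 if x >= threshold else 0

def sign(x):
    return 1 if x >= 0 else -1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Bias trick: an extra input a0 = -1 with weight w0 = t shifts the
# threshold to zero, so a unit with threshold t equals a zero-threshold
# unit whose net input already includes the -t term.
t, net = 0.7, 1.2
assert step(net - t) == step(net, threshold=t)
```

The sigmoid squashes any real input into (0, 1), which is why it is the usual choice when a differentiable activation is needed (e.g. for backpropagation).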

147 Real vs. Artificial Neurons
[Figure: a biological neuron (axon, dendrites, synapse, cell body) side by side with a threshold unit taking inputs x0..xn with weights w0..wn and producing output o]

148 Neurons as a Universal Computing Machine
In 1943, McCulloch and Pitts showed that a synchronous assembly of such neurons is a universal computing machine. That is, any Boolean function can be implemented with threshold (step-function) units.

149 Implementing AND
[Figure: a threshold unit with inputs x1, x2, weights 1 and 1, and threshold W = 1.5]

150 Implementing OR
[Figure: a threshold unit with inputs x1, x2, weights 1 and 1, and threshold W = 0.5]
o(x1,x2) = 1 if -0.5 + x1 + x2 > 0
         = 0 otherwise

151 Implementing NOT
[Figure: a threshold unit with input x1, weight -1, and threshold W = -0.5]

152 Implementing more complex Boolean functions
[Figure: a two-layer network of threshold units: x1 and x2 feed an OR unit computing x1 or x2, whose output, together with x3, feeds an AND unit computing (x1 or x2) and x3]
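The AND, OR, and NOT units of the last three slides, and their composition into (x1 or x2) and x3, can be sketched in Python; the helper name `unit` and the lambda wrappers are assumptions made here, while the weights and thresholds follow the slides.

```python
def unit(weights, bias, inputs):
    # Threshold (step) unit: fires iff bias + sum_i w_i * x_i > 0.
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

AND = lambda x1, x2: unit([1, 1], -1.5, [x1, x2])   # threshold 1.5
OR  = lambda x1, x2: unit([1, 1], -0.5, [x1, x2])   # threshold 0.5
NOT = lambda x1:     unit([-1],    0.5, [x1])       # threshold -0.5

def composed(x1, x2, x3):
    # Two layers of threshold units: (x1 or x2) and x3
    return AND(OR(x1, x2), x3)
```

Stacking units this way is exactly the McCulloch-Pitts observation: any Boolean function can be built from a network of threshold units.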

153 Using Artificial Neural Networks
When using an ANN, we have to define:
The artificial neuron model
The ANN architecture
The learning mode

Artificial Neural Networks ANN Architecture

155 ANN Architecture Feedforward: Links are unidirectional, and there are no cycles, i.e., the network is a directed acyclic graph (DAG). Units are arranged in layers, and each unit is linked only to units in the next layer. There is no internal state other than the weights. Recurrent: Links can form arbitrary topologies, which can implement memory. Behavior can become unstable, oscillatory, or chaotic.
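A single feedforward pass through such a layered DAG can be sketched in Python; the layer shapes, the per-row (bias, weights) layout, and the numeric weights below are assumptions chosen only to make the sketch runnable.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, x):
    """Propagate input x layer by layer through a feedforward net.
    layers: list of weight matrices; row i of a matrix holds
    (bias, w_1, ..., w_p) for unit i of that layer."""
    for W in layers:
        x = [sigmoid(row[0] + sum(w * a for w, a in zip(row[1:], x)))
             for row in W]
    return x

# A 2-2-1 network with hypothetical weights
hidden = [[-0.5, 1.0, 1.0], [1.5, -1.0, -1.0]]
output = [[-1.5, 1.0, 1.0]]
print(forward([hidden, output], [1, 0]))
```

Because links only point to the next layer, the whole computation is a single left-to-right sweep; a recurrent network, by contrast, would have to be iterated over time.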

156 Artificial Neural Network: Feedforward Network
[Figure: layered networks with an input layer, hidden layers, and an output layer; one fully connected, one sparsely connected]

157 Artificial Neural Network: FeedForward Architecture
Information flow is unidirectional
Multi-Layer Perceptron (MLP)
Radial Basis Function (RBF)
Kohonen Self-Organising Map (SOM)

158 Artificial Neural Network Recurrent Architecture Feedback connections Hopfield Neural Networks: Associative memory Adaptive Resonance Theory (ART)

159 Artificial Neural Network: Learning paradigms
Supervised learning: a teacher presents the ANN with input-output pairs, and the ANN's weights are adjusted according to the error
Applications: classification, control, function approximation, associative memory
Unsupervised learning: no teacher
Applications: clustering
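Supervised learning on input-output pairs can be illustrated with the perceptron (delta) rule mentioned in the history slide; the function name `train_perceptron`, the learning rate, and the epoch count are assumptions made for this sketch.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Supervised learning: the 'teacher' supplies (input, target) pairs
    and weights are adjusted by the error: w += lr * (t - o) * x."""
    w = [0.0, 0.0]   # input weights
    b = 0.0          # bias
    for _ in range(epochs):
        for x, t in samples:
            o = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = t - o                                  # teacher's correction
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn AND, a linearly separable function, from labeled examples
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

For linearly separable data such as AND, Rosenblatt's theorem guarantees this rule converges; for XOR it would cycle forever, which is exactly the limitation Minsky and Papert pointed out.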

160 ANN capabilities Learning Approximate reasoning Generalisation capability Noise filtering Parallel processing Distributed knowledge base Fault tolerance

161 Main Problems with ANNs
Contrary to expert systems, with ANNs the knowledge base is not transparent (black box)
Learning is sometimes difficult/slow
Limited storage capability

162 Some applications of ANNs
Pronunciation: the NETtalk program (Sejnowski & Rosenberg 1987) is a neural network that learns to pronounce written text: it maps character strings into phonemes (basic sound elements) for learning speech from text
Speech recognition
Handwritten character recognition: a network designed to read zip codes on hand-addressed envelopes
ALVINN (Pomerleau): a neural network used to control a vehicle's steering direction so as to follow the road by staying in the middle of its lane
Face recognition
Backgammon learning program
Forecasting, e.g., predicting the behavior of the stock market

163 Application of ANNs
The general scheme when using ANNs is as follows: a stimulus is encoded into an input pattern, the network maps it to an output pattern, and the output pattern is decoded into a response.

164 Application: Digit Recognition

165 Matlab Demo Learning XOR function Function approximation Digit Recognition

166 Learning XOR Operation: Matlab Code
P = [0 0 1 1; 0 1 0 1];
T = [0 1 1 0];
net = newff([0 1;0 1],[6 1],{'tansig' 'tansig'});
net.trainParam.epochs = 4850;
net = train(net,P,T);
X = [0; 1];
Y = sim(net,X);
display(Y);

167 Function Approximation: Learning the Sine Function
P = 0:0.1:10;
T = sin(P)*10.0;
net = newff([0 10],[8 1],{'tansig' 'purelin'});
plot(P,T); pause;
Y = sim(net,P);
plot(P,T,P,Y,'o'); pause;
net.trainParam.epochs = 4850;
net = train(net,P,T);
Y = sim(net,P);
plot(P,T,P,Y,'o');

168 Digit Recognition: P = [ ; ; ; ; ; ; ; ; ; ; ; ; ; ; ]; T = [ ; ; ; ; ; ; ; ; ; ];

169 Digit Recognition: net = newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1; 0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1], [20 10],{'tansig' 'tansig'}); net.trainParam.epochs = 4850; net = train(net,P,T);

170 When to use ANNs?
Input is high-dimensional, discrete or real-valued (e.g. raw sensor input)
Inputs can be highly correlated or independent
Output is discrete or real-valued, possibly a vector of values
Data may be noisy or contain errors
Form of the target function is unknown
Long training times are acceptable
Fast evaluation of the target function is required
Human readability of the learned target function is unimportant => an ANN is much like a black box

171 Conclusions

172 Conclusions This topic is very hot and has widespread implications Biology Chemistry Computer science Complexity We’ve seen the basic concepts … But we’ve only scratched the surface!  From now on, Think Biology, Emergence, Complex Systems …
