Complex Systems Engineering SwE 488 Artificial Complex Systems Prof. Dr. Mohamed Batouche Department of Software Engineering CCIS – King Saud University Riyadh, Kingdom of Saudi Arabia

Artificial Neural Networks

Artificial Neural Networks (ANN)
What is an Artificial Neural Network? Artificial Neural Networks are crude attempts to model the massively parallel and distributed processing we believe takes place in the brain.

Developing Intelligent Program Systems: Neural Nets
Two main areas of activity:
Biological: try to model biological neural systems.
Computational: develop powerful applications.

Biological Motivation: the Brain
Networks of processing units (neurons) with connections (synapses) between them.
Large number of neurons; large connectivity: each neuron connected to, on average, 10^4 others.
Parallel processing.
Distributed computation/memory: processing is done by the neurons and the memory is in the synapses.
Robust to noise and failures.
→ ANNs attempt to capture this mode of computation.

The Brain as a Complex System
The brain uses the outside world to shape itself (self-organization).
It goes through critical periods in which brain cells must have certain kinds of stimulation to develop such powers as vision, language, smell, muscle control, and reasoning (learning, evolution, emergent properties).

Main Features of the Brain
Robust: fault tolerant, degrades gracefully.
Flexible: can learn without being explicitly programmed.
Can deal with fuzzy, probabilistic information.
Is highly parallel.

Characteristics of Biological Computation
Massive parallelism.
Locality of computation → scalability.
Adaptive (self-organizing).
Representation is distributed.

Artificial Neural Networks History of ANNs

History of Artificial Neural Networks
1943: McCulloch and Pitts proposed a model of the neuron, the precursor of the Perceptron.
1960s: Widrow and Hoff explored Perceptron networks (which they called "Adalines") and the delta rule.
1962: Rosenblatt proved the convergence of the perceptron training rule.
1969: Minsky and Papert showed that the Perceptron cannot deal with nonlinearly separable data sets, even ones that represent simple functions such as XOR. This was followed by a period of very little research on neural nets.
1986: Invention of backpropagation [Rumelhart and McClelland, but also Parker and, earlier, Werbos], which can learn from nonlinearly separable data sets.
Since 1985: a lot of research in neural nets → complex systems.

Developing Intelligent Program Systems: Neural Nets Applications
Neural nets can be used to answer questions such as:
Pattern recognition: does that image contain a face?
Classification problems: is this cell defective?
Prediction: given these symptoms, the patient has disease X.
Forecasting: predicting the behavior of the stock market.
Handwriting: is the character recognized?
Optimization: find the shortest path for the TSP.

Artificial Neural Networks Biological Neuron

Typical Biological Neuron (figure)

The Neuron
The neuron receives nerve impulses through its dendrites. It then sends nerve impulses through its axon to the terminal buttons, where neurotransmitters are released to stimulate other neurons.

The neuron
The main components are: the cell body (or soma), which contains the nucleus; the dendrites; the axon; and the synapses.

The neuron - dendrites
The dendrites are short fibers (surrounding the cell body) that receive messages.
The dendrites are very receptive to connections from other neurons.
The dendrites carry signals from the synapses to the soma.

The neuron - axon
The axon is a long extension from the soma that transmits messages.
Each neuron has only one axon.
The axon carries action potentials from the soma to the synapses.

The neuron - synapses
The synapses are the connections made by an axon to another neuron. They are tiny gaps between axons and dendrites (with chemical bridges) that transmit messages.
A synapse is called excitatory if it raises the local membrane potential of the postsynaptic cell, and inhibitory if the potential is lowered.

Artificial Neural Networks Artificial Neurons

Typical Artificial Neuron (figure): inputs, connection weights, threshold, output.

Typical Artificial Neuron (figure): linear combination, net input (local field), activation function.

Equations
Net input: net = Σ_{i=0}^{p} w_i x_i (the weighted sum of the inputs, with x_0 the bias input)
Neuron output: o = f(net), where f is the activation function

Artificial Neuron
Incoming signals to a unit are combined by summing their weighted values.
Output function: output = f(Σ_i w_i x_i).
Activation functions include the step function, linear function, sigmoid function, ...
(figure: inputs x_1 .. x_p with weights w_1 .. w_p and bias weight w_0 feeding the activation function f)

Activation functions
Step function: step(x) = 1 if x >= threshold, 0 if x < threshold (in the picture above, threshold = 0).
Sign function: sign(x) = +1 if x >= 0, -1 if x < 0.
Sigmoid (logistic) function: sigmoid(x) = 1 / (1 + e^-x).
Linear function: pl(x) = x.
Adding an extra input with activation a_0 = -1 and weight W_{0,j} = t (called the bias weight) is equivalent to having a threshold at t. This way we can always assume a threshold of 0.
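To make the neuron model and activation functions above concrete, here is a minimal sketch in plain MATLAB (no toolbox required; the input and weight values are made-up examples, not from the slides):

% One artificial neuron: net input = bias weight + weighted sum of the inputs
x  = [1 0 1];            % example inputs x1..xp
w  = [0.4 -0.2 0.7];     % connection weights w1..wp
w0 = -0.5;               % bias weight (plays the role of the threshold)
net = w0 + sum(w .* x);  % net input (local field)
step_out    = double(net >= 0);      % step (threshold) activation
sigmoid_out = 1 / (1 + exp(-net));   % sigmoid (logistic) activation
linear_out  = net;                   % linear activation
fprintf('net = %.2f, step = %g, sigmoid = %.3f\n', net, step_out, sigmoid_out);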

Real vs. Artificial Neurons (figure): biological neuron with dendrites, axon, synapses, and cell body vs. a threshold unit with inputs x_0 .. x_n, weights w_0 .. w_n, and output o.

Neurons as a Universal Computing Machine
In 1943, McCulloch and Pitts showed that a synchronous assembly of such neurons is a universal computing machine. That is, any Boolean function can be implemented with threshold (step-function) units.

Implementing AND (figure): inputs x_1 and x_2, each with weight 1; threshold weight W = 1.5. o(x_1, x_2) = 1 if -1.5 + x_1 + x_2 > 0, and 0 otherwise.

Implementing OR (figure): inputs x_1 and x_2, each with weight 1; threshold weight W = 0.5. o(x_1, x_2) = 1 if -0.5 + x_1 + x_2 > 0, and 0 otherwise.

Implementing NOT (figure): input x_1 with weight -1; threshold weight W = -0.5. o(x_1) = 1 if 0.5 - x_1 > 0, and 0 otherwise, so the unit inverts its input.

Implementing more complex Boolean functions (figure): a first threshold unit computes x_1 OR x_2; its output, together with x_3, feeds a second unit that computes (x_1 OR x_2) AND x_3.
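The AND, OR, and NOT units above, and their composition into (x_1 OR x_2) AND x_3, can be checked directly in plain MATLAB. A small sketch (the helper name unit and its argument order are our own choices, not from the slides):

% A threshold (step-function) unit: fires iff bias + weighted sum of inputs > 0
unit = @(bias, w, x) double(bias + sum(w .* x) > 0);

AND_gate = @(x1, x2) unit(-1.5, [1 1], [x1 x2]);  % both inputs must be 1
OR_gate  = @(x1, x2) unit(-0.5, [1 1], [x1 x2]);  % at least one input must be 1
NOT_gate = @(x1)     unit( 0.5, -1,   x1);        % inverts its single input

% (x1 OR x2) AND x3, built by feeding one unit's output into another
COMPOSITE = @(x1, x2, x3) AND_gate(OR_gate(x1, x2), x3);

% Check the truth tables
for x1 = 0:1
    for x2 = 0:1
        fprintf('AND(%d,%d)=%g  OR(%d,%d)=%g  NOT(%d)=%g\n', ...
                x1, x2, AND_gate(x1,x2), x1, x2, OR_gate(x1,x2), x1, NOT_gate(x1));
    end
end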

Using Artificial Neural Networks
When using an ANN, we have to define: the artificial neuron model, the ANN architecture, and the learning mode.

Artificial Neural Networks ANN Architecture

ANN Architecture
Feedforward: links are unidirectional and there are no cycles, i.e. the network is a directed acyclic graph (DAG). Units are arranged in layers, and each unit is linked only to units in the next layer. There is no internal state other than the weights.
Recurrent: links can form arbitrary topologies, which can implement memory. Behavior can become unstable, oscillatory, or chaotic.
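To make the feedforward idea concrete, here is a minimal forward pass through a small fully connected network in plain MATLAB (a sketch only: the layer sizes and the random weights W1, b1, W2, b2 are illustrative placeholders, not values from the slides):

% Forward pass: 3 inputs -> 4 hidden sigmoid units -> 1 linear output, no cycles
x  = [0.2; 0.7; 1.0];                 % input layer (column vector)
W1 = randn(4, 3);  b1 = randn(4, 1);  % input-to-hidden weights and biases
W2 = randn(1, 4);  b2 = randn(1, 1);  % hidden-to-output weights and bias
sigmoid = @(z) 1 ./ (1 + exp(-z));
h = sigmoid(W1 * x + b1);  % hidden layer: each unit sees only the previous layer
y = W2 * h + b2;           % output layer; information flows strictly forward
disp(y);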

Artificial Neural Network: Feedforward Network (figure): input layer, hidden layers, output layer; connections may be fully connected or sparsely connected.

Artificial Neural Network: Feedforward Architecture
Information flow is unidirectional.
Examples: Multi-Layer Perceptron (MLP), Radial Basis Function (RBF) network, Kohonen Self-Organising Map (SOM).

Artificial Neural Network: Recurrent Architecture
Feedback connections.
Examples: Hopfield Neural Networks (associative memory), Adaptive Resonance Theory (ART).

Artificial Neural Network: Learning Paradigms
Supervised learning: a teacher presents the ANN with input-output pairs, and the ANN weights are adjusted according to the error. Used for classification, control, function approximation, and associative memory.
Unsupervised learning: no teacher. Used for clustering.
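To show what "weights adjusted according to the error" means in the supervised case, here is a perceptron-style training loop in plain MATLAB (a sketch under simple assumptions: the task is the linearly separable OR function, and the learning rate and epoch count are arbitrary choices):

% Perceptron learning rule: w <- w + eta * (target - output) * input
P   = [0 0 1 1; 0 1 0 1];   % four input patterns, one per column
T   = [0 1 1 1];            % targets: OR of the two inputs
w   = zeros(1, 2);          % weights
b   = 0;                    % bias
eta = 0.1;                  % learning rate
for epoch = 1:20
    for k = 1:size(P, 2)
        o   = double(w * P(:, k) + b > 0);  % step-unit output for pattern k
        err = T(k) - o;                     % teacher signal minus network output
        w   = w + eta * err * P(:, k)';     % adjust weights according to the error
        b   = b + eta * err;
    end
end
disp(double(w * P + b > 0));   % reproduces T once training has converged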

ANN capabilities: learning, approximate reasoning, generalisation capability, noise filtering, parallel processing, distributed knowledge base, fault tolerance.

Main Problems with ANNs
Contrary to expert systems, with an ANN the knowledge base is not transparent (black box).
Learning is sometimes difficult/slow.
Limited storage capability.

Some applications of ANNs
Pronunciation: the NETtalk program (Sejnowski & Rosenberg 1987) is a neural network that learns to pronounce written text: it maps character strings into phonemes (basic sound elements), learning speech from text.
Speech recognition.
Handwritten character recognition: a network designed to read zip codes on hand-addressed envelopes.
ALVINN (Pomerleau) is a neural network used to control a vehicle's steering direction so as to follow the road by staying in the middle of its lane.
Face recognition.
Backgammon learning program.
Forecasting, e.g. predicting the behavior of the stock market.

Application of ANNs
The general scheme when using ANNs is as follows (figure): a stimulus is encoded into an input pattern, fed to the network, which produces an output pattern that is decoded into a response.

Application: Digit Recognition

Matlab Demo: learning the XOR function, function approximation, digit recognition.

Learning XOR Operation: Matlab Code
P = [0 0 1 1; 0 1 0 1];   % the four XOR input patterns, one per column
T = [0 1 1 0];            % XOR targets
net = newff([0 1;0 1],[6 1],{'tansig' 'tansig'});   % 2 inputs, 6 hidden units, 1 output
net.trainParam.epochs = 4850;
net = train(net,P,T);
X = [0; 1];               % test pattern: x1 = 0, x2 = 1
Y = sim(net,X);
display(Y);

Function Approximation: Learning the Sine Function
P = 0:0.1:10;             % input samples
T = sin(P)*10.0;          % target: scaled sine
net = newff([0 10],[8 1],{'tansig' 'purelin'});   % input range 0..10, 8 hidden units, 1 linear output
plot(P,T); pause;
Y = sim(net,P);           % network output before training
plot(P,T,P,Y,'o'); pause;
net.trainParam.epochs = 4850;
net = train(net,P,T);
Y = sim(net,P);           % network output after training
plot(P,T,P,Y,'o');

Digit Recognition: training data
% P: one row per input pixel (15 rows), one column per digit; T: one row per digit class (10 rows)
P = [ ; ; ; ; ; ; ; ; ; ; ; ; ; ; ];
T = [ ; ; ; ; ; ; ; ; ; ];

Digit Recognition:
net = newff([0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1],[20 10],{'tansig' 'tansig'});   % 15 binary inputs, 20 hidden units, 10 outputs
net.trainParam.epochs = 4850;
net = train(net,P,T);

When to use ANNs?
Input is high-dimensional, discrete or real-valued (e.g. raw sensor input); inputs can be highly correlated or independent.
Output is discrete or real-valued, possibly a vector of values.
Data may be noisy or contain errors.
The form of the target function is unknown.
Long training times are acceptable, but fast evaluation of the target function is required.
Human readability of the learned target function is unimportant ⇒ an ANN is much like a black box.

Conclusions

Conclusions
This topic is very hot and has widespread implications in biology, chemistry, computer science, and complexity.
We've seen the basic concepts... but we've only scratched the surface!
From now on, think Biology, Emergence, Complex Systems...
