LECTURE II ANN: BASIC CONCEPTS


LECTURE II ANN: BASIC CONCEPTS Amer Sharif, S.Si, M.Kom

INTRODUCTION REVIEW Neural Network definition: A massively parallel distributed processor of simple processing units (neurons) that: Stores experiential knowledge and makes it available for use Acquires knowledge from the environment through a learning process Stores knowledge as interneuron connection strengths (synaptic weights)

INTRODUCTION REVIEW Benefits: Nonlinearity Input-Output Mapping Adaptivity Evidential Response Contextual Information Fault Tolerance/Graceful Degradation VLSI Implementability Uniformity of Analysis & Design

NEURON MODELLING Basic elements of a neuron: A set of synapses, or connecting links; each synapse is characterized by its weight: signal xj at the input of synapse j connected to neuron k is multiplied by the synaptic weight wkj The bias bk An adder for summing the input signals An activation function for limiting the amplitude of the neuron's output

NEURON MODELLING Block diagram of a nonlinear neuron

NEURON MODELLING Notation: x1, x2, …, xm are the input signals wk1, wk2, …, wkm are the synaptic weights of neuron k uk is the linear combiner output bk is the bias φ(·) is the activation function yk is the output signal of the neuron

NEURON MODELLING If vk = uk + bk and the bias is substituted for a synapse with fixed input x0 = +1 and weight wk0 = bk, then vk = Σ (j = 0 to m) wkj xj and yk = φ(vk)
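The summation-plus-activation model above can be sketched in code. A minimal illustration (Python, with tanh standing in for a generic activation function φ; all function names here are my own):

```python
import math

def neuron_output(x, w, b, phi=math.tanh):
    """y_k = phi(v_k), where v_k = sum_j w_kj * x_j + b_k."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b  # local field v_k
    return phi(v)

def neuron_output_augmented(x, w, b, phi=math.tanh):
    """Same neuron, with the bias folded in as weight w_k0 on a fixed input x_0 = +1."""
    x_aug = [1.0] + list(x)   # x_0 = +1
    w_aug = [b] + list(w)     # w_k0 = b_k
    v = sum(wj * xj for wj, xj in zip(w_aug, x_aug))
    return phi(v)
```

Both formulations produce the same output, which is why the modified block diagram can treat the bias as just another synapse.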

NEURON MODELLING Modified block diagram of a nonlinear neuron

ACTIVATION FUNCTIONS Activation Function types: Threshold Function: φ(v) = 1 if v ≥ 0, and 0 if v < 0; also known as the McCulloch-Pitts model

ACTIVATION FUNCTIONS Piecewise-Linear Function: φ(v) = 1 for v ≥ +1/2, v + 1/2 for −1/2 < v < +1/2, and 0 for v ≤ −1/2

ACTIVATION FUNCTIONS Sigmoid Function: S-shaped. Example: the logistic function φ(v) = 1 / (1 + e^(−av)), where a is the slope parameter: the larger a, the steeper the function. Differentiable everywhere.
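The three activation functions above can be written directly. A small sketch (Python; function names are my own):

```python
import math

def threshold(v):
    """McCulloch-Pitts threshold: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    """Linear in (-1/2, +1/2), saturating at 0 and 1 outside."""
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v + 0.5

def logistic(v, a=1.0):
    """Logistic sigmoid 1 / (1 + exp(-a*v)); a is the slope parameter."""
    return 1.0 / (1.0 + math.exp(-a * v))
```

With a = 5 the logistic value at v = 2 is noticeably closer to 1 than with a = 1, illustrating that a larger slope parameter gives a steeper function.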

NEURAL NETWORKS AS DIRECTED GRAPHS Neural networks may be represented as directed graphs built from four types of links: Synaptic links (linear I/O): yk = wkj xj Activation links (nonlinear I/O): yk = φ(xj) Synaptic convergence (fan-in): yk = yi + yj Synaptic divergence (fan-out): the same node signal xj is supplied to several outgoing links

NEURAL NETWORKS AS DIRECTED GRAPHS Architectural graph: a partially complete directed graph showing source nodes x1, x2, …, xm, plus the fixed input x0 = +1, feeding the output yk

FEEDBACK Output of a system influences some of the input applied to the system One or more closed paths of signal transmission around the system Feedback plays an important role in recurrent networks

FEEDBACK x’j (n) yk(n) xj(n) Sample single-loop feedback system w z-1 Output signal yk(n) is an infinite weighted summation of present and past samples of input signal xj(n) x’j (n) xj(n) yk(n) w z-1 w is fixed weight z-1 is unit-delay operator is sample of input signal delayed by l time units

FEEDBACK Dynamic system behavior is determined by the weight w: w < 1: system is exponentially convergent/stable The system possesses infinite memory: the output depends on input samples extending into the infinite past Memory is fading: the influence of past samples is reduced exponentially with time n

FEEDBACK w = 1: system is linearly divergent w > 1: system is exponentially divergent

NETWORK ARCHITECTURES Single-Layer Feedforward Networks: an input layer of source nodes and an output layer of neurons Neurons are organized in layers "Single-layer" refers to the layer of output neurons Source nodes project onto the output neurons, but not vice versa The network is feedforward, or acyclic

NETWORK ARCHITECTURES Multilayer Feedforward Networks: an input layer of source nodes, one or more layers of hidden neurons, and a layer of output neurons Hidden neurons enable the extraction of higher-order statistics The network acquires a global perspective due to the extra set of synaptic connections and neural interactions Example of a 7-4-2 fully connected network: 7 source nodes, 4 hidden neurons, 2 output neurons
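A forward pass through the 7-4-2 fully connected network mentioned above can be sketched as two successive layer evaluations. An illustrative sketch (Python; the random weights and tanh activations are my own choices, not part of the original slide):

```python
import math
import random

def layer_forward(x, weights, biases, phi=math.tanh):
    """One fully connected layer: y_k = phi(sum_j w_kj * x_j + b_k) for each neuron k."""
    return [phi(sum(w * xj for w, xj in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def rand_layer(n_in, n_out):
    """Random weight matrix (n_out x n_in) and zero biases, for illustration only."""
    w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    b = [0.0] * n_out
    return w, b

random.seed(0)
w1, b1 = rand_layer(7, 4)   # input layer (7 source nodes) -> 4 hidden neurons
w2, b2 = rand_layer(4, 2)   # 4 hidden neurons -> 2 output neurons

x = [0.1] * 7               # 7 source-node signals
hidden = layer_forward(x, w1, b1)
output = layer_forward(hidden, w2, b2)
```

The hidden layer sits between the source nodes and the output neurons; stacking more `layer_forward` calls adds more hidden layers.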

NETWORK ARCHITECTURES Recurrent Networks: contain at least one feedback loop, implemented through unit-delay operators z^(−1) Feedback loops affect the learning capability and performance of the network

KNOWLEDGE REPRESENTATION Definition of Knowledge: Knowledge refers to stored information or models used by a person or a machine to interpret, predict, and appropriately respond to the outside world Issues: What information is actually made explicit How the information is physically encoded for subsequent use Knowledge representation is goal-directed: a good solution depends on a good representation of knowledge

KNOWLEDGE REPRESENTATION Challenges faced by Neural Networks: Learn a model of the world/environment Maintain the model so that it stays consistent with the real world and achieves the desired goals Neural Networks may learn from a set of observations in the form of input-output pairs (training data/training sample): the input is an input signal and the output is the corresponding desired response

KNOWLEDGE REPRESENTATION Handwritten digit recognition problem: Input signal: an image of one of the 10 digits Goal: identify the image presented to the network as input Design steps: Select an appropriate architecture Train the network with a subset of examples (learning phase) Test the network by presenting data/digit images not seen before, then compare the response of the network with the actual identity of the digit image presented (generalization phase)

KNOWLEDGE REPRESENTATION Difference from a classical pattern classifier: Classical pattern-classifier design steps: Formulate a mathematical model of the problem Validate the model with real data Build the system based on the model Neural Network design is: Based on real-life data The data may "speak for itself" The neural network not only provides a model of the environment but also processes the information

ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS AI systems must be able to: Store knowledge Use stored knowledge to solve problems Acquire new knowledge through experience AI components: Representation: knowledge is represented in a language of symbolic structures; symbolic representation makes it relatively easy for human users to follow

ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS Reasoning: Able to express and solve a broad range of problems Able to make explicit the implicit information known to it Has a control mechanism to determine which operations to apply to a particular problem, when a solution is obtained, and when further work on the problem must be terminated Rules, Data, and Control: Rules operate on Data; Control operates on Rules The Travelling Salesman Problem: Data: possible tours and their costs Rules: ways to go from city to city Control: which Rules to apply and when

ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS Learning: Inductive learning: determine rules from raw data and experience Deductive learning: use rules to determine specific facts Learning model: Environment → Learning element → Knowledge Base → Performance element

ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

Parameter                  | Artificial Intelligence                        | Neural Networks
Level of Explanation       | Symbolic representation, sequential processing | Parallel distributed processing (PDP)
Processing Style           | Sequential                                     | Parallel
Representational Structure | Quasi-linguistic structure                     | Poor
Summary                    | Formal manipulation of algorithms and data representations in a top-down fashion | Parallel distributed processing with a natural ability to learn in a bottom-up fashion