1
Introduction to Artificial Neural Networks
Presenter: 虞台文
2
Content
- Fundamental Concepts of ANNs
- Basic Models and Learning Rules: Neuron Models, ANN Structures, Learning
- Distributed Representations
- Conclusions
3
Introduction to Artificial Neural Networks
Fundamental Concepts of ANNs
4
What is an ANN? Why ANNs? ANN: Artificial Neural Network.
- A new generation of information-processing system.
- Designed to simulate the behavior of the human brain.
5
Applications
- Pattern Matching
- Pattern Recognition
- Associative Memory (Content-Addressable Memory)
- Function Approximation
- Learning
- Optimization
- Vector Quantization
- Data Clustering
- . . .
6
Traditional computers are inefficient at the tasks listed above, even though their raw computation speed is far higher.
7
The Configuration of ANNs
An ANN consists of a large number of interconnected processing elements (PEs) called neurons. A human brain contains ~10^11 neurons of many different types. How does an ANN work? Through the collective behavior of its neurons.
8
The Biological Neuron
9
The Biological Neuron [Figure: dendrites, the axon, and the synaptic junction where the fibers of two neurons join]
10
The Biological Neuron: a synapse is either excitatory or inhibitory.
11
The Artificial Neuron [Figure: inputs x1, x2, …, xm weighted by wi1, wi2, …, wim feed an integration function f(·), whose result passes through an activation function a(·) with threshold θi to give the output yi]
12
The Artificial Neuron [Figure: the same neuron as on the previous slide]
13
The Artificial Neuron: the sign of a weight wij gives the connection type:
- positive: excitatory;
- negative: inhibitory;
- zero: no connection.
[Figure: the same neuron as above]
14
The Artificial Neuron: proposed by McCulloch and Pitts [1943], hence the name M-P neurons. [Figure: the same neuron as above]
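In symbols, the figure amounts to the standard M-P formulation (a reconstruction; the slide's own equation did not survive extraction):

```latex
f_i = \sum_{j=1}^{m} w_{ij} x_j - \theta_i,
\qquad
y_i = a(f_i) =
\begin{cases}
1, & f_i \ge 0\\
0, & f_i < 0
\end{cases}
```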
15
What can be done by M-P neurons?
- A hard limiter.
- A binary threshold unit.
- Hyperspace separation.
[Figure: a two-input neuron computing y = a(w1 x1 + w2 x2 − θ); the line w1 x1 + w2 x2 = θ splits the (x1, x2) plane into two half-spaces]
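For instance, a single M-P neuron realizes a logic gate by separating its input space with one line; a minimal sketch (the AND-gate weights and threshold are illustrative choices, not from the slides):

```python
def mp_neuron(x, w, theta):
    """M-P neuron: fire (output 1) iff the weighted input sum reaches theta."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= theta else 0

# AND gate: the line x1 + x2 = 1.5 separates (1, 1) from the other inputs.
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mp_neuron(x, w=(1, 1), theta=1.5))
```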
16
What will ANNs be? An ANN is a neurally inspired mathematical model that:
- consists of a large number of highly interconnected PEs;
- holds knowledge in its connections (weights);
- lets each PE's response depend only on local information;
- draws its computational power from collective behavior;
- has learning, recall, and generalization capability.
17
Three Basic Entities of ANN Models
- Models of neurons or PEs.
- Models of synaptic interconnections and structures.
- Training or learning rules.
18
Introduction to Artificial Neural Networks
Basic Models and Learning Rules Neuron Models ANN structures Learning
19
Processing Elements: extensions of M-P neurons. [Figure: a PE with integration function f(·) and activation function a(·)]
What integration functions may we have? What activation functions may we have?
20
Integration Functions
[Figure: a PE with integration function f(·), activation function a(·), and threshold θi]
- M-P neuron: linear integration.
- Quadratic function.
- Spherical function.
- Polynomial function.
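The slide's equations did not survive extraction; commonly used forms of these integration functions (a hedged reconstruction, with ρ a radius parameter not named on the slide) are:

```latex
f_i = \sum_{j} w_{ij} x_j - \theta_i                    % linear (M-P neuron)
f_i = \sum_{j} w_{ij} x_j^2 - \theta_i                  % quadratic
f_i = \rho^{-2} \sum_{j} (x_j - w_{ij})^2 - \theta_i    % spherical
f_i = \sum_{j}\sum_{k} w_{ijk}\, x_j x_k - \theta_i     % polynomial (second order)
```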
21
Activation Functions
M-P neuron: step function. [Plot: a(f) jumps from 0 to 1 at f = 0]
22
Activation Functions
Hard limiter (threshold function). [Plot: a(f) = sgn(f), jumping from −1 to +1 at f = 0]
23
Activation Functions
Ramp function. [Plot: a(f) = 0 for f < 0, a(f) = f for 0 ≤ f ≤ 1, a(f) = 1 for f > 1]
24
Activation Functions
Unipolar sigmoid function. [Plot: a(f) = 1/(1 + e^(−λf)), rising smoothly from 0 to 1]
25
Activation Functions
Bipolar sigmoid function. [Plot: a(f) = 2/(1 + e^(−λf)) − 1, rising smoothly from −1 to +1]
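A sketch in code of the five activation functions just listed (the formulas are the standard ones assumed in the plot placeholders above; lam is the steepness parameter λ that a later slide varies):

```python
import math

def step(f):            # M-P neuron
    return 1.0 if f >= 0 else 0.0

def hard_limiter(f):    # bipolar threshold
    return 1.0 if f >= 0 else -1.0

def ramp(f):            # linear between 0 and 1, clipped outside
    return min(max(f, 0.0), 1.0)

def unipolar_sigmoid(f, lam=1.0):   # smooth, 0..1
    return 1.0 / (1.0 + math.exp(-lam * f))

def bipolar_sigmoid(f, lam=1.0):    # smooth, -1..1
    return 2.0 / (1.0 + math.exp(-lam * f)) - 1.0
```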
26
Example: Activation Surfaces
[Figure: three lines L1, L2, L3 partitioning the (x, y) plane into regions]
27
Example: Activation Surfaces
[Figure: the three lines made explicit: L1: x − 1 = 0 (θ1 = 1), L2: y − 1 = 0 (θ2 = 1), L3: −x − y + 4 = 0 (θ3 = −4); each line is realized by one M-P neuron]
28
Example: Activation Surfaces
[Figure: the seven regions formed by L1, L2, L3, each labeled with a 3-bit region code (the outputs of the three neurons): 001, 010, 011, 100, 101, 110, 111]
29
Example: Activation Surfaces
[Figure: the network output z over the (x, y) plane: z = 1 inside the triangle bounded by L1, L2, L3, and z = 0 outside]
30
Example: Activation Surfaces
[Figure: a fourth neuron with threshold θ4 = 2.5 combines the three region bits, so z = 1 exactly where all three neurons fire (region 111) and z = 0 elsewhere]
31
Example: Activation Surfaces
M-P neuron (step function): [Figure: the activation surface of the output neuron L4 is a flat plateau, with z jumping abruptly from 0 to 1 over the triangle]
32
Example: Activation Surfaces
Unipolar sigmoid function: [Figure: with sigmoid activations the surface becomes smooth; panels for λ = 2, 3, 5, and 10 show it steepening toward the step-function plateau as λ grows]
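A sketch of this example end to end, assuming the line equations and threshold reconstructed above (x − 1 = 0, y − 1 = 0, −x − y + 4 = 0, θ4 = 2.5):

```python
def step(f):
    return 1.0 if f >= 0 else 0.0

def triangle_net(x, y, act=step):
    # Layer 1: one neuron per line, firing on the triangle's side of it.
    z1 = act(x - 1.0)            # L1: x - 1 = 0
    z2 = act(y - 1.0)            # L2: y - 1 = 0
    z3 = act(-x - y + 4.0)       # L3: -x - y + 4 = 0
    # Layer 2 (L4): fires only when all three bits are on (sum >= 2.5).
    return act(z1 + z2 + z3 - 2.5)

print(triangle_net(2.0, 1.5))    # inside the triangle  -> 1.0
print(triangle_net(0.0, 0.0))    # outside the triangle -> 0.0
```

Swapping act for the unipolar sigmoid reproduces the smooth surfaces shown in the λ panels.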
33
Introduction to Artificial Neural Networks
Basic Models and Learning Rules Neuron Models ANN structures Learning
34
ANN Structure (Connections)
35
Single-Layer Feedforward Networks
[Figure: m inputs x1, x2, …, xm fully connected through weights w11, …, wnm to n output neurons y1, …, yn]
36
Multilayer Feedforward Networks
[Figure: input layer x1, x2, …, xm, one or more hidden layers, and output layer y1, y2, …, yn]
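A minimal forward pass for such a network (a pure-Python sketch; biases are omitted and the 2-3-1 sizes and weights are illustrative):

```python
import math

def forward(x, layers, act):
    """Propagate x through a list of layers; each layer is a list of
    weight rows, one row per neuron (biases omitted for brevity)."""
    for W in layers:
        x = [act(sum(w * v for w, v in zip(row, x))) for row in W]
    return x

# Illustrative 2-3-1 network with a unipolar sigmoid activation.
sigmoid = lambda f: 1.0 / (1.0 + math.exp(-f))
hidden = [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]  # 3 neurons, 2 inputs each
output = [[1.0, -1.0, 0.5]]                      # 1 neuron, 3 hidden inputs
print(forward([1.0, 0.0], [hidden, output], sigmoid))
```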
37
Multilayer Feedforward Networks
[Figure: a pattern-recognition pipeline, Input → Analysis → Classification → Output, built from multilayer feedforward networks] Where does the knowledge come from? From learning.
38
Single Node with Feedback to Itself
[Figure: the node's output feeds back to its own input, forming a loop]
39
Single-Layer Recurrent Networks
[Figure: a single layer mapping x1, x2, …, xm to y1, y2, …, yn, with the outputs fed back as inputs]
40
Multilayer Recurrent Networks
[Figure: a multilayer network on inputs x1, x2, x3 and outputs y1, y2, y3, with feedback connections between layers]
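In equations, a recurrent network feeds outputs back as inputs, so the next output depends on both the current input and the previous output; one generic hedged form (the slides do not spell out the recurrence, and V is a feedback weight matrix introduced here for illustration):

```latex
\mathbf{y}(t+1) = a\bigl(W\,\mathbf{x}(t) + V\,\mathbf{y}(t)\bigr)
```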
41
Introduction to Artificial Neural Networks
Basic Models and Learning Rules Neuron Models ANN structures Learning
42
Learning. Consider an ANN with n neurons, each with m adaptive weights, collected in a weight matrix W.
43
How do we learn? By adapting ("learning") the weight matrix W; its layout is written out below.
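The matrix itself did not survive extraction; given the per-neuron weight vectors wi = (wi1, wi2, …, wim)^T used later, its standard layout would be:

```latex
W = \begin{pmatrix} \mathbf{w}_1^{T}\\ \mathbf{w}_2^{T}\\ \vdots\\ \mathbf{w}_n^{T} \end{pmatrix}
  = \begin{pmatrix}
      w_{11} & w_{12} & \cdots & w_{1m}\\
      w_{21} & w_{22} & \cdots & w_{2m}\\
      \vdots & \vdots & \ddots & \vdots\\
      w_{n1} & w_{n2} & \cdots & w_{nm}
    \end{pmatrix}
```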
44
Learning Rules
- Supervised learning
- Reinforcement learning
- Unsupervised learning
45
Supervised Learning
- Learning with a teacher; learning by examples.
- Requires a training set of input patterns paired with their desired outputs.
46
Supervised Learning [Diagram: input x feeds the ANN (weights W) producing output y; an error-signal generator compares y with the desired output d and feeds the error back to adjust W]
47
Reinforcement Learning
- Learning with a critic; learning by comments (evaluative feedback) rather than exact targets.
48
Reinforcement Learning
[Diagram: input x feeds the ANN (weights W) producing output y; a critic-signal generator produces a reinforcement signal from y, which drives the weight updates]
49
Unsupervised Learning
Self-organizing: clustering. Forms proper clusters by discovering the similarities and dissimilarities among objects.
50
Unsupervised Learning
[Diagram: input x feeds the ANN (weights W) producing y; there is no teacher or critic signal, so the network adapts from the inputs alone]
51
The General Weight Learning Rule
[Figure: neuron i with inputs x1, x2, …, x(m−1), weights wi1, wi2, …, wi,m−1, threshold θi, and output yi. Input: x = (x1, …, x(m−1))^T; output: yi]
52
The General Weight Learning Rule
We want to learn the weights and the bias (threshold θi). [Figure: the same neuron as above]
53
The General Weight Learning Rule
We want to learn the weights and the bias. Let xm = −1 and wim = θi, so the bias becomes just another weight. [Figure: the neuron with an extra constant input]
54
The General Weight Learning Rule
[Figure: the neuron redrawn with the extra input xm = −1 and weight wim = θi shown explicitly]
55
The General Weight Learning Rule
We want to learn wi = (wi1, wi2, …, wim)^T. [Figure: as above; the question is how to choose the weight update Δwi(t)]
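With the threshold absorbed into the weight vector (assuming the xm = −1 reading of the garbled slide), the net input becomes a single inner product:

```latex
f_i = \mathbf{w}_i^{T}\mathbf{x} = \sum_{j=1}^{m-1} w_{ij} x_j - \theta_i,
\qquad x_m = -1,\; w_{im} = \theta_i
```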
56
The General Weight Learning Rule
[Diagram: input x and weight vector wi produce output yi; a learning-signal generator computes the learning signal r from wi, x, and the teaching signal di]
59
The General Weight Learning Rule
[Diagram: as above, now labeling the learning rate η that scales the weight update]
60
The General Weight Learning Rule
We want to learn wi = (wi1, wi2, …, wim)^T via a discrete-time or a continuous-time weight modification rule, as written out below.
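The two rules lost in extraction have a standard form in this framework (a reconstruction, with η the learning rate and r the learning signal from the diagram):

```latex
% Discrete-time weight modification rule:
\mathbf{w}_i(t+1) = \mathbf{w}_i(t)
  + \eta\, r\bigl(\mathbf{w}_i(t), \mathbf{x}(t), d_i(t)\bigr)\,\mathbf{x}(t)

% Continuous-time weight modification rule:
\frac{d\mathbf{w}_i(t)}{dt}
  = \eta\, r\bigl(\mathbf{w}_i(t), \mathbf{x}(t), d_i(t)\bigr)\,\mathbf{x}(t)
```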
61
Hebb's Learning Law. Hebb [1949] hypothesized that when an axonal input from neuron A to neuron B repeatedly or persistently causes B to immediately emit a pulse (fire), the efficacy of that axonal input, in terms of its ability to help B fire in the future, is somehow increased. Hebb's learning rule is an unsupervised learning rule.
62
Hebb's Learning Law [Figure: two neurons whose coincident activity strengthens the connection between them]
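In the general rule above, Hebb's law takes the learning signal to be the neuron's own output, r = yi, so Δwij = η yi xj; a minimal sketch (illustrative weights and input):

```python
def hebb_update(w, x, eta, act):
    """One Hebbian step: learning signal r = y = act(w . x), so each
    weight moves by eta * y * x_j (strengthened by coincident activity)."""
    y = act(sum(wj * xj for wj, xj in zip(w, x)))
    return [wj + eta * y * xj for wj, xj in zip(w, x)]

sign = lambda f: 1.0 if f >= 0 else -1.0  # bipolar hard limiter
w = [1.0, -1.0]
w = hebb_update(w, x=[0.5, 1.0], eta=0.1, act=sign)
print(w)  # [0.95, -1.1]: weights shift with the neuron's response
```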
63
Introduction to Artificial Neural Networks
Distributed Representations
64
Distributed Representations
- An entity is represented by a pattern of activity distributed over many PEs.
- Each processing element is involved in representing many different entities.
- By contrast, in a local representation each entity is represented by one PE.
65
Example [Figure: +/− activity patterns over processing elements P0, P1, P2, … encoding Dog, Cat, and Bread]
66
Advantages
- Acts as a content-addressable memory: presented with a partial +/− pattern ("What is this?"), the network completes it to the closest stored entity. [Figure: a partial pattern over P0–P15 matched against the stored Dog, Cat, and Bread patterns]
67
Advantages
- Acts as a content-addressable memory.
- Makes induction easy: if the stored Dog pattern encodes "a dog has 4 legs" and Fido's pattern overlaps Dog's, the network infers four legs for Fido. [Figure: Fido's pattern over P0–P15 sharing most of its PEs with Dog's]
68
Advantages
- Acts as a content-addressable memory.
- Makes induction easy.
- Makes the creation of new entities or concepts easy (without allocating new hardware): Doughnut is added just by changing weights. [Figure: a Doughnut pattern added alongside Dog, Cat, and Bread over P0–P15]
69
Advantages
- Acts as a content-addressable memory.
- Makes induction easy.
- Makes the creation of new entities or concepts easy (without allocating new hardware).
- Fault tolerance: the breakdown of a few PEs does not cause problems. [Figure: the stored patterns over P0–P15 remain recognizable with some PEs broken]
70
Disadvantages
- How to understand the representation? How to modify it?
- Learning procedures are required.