
1 Backpropagation: An efficient way to compute the gradient. Hung-yi Lee

2 Review: Notation. A layer $l$ has $N_l$ nodes. $a_i^l$: output of a neuron; $a^l$: output of a layer (a vector); $z_i^l$: input of the activation function; $z^l$: input of the activation function for a layer (a vector).

3 Review: Notation. $w_{ij}^l$: a weight (from neuron $j$ in layer $l-1$ to neuron $i$ in layer $l$); $b_i^l$: a bias; $b^l$: the biases for all neurons in a layer (a vector); $W^l$: the weights between layer $l-1$ and layer $l$ (an $N_l \times N_{l-1}$ matrix).

4 Review: Relations between Layer Outputs. $z^l = W^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$, where $\sigma$ is the activation function applied elementwise; for the first layer, $a^0$ is the input $x$.
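
A minimal NumPy sketch of these layer relations. The sigmoid activation, the layer sizes, and the `forward` helper are illustrative assumptions; the slides leave the activation function generic:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Forward pass: z^l = W^l a^{l-1} + b^l, a^l = sigmoid(z^l).

    Returns the pre-activations z^l and activations a^l of every
    layer; the backward pass will reuse both.
    """
    a, zs, activations = x, [], [x]
    for W, b in zip(weights, biases):
        z = W @ a + b          # z^l = W^l a^{l-1} + b^l
        a = sigmoid(z)         # a^l = sigma(z^l)
        zs.append(z)
        activations.append(a)
    return zs, activations

# Illustrative 2-3-1 network with random parameters.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
biases = [rng.standard_normal(3), rng.standard_normal(1)]
zs, activations = forward(np.array([0.5, -0.2]), weights, biases)
```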

5 Review: A Neural Network is a Function. The network maps an input vector $x$ to an output vector $y$; the function is determined by the parameters $\theta$ (all the weights and biases), which are to be learned from training examples.

6 Review: Gradient Descent. Given training examples, find a set of parameters $\theta^*$ minimizing the error function $C(\theta)$. To run gradient descent we have to compute $\partial C / \partial w_{ij}^l$ and $\partial C / \partial b_i^l$ for every weight and bias.
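
Computing each partial derivative separately, e.g. by finite differences, takes extra cost evaluations per parameter; backpropagation gets them all in one backward sweep, which is the point of the lecture. A sketch of the naive approach, assuming a one-layer sigmoid network with squared error (the `cost` and `numerical_gradient` helpers are illustrative):

```python
import numpy as np

def cost(W, x, y_hat):
    """Squared error of a one-layer sigmoid network (illustrative)."""
    a = 1.0 / (1.0 + np.exp(-(W @ x)))
    return 0.5 * np.sum((a - y_hat) ** 2)

def numerical_gradient(W, x, y_hat, eps=1e-6):
    """dC/dw_ij by central differences: two cost evaluations per
    parameter, which is exactly what backpropagation avoids."""
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        W_plus, W_minus = W.copy(), W.copy()
        W_plus[idx] += eps
        W_minus[idx] -= eps
        grad[idx] = (cost(W_plus, x, y_hat) - cost(W_minus, x, y_hat)) / (2 * eps)
    return grad

g = numerical_gradient(np.array([[0.1, -0.3], [0.2, 0.5]]),
                       np.array([1.0, 2.0]), np.array([0.0, 1.0]))
```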

7 Neat Representation. $\partial C / \partial w_{ij}^l$ is the multiplication of two terms: by the chain rule, $\partial C / \partial w_{ij}^l = (\partial z_i^l / \partial w_{ij}^l)(\partial C / \partial z_i^l)$, since $w_{ij}^l$ influences $C$ only through $z_i^l$.

8 Neat Representation – First Term. The first term is $\partial z_i^l / \partial w_{ij}^l$. Because $z_i^l = \sum_j w_{ij}^l a_j^{l-1} + b_i^l$, this derivative depends only on the output of the previous layer.

9 Neat Representation – First Term. If $l > 1$: $\partial z_i^l / \partial w_{ij}^l = a_j^{l-1}$, the output of neuron $j$ in layer $l-1$.

10 Neat Representation – First Term. If $l = 1$: $\partial z_i^1 / \partial w_{ij}^1 = x_j$, the $j$-th component of the input; if $l > 1$: $\partial z_i^l / \partial w_{ij}^l = a_j^{l-1}$. Either way, the first term is already available after the forward pass.
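
How the first term is used once the second term $\delta^l = \partial C / \partial z^l$ is available: the weight gradients for a layer are an outer product. The numeric values below are illustrative:

```python
import numpy as np

a_prev = np.array([0.3, 0.7])        # a^{l-1} (or the input x when l = 1)
delta = np.array([0.1, -0.2, 0.05])  # delta^l = dC/dz^l (illustrative values)

grad_W = np.outer(delta, a_prev)     # dC/dw_ij^l = a_j^{l-1} * delta_i^l
grad_b = delta                       # dz_i^l/db_i^l = 1, so dC/db_i^l = delta_i^l
```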

11 Neat Representation – Second Term. The second term is $\partial C / \partial z_i^l$, written $\delta_i^l$: how the cost changes with the input of the activation function of neuron $i$ in layer $l$.

12 Neat Representation – Second Term. Two questions: 1. How to compute $\delta^L$ at the output layer (layer $L$). 2. The relation of $\delta^l$ and $\delta^{l+1}$.

13 Neat Representation – Second Term. Question 1: at the output layer, $\delta_i^L = \partial C / \partial z_i^L = \sigma'(z_i^L)\,(\partial C / \partial a_i^L)$, where $\partial C / \partial a_i^L$ depends on the definition of the error function.
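
A sketch for one common choice of error function, the squared error $C = \frac{1}{2}\|a^L - \hat{y}\|^2$ with sigmoid outputs; both choices are assumptions on my part, since the slide deliberately leaves the error function open:

```python
import numpy as np

def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

def output_delta(z_L, a_L, y_hat):
    """delta^L = dC/dz^L at the output layer.

    For C = 1/2 ||a^L - y_hat||^2, dC/da^L = a^L - y_hat, so
    delta^L = (a^L - y_hat) * sigma'(z^L)."""
    return (a_L - y_hat) * sigmoid_prime(z_L)
```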

14 Neat Representation – Second Term. Question 2: how is $\delta^l$ related to $\delta^{l+1}$?

15 Neat Representation – Second Term. $z_i^l$ influences $C$ only through the inputs $z_k^{l+1}$ of layer $l+1$, so by the chain rule $\delta_i^l = \sum_k (\partial z_k^{l+1} / \partial z_i^l)\,\delta_k^{l+1}$.

16 Neat Representation – Second Term. Since $z_k^{l+1} = \sum_i w_{ki}^{l+1}\,\sigma(z_i^l) + b_k^{l+1}$, we have $\partial z_k^{l+1} / \partial z_i^l = w_{ki}^{l+1}\,\sigma'(z_i^l)$, and therefore $\delta_i^l = \sigma'(z_i^l) \sum_k w_{ki}^{l+1}\,\delta_k^{l+1}$.

17 Neat Representation – Second Term. After the forward pass, $\sigma'(z_i^l)$ is just a constant, so this computation can be drawn as a new type of neuron: its input is $\sum_k w_{ki}^{l+1}\,\delta_k^{l+1}$ and its output multiplies that input by the constant $\sigma'(z_i^l)$.

18 Neat Representation – Second Term. In vector form: $\delta^l = \sigma'(z^l) \odot \big((W^{l+1})^T \delta^{l+1}\big)$.

19 Neat Representation – Second Term. Compare with the forward pass $z^{l+1} = W^{l+1} a^l + b^{l+1}$: the backward pass reuses the same weights, transposed, and runs from layer $l+1$ back to layer $l$.

20 Neat Representation – Second Term. Both questions are now answered: compute $\delta^L$ at the output layer, then apply $\delta^l = \sigma'(z^l) \odot (W^{l+1})^T \delta^{l+1}$ from layer $L-1$ down to layer $1$.
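
One step of that backward recursion as a NumPy sketch, again assuming sigmoid activations (the `backward_step` name is illustrative, not from the slides):

```python
import numpy as np

def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)

def backward_step(delta_next, W_next, z):
    """delta^l = sigma'(z^l) * (W^{l+1})^T delta^{l+1}.

    The elementwise factor sigma'(z^l) is the 'new type of neuron'
    from slide 17: it multiplies its input by a constant that was
    fixed during the forward pass."""
    return sigmoid_prime(z) * (W_next.T @ delta_next)
```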

21 Backpropagation. Forward pass: compute and store $z^l$ and $a^l$ for every layer. Backward pass: compute $\delta^l$ for every layer, from $L$ down to $1$. Then $\partial C / \partial w_{ij}^l = a_j^{l-1}\,\delta_i^l$ and $\partial C / \partial b_i^l = \delta_i^l$.
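
Putting the two passes together in one self-contained sketch; sigmoid activations and squared error are assumed, and the structure is mine rather than the slides':

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop(x, y_hat, weights, biases):
    """Return dC/dW^l and dC/db^l for every layer.

    The forward pass stores z^l and a^l; the backward pass computes
    each delta^l from delta^{l+1}, so every gradient comes out of a
    single sweep in each direction.
    """
    # Forward pass.
    a, zs, activations = x, [], [x]
    for W, b in zip(weights, biases):
        z = W @ a + b
        a = sigmoid(z)
        zs.append(z)
        activations.append(a)

    # Backward pass.
    grads_W = [None] * len(weights)
    grads_b = [None] * len(weights)
    delta = (activations[-1] - y_hat) * sigmoid_prime(zs[-1])  # delta^L
    for l in reversed(range(len(weights))):
        grads_W[l] = np.outer(delta, activations[l])  # first term * second term
        grads_b[l] = delta
        if l > 0:
            delta = sigmoid_prime(zs[l - 1]) * (weights[l].T @ delta)
    return grads_W, grads_b
```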

22 Appendix


24 A Reverse Network. The backward pass can itself be drawn as a reverse network formed by the new type of neurons: it starts from $\partial C / \partial a^L$ at the output layer and runs back through layers $l+2$, $l+1$, $l$, producing every $\delta^l$; this answers both questions at once.

25 Review: Gradient Descent. Start at parameter $\theta^0$. Compute the gradient at $\theta^0$: $g^0$; move to $\theta^1 = \theta^0 - \mu g^0$. Compute the gradient at $\theta^1$: $g^1$; move to $\theta^2 = \theta^1 - \mu g^1$. And so on. [Figure: the sequence $\theta^0, \theta^1, \theta^2, \theta^3$, each step showing the gradient $g^t$ and the movement $-\mu g^t$.]
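
The update rule above as a short sketch; the quadratic objective and the `gradient_descent` helper are illustrative:

```python
import numpy as np

def gradient_descent(theta0, grad_fn, mu=0.1, steps=100):
    """theta^{t+1} = theta^t - mu * g^t, with g^t the gradient at theta^t."""
    theta = theta0
    for _ in range(steps):
        g = grad_fn(theta)      # compute the gradient at the current theta
        theta = theta - mu * g  # move against the gradient
    return theta

# Illustrative use: minimize C(theta) = ||theta||^2, whose gradient is 2*theta.
theta_star = gradient_descent(np.array([3.0, -2.0]), lambda t: 2 * t)
```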

26 Neat Representation – First Term. [Figure: the first terms unfolded across the whole network, from the input and layer 1 through layer $L-1$.]

27 Neat Representation – Second Term. [Figure: Question 1, computing $\delta^L$ at the output layer (layer $L$).]

28 Neat Representation – Second Term. [Figure: Question 1 continued, $\delta^L$ at the output layer.]

29 Reference: https://theclevermachine.wordpress.com/

