Neural Network Introduction. Hung-yi Lee. (Presentation transcript)


1 Neural Network Introduction Hung-yi Lee

2 Review: Supervised Learning. Training: given training data (e.g. an image x with its label y = “2”), pick the “best” function f* from the hypothesis function set (the model). Testing: apply f* to new inputs. x: function input; y: function output.

3 Neural Network: Realize it. How to pick the “best” function? What is the “best” function? What does the function hypothesis set (model) look like?

4 Neural Network: Realize it. How to pick the “best” function? What is the “best” function? What does the function hypothesis set (model) look like?

5 Neural Network. Fully Connected Feedforward Network: a diagram shows neurons arranged in Layer 1, Layer 2, …, Layer L, mapping an input vector x to an output vector y. You can always connect the neurons in your own way.

6 Neural Network. The same network with its layers named: the input layer (receiving vector x), the hidden layers, and the output layer (producing vector y).
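
To make the layer structure concrete, here is a minimal NumPy sketch of a forward pass through a fully connected feedforward network (an illustration, not the lecture's code; the sigmoid activation, the toy layer sizes, and the function names are assumptions):

```python
import numpy as np

def sigmoid(z):
    # A common activation choice; the slides leave the activation generic.
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Compute the network output y = f(x) layer by layer."""
    a = x                       # the input vector x feeds the first layer
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)  # each layer: matrix multiply, bias, activation
    return a

rng = np.random.default_rng(0)
sizes = [4, 3, 2]  # toy widths: input, one hidden layer, output
weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal(m) for m in sizes[1:]]
y = forward(np.ones(4), weights, biases)
```

Each loop iteration is one layer of the diagram: a matrix multiply, a bias addition, and an elementwise activation.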

7 Notation. a_i^l: the output of neuron i at layer l. Output of one layer: a^l, the vector collecting the outputs of all neurons in layer l.

8 Notation. w_ij^l: the weight from neuron j in layer l-1 to neuron i in layer l. W^l: the matrix of all weights from layer l-1 to layer l.

9 Notation. b_i^l: the bias for neuron i at layer l. b^l: the vector of biases for all neurons in layer l.

10 Notation. z_i^l: the input of the activation function for neuron i at layer l. z^l: the vector of activation-function inputs for all the neurons in layer l.

11 Notation - Summary. a_i^l: output of a neuron; a^l: output of a layer; z_i^l: input of activation function; z^l: input of activation function for a layer; w_ij^l: a weight; W^l: a weight matrix; b_i^l: a bias; b^l: a bias vector.

12 Relations between Layer Outputs. Componentwise: z_i^l = Σ_j w_ij^l a_j^(l-1) + b_i^l.

13 Relations between Layer Outputs. In matrix form: z^l = W^l a^(l-1) + b^l.

14 Relations between Layer Outputs. Applying the activation function σ: a^l = σ(z^l).

15 Relations between Layer Outputs. Combining the two: a^l = σ(W^l a^(l-1) + b^l).

16 Function of Neural Network. The whole network is a single function from vector x to vector y: y = f(x) = σ(W^L … σ(W^2 σ(W^1 x + b^1) + b^2) … + b^L).

17 Neural Network Realize it How to pick the “best” function? What is the “best” function? What does the function hypothesis set (model) look like?

18 Format of Training Data. The input and output of the neural network model are vectors, so object x and label y should also be represented as vectors. Example: handwritten digit recognition. Each pixel of a 28 x 28 image corresponds to an element in the vector x (1 for ink, 0 otherwise), so x has 28 x 28 = 784 dimensions. The label y uses 10 dimensions for digit recognition: for the label “1”, the element for “1” is 1 and the others are 0, and likewise for “2”, “3”, and so on.
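
A sketch of this vector representation (illustrative only; the helper names are made up for this example):

```python
import numpy as np

def to_input_vector(image):
    # Flatten a 28 x 28 binary image (1 for ink, 0 otherwise) to 784 dims.
    return np.asarray(image, dtype=float).reshape(-1)

def to_label_vector(digit, num_classes=10):
    # One-hot label: the element for the correct digit is 1, the rest are 0.
    y = np.zeros(num_classes)
    y[digit] = 1.0
    return y

x = to_input_vector(np.zeros((28, 28)))  # a blank image, for shape checking
y = to_label_vector(2)                   # the label “2”
```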

19 What is the “Best” Function? Given training data, let C(f) evaluate the badness of a function f: the “best” function f* is the one whose output f(x^r) is closest to the target label for every training example x^r. The best function f* is the one that minimizes C. Note that C(f) is a “function of a function” (also called an error function, cost function, or objective function).

20 What is the “Best” Function? The best function f* is the one that minimizes C(f). Do you like this definition of “best”? Question: is the distance a good measure to evaluate the closeness? Reference: Golik, Pavel, Patrick Doetsch, and Hermann Ney. “Cross-entropy vs. squared error training: a theoretical and experimental comparison.” INTERSPEECH, 2013.

21 What is the “Best” Function? Given training data, the error function C(θ) is again a “function of a function”: picking the “best” function f* from the hypothesis function set is the same as picking the “best” parameter set θ*. The question becomes how to find the parameters θ* that minimize C(θ).
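
The error function can be sketched as a sum of squared distances over the training pairs (a hedged illustration: the squared-distance form follows the closeness measure discussed on the previous slides, and the toy “network” f here is just a placeholder):

```python
import numpy as np

def squared_error_cost(f, data):
    # C = sum over training pairs (x^r, y^r) of the squared distance
    # between the model output f(x^r) and the target y^r.
    return sum(np.sum((f(x) - y) ** 2) for x, y in data)

# Toy "model": a fixed linear map, just to exercise the cost.
f = lambda x: 0.5 * x
data = [(np.array([2.0]), np.array([1.0])),   # perfect fit: contributes 0
        (np.array([4.0]), np.array([0.0]))]   # off by 2: contributes 4
C = squared_error_cost(f, data)
```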

22 Neural Network: Realize it. How to pick the “best” function? What is the “best” function? What does the function hypothesis set (model) look like?

23 Possible Solutions. Statement of the problem: there is a function C(θ), where θ is a set of parameters θ = {θ1, θ2, θ3, ……}. Find the θ* that minimizes C(θ). Brute force? Enumerate all possible θ. Calculus? Find θ* such that ∂C/∂θ_i = 0 for every parameter θ_i.

24 Gradient descent. Starting from an initial parameter set, repeatedly update the parameters in the direction that decreases C. Hopefully, with sufficient iterations, we can finally find θ* such that C(θ*) is minimized.

25 Gradient descent – one variable. For simplification, first consider that θ has only one variable. Randomly start at a point θ0. Compute C(θ0 - ε) and C(θ0 + ε). If C(θ0 + ε) < C(θ0 - ε), set θ1 = θ0 + ε; otherwise set θ1 = θ0 - ε. Repeat.
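
The one-variable procedure above can be written directly (a toy sketch; the quadratic cost and the step count are illustrative choices):

```python
def descend_one_variable(C, theta, eps=0.01, steps=2000):
    # Probe both sides of the current point and step toward
    # the side with the smaller cost, as on the slide.
    for _ in range(steps):
        if C(theta + eps) < C(theta - eps):
            theta = theta + eps
        else:
            theta = theta - eps
    return theta

C = lambda t: (t - 3.0) ** 2   # toy cost with its minimum at theta = 3
theta_star = descend_one_variable(C, theta=0.0)
```

Once the minimum is reached, the procedure oscillates within ε of it, which is one reason the step size matters.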

26 Gradient descent – two variables. Suppose that θ has two variables {θ1, θ2}. Starting from a point in the (θ1, θ2) plane, draw a small red circle around it: how do we find the point with the smallest value of C(θ) on the red circle?

27 Taylor series. Let h(x) be infinitely differentiable around x = x0. Then h(x) = Σ_{k=0}^∞ h^(k)(x0)/k! (x - x0)^k = h(x0) + h'(x0)(x - x0) + h''(x0)/2! (x - x0)^2 + ……

28 Taylor series. Taylor series for h(x) = sin(x) around x0 = π/4: sin(x) = sin(π/4) + cos(π/4)(x - π/4) - sin(π/4)/2! (x - π/4)^2 - cos(π/4)/3! (x - π/4)^3 + ……

29 Taylor series. Keeping only the first few terms, the truncated series still matches sin(x) well near x0: the approximation is good around π/4.

30 Taylor series. One variable: when x is close to x0, h(x) ≈ h(x0) + h'(x0)(x - x0). Multivariable: when x and y are close to x0 and y0, h(x, y) ≈ h(x0, y0) + ∂h/∂x(x0, y0)(x - x0) + ∂h/∂y(x0, y0)(y - y0).
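
As a quick numerical check of the sin(x) example (illustrative code; the number of terms kept is an arbitrary choice):

```python
import math

def sin_taylor(x, x0=math.pi / 4, terms=5):
    # Taylor series of sin around x0:
    #   sin(x) = sum_k  sin^(k)(x0) / k! * (x - x0)^k
    # The derivatives of sin cycle: sin, cos, -sin, -cos, sin, ...
    derivs = [math.sin(x0), math.cos(x0), -math.sin(x0), -math.cos(x0)]
    return sum(derivs[k % 4] / math.factorial(k) * (x - x0) ** k
               for k in range(terms))

# Close to x0 = pi/4, a few terms already match sin(x) very well.
approx = sin_taylor(math.pi / 4 + 0.1)
exact = math.sin(math.pi / 4 + 0.1)
```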

31 Gradient descent – two variables. Red circle (if the radius is small): inside a small circle around (a, b), the first-order Taylor expansion gives C(θ1, θ2) ≈ C(a, b) + ∂C/∂θ1(a, b)(θ1 - a) + ∂C/∂θ2(a, b)(θ2 - b).

32 Gradient descent – two variables. Red circle (if the radius is small): writing s = C(a, b), u = ∂C/∂θ1(a, b), and v = ∂C/∂θ2(a, b), find θ1 and θ2 on the circle that minimize C'(θ) = s + u(θ1 - a) + v(θ2 - b). Simple, right?

33 Gradient descent – two variables. Red circle (if the radius is small): to minimize C'(θ), choose the step (θ1 - a, θ2 - b) to point opposite the vector (u, v), i.e. (θ1 - a, θ2 - b) = -η(u, v) for some η > 0.

34 Gradient descent – two variables. The resulting update is θ1 ← a - η ∂C/∂θ1 and θ2 ← b - η ∂C/∂θ2: move against the gradient. The result is intuitive, isn’t it?

35 Gradient descent – High dimension. In the space of the parameter set θ = {θ1, θ2, θ3, ……}, consider a small ball around the current point. As in two dimensions, the point with minimum C(θ) on the ball is reached by stepping against the gradient: θ_i ← θ_i - η ∂C/∂θ_i for every i.

36 Gradient descent. Starting from initial parameters θ0, repeatedly update θ ← θ - η ∇C(θ). η is called the “learning rate”: it should be small enough (so the first-order Taylor approximation holds), but should not be too small (or progress is slow).
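
The update rule with a learning rate can be sketched as follows (an illustration on a toy quadratic cost; the gradient function and the constants are assumptions for this example):

```python
import numpy as np

def gradient_descent(grad_C, theta0, eta=0.1, iterations=100):
    # Repeated update: theta <- theta - eta * gradient of C at theta.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iterations):
        theta = theta - eta * grad_C(theta)
    return theta

# Toy quadratic cost C(theta) = (theta1 - 1)^2 + (theta2 + 2)^2,
# whose gradient is 2 * (theta - minimum).
grad_C = lambda t: 2.0 * (t - np.array([1.0, -2.0]))
theta_star = gradient_descent(grad_C, [0.0, 0.0])
```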

37 Gradient descent - Problem. Different initializations lead to different local minima. Reference: “Who is Afraid of Non-Convex Loss Functions?” http://videolectures.net/eml07_lecun_wia/

38 Gradient descent - Problem. Different initializations lead to different local minima. Toy example: a plot of the cost over x and y shows several local minima, and runs started from different points end at different minima.
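
A one-variable toy illustration of the same problem (the cost (θ² - 1)², with minima at θ = -1 and θ = +1, is an assumed example, not the one plotted on the slide):

```python
def local_descent(grad, theta, eta=0.01, iterations=5000):
    # Plain gradient descent from a given starting point.
    for _ in range(iterations):
        theta = theta - eta * grad(theta)
    return theta

# C(theta) = (theta^2 - 1)^2 has two minima, at theta = -1 and theta = +1.
grad = lambda t: 4.0 * t * (t * t - 1.0)
from_positive = local_descent(grad, 0.5)    # ends near +1
from_negative = local_descent(grad, -0.5)   # ends near -1
```

Both runs converge, but to different minima: the outcome depends entirely on the initialization.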

39 Neural Network: Realize it. How to pick the “best” function? What is the “best” function? What does the function hypothesis set (model) look like?

40 Gradient descent for Neural Network

41 Chain Rule. Case 1: if y = g(x) and z = h(y), then dz/dx = (dz/dy)(dy/dx). Case 2: if x = g(s), y = h(s), and z = k(x, y), then dz/ds = (∂z/∂x)(dx/ds) + (∂z/∂y)(dy/ds).
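
Both cases can be checked numerically with a central-difference approximation (the concrete functions here are illustrative choices, not from the slide):

```python
import math

def numerical_derivative(f, x, eps=1e-6):
    # Central-difference approximation of df/dx.
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

# Case 1: z = sin(y), y = x^2  =>  dz/dx = cos(x^2) * 2x.
case1_analytic = math.cos(9.0) * 6.0  # evaluated at x = 3
case1_numeric = numerical_derivative(lambda x: math.sin(x * x), 3.0)

# Case 2: z = x*y with x = s^2 and y = sin(s)
#   =>  dz/ds = y * dx/ds + x * dy/ds = sin(s)*2s + s^2*cos(s).
s = 1.5
case2_analytic = math.sin(s) * 2 * s + s * s * math.cos(s)
case2_numeric = numerical_derivative(lambda s: s * s * math.sin(s), s)
```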

42 Gradient descent for Neural Network. Example: a weight w_ij^L connecting neuron j in layer L-1 to neuron i in the output layer L. By the chain rule, ∂C/∂w_ij^L = (∂C/∂z_i^L)(∂z_i^L/∂w_ij^L).

43 Gradient descent for Neural Network. Since z_i^L = Σ_j w_ij^L a_j^(L-1) + b_i^L, every term except the one containing w_ij^L is constant with respect to it, so ∂z_i^L/∂w_ij^L = a_j^(L-1).

44 Gradient descent for Neural Network. By the chain rule again, since a_i^L = σ(z_i^L) and C depends on z_i^L only through a_i^L, ∂C/∂z_i^L = (∂C/∂a_i^L) σ'(z_i^L).

45 Gradient descent for Neural Network. For a bias, ∂z_i^L/∂b_i^L = 1 (as its input is “1”), so ∂C/∂b_i^L = ∂C/∂z_i^L.

46 Gradient descent for Neural Network. Now move one layer back: consider the parameters between layer L-2 and layer L-1 (the diagram shows layers L-2 and L-1 together with the output layer L).

47 Gradient descent for Neural Network. (chain rule) Since z_i^(L-1) influences C through every neuron in layer L, ∂C/∂z_i^(L-1) = σ'(z_i^(L-1)) Σ_k w_ki^L ∂C/∂z_k^L, where the sum is over the neurons k in layer L.

48 Gradient descent for Neural Network. (chain rule) For a weight between layer L-2 and layer L-1, ∂C/∂w_ij^(L-1) = a_j^(L-2) ∂C/∂z_i^(L-1), where computing ∂C/∂z_i^(L-1) requires the sum over layer L.

49 Gradient descent for Neural Network. Moving back once more: consider the parameters between layer L-3 and layer L-2 (the diagram shows layers L-3 through the output layer L).

50 Gradient descent for Neural Network. The pattern repeats: ∂C/∂z_i^(L-2) requires a sum over layer L-1, and each term of that sum in turn required a sum over layer L.

51 Summarizing what we have done: the same chain-rule computation gives the gradients for the parameters between layers L and L-1, between layers L-2 and L-1, and between layers L-3 and L-2. There is an efficient way to compute all of these gradients: backpropagation.
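
The chain-rule steps above can be collected into a short backpropagation sketch (a minimal NumPy illustration, not the lecture's code; it assumes a sigmoid activation and the squared-error cost, and verifies one gradient entry with a finite difference):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_pass(x, Ws, bs):
    # Keep every z^l and a^l; the backward pass reuses them.
    a, As, Zs = x, [x], []
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = sigmoid(z)
        Zs.append(z)
        As.append(a)
    return As, Zs

def backprop(x, y, Ws, bs):
    """Gradients of C = sum((a^L - y)^2) w.r.t. every W^l and b^l."""
    As, Zs = forward_pass(x, Ws, bs)
    L = len(Ws)
    grads_W, grads_b = [None] * L, [None] * L
    # Output layer: dC/dz^L = 2(a^L - y) * sigma'(z^L).
    sig = sigmoid(Zs[-1])
    delta = 2.0 * (As[-1] - y) * sig * (1.0 - sig)
    for l in range(L - 1, -1, -1):
        grads_W[l] = np.outer(delta, As[l])  # dC/dW^l = delta outer a^(l-1)
        grads_b[l] = delta                   # dC/db^l = delta (bias input is 1)
        if l > 0:
            # Propagate back: sum over the layer above, then sigma'.
            sig = sigmoid(Zs[l - 1])
            delta = (Ws[l].T @ delta) * sig * (1.0 - sig)
    return grads_W, grads_b

rng = np.random.default_rng(1)
sizes = [3, 4, 2]
Ws = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.standard_normal(m) for m in sizes[1:]]
x, y = rng.standard_normal(3), np.array([0.0, 1.0])
gW, gb = backprop(x, y, Ws, bs)

# Finite-difference check on a single weight entry.
def cost(Ws, bs):
    As, _ = forward_pass(x, Ws, bs)
    return np.sum((As[-1] - y) ** 2)

eps = 1e-6
Ws_pert = [W.copy() for W in Ws]
Ws_pert[0][0, 0] += eps
numeric = (cost(Ws_pert, bs) - cost(Ws, bs)) / eps
```

One forward pass plus one backward pass yields every gradient, instead of repeating the chain rule per parameter.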

52 References for Neural Networks. Chapter 2 of Neural Networks and Deep Learning, http://neuralnetworksanddeeplearning.com/chap2.html. LeCun, Yann A., et al. “Efficient BackProp.” http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf. Bengio, Yoshua. “Practical recommendations for gradient-based training of deep architectures.” http://www.iro.umontreal.ca/~bengioy/papers/YB-tricks.pdf

53 Thank you for listening!

54 Appendix

55 Layer-by-layer. A figure compares networks of increasing depth: hidden layer configurations 20, 20-20, 20-20-20, and 20-20-20-20.

56 (constant) Since z_i^L = Σ_j w_ij^L a_j^(L-1) + b_i^L, every term except the one containing w_ij^L is constant with respect to it, so ∂z_i^L/∂w_ij^L = a_j^(L-1).

57 (chain rule) For a weight between layer L-2 and layer L-1, ∂C/∂w_ij^(L-1) = a_j^(L-2) ∂C/∂z_i^(L-1), where computing ∂C/∂z_i^(L-1) requires the sum over layer L.

58 (chain rule) ∂C/∂z_i^(L-1) = σ'(z_i^(L-1)) Σ_k w_ki^L ∂C/∂z_k^L, summing over the neurons k in layer L.


61 Gradient descent for Neural Network. Example: for a bias, ∂z_i^L/∂b_i^L = 1 (as its input is “1”), so ∂C/∂b_i^L = ∂C/∂z_i^L.

62 What is the “Best” Function? (Hypothesis Function Set) Different θ give different functions f, and hence different values of C, so the objective function C is a function of θ: C(θ). The best function f* is the one that minimizes C, which means the best parameter set θ* is the one that minimizes C(θ). How to find θ*?


