
1 Supervised Learning

2 Teacher response: emulation. Error: y1 – y2, where y1 is the teacher's (desired) response and y2 is the actual response. Aim: to reduce the error. This is a closed feedback system. The error defines an error surface, determined by the teacher's response; learning has to bring the operating point down on this surface, which is called minimising the error, until it settles at a local minimum or the global minimum.
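A minimal sketch of this idea (the error surface, learning rate, and all names below are illustrative assumptions, not from the slides): gradient descent repeatedly moves a weight downhill on the error surface until it settles in a local or global minimum.

```python
import numpy as np

def error_surface(w):
    # Hypothetical error surface with more than one minimum.
    return np.sin(3 * w) + 0.1 * w ** 2

def gradient(w, eps=1e-5):
    # Numerical slope of the error surface at w.
    return (error_surface(w + eps) - error_surface(w - eps)) / (2 * eps)

w = 2.0          # initial weight
lr = 0.05        # learning rate
for step in range(200):
    w -= lr * gradient(w)   # move downhill, i.e. minimise the error
print("weight at a (local or global) minimum:", w)
```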

3 Reinforcement Learning.

4 There is no teacher. The system converts primary reinforcement into heuristic reinforcement. Primary reinforcement can be delayed, because a sequence of inputs has to be analysed before credit can be given; this is called the credit assignment problem. Example: character recognition.
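A hedged sketch of the credit-assignment idea (the discount factor and reward values are illustrative assumptions): when the primary reinforcement arrives only at the end of a sequence, earlier steps can be credited with a discounted share of it, giving a heuristic reinforcement for each step.

```python
def assign_credit(rewards, gamma=0.9):
    # Primary reinforcement may arrive late; spread it backwards
    # over earlier steps as a discounted (heuristic) reinforcement.
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

# Only the final step receives primary reinforcement (+1),
# yet every earlier step is assigned part of the credit.
print(assign_credit([0, 0, 0, 1]))
```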

5 Unsupervised Learning

6 Error Correction Learning. Error = Desired Output – Actual Output.

7 Consider a multi-layer network.

8 Multi-layer network, showing the output.

9 Cost function (index of performance): E(n) = ½ e_k²(n). Widrow-Hoff rule (delta rule): Δw_kj(n) = η e_k(n) x_j(n). New weights: w_kj(n+1) = w_kj(n) + Δw_kj(n); after a unit delay the new weights become the current weights, w_kj(n) = z⁻¹ w_kj(n+1). z⁻¹ is called the unit-delay operator, and n is discrete time.
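A small sketch of the Widrow-Hoff (delta) rule at discrete time n, assuming a single linear neuron; the learning rate and the toy training pairs are illustrative assumptions. The weights at n+1 are the stored (unit-delayed) weights at n plus η·e(n)·x(n).

```python
import numpy as np

eta = 0.1                        # learning-rate parameter
w = np.zeros(2)                  # weight vector w(n)

# Toy training pairs (x(n), d(n)); the target here is d = x1 + x2.
samples = [(np.array([1.0, 0.0]), 1.0),
           (np.array([0.0, 1.0]), 1.0),
           (np.array([1.0, 1.0]), 2.0)]

for epoch in range(50):
    for x, d in samples:
        y = w @ x                # actual response y(n)
        e = d - y                # error e(n) = d(n) - y(n)
        w = w + eta * e * x      # w(n+1) = w(n) + eta * e(n) * x(n)

print("learned weights:", w)
```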

10 Memory-Based Learning. Past experiences are stored in order to find the relation between the input and the desired output. Consider the stored input–output pairs (training data) and a new test input.

11 K Nearest Neighbour
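A minimal memory-based-learning sketch of the k-nearest-neighbour decision (the stored data, labels, and k below are illustrative assumptions): all past experiences are kept, and a test input is assigned the majority label of its k nearest stored neighbours.

```python
import numpy as np
from collections import Counter

# Stored past experiences: (input vector, desired output/label).
memory = [(np.array([0.0, 0.0]), "A"),
          (np.array([0.1, 0.2]), "A"),
          (np.array([1.0, 1.0]), "B"),
          (np.array([0.9, 1.1]), "B")]

def k_nearest_neighbour(x, k=3):
    # Rank stored inputs by Euclidean distance to the test input x.
    ranked = sorted(memory, key=lambda m: np.linalg.norm(m[0] - x))
    labels = [label for _, label in ranked[:k]]
    return Counter(labels).most_common(1)[0][0]

print(k_nearest_neighbour(np.array([0.2, 0.1])))   # -> "A"
```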

12 Hebb’s Associative Rules.

13 Hebb’s Synapse

14 Four Characteristics of Hebbian Synapse.

15

16 Hebb’s Model. Hebb’s hypothesis (activity product rule): an increase in the presynaptic input increases the postsynaptic output, and the weight keeps growing, which leads to saturation. Covariance hypothesis: here thresholds are used on the inputs and outputs, so the weight changes according to deviations from those thresholds.
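A sketch of both rules for one synapse (the learning rate and threshold values are illustrative assumptions): the activity product rule grows the weight whenever pre- and post-synaptic activity are both positive, so it can saturate, while the covariance hypothesis subtracts thresholds from input and output so the weight can also decrease.

```python
def hebb_activity_product(w, x, y, eta=0.01):
    # Activity product rule: dw = eta * x * y  (only grows, can saturate).
    return w + eta * x * y

def hebb_covariance(w, x, y, x_bar, y_bar, eta=0.01):
    # Covariance hypothesis: dw = eta * (x - x_bar) * (y - y_bar),
    # where x_bar and y_bar act as thresholds on input and output activity.
    return w + eta * (x - x_bar) * (y - y_bar)

w = 0.5
w = hebb_activity_product(w, x=1.0, y=0.8)
w = hebb_covariance(w, x=1.0, y=0.8, x_bar=0.5, y_bar=0.5)
print(w)
```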

17

18

19

20

21 Output Function. Summation of Weights. Change in Weights.

22 x_j is the input and x_k is the output; T is the pseudotemperature. There are two types of neurons: visible and hidden.
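A hedged sketch of the state-flip step this implies (the weight matrix, states, and temperature value are illustrative assumptions): a candidate flip of state x_k is accepted with a probability that depends on the resulting energy change and the pseudotemperature T.

```python
import numpy as np

def flip_probability(delta_E, T):
    # Probability of accepting a flip of state x_k, given the
    # energy change delta_E and pseudotemperature T.
    return 1.0 / (1.0 + np.exp(delta_E / T))

rng = np.random.default_rng(0)
x = np.array([1, -1, 1, -1])                  # visible + hidden neuron states
W = rng.normal(size=(4, 4))
W = (W + W.T) / 2                             # symmetric weights
np.fill_diagonal(W, 0)                        # no self-connections

k = 2                                         # neuron chosen for a trial flip
delta_E = 2 * x[k] * (W[k] @ x)               # energy change if x_k is flipped
if rng.random() < flip_probability(delta_E, T=1.0):
    x[k] = -x[k]
print(x)
```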

23

24

25

26 This is applicable in Error Correction Learning.

27 Pattern Recognition tasks by Feed Forward Networks. 1) Pattern association problem.

28 1) Here every input (training data) is associated with an output. 2) So if an input (test data) is close to some training input, it is associated with the output of that training input. 3) But if the test data is very far from the training data, it will still be associated with the output of the nearest training input (note: the closeness is then very small), and not with a new output. 4) The system displays accretive behaviour. 5) Follows a feed-forward network.

29 a_i = a_l + ε1, where ε1 is a small number.

30 2)Pattern classification problem

31 1) In the pattern association problem, if a whole set of inputs maps to one output, the output data set is smaller than the input data set and each class of inputs gets a label; this is the pattern classification problem. 2) If a test input is close to the training inputs of a class, it gets classified to that class, for which there is a label. 3) Here the test data is not associated with its own output; the class has a label, and the test data becomes part of that class. 4) This creates accretive behaviour. 5) Follows a feed-forward network.

32

33 3)Pattern mapping

34 1) Here the output is a mapping of the input. 2) So if a test input is close to a training input, the output for the test input will be an interpolation of the outputs of the nearby training inputs, meaning it lies in the same range. 3) Pattern association and pattern classification are derived from pattern mapping; this can be shown by interpolation. 4) Pattern mapping performs generalization. 5) Follows a feed-forward network.
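A small sketch of the interpolation idea (the training pairs and the inverse-distance weighting scheme are illustrative assumptions): the output for a test input is an interpolation of the outputs of nearby training inputs, which is the sense in which pattern mapping generalises.

```python
import numpy as np

# Stored training pairs (input, output) for the mapping task.
X = np.array([[0.0], [1.0], [2.0]])
Y = np.array([0.0, 2.0, 4.0])

def map_pattern(x_test):
    # Inverse-distance weighting: the output is an interpolation
    # of the outputs of the nearby training inputs.
    d = np.linalg.norm(X - x_test, axis=1) + 1e-9
    w = 1.0 / d
    return float(w @ Y / w.sum())

print(map_pattern(np.array([0.5])))   # lies between the nearby outputs 0.0 and 2.0
```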

35 Pattern Recognition tasks by Feedback Networks: 1) Auto-association problem 2) Pattern storage problem 3) Pattern environment storage problem.

36 1) Auto-association networks. 1) Inputs and outputs are identical. 2) The implementation has to be done with feedback. 3) Follows a feedback network.

37 2) Pattern storage problem. The input and output are once again identical. Three separate neurons are used to realize the output, so the output points and input points are different. Follows a feedback network.

38 3) Pattern environment storage problem. If a set of patterns is specified along with the probability of each pattern, it is called the pattern environment storage problem. Follows a feedback network: there is feedback because, to get the output, we have to look at flips of states.

39 Pattern Recognition tasks by Competitive Learning. 1) Temporary storage problem: if input patterns are replaced by new patterns, so that the current patterns get the output over the other patterns, it is called the temporary storage problem; some input patterns compete to reach the output. Follows competitive learning (CL).

40 2) Pattern clustering problem. The test data is assigned the output of the cluster it is nearest to, which creates accretive behaviour. Follows CL. Intuitively, the test data wants to enter an existing cluster, like a student who wants to enter a cluster of engineering students.
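A minimal winner-take-all sketch of clustering by competitive learning (the number of clusters, the data, and the learning rate are illustrative assumptions): the prototype closest to each input wins and is pulled toward it, so later test data that is near that prototype falls into its cluster, the accretive behaviour described above.

```python
import numpy as np

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(2, 2))   # one weight vector per cluster
eta = 0.1

data = np.array([[0.0, 0.1], [0.2, 0.0],      # cluster around the origin
                 [2.0, 2.1], [1.9, 2.0]])     # cluster around (2, 2)

for _ in range(20):
    for x in data:
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        # Only the winning prototype moves toward the input.
        prototypes[winner] += eta * (x - prototypes[winner])

print(prototypes)
```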

41 Here the output is interpolative. Follows CL. The test data still wants to reach some output.

42 3) Feature mapping problem. To cluster, we need features, obtained by competitive learning. Ex: BP algorithm.

43 Back Propagation Algorithm

44

45

46

47

48 Error at output neuron j: e_j(n) = d_j(n) − y_j(n) (1). Total error energy: E(n) = ½ Σ_j e_j²(n), summed over the output neurons (2).

49 Average of all energies (over the N discrete time steps): E_av = (1/N) Σ_{n=1..N} E(n) (3). Activation value of neuron j: v_j(n) = Σ_i w_ji(n) y_i(n) (4).

50 Activation function: y_j(n) = φ_j(v_j(n)) (5).

51 Consider the error gradient by the chain rule: ∂E(n)/∂w_ji(n) = (∂E(n)/∂e_j(n)) (∂e_j(n)/∂y_j(n)) (∂y_j(n)/∂v_j(n)) (∂v_j(n)/∂w_ji(n)) (6), with ∂E(n)/∂e_j(n) = e_j(n) (7) and ∂e_j(n)/∂y_j(n) = −1 (8).

52 ∂y_j(n)/∂v_j(n) = φ′_j(v_j(n)) (9) and ∂v_j(n)/∂w_ji(n) = y_i(n) (10). Substituting 7, 8, 9, 10 in 6, we get ∂E(n)/∂w_ji(n) = −e_j(n) φ′_j(v_j(n)) y_i(n) (11).

53 Using the error-correction rule, w_ji(n+1) = w_ji(n) + Δw_ji(n) (12), and the LMS rule, Δw_ji(n) = −η ∂E(n)/∂w_ji(n) (13). Using 11 in 13, we get Δw_ji(n) = η e_j(n) φ′_j(v_j(n)) y_i(n) = η δ_j(n) y_i(n) (14).

54 where the error gradient (local gradient) is given by δ_j(n) = e_j(n) φ′_j(v_j(n)) (15). The above shows another way of getting the error gradient, to be used in the second part (the hidden-layer case).

55

56

57 Using the expression in 15 for a hidden neuron j, the local gradient becomes δ_j(n) = −∂E(n)/∂v_j(n) (16) = −(∂E(n)/∂y_j(n)) φ′_j(v_j(n)) (17). Total error at the output layer: E(n) = ½ Σ_k e_k²(n), where k runs over the output neurons (18).

58 Differentiating 18 with respect to y_j(n): ∂E(n)/∂y_j(n) = Σ_k e_k(n) ∂e_k(n)/∂y_j(n) (19). For output neuron k, e_k(n) = d_k(n) − φ_k(v_k(n)) (20), so ∂e_k(n)/∂v_k(n) = −φ′_k(v_k(n)) (21).

59 The activation of output neuron k is v_k(n) = Σ_j w_kj(n) y_j(n) (22), so ∂v_k(n)/∂y_j(n) = w_kj(n) (23). Hence ∂E(n)/∂y_j(n) = −Σ_k e_k(n) φ′_k(v_k(n)) w_kj(n) (24).

60 From 19 and 24, ∂E(n)/∂y_j(n) = −Σ_k e_k(n) φ′_k(v_k(n)) w_kj(n) (25) = −Σ_k δ_k(n) w_kj(n) (26). Substituting this in 17, we get

61 δ_j(n) = φ′_j(v_j(n)) Σ_k δ_k(n) w_kj(n), the local gradient for hidden neuron j. Using 14, the same weight update Δw_ji(n) = η δ_j(n) y_i(n) then applies to the hidden-layer weights.
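Putting the derived updates together, here is a minimal sketch of one forward and backward pass for a single hidden layer (the network size, data, and logistic activation are illustrative assumptions): the output-layer local gradient follows eq. 15, the hidden-layer gradient back-propagates it as in the expression above, and weights are updated by Δw = η·δ·y as in eq. 14.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.5
phi = lambda v: 1.0 / (1.0 + np.exp(-v))        # logistic activation
dphi = lambda v: phi(v) * (1.0 - phi(v))        # its derivative

W1 = rng.normal(size=(3, 2))    # input -> hidden weights
W2 = rng.normal(size=(1, 3))    # hidden -> output weights

x = np.array([0.5, -0.2])       # input pattern
d = np.array([1.0])             # desired response

for n in range(100):
    # Forward pass.
    v1 = W1 @ x;  y1 = phi(v1)
    v2 = W2 @ y1; y2 = phi(v2)

    # Backward pass.
    e = d - y2                              # eq. 1: error at the output
    delta2 = e * dphi(v2)                   # eq. 15: output local gradient
    delta1 = dphi(v1) * (W2.T @ delta2)     # back-propagated hidden gradient

    W2 += eta * np.outer(delta2, y1)        # eq. 14: delta-rule update
    W1 += eta * np.outer(delta1, x)

print("output after training:", phi(W2 @ phi(W1 @ x)))
```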

62

63

64 BP Algorithm Summary

65

66 Virtues

67

68

69 Limitations (where the brain is better)

