
Slide 1: Neural Networks: Capabilities and Examples
L. Manevitz, Computer Science Department, HIACS Research Center, University of Haifa

Slide 2: What Are Neural Networks? What Are They Good For? How Do We Use Them?
- Definitions and some history
- Basics
  - Basic algorithms
  - Examples
- Recent examples
- Future directions

Slide 3: Natural versus Artificial Neuron
[Figure: a natural neuron alongside a McCulloch-Pitts neuron]

Slide 4: Definitions and History
- McCulloch-Pitts neuron
- Perceptron
- Adaline
- Linear separability
- Multi-level neurons
- Neurons with loops

Slide 5: Sample Feed-Forward Network (No Loops)
[Figure: inputs, a layer of weights W_ji, a layer of weights V_ik, outputs]
Each neuron i computes F(Σ_j w_ji x_j), i.e. a function F applied to the weighted sum of its inputs; a small sketch of this computation follows below.
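To make the slide's formula concrete, here is a minimal sketch of one feed-forward pass in Python. The layer sizes, weight values, and names (W, V, sigmoid) are illustrative assumptions, not taken from the talk.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def layer(inputs, weights):
    """Each output neuron i computes F(sum_j w[i][j] * x[j])."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

# Two layers of weights and no loops, as on the slide:
# W from input to hidden, V from hidden to output.
W = [[0.5, -0.3], [0.8, 0.2]]   # hidden-layer weights W_ji
V = [[1.0, -1.0]]               # output-layer weights V_ik

x = [0.7, 0.1]                  # an input pattern
hidden = layer(x, W)
output = layer(hidden, V)
print(output)
```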

Slide 6: Replacement of Threshold Neurons with Sigmoid (Differentiable) Neurons
[Figure: a step (threshold) activation alongside a smooth sigmoid activation; compare the sketch below]
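A small sketch contrasting the two activations on the slide. The threshold (step) neuron is not differentiable at 0, which is why gradient-based training replaces it with the smooth sigmoid; the sample points are an illustrative choice.

```python
import math

def threshold(s):
    # Hard step: fires iff the weighted sum reaches the threshold.
    return 1.0 if s >= 0 else 0.0

def sigmoid(s):
    # Smooth, differentiable replacement for the step.
    return 1.0 / (1.0 + math.exp(-s))

for s in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"s={s:+.1f}  threshold={threshold(s):.0f}  sigmoid={sigmoid(s):.3f}")
```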

Slide 7: Reason for the Explosion of Interest
Two coincident effects (around 1985-87):
- (Re-)discovery of mathematical tools and algorithms for handling large networks
- Availability (hurray for Intel and company!) of sufficient computing power to make experiments practical

Slide 8: Some Properties of NNs
- Universal: can represent and accomplish any task
- Uniform: "programming" is changing weights
- Automatic: algorithms for automatic programming (learning)

Slide 9: Networks Are Universal
- All logical functions can be represented by a three-level (non-loop) network (McCulloch-Pitts)
- All continuous (and more) functions can be represented by three-level feed-forward networks (Cybenko et al.)
- Networks can self-organize (without a teacher)
- Networks serve as associative memories

Slide 10: Universality
- McCulloch-Pitts: adaptive logic gates; can represent any logic function (a concrete instance follows below)
- Cybenko: any continuous function is representable by a three-level NN
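A concrete instance of the McCulloch-Pitts universality claim: XOR is not linearly separable (no single threshold neuron computes it), but a three-level network of threshold neurons does. The weights below are one illustrative choice, not from the slides.

```python
def neuron(inputs, weights, bias):
    # McCulloch-Pitts style threshold unit.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def xor_net(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)    # OR gate
    h2 = neuron([x1, x2], [-1, -1], 1.5)   # NAND gate
    return neuron([h1, h2], [1, 1], -1.5)  # AND of the two = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```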

Slide 11: Networks Can "LEARN" and Generalize (Algorithms)
- One neuron (Perceptron and Adaline): very popular in the 1960s to early 70s
  - Limited by representability (only linearly separable functions)
- Feed-forward networks (backpropagation): currently the most popular network (1987-now)
- Kohonen self-organizing network (1980s-now) (loops)
- Attractor networks (loops)

Slide 12: Learnability (Automatic Programming)
- One neuron: Perceptron and Adaline algorithms (Rosenblatt and Widrow-Hoff) (1960s-now); a sketch of the perceptron rule follows below
- Feed-forward networks: backpropagation (1987-now)
- Associative memories and looped networks ("attractors") (1990s-now)
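A minimal sketch of the Rosenblatt perceptron rule for one threshold neuron: on each error, nudge the weights toward the misclassified example. The AND data set, learning rate, and epoch count are illustrative choices.

```python
def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)   # 0 if correct, else +/-1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Linearly separable data (logical AND), so the rule converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # expect [0, 0, 0, 1]
```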

Slide 13: Generalizability
- Typically, train a network on a sample set of examples
- Then use it on the general class
- Training can be slow, but execution is fast

Slide 14: Pattern Identification (Note: the neuron is trained)
[Figure: a perceptron with its weights]

Slide 15: Feed-Forward Network
[Figure: a feed-forward network with two layers of weights]

Slide 16: Classical Applications (1986-1997)
- "NETtalk": text to speech
- ZIP codes: handwriting analysis
- Glovetalk: sign language to speech
- Data and picture compression: "bottleneck"
- Steering of an automobile (up to 55 m.p.h.)
- Market predictions
- Associative memories
- Cognitive modeling (especially reading, ...)
- Phonetic typewriter (Finnish)

Slide 17: Neural Network
- Once the architecture is fixed, the only free parameters are the weights
- Thus: uniform programming
- Potentially: automatic programming
- Search for learning algorithms

Slide 18: Programming: Just Find the Weights! AUTOMATIC PROGRAMMING
- One neuron: Perceptron or Adaline
- Multi-level: gradient descent on continuous neurons (a sigmoid instead of a step function); a sketch follows below
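A minimal sketch of the gradient-descent idea on one sigmoid neuron (the delta rule). Because the sigmoid is differentiable, the error can be pushed back through it; stacking this through layers gives backpropagation. The data, learning rate, and epoch count are illustrative assumptions.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_sigmoid_neuron(data, lr=0.5, epochs=2000):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Gradient of squared error through the sigmoid: (y - t) * y * (1 - y)
            delta = (y - target) * y * (1.0 - y)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND
w, b = train_sigmoid_neuron(data)
print([round(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b), 2)
       for x, _ in data])  # outputs approach [0, 0, 0, 1]
```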

Slide 19: Prediction
[Figure: the input/output signal is fed to the NN through a delay; the NN's output is compared with the actual next value]

Slide 20: Training an NN to Predict (a sketch of the scheme follows below)
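A minimal sketch of the delay-and-compare training scheme from the two slides above: feed the network a window of past values and train it to output the next one. The sine series, window size, and the stand-in linear model are illustrative assumptions.

```python
import math

series = [math.sin(0.3 * t) for t in range(200)]
window = 4  # the "delay": how many past values the network sees

# Build (past-window, next-value) training pairs.
pairs = [(series[t:t + window], series[t + window])
         for t in range(len(series) - window)]

# Any trainable model could sit here; for brevity, a single linear
# neuron trained by gradient descent stands in for the NN.
w, b, lr = [0.0] * window, 0.0, 0.01
for _ in range(200):
    for x, target in pairs:
        y = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = y - target                      # the "compare" step
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

x = series[-window:]
print("predicted:", sum(wi * xi for wi, xi in zip(w, x)) + b)
print("actual:   ", math.sin(0.3 * 200))
```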

Slide 21: Finite Element Method
- A numerical method for solving PDEs
- Many user-chosen parameters
- Goal: replace user expertise with NNs

Slide 22: FEM Flow Chart

Slide 23: Problems and Methods

Slide 24: Finite Element Method and Neural Networks
- Place a mesh on the body
- Predict where to adapt the mesh

Slide 25: Placing a Mesh on the Body (Manevitz, Givoli and Yousef)
- Need to place geometry on topology
- Method: use the Kohonen algorithm
- Idea: identify neurons with FEM nodes
  - Identify the weights of nodes with geometric locations
  - Identify the topology with adjacency
  - RESULT: equi-probable placement

Slide 26: Kohonen Placement for FEM
[Placeholder in the original deck: "Include slide from Malik's work."]

Slide 27: Self-Organizing Network
- Weights from the input to the neurons
- Topology between the neurons

Slide 28: Self-Organizing Network
- Weights from the input give a "location" to each neuron
- The Kohonen algorithm selects a "winner" neuron for each input
- After training, close input patterns have topologically close winners
- Results in an equi-probable continuous mapping (without a teacher); a sketch of the update follows below
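A minimal sketch of the Kohonen update on a 1-D chain of neurons; in the FEM application, the neurons would be mesh nodes and the chain their adjacency. The sizes, rates, and decay schedules are illustrative assumptions.

```python
import math, random

random.seed(0)
n_neurons = 10
weights = [random.random() for _ in range(n_neurons)]  # 1-D "locations"

def neighborhood(i, j, radius):
    """Topological closeness on the chain of neurons."""
    return math.exp(-((i - j) ** 2) / (2.0 * radius ** 2))

steps = 2000
for step in range(steps):
    x = random.random()                     # input drawn from the data
    # Winner: the neuron whose weight is closest to the input.
    win = min(range(n_neurons), key=lambda i: abs(weights[i] - x))
    lr = 0.5 * (1.0 - step / steps)         # decaying learning rate
    radius = 1.0 + 2.0 * (1.0 - step / steps)  # shrinking neighborhood
    for i in range(n_neurons):
        # Pull the winner and its topological neighbors toward the input.
        weights[i] += lr * neighborhood(i, win, radius) * (x - weights[i])

# Weights end up roughly equi-spaced over [0, 1]: equi-probable placement.
print([round(w, 2) for w in sorted(weights)])
```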

Slide 29: Placement of Mesh via Self-Organizing NNs

Slide 30: Placement of Mesh via Self-Organizing NNs (continued)
[Figure panels: Iteration 0; Iteration 500, Quality = 288; Iteration 2000, Quality = 238; Iteration 6000, Quality = 223; Iteration 12000, Quality = 208; Iteration 30000, Quality = 202]

Slide 31: Comparison of NN and PLTMG
[Figure panels: PLTMG (249 nodes); NN (225 nodes), Quality = 279]

Method   Node error   Value error
PLTMG    2.4e-02      4.51e-02
NN       7.5e-03      9.09e-03

Slide 32: FEM Temporal Adaptive Meshes

Slide 33: Prediction of Refinement of Elements
- The method simulates time
- The current adaptive method uses the gradient, so it can simply MISS all the action
- We use NNs to PREDICT the gradient
- Under development with Manevitz, Givoli and Bitar

Slide 34: Training an NN to Predict (continued)

Slide 35: Refinement Predictors
- Need to choose features
- Need to identify kinds of elements

Slide 36: Other Predictions?
- Stock market (really!)
- Credit card fraud (MasterCard, USA)

Slide 37: Surfer's Apprentice Program (Manevitz and Yousef)
- Make a "model" of the user for retrieving information from the internet
- Many issues; here the focus is on retrieving new pages similar to other pages of interest to the user
- Note: ONLY POSITIVE DATA

Slide 38: [No transcript text; figure only]

Slide 39: Bottleneck Network
- Train to compute the identity on sample data
- Should act as the identity only on similar data
- NOVELTY FILTER: inputs that reconstruct poorly are novel (a sketch follows below)
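A minimal sketch of the bottleneck/novelty-filter idea: compress inputs through a narrow hidden layer, train the network to reproduce its input, and flag inputs with large reconstruction error as novel. A linear bottleneck is used for brevity, and the data (points near one direction) is an illustrative assumption.

```python
import random

random.seed(1)
dim, hidden = 4, 1  # 4-D inputs squeezed through a 1-D bottleneck

# "Familiar" data lies near the direction (1, 1, 1, 1).
train = [[t + random.gauss(0, 0.05) for _ in range(dim)]
         for t in [random.random() for _ in range(50)]]

# Encoder and decoder weights, trained to reproduce the input.
enc = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(hidden)]
dec = [[random.gauss(0, 0.1) for _ in range(hidden)] for _ in range(dim)]

def reconstruct(x):
    h = [sum(w * xi for w, xi in zip(row, x)) for row in enc]
    return [sum(w * hi for w, hi in zip(row, h)) for row in dec], h

lr = 0.05
for _ in range(300):
    for x in train:
        y, h = reconstruct(x)
        err = [yi - xi for yi, xi in zip(y, x)]
        for i in range(dim):                     # decoder gradient step
            for j in range(hidden):
                dec[i][j] -= lr * err[i] * h[j]
        for j in range(hidden):                  # encoder gradient step
            g = sum(err[i] * dec[i][j] for i in range(dim))
            for k in range(dim):
                enc[j][k] -= lr * g * x[k]

def novelty(x):
    """Reconstruction error: small on familiar data, large on novel data."""
    y, _ = reconstruct(x)
    return sum((yi - xi) ** 2 for yi, xi in zip(y, x))

print("familiar:", round(novelty([0.5, 0.5, 0.5, 0.5]), 4))  # small
print("novel:   ", round(novelty([1.0, 0.0, 1.0, 0.0]), 4))  # large
```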

Slide 40: How Well Does It Work?
- Tested on the standard Reuters database
- Used 25% for training, withholding information on representation
- The best method for retrieval using only positive training (better than SVM, etc.)

Slide 41: How to Help Intel? (Make Billions? Reset NASDAQ?)
- Branch prediction? (Note the similarity to FEM refinement.)
- Perhaps it can be used to build a predictor that is even user- or application-dependent
- (Note: neural activity is, I am told, natural for VLSI design, and several such chips have been produced)

Slide 42: Other Different Directions
- Modify the basic model to handle temporal adaptivity (which occurs in real neurons, according to the latest biological information)
- Apply to modeling human diseases, etc.

