Neural Networks: Capabilities and Examples. L. Manevitz, Computer Science Department, HIACS Research Center, University of Haifa.

Presentation transcript:

Slide 1: Neural Networks: Capabilities and Examples. L. Manevitz, Computer Science Department, HIACS Research Center, University of Haifa

Slide 2: What Are Neural Networks? What Are They Good For? How Do We Use Them? Definitions and some history; basics (basic algorithms and examples); recent examples; future directions.

Slide 3: Natural versus Artificial Neuron: the natural neuron compared with the McCulloch-Pitts neuron.

Slide 4: Definitions and History: the McCulloch-Pitts neuron; the Perceptron; the Adaline; linear separability; multi-level neurons; neurons with loops.
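The McCulloch-Pitts neuron named above can be sketched in a few lines of Python (an illustrative sketch, not code from the talk): the unit fires exactly when the weighted sum of its inputs reaches a threshold.

```python
def mp_neuron(inputs, weights, threshold):
    # Fire (output 1) when the weighted input sum reaches the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2 the unit computes logical AND.
print(mp_neuron([1, 1], [1, 1], 2))  # 1
print(mp_neuron([1, 0], [1, 1], 2))  # 0
```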

Slide 5: Sample Feed-Forward Network (no loops), showing inputs, outputs, and two layers of weights W_ji and V_ik; each neuron computes F(sum_j w_ji x_j).
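The slide's computation F(sum_j w_ji x_j) amounts to the following loop-free forward pass (a hypothetical Python sketch; the function names are mine):

```python
def forward_layer(x, W, F):
    # Each output neuron i computes F(sum_j W[i][j] * x[j]).
    return [F(sum(w * xj for w, xj in zip(row, x))) for row in W]

def feed_forward(x, W, V, F):
    # Two weight layers: W (input -> hidden), V (hidden -> output); no loops.
    return forward_layer(forward_layer(x, W, F), V, F)

step = lambda z: 1.0 if z >= 0 else 0.0
out = feed_forward([1.0, 0.0], [[1.0, 1.0], [1.0, -1.0]], [[1.0, 1.0]], step)
```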

Slide 6: Replacement of Threshold Neurons with Sigmoid (Differentiable) Neurons: threshold versus sigmoid.
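The difference the slide points to can be made concrete (illustrative Python; the step's threshold is fixed at 0 here for simplicity): the step function has no usable derivative, while the sigmoid has a convenient closed-form one.

```python
import math

def threshold(z):
    # Original unit: a step, not differentiable at 0, zero gradient elsewhere.
    return 1.0 if z >= 0 else 0.0

def sigmoid(z):
    # Smooth replacement: differentiable everywhere, so gradient descent applies.
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(z):
    s = sigmoid(z)
    return s * (1.0 - s)  # closed form reused by backpropagation
```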

Slide 7: Reason for Explosion of Interest. Two coincident effects (around 1985-87): the (re-)discovery of mathematical tools and algorithms for handling large networks, and the availability (hurray for Intel and company!) of sufficient computing power to make experiments practical.

Slide 8: Some Properties of NNs. Universal: can represent and accomplish any task. Uniform: "programming" is changing weights. Automatic: algorithms for automatic programming and learning.

Slide 9: Networks Are Universal. All logical functions can be represented by a three-level (loop-free) network (McCulloch-Pitts). All continuous functions (and more) can be represented by three-level feed-forward networks (Cybenko et al.). Networks can self-organize (without a teacher). Networks serve as associative memories.

Slide 10: Universality. McCulloch-Pitts: adaptive logic gates can represent any logic function. Cybenko: any continuous function is representable by a three-level NN.
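As a concrete illustration of the logic-gate claim (my example, not from the talk): XOR is not computable by a single threshold unit, yet a three-level arrangement of threshold gates computes it.

```python
def gate(inputs, weights, threshold):
    # McCulloch-Pitts threshold gate.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(x1, x2):
    h_or = gate([x1, x2], [1, 1], 1)        # OR
    h_nand = gate([x1, x2], [-1, -1], -1)   # NAND: fires unless both inputs are 1
    return gate([h_or, h_nand], [1, 1], 2)  # AND of the two hidden gates = XOR
```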

Slide 11: Networks Can "LEARN" and Generalize (Algorithms). One neuron (Perceptron and Adaline): very popular in the 1960s to early 70s, but limited in representability (only linearly separable functions). Feed-forward networks (back-propagation): currently the most popular networks (1987 to now). Kohonen self-organizing networks (1980s to now; loops). Attractor networks (loops).

Slide 12: Learnability (Automatic Programming). One neuron: the Perceptron and Adaline algorithms (Rosenblatt and Widrow-Hoff; 1960s to now). Feed-forward networks: backpropagation (1987 to now). Associative memories and looped networks ("attractors"; 1990s to now).
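The Perceptron rule mentioned above can be sketched as follows (an illustrative Python sketch; the learning rate and epoch count are my choices):

```python
def train_perceptron(samples, lr=0.1, epochs=100):
    # Rosenblatt's rule: nudge the weights by the error on each example.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = t - y                      # 0 when correct, +/-1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# AND is linearly separable, so the rule converges; XOR would cycle forever.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(and_data)
```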

Slide 13: Generalizability. Typically one trains a network on a sample set of examples and then uses it on the general class. Training can be slow, but execution is fast.

Slide 14: Pattern Identification (note: the neuron is trained). Figure: a Perceptron with its weights.

Slide 15: Feed-Forward Network. Figure: a feed-forward network with two layers of weights.

Slide 16: Classical Applications (1986-1997). "NetTalk": text to speech. ZIP codes: handwriting analysis. GloveTalk: sign language to speech. Data and picture compression: the "bottleneck". Steering of an automobile (up to 55 m.p.h.). Market predictions. Associative memories. Cognitive modeling (especially reading). The phonetic typewriter (Finnish).

Slide 17: Neural Network. Once the architecture is fixed, the only free parameters are the weights. Thus: uniform programming; potentially automatic programming; a search for learning algorithms.

Slide 18: Programming: Just Find the Weights! AUTOMATIC PROGRAMMING. One neuron: Perceptron or Adaline. Multi-level: gradient descent on continuous neurons (a sigmoid instead of a step function).
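Replacing the step with a sigmoid makes the error differentiable in the weights, so "finding the weights" becomes gradient descent. A one-neuron sketch (illustrative; the learning rate and epoch count are my choices):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_sigmoid_neuron(samples, lr=0.5, epochs=2000):
    # Gradient descent on squared error for one continuous (sigmoid) neuron.
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = sigmoid(z)
            delta = (y - t) * y * (1.0 - y)  # d(error)/dz, using sigmoid'(z) = y(1-y)
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# Learns OR: outputs drift toward 1 for (0,1), (1,0), (1,1) and toward 0 for (0,0).
or_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_sigmoid_neuron(or_data)
```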

Slide 19: Prediction. Diagram: the input/output signal is fed to the NN through a delay, and the NN's prediction is compared with the actual value.
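One common way to realize the delay element is to train on sliding windows of the signal; a sketch (the helper name is mine) of turning a series into input/target pairs:

```python
def make_windows(series, k):
    # Pair each window of k past values with the value the net should predict.
    return [(series[i:i + k], series[i + k]) for i in range(len(series) - k)]

squares = [0, 1, 4, 9, 16, 25, 36]
pairs = make_windows(squares, 3)
# pairs[0] is ([0, 1, 4], 9): given three past values, predict the next one.
```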

Slide 20: Training NN to Predict.

Slide 21: Finite Element Method: a numerical method for solving PDEs with many user-chosen parameters. Goal: replace user expertise with NNs.

Slide 22: FEM Flow Chart.

Slide 23: Problems and Methods.

Slide 24: Finite Element Method and Neural Networks: place a mesh on the body; predict where to adapt the mesh.

Slide 25: Placing a Mesh on the Body (Manevitz, Givoli and Yousef). Need to place geometry on topology. Method: use the Kohonen algorithm. Idea: identify neurons with FEM nodes; identify the weights of nodes with geometric locations; identify topology with adjacency. Result: equi-probable placement.

Slide 26: Kohonen Placement for FEM. Include slide from Malik's work.

Slide 27: Self-Organizing Network: weights from the input to the neurons; topology between the neurons.

Slide 28: Self-Organizing Network. Weights from the input give a "location" to each neuron. The Kohonen algorithm produces a "winner" neuron. After training, close input patterns have topologically close winners. The result is an equi-probable continuous mapping (learned without a teacher).
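A one-dimensional Kohonen sketch (illustrative Python; the decay schedules are my choices) showing the winner-plus-neighbours update:

```python
import math, random

def train_som(data, n_neurons, epochs=200, lr0=0.5, radius0=2.0):
    # 1-D Kohonen map: the winning (closest) neuron and its topological
    # neighbours move toward each input, so neighbouring neurons end up
    # responding to nearby inputs (an equi-probable placement).
    w = [random.random() for _ in range(n_neurons)]
    for t in range(epochs):
        lr = lr0 * (1.0 - t / epochs)                    # learning rate decays
        radius = max(radius0 * (1.0 - t / epochs), 0.5)  # neighbourhood shrinks
        for x in data:
            winner = min(range(n_neurons), key=lambda i: abs(w[i] - x))
            for i in range(n_neurons):
                h = math.exp(-((i - winner) ** 2) / (2.0 * radius ** 2))
                w[i] += lr * h * (x - w[i])
    return w

random.seed(0)
weights = train_som([random.random() for _ in range(200)], 5)
# The five neurons spread out over [0, 1], roughly matching the input density.
```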

Slide 29: Placement of Mesh via Self-Organizing NNs.

Slide 30: Placement of Mesh via Self-Organizing NNs (2). Iteration 0; iteration 500, quality = 288; iteration 2000, quality = 238; iteration 6000, quality = 223; iteration 12000, quality = 208; iteration 30000, quality = 202.

Slide 31: Comparison of NN and PLTMG: PLTMG (249 nodes) versus NN (225 nodes), quality = 279.

          Node Error    Value Error
  PLTMG   2.4 E-...     ... E-02
  NN      7.5 E-...     ... E-03

Slide 32: FEM Temporal Adaptive Meshes.

Slide 33: Prediction of Refinement of Elements. The method simulates time. The current adaptive method uses the gradient and can simply MISS all the action. We use NNs to PREDICT the gradient. Under development by Manevitz, Givoli and Bitar.

Slide 34: Training NN to Predict (2).

Slide 35: Refinement Predictors: need to choose features; need to identify the kinds of elements.

Slide 36: Other Predictions? The stock market (really!). Credit card fraud (MasterCard, USA).

Slide 37: Surfer's Apprentice Program (Manevitz and Yousef). Make a "model" of the user for retrieving information from the internet. There are many issues; here we focus on the retrieval of new pages similar to other pages of interest to the user. Note: ONLY POSITIVE DATA is available.

Slide 38: (figure only).

Slide 39: Bottleneck Network. Train it to compute the identity on sample data; it should then be the identity only on similar data: a NOVELTY FILTER.
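A minimal linear bottleneck sketch of this idea (my illustration; a real novelty filter would use a nonlinear multilayer net): train a dim -> k -> dim autoencoder toward the identity on sample data, then use the reconstruction error to flag novelty.

```python
import random

def train_bottleneck(data, dim, k, lr=0.05, epochs=500):
    # Tiny linear autoencoder: encode dim -> k -> dim (k < dim is the bottleneck),
    # trained toward the identity on the sample data only.
    random.seed(1)
    enc = [[random.uniform(-0.5, 0.5) for _ in range(dim)] for _ in range(k)]
    dec = [[random.uniform(-0.5, 0.5) for _ in range(k)] for _ in range(dim)]
    for _ in range(epochs):
        for x in data:
            h = [sum(row[j] * x[j] for j in range(dim)) for row in enc]
            y = [sum(dec[i][a] * h[a] for a in range(k)) for i in range(dim)]
            e = [y[i] - x[i] for i in range(dim)]
            # Gradient step on the squared reconstruction error
            # (encoder gradient computed before the decoder is updated).
            for a in range(k):
                g = sum(e[i] * dec[i][a] for i in range(dim))
                for j in range(dim):
                    enc[a][j] -= lr * g * x[j]
            for i in range(dim):
                for a in range(k):
                    dec[i][a] -= lr * e[i] * h[a]
    def model(x):
        h = [sum(row[j] * x[j] for j in range(dim)) for row in enc]
        return [sum(dec[i][a] * h[a] for a in range(k)) for i in range(dim)]
    return model

# Familiar data lies on the line y = x; points off the line should look novel.
data = [[0.1 * t, 0.1 * t] for t in range(-5, 6)]
model = train_bottleneck(data, dim=2, k=1)

def err(x):
    return sum((a - b) ** 2 for a, b in zip(x, model(x)))
# err([0.5, 0.5]) is small (familiar); err([0.5, -0.5]) is large (novel).
```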

Slide 40: How Well Does It Work? Tested on the standard Reuters data set, using 25% for training and withholding information on the representation. It is the best method for retrieval using only positive training (better than SVM, etc.).

Slide 41: How to Help Intel? (Make billions? Reset NASDAQ?) Branch prediction? (Note the similarity to FEM refinement.) Perhaps it can be used to build a predictor that is even user- or application-dependent. (Note: neural activity is, I am told, natural for VLSI design, and several such chips have been produced.)

Slide 42: Other Different Directions. Modify the basic model to handle temporal adaptivity (which occurs in real neurons, according to the latest biological findings). Apply NNs to modeling human diseases, etc.