Advanced information retrieval

Presentation transcript:

Advanced information retrieval Chapter 02: Modeling - Neural Network Model

Neural Network Model

A neural network is an oversimplified representation of the neuron interconnections in the human brain:
- nodes are processing units
- edges are synaptic connections
- the strength of a propagating signal is modelled by a weight assigned to each edge
- the state of a node is defined by its activation level
- depending on its activation level, a node might issue an output signal

Neural Networks

- Complex learning systems recognized in animal brains
- Single neuron has simple structure
- Interconnected sets of neurons perform complex learning tasks
- Human brain has 10^15 synaptic connections
- Artificial Neural Networks attempt to replicate non-linear learning found in nature

[Figure: biological neuron, with dendrites, cell body, and axon labelled]

Neural Networks (cont'd)

- Dendrites gather inputs from other neurons and combine information
- Then generate non-linear response when threshold reached
- Signal sent to other neurons via axon
- Artificial neuron model is similar
- Data inputs (x_i) are collected from upstream neurons and fed into a combination function (sigma)

Neural Networks (cont'd)

- Activation function reads combined input and produces non-linear response (y)
- Response channeled downstream to other neurons

What problems are applicable to Neural Networks?
- Quite robust with respect to noisy data
- Can learn and work around erroneous data
- Results opaque to human interpretation
- Often require long training times
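The artificial neuron just described can be sketched in a few lines of Python (illustrative only; the names and values are mine, not from the slides):

```python
import math

def sigmoid(x):
    # Non-linear activation: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Combination function: weighted sum of upstream inputs plus bias
    net = bias + sum(w * x for w, x in zip(weights, inputs))
    # Activation function: non-linear response sent downstream
    return sigmoid(net)

print(neuron([0.3, 0.9], [0.5, 0.2], bias=0.1))  # ~0.606 for these made-up values
```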

Input and Output Encoding

- Neural Networks require attribute values encoded to [0, 1]

Numeric:
- Apply min-max normalization to continuous variables
- Works well when Min and Max are known
- Also assumes new data values occur within the Min-Max range
- Values outside the range may be rejected or mapped to Min or Max
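A minimal sketch of min-max normalization with out-of-range values mapped to Min or Max (the function name is my own):

```python
def min_max_normalize(x, x_min, x_max):
    # Clip values outside the known [Min, Max] range
    x = max(x_min, min(x, x_max))
    # Scale into [0, 1]
    return (x - x_min) / (x_max - x_min)

print(min_max_normalize(50, 0, 100))   # 0.5
print(min_max_normalize(120, 0, 100))  # 1.0 -- out-of-range value mapped to Max
```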

Input and Output Encoding (cont'd)

- Neural Networks always return continuous values in [0, 1]
- Many classification problems have two outcomes
- Solution uses a threshold established a priori in a single output node to separate classes
- For example, target variable is "leave" or "stay"
- Threshold rule is "leave if output >= 0.67"
- A single output node value of 0.72 classifies the record as "leave"
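The a priori threshold rule from the example, as a sketch (the helper name is hypothetical):

```python
def classify(output, threshold=0.67):
    # Single continuous output node in [0, 1], split by an a priori threshold
    return "leave" if output >= threshold else "stay"

print(classify(0.72))  # 'leave', matching the slide's example
```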

Simple Example of a Neural Network

[Figure: feedforward network with input nodes 1, 2, 3; hidden nodes A, B; output node Z; connection weights W1A, W1B, W2A, W2B, W3A, W3B, WAZ, WBZ; bias weights W0A, W0B, W0Z; layers labelled Input Layer, Hidden Layer, Output Layer]

- Neural Network consists of a layered, feedforward, completely connected network of nodes
- Feedforward restricts network flow to a single direction
- Flow does not loop or cycle
- Network composed of two or more layers

Simple Example of a Neural Network (cont'd)

- Most networks have Input, Hidden, and Output layers
- Network may contain more than one hidden layer
- Network is completely connected: each node in a given layer is connected to every node in the next layer
- Every connection has a weight (W_ij) associated with it
- Weight values are randomly assigned between 0 and 1 by the algorithm
- Number of input nodes depends on the number of predictors
- Number of hidden and output nodes is configurable
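One possible setup for such a completely connected 3-2-1 network with random initial weights in [0, 1] (a sketch; the storage layout is my choice, not prescribed by the slides):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

def init_layer(n_nodes, n_inputs):
    # One weight list per node: a bias weight W0 plus one weight per input
    return [[random.random() for _ in range(n_inputs + 1)] for _ in range(n_nodes)]

hidden_weights = init_layer(n_nodes=2, n_inputs=3)  # Nodes A and B
output_weights = init_layer(n_nodes=1, n_inputs=2)  # Node Z
```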

Simple Example of a Neural Network (cont'd)

[Figure: the same 3-2-1 network diagram as in the previous slide]

- Combination function produces a linear combination of node inputs and connection weights, reduced to a single scalar value
- For node j, x_ij is the ith input and W_ij is the weight associated with the ith input; there are I + 1 inputs to node j
- x_1, x_2, ..., x_I are inputs from upstream nodes
- x_0 is a constant input with value 1.0, so each node has an extra bias input W_0j * x_0j = W_0j

net_j = Σ_i W_ij * x_ij, summing over i = 0, 1, ..., I

Simple Example of a Neural Network (cont'd)

Given the weights and inputs:
  W0A = 0.5   W0B = 0.7   W0Z = 0.5
  x1 = 0.4    W1A = 0.6   W1B = 0.9   WAZ = 0.9
  x2 = 0.2    W2A = 0.8   W2B = 0.8   WBZ = 0.9
  x3 = 0.7    W3A = 0.6   W3B = 0.4

the scalar value computed for hidden layer Node A equals:

net_A = W0A(1.0) + W1A*x1 + W2A*x2 + W3A*x3 = 0.5 + 0.6(0.4) + 0.8(0.2) + 0.6(0.7) = 1.32

- For Node A, net_A = 1.32 is the input to the activation function
- Neurons "fire" in biological organisms: signals are sent between neurons when the combination of inputs crosses a threshold
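Reproducing net_A from the slide's numbers in Python (a minimal sketch; variable names mirror the slide's notation):

```python
x1, x2, x3 = 0.4, 0.2, 0.7
W0A, W1A, W2A, W3A = 0.5, 0.6, 0.8, 0.6

# Combination function for Node A: bias weight times constant input 1.0,
# plus the weighted inputs from the three input nodes
net_A = W0A * 1.0 + W1A * x1 + W2A * x2 + W3A * x3
print(round(net_A, 2))  # 1.32
```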

Simple Example of a Neural Network (cont'd)

- Firing response is not necessarily linearly related to the increase in input stimulation
- Neural Networks model this behavior using a non-linear activation function
- The sigmoid function is most commonly used:

f(x) = 1 / (1 + e^(-x))

- In Node A, the sigmoid function takes net_A = 1.32 as input and produces output f(1.32) = 0.7892
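The activation step, verifying the slide's 0.7892 (sketch):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(round(sigmoid(1.32), 4))  # 0.7892
```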

Simple Example of a Neural Network (cont'd)

- Node A outputs 0.7892 along its connection to Node Z, where it becomes a component of net_Z
- Before net_Z can be computed, the contribution from Node B is required; by the same steps:

net_B = 0.7 + 0.9(0.4) + 0.8(0.2) + 0.4(0.7) = 1.50, and f(1.50) = 0.8176

- Node Z combines the outputs from Node A and Node B through net_Z:

net_Z = W0Z(1.0) + WAZ*f(net_A) + WBZ*f(net_B) = 0.5 + 0.9(0.7892) + 0.9(0.8176) = 1.9461

Simple Example of a Neural Network (cont'd)

- Inputs to Node Z are not data attribute values; rather, they are the outputs from the sigmoid function in the upstream nodes
- f(net_Z) = f(1.9461) = 0.8750 is the value output by the Neural Network on this first pass
- It represents the predicted value for the target variable, given the first observation
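Putting the whole first pass together (a sketch, with helper names of my own choosing; the weights are those given in the slides):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def node_output(weights, inputs):
    # weights[0] is the bias weight W0 (constant input x0 = 1.0)
    net = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return sigmoid(net)

x = [0.4, 0.2, 0.7]
W_A = [0.5, 0.6, 0.8, 0.6]   # W0A, W1A, W2A, W3A
W_B = [0.7, 0.9, 0.8, 0.4]   # W0B, W1B, W2B, W3B
W_Z = [0.5, 0.9, 0.9]        # W0Z, WAZ, WBZ

out_A = node_output(W_A, x)              # ~0.7892
out_B = node_output(W_B, x)              # ~0.8176
out_Z = node_output(W_Z, [out_A, out_B])
print(round(out_Z, 4))                   # 0.875, the slide's first-pass prediction
```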

Sigmoid Activation Function

- Sigmoid function combines nearly linear, curvilinear, and nearly constant behavior, depending on the input value
- Function is nearly linear for domain values -1 < x < 1
- Becomes curvilinear as values move away from the center
- At extreme values, f(x) is nearly constant
- Moderate increments in x produce a variable increase in f(x), depending on the location of x
- Sometimes called the "squashing function": takes real-valued input and returns values in [0, 1]
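To see the squashing behavior concretely, this illustrative snippet prints f(x) and the gain from a unit step in x at several points (the sample points are my own):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Near x = 0 a unit step moves f(x) a lot; at the extremes it barely moves it
for x in [-10, -1, 0, 1, 10]:
    print(f"x = {x:>3}  f(x) = {sigmoid(x):.4f}  f(x+1) - f(x) = {sigmoid(x + 1) - sigmoid(x):.4f}")
```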

Back-Propagation

- Neural Networks are a supervised learning method and require a target variable
- Each observation passed through the network results in an output value
- The output value is compared to the actual value of the target variable: Error = (Actual - Output)
- Prediction error is analogous to the residuals in regression models
- Most networks use the Sum of Squared Errors (SSE) to measure how well predictions fit the target values

Back-Propagation (cont'd)

- Squared prediction errors are summed over all output nodes and all records in the data set:

SSE = Σ_records Σ_output nodes (actual - output)²

- Model weights are constructed so as to minimize SSE
- The weight values that minimize SSE are unknown in advance; they must be estimated from the data set
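A sketch of the SSE computation described above (the records here are made up for illustration):

```python
def sse(records):
    # Sum over all records and all output nodes of (actual - output)^2
    return sum((a - o) ** 2
               for actuals, outputs in records
               for a, o in zip(actuals, outputs))

# Three records, one output node each: (actual target values, network outputs)
data = [([1.0], [0.875]), ([0.0], [0.2]), ([1.0], [0.7])]
print(round(sse(data), 6))  # 0.145625
```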

Neural Network for IR

From the work by Wilkinson & Hingston, SIGIR'91

[Figure: three-layer network with query term nodes (k_a, k_b, k_c), document term nodes (k_1, ..., k_t), and document nodes (d_1, ..., d_j, d_j+1, ..., d_N)]

Neural Network for IR

- Three-layer network
- Signals propagate across the network
- First level of propagation: query terms issue the first signals; these signals propagate across the network to reach the document nodes
- Second level of propagation: document nodes might themselves generate new signals which affect the document term nodes; document term nodes might respond with new signals of their own

Quantifying Signal Propagation

- Normalize signal strength (MAX = 1)
- Query terms emit an initial signal equal to 1
- Weight associated with an edge from a query term node k_i to a document term node k_i:

W_iq = w_iq / sqrt( Σ_i w_iq² )

- Weight associated with an edge from a document term node k_i to a document node d_j:

W_ij = w_ij / sqrt( Σ_i w_ij² )
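Both edge weights are cosine-style normalizations of the underlying term weights. A minimal sketch (the function name is mine):

```python
import math

def normalize(weights):
    # W_i = w_i / sqrt(sum_i w_i^2), so the full weight vector has length 1
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]

print(normalize([2.0, 1.0, 2.0]))  # [0.666..., 0.333..., 0.666...]
```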

Quantifying Signal Propagation (cont'd)

- After the first level of signal propagation, the activation level of a document node d_j is given by:

Σ_i W_iq * W_ij = ( Σ_i w_iq * w_ij ) / ( sqrt( Σ_i w_iq² ) * sqrt( Σ_i w_ij² ) )

which is exactly the ranking of the Vector model
- New signals might be exchanged among document term nodes and document nodes in a process analogous to a feedback cycle
- A minimum threshold should be enforced to avoid spurious signal generation
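A sketch of the first propagation level, checking that the document node's activation equals the cosine ranking of the Vector model (the term weights below are hypothetical):

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def activation(w_query, w_doc):
    # Query term nodes emit signal 1; each edge scales the signal by its
    # normalized weight; the document node sums all incoming signals
    return sum((wq / norm(w_query)) * (wd / norm(w_doc))
               for wq, wd in zip(w_query, w_doc))

w_query = [1.0, 0.0, 0.5, 0.0, 1.0]  # hypothetical weights over 5 index terms
w_doc   = [0.8, 0.3, 0.0, 0.0, 0.6]

cosine = sum(a * b for a, b in zip(w_query, w_doc)) / (norm(w_query) * norm(w_doc))
print(round(activation(w_query, w_doc), 6) == round(cosine, 6))  # True
```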

Conclusions

- The model provides an interesting formulation of the IR problem
- The model has not been tested extensively
- It is not clear what improvements the model might provide