Introduction to Neural Networks
Gianluca Pollastri, Head of Lab
School of Computer Science and Informatics and Complex and Adaptive Systems Labs
University College Dublin
gianluca.pollastri@ucd.ie
Credits
Geoffrey Hinton, University of Toronto: I borrowed some of his slides for the “Neural Networks” and “Computation in Neural Networks” courses.
Paolo Frasconi, University of Florence: he taught me Neural Networks in the first place (*and* I borrowed some of his slides too!).
Recurrent Neural Networks (RNN)
One of the earliest versions: Jeffrey Elman, 1990, Cognitive Science.
Problem: it isn't easy to represent time with feedforward neural nets; usually time is represented with space (i.e., by widening the input to cover a window of time steps).
Attempt: design networks with memory.
RNNs
The idea: use discrete time steps, and treat the hidden layer at time t-1 as an extra input at time t.
This effectively removes cycles: we can model the network with an FFNN, and model memory explicitly.
[Diagram: recurrent network with input I_t, memory X_t and output O_t; d = delay element feeding X_t back as X_{t-1}]
BPTT: BackPropagation Through Time.
If O_t is the output at time t, I_t the input at time t, and X_t the memory (hidden layer) at time t, we can model the dependencies as follows:
X_t = f(X_{t-1}, I_t)
O_t = g(X_t, I_t)
BPTT
We can model both f() and g() with (possibly multilayered) networks.
We can transform the recurrent network by unrolling it in time.
Backpropagation works on any DAG, and an RNN becomes one once it is unrolled.
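To make the unrolling concrete, here is a minimal NumPy sketch of the forward pass, assuming f() is a single tanh layer and g() a linear layer; all names (W_xx, W_xi, n_hid, ...) are illustrative, not from the slides:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 3

# Illustrative weights: W_xx carries the memory, W_xi reads the input,
# W_ox and W_oi realise the output function g().
W_xi = rng.normal(0.0, 0.1, (n_hid, n_in))
W_xx = rng.normal(0.0, 0.1, (n_hid, n_hid))
W_ox = rng.normal(0.0, 0.1, (n_out, n_hid))
W_oi = rng.normal(0.0, 0.1, (n_out, n_in))

def f(X_prev, I_t):
    # memory update: X_t = f(X_{t-1}, I_t)
    return np.tanh(W_xx @ X_prev + W_xi @ I_t)

def g(X_t, I_t):
    # output: O_t = g(X_t, I_t)
    return W_ox @ X_t + W_oi @ I_t

I = rng.normal(size=(5, n_in))   # a sequence of 5 input vectors
X = np.zeros(n_hid)              # X_0 = 0
O = []
for t in range(len(I)):          # the same f and g are applied at every step
    X = f(X, I[t])
    O.append(g(X, I[t]))

Because the same weights are reused at every step, the unrolled network is one FFNN replicated T times along the sequence.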
[Diagram: the network unrolled in time, with copies I_{t-2}…I_{t+2}, X_{t-2}…X_{t+2}, O_{t-2}…O_{t+2} chained through the delay links]
gradient in BPTT
GRADIENT(I, O, T) {                   # I = inputs, O = outputs, T = targets
    N := size(O);                     # N renamed, so it does not clash with the targets T
    X_0 := 0;
    for t := 1..N
        X_t := f(X_{t-1}, I_t);       # forward pass through time
    for t := 1..N {
        O_t := g(X_t, I_t);
        g.gradient(O_t - T_t);        # accumulate weight gradients in g
        δ_t := g.deltas(O_t - T_t);   # error injected into X_t
    }
    for t := N..1 {
        f.gradient(δ_t);              # accumulate weight gradients in f
        δ_{t-1} += f.deltas(δ_t);     # propagate error back in time
    }
}
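The pseudocode maps onto NumPy fairly directly. Below is a sketch under simplifying assumptions: f() is one tanh layer, g() a linear readout of X_t alone (the I_t input to g is dropped for brevity), and the loss is squared error; all weight names are illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, N = 4, 8, 3, 5

W_xi = rng.normal(0.0, 0.1, (n_hid, n_in))
W_xx = rng.normal(0.0, 0.1, (n_hid, n_hid))
W_ox = rng.normal(0.0, 0.1, (n_out, n_hid))

I = rng.normal(size=(N, n_in))      # inputs I_1..I_N
Tgt = rng.normal(size=(N, n_out))   # targets T_1..T_N

# Forward pass: unroll in time, keeping every X_t for the backward pass.
X = np.zeros((N + 1, n_hid))        # X[0] is X_0 = 0
for t in range(1, N + 1):
    X[t] = np.tanh(W_xx @ X[t - 1] + W_xi @ I[t - 1])
O = X[1:] @ W_ox.T                  # O_t = g(X_t)

# g.gradient and g.deltas, for the loss L = 0.5 * sum_t ||O_t - T_t||^2.
dO = O - Tgt                        # O_t - T_t
gW_ox = dO.T @ X[1:]                # weight gradient accumulated over all t
delta = dO @ W_ox                   # δ_t: error injected into X_t by g

# f.gradient and f.deltas: walk back from t = N to 1, folding each δ_t
# into f and handing the remainder on to δ_{t-1}.
gW_xx = np.zeros_like(W_xx)
gW_xi = np.zeros_like(W_xi)
carry = np.zeros(n_hid)             # error arriving from X_{t+1}
for t in range(N, 0, -1):
    d = delta[t - 1] + carry
    da = d * (1.0 - X[t] ** 2)      # back through the tanh
    gW_xx += np.outer(da, X[t - 1])
    gW_xi += np.outer(da, I[t - 1])
    carry = W_xx.T @ da             # becomes part of δ_{t-1}

A gradient-descent step would then update each weight matrix, e.g. W_xx -= learning_rate * gW_xx.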
What I will talk about
- Neurons
- Multi-Layered Neural Networks: basic learning algorithm, expressive power, classification
- How can we *actually* train Neural Networks: speeding up training, learning just right (not too little, not too much), figuring out you got it right
- Feed-back networks? Anecdotes on real feed-back networks (Hopfield Nets, Boltzmann Machines)
- Recurrent Neural Networks
- Bidirectional RNNs
- 2D-RNNs
- Concluding remarks
Bidirectional Recurrent Neural Networks (BRNN)
BRNN
F_t = φ(F_{t-1}, U_t)
B_t = β(B_{t+1}, U_t)
Y_t = η(F_t, B_t, U_t)
φ(), β() and η() are realised with NNs.
φ(), β() and η() are independent of t: stationary.
Inference in BRNNs
FORWARD(U) {
    T := size(U);
    F_0 := 0;  B_{T+1} := 0;
    for t := 1..T
        F_t := φ(F_{t-1}, U_t);      # forward chain, left to right
    for t := T..1
        B_t := β(B_{t+1}, U_t);      # backward chain, right to left
    for t := 1..T
        Y_t := η(F_t, B_t, U_t);
    return Y;
}
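A minimal NumPy sketch of FORWARD, assuming φ() and β() are single tanh layers and η() a linear layer; every weight name here is illustrative:

import numpy as np

rng = np.random.default_rng(0)
n_in, n_f, n_b, n_out, T = 4, 6, 6, 3, 7

W_ff = rng.normal(0.0, 0.1, (n_f, n_f))   # φ: forward-chain weights
W_fu = rng.normal(0.0, 0.1, (n_f, n_in))
W_bb = rng.normal(0.0, 0.1, (n_b, n_b))   # β: backward-chain weights
W_bu = rng.normal(0.0, 0.1, (n_b, n_in))
W_yf = rng.normal(0.0, 0.1, (n_out, n_f)) # η: output weights
W_yb = rng.normal(0.0, 0.1, (n_out, n_b))
W_yu = rng.normal(0.0, 0.1, (n_out, n_in))

U = rng.normal(size=(T, n_in))      # input sequence U_1..U_T

F = np.zeros((T + 2, n_f))          # F[0] is F_0 = 0
B = np.zeros((T + 2, n_b))          # B[T+1] = 0
for t in range(1, T + 1):           # forward chain, left to right
    F[t] = np.tanh(W_ff @ F[t - 1] + W_fu @ U[t - 1])
for t in range(T, 0, -1):           # backward chain, right to left
    B[t] = np.tanh(W_bb @ B[t + 1] + W_bu @ U[t - 1])
Y = np.stack([W_yf @ F[t] + W_yb @ B[t] + W_yu @ U[t - 1]
              for t in range(1, T + 1)])

Note that Y_t sees the whole sequence: F_t summarises U_1..U_t and B_t summarises U_t..U_T.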
Learning in BRNNs
GRADIENT(U, Ȳ) {                     # Ȳ = targets
    T := size(U);
    F_0 := 0;  B_{T+1} := 0;
    for t := 1..T
        F_t := φ(F_{t-1}, U_t);
    for t := T..1
        B_t := β(B_{t+1}, U_t);
    for t := 1..T {
        Y_t := η(F_t, B_t, U_t);
        [δF_t, δB_t] := η.backprop&gradient(Y_t - Ȳ_t);   # error at the output
    }
    for t := T..1
        δF_{t-1} += φ.backprop&gradient(δF_t);            # back along the forward chain
    for t := 1..T
        δB_{t+1} += β.backprop&gradient(δB_t);            # back along the backward chain
}
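Continuing the forward sketch above, the two delta chains of GRADIENT propagate independently: δF backwards in t, δB forwards in t. A sketch, again assuming a squared-error loss; Tgt is a stand-in for the targets, and the weight gradients of φ() and β() are elided with comments for brevity:

Tgt = rng.normal(size=(T, n_out))   # stand-in targets Ȳ_1..Ȳ_T
dY = Y - Tgt                        # Y_t - Ȳ_t
gW_yf = dY.T @ F[1:T + 1]           # η's weight gradients
gW_yb = dY.T @ B[1:T + 1]
gW_yu = dY.T @ U
dF = dY @ W_yf                      # δF_t injected by η
dB = dY @ W_yb                      # δB_t injected by η

carry = np.zeros(n_f)
for t in range(T, 0, -1):           # δF_{t-1} += φ.deltas(δF_t)
    da = (dF[t - 1] + carry) * (1.0 - F[t] ** 2)
    # here one would also accumulate gradients for W_ff and W_fu from da
    carry = W_ff.T @ da

carry = np.zeros(n_b)
for t in range(1, T + 1):           # δB_{t+1} += β.deltas(δB_t)
    da = (dB[t - 1] + carry) * (1.0 - B[t] ** 2)
    # likewise for W_bb and W_bu
    carry = W_bb.T @ da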
2D RNNs
Pollastri & Baldi 2002, Bioinformatics.
Baldi & Pollastri 2003, JMLR.
2D RNNs