
1 2806 Neural Computation: Recurrent Networks, Lecture 12, 2005, Ari Visa

2 Agenda
- Some historical notes
- Some theory
- Recurrent networks
- Training
- Conclusions

3 Some Historical Notes
- The recurrent network: "Automata Studies", Kleene 1954
- Kalman filter theory (Rudolf E. Kalman, 1960)
- Controllability and observability: (Zadeh & Desoer, 1963), (Kailath, 1980), (Sontag, 1990), (Lewis & Syrmos, 1995)
- The NARX model (Leontaritis & Billings, 1985)
- The NARX model in the context of neural networks (Chen et al., 1990)
- Recurrent network architectures (Jordan, 1986)
- Omlin and Giles (1996) showed that, using second-order recurrent networks, the correct classification of temporal sequences of finite length is guaranteed.

4 Some Historical Notes
- The idea behind back-propagation through time: (Minsky & Papert, 1969), Werbos (1974), Rumelhart (1986).
- The real-time recurrent learning algorithm (Williams & Zipser, 1989); compare with McBride & Narendra (1965), system identification for tuning the parameters of an arbitrary dynamical system.
- System identification: (Ljung, 1987), (Ljung & Glad, 1994)

5 Some Theory
- Recurrent networks are neural networks with one or more feedback loops.
- The feedback can be of a local or global kind.
- Used as input-output mapping networks, recurrent networks respond temporally to an externally applied input signal → dynamically driven recurrent network.
- The application of feedback enables recurrent networks to acquire state representations, which makes them suitable devices for such diverse applications as nonlinear prediction and modeling, adaptive equalization, speech processing, plant control, and automobile engine diagnostics.

6 Some Theory
- Four specific network architectures will be presented.
- They all incorporate a static multilayer perceptron or parts thereof.
- They all exploit the nonlinear mapping capability of the multilayer perceptron.

7 Some Theory
- Input-Output Recurrent Model → nonlinear autoregressive with exogenous inputs model (NARX): y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1))
- The model has a single input that is applied to a tapped-delay-line memory of q units. It has a single output that is fed back to the input via another tapped-delay-line memory, also of q units.
- The contents of these two tapped-delay-line memories are used to feed the input layer of the multilayer perceptron. The present value of the model input is denoted u(n), and the corresponding value of the model output is denoted y(n+1). The signal vector applied to the input layer of the multilayer perceptron consists of a data window made up as follows: present and past values of the input (exogenous inputs), and delayed values of the output (regressed outputs).

8 NARX
- Consider a recurrent network with a single input and a single output.
- y(n+q) = Φ(x(n), u_q(n)), where q is the dimensionality of the state space and Φ: R^2q → R.
- Provided that the recurrent network is observable, x(n) = Ψ(y_q(n), u_q-1(n)) for some nonlinear mapping Ψ.
- y(n+q) = F(y_q(n), u_q(n)), where u_q-1(n) is contained in u_q(n) as its first (q-1) elements, and the nonlinear mapping F: R^2q → R takes care of both Φ and Ψ.
- Equivalently, y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1))
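As a concrete illustration of the NARX input-output model above, here is a minimal Python/NumPy sketch (my illustration, not the lecture's): the two tapped-delay-line memories are plain arrays holding the last q outputs and inputs, and the static mapping F is a small single-hidden-layer perceptron with placeholder weights.

```python
import numpy as np

def narx_step(y_hist, u_hist, W1, b1, w2, b2):
    """One step of the NARX model:
    y(n+1) = F(y(n), ..., y(n-q+1), u(n), ..., u(n-q+1)),
    with F a single-hidden-layer perceptron (hypothetical weights)."""
    z = np.concatenate([y_hist, u_hist])        # data window fed to the MLP input layer
    h = np.tanh(W1 @ z + b1)                    # hidden layer of the static MLP
    return w2 @ h + b2                          # linear output neuron

q = 3                                           # length of both tapped-delay-line memories
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2 * q)), np.zeros(8)
w2, b2 = rng.normal(size=8), 0.0

y_hist = np.zeros(q)                            # regressed outputs y(n), ..., y(n-q+1)
u_hist = np.zeros(q)                            # present and past inputs u(n), ..., u(n-q+1)
for n in range(20):
    u = np.sin(0.3 * n)                         # external (exogenous) input u(n)
    u_hist = np.concatenate(([u], u_hist[:-1])) # shift the input delay line
    y_next = narx_step(y_hist, u_hist, W1, b1, w2, b2)
    y_hist = np.concatenate(([y_next], y_hist[:-1]))  # feed y(n+1) back through its delay line
```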

9 Some Theory
- State-Space Model
- The hidden neurons define the state of the network. The output of the hidden layer is fed back to the input layer via a bank of unit delays. The input layer consists of a concatenation of feedback nodes and source nodes. The network is connected to the external environment via the source nodes. The number of unit delays used to feed the output of the hidden layer back to the input layer determines the order of the model.
- x(n+1) = f(x(n), u(n))
- y(n) = Cx(n)
- The simple recurrent network (SRN) differs from the main model by replacing the output layer by a nonlinear one and by omitting the bank of unit delays at the output.

10 State-Space Model
- The state of a dynamical system is defined as a set of quantities that summarizes all the information about the past behavior of the system that is needed to uniquely describe its future behavior, except for the purely external effects arising from the applied input (excitation).
- Let the q-by-1 vector x(n) denote the state of a nonlinear discrete-time system. Let the m-by-1 vector u(n) denote the input applied to the system, and the p-by-1 vector y(n) denote the corresponding output of the system.
- The dynamic behavior of the system (noise free) is described by
  x(n+1) = φ(W_a x(n) + W_b u(n)) (the process equation),
  y(n) = Cx(n) (the measurement equation),
  where W_a is a q-by-q matrix, W_b is a q-by-(m+1) matrix, C is a p-by-q matrix, and φ: R^q → R^q is a diagonal map described by φ: [x_1, x_2, ..., x_q]^T → [φ(x_1), φ(x_2), ..., φ(x_q)]^T for some memoryless component-wise nonlinearity φ: R → R.
- The spaces R^m, R^q and R^p are called the input space, state space, and output space → m-input, p-output recurrent model of order q.
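A minimal NumPy sketch of the state-space model above (again my illustration): W_a, W_b and C are placeholder matrices, φ is taken to be a component-wise tanh, and the extra column of W_b carries a bias so that its shape matches the q-by-(m+1) dimension given on the slide.

```python
import numpy as np

q, m, p = 4, 2, 1                               # state, input and output dimensions
rng = np.random.default_rng(1)
Wa = rng.normal(scale=0.5, size=(q, q))         # q-by-q recurrent weight matrix
Wb = rng.normal(scale=0.5, size=(q, m + 1))     # q-by-(m+1) input weights (last column: bias)
C  = rng.normal(size=(p, q))                    # p-by-q output (measurement) matrix

def step(x, u):
    """Process and measurement equations of the q-th order state-space model."""
    y = C @ x                                   # measurement equation: y(n) = C x(n)
    u_aug = np.concatenate([u, [1.0]])          # append 1 so W_b can carry the bias term
    x_next = np.tanh(Wa @ x + Wb @ u_aug)       # process equation: x(n+1) = phi(Wa x(n) + Wb u(n))
    return x_next, y

x = np.zeros(q)                                 # initial state
for n in range(10):
    u = np.array([np.sin(0.2 * n), 1.0 - 0.1 * n])
    x, y = step(x, u)
```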

11 Some Theory
- Recurrent multilayer perceptron (RMLP)
- It has one or more hidden layers. Each computation layer of an RMLP has feedback around it.
- x_I(n+1) = φ_I(x_I(n), u(n))
- x_II(n+1) = φ_II(x_II(n), x_I(n+1)), ...,
- x_O(n+1) = φ_O(x_O(n), x_K(n+1))
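A sketch of the RMLP recursion, assuming tanh computation layers and placeholder weights (my choice): each layer receives its own delayed output together with the fresh output of the layer below it.

```python
import numpy as np

rng = np.random.default_rng(2)

def layer(x_own, x_prev, W_fb, W_in):
    """One computation layer of the RMLP: feedback from its own previous
    output x_own plus the fresh output x_prev of the layer below."""
    return np.tanh(W_fb @ x_own + W_in @ x_prev)

m, d1, d2, p = 2, 5, 4, 1                       # input, hidden-I, hidden-II and output sizes
W = {                                           # placeholder weights for each layer
    "I":  (rng.normal(scale=0.4, size=(d1, d1)), rng.normal(scale=0.4, size=(d1, m))),
    "II": (rng.normal(scale=0.4, size=(d2, d2)), rng.normal(scale=0.4, size=(d2, d1))),
    "O":  (rng.normal(scale=0.4, size=(p,  p)),  rng.normal(scale=0.4, size=(p,  d2))),
}

xI, xII, xO = np.zeros(d1), np.zeros(d2), np.zeros(p)
for n in range(10):
    u = rng.normal(size=m)                      # external input u(n)
    xI  = layer(xI,  u,   *W["I"])              # x_I(n+1)  = phi_I (x_I(n),  u(n))
    xII = layer(xII, xI,  *W["II"])             # x_II(n+1) = phi_II(x_II(n), x_I(n+1))
    xO  = layer(xO,  xII, *W["O"])              # x_O(n+1)  = phi_O (x_O(n),  x_II(n+1))
```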

12 Some Theory
- Second-order network
- When the induced local field v_k is combined using multiplications, we refer to the neuron as a second-order neuron.
- A second-order recurrent network:
  v_k(n) = b_k + Σ_i Σ_j w_kij x_i(n) u_j(n)
  x_k(n+1) = φ(v_k(n)) = 1 / (1 + exp(-v_k(n)))
- Note that the product x_i(n) u_j(n) represents the pair {state, input}; a positive weight w_kij represents the presence of the transition {state, input} → {next state}, while a negative weight represents the absence of the transition. The state transition is described by δ(x_i, u_j) = x_k.
- Second-order networks are used for representing and learning deterministic finite-state automata (DFA).
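A minimal sketch of a second-order recurrent network (my illustration): the placeholder weight tensor w[k, i, j] couples state neuron i and input symbol j into neuron k, states are read as a (roughly) one-hot encoding of a DFA state, and the input symbols are one-hot encoded as well.

```python
import numpy as np

q, m = 3, 2                                     # number of state neurons and input symbols
rng = np.random.default_rng(3)
w = rng.normal(size=(q, q, m))                  # w[k, i, j]: weight of pair (state i, input j) into neuron k
b = np.zeros(q)                                 # biases b_k

def second_order_step(x, u):
    """x_k(n+1) = sigmoid(b_k + sum_i sum_j w_kij x_i(n) u_j(n))."""
    v = b + np.einsum('kij,i,j->k', w, x, u)    # second-order induced local fields
    return 1.0 / (1.0 + np.exp(-v))             # logistic activation

x = np.array([1.0, 0.0, 0.0])                   # initial state (e.g. DFA start state, one-hot)
for symbol in [0, 1, 1, 0]:                     # an input string over a two-letter alphabet
    u = np.eye(m)[symbol]                       # one-hot encoding of the current input symbol
    x = second_order_step(x, u)
```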

13 Some Theory
- A recurrent network is said to be controllable if an initial state is steerable to any desired state within a finite number of time steps.
- A recurrent network is said to be observable if the state of the network can be determined from a finite set of input/output measurements.
- A state x̄ is said to be an equilibrium state if, for an input ū, it satisfies the condition x̄ = φ(W_a x̄ + W_b ū).
- Set x̄ = 0 and ū = 0 → 0 = φ(0).
- Linearize x̄ = φ(W_a x̄ + W_b ū) by expanding it as a Taylor series around x̄ = 0 and ū = 0 and retaining first-order terms:
  δx(n+1) = φ'(0) W_a δx(n) + φ'(0) w_b δu(n),
  where δx(n) and δu(n) are small displacements and the q-by-q matrix φ'(0) is the Jacobian of φ(v) with respect to its argument v.
- Writing A = φ'(0) W_a and b = φ'(0) w_b: δx(n+1) = A δx(n) + b δu(n) and δy(n) = c^T δx(n).
- The linearized system represented by δx(n+1) = A δx(n) + b δu(n) is controllable if the matrix M_c = [A^(q-1) b, ..., A b, b] is of rank q, that is, full rank, because then the linearized process equation above has a unique solution.
- The matrix M_c is called the controllability matrix of the linearized system.
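The controllability rank test can be written directly in NumPy. In this sketch A and b stand for the linearized matrices φ'(0)W_a and φ'(0)w_b of the slide; the random values are only placeholders.

```python
import numpy as np

def is_locally_controllable(A, b):
    """Builds M_c = [A^(q-1) b, ..., A b, b] and checks whether it has full rank q."""
    q = A.shape[0]
    Mc = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(q - 1, -1, -1)])
    return np.linalg.matrix_rank(Mc) == q

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))                     # A = phi'(0) W_a of the linearized process equation
b = rng.normal(size=3)                          # b = phi'(0) w_b
print(is_locally_controllable(A, b))            # True for almost every random (A, b)
```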

14 Some Theory
- In a similar way: δy(n) = c^T δx(n) → M_o = [c, A^T c, ..., (A^T)^(q-1) c].
- The linearized system represented by δx(n+1) = A δx(n) + b δu(n) and δy(n) = c^T δx(n) is observable if the matrix M_o = [c, A^T c, ..., (A^T)^(q-1) c] is of rank q, that is, full rank.
- The matrix M_o is called the observability matrix of the linearized system.
- Consider a recurrent network and its linearized version around the origin. If the linearized system is controllable, then the recurrent network is locally controllable around the origin.
- Consider a recurrent network and its linearized version around the origin. If the linearized system is observable, then the recurrent network is locally observable around the origin.
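The companion observability test, again a sketch with placeholder A and c:

```python
import numpy as np

def is_locally_observable(A, c):
    """Builds M_o = [c, A^T c, ..., (A^T)^(q-1) c] and checks whether it has full rank q."""
    q = A.shape[0]
    Mo = np.column_stack([np.linalg.matrix_power(A.T, k) @ c for k in range(q)])
    return np.linalg.matrix_rank(Mo) == q

rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3))                     # linearized state matrix A = phi'(0) W_a
c = rng.normal(size=3)                          # output weight vector of the linearized model
print(is_locally_observable(A, c))
```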

15 Some Theory
- Computational power of recurrent networks
- Theorem I: All Turing machines may be simulated by fully connected recurrent networks built on neurons with sigmoid activation functions.
- The Turing machine consists of:
  1) a control unit
  2) a linear tape
  3) a read-write head

16 Some Theory
- Theorem II: NARX networks with one layer of hidden neurons with bounded, one-sided saturated activation functions and a linear output neuron can simulate fully connected recurrent networks with bounded, one-sided saturated activation functions, except for a linear slowdown.
- Bounded, one-sided saturated (BOSS) activation functions:
  1) a ≤ φ(x) ≤ b, a ≠ b, for all x ∈ R.
  2) There exist values s and S such that φ(x) = S for all x ≤ s.
  3) φ(x_1) ≠ φ(x_2) for some x_1 and x_2.
- NARX networks with one hidden layer of neurons with BOSS activation functions and a linear output neuron are Turing equivalent.
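For concreteness, a saturating ramp (my choice of example, not the lecture's) satisfies all three BOSS conditions with a = 0, b = 1, s = 0 and S = 0:

```python
import numpy as np

def boss_ramp(x):
    """A BOSS activation: 0 <= phi(x) <= 1 (bounded), phi(x) = 0 for all x <= 0
    (one-sided saturation with s = 0, S = 0), and phi is not constant overall."""
    return np.clip(x, 0.0, 1.0)

xs = np.array([-5.0, -0.1, 0.0, 0.3, 0.9, 4.0])
print(boss_ramp(xs))                            # [0.  0.  0.  0.3 0.9 1. ]
```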

17 Training
- Epochwise training: for a given epoch, the recurrent network starts running from some initial state until it reaches a new state, at which point the training is stopped and the network is reset to an initial state for the next epoch.
- Continuous training: this is suitable for situations where there are no reset states available and/or on-line learning is required. The network learns while signal processing is being performed by the network.

18 Training
- The back-propagation-through-time algorithm (BPTT) is an extension of the standard back-propagation algorithm. It may be derived by unfolding the temporal operation of the network into a layered feedforward network, the topology of which grows by one layer at every time step.

19 Training
- Epochwise back-propagation through time
- E_total(n_0, n_1) = ½ Σ_{n=n_0..n_1} Σ_{j∈A} e_j²(n)
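A minimal epochwise BPTT sketch for the state-space model of slide 10 (tanh state update, linear readout, toy placeholder data, all my assumptions): it unfolds the network over one epoch, accumulates the cost above, and backpropagates through the unfolded copies, summing the gradient contributions because every copy shares the same weights.

```python
import numpy as np

rng = np.random.default_rng(6)
q, m, p, N = 4, 1, 1, 20                        # sizes and epoch length (n_1 - n_0)
Wa = rng.normal(scale=0.4, size=(q, q))
Wb = rng.normal(scale=0.4, size=(q, m))
C  = rng.normal(size=(p, q))

u = rng.normal(size=(N, m))                     # inputs u(n_0), ..., u(n_1 - 1)
d = rng.normal(size=(N + 1, p))                 # desired responses d(n_0), ..., d(n_1)

# ---- forward pass: unfold the network over the whole epoch ----
x = np.zeros((N + 1, q))                        # x[n] is the state at time n (x[0] = initial state)
for n in range(N):
    x[n + 1] = np.tanh(Wa @ x[n] + Wb @ u[n])

e = d - x @ C.T                                 # errors e(n) = d(n) - C x(n) at every step
E_total = 0.5 * np.sum(e ** 2)                  # epochwise cost E_total(n_0, n_1)

# ---- backward pass through the unfolded copies ----
grad_Wa, grad_Wb = np.zeros_like(Wa), np.zeros_like(Wb)
dE_dx = -C.T @ e[N]                             # dE/dx at the last time step
for n in range(N - 1, -1, -1):
    delta = (1.0 - x[n + 1] ** 2) * dE_dx       # backprop through the tanh copy at step n
    grad_Wa += np.outer(delta, x[n])            # the same Wa, Wb are shared by all copies,
    grad_Wb += np.outer(delta, u[n])            # so their gradients are summed over the epoch
    dE_dx = -C.T @ e[n] + Wa.T @ delta          # propagate the error one step further back

Wa -= 0.01 * grad_Wa                            # one gradient-descent update per epoch
Wb -= 0.01 * grad_Wb
```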

20 Training
- Truncated back-propagation through time, in real-time fashion.
- E(n) = ½ Σ_{j∈A} e_j²(n)
- We save only the relevant history of input data and network state for a fixed number h of time steps → h, the truncation depth.
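Truncated BPTT changes only the backward pass: the error at time n is propagated through at most h earlier copies of the network, so only the last h states need to be stored. A sketch of that gradient computation, reusing the notation of the previous block (stored states x, errors e, inputs u; tanh state update assumed):

```python
import numpy as np

def truncated_bptt_grads(n, h, Wa, Wb, C, x, e, u):
    """Gradient of the instantaneous cost E(n) = 0.5 * sum_j e_j(n)^2,
    backpropagated through at most h earlier copies of the network."""
    grad_Wa, grad_Wb = np.zeros_like(Wa), np.zeros_like(Wb)
    dE_dx = -C.T @ e[n]                          # only the error at time n is injected
    for k in range(n - 1, max(n - 1 - h, -1), -1):
        delta = (1.0 - x[k + 1] ** 2) * dE_dx    # backprop through the tanh copy at step k
        grad_Wa += np.outer(delta, x[k])
        grad_Wb += np.outer(delta, u[k])
        dE_dx = Wa.T @ delta                     # earlier errors are NOT added: truncation
    return grad_Wa, grad_Wb
```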

21 Training
- Real-time recurrent learning (RTRL)
- The network consists of a concatenated input-feedback layer and a processing layer of computational nodes.
- e(n) = d(n) - y(n)
- E_total = Σ_n E(n), with E(n) = ½ Σ_{j∈A} e_j²(n) as above.
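A compact RTRL sketch for the same state-space network (tanh state update, linear readout, toy data; my assumptions): the sensitivity arrays Pa and Pb carry ∂x(n)/∂W_a and ∂x(n)/∂W_b forward in time, so the weights can be updated at every time step without storing any history.

```python
import numpy as np

rng = np.random.default_rng(7)
q, m, p, lr = 3, 2, 1, 0.05
Wa = rng.normal(scale=0.3, size=(q, q))
Wb = rng.normal(scale=0.3, size=(q, m))
C  = rng.normal(size=(p, q))
I  = np.eye(q)

x  = np.zeros(q)
Pa = np.zeros((q, q, q))                        # Pa[k, i, j] = dx_k(n) / dWa[i, j]
Pb = np.zeros((q, q, m))                        # Pb[k, i, j] = dx_k(n) / dWb[i, j]

for n in range(200):
    u = np.array([np.sin(0.1 * n), np.cos(0.1 * n)])
    d = np.array([np.sin(0.1 * (n + 1))])       # toy target: predict the next sine value

    e = d - C @ x                               # e(n) = d(n) - y(n), with y(n) = C x(n)
    g = C.T @ e                                 # error signal mapped back onto the state neurons
    Wa += lr * np.einsum('k,kij->ij', g, Pa)    # gradient-descent step on E(n) = 0.5 * e^T e
    Wb += lr * np.einsum('k,kij->ij', g, Pb)

    x_next = np.tanh(Wa @ x + Wb @ u)
    D = 1.0 - x_next ** 2                       # derivative of tanh at the new state
    # forward recursions for the sensitivities (real time, no stored history):
    Pa = D[:, None, None] * (np.einsum('kl,lij->kij', Wa, Pa) + I[:, :, None] * x[None, None, :])
    Pb = D[:, None, None] * (np.einsum('kl,lij->kij', Wa, Pb) + I[:, :, None] * u[None, None, :])
    x = x_next
```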

22 Training

23 Summary
- The subject was recurrent networks that involve the use of global feedback applied to a static (memoryless) multilayer perceptron:
  1) Nonlinear autoregressive with exogenous inputs (NARX) networks, using feedback from the output layer to the input layer.
  2) Fully connected recurrent networks, with feedback from the hidden layer to the input layer.
  3) Recurrent multilayer perceptrons with more than one hidden layer, using feedback from the output of each computation layer to its own input.
  4) Second-order recurrent networks, using second-order neurons.
- All these recurrent networks use tapped-delay-line memories as a feedback channel.
- Models 1-3 use a state-space framework.

24 Summary
- Three basic learning algorithms for the training of recurrent networks:
  1) back-propagation through time (BPTT)
  2) real-time recurrent learning (RTRL)
  3) decoupled extended Kalman filter (DEKF)
- Recurrent networks may also be used to process sequentially ordered data that do not have a straightforward temporal interpretation.

