
Financial Data Modelling


1 Financial Data Modelling
Dr Nikolay Nikolaev, Department of Computing, Goldsmiths College, University of London, 2018

2 Dynamic Nonlinear Models
Lecture (FDM 2018)
When processing time series data, the feedforward TDNNs, which are static by design, accommodate time using sliding window vectors (also called tapped delay lines). At each discrete time step the sliding input window shifts over the data series, taking in the next data point and dropping the oldest one (according to a predefined lag/dimension). In this way delay windows allow us to process temporal patterns. The TDNN neural network models have several drawbacks: they limit the duration of the temporal events they can represent, because they have no implicit memory and require the lag space and delay time to be determined in advance; they face difficulties in capturing long-term temporal relationships in the data; and they are trained with static learning algorithms (such as backpropagation and standard optimizers). Proper handling of sequential time series data with single-layer and multilayer neural networks is accomplished by adding memory that stores past outputs. Adding feedback connections to the neural network structure gives the model the potential to capture temporal relationships between serial data better, as well as to describe better the hidden dynamics of the unknown data generator.
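The sliding-window (tapped delay line) construction described above can be sketched in a few lines of Python. This is an illustrative helper, not part of the lecture material; the function name and the toy series are assumptions:

```python
def sliding_windows(series, p):
    """Build tapped-delay-line vectors: each pair holds the p most recent
    observations [x_{t-1}, ..., x_{t-p}] together with the target x_t."""
    windows = []
    for t in range(p, len(series)):
        inputs = series[t - p:t][::-1]  # most recent lag first
        target = series[t]
        windows.append((inputs, target))
    return windows

# A series of 6 points with lag p = 3 yields 3 input/target pairs.
pairs = sliding_windows([1, 2, 3, 4, 5, 6], p=3)
print(pairs[0])  # ([3, 2, 1], 4)
```

Note how the window shifts by one step per pair, which is exactly the "take the next point, drop the oldest" behaviour of the TDNN input.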

3 Having feedback connections makes these recurrent neural networks dynamic systems; in other words, it is the memory that makes recurrent networks powerful tools for learning temporal dependencies in serial data. It should be noted also that having memory renders such recurrent networks especially suitable for describing non-stationarity, so they are especially useful for learning from nonstationary time series. There are two main advantages of having memory in neural network models: the memory stores the state of the dynamical neural system and determines the evolution of the output; and the memory enables learning of longer time dependencies without the need to determine the input size accurately in advance, in other words it makes it possible to learn with imprecise embeddings from time series data. A common learning framework is the maximum likelihood estimation (MLE) method, but for treating nonlinear models it is usually implemented with exact derivatives rather than with numerical integration.

4 Dynamic NARMA Models
Nonlinear versions of autoregressive moving average (NARMA) models can be developed using neural network representations. These are NARMA connectionist architectures in which we pass as inputs the latest time series measurements together with fed-back past outputs. The NARMA model is defined as follows:

y_t = f(x_t, e_t) + ε_t = f(x_{t-1}, x_{t-2}, ..., x_{t-p}, ε_{t-1}, ε_{t-2}, ..., ε_{t-q}) + ε_t

where ε_{t-1} = y_{t-1} - f(x_{t-1}, e_{t-1}) are the recent prediction errors. Consider a simple recurrent single-neuron (Perceptron) network having the following inputs:

z_{t-l} = 1            if l = 0
z_{t-l} = x_{t-l}      if 1 <= l <= p
z_{t-l} = f_{t-l+p}    if (p+1) <= l <= (p+q)

where p is the number of lagged inputs, and q is the number of recurrent connections.

5 Assuming that the output node uses the tanh activation function, the model computes:

f(x_t, e_t) = tanh( Σ_{l=1}^{p} w_l x_{t-l} + Σ_{l=p+1}^{p+q} w_l f_{t-l+p} + w_0 )

where the temporal variables capture information from the past and send it back via the feedback loop, thus providing memory capacity. This is what helps to capture time-varying patterns in the data.
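This forward computation can be sketched in Python. A minimal illustration, not the lecture's code; the function name and the sample values are assumptions, with the fed-back past outputs passed in explicitly:

```python
import math

def narma_forward(w0, w, x_lags, f_lags):
    """Single-neuron NARMA output: tanh of bias w0 plus weighted lagged
    inputs x_{t-1}..x_{t-p} plus weighted fed-back outputs f_{t-1}..f_{t-q}."""
    p = len(x_lags)
    s = w0
    s += sum(wl * xl for wl, xl in zip(w[:p], x_lags))   # autoregressive part
    s += sum(wl * fl for wl, fl in zip(w[p:], f_lags))   # recurrent part
    return math.tanh(s)

# p = 2 lagged inputs and q = 1 feedback term (all numbers illustrative)
y = narma_forward(0.0, [0.5, -0.2, 0.1], [1.0, 2.0], [0.3])
```

The feedback terms `f_lags` are what the sliding-window TDNN lacks: they carry the network's own past outputs forward in time.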

6 Training NARMA Networks
There are two algorithms for computing dynamic gradients in such recurrent single-neuron networks:
BackPropagation-Through-Time (BPTT): this algorithm unfolds the network back in time and calculates the error derivatives backwards as an expansion;
Real-Time Recurrent Learning (RTRL): this algorithm computes the error derivatives forward in time.
Having dynamic, temporal derivatives, one can plug them into a standard optimizer or implement gradient-descent training with first-order or second-order methods.

7 Online Gradient Descent Training
The first-order online gradient-descent training algorithm updates the weights at each time step in the direction opposite to the instantaneous gradient of the cost function, with the following equation:

w_{j,t} = w_{j,t-1} - η ∂C_t/∂w_{j,t-1} = w_{j,t-1} + η ε_t f'_t ∂s_t/∂w_{j,t-1}

where f'_t denotes the derivative of the activation function, ε_t is the error ε_t = y_t - f(x_t, w_t), and s_t is the summation at the output node:

s_t = Σ_{l=1}^{p} w_l x_{t-l} + Σ_{l=p+1}^{p+q} w_l f_{t-l+p} + w_0.

This derivative is obtained according to the maximum likelihood principle, starting from the instantaneous cost function C_t = 0.5 (y_t - f(x_t, w_t))^2. The so-called Real-Time Recurrent Learning (RTRL) derivatives are calculated using the chain rule in the following way:

∂C_t/∂w_j = (∂C_t/∂f_t)(∂f_t/∂s_t)(∂s_t/∂w_j) = -ε_t (1 - f(x_t, w_t)^2) ∂s_t/∂w_j

where the derivative of the tanh activation function is f'_t = 1 - f(x_t, w_t)^2, and the time subscripts of the weights are omitted for clarity.
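One such online update step can be written out as follows. A Python sketch under the slide's notation; the function name and the tolerance of passing the stored sensitivities `sens` (the temporal derivatives ∂s_t/∂w_j from the previous step) as an argument are assumptions:

```python
import math

def online_update(w, z, sens, y_t, eta=0.1):
    """One online gradient-descent step for the tanh single-neuron model:
    w_j <- w_j + eta * eps_t * f'_t * ds_t/dw_j, with f'_t = 1 - f_t^2."""
    s = sum(wj * zj for wj, zj in zip(w, z))       # summation at the output node
    f = math.tanh(s)                               # network output f_t
    eps = y_t - f                                  # instantaneous error
    fprime = 1.0 - f * f                           # tanh derivative
    w_new = [wj + eta * eps * fprime * dj for wj, dj in zip(w, sens)]
    return w_new, f, eps
```

How the sensitivities themselves are carried forward in time is the subject of the temporal RTRL derivatives.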

8 Temporal RTRL Derivatives
The derivatives of the output node summation with respect to the weights are taken as follows:

∂s_t/∂w_j = ∂( Σ_{l=1}^{p+q} w_l z_{t-l} + w_0 ) / ∂w_j
          = Σ_{l=1}^{p+q} ( w_l ∂z_{t-l}/∂w_j + z_{t-l} ∂w_l/∂w_j )
          = Σ_{l=1}^{p} w_l ∂x_{t-l}/∂w_j + Σ_{l=p+1}^{p+q} w_l ∂f_{t-l+p}/∂w_j + z_{t-j}
          = Σ_{l=p+1}^{p+q} w_l ∂f_{t-l+p}/∂w_j + z_{t-j}

where the assumption is that ∂x_{t-l}/∂w_j = 0. Note here that the first term accounts for the implicit effect of weight w_j on the network output, while the second term is the explicit effect of this weight on the network summation. Knowing how to train such a recurrent single-neuron network, one can also design recurrent multilayer Perceptrons when severe nonlinearities are present in the data (after performing initial checks with some diagnostic tests).
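For the common case of a single feedback connection (q = 1, feedback weight w_fb), the recursion above reduces to one line per weight, and can be sketched in Python. The function name and example values are illustrative assumptions:

```python
def update_sensitivities(sens, w_fb, fprime, z):
    """Forward-propagate the temporal derivatives for one feedback connection:
    ds_{t+1}/dw_j = w_fb * df_t/dw_j + z_j, where df_t/dw_j = f'_t * ds_t/dw_j
    is the implicit effect of w_j and z_j is its explicit effect on the sum."""
    return [w_fb * fprime * dj + zj for dj, zj in zip(sens, z)]

# Illustrative values: old sensitivities, feedback weight 0.5, f' = 0.75,
# and input vector [bias, x_{t-1}, f_{t-1}]
new_sens = update_sensitivities([0.1, 0.2, 0.3], 0.5, 0.75, [1.0, 0.4, 0.5])
```

This is the forward-in-time bookkeeping that distinguishes RTRL from BPTT: no unfolding is needed, only the sensitivity vector from the previous step.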

9 Example: RTRL training of a nonlinear recurrent single-layer network
Let a simple network be given, having one output node, one input node, a bias constant, and an output-to-input feedback connection. The output node has a hyperbolic tangent activation function. Suppose that the initial weights are w_t = [-0.0391 … …], the gradients from the previous time step are ∂s_t/∂w_t = [0.1 0.2 0.3], and the learning rate is η = 0.1. The given time series data are: external input x_{t-1}, and target x_t = 0.9801. Assuming that the network output generated with the previous data point is f_{t-1} = 0.5, the error is calculated as follows:

e_t = x_t - f_{t-1} = 0.9801 - 0.5 = 0.4801

Next, the input vector is constructed as follows: z_t = [1.0  x_{t-1}  f_{t-1}] = [1.0  x_{t-1}  0.5]. Then, we perform the forward propagation:

s_t = w_1 z_1 + w_2 z_2 + w_3 z_3 = -0.0391*1.0 + w_2*x_{t-1} + w_3*0.5 = 0.139
f_t = tanh(s_t) = tanh(0.139) = 0.1381

10 Example: RTRL training (continuation)
After that, we apply the chain rule with the corresponding values:

∂C_t/∂f_t * ∂f_t/∂s_t = ε_t (1 - 0.5^2) = 0.4801*0.75 = 0.3601

Having the past derivatives, the weight deltas are computed as follows:

η ε_t f'_t ∂s_t/∂w_1 = 0.1*0.3601*0.1 = 0.0036
η ε_t f'_t ∂s_t/∂w_2 = 0.1*0.3601*0.2 = 0.0072
η ε_t f'_t ∂s_t/∂w_3 = 0.1*0.3601*0.3 = 0.0108

Therefore, the weights are updated in the following way:

w_1 = -0.0391 + 0.0036 = -0.0355
w_2 = … + 0.0072 = …
w_3 = … + 0.0108 = …

11 Example: RTRL training (continuation)
Finally, the derivatives for the next time step are produced as follows:

∂s_t/∂w_1 = w_3 ∂f_t/∂w_1 + z_1 = 0.0877*0.1 + 1.0 = 1.0088
∂s_t/∂w_2 = w_3 ∂f_t/∂w_2 + z_2 = 0.0877*0.2 + x_{t-1} = …
∂s_t/∂w_3 = w_3 ∂f_t/∂w_3 + z_3 = 0.0877*0.3 + 0.5 = 0.5263
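The arithmetic of this worked example can be checked with a few lines of Python. This uses only values the slides provide; the factor 0.0877 is the slide's combined w_3 * f'_t term, and the ∂s_t/∂w_2 line is left out because it depends on the unstated input x_{t-1}:

```python
eta = 0.1
eps = 0.4801                        # prediction error from the example
fprime = 1 - 0.5 ** 2               # derivative factor used on the slide
grad_factor = eps * fprime          # chain-rule product, ~0.3601

# weight deltas from the stored derivatives [0.1, 0.2, 0.3]
deltas = [eta * grad_factor * d for d in (0.1, 0.2, 0.3)]

# next-step derivatives: ds/dw_j = 0.0877 * (old ds/dw_j) + z_j
ds_w1 = 0.0877 * 0.1 + 1.0          # z_1 = 1.0 (bias input)
ds_w3 = 0.0877 * 0.3 + 0.5          # z_3 = f_{t-1} = 0.5
```

Rounded to four decimals this reproduces the slide's figures: 0.3601 for the chain-rule product, deltas of 0.0036, 0.0072 and 0.0108, and next-step derivatives 1.0088 and 0.5263.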

12 Exercise: Programming the RTRL algorithm in Matlab
First we need to initialize all data structures and load the time series data:

NVCTS = odim;               % number of training vectors (odim set when loading the data)
NOUTS = 1;                  % number of output nodes
NINPUTS = 10;               % number of external inputs
m = NINPUTS+1;              % inputs plus the bias
NNODES = 3;                 % number of nodes
nrows = NNODES;
NCOLS = m+NNODES;           % columns: bias, inputs and fed-back outputs
eta = 0.1;                  % learning rate
e = zeros(NNODES,1); s = zeros(NNODES,1);
out = zeros(NNODES,1); yprime = zeros(NNODES,1);
w = zeros(NNODES,NCOLS); delw = zeros(NNODES,NCOLS);
z = zeros(NVCTS,NCOLS); d = zeros(NVCTS,NCOLS);
p = zeros(NNODES,NCOLS,NNODES); pold = zeros(NNODES,NCOLS,NNODES);
z(:,1) = 1.0;               % bias input
for i = 1:NVCTS
    for j = 1:NINPUTS
        z(i,j+1) = x(i,j);          % load the input vectors
    end
    for j = 1:NOUTS
        d(i,j) = targets(i);        % load the targets
    end
end
w = 0.5*(rand(NNODES,NCOLS)-0.5);   % initialize the weights

13 Exercise: Programming the RTRL algorithm in Matlab (continuation)
Next we develop the training loops to iterate over the data, starting with forward propagation:

for epoch = 1:50
    for t = 1:NVCTS
        % Compute the error
        for k = 1:NOUTS
            e(k) = d(t,k)-out(k);
        end
        % Set the previous out(k) as part of the next input z(t,k+m),
        % and generate the summations at each of the k nodes
        for k = 1:NNODES
            z(t,k+m) = out(k);
            s(k) = 0.0;
            for i = 1:NCOLS
                s(k) = s(k)+w(k,i)*z(t,i);
            end
        end

14 Exercise: Programming the RTRL algorithm in Matlab (continuation)
After that, we perform the backward pass and update the weights:

        % Compute the outputs at time (t+1): out(k) = f(s(k))
        for k = 1:NOUTS
            out(k) = s(k);                      % linear output node
        end
        for k = 1:NNODES-NOUTS
            out(k+NOUTS) = tanh(s(k+NOUTS));    % tanh nodes
        end
        % Compute the weight changes at time t
        for i = 1:NNODES
            for j = 1:NCOLS
                delw(i,j) = 0.0;
                for k = 1:NOUTS
                    delw(i,j) = delw(i,j)+eta*e(k)*pold(i,j,k);
                end
            end
        end
        % Update the weights for time (t+1)
        w = w+delw;

15 Exercise: Programming the RTRL algorithm in Matlab (continuation)
Finally, the temporal matrix is computed for the next iteration:

        for k = 1:NOUTS
            yprime(k) = z(t,k);
        end
        for k = 1:NNODES-NOUTS
            yprime(k+NOUTS) = 1-out(k+NOUTS)^2;    % tanh derivative
        end
        for i = 1:NNODES
            for j = 1:NCOLS
                for k = 1:NNODES
                    kron = 0.0;
                    if (i==k)
                        kron = 1.0;
                    end
                    ssum = 0.0;
                    for l = 1:NNODES
                        ssum = ssum+w(k,l+m)*pold(i,j,l);   % pold = p(t)
                    end
                    p(i,j,k) = yprime(k)*(ssum+kron*z(t,j));
                end
            end
        end
        ptemp = pold; pold = p; p = ptemp;          % pold is now p(t+1)
    end
end
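For comparison, the same RTRL scheme for the single-neuron case of the earlier slides can be sketched compactly in Python. This is an illustrative sketch, not a translation of the Matlab exercise: one external input, a bias and one output-to-input feedback connection, with the function name, constants and synthetic series all assumed:

```python
import math

def rtrl_train(series, eta=0.1, epochs=50):
    """Train y_t = tanh(w0 + w1*x_{t-1} + w2*f_{t-1}) with RTRL: the
    sensitivities ds/dw are carried forward in time instead of unfolding."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        f_prev, sens = 0.0, [0.0, 0.0, 0.0]
        for t in range(1, len(series)):
            z = [1.0, series[t - 1], f_prev]          # bias, input, feedback
            s = sum(wj * zj for wj, zj in zip(w, z))
            f = math.tanh(s)
            eps = series[t] - f                       # instantaneous error
            fprime = 1.0 - f * f                      # tanh derivative
            # weight update using the stored temporal derivatives
            w = [wj + eta * eps * fprime * dj for dj, wj in zip(sens, w)]
            # forward-propagate the sensitivities for the next step:
            # ds/dw_j <- w_fb * f' * ds/dw_j + z_j
            sens = [w[2] * fprime * dj + zj for dj, zj in zip(sens, z)]
            f_prev = f
    return w

weights = rtrl_train([math.sin(0.3 * i) for i in range(40)])
```

Unlike the general Matlab version with its three-dimensional sensitivity matrix p(i,j,k), the single-neuron case needs only one sensitivity per weight.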

16 References:
R.J. Williams and D. Zipser (1995). Gradient-based learning algorithms for recurrent networks and their computational complexity. In: Y. Chauvin and D.E. Rumelhart (Eds.), Back-propagation: Theory, Architectures and Applications, Chapter 13, Lawrence Erlbaum, Hillsdale, NJ.
S. Haykin (1997). Neural Networks: A Comprehensive Foundation (2nd ed.), Pearson Higher Education, Upper Saddle River, NJ.
N. Nikolaev and H. Iba (2006). Adaptive Learning of Polynomial Networks: Genetic Programming, Backpropagation and Bayesian Methods, Springer, New York.

