Outline
– Time series prediction
– Find k-nearest neighbors
– Lag selection
– Weighted LS-SVM
Time series prediction
Suppose we have a univariate time series x(t) for t = 1, 2, …, N, and we want to predict the value of x(N + p). If p = 1, this is called one-step prediction; if p > 1, it is called multi-step prediction.
Flowchart
Find k-nearest neighbors
Assume the current time index is 20. First we reconstruct the query as the vector of the most recent lagged values, e.g. q = [x(20), x(19), …]. Then the distance between the query and each lag vector built from the historical data is computed (for example, the Euclidean distance).
Find k-nearest neighbors
If k = 3 and the three closest neighbors are the lag vectors at t = 14, 15, 16, then we can construct the smaller data set from these neighbors.
Flowchart
Lag selection
Lag selection is the process of selecting a subset of relevant lags (features) for use in model construction. Why do we need lag selection? Lag selection is like feature selection, not feature extraction.
Lag selection
Lag selection methods can usually be divided into two broad classes: filter methods and wrapper methods. The lag subset is chosen by an evaluation criterion, which measures the relationship of each subset of lags with the target or output.
Wrapper method
The best lag subset is selected according to the performance of the prediction model itself, so lag selection becomes part of the learning.
Filter method
In this method, we need a criterion that measures correlation or dependence, for example the correlation coefficient or mutual information.
Lag selection
Which is better? The wrapper method solves the real prediction problem directly, but needs more computation time. The filter method is cheaper, but may provide a lag subset that performs somewhat worse. We use the filter method because of our architecture.
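As an illustration of a filter-style criterion, here is a sketch that ranks candidate lags by their absolute correlation with the target (correlation is one of the criteria named above; the candidate range max_lag and the function name are assumptions):

import numpy as np

def rank_lags_by_correlation(x, max_lag=10):
    """Filter-style lag ranking: score each candidate lag by the absolute
    correlation between x(t - lag) and x(t), independently of any model."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for lag in range(1, max_lag + 1):
        past, target = x[:-lag], x[lag:]
        scores[lag] = abs(np.corrcoef(past, target)[0, 1])
    # higher score = stronger (linear) dependence on the target
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

Mutual information can replace the correlation score in the same structure, which is the criterion developed on the following slides.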
Entropy
Entropy is a measure of the uncertainty of a random variable. The entropy of a discrete random variable X is defined by H(X) = -Σ_x p(x) log p(x), with the convention 0 log 0 = 0.
Entropy
Example: for specific discrete distributions, H(X) is computed by plugging the probabilities into this definition.
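As a simple worked illustration (not necessarily the example used on the original slides), consider a binary random variable:

Let P(X = 1) = p and P(X = 0) = 1 - p. Then
H(X) = -p log p - (1 - p) log(1 - p).
For a fair coin, p = 1/2 and H(X) = log 2 (one bit); for p = 1 the outcome is certain and H(X) = 0, using the convention 0 log 0 = 0.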
Joint entropy
Definition: the joint entropy of a pair of discrete random variables (X, Y) is defined as H(X, Y) = -Σ_x Σ_y p(x, y) log p(x, y).
Conditional entropy
Definition: the conditional entropy is defined as H(Y|X) = -Σ_x Σ_y p(x, y) log p(y|x) = Σ_x p(x) H(Y|X = x). And the chain rule relates it to the joint entropy: H(X, Y) = H(X) + H(Y|X).
Proof
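A standard derivation of the chain rule stated above:

H(X, Y) = -Σ_x Σ_y p(x, y) log p(x, y)
        = -Σ_x Σ_y p(x, y) log [ p(x) p(y|x) ]
        = -Σ_x Σ_y p(x, y) log p(x) - Σ_x Σ_y p(x, y) log p(y|x)
        = -Σ_x p(x) log p(x) + H(Y|X)
        = H(X) + H(Y|X).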
Mutual information
Mutual information is a measure of the amount of information one random variable contains about another, and it is a natural extension of the notion of entropy. Definition: the mutual information of two discrete random variables X and Y is I(X; Y) = Σ_x Σ_y p(x, y) log [ p(x, y) / (p(x) p(y)) ].
Proof
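A standard derivation of the identity I(X; Y) = H(X) - H(X|Y), which links mutual information to entropy:

I(X; Y) = Σ_x Σ_y p(x, y) log [ p(x, y) / (p(x) p(y)) ]
        = Σ_x Σ_y p(x, y) log [ p(x|y) / p(x) ]
        = -Σ_x Σ_y p(x, y) log p(x) + Σ_x Σ_y p(x, y) log p(x|y)
        = H(X) - H(X|Y).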
The relationship between entropy and mutual information
Combining the chain rule and the identity above: I(X; Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) = H(X) + H(Y) - H(X, Y); in particular, I(X; X) = H(X).
Mutual information
Definition: the mutual information of two continuous random variables X and Y is I(X; Y) = ∫∫ p(x, y) log [ p(x, y) / (p(x) p(y)) ] dx dy. The problem is that the joint probability density function of X and Y is hard to compute.
Binned Mutual information
The most straightforward and widespread approach for estimating MI consists in partitioning the supports of X and Y into bins of finite size and approximating the probabilities p_x(i), p_y(j), and p(i, j) by the fraction of points falling into each bin.
Binned Mutual information
For example, consider a set of 5 bivariate measurements z_i = (x_i, y_i), where i = 1, 2, …, 5, with values (0, 1), (0.5, 5), (1, 3), (3, 4), and (4, 1).
Binned Mutual information
The points are assigned to bins, and the MI estimate is computed from the resulting bin counts.
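A minimal Python sketch of this binned estimate (the number of bins and the natural logarithm are choices made here for illustration):

import numpy as np

def hist_mi(x, y, bins=5):
    """Binned MI estimate: partition the supports of X and Y into bins,
    estimate the probabilities by relative frequencies, and sum
    p(i,j) * log( p(i,j) / (p_x(i) * p_y(j)) ) over the occupied bins."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # 0 log 0 = 0 convention
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

The estimate is sensitive to the number of bins, which is the main practical weakness of this approach.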
Estimating Mutual information
Another approach for estimating mutual information is based on k-nearest-neighbor distances. Consider the case with two variables: the 2-dimensional space Z is spanned by X and Y, and we can compute the distance between every pair of points.
Estimating Mutual information
Let us denote by ε(i)/2 the distance from z_i to its k-th nearest neighbor, and by ε_x(i)/2 and ε_y(i)/2 the distances between the same points projected onto the X and Y subspaces. Then we can count the number n_x(i) of points x_j whose distance from x_i is strictly less than ε(i)/2, and similarly for y instead of x.
Estimating Mutual information
The estimate for MI is then I(X; Y) = ψ(k) - ⟨ψ(n_x + 1) + ψ(n_y + 1)⟩ + ψ(N), where ψ is the digamma function and ⟨·⟩ denotes an average over all points i. Alternatively, in the second algorithm, we replace n_x(i) and n_y(i) by the number of points with ||x_i - x_j|| ≤ ε_x(i)/2 and ||y_i - y_j|| ≤ ε_y(i)/2, respectively.
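A compact Python sketch of the first estimator, assuming the max-norm in the joint space (the function name ksg_mi is illustrative; scipy provides the digamma function ψ):

import numpy as np
from scipy.special import digamma

def ksg_mi(x, y, k=2):
    """k-nearest-neighbor MI estimate (first algorithm), max-norm in Z."""
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    n = len(x)
    dx = np.abs(x - x.T)                 # pairwise distances in X
    dy = np.abs(y - y.T)                 # pairwise distances in Y
    dz = np.maximum(dx, dy)              # max-norm distances in Z
    np.fill_diagonal(dz, np.inf)         # exclude each point itself
    eps = np.sort(dz, axis=1)[:, k - 1]  # distance to the k-th neighbor
    np.fill_diagonal(dx, np.inf)
    np.fill_diagonal(dy, np.inf)
    nx = np.sum(dx < eps[:, None], axis=1)   # n_x(i)
    ny = np.sum(dy < eps[:, None], axis=1)   # n_y(i)
    return digamma(k) - np.mean(digamma(nx + 1) + digamma(ny + 1)) + digamma(n)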
Estimating Mutual information
Then the estimate of the second algorithm is I(X; Y) = ψ(k) - 1/k - ⟨ψ(n_x) + ψ(n_y)⟩ + ψ(N).
Estimating Mutual information
For the same example with k = 2: for the point p_1 = (0, 1) and for the point p_2 = (0.5, 5), we find the distance to the second-nearest neighbor and count n_x and n_y as described above.
Estimating Mutual information
The same is done for the points p_3 = (1, 3) and p_4 = (3, 4).
Estimating Mutual information
Finally, the counts for the point p_5 = (4, 1) are obtained, and combining them all gives the MI estimate for this small example.
Estimating Mutual information
Example:
– a = rand(1,100)
– b = rand(1,100)
– c = a*2
Then the estimated MI between a and c is large (c is a deterministic function of a), while the MI between the independent variables a and b is close to zero.
Estimating Mutual information
Example:
– a = rand(1,100)
– b = rand(1,100)
– d = 2*a + 3*b
Then d depends on both a and b, so the estimated MI between d and each of them is clearly positive.
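Reusing the hist_mi sketch from the binned-estimation section above (the function name comes from that sketch, not from the original slides), these two checks could be reproduced along the following lines:

import numpy as np

a = np.random.rand(100)
b = np.random.rand(100)
c = 2 * a
d = 2 * a + 3 * b

print(hist_mi(a, b))   # close to zero: a and b are independent
print(hist_mi(a, c))   # large: c is a deterministic function of a
print(hist_mi(a, d))   # positive: d carries information about a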
Flowchart
Model
Now we have a training data set which contains the k selected records, and we need a model to make the prediction.
Instance-based learning
The points that are close to the query get large weights, and the points far from the query get small weights. Examples of such models:
– Locally weighted regression
– General Regression Neural Network (GRNN)
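A minimal sketch of this weighting idea as a GRNN-style (Nadaraya-Watson) predictor; the Gaussian kernel and the bandwidth sigma are assumptions made here:

import numpy as np

def grnn_predict(Xq, X, y, sigma=1.0):
    """Kernel-weighted average of the training targets: weights decay
    with the distance between the query and each training point."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                   # Gaussian weights
    return (w @ y) / w.sum(axis=1)

Locally weighted regression follows the same idea but fits a local (e.g. linear) model with these weights instead of averaging the targets.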
Property of the local frame
Weighted LS-SVM
The goal of the standard LS-SVM is to minimize the risk function J(w, e) = (1/2) w^T w + (γ/2) Σ_i e_i^2, subject to the constraints y_i = w^T φ(x_i) + b + e_i, where γ is the regularization parameter.
Weighted LS-SVM
The modified risk function of the weighted LS-SVM is J(w, e) = (1/2) w^T w + (γ/2) Σ_i v_i e_i^2, and the constraints y_i = w^T φ(x_i) + b + e_i are unchanged, where v_i is the weight assigned to the i-th training point.
Weighted LS-SVM
The weights v_i are designed so that training points closer to the query receive larger weights, in line with the instance-based weighting above.
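A sketch of the weighted LS-SVM in its dual (linear-system) form with an RBF kernel; the kernel choice, the parameter values, and the distance-based weights mentioned at the end are assumptions, not taken from the original slides:

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_lssvm_fit(X, y, v, gamma=10.0, sigma=1.0):
    """Weighted LS-SVM in dual form: solve the linear system
       [ 0   1^T                     ] [ b     ]   [ 0 ]
       [ 1   K + diag(1/(gamma * v)) ] [ alpha ] = [ y ]
    where K is the kernel matrix and v are the individual weights."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * np.asarray(v, dtype=float)))
    rhs = np.concatenate(([0.0], np.asarray(y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def weighted_lssvm_predict(Xq, X, alpha, b, sigma=1.0):
    """Predict with y_hat(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(Xq, X, sigma) @ alpha + b

With distance-based weights such as v_i = exp(-d_i^2 / h^2), the neighbors closest to the query dominate the fit, matching the instance-based weighting described earlier.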