
1 Randomized Variable Elimination David J. Stracuzzi Paul E. Utgoff

2 Agenda
- Background
- Filter and wrapper methods
- Randomized Variable Elimination
- Cost function
- RVE algorithm when r is known (RVE)
- RVE algorithm when r is not known (RVErS)
- Results
- Questions

3 Variable Selection Problem
Choosing the relevant attributes from a set of attributes: producing the subset of a large set of input variables that best predicts the target function. Forward selection algorithms start with an empty set and search for variables to add. Backward selection algorithms start with the entire set of variables and successively remove irrelevant ones. In some cases, a forward selection algorithm also removes variables in order to recover from previous poor selections. Caruana and Freitag (1994) experimented with greedy search methods and found that allowing the search to both add and remove variables outperforms simple forward and backward searches. Filter and wrapper methods are the two main approaches to variable selection; a sketch of greedy forward selection appears below.
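For concreteness, a minimal sketch of greedy forward selection. Everything here is illustrative rather than taken from the paper; score is a hypothetical function mapping a variable subset to an estimate of predictive quality.

def forward_selection(variables, score):
    # Greedy forward selection: repeatedly add the variable that most
    # improves the subset's score; stop when no addition helps.
    selected, best = [], score([])
    while True:
        candidates = [v for v in variables if v not in selected]
        if not candidates:
            return selected
        v = max(candidates, key=lambda v: score(selected + [v]))
        s = score(selected + [v])
        if s <= best:
            return selected
        best = s
        selected.append(v)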

4 Filter methods
Use statistical measures to evaluate the quality of variable subsets: each candidate subset is scored against a fixed quality measure rather than against a learner. Statistical evaluation of variables requires very little computation compared to running the learning algorithm. FOCUS (Almuallim and Dietterich, 1991) searches for the smallest subset that completely discriminates between target classes. Relief (Kira and Rendell, 1992) ranks variables by a distance-based relevance weight. The drawback is that filter methods evaluate variables independently, not in the context of the learning problem.
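A minimal filter-style ranker, shown below as a generic illustration (per-variable correlation scoring), not an implementation of FOCUS or Relief; it assumes numeric features in a NumPy array.

import numpy as np

def filter_rank(X, y):
    # Score each variable once, independently of any learner:
    # absolute Pearson correlation of each column of X with the target y.
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    # Return variable indices, best first.
    return sorted(range(X.shape[1]), key=lambda j: -scores[j])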

5 Wrapper methods
Use the performance of the learning algorithm itself to evaluate the quality of a subset of input variables: the learner is executed on the candidate variable set, and the accuracy of the resulting hypothesis is tested. Advantage: since wrapper methods evaluate variables in the context of the learning problem, they typically outperform filter methods. Disadvantage: the cost of repeatedly executing the learning algorithm can become problematic. John, Kohavi, and Pfleger (1994) coined the term "wrapper," but the technique was in use earlier (Devijver and Kittler, 1982).
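A minimal wrapper-style evaluator, assuming scikit-learn is available (my assumption, not the authors' setup): the subset's quality is the cross-validated accuracy of the learner run on just those variables.

from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def wrapper_score(X, y, subset):
    # Wrapper evaluation: run the actual learner on the candidate subset
    # and use held-out accuracy as the quality measure.
    return cross_val_score(GaussianNB(), X[:, subset], y, cv=5).mean()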

6 Randomized Variable Elimination
Falls under the category of wrapper methods. First, a hypothesis is produced for the entire set of n variables. A subset is formed by randomly selecting k variables for removal, and a hypothesis is then produced for the remaining n−k variables. The accuracies of the two hypotheses are compared: removal of any relevant variable should cause an immediate decline in performance. A cost function balances the expense of successive failures against the cost of running the learning algorithm many times.

7 The Cost Function

8 Probability of selecting k variables
The probability of successfully selecting k irrelevant variables at random is given by

p⁺(n, r, k) = Π(i = 0 … k−1) (n − r − i) / (n − i)

where
n … remaining variables
r … relevant variables

The product is the chance that all k draws, made without replacement, avoid the r relevant variables.
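The same quantity as a runnable sketch (the function name is mine):

def p_success(n, r, k):
    # Probability that k variables drawn uniformly at random, without
    # replacement, from n remaining variables are all irrelevant,
    # given that r of the n variables are relevant.
    p = 1.0
    for i in range(k):
        p *= (n - r - i) / (n - i)
    return p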

9 Expected number of failures
The expected number of consecutive failures before a success at selecting k irrelevant variables is given by

E⁻(n, r, k) = (1 − p⁺(n, r, k)) / p⁺(n, r, k)

This is the number of consecutive trials in which at least one of the r relevant variables is randomly selected along with the irrelevant ones.
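Building on p_success from the previous sketch:

def expected_failures(n, r, k):
    # Expected number of consecutive failed selections before a success,
    # i.e. the mean number of failures of a geometric distribution.
    p = p_success(n, r, k)
    return (1.0 - p) / p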

10 Cost of removing k variables
The expected cost of successfully removing k variables from n remaining, given r relevant variables, is given by

I(n, r, k) = (E⁻(n, r, k) + 1) · M(L, n)

where M(L, n) represents an upper bound on the cost of running algorithm L on n inputs: every trial, failed or successful, costs one run of L.
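The corresponding sketch, reusing expected_failures and passing M in as a callable cost bound (an assumption about its form):

def removal_cost(n, r, k, M):
    # Expected cost of successfully removing k of n remaining variables:
    # (expected failures + the one success) runs of L on n inputs.
    return (expected_failures(n, r, k) + 1.0) * M(n)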

11 Optimal cost of removing irrelevant variables
The optimal cost of removing the irrelevant variables, from n remaining with r relevant, is given by the recurrence

I_sum(n, r) = min over 1 ≤ k ≤ n−r of [ I(n, r, k) + I_sum(n−k, r) ]

with base case I_sum(r, r) = 0.

12 Optimal value for k
The optimal value is computed as

k_opt(n, r) = argmin over k of [ I(n, r, k) + I_sum(n−k, r) ]

It is the value of k for which the cost of removing variables is optimal.

13 Algorithms

14 Algorithm for computing k and cost values

Given: L, N, r
I_sum[r…N] ← 0
k_opt[r…N] ← 0
for i ← r+1 to N do
  bestCost ← ∞
  for k ← 1 to i−r do
    temp ← I(i, r, k) + I_sum[i−k]
    if temp < bestCost then
      bestCost ← temp
      bestK ← k
  I_sum[i] ← bestCost
  k_opt[i] ← bestK
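The same dynamic program as runnable Python, reusing removal_cost from the earlier sketch (variable names are mine):

def compute_tables(N, r, M):
    # Bottom-up dynamic program over the number of remaining variables.
    # I_sum[i]: optimal expected cost of reducing i variables down to r.
    # k_opt[i]: optimal number of variables to remove next at i.
    I_sum = [0.0] * (N + 1)
    k_opt = [0] * (N + 1)
    for i in range(r + 1, N + 1):
        best_cost, best_k = float("inf"), 1
        for k in range(1, i - r + 1):
            cost = removal_cost(i, r, k, M) + I_sum[i - k]
            if cost < best_cost:
                best_cost, best_k = cost, k
        I_sum[i], k_opt[i] = best_cost, best_k
    return I_sum, k_opt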

15 Randomized Variable Elimination (RVE) when r is known

Given: L, n, r, tolerance
Compute tables I_sum(i, r) and k_opt(i, r)
h ← hypothesis produced by L on n inputs
while n > r do
  k ← k_opt(n, r)
  select k variables at random and remove them
  h' ← hypothesis produced by L on n−k inputs
  if e(h') − e(h) ≤ tolerance then
    n ← n − k
    h ← h'
  else
    replace the selected k variables
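A runnable sketch of the loop, assuming hypothetical helpers learn(vars), which trains L on the given variables, and error(h), which estimates e(h):

import random

def rve(variables, r, tolerance, learn, error, M):
    # Randomized Variable Elimination with the number of relevant
    # variables r known in advance.
    n = len(variables)
    _, k_opt = compute_tables(n, r, M)
    h = learn(variables)
    while n > r:
        k = k_opt[n]
        removed = set(random.sample(variables, k))
        candidate = [v for v in variables if v not in removed]
        h2 = learn(candidate)
        if error(h2) - error(h) <= tolerance:
            variables, h, n = candidate, h2, n - k
        # On failure the selected variables are simply kept, since
        # `variables` was never modified.
    return variables, h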

16 RVE example
Plot of the expected cost of running RVE (I_sum(N, r = 10)) along with the cost of removing inputs individually and the estimated number of updates M(L, n). Here L learns a Boolean function using a single perceptron unit.

17 Randomized Variable Elimination including a search for r (RVErS)

Given: L, c1, c2, n, r_max, r_min, tolerance
Compute tables I_sum(i, r) and k_opt(i, r) for r_min ≤ r ≤ r_max
r ← (r_max + r_min) / 2
success, fail ← 0
h ← hypothesis produced by L on n inputs
repeat
  k ← k_opt(n, r)
  select k variables at random and remove them
  h' ← hypothesis produced by L on n−k inputs
  if e(h') − e(h) ≤ tolerance then
    n ← n − k
    h ← h'
    success ← success + 1
    fail ← 0
  else
    replace the selected k variables
    fail ← fail + 1
    success ← 0

18 RVErS (contd.)

  if n ≤ r_min then
    r, r_max, r_min ← n
  else if fail ≥ c1·E⁻(n, r, k) then
    r_min ← r
    r ← (r_max + r_min) / 2
    success, fail ← 0
  else if success ≥ c2·(r − E⁻(n, r, k)) then
    r_max ← r
    r ← (r_max + r_min) / 2
    success, fail ← 0
until r_min ≥ r_max and fail ≥ c1·E⁻(n, r, k)
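The binary-search update of r from the middle of the loop, as a sketch reusing expected_failures; c1 and c2 control how much evidence is required before the estimate moves:

def update_r(n, r, r_min, r_max, k, success, fail, c1, c2):
    # Returns updated (r, r_min, r_max, success, fail).
    if n <= r_min:
        return n, n, n, 0, 0          # converged: n variables remain
    if fail >= c1 * expected_failures(n, r, k):
        r_min = r                     # many failures: true r is larger
    elif success >= c2 * (r - expected_failures(n, r, k)):
        r_max = r                     # success streak: true r is smaller
    else:
        return r, r_min, r_max, success, fail
    return (r_max + r_min) // 2, r_min, r_max, 0, 0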

19 Comparison of RVE and RVErS

20 Results

21 Variable Selection results using naïve Bayes and C4.5 algorithms


23 My implementation
- Integrate with Weka
- Extend the NaiveBayes and J48 algorithms
- Obtain results for some of the UCI datasets used
- Compare results with those reported by the authors
- Work in progress

24 RECAP

25 Questions

26 References
H. Almuallim and T. G. Dietterich. Learning with many irrelevant features. In Proceedings of the Ninth National Conference on Artificial Intelligence, Anaheim, CA, 1991. MIT Press.
R. Caruana and D. Freitag. Greedy attribute selection. In Machine Learning: Proceedings of the Eleventh International Conference, New Brunswick, NJ, 1994. Morgan Kaufmann.
K. Kira and L. Rendell. A practical approach to feature selection. In D. Sleeman and P. Edwards, editors, Machine Learning: Proceedings of the Ninth International Conference, San Mateo, CA, 1992. Morgan Kaufmann.

27 References (contd.)
G. H. John, R. Kohavi, and K. Pfleger. Irrelevant features and the subset selection problem. In Machine Learning: Proceedings of the Eleventh International Conference, pages 121-129, New Brunswick, NJ, 1994. Morgan Kaufmann.
P. A. Devijver and J. Kittler. Pattern Recognition: A Statistical Approach. Prentice Hall International, 1982.

