
1 Support Vector Machines for Data Fitting and Classification David R. Musicant with Olvi L. Mangasarian UW-Madison Data Mining Institute Annual Review June 2, 2000

2 Overview
- Regression and its role in data mining
- Robust support vector regression
  - Our general formulation
- Tolerant support vector regression
  - Our contributions
  - Massive support vector regression
  - Integration with data mining tools
- Active support vector machines
- Other research and future directions

3 What is regression?
- Regression forms a rule for predicting an unknown numerical feature from known ones.
- Example: predicting purchase habits. Can we use age, income, and level of education to predict purchasing patterns, and at the same time avoid the “pitfalls” that standard statistical regression falls into?

4 Regression example [chart: using known features such as age, income, and education to predict purchasing patterns]

5 Role in data mining
- Goal: find new relationships in data, e.g. customer behavior, scientific experimentation.
- Regression explores the importance of each known feature in predicting the unknown one (feature selection).
- Regression is a form of supervised learning: use data where the predictive value is known for given instances to form a rule.
- Massive datasets.
Regression is a fundamental task in data mining.

6 Part I: Robust Regression, a.k.a. Huber Regression

7 “Standard” Linear Regression. Find w and b such that the fitted values Aw + be approximate the observed values y.

8 Optimization problem
- Find w, b such that the fitted values approximate the observations.
- Bound the error by s.
- Minimize the error.
Traditional approach: minimize the squared error.
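For reference, here is a minimal sketch (not from the talk) of the standard squared-error fit described above, using NumPy's least-squares solver; the feature values and the names A, y, w, b are made up for illustration.

```python
import numpy as np

# Toy data: each row of A holds known features (age, income, years of education),
# y holds the numerical value we want to predict.
A = np.array([[35, 40_000, 12],
              [52, 85_000, 16],
              [23, 22_000, 14],
              [61, 70_000, 18]], dtype=float)
y = np.array([120.0, 340.0, 80.0, 260.0])

# Append a column of ones so the offset b is fitted along with w.
A1 = np.hstack([A, np.ones((A.shape[0], 1))])

# Least squares: minimize ||A1 @ coef - y||^2.
coef, *_ = np.linalg.lstsq(A1, y, rcond=None)
w, b = coef[:-1], coef[-1]
print("w =", w, "b =", b)
```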

9 Examining the loss function
- Standard regression uses a squared-error loss function.
- Points which are far from the predicted line (outliers) are overemphasized.

10 Alternative loss function
- Instead of the squared error, try the absolute value of the error.
- This is called the 1-norm loss function.

11 1-Norm Problems and Solution
- The 1-norm overemphasizes error on points close to the predicted line.
- Solution: the Huber loss function, a hybrid approach [plot: quadratic near zero, linear in the tails].
Many practitioners prefer the Huber loss function.
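As a rough illustration of the hybrid idea (a sketch, not the authors' code), a Huber-style loss can be written as below; gamma is the switchover parameter discussed on the next slide.

```python
import numpy as np

def huber_loss(residual, gamma=1.0):
    """Quadratic for |residual| <= gamma, linear beyond it."""
    r = np.abs(residual)
    return np.where(r <= gamma,
                    0.5 * r**2,                  # small errors: squared-error behavior
                    gamma * (r - 0.5 * gamma))   # large errors: grow only linearly

# Small residuals keep the squared-error emphasis; outliers are penalized
# only linearly, so they no longer dominate the fit.
print(huber_loss(np.array([-3.0, -0.5, 0.0, 0.5, 3.0])))
```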

12 Mathematical Formulation
- γ indicates the switchover from quadratic to linear.
- Larger γ means “more quadratic.”
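The formula itself did not survive the transcript; the standard piecewise definition of the Huber loss with switchover γ, which is presumably what the slide showed, is:

```latex
\rho_{\gamma}(t) \;=\;
\begin{cases}
  \tfrac{1}{2}\,t^{2}, & |t| \le \gamma,\\[4pt]
  \gamma\,|t| \;-\; \tfrac{1}{2}\,\gamma^{2}, & |t| > \gamma.
\end{cases}
```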

13 Regression Approach Summary
- Quadratic loss function
  - Standard method in statistics
  - Over-emphasizes outliers
- Linear loss function (1-norm)
  - Formulates well as a linear program
  - Over-emphasizes small errors
- Huber loss function (hybrid approach)
  - Appropriate emphasis on large and small errors

14 Previous attempts complicated
- Earlier efforts to solve Huber regression:
  - Huber: Gauss-Seidel method
  - Madsen/Nielsen: Newton method
  - Li: conjugate gradient method
  - Smola: dual quadratic program
- Our new approach: a convex quadratic program.
Our new approach is simpler and faster.

15 Experimental Results: Census20k (20,000 points, 11 features) [chart: training time in CPU seconds; our method is faster]

16 Experimental Results: CPUSmall (8,192 points, 12 features) [chart: training time in CPU seconds; our method is faster]

17 Introduce a nonlinear kernel!
- Begin with the previous formulation.
- Substitute w = A'α and minimize α instead.
- Substitute K(A,A') for AA'.
A kernel is a nonlinear function.
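A minimal sketch of the kernel substitution follows; the Gaussian kernel and all variable names here are illustrative assumptions, since the slides do not specify a particular kernel.

```python
import numpy as np

def gaussian_kernel(A, B, mu=0.1):
    """K(A, B')[i, j] = exp(-mu * ||A_i - B_j||^2) -- one common nonlinear kernel."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-mu * sq_dists)

A = np.random.rand(5, 3)      # 5 data points, 3 features
alpha = np.random.rand(5)     # dual variables (found by the actual optimization)
b = 0.0

# Linear model:  prediction = A @ w + b,  with  w = A.T @ alpha
linear_pred = A @ (A.T @ alpha) + b
# Kernel model:  replace A @ A.T by K(A, A')
kernel_pred = gaussian_kernel(A, A) @ alpha + b
print(linear_pred, kernel_pred)
```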

18 Nonlinear results: nonlinear kernels improve accuracy.

19 Part II: Tolerant Regression, a.k.a. Tolerant Training

20 Regression Approach Summary
- Quadratic loss function
  - Standard method in statistics
  - Over-emphasizes outliers
- Linear loss function (1-norm)
  - Formulates well as a linear program
  - Over-emphasizes small errors
- Huber loss function (hybrid approach)
  - Appropriate emphasis on large and small errors

21 Optimization problem (1-norm)
- Find w, b such that the fitted values approximate the observations.
- Bound the error by s.
- Minimize the error.
Minimize the magnitude of the error.
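The equations on this slide did not come through the transcript; a standard way to write the 1-norm fit as a linear program (e denotes a vector of ones, and the notation is assumed rather than copied from the slide) is:

```latex
\min_{w,\,b,\,s}\; e^{\top} s
\qquad \text{subject to} \qquad
-\,s \;\le\; Aw + b\,e - y \;\le\; s .
```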

22 The overfitting issue
- Noisy training data can be fitted “too well,” which leads to poor generalization on future data.
- Prefer simpler regressions, i.e. where some w coefficients are zero and the line is “flatter.”

23 Reducing overfitting
- To achieve both goals, also minimize the magnitude of the w vector.
- C is a parameter to balance the two goals, chosen by experimentation.
- This reduces overfitting due to points far from the surface.
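One common way to write the combined objective adds the size of w to the error term, with C trading the two off; the choice of the 1-norm on w and the notation below are assumptions, not the slide's own formula:

```latex
\min_{w,\,b,\,s}\; \|w\|_{1} \;+\; C\, e^{\top} s
\qquad \text{subject to} \qquad
-\,s \;\le\; Aw + b\,e - y \;\le\; s .
```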

24 Overfitting again: “close” points
- “Close” points may be wrong due to noise only; the line should be influenced by “real” data, not noise.
- Ignore errors from those points which are close!

25 Tolerant regression
- Allow an interval of size ε with uniform error.
- How large should ε be? As large as possible, while preserving accuracy.
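One standard way to express the ε-insensitive ("tolerant") constraints is sketched below; this is an illustration rather than the slide's exact formulation. Errors inside the interval of size ε cost nothing, and only the excess s is penalized:

```latex
\min_{w,\,b,\,s}\; \|w\|_{1} \;+\; C\, e^{\top} s
\qquad \text{subject to} \qquad
-\,\varepsilon\, e - s \;\le\; Aw + b\,e - y \;\le\; \varepsilon\, e + s,
\qquad s \ge 0 .
```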

26 How about a nonlinear surface?

27 Introduce a nonlinear kernel!
- Begin with the previous formulation.
- Substitute w = A'α and minimize α instead.
- Substitute K(A,A') for AA'.
A kernel is a nonlinear function.

28 Our improvements
- This formulation and interpretation are new!
  - Improves on the intuition from prior results
  - Uses fewer variables
  - Solves faster!
- Computational tests run on DMI Locop2:
  - Dell PowerEdge 6300 server with four gigabytes of memory and 36 gigabytes of disk space
  - Windows NT Server 4.0
  - CPLEX 6.5 solver
  - Donated to UW by Microsoft Corporation

29 Comparison Results

30 Problem size concerns
- How does the problem scale? (m = number of points, n = number of features)
- For a linear kernel the problem size is O(mn); for a nonlinear kernel it is O(m²).
- Thousands of data points ==> massive problem!
Need an algorithm that will scale well.
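To make the scaling concrete, take the Census20k problem from the earlier slide (m = 20,000 points, n = 11 features); the byte count assumes 8-byte floating-point entries:

```latex
mn = 2.2\times 10^{5}\ \text{entries (linear kernel)}
\qquad\text{vs.}\qquad
m^{2} = 4\times 10^{8}\ \text{entries} \;\approx\; 3.2\ \text{GB (nonlinear kernel)} .
```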

31 Chunking approach
- Idea: use a chunking method (a schematic sketch follows below).
  - Bring as much into memory as possible.
  - Solve this subset of the problem.
  - Retain the solution and integrate it into the next subset.
- Explored in depth by Paul Bradley and O. L. Mangasarian for linear kernels.
Solve in pieces, one chunk at a time.
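A schematic sketch of the row-chunking idea (not the authors' implementation; solve_subproblem and the rule for which rows to retain are placeholders):

```python
import numpy as np

def row_chunking(A, y, chunk_rows, solve_subproblem):
    """Solve on one chunk at a time, carrying the previous chunk's 'active'
    rows (e.g. support vectors) forward into the next chunk."""
    carried_A = np.empty((0, A.shape[1]))
    carried_y = np.empty(0)
    solution = None
    for start in range(0, A.shape[0], chunk_rows):
        chunk_A = np.vstack([carried_A, A[start:start + chunk_rows]])
        chunk_y = np.concatenate([carried_y, y[start:start + chunk_rows]])
        # solve_subproblem stands in for the LP/QP solve on this chunk; it
        # returns the current solution and a mask of rows worth keeping.
        solution, active = solve_subproblem(chunk_A, chunk_y, warm_start=solution)
        carried_A, carried_y = chunk_A[active], chunk_y[active]
    return solution
```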

32 Row-Column Chunking
- Why column chunking also?
  - If a nonlinear kernel is used, chunks are very wide.
  - A wide chunk must have a small number of rows to fit in memory.
Both of these chunks use the same memory!

33 Chunking Experimental Results

34 Objective Value & Tuning Set Error for a Billion-Element Matrix. Given enough time, we find the right answer!

35 Integration into data mining tools
- The method runs as a stand-alone application, with data resident on disk.
- With minimal effort, it could sit on top of an RDBMS to manage data input/output; the queries that select a subset of the data are easily expressed in SQL.
- Database queries occur “infrequently,” so data mining can be performed on a different machine from the one maintaining the DBMS.
- Licensing of a linear program solver is necessary.
The algorithm can integrate with data mining tools.

36 Part III: Active Support Vector Machines a.k.a. ASVM

37 The Classification Problem
[plot: separating surface between the point sets A+ and A−]
Find the surface that best separates the two classes.
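In the linearly separable case the picture corresponds to the standard conditions below; this is a sketch, since the slide itself only shows the plot, and the symbols w, b, e are assumed notation:

```latex
\text{separating surface: } x^{\top} w = b,
\qquad
A_{+}\, w \;\ge\; (b+1)\,e,
\qquad
A_{-}\, w \;\le\; (b-1)\,e .
```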

38 Active Support Vector Machine
- Features:
  - Solves classification problems.
  - No special software tools necessary! No LP or QP!
  - FAST. Works on very large problems.
  - Web page: www.cs.wisc.edu/~musicant/asvm
  - Available for download and can be integrated into data mining tools.
  - MATLAB integration already provided.

39 Summary and Future Work
- Summary
  - Robust regression can be modeled simply and efficiently as a quadratic program.
  - Tolerant regression can be used to solve massive regression problems.
  - ASVM can solve massive classification problems quickly.
- Future work
  - Parallel approaches
  - Distributed approaches
  - ASVM for various types of regression

40 Questions?

