
1  Support Vector Machines: Get more Higgs out of your data
Daniel Whiteson, UC Berkeley. July 11, 2001.

2  Multivariate Algorithms
Square cuts may work well for simpler tasks, but when the data are multivariate, the algorithms must be as well.

3  Multivariate Algorithms
HEP overlaps with Computer Science, Mathematics and Statistics in this area: how can we construct an algorithm that can be taught by example and generalize effectively?
We can use solutions from those fields:
– Neural Networks
– Probability Density Estimators
– Support Vector Machines

4  Neural Networks
The decision function is learned using the freedom in the hidden layers.
– Used very effectively as signal discriminators, particle identifiers and parameter estimators
– Fast evaluation makes them well suited to triggers
Constructed from a very simple object, they can learn complex patterns; a sketch of that object follows.
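As a minimal sketch, assuming nothing beyond NumPy: the simple object is a single neuron, a weighted sum passed through a sigmoid, and hidden layers stack many of these. The weights and inputs below are made up for illustration.

import numpy as np

def neuron(x, w, b):
    """Sigmoid of a weighted sum: the basic building block of the network."""
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

print(neuron(np.array([0.5, -1.0]), np.array([2.0, 1.0]), b=0.1))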

5  Probability Density Estimation
If we knew the distributions of the signal, f_s(x), and the background, f_b(x), then we could calculate the discriminant

D(x) = f_s(x) / (f_s(x) + f_b(x))

and use it to discriminate. [Figure: example discriminating surface]
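As a minimal sketch of this discriminant, the snippet below assumes toy one-dimensional Gaussian densities for signal and background (in a real analysis f_s and f_b are of course unknown):

import numpy as np
from scipy.stats import norm

f_s = norm(loc=1.0, scale=0.5).pdf   # hypothetical signal density
f_b = norm(loc=0.0, scale=1.0).pdf   # hypothetical background density

def discriminant(x):
    """D(x) = f_s(x) / (f_s(x) + f_b(x)); D -> 1 for signal-like x."""
    fs, fb = f_s(x), f_b(x)
    return fs / (fs + fb)

print(discriminant(np.array([-1.0, 0.5, 2.0])))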

6  Probability Density Estimation
Of course we do not know the analytical distributions. Given a set of points drawn from a distribution, put down a kernel centered at each point. With high statistics, this approximates a smooth probability density. [Figure: surface built from many kernels]
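A minimal sketch of such a kernel density estimate, assuming Gaussian kernels of a fixed, illustrative width h:

import numpy as np

def kde(x, points, h=0.3):
    """Average of Gaussian kernels of width h centered on the training points."""
    z = (x[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(points) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=500)    # points drawn from the unknown density
print(kde(np.linspace(-3, 3, 7), sample))  # smooth estimate on a grid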

7  Probability Density Estimation
Simple techniques have advanced to more sophisticated approaches:
– Adaptive PDE varies the width of the kernel for smoothness (sketched below)
– Generalized for regression analysis: measure the value of a continuous parameter
– GEM measures the local covariance and adjusts the individual kernels to give a more accurate estimate
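One rough sketch of the adaptive-width idea: scale each kernel by the local density of training points, here via the distance to the k-th nearest neighbour. The choice of k, like everything in this snippet, is illustrative rather than the scheme used in the study.

import numpy as np

def adaptive_kde(x, points, k=10):
    d = np.abs(points[:, None] - points[None, :])
    h = np.sort(d, axis=1)[:, k]              # per-point bandwidth: k-th NN distance
    z = (x[:, None] - points[None, :]) / h[None, :]
    w = np.exp(-0.5 * z**2) / (h[None, :] * np.sqrt(2 * np.pi))
    return w.mean(axis=1)                     # wider kernels where data are sparse

rng = np.random.default_rng(1)
pts = rng.normal(size=300)
print(adaptive_kde(np.linspace(-3, 3, 5), pts))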

8  Support Vector Machines
A PDE must evaluate a kernel at every training point for every classification of a data point. Can we build a decision surface that uses only the relevant bits of information, the points in the training set that lie near the signal-background boundary?
For a linear, separable case this is not too difficult: we simply need to find the hyperplane that maximizes the separation.
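A minimal sketch of the linear, separable case, using scikit-learn (a modern library, not what was available at the time) on a made-up dataset:

import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [1., 1.], [3., 3.], [4., 4.]])
y = np.array([-1, -1, 1, 1])           # two linearly separable classes

clf = SVC(kernel="linear", C=1e6)      # very large C approximates a hard margin
clf.fit(X, y)
print(clf.support_vectors_)            # only the points nearest the boundary matter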

9  Support Vector Machines
To find the hyperplane that gives the highest separation (lowest "energy"), we maximize the Lagrangian with respect to the alpha_i:

L_D = \sum_i \alpha_i - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j y_i y_j \, x_i \cdot x_j

where the (x_i, y_i) are the training data and the alpha_i are positive Lagrange multipliers. The solution is

w = \sum_i \alpha_i y_i x_i

where alpha_i = 0 for the non-support vectors. (Images from the applet at http://svm.research.bell-labs.com/)
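Continuing the sketch above: the fitted machine exposes the products alpha_i * y_i and the support vectors, so w = sum_i alpha_i y_i x_i can be reconstructed and checked against the learned hyperplane (the attribute names are scikit-learn's):

import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [1., 1.], [3., 3.], [4., 4.]])
y = np.array([-1, -1, 1, 1])
clf = SVC(kernel="linear", C=1e6).fit(X, y)

w = clf.dual_coef_ @ clf.support_vectors_   # sum of alpha_i y_i x_i over support vectors
print(w, clf.coef_)                         # the two agree for a linear kernel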

10  Support Vector Machines
But not many problems of interest are linear. Map the data to a higher-dimensional space where the separation can be made by hyperplanes. Since we want to keep working in our original space, replace the dot product with a kernel function:

x_i \cdot x_j \rightarrow K(x_i, x_j)

For data like these, a nonlinear kernel is needed.
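A minimal sketch of that substitution, using a degree-2 polynomial kernel as one plausible choice for data separable by a quadratic boundary (the kernel and data here are illustrative):

import numpy as np

def poly_kernel(A, B, degree=2, c=1.0):
    """K(x, x') = (x . x' + c)^degree, evaluated for all pairs of rows."""
    return (A @ B.T + c) ** degree

X = np.array([[0., 1.], [1., 0.], [1., 1.]])
print(poly_kernel(X, X))   # the Gram matrix the SVM optimizes over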

11  Support Vector Machines
Nor are problems that are not entirely separable very difficult. Allow an imperfect decision boundary, but add a penalty. [Figure: training errors, points on the wrong side of the boundary, are indicated by crosses.]
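A minimal sketch of this soft margin: in scikit-learn's parameterization the penalty is the constant C, and a small C tolerates more points on the wrong side of the boundary (the overlapping toy data are made up):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[np.full(50, -1), np.full(50, 1)]   # overlapping classes: not separable

for C in (0.01, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(C, clf.n_support_)                 # looser margin -> more support vectors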

12  Support Vector Machines
We are not limited to linear or polynomial kernels. A Gaussian kernel,

K(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / 2\sigma^2),

gives a highly flexible SVM. Gaussian-kernel SVMs outperformed PDEs in recognizing handwritten digits from the USPS database.
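A minimal sketch of a Gaussian-kernel SVM on a toy problem no hyperplane or low-degree polynomial can solve, a ring of background around a blob of signal (scikit-learn writes the kernel with gamma = 1 / (2 sigma^2)):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
r = np.r_[rng.uniform(0, 1, 100), rng.uniform(2, 3, 100)]   # inner blob, outer ring
t = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[r * np.cos(t), r * np.sin(t)]
y = np.r_[np.ones(100), -np.ones(100)]

clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)
print(clf.score(X, y))   # near-perfect on this toy set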

13  Comparative Study for HEP
[Figure: distributions of the discriminator value for a 2-dimensional discriminant built from the variables M_jj and H_t, shown for a Neural Net, a PDE and an SVM. Signal: Wh -> bb. Backgrounds: Wbb, tt, WZ.]

14  Comparative Study for HEP
[Figure: signal-to-noise enhancement versus discriminator threshold, with quoted efficiencies of 49%, 50% and 43%.] All of these methods provide powerful signal enhancement.

15  Algorithm Comparisons
Neural Nets
– Advantages: very fast evaluation
– Disadvantages: structure built by hand; black box; local optimization
PDE
– Advantages: transparent operation
– Disadvantages: slow evaluation; requires high statistics
SVM
– Advantages: fast evaluation; kernel positions chosen automatically; global optimization
– Disadvantages: complex; training can be time intensive; kernel selection by hand

16  Conclusions
Difficult problems in HEP overlap with those in other fields, so we can take advantage of our colleagues' years of thought and effort. There are many areas of HEP analysis where intelligent multivariate algorithms like NNs, PDEs and SVMs can help us conduct more powerful searches and make more precise measurements.

