2 Advanced Analysis Algorithms for Top Analysis
Pushpa Bhat, Fermilab
Top Thinkshop 2, Fermilab, IL, November 2000
"A reasonable man adapts himself to the world. An unreasonable man persists in adapting the world to himself. So, all progress depends on the unreasonable one." - Bernard Shaw

3 What do we gain?
b-tag efficiency in Run I: DØ ~20%, CDF ~53%. But DØ was able to measure the top quark mass with a precision approaching that of CDF by using multivariate techniques to separate signal and background while minimizing the correlation of the selection with the top quark mass.

4 Optimal Analysis Methods
The new generation of experiments will be far more demanding than the previous one in data handling at all stages. The time-honored procedure of choosing and applying cuts on one event variable at a time is rarely optimal! Since the measurements are multivariate, the optimal methods of analysis are necessarily multivariate.
Discriminant Analysis: partition the multidimensional variable space, identifying boundaries between classes of objects.
Cluster Analysis: assign objects to groups based on similarity.
Regression Analysis: functional approximation/fitting.

5 Data Analysis Tasks
Particle identification: e-ID, μ-ID, b-ID, τ, q/g.
Signal/background event classification: signals of new physics are rare and small (finding a "jewel" in a haystack).
Parameter estimation: top mass, Higgs mass, track parameters, for example.
Function approximation: correction functions, tag rates, fake rates.
Data exploration: data-driven extraction of information, latent structure analysis.

6 Why Multivariate Methods? Because they are optimal!
[Figure: two event classes in the (x1, x2) plane, separated by a linear discriminant D(x1, x2) = 2.014 x1 + 1.592 x2.]
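A linear discriminant boundary like the one on the slide can be obtained with Fisher's method, projecting onto w proportional to S_W^-1 (mu_s - mu_b). Below is a minimal sketch on made-up two-dimensional Gaussian data; the slide's coefficients (2.014, 1.592) come from the actual analysis, not from this toy.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy two-class samples in the (x1, x2) plane (stand-ins for the slide's data)
sig = rng.multivariate_normal([2.0, 1.5], [[1.0, 0.3], [0.3, 1.0]], size=1000)
bkg = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=1000)

# Fisher linear discriminant: w ~ Sw^-1 (mu_s - mu_b)
Sw = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)   # within-class scatter
w = np.linalg.solve(Sw, sig.mean(axis=0) - bkg.mean(axis=0))
print(f"D(x1, x2) = {w[0]:.3f}*x1 + {w[1]:.3f}*x2")
```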

7 Optimal Event Selection
Optimal event selection defines decision boundaries that minimize the probability of misclassification. So, the problem mathematically reduces to that of calculating r(x), the Bayes discriminant function r(x) = p(x|s) p(s) / [p(x|b) p(b)], or, equivalently, the posterior probability p(s|x) = p(x|s) p(s) / [p(x|s) p(s) + p(x|b) p(b)], from the class-conditional probability densities.
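To make this concrete, here is a small sketch evaluating the posterior p(s|x) for hypothetical one-dimensional Gaussian class-conditional densities and assumed priors; in a real analysis the densities would be estimated from signal and background Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical class-conditional densities and priors (assumptions for this toy)
p_s = norm(loc=2.0, scale=1.0).pdf    # p(x|s)
p_b = norm(loc=0.0, scale=1.0).pdf    # p(x|b)
prior_s, prior_b = 0.1, 0.9           # assumed p(s), p(b)

def posterior(x):
    """Posterior probability p(s|x) from Bayes' theorem."""
    num = p_s(x) * prior_s
    return num / (num + p_b(x) * prior_b)

print(posterior(np.array([0.0, 2.0, 4.0])))   # rises toward 1 in the signal region
```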

8 Probability Density Estimators
Histogramming: the basic recipe of non-parametric density estimation is very simple! Histogram the data in M bins in each of the d feature variables: M^d bins in total, the Curse of Dimensionality. (For example, M = 10 bins in d = 10 variables already means 10^10 bins.) In high dimensions, we would either require a huge number of data points, or most of the bins would be empty, leading to an estimated density of zero. But the variables are generally correlated and hence tend to be restricted to a sub-space: the Intrinsic Dimensionality.

9 Kernel-Based Methods
Akin to histogramming, but adopts importance sampling. Place in d-dimensional space a hypercube of side h (h = smoothing parameter) centered on each of the N data points x_n; the density estimate is
p(x) = (1/N) Σ_n (1/h^d) H((x − x_n)/h), where H(u) = 1 if x_n is in the hypercube centered at x, and 0 otherwise.
This estimate has discontinuities, which can be smoothed out using different forms for the kernel function H(u); a common choice is a multivariate Gaussian kernel.
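A minimal numpy sketch of both estimators described above, the hard hypercube (Parzen window) kernel and its Gaussian smoothing; the toy sample, evaluation points, and h are arbitrary choices.

```python
import numpy as np

def parzen_density(x, data, h):
    """Hypercube kernel density estimate at points x.

    data: (N, d) sample; x: (M, d) evaluation points; h: smoothing parameter.
    """
    N, d = data.shape
    # H(u) = 1 when every coordinate of (x - x_n)/h lies within the unit cube
    u = (x[:, None, :] - data[None, :, :]) / h
    inside = np.all(np.abs(u) <= 0.5, axis=2)
    return inside.sum(axis=1) / (N * h**d)

def gaussian_kde(x, data, h):
    """Smoothed variant using a multivariate Gaussian kernel."""
    N, d = data.shape
    u = (x[:, None, :] - data[None, :, :]) / h
    k = np.exp(-0.5 * (u**2).sum(axis=2)) / (2 * np.pi)**(d / 2)
    return k.sum(axis=1) / (N * h**d)

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))            # toy 2-D sample
grid = np.array([[0.0, 0.0], [2.0, 2.0]])   # evaluation points
print(parzen_density(grid, data, h=0.5))
print(gaussian_kde(grid, data, h=0.5))
```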

10 K Nearest-Neighbor Method
Place a hypersphere centered at each evaluation point x and allow the radius to grow, to a volume V, until it contains K data points. Then the density at x is p(x) = K/(N V), where N is the total number of data points. If our data set contains N_k points in class C_k and N points in total, and K_k of the K points in volume V belong to class C_k, then p(x|C_k) = K_k/(N_k V), and the posterior probability is p(C_k|x) = K_k/K.
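A short sketch of the K nearest-neighbor posterior p(C_k|x) = K_k/K on toy two-class data; the cluster positions and the choice K = 25 are arbitrary.

```python
import numpy as np

def knn_posterior(x, data, labels, k):
    """K nearest-neighbor posterior p(C|x) = K_c / K for each class.

    data: (N, d) points; labels: (N,) integer class labels; x: (d,) query point.
    """
    dist = np.linalg.norm(data - x, axis=1)
    nearest = labels[np.argsort(dist)[:k]]       # classes of the K closest points
    return {int(c): np.mean(nearest == c) for c in np.unique(labels)}

rng = np.random.default_rng(2)
sig = rng.normal(2.0, 1.0, size=(500, 2))        # toy signal cluster
bkg = rng.normal(0.0, 1.0, size=(500, 2))        # toy background cluster
data = np.vstack([sig, bkg])
labels = np.array([1] * 500 + [0] * 500)
print(knn_posterior(np.array([1.5, 1.5]), data, labels, k=25))
```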

11 Discriminant Approximation with Neural Networks
The output of a feed-forward neural network can approximate the Bayesian posterior probability p(s|x,y) directly, without first estimating the class-conditional probability densities.

12 Calculating the Discriminant
Consider the sum E(w) = Σ_i [d_i − n(x_i, y_i, w)]², where d_i = 1 for signal events, d_i = 0 for background events, and w is the vector of network parameters. Then, in the limit of large data samples, minimizing E(w) yields n(x, y, w) → p(s|x, y), provided the function n(x, y, w) is flexible enough.
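The claim can be checked numerically. The sketch below trains a tiny one-hidden-layer network by gradient descent on the squared-error sum E(w) and compares its output with the exact posterior for a toy problem with known Gaussian densities; the network size, learning rate, and densities are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy 1-D problem with known densities, so the trained output can be
# checked against the exact posterior p(s|x)
x = np.concatenate([rng.normal(2, 1, 2000), rng.normal(0, 1, 2000)])[:, None]
d = np.concatenate([np.ones(2000), np.zeros(2000)])  # d_i: 1 = signal, 0 = background

H = 8                                    # hidden units (arbitrary choice)
W1, b1 = rng.normal(0, 1, (1, H)), np.zeros(H)
W2, b2 = rng.normal(0, 1, (H, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

eta = 0.5
for _ in range(5000):                    # gradient descent on E(w)
    h = np.tanh(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2).ravel()
    g_out = ((out - d) * out * (1 - out))[:, None] / len(d)
    g_h = (g_out @ W2.T) * (1 - h**2)
    W2 -= eta * h.T @ g_out;  b2 -= eta * g_out.sum(0)
    W1 -= eta * x.T @ g_h;    b1 -= eta * g_h.sum(0)

xt = np.array([[0.0], [1.0], [2.0]])
net = sigmoid(np.tanh(xt @ W1 + b1) @ W2 + b2).ravel()
gauss = lambda t, m: np.exp(-0.5 * (t - m)**2)
exact = (gauss(xt, 2) / (gauss(xt, 2) + gauss(xt, 0))).ravel()
print("network:", net)                   # should approach the exact posterior
print("exact:  ", exact)                 # ~ [0.12, 0.50, 0.88] for equal priors
```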

13 Neural Networks
A NN estimates a mapping function without requiring a mathematical description of how the output formally depends on the inputs. The "hidden" transformation functions, g, adapt themselves to the data as part of the training process; the number of such functions needs to grow only as the complexity of the problem grows. [Figure: feed-forward network mapping inputs x1, x2, x3, x4 to a discriminant output D_NN.]

14 Why are NN models powerful?
Neural networks are universal approximators: with a sufficiently large NN, you can approximate a function to arbitrary accuracy, and convergence of the approximation is rapid. High dimensionality is not a curse any more! Model complexity can be controlled by regularization, and NNs extrapolate gracefully.

15 Also, models need to have optimal flexibility/complexity.
[Figure: Mth-order polynomial fits in the (x1, x2) plane for M = 1 (simple), M = 3 (flexible), and M = 10 (highly flexible).]
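The bias-variance trade-off on the slide is easy to reproduce. A toy sketch, with an assumed sine-curve truth and noise level, comparing the same three flexibilities:

```python
import numpy as np

rng = np.random.default_rng(4)
# Noisy samples of a smooth underlying curve (a stand-in for the slide's data)
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

x_test = np.linspace(0, 1, 200)
y_true = np.sin(2 * np.pi * x_test)
for M in (1, 3, 10):                     # the three flexibilities on the slide
    coeffs = np.polyfit(x, y, deg=M)     # least-squares Mth-order polynomial fit
    rms = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_true)**2))
    print(f"M = {M:2d}: RMS error vs. true curve = {rms:.3f}")
# Typically M = 1 underfits, M = 3 does best, and M = 10 chases the noise.
```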

16 The Golden Rule
Keep it simple: as simple as possible, not any simpler. - Einstein

17 Measuring the Top Quark Mass: The Discriminants
[Figure: DØ distributions of the discriminant variables; shaded = top.]

18 Measuring the Top Quark Mass
DØ lepton+jets result: m_t = 173.3 ± 5.6 (stat.) ± 6.2 (syst.) GeV/c². [Figure: fitted-mass distributions in the background-rich and signal-rich samples.]

19 Strategy for Discovering the Higgs Boson at the Tevatron P.C. Bhat, R. Gilmartin, H. Prosper, PRD 62 (2000) hep-ph/0001152

20 WH Results from NN Analysis (M_H = 100 GeV/c²)
[Figure: WH signal distributions and NN separation of WH vs. Wbb.]

21 WH (110 GeV/c²) NN Distributions

22 Results, Standard vs. NN
A good chance of discovery up to M_H = 130 GeV/c² with 20-30 fb⁻¹.

23 Improving the Higgs Mass Resolution
Use m_jj and H_T (= Σ E_T^jets) to train NNs to predict the Higgs boson mass. [Figure: mass distributions; the resolution improves from roughly 13-14% with m_jj alone to roughly 11-12% with the NN prediction.]
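A hedged sketch of the regression idea: train a small network on (m_jj, H_T) to predict the true mass. The smearing model, the correlation of H_T with the mass, and the network settings below are invented for illustration; the analysis cited above used full simulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 20000
m_true = rng.uniform(90, 130, n)                 # generated Higgs mass
m_jj = m_true * rng.normal(1.0, 0.14, n)         # di-jet mass, ~14% smearing (assumed)
h_t = 0.8 * m_true + rng.normal(0.0, 10.0, n)    # H_T correlated with the mass (assumed)

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(10,), max_iter=500, random_state=0))
net.fit(np.column_stack([m_jj, h_t]), m_true)
pred = net.predict(np.column_stack([m_jj, h_t]))

print("m_jj alone resolution:", np.std(m_jj / m_true))    # ~0.14 by construction
print("NN-predicted resolution:", np.std(pred / m_true))  # smaller: H_T adds information
```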

24 Newer Approaches: Ensembles of Networks
Committees of networks: performance can be better than that of the best single network.
Stacks of networks: control both bias and variance.
Mixtures of experts: decompose complex problems.

25 Bayesian Reasoning
The Bayesian approach provides a well-founded mathematical procedure for making straightforward and meaningful model comparisons. It also allows treatment of all uncertainties in a consistent manner. Examples of useful applications: fitting binned data to multi-source models, PLB 407 (1997) 73; extraction of the solar neutrino survival probability, PRL 81 (1998) 5056. The approach is mathematically linked to adaptive algorithms such as neural networks (NN); hybrid methods involving NN for probability density estimation and a Bayesian treatment can be very powerful.
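As a flavor of the binned multi-source fits cited above, here is a grid-posterior sketch for a single signal fraction with a flat prior; the templates and yields are made up for this toy.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
# Made-up binned templates and yields, standing in for a multi-source fit
sig_shape = np.array([0.05, 0.15, 0.30, 0.30, 0.20])
bkg_shape = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
data = rng.poisson(30 * sig_shape + 170 * bkg_shape)     # observed counts per bin

# Posterior for the signal fraction f on a grid, flat prior, total yield fixed
total = data.sum()
f_grid = np.linspace(0.0, 1.0, 501)
log_like = np.array([
    poisson.logpmf(data, total * (f * sig_shape + (1 - f) * bkg_shape)).sum()
    for f in f_grid
])
df = f_grid[1] - f_grid[0]
post = np.exp(log_like - log_like.max())
post /= post.sum() * df                                  # normalize the posterior
mean = (f_grid * post).sum() * df
sd = np.sqrt(((f_grid - mean)**2 * post).sum() * df)
print(f"signal fraction = {mean:.3f} +/- {sd:.3f}")      # true value is 30/200 = 0.15
```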

26 Summary
Multivariate methods have already made an impact on discoveries and precision measurements, and they will be the methods of choice in future analyses. We have only scratched the surface in our use of advanced analysis algorithms. Hybrid methods combining "intelligent" algorithms and a probabilistic approach will be the wave of the future!

