
Slide 1: [No transcript; title slide.]

Slide 2: Advanced Analysis Techniques in HEP
Pushpa Bhat, Fermilab
ACAT2000, Fermilab, IL, October 2000
"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." - Bernard Shaw

Slide 3: Outline
- Introduction
- Intelligent Detectors: moving intelligence closer to the action
- Optimal Analysis Methods: the neural network revolution
- New Searches & Precision Measurements: discovery reach for the Higgs boson; measuring the top quark mass and the Higgs mass
- Sophisticated Approaches: a probabilistic approach to data analysis
- Summary

Slide 4: [Diagram: the path from the world before the experiment to the world after the experiment - data collection, data transformation (feature extraction, global decision), express analysis, data organization/reduction/analysis, and data interpretation.]

Slide 5: Intelligent Detectors
- Data analysis starts when a high-energy event occurs: transform electronic data into useful "physics" information in real time.
- Move intelligence closer to the action!
- Algorithm-specific hardware: neural networks in silicon.
- Configurable hardware: FPGAs and DSPs to implement "smart" algorithms in hardware.
- Innovative on-line data management plus "smart" algorithms in hardware: data in RAM disk and AI algorithms in FPGAs.
- Expert systems for control and monitoring.

Slide 6: Data Analysis Tasks
- Particle identification: e-ID, μ-ID, b-ID, e/γ and q/g separation.
- Signal/background event classification: signals of new physics are rare and small (finding a "jewel" in a haystack).
- Parameter estimation: top mass, Higgs mass, track parameters, for example.
- Function approximation: correction functions, tag rates, fake rates.
- Data exploration: knowledge discovery via data mining; data-driven extraction of information, latent structure analysis.

Slide 7: Optimal Analysis Methods
The measurements being multivariate, the optimal methods of analysis are necessarily multivariate.
- Discriminant analysis: partition the multidimensional variable space and identify boundaries between classes.
- Cluster analysis: assign objects to groups based on similarity.
Examples: Fisher linear discriminant, Gaussian classifier, kernel-based methods, K-nearest-neighbor (clustering) methods, adaptive/AI methods.

Slide 8: Why Multivariate Methods? Because they are optimal!
[Plots: signal and background populations in the (x1, x2) plane, separated by the linear discriminant D(x1, x2) = 2.014 x1 + 1.592 x2.]
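As a concrete illustration of where such a linear boundary can come from, here is a minimal sketch of a Fisher linear discriminant computed from toy data; the two Gaussian samples and the resulting weights are invented for illustration and are unrelated to the coefficients on the slide.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy signal and background samples in the (x1, x2) plane (assumed shapes).
sig = rng.multivariate_normal([2.0, 2.0], [[1.0, 0.3], [0.3, 1.0]], 500)
bkg = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], 500)

# Fisher linear discriminant: w ~ S_W^{-1} (m_sig - m_bkg),
# where S_W is the within-class scatter (covariance) matrix.
m_s, m_b = sig.mean(axis=0), bkg.mean(axis=0)
S_W = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)
w = np.linalg.solve(S_W, m_s - m_b)

def D(x):
    """Linear discriminant D(x1, x2) = w1*x1 + w2*x2."""
    return x @ w

print("weights:", w)
print("mean D(signal) =", D(sig).mean(), " mean D(background) =", D(bkg).mean())
```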

Slide 9: Also, the methods need to have optimal flexibility/complexity.
[Plots: Mth-order polynomial fits to the same (x1, x2) data for M = 1 (simple), M = 3 (flexible), and M = 10 (highly flexible).]
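The trade-off the panels illustrate is easy to reproduce numerically; a small sketch with invented one-dimensional data, fitting polynomials of order M = 1, 3, and 10:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: a smooth curve plus noise (a stand-in for the slide's points).
x = np.linspace(0.0, 1.0, 15)
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.2, x.size)
x_test = np.linspace(0.0, 1.0, 200)
y_true = np.sin(2.0 * np.pi * x_test)

for M in (1, 3, 10):                    # simple, flexible, highly flexible
    coeffs = np.polyfit(x, y, deg=M)    # least-squares polynomial fit
    rms = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_true) ** 2))
    print(f"M = {M:2d}: test RMS error = {rms:.3f}")
# Typically M=1 underfits and M=10 overfits the noise; an intermediate
# order generalizes best, which is the "optimal complexity" of the slide.
```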

Slide 10: The Golden Rule
Keep it simple. As simple as possible. Not any simpler. - Einstein

Slide 11: Optimal Event Selection
Optimal event selection defines decision boundaries that minimize the probability of misclassification. The problem therefore reduces mathematically to calculating the Bayes discriminant function, i.e. the posterior probability
$$ r(x) = p(S \mid x) = \frac{p(x \mid S)\,P(S)}{p(x \mid S)\,P(S) + p(x \mid B)\,P(B)} $$
or, equivalently, the class-conditional probability densities that enter it.
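For intuition, a numerical sketch of r(x) in the simplest setting, where the class-conditional densities are known one-dimensional Gaussians; the densities and priors here are assumptions for illustration only.

```python
import numpy as np

def gauss(x, mu, sigma):
    """One-dimensional Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Assumed class-conditional densities and prior class probabilities.
p_S, p_B = 0.5, 0.5                       # P(S), P(B)

def r(x):
    """Bayes discriminant r(x) = p(x|S)P(S) / [p(x|S)P(S) + p(x|B)P(B)]."""
    s = gauss(x, mu=+1.0, sigma=1.0) * p_S
    b = gauss(x, mu=-1.0, sigma=1.0) * p_B
    return s / (s + b)

for xi in np.linspace(-4, 4, 9):
    print(f"x = {xi:+.1f}  r(x) = {r(xi):.3f}")
# Selecting events with r(x) > 0.5 (here x > 0) minimizes the
# misclassification probability, as stated on the slide.
```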

Slide 12: [No transcript; image-only slide.]

Slide 13: [No transcript; image-only slide.]

Slide 14: Probability Density Estimators
Histogramming: the most basic approach to non-parametric density estimation is very simple. Histogram the data in M bins in each of the d feature variables: M^d bins in total → the curse of dimensionality. In high dimensions we would either require a huge number of data points, or most of the bins would be empty, leading to an estimated density of zero. But the variables are generally correlated and hence tend to be restricted to a sub-space → intrinsic dimensionality.
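The M^d blow-up is easy to demonstrate; a short sketch that histograms toy Gaussian data with numpy's histogramdd and counts the empty bins as d grows:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 10_000, 10              # data points; M bins per feature variable

for d in (1, 2, 3, 5):
    data = rng.normal(size=(N, d))        # toy uncorrelated sample (assumed)
    counts, _ = np.histogramdd(data, bins=M, range=[(-4, 4)] * d)
    empty = np.mean(counts == 0)
    # The density estimate is count / (N * bin_volume); with M**d bins,
    # most bins are empty once d grows: the curse of dimensionality.
    print(f"d = {d}: {M**d:>7d} bins, fraction empty = {empty:.2f}")
```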

Slide 15: Kernel-Based Methods
Akin to histogramming, but adopts importance sampling: place in d-dimensional space a hypercube of side h (the smoothing parameter) centered on each data point. With N data points, the estimate is
$$ p(x) = \frac{1}{N} \sum_{n=1}^{N} \frac{1}{h^{d}}\, H\!\left(\frac{x - x_n}{h}\right), \qquad H(u) = \begin{cases} 1 & \text{if } x_n \text{ lies in the hypercube centered at } x \\ 0 & \text{otherwise} \end{cases} $$
This estimate has discontinuities, which can be smoothed out by using different forms for the kernel function H(u); a common choice is a multivariate Gaussian kernel.
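A minimal sketch of both variants, assuming a toy two-dimensional Gaussian sample and an arbitrary smoothing parameter h:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(0.0, 1.0, size=(1000, 2))   # toy d=2 sample (assumed)
N, d = data.shape
h = 0.5                                       # smoothing parameter

def p_hypercube(x):
    """Hypercube kernel: count points inside a cube of side h around x."""
    inside = np.all(np.abs(data - x) < h / 2.0, axis=1)
    return inside.sum() / (N * h**d)

def p_gaussian(x):
    """Smooth variant: multivariate Gaussian kernel of width h."""
    u2 = np.sum((data - x) ** 2, axis=1) / h**2
    return np.mean(np.exp(-0.5 * u2)) / ((2 * np.pi) ** (d / 2) * h**d)

x0 = np.zeros(2)
print("hypercube estimate at the origin:", p_hypercube(x0))
print("gaussian  estimate at the origin:", p_gaussian(x0))
# True value for a standard 2-D normal at the origin: 1/(2*pi) ~ 0.159
```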

Slide 16: K-Nearest-Neighbor Method
Place a hypersphere centered at each data point x and allow its radius to grow until the enclosed volume V contains K data points. Then the density at x is
$$ p(x) = \frac{K}{N V} $$
where N is the total number of data points. If the data set contains N_k points in class C_k, and the volume V around x contains K_k of them, then
$$ p(x \mid C_k) = \frac{K_k}{N_k V} \quad \Rightarrow \quad P(C_k \mid x) = \frac{K_k}{K} $$
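A sketch of the method in one dimension, where the "hypersphere" is just an interval of half-width r and V = 2r; the two-class sample here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy two-class sample in one dimension (all numbers assumed).
sig = rng.normal(+1.0, 1.0, 300)            # class C_1 (signal)
bkg = rng.normal(-1.0, 1.0, 700)            # class C_2 (background)
data = np.concatenate([sig, bkg])
labels = np.concatenate([np.ones(300), np.zeros(700)])
N, K = data.size, 25

def knn_estimate(x):
    """Grow an interval around x until it holds K points; P(C_1|x) = K_1/K."""
    idx = np.argsort(np.abs(data - x))[:K]   # the K nearest neighbors
    K1 = labels[idx].sum()                   # how many belong to class C_1
    radius = np.abs(data[idx] - x).max()
    density = K / (N * 2 * radius)           # p(x) = K / (N*V), V = 2r in 1-D
    return density, K1 / K

for x in (-2.0, 0.0, 2.0):
    dens, post = knn_estimate(x)
    print(f"x = {x:+.1f}: p(x) ~ {dens:.3f}, P(signal|x) ~ {post:.2f}")
```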

Slide 17: Discriminant Approximation with Neural Networks
The output of a feed-forward neural network can approximate the Bayesian posterior probability p(s|x,y) directly, without estimating the class-conditional probabilities.

Slide 18: Calculating the Discriminant
Consider the sum
$$ E(\omega) = \frac{1}{N} \sum_{i=1}^{N} \left[ d_i - n(x_i, y_i, \omega) \right]^2 $$
where d_i = 1 for signal, d_i = 0 for background, and ω is the vector of network parameters. Then, in the limit of large data samples and provided that the function n(x, y, ω) is flexible enough, minimizing E gives
$$ n(x, y, \omega) \to p(s \mid x, y) $$
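A from-scratch toy illustration of this result: a tiny one-hidden-layer network trained by gradient descent on the squared error above, compared with the analytic posterior for the assumed Gaussian classes. This is only a sketch of the principle, not the networks used in the analyses shown later.

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy training sample: one input x; targets d_i = 1 (signal), 0 (background).
x_sig = rng.normal(+1.0, 1.0, 2000)
x_bkg = rng.normal(-1.0, 1.0, 2000)
x = np.concatenate([x_sig, x_bkg]).reshape(-1, 1)
d = np.concatenate([np.ones(2000), np.zeros(2000)]).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 5 units; the weights play the role of omega.
W1 = rng.normal(0, 0.5, (1, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 1)); b2 = np.zeros(1)

eta = 0.05
for epoch in range(2000):          # minimize E = mean (d - n(x; omega))^2
    h = sigmoid(x @ W1 + b1)       # hidden layer
    n = sigmoid(h @ W2 + b2)       # network output
    delta2 = (n - d) * n * (1 - n)             # output-layer gradient
    delta1 = (delta2 @ W2.T) * h * (1 - h)     # back-propagated gradient
    W2 -= eta * h.T @ delta2 / len(x); b2 -= eta * delta2.mean(axis=0)
    W1 -= eta * x.T @ delta1 / len(x); b1 -= eta * delta1.mean(axis=0)

# For these equal-prior Gaussian classes the exact posterior is
# p(s|x) = 1 / (1 + exp(-2x)); the trained network should approach it.
for xt in (-2.0, 0.0, 2.0):
    exact = 1.0 / (1.0 + np.exp(-2.0 * xt))
    net = sigmoid(sigmoid(np.array([[xt]]) @ W1 + b1) @ W2 + b2)[0, 0]
    print(f"x = {xt:+.1f}: network = {net:.3f}, exact posterior = {exact:.3f}")
```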

Slide 19: Neural Networks
- An NN estimates a mapping function without requiring a mathematical description of how the output formally depends on the inputs.
- The "hidden" transformation functions g adapt themselves to the data as part of the training process. The number of such functions needs to grow only as the complexity of the problem grows.
[Diagram: feed-forward network mapping inputs x1, x2, x3, x4 to the output D_NN.]

Slide 20: Measuring the Top Quark Mass
[Plots: the discriminants and the input discriminant variables; shaded histograms = top.]

Slide 21: Measuring the Top Quark Mass (DØ lepton+jets)
[Plots: fitted-mass distributions for the background-rich and signal-rich samples.]
m_t = 173.3 ± 5.6 (stat.) ± 6.2 (syst.) GeV/c²

Slide 22: Strategy for Discovering the Higgs Boson at the Tevatron
P.C. Bhat, R. Gilmartin, H. Prosper, Phys. Rev. D 62 (2000), hep-ph/0001152

Slide 23: Hints from the Analysis of Precision Data
LEP Electroweak Group, http://www.cern.ch/LEPEWWG/plots/summer99
M_H = … GeV/c²; M_H < 225 GeV/c² at 95% C.L.

Slide 24: Event Simulation
- Event generation: WH, ZH (signal) and ZZ, top (background) with PYTHIA; Wbb, Zbb with CompHEP, fragmentation with PYTHIA.
- Detector modeling with SHW (http://www.physics.rutgers.edu/~jconway/soft/shw/shw.html): trigger, tracking, jet-finding, b-tagging (double b-tag efficiency ~45%).
- Di-jet mass resolution ~14% (scaled down to 10% for the Run II Higgs studies).

Slide 25: WH Results from NN Analysis (M_H = 100 GeV/c²)
[Plots: NN output distributions for the WH signal vs. the Wbb background.]

Slide 26: WH (110 GeV/c²) NN Distributions
[Plots: NN output distributions.]

Slide 27: Results, Standard vs. NN Analysis
A good chance of discovery up to M_H = 130 GeV/c² with 20-30 fb⁻¹.

Slide 28: Improving the Higgs Mass Resolution
Use m_jj and H_T (= Σ E_T over jets) to train NNs to predict the Higgs boson mass (a toy sketch follows below).
[Plots: di-jet and NN mass distributions, with quoted resolutions of 13.8%, 12.2%, 13.1%, 11.3%, 13%, and 11%.]
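A minimal regression sketch of this idea, reusing the toy network of slide 18 with a linear output unit and the generated mass as the target; the event distributions, resolutions, and the correlation between m_jj and H_T are invented stand-ins, not the SHW simulation.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy events: true mass m, a di-jet mass smeared by ~14%, and a second
# variable ("ht") correlated with the true mass.  All numbers assumed.
m_true = rng.uniform(90.0, 130.0, 5000)
mjj = m_true * (1.0 + rng.normal(0.0, 0.14, m_true.size))
ht = m_true * (1.0 + rng.normal(0.0, 0.25, m_true.size)) + 50.0

X = np.column_stack([mjj, ht]) / 100.0      # crude feature scaling
t = m_true / 100.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
eta = 0.1
for epoch in range(3000):                   # least-squares mass regression
    h = sigmoid(X @ W1 + b1)
    y = (h @ W2 + b2).ravel()               # linear output unit
    delta2 = (y - t).reshape(-1, 1) / len(t)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    W2 -= eta * h.T @ delta2; b2 -= eta * delta2.sum(axis=0)
    W1 -= eta * X.T @ delta1; b1 -= eta * delta1.sum(axis=0)

m_nn = (sigmoid(X @ W1 + b1) @ W2 + b2).ravel() * 100.0
print("raw mjj resolution:", np.std((mjj - m_true) / m_true))
print("NN mass resolution:", np.std((m_nn - m_true) / m_true))
# With enough training, combining the two correlated variables can
# modestly improve on mjj alone, in the spirit of the slide.
```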

Slide 29: Newer Approaches
- Ensembles of networks.
- Committees of networks: performance can be better than that of the best single network.
- Stacks of networks: control both bias and variance.
- Mixtures of experts: decompose complex problems.

Slide 30: Exploring Models: the Bayesian Approach
- Provides probabilistic information on each parameter of a model (SUSY, for example) via marginalization over the other parameters (in symbols, see below).
- Enables straightforward and meaningful model comparisons, and allows all uncertainties to be treated in a consistent manner.
- Mathematically linked to adaptive algorithms such as neural networks (NN).
- Hybrid methods involving NNs for probability density estimation and a Bayesian treatment can be very powerful.
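In the notation defined on slide 34 below, marginalization integrates out the uninteresting parameters, and two models are compared through the ratio of their evidences (the Bayes factor):
$$ p(p \mid d, M) = \int p(p, A \mid d, M)\, dA $$
$$ \frac{P(M_1 \mid d)}{P(M_2 \mid d)} = \frac{p(d \mid M_1)}{p(d \mid M_2)}\, \frac{P(M_1)}{P(M_2)}, \qquad p(d \mid M) = \int p(d \mid p, A, M)\, \pi(p, A \mid M)\, dp\, dA $$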

Slide 31: Summary
- We are building very sophisticated equipment and will record unprecedented amounts of data in the coming decade.
- The use of advanced "optimal" analysis techniques will be crucial to achieving the physics goals.
- Multivariate methods, particularly neural network techniques, have already made an impact on discoveries and precision measurements, and will be the methods of choice in future analyses.
- Hybrid methods combining "intelligent" algorithms and probabilistic approaches will be the wave of the future.

Slide 32: Optimal Event Selection
r(x,y) = constant defines an optimal decision boundary in the feature space.
[Diagram: feature space with signal (S) and background (B) regions, comparing conventional cuts with the optimal contour r(x,y) = constant.]

Slide 33: Probabilistic Approach to Data Analysis: Bayesian Methods (the wave of the future)

Slide 34: Bayesian Analysis
With model M, interesting parameters p, uninteresting parameters A, and data d:
$$ \underbrace{p(p, A \mid d, M)}_{\text{posterior}} \;\propto\; \underbrace{p(d \mid p, A, M)}_{\text{likelihood}} \;\times\; \underbrace{\pi(p, A \mid M)}_{\text{prior}} $$
Bayesian analysis of multi-source data: P.C. Bhat, H. Prosper, S. Snyder, Phys. Lett. B 407 (1997) 73.
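A numerical sketch of this formula for the simplest multi-source case: a Poisson count d containing a signal s (interesting) on top of a background A (uninteresting) that is constrained by a separate sideband count. All numbers and the flat priors are assumptions for illustration.

```python
import numpy as np
from math import lgamma

def log_poisson(n, mu):
    """log of the Poisson probability P(n | mu)."""
    return n * np.log(mu) - mu - lgamma(n + 1)

# Assumed toy data: d counts in the signal region, m counts in a sideband
# that measures the background with scale factor tau (mean = tau * A).
d, m, tau = 12, 50, 10.0

s_grid = np.linspace(0.01, 20.0, 400)       # interesting parameter s
A_grid = np.linspace(0.01, 15.0, 300)       # uninteresting parameter A
S, A = np.meshgrid(s_grid, A_grid, indexing="ij")

# posterior  ~  likelihood x prior (flat priors assumed in this sketch)
log_post = log_poisson(d, S + A) + log_poisson(m, tau * A)
post = np.exp(log_post - log_post.max())

# Marginalize over the nuisance parameter A to get p(s | d, m).
p_s = np.trapz(post, A_grid, axis=1)
p_s /= np.trapz(p_s, s_grid)

mean = np.trapz(s_grid * p_s, s_grid)
print(f"posterior mean of the signal: s = {mean:.2f} events")
```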

Slide 35: Higgs Mass Fits
S = 80 WH events; the background distribution is assumed to be described by Wbb.
Results:
- S/B = 1/10: M_fit = 114 ± 11 GeV/c²
- S/B = 1/5: M_fit = 114 ± 7 GeV/c²

