1
Bayesian Framework EE 645 ZHAO XIN
2
A Brief Introduction to the Bayesian Framework
The Bayesian Philosophy
Bayesian Neural Network
Some Discussion on Priors
3
Bayes' Rule: Likelihood, Prior Distribution, Normalizing Constant
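A hedged sketch of what this slide presumably shows, with w the model parameters (weights), D the data, and H the model:

p(w | D, H) = p(D | w, H) p(w | H) / p(D | H)

Here p(D | w, H) is the likelihood, p(w | H) the prior distribution, and p(D | H) the normalizing constant (the evidence).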
4
Bayesian Prediction
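A minimal sketch of Bayesian prediction for a new input x* (assumed notation): rather than plugging in a single weight estimate, the predictive density is averaged over the posterior,

p(y* | x*, D, H) = \int p(y* | x*, w, H) p(w | D, H) dw.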
6
Hierarchical Model
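A hedged sketch of a hierarchical (two-stage) prior, with a hyper-parameter \alpha controlling the weight prior:

p(w, \alpha | D) \propto p(D | w) p(w | \alpha) p(\alpha),

so the effective prior on the weights is p(w) = \int p(w | \alpha) p(\alpha) d\alpha, and inference proceeds level by level.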
7
An Example Bayesian Network
9
Some Discussion on Priors
Priors Converging to a Gaussian Process if the Number of Hidden Units Is Infinite
Priors Leading to Smooth and Brownian Functions
Fractional Brownian Priors
Priors Converging to a Non-Gaussian Stable Process
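A hedged sketch of the Gaussian-process limit (Neal, 1996): for a one-hidden-layer network

f(x) = b + \sum_{j=1}^{H} v_j h_j(x),   with v_j ~ N(0, \sigma_v^2 / H),

the central limit theorem makes f converge to a Gaussian process as the number of hidden units H goes to infinity; heavier-tailed weight priors instead yield non-Gaussian stable processes.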
10
Bayesian Framework for the LS RBF-Kernel SVM MUD
Basic Problem and Solution
Probabilistic Interpretation of the LS SVM
First Level Inference
Second Level Inference
Third Level Inference
Basic MUD Model
Results and Discussion
Summary
11
Basic Problem for LS SVM
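A sketch of the standard LS-SVM primal problem under the usual notation (the Bayesian treatment presumably writes the objective as \mu E_W + \zeta E_D with \gamma = \zeta / \mu):

min_{w, b, e}  (1/2) w^T w + (\gamma / 2) \sum_{i=1}^{N} e_i^2
subject to  y_i [ w^T \varphi(x_i) + b ] = 1 - e_i,   i = 1, ..., N.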
12
Basic Solution for LS SVM
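From the KKT conditions the problem presumably reduces, as in the standard LS-SVM derivation, to one linear system in the bias b and the dual variables \alpha (a sketch, with \Omega_{ij} = y_i y_j K(x_i, x_j)):

\begin{bmatrix} 0 & y^{\top} \\ y & \Omega + \gamma^{-1} I \end{bmatrix}
\begin{bmatrix} b \\ \alpha \end{bmatrix} =
\begin{bmatrix} 0 \\ 1_N \end{bmatrix}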
13
The Formula for SVM
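Presumably the resulting classifier; a sketch of the standard (LS-)SVM decision function with kernel K:

y(x) = sign( \sum_{i=1}^{N} \alpha_i y_i K(x, x_i) + b ).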
14
First Level Inference
15
Some Assumptions at This Level
Separable Gaussian prior for the conditional P(w, b)
Independent data points
Gaussian-distributed errors
Variance of b goes to infinity
(These assumptions combine as sketched below.)
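Under these assumptions, a hedged sketch of the first-level posterior (\mu and \zeta denote the prior and noise hyper-parameters, H the model):

p(w, b | D, \mu, \zeta, H) \propto p(D | w, b, \zeta, H) p(w, b | \mu, H) \propto exp( -\mu E_W - \zeta E_D ),

with E_W = (1/2) w^T w and E_D = (1/2) \sum_i e_i^2, so the posterior mode coincides with the LS-SVM solution for \gamma = \zeta / \mu.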
17
Result of the First Level
18
Conditional Distribution of Weight w and Bias b
19
Unbalanced Case at the First Level
If the means of the +1 class and the -1 class do not project perfectly onto +1 and -1, a bias term appears. We introduce two new random variables as follows.
21
Final Solution for the First Level
22
Second Level Inference
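At the second level, Bayes' rule is presumably applied to the hyper-parameters themselves; a hedged sketch:

p(\mu, \zeta | D, H) \propto p(D | \mu, \zeta, H) p(\mu, \zeta | H),

where the evidence p(D | \mu, \zeta, H) is the normalizing constant of the first level, so maximizing it selects the regularization constants.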
23
Result of Second Level Inference
25
Final Solution for the Second Level
26
Third Level Inference
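At the third level the candidate models H (for instance, different RBF kernel widths \sigma) are compared through their evidence; a hedged sketch:

p(H | D) \propto p(D | H) p(H),   with   p(D | H) = \int p(D | \mu, \zeta, H) p(\mu, \zeta | H) d\mu d\zeta;

with a flat prior p(H), ranking models by p(H | D) is the same as ranking them by the evidence p(D | H).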
27
Some Assumptions at This Level
28
Final Solution for the Third Level
29
Some Comments on This Level
For a Gaussian-kernel machine, the variance (width) of the Gaussian function can represent the model H.
It is impossible to evaluate all possible models.
Luckily, in general, as in the Gaussian-kernel SVM, the classifier's performance is quite smooth with respect to variations of the model parameter. Therefore we can simply sample the model in the region of interest, as in the sketch below.
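A hypothetical sketch of that sampling idea (not the author's code): because performance is smooth in the RBF width sigma, only a coarse grid of widths in the region of interest is evaluated and the best one kept; here validation accuracy stands in for the evidence that the paper computes analytically.

# Hypothetical sketch: coarse sampling of the RBF width sigma for an LS-SVM,
# keeping the value with the best validation accuracy (a stand-in for the
# evidence used at the third level of inference).
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian kernel matrix K[i, j] = exp(-||x1_i - x2_j||^2 / (2 sigma^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, sigma, gamma):
    # Solve the LS-SVM dual system [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1].
    N = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(N))))
    return sol[0], sol[1:]          # bias b, dual variables alpha

def lssvm_predict(X, y, b, alpha, sigma, Xnew):
    # Decision function y(x) = sign(sum_i alpha_i y_i K(x, x_i) + b).
    return np.sign(rbf_kernel(Xnew, X, sigma) @ (alpha * y) + b)

def pick_sigma(Xtr, ytr, Xval, yval, sigmas, gamma=10.0):
    # Sample the model parameter on a coarse grid in the region of interest.
    scores = []
    for s in sigmas:
        b, alpha = lssvm_train(Xtr, ytr, s, gamma)
        scores.append(np.mean(lssvm_predict(Xtr, ytr, b, alpha, s, Xval) == yval))
    return sigmas[int(np.argmax(scores))], scores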
30
A Synchronous CDMA Transmitter
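A hedged sketch of the standard synchronous CDMA model presumably behind this slide, with K users, amplitudes A_k, bits b_k in {-1, +1}, signature waveforms s_k(t), and additive white Gaussian noise n(t):

r(t) = \sum_{k=1}^{K} A_k b_k s_k(t) + n(t),   0 <= t <= T.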
31
The LS SVM Receiver Diagram
32
Results and Discussions
33
First Inference
34
Second Inference
35
Third Inference (Plot 1)
37
A Sample of the Chosen Parameters
38
Detector Performance
39
Some Discussion on This Detector
The first-level inference improves the performance of the LS SVM detector, especially in the high-SNR region, by accounting for the bias term.
The LS SVM detector is very smooth with respect to variations of the hyper-parameters, which means an adaptive LS SVM should work reasonably well if the channel properties do not vary quickly.
The computations for the 2nd- and 3rd-level inferences are very complex, so it is not worthwhile to carry them out exactly here; we can use approximation formulas instead.
40
Summary of the Bayesian Network Approach
Pick a basic neural network.
Choose the priors properly (physically sensible and convenient for theoretical derivation).
Find a reasonable hierarchical framework (a three-level inference framework is very typical), apply Bayes' rule at each level, and find helpful assumptions to simplify the problem.
41
Some Comments on the Bayesian Framework
It helps us understand a neural network model physically.
It gives a theoretical way to optimize the parameters and, more importantly, the hyper-parameters, which can sometimes be impossible to set otherwise.
It can even complement existing methods for certain problems.
42
References
T. Van Gestel and J. A. K. Suykens, A Bayesian Framework for Least Squares Support Vector Machine Classifiers.
N. Cristianini and J. Shawe-Taylor, An Introduction to Support Vector Machines, 2000.
R. M. Neal, Bayesian Learning for Neural Networks, 1996.
S. Verdú, Multiuser Detection.