Research at the Decision Making Lab
Fabio Cozman, Universidade de São Paulo
Decision Making Lab (2002)
Research tree
– Robotics (a bit)
– Bayes nets: algorithms (anytime/anyspace inference for embedded systems, MCMC algorithms, inference & testing), applications (medical decisions, classification)
– Sets of probabilities: independence, applications (MDPs, robustness analysis, auctions)
Some (bio)robotics
Bayesian networks
Decisions in medical domains (with the University Hospital)
Idea: to improve decisions at medical posts in poor urban areas
We are building networks that represent cardiac arrest, which can be caused by stress, cardiac problems, respiratory problems, etc.
– Supported by FAPESP
The HU-network
A better interface for teaching
Embedded Bayesian networks
Challenge: to implement inference algorithms compactly and efficiently
Real challenge: to develop anytime, anyspace inference algorithms
Idea: decompose networks and apply several algorithms (UAI 2002 workshop on RT)
– Supported by HP Labs
Decomposing networks
How to decompose a network and assign algorithms to its pieces, so as to meet space and time constraints with reasonable accuracy? (A toy assignment strategy is sketched below.)
Application: Failure analysis in car-wash systems
The car-wash network
Generating random networks
The problem is easy to state but hard to solve: critical properties of DAGs are not known
Method based on MCMC simulation, with constraints on induced width and degree (a toy version of such a chain is sketched below)
– Supported by FAPESP
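A toy version of such a chain (not the lab's algorithm; in particular it bounds only node degree, not induced width): propose toggling a random arc and reject proposals that create a cycle or violate the degree bound. With a symmetric proposal and a uniform target, every legal move is accepted.

```python
import random

def has_cycle(adj, n):
    """Detect a directed cycle with iterative DFS (0 = new, 1 = open, 2 = done)."""
    color = [0] * n
    for start in range(n):
        if color[start]:
            continue
        stack = [(start, iter(adj[start]))]
        color[start] = 1
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = 2
                stack.pop()
            elif color[nxt] == 1:
                return True
            elif color[nxt] == 0:
                color[nxt] = 1
                stack.append((nxt, iter(adj[nxt])))
    return False

def random_dag(n, max_degree, steps=10_000, seed=0):
    """Chain over DAGs on n nodes: toggle a random arc, rejecting any proposal
    that creates a cycle or pushes a node past max_degree. With a symmetric
    proposal and a uniform target, every legal move is accepted (Metropolis)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    degree = [0] * n                        # in-degree + out-degree of each node
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)      # ordered pair of distinct nodes
        if j in adj[i]:                     # propose removing arc i -> j
            adj[i].remove(j)
            degree[i] -= 1
            degree[j] -= 1
        else:                               # propose adding arc i -> j
            if degree[i] >= max_degree or degree[j] >= max_degree:
                continue                    # reject: degree bound
            adj[i].add(j)
            if has_cycle(adj, n):
                adj[i].remove(j)            # reject: cycle created
            else:
                degree[i] += 1
                degree[j] += 1
    return adj

print(random_dag(6, max_degree=3, steps=2000))
```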
Research tree (again)
– Biorobotics (a bit of it)
– Bayes nets: algorithms (anytime/anyspace inference for embedded systems, MCMC algorithms, inference & testing), applications (medical decisions, classification)
– Sets of probabilities: independence, applications (MDPs, robustness analysis, auctions)
Bayesian network classifiers
Goal: to use probabilistic models for classification, that is, to “learn” classifiers using labeled and unlabeled data
Work with Ira Cohen, Alex Bronstein, and Marsha Duro (UIUC and HP Labs)
Using Bayesian networks to learn from labeled and unlabeled data
Suppose we want to classify events based on observations; we have recorded data that are sometimes labeled and sometimes unlabeled
What is the value of unlabeled data?
The Naïve Bayes classifier
A Bayesian-network-like classifier with excellent credentials
Use Bayes rule to get the classification:
$p(\mathrm{Class} \mid a_1, \dots, a_N) \propto p(\mathrm{Class}) \prod_{i=1}^{N} p(a_i \mid \mathrm{Class})$
Structure: the Class node is the sole parent of Attribute 1, Attribute 2, …, Attribute N (a sketch of the rule follows)
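A minimal sketch of this rule for discrete attributes, with Laplace smoothing; the toy rows mirror the database shown on a later slide, and all helper names are invented:

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels, alpha=1.0):
    """Estimate p(Class) and p(Attribute_i | Class) from discrete data,
    with Laplace smoothing, and return a predict function."""
    class_counts = Counter(labels)
    n_attrs = len(rows[0])
    # counts[c][i][v]: number of class-c rows whose attribute i equals v
    counts = defaultdict(lambda: [Counter() for _ in range(n_attrs)])
    values = [set() for _ in range(n_attrs)]
    for row, c in zip(rows, labels):
        for i, v in enumerate(row):
            counts[c][i][v] += 1
            values[i].add(v)

    def predict(row):
        best, best_score = None, float("-inf")
        for c, n_c in class_counts.items():
            # log p(c) + sum_i log p(attribute_i | c), i.e. the rule above
            score = math.log(n_c / len(labels))
            for i, v in enumerate(row):
                score += math.log((counts[c][i][v] + alpha)
                                  / (n_c + alpha * len(values[i])))
            if score > best_score:
                best, best_score = c, score
        return best
    return predict

rows = [("baseball", "hamburger"), ("soccer", "rice and beans"), ("golf", "apple pie")]
labels = ["American", "Brazilian", "American"]
predict = train_naive_bayes(rows, labels)
print(predict(("golf", "rice and beans")))  # -> 'American' with these counts
```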
The TAN classifier
Structure: as in Naïve Bayes, Class points to every attribute X_1, …, X_N; in addition, the attributes themselves are connected by a tree (the usual tree-construction step is sketched below)
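The standard TAN construction (Friedman, Geiger, and Goldszmidt) chooses the attribute tree as a maximum-weight spanning tree under empirical conditional mutual information; a compressed sketch of that step, with invented names (directing the edges from a root and adding Class as a parent are omitted):

```python
import math
from collections import Counter
from itertools import combinations

def cond_mutual_info(data, labels, i, j):
    """Empirical I(X_i; X_j | Class) for discrete data."""
    n = len(data)
    c_xyz = Counter((row[i], row[j], c) for row, c in zip(data, labels))
    c_xz = Counter((row[i], c) for row, c in zip(data, labels))
    c_yz = Counter((row[j], c) for row, c in zip(data, labels))
    c_z = Counter(labels)
    mi = 0.0
    for (x, y, z), n_xyz in c_xyz.items():
        # p(x,y,z) * log[ p(x,y,z) p(z) / (p(x,z) p(y,z)) ], written with counts
        mi += (n_xyz / n) * math.log(n_xyz * c_z[z] / (c_xz[(x, z)] * c_yz[(y, z)]))
    return mi

def tan_tree(data, labels):
    """Maximum-weight spanning tree over attributes (Kruskal with union-find),
    weighted by conditional mutual information given the class."""
    n_attrs = len(data[0])
    edges = sorted(((cond_mutual_info(data, labels, i, j), i, j)
                    for i, j in combinations(range(n_attrs), 2)), reverse=True)
    parent = list(range(n_attrs))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

data = [("sunny", "hot", "no"), ("rain", "cool", "yes"), ("rain", "mild", "yes")]
labels = ["A", "B", "B"]
print(tan_tree(data, labels))
```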
Now, let's consider unlabeled data
Our database:
– American | baseball | hamburger
– Brazilian | soccer | rice and beans
– American | golf | apple pie
– ? | saloon soccer | rice and beans
– ? | golf | rice and beans
Question: how can we use the unlabeled data? (One standard answer, EM, is sketched below.)
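The slides do not name an estimator at this point, but the textbook answer is EM: treat the missing labels as hidden variables, fill them in with posterior probabilities (E-step), and re-estimate the Naïve Bayes parameters from the resulting fractional counts (M-step). A self-contained sketch on the toy database above; the smoothing constant and iteration count are arbitrary:

```python
import math
from collections import defaultdict

def em_naive_bayes(labeled, unlabeled, classes, n_iter=20, alpha=1.0):
    """EM for a Naïve Bayes model over discrete attributes: unlabeled rows
    carry soft class responsibilities, re-estimated at every iteration."""
    n_attrs = len(labeled[0][0])
    values = [set() for _ in range(n_attrs)]
    for row in [r for r, _ in labeled] + unlabeled:
        for i, v in enumerate(row):
            values[i].add(v)
    resp = [{c: 1.0 / len(classes) for c in classes} for _ in unlabeled]

    for _ in range(n_iter):
        # M-step: class priors and p(attribute | class) from fractional counts.
        prior = dict.fromkeys(classes, 0.0)
        counts = {c: [defaultdict(float) for _ in range(n_attrs)] for c in classes}
        for row, c in labeled:
            prior[c] += 1.0
            for i, v in enumerate(row):
                counts[c][i][v] += 1.0
        for row, r in zip(unlabeled, resp):
            for c, w in r.items():
                prior[c] += w
                for i, v in enumerate(row):
                    counts[c][i][v] += w
        total = sum(prior.values())

        def posterior(row):
            scores = {}
            for c in classes:
                s = math.log((prior[c] + alpha) / (total + alpha * len(classes)))
                for i, v in enumerate(row):
                    s += math.log((counts[c][i][v] + alpha)
                                  / (prior[c] + alpha * len(values[i])))
                scores[c] = s
            m = max(scores.values())
            z = sum(math.exp(v - m) for v in scores.values())
            return {c: math.exp(s - m) / z for c, s in scores.items()}

        # E-step: recompute responsibilities for the unlabeled rows.
        resp = [posterior(row) for row in unlabeled]
    return resp

labeled = [(("baseball", "hamburger"), "American"),
           (("soccer", "rice and beans"), "Brazilian"),
           (("golf", "apple pie"), "American")]
unlabeled = [("saloon soccer", "rice and beans"), ("golf", "rice and beans")]
print(em_naive_bayes(labeled, unlabeled, classes=["American", "Brazilian"]))
```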
Unlabeled data can help…
Learning a Naïve Bayes classifier on data generated from a Naïve Bayes model (10 attributes)
… but unlabeled data may degrade performance!
Surprising fact: more data may not help; more data may even hurt
Some math: asymptotic analysis
Asymptotic bias: when the modeling assumptions are incorrect, estimates that use unlabeled data can converge to a more biased limit than estimates based on labeled data alone
Variance decreases with more data, labeled or unlabeled
A very simple example
Consider the following situation: in the “real” model, X and Y are dependent given Class; in the “assumed” (Naïve Bayes) model, X and Y are independent given Class
X and Y are Gaussian given Class
Effect of unlabeled data – a different perspective
Searching for structures
The previous tests suggest that we should pay attention to modeling assumptions when dealing with unlabeled data
In the context of Bayesian network classifiers, we must search for structures
This is not easy; worse, existing algorithms do not focus on classification
Stochastic Structure Search (SSS)
Idea: search for structures using classification error directly
Hard: the search space is too messy
Solution: Metropolis-Hastings sampling with an underlying measure proportional to 1/p_error (a skeleton follows)
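A skeleton of such a sampler: with a symmetric proposal and a target proportional to 1/p_error, a move from error e_cur to e_new is accepted with probability min(1, e_cur/e_new). The structure encoding, proposal, and error function below are placeholders, not the lab's:

```python
import random

def sss(initial, propose, error, steps=1000, seed=0):
    """Metropolis-Hastings over classifier structures, target proportional to
    1/error(structure); `propose` must be symmetric. Tracks the best visited."""
    rng = random.Random(seed)
    current, e_cur = initial, error(initial)
    best, e_best = current, e_cur
    for _ in range(steps):
        candidate = propose(current, rng)
        e_new = error(candidate)
        # Target 1/error gives acceptance ratio (1/e_new)/(1/e_cur) = e_cur/e_new.
        if rng.random() < min(1.0, e_cur / max(e_new, 1e-12)):
            current, e_cur = candidate, e_new
            if e_cur < e_best:
                best, e_best = current, e_cur
    return best, e_best

# Toy usage: a "structure" is a frozenset of edge slots, and the error
# function is invented purely so the sketch runs end to end.
edges = [(i, j) for i in range(4) for j in range(i + 1, 4)]
def propose(s, rng):
    return s ^ {rng.choice(edges)}          # toggle one random edge
def error(s):
    return 0.05 + 0.1 * abs(len(s) - 3)     # fake error, minimized at 3 edges
print(sss(frozenset(), propose, error, steps=500))
```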
Some classification results
Some words on unlabeled data
Unlabeled data can improve performance, but can also degrade it; this is a really hard problem!
Current understanding of the problem is shaky
– People tend to think that outliers, or mismatches between labeled and unlabeled data, cause the problem
Research tree (once again)
– Biorobotics (a bit of it)
– Bayes nets: algorithms (anytime/anyspace inference for embedded systems, MCMC algorithms, inference & testing), applications (medical decisions, classification)
– Sets of probabilities: independence, applications (MDPs, robustness analysis, auctions)
Sets of probabilities
Instead of “the probability of rain is 0.2,” say “the probability of rain is in [0.1, 0.3]”
Instead of “the expected value of the stock is 10,” admit “the expected value of the stock is in [0, 1000]”
An example
Consider distributions over three states, with probabilities p(θ1), p(θ2), p(θ3)
A set of probabilities is then a region inside the probability simplex (lower/upper expectations over such a set are sketched below)
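For a finitely generated set of probabilities (a polytope of distributions), an expectation is linear in the distribution, so its extremes are attained at vertices; lower and upper expected values then reduce to enumerating extreme points. A minimal sketch with invented numbers:

```python
def expectation(p, f):
    """Expected value of f under distribution p (both keyed by state)."""
    return sum(p[s] * f[s] for s in p)

def bounds(vertices, f):
    """Lower/upper expectation over a credal set given by its extreme points."""
    exps = [expectation(p, f) for p in vertices]
    return min(exps), max(exps)

# Invented example: three states and two extreme distributions.
vertices = [
    {"theta1": 0.1, "theta2": 0.3, "theta3": 0.6},
    {"theta1": 0.3, "theta2": 0.3, "theta3": 0.4},
]
f = {"theta1": 0.0, "theta2": 5.0, "theta3": 10.0}
print(bounds(vertices, f))  # -> (5.5, 7.5)
```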
Why?
More realistic, and quite expressive as a representation language
An excellent tool for:
– robustness/sensitivity analysis
– modeling incomplete beliefs (probabilistic logic)
– group decision-making
– analysis of economic interactions, for example to study arbitrage and to design auctions
What we have been doing
Trying to formalize and apply “interval” reasoning, particularly independence
Building algorithms for the manipulation of these intervals and sets
– To deal with independence and networks
– JavaBayes is the only available software that can deal with this (to some extent!)
Credal networks
Using graphical models to represent sets of joint probabilities
Question: what do the networks represent?
Several open questions, and a need for algorithms
Example network (the classic “dog out” story): Family In? points to Lights On? and Dog Out?; Dog Sick? points to Dog Out?; Dog Out? points to Dog Barking? (a miniature numeric illustration follows)
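As a miniature illustration (numbers invented): a two-node fragment of such a network with interval probabilities. Because p(Dog Out = yes) is multilinear in the local parameters, its bounds over a product of intervals are attained at extreme points, so enumeration suffices here:

```python
from itertools import product

# Hypothetical two-node fragment with interval probabilities; every number
# below is invented for illustration.
p_family_in = [0.6, 0.8]              # bounds on p(Family In = yes)
p_out_given = {True: [0.1, 0.2],      # bounds on p(Dog Out = yes | Family In = yes)
               False: [0.5, 0.7]}     # bounds on p(Dog Out = yes | Family In = no)

# p(Dog Out = yes) = p(FI) * p(DO|FI) + (1 - p(FI)) * p(DO|not FI) is
# multilinear, so its extremes lie at corner points: enumerate them all.
values = [pf * py + (1 - pf) * pn
          for pf, py, pn in product(p_family_in, p_out_given[True], p_out_given[False])]
print(min(values), max(values))       # -> about (0.18, 0.40)
```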
Concluding
To summarize: we want to understand how to use probabilities in AI, and then we add a bit of robotics
Support from FAPESP and HP Labs has been generous
Visit the lab on your next trip to São Paulo