
Presentation on theme: "Probabilistic Graphical Models, Bayesian Networks Representation: Reasoning Patterns" (presentation transcript)

1 Probabilistic Graphical Models
Bayesian Networks Representation: Reasoning Patterns

2 The Student Network
Difficulty: P(d0) = 0.6, P(d1) = 0.4
Intelligence: P(i0) = 0.7, P(i1) = 0.3
Grade | Intelligence, Difficulty:
  i0, d0: g1 0.30, g2 0.40, g3 0.30
  i0, d1: g1 0.05, g2 0.25, g3 0.70
  i1, d0: g1 0.90, g2 0.08, g3 0.02
  i1, d1: g1 0.50, g2 0.30, g3 0.20
SAT | Intelligence:
  i0: s0 0.95, s1 0.05
  i1: s0 0.20, s1 0.80
Letter | Grade:
  g1: l0 0.10, l1 0.90
  g2: l0 0.40, l1 0.60
  g3: l0 0.99, l1 0.01
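The CPTs on this slide are small enough that every query in the rest of the deck can be answered by brute-force enumeration of the joint distribution. A minimal sketch in Python (the `joint` and `query` helpers and all variable names are mine, not from the slides):

```python
from itertools import product

# CPDs transcribed from the slide above.
P_D = {'d0': 0.6, 'd1': 0.4}              # Difficulty
P_I = {'i0': 0.7, 'i1': 0.3}              # Intelligence
P_G = {                                   # P(Grade | Intelligence, Difficulty)
    ('i0', 'd0'): {'g1': 0.30, 'g2': 0.40, 'g3': 0.30},
    ('i0', 'd1'): {'g1': 0.05, 'g2': 0.25, 'g3': 0.70},
    ('i1', 'd0'): {'g1': 0.90, 'g2': 0.08, 'g3': 0.02},
    ('i1', 'd1'): {'g1': 0.50, 'g2': 0.30, 'g3': 0.20},
}
P_S = {'i0': {'s0': 0.95, 's1': 0.05},    # P(SAT | Intelligence)
       'i1': {'s0': 0.20, 's1': 0.80}}
P_L = {'g1': {'l0': 0.10, 'l1': 0.90},    # P(Letter | Grade)
       'g2': {'l0': 0.40, 'l1': 0.60},
       'g3': {'l0': 0.99, 'l1': 0.01}}

def joint(d, i, g, s, l):
    """Chain rule for this graph: P(D) P(I) P(G|I,D) P(S|I) P(L|G)."""
    return P_D[d] * P_I[i] * P_G[(i, d)][g] * P_S[i][s] * P_L[g][l]

def query(var, val, evidence=None):
    """P(var = val | evidence), by summing the joint over all assignments."""
    evidence = evidence or {}
    num = den = 0.0
    for d, i, g, s, l in product(('d0', 'd1'), ('i0', 'i1'),
                                 ('g1', 'g2', 'g3'), ('s0', 's1'),
                                 ('l0', 'l1')):
        assign = {'D': d, 'I': i, 'G': g, 'S': s, 'L': l}
        if all(assign[k] == v for k, v in evidence.items()):
            p = joint(d, i, g, s, l)
            den += p
            if assign[var] == val:
                num += p
    return num / den
```

For example, `query('L', 'l1')` gives roughly 0.50, the prior probability of a strong letter quoted on the next slide, and `query('D', 'd1', {'G': 'g3'})` gives roughly 0.63.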

3 Causal Reasoning
P(l1) ≈ 0.5
P(l1 | i0) ≈
P(l1 | i0, d0) ≈
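The slide leaves the two conditional queries blank. They follow from the CPTs on slide 2 by summing out Grade; a quick sketch (variable names are mine):

```python
# P(Grade | i0) marginalizes Difficulty: P(g | i0) = sum_d P(d) P(g | i0, d).
# CPT values from slide 2.
P_g_i0 = {
    'g1': 0.6 * 0.30 + 0.4 * 0.05,   # = 0.20
    'g2': 0.6 * 0.40 + 0.4 * 0.25,   # = 0.34
    'g3': 0.6 * 0.30 + 0.4 * 0.70,   # = 0.46
}
P_l1_given_g = {'g1': 0.90, 'g2': 0.60, 'g3': 0.01}

# Conditioning on the cause (intelligence) lowers P(l1) below 0.5 ...
p_l1_i0 = sum(P_l1_given_g[g] * P_g_i0[g] for g in P_g_i0)   # ≈ 0.39
# ... and also conditioning on an easy class raises it again.
p_l1_i0_d0 = 0.90 * 0.30 + 0.60 * 0.40 + 0.01 * 0.30         # ≈ 0.51
```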

4 Evidential Reasoning
Student gets a C (g3).
P(d1) = 0.4, P(d1 | g3) ≈ 0.63
P(i1) = 0.3, P(i1 | g3) ≈ 0.08
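Both evidential values follow from Bayes' rule, using only the relevant CPT entries from slide 2 (a sketch; the names are mine):

```python
# P(g3) = sum over i, d of P(i) P(d) P(g3 | i, d), values from slide 2.
p_g3 = (0.7 * 0.6 * 0.30 + 0.7 * 0.4 * 0.70 +
        0.3 * 0.6 * 0.02 + 0.3 * 0.4 * 0.20)        # ≈ 0.35

# Reasoning from effect (the C) back to each cause:
p_d1_g3 = (0.7 * 0.4 * 0.70 + 0.3 * 0.4 * 0.20) / p_g3   # ≈ 0.63
p_i1_g3 = (0.3 * 0.6 * 0.02 + 0.3 * 0.4 * 0.20) / p_g3   # ≈ 0.08
```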

5 We find out that class is hard
Student gets a C (g3), and we then learn that the class is hard (d1). What happens to the posterior probability of high intelligence?
Goes up
Goes down
Doesn't change
We can't know

6 Intercausal Reasoning
Student gets a C (g3); class is hard (d1).
P(d1) = 0.4, P(i1) = 0.3
P(d1 | g3) ≈ 0.63, P(i1 | g3) ≈ 0.08
P(i1 | g3, d1) ≈ 0.11
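The intercausal value can be computed directly as P(i1, g3, d1) / P(g3, d1); the hard class partially explains the C, so the posterior on high intelligence recovers from 0.08 to 0.11. A sketch with CPT values from slide 2 (names are mine):

```python
# P(g3, d1) sums over Intelligence: P(i) P(d1) P(g3 | i, d1).
p_g3_d1 = 0.7 * 0.4 * 0.70 + 0.3 * 0.4 * 0.20   # = 0.22

# "Explaining away": condition on the other cause as well.
p_i1_g3_d1 = (0.3 * 0.4 * 0.20) / p_g3_d1       # ≈ 0.11
```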

7 Intercausal Reasoning II
Student gets a B (g2); class is hard (d1).
P(i1) = 0.3
P(i1 | g2) ≈ 0.175
P(i1 | g2, d1) ≈ 0.34
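The same computation for a B shows a much larger explaining-away effect; with CPT values from slide 2 (a sketch, names are mine):

```python
# P(g2) = sum over i, d of P(i) P(d) P(g2 | i, d).
p_g2 = (0.7 * 0.6 * 0.40 + 0.7 * 0.4 * 0.25 +
        0.3 * 0.6 * 0.08 + 0.3 * 0.4 * 0.30)                      # ≈ 0.288

p_i1_g2 = (0.3 * 0.6 * 0.08 + 0.3 * 0.4 * 0.30) / p_g2            # ≈ 0.175

# Restrict to d1: a B in a hard class is much stronger evidence of i1.
p_i1_g2_d1 = (0.3 * 0.4 * 0.30) / (0.7 * 0.4 * 0.25 +
                                   0.3 * 0.4 * 0.30)              # ≈ 0.34
```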


9 Student Aces the SAT
Student gets a C (g3) and aces the SAT (s1). What happens to the posterior probability that the class is hard?
Goes up
Goes down
Doesn't change
We can't know

10 Multiple Evidence
Student gets a C (g3) and aces the SAT (s1).
P(d1) = 0.4, P(i1) = 0.3
P(d1 | g3) ≈ 0.63, P(i1 | g3) ≈ 0.08
P(d1 | g3, s1) ≈ 0.76
P(i1 | g3, s1) ≈ 0.58
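Combining both observations only requires weighting each (Intelligence, Difficulty) pair by how well it explains the evidence and then normalizing; a sketch with CPT values from slide 2 (names are mine):

```python
# Unnormalized weights w(i, d) = P(i) P(d) P(g3 | i, d) P(s1 | i).
w = {('i0', 'd0'): 0.7 * 0.6 * 0.30 * 0.05,
     ('i0', 'd1'): 0.7 * 0.4 * 0.70 * 0.05,
     ('i1', 'd0'): 0.3 * 0.6 * 0.02 * 0.80,
     ('i1', 'd1'): 0.3 * 0.4 * 0.20 * 0.80}
z = sum(w.values())

# The SAT evidence shifts mass toward i1, which in turn pushes up d1.
p_d1_g3_s1 = (w[('i0', 'd1')] + w[('i1', 'd1')]) / z   # ≈ 0.76
p_i1_g3_s1 = (w[('i1', 'd0')] + w[('i1', 'd1')]) / z   # ≈ 0.58
```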

11 END


17 Suppose θ is at a local minimum of a function. What will one iteration of gradient descent do?
Leave θ unchanged.
Change θ in a random direction.
Move θ towards the global minimum of J(θ).
Decrease θ.
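At a local minimum the derivative of J is zero, so the update θ := θ − α·dJ/dθ leaves θ unchanged (the first option). A minimal sketch, using a hypothetical J(θ) = (θ − 2)² whose minimum sits at θ = 2:

```python
def dJ(theta):
    # Derivative of the example function J(theta) = (theta - 2)**2.
    return 2.0 * (theta - 2.0)

theta, alpha = 2.0, 0.1                    # theta sits at the local minimum
theta_next = theta - alpha * dJ(theta)     # gradient is 0, so no movement
```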

18 Consider the weight update:
(the update formula and the answer options were images and are not preserved)
Which of these is a correct vectorized implementation?
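Since the slide's formula and options are not preserved, here is only a sketch of what a correct vectorized, simultaneous update typically looks like for linear-regression gradient descent (all data, shapes, and names below are my assumptions, not the slide's):

```python
import numpy as np

# Hypothetical example data: X is (m, n) with a bias column, y is (m,).
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)
alpha, m = 0.1, len(y)

# One simultaneous update of every component of theta:
#   theta := theta - (alpha / m) * X^T (X theta - y)
theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
```

The key property of a correct vectorized form is that all components of theta are updated from the same old theta in one expression, rather than one at a time in a loop.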

19 Fig. A corresponds to α = 0.01, Fig. B to α = 0.1, Fig. C to α = 1. (The figures themselves are not preserved.)
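With the figures missing, the qualitative effect of the three learning rates can be reproduced on a toy objective; a sketch (my own example, not the slide's) using J(θ) = θ², whose gradient is 2θ:

```python
def run(alpha, steps=20, theta=1.0):
    # Gradient descent on J(theta) = theta**2 (gradient: 2 * theta).
    for _ in range(steps):
        theta -= alpha * 2.0 * theta
    return abs(theta)

slow = run(0.01)   # alpha too small: converging, but still far from 0
good = run(0.1)    # moderate alpha: converges much faster
bad  = run(1.0)    # alpha too large: theta flips sign each step, no progress
```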


