CSCE 190 Computing in the Modern World


CSCE 190 Computing in the Modern World
Judea Pearl: 2011 ACM A.M. Turing Award Winner
November 27, 2018
Marco Valtorta (mgv@cse.sc.edu)
June 2012 CACM Cover Page

Professor Judea Pearl

March 15, 2012: "ACM today named Judea Pearl the winner of the 2011 ACM A.M. Turing Award for pioneering developments in probabilistic and causal reasoning and their application to a broad range of problems and challenges."

Source: http://www.acm.org/press-room/news-releases/2012/turing-award-11/

Biography

Judea Pearl is a professor of computer science at UCLA, where he was director of the Cognitive Systems Laboratory. Before joining UCLA in 1970, he was at RCA Research Laboratories, working on superconductive parametric and storage devices. Previously, he was engaged in advanced memory systems at Electronic Memories, Inc.

Pearl is a graduate of the Technion, the Israel Institute of Technology, with a Bachelor of Science degree in Electrical Engineering. In 1965, he received a Master's degree in Physics from Rutgers University, and in the same year was awarded a Ph.D. degree in Electrical Engineering from the Polytechnic Institute of Brooklyn.

Among his many awards, Pearl is the recipient of the 2003 Allen Newell Award from ACM and AAAI (the Association for the Advancement of Artificial Intelligence). His groundbreaking book on causality won the 2001 Lakatos Award from the London School of Economics and Political Science "for an outstanding significant contribution to the philosophy of science." Pearl is a member of the National Academy of Engineering, a Fellow of AAAI, and a Fellow of the Institute of Electrical and Electronics Engineers (IEEE). He is President of the Daniel Pearl Foundation, named after his son.

Source: http://www.acm.org/press-room/news-releases/2012/turing-award-11/
Also: http://bayes.cs.ucla.edu/stat_bio.html

iClicker

Judea Pearl is a computer scientist whose research area is:
A. Mobile Networks
B. Artificial Intelligence
C. Programming Languages
D. Operating Systems
E. Databases

Four Books

- Heuristics, Addison-Wesley, 1984
- Probabilistic Reasoning in Intelligent Systems, Morgan Kaufmann, 1988
- Causality: Models, Reasoning, and Inference, Cambridge University Press, 2000 (2nd edition, 2009)
- The Book of Why: The New Science of Cause and Effect, Basic Books, 2018

Heuristics

Pearl's early work on heuristic search, a trial-and-error method of problem solving, propelled the evolution of AI into a mature field with sound scientific foundations. He challenged and ultimately overturned the prevailing approach to reasoning embodied in expert systems and other technologies developed in AI. In his 1984 book Heuristics: Intelligent Search Strategies for Computer Problem Solving, he set a new standard under which algorithms, even heuristic ones, had to be analyzed rigorously in terms of their correctness and performance. He subsequently devised ways of programming machines to discover their own heuristics.

Source: http://www.acm.org/press-room/news-releases/2012/turing-award-11/

Bayesian Networks

Pearl went on to develop the theoretical foundations for reasoning under uncertainty using a "Bayesian network," a term he coined in 1985, named for the 18th-century British mathematician Thomas Bayes. An extremely general and flexible modeling tool, a Bayesian network mimics the neural activities of the human brain, constantly exchanging messages without benefit of a supervisor. These networks revolutionized AI by providing a compact way of representing probability distributions and reasoning about them. Pearl showed how Bayesian networks and their belief-updating algorithms provide an intuitive, elegant characterization of complex probability distributions and of the way they track new evidence. This development was a critical step toward achieving human-level AI that can interact with the physical world.

Icy Roads, Model Source: Judea Pearl via Finn V. Jensen; Hugin Screen Shot

Icy Roads, Example of Use Source: Judea Pearl via Finn V. Jensen; Hugin Screen Shots
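The two Icy Roads slides are Hugin screenshots, so the numbers are not reproduced here. To make the reasoning pattern concrete, below is a minimal Python sketch of the network as it appears in Jensen's textbook presentation of Pearl's example (Icy → Watson, Icy → Holmes), with made-up CPT values rather than the values on the slides, and with brute-force enumeration rather than the junction-tree propagation a tool such as Hugin uses. Learning that Watson crashed raises the belief that the roads are icy and hence that Holmes crashed too; once we learn the roads are not icy, the two crashes become independent again.

```python
# Minimal sketch of the Icy Roads Bayesian network (structure from Jensen's
# textbook: Icy -> Watson, Icy -> Holmes).  All CPT numbers below are
# illustrative assumptions, not the values shown on the Hugin screenshots.

P_I = {True: 0.7, False: 0.3}   # P(Icy)
P_W = {True: 0.8, False: 0.1}   # P(Watson crashes | Icy = i), keyed by i
P_H = {True: 0.8, False: 0.1}   # P(Holmes crashes | Icy = i), keyed by i

def joint(i, w, h):
    """Chain rule for this network: P(I) * P(W | I) * P(H | I)."""
    pw = P_W[i] if w else 1 - P_W[i]
    ph = P_H[i] if h else 1 - P_H[i]
    return P_I[i] * pw * ph

def prob(query, evidence):
    """P(query | evidence) by brute-force enumeration over (I, W, H)."""
    num = den = 0.0
    for i in (True, False):
        for w in (True, False):
            for h in (True, False):
                world = {"I": i, "W": w, "H": h}
                if all(world[k] == v for k, v in evidence.items()):
                    p = joint(i, w, h)
                    den += p
                    if all(world[k] == v for k, v in query.items()):
                        num += p
    return num / den

print(prob({"H": True}, {}))                        # prior belief that Holmes crashes
print(prob({"H": True}, {"W": True}))               # rises after hearing Watson crashed
print(prob({"H": True}, {"W": True, "I": False}))   # drops once we know the roads are not icy
```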

Wet Lawn, Model Judea Pearl via Finn V. Jensen; Hugin Screen Shot

Wet Lawn, Example of Use Judea Pearl via Finn V. Jensen; Hugin Screen Shot

Causality

In addition to their impact on probabilistic reasoning, Bayesian networks completely changed the way causality is treated in the empirical sciences, which are based on experiment and observation. Pearl's work on causality is crucial to the understanding of both daily activity and scientific discovery. It has enabled scientists across many disciplines to articulate causal statements formally, combine them with data, and evaluate them rigorously. His 2000 book Causality: Models, Reasoning, and Inference is among the most influential works in shaping the theory and practice of knowledge-based systems. His contributions to causal reasoning have had a major impact on the way causality is understood and measured in many scientific disciplines, most notably philosophy, psychology, statistics, econometrics, epidemiology, and social science.

Source: http://www.acm.org/press-room/news-releases/2012/turing-award-11/

Causal Bayesian Networks

- Causal Bayesian networks are Bayesian networks: each variable in the graph is independent of all its non-descendants given its parents.
- Causal Bayesian networks are causal: the directed edges in the graph represent causal influences between the corresponding variables.

(Speaker notes: explain the graph, i.e., variables in one-to-one correspondence with nodes, conditional independences encoded by the graph, factoring, and the chain rule for Bayesian networks. Genotype is hidden (unmeasured). Can we still recover the sufficient statistics? Can we recover the causal effect of SerumSelenium on KeshanDisease? Read the first paragraph of the JCI paper.)

A directed acyclic graph (DAG) can represent the factorization of a joint distribution of a set of random variables. To be more precise, a Bayesian network is a pair (G, P), where G is a DAG and P is a joint probability distribution over variables in one-to-one correspondence with the nodes of G, with the property that each variable is conditionally independent of its non-descendants given its parents. It follows from this definition that the joint probability P factors according to G, as the product of the conditional probabilities of each node given its parents. Thus a discrete Bayesian network is fully specified by a DAG and a set of conditional probability tables, one for each node given its parents [1, 2].
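As a small illustration of the chain-rule factorization just stated, the Python sketch below encodes a hypothetical three-node chain U → X → Y (not a network from the slides) as a DAG plus CPTs and computes the probability of a full assignment as the product of each node's conditional probability given its parents; the CPT numbers are placeholders.

```python
from itertools import product

# Hypothetical three-node network U -> X -> Y, used only to exercise the formula.
parents = {"U": [], "X": ["U"], "Y": ["X"]}

# CPTs indexed by (value, tuple of parent values); placeholder numbers.
cpt = {
    "U": {(0, ()): 0.5, (1, ()): 0.5},
    "X": {(0, (0,)): 0.6, (1, (0,)): 0.4, (0, (1,)): 0.4, (1, (1,)): 0.6},
    "Y": {(0, (0,)): 0.7, (1, (0,)): 0.3, (0, (1,)): 0.2, (1, (1,)): 0.8},
}

def joint(assignment):
    """Chain rule for Bayesian networks: P(x1, ..., xn) = prod_i P(xi | parents(xi))."""
    p = 1.0
    for node, value in assignment.items():
        pa_vals = tuple(assignment[pa] for pa in parents[node])
        p *= cpt[node][(value, pa_vals)]
    return p

# Sanity check: the joint sums to 1 over all 2^3 full assignments.
names = ["U", "X", "Y"]
print(sum(joint(dict(zip(names, vals))) for vals in product((0, 1), repeat=3)))  # 1.0
```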

The Ladder of Causation

From: Judea Pearl and Dana Mackenzie. The Book of Why: The New Science of Cause and Effect. Basic Books, 2018. The figure was removed for copyright reasons and replaced with a table from UCLA CSL Technical Report R-475 (July 2018): Judea Pearl, "Theoretical Impediments to Machine Learning with Seven Sparks from the Causal Revolution."

(Speaker note: read the example with floss.)

iClicker

According to Pearl, the ladder of causation has this many rungs:
A. 1
B. 2
C. 3
D. 4
E. 5

What is Identifiability?

- The sufficient parameters for a discrete Bayesian network with hidden and observable nodes are the conditional probability tables (CPTs), one for each family of nodes.
- Identifiability_1: the ability to determine whether the CPTs can be computed from observable data alone and, if so, to compute them.
- Identifiability_2: the ability to determine whether the causal effect of a set of observable variables on another observable variable in a causal Bayesian network with hidden nodes can be computed from observable data alone and, if so, to compute it.

An example of case 2 follows.

Unidentifiability_2 Example (1)

All the variables are binary. The causal graph has a hidden variable U with edges U → X, U → Y, and X → Y. Model 1:

P(U=0) = 0.5
P(X=0|U=0) = 0.6, P(X=0|U=1) = 0.4
P(Y=0|X,U):
         X=0    X=1
  U=0    0.7    0.2
  U=1    0.2    0.7

Reference: Fisher, Nature, 1958.

Unidentifiability_2 Example (2)

Note that P(X=x, Y=y) = Σ_u P(U=u) P(X=x|U=u) P(Y=y|X=x, U=u). We get the observed joint distribution:

         X=0     X=1
  Y=0    0.25    0.25
  Y=1    0.25    0.25

(For example, P(X=0, Y=0) = 0.7×0.6×0.5 + 0.2×0.4×0.5 = 0.25.)

Because of the excision semantics of intervention, the link from U to X is removed, and we have P_{X=x}(Y=y) = Σ_u P(U=u) P(Y=y|X=x, U=u). So P_{X=0}(Y=0) = 0.7×0.5 + 0.2×0.5 = 0.45.

Unidentifiability_2 Example (3)

All the variables are still binary, and the graph is the same. Model 2:

P(U=0) = 0.5
P(X=0|U=0) = 0.7, P(X=0|U=1) = 0.3
P(Y=0|X,U):
         X=0     X=1
  U=0    0.65    0.15
  U=1    0.15    0.65

Unidentifiability_2 Example (4)

Using P(X=x, Y=y) = Σ_u P(U=u) P(X=x|U=u) P(Y=y|X=x, U=u), we still get the same observed joint distribution:

         X=0     X=1
  Y=0    0.25    0.25
  Y=1    0.25    0.25

From P_{X=x}(Y=y) = Σ_u P(U=u) P(Y=y|X=x, U=u), we have P_{X=0}(Y=0) = 0.65×0.5 + 0.15×0.5 = 0.4 ≠ 0.45.

So P_X(Y) is unidentifiable in this model: two models that agree on the observed distribution P(X, Y) disagree on the causal effect.
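The arithmetic on these four slides can be checked mechanically. The sketch below encodes both models (the U=1 rows of the CPTs are reconstructed from the slides' own computations) and confirms that they induce the same observed joint P(X, Y) while assigning different values to P_{X=0}(Y=0), which is exactly what unidentifiability means here.

```python
def observed_joint(p_u0, p_x0_given_u, p_y0_given_xu):
    """P(X=x, Y=y) = sum_u P(u) P(x|u) P(y|x,u)."""
    p_u = {0: p_u0, 1: 1 - p_u0}
    p_x = {(x, u): (p_x0_given_u[u] if x == 0 else 1 - p_x0_given_u[u])
           for x in (0, 1) for u in (0, 1)}
    p_y = {(y, x, u): (p_y0_given_xu[(x, u)] if y == 0 else 1 - p_y0_given_xu[(x, u)])
           for y in (0, 1) for x in (0, 1) for u in (0, 1)}
    return {(x, y): round(sum(p_u[u] * p_x[(x, u)] * p_y[(y, x, u)] for u in (0, 1)), 4)
            for x in (0, 1) for y in (0, 1)}

def effect_of_do_x0_on_y0(p_u0, p_y0_given_xu):
    """Excision semantics: drop U -> X, so P_{X=0}(Y=0) = sum_u P(u) P(Y=0 | X=0, u)."""
    return p_u0 * p_y0_given_xu[(0, 0)] + (1 - p_u0) * p_y0_given_xu[(0, 1)]

# Model 1: P(X=0|U=u) keyed by u, and P(Y=0|X=x, U=u) keyed by (x, u)
m1_x = {0: 0.6, 1: 0.4}
m1_y = {(0, 0): 0.7, (1, 0): 0.2, (0, 1): 0.2, (1, 1): 0.7}
# Model 2
m2_x = {0: 0.7, 1: 0.3}
m2_y = {(0, 0): 0.65, (1, 0): 0.15, (0, 1): 0.15, (1, 1): 0.65}

print(observed_joint(0.5, m1_x, m1_y))      # every entry 0.25
print(observed_joint(0.5, m2_x, m2_y))      # identical observed joint
print(effect_of_do_x0_on_y0(0.5, m1_y))     # 0.45
print(effect_of_do_x0_on_y0(0.5, m2_y))     # 0.40 -- same data, different causal effect
```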

The Identifiability_2 Problem

- For a given causal Bayesian network, decide whether P_t(s), i.e., P(s | do(t)), is identifiable or not.
- If P_t(s) is identifiable, give a closed-form expression for its value in terms of distributions derived from the joint distribution of all observed quantities, P(n).
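For contrast with the unidentifiable case above, here is a short sketch of what happens under the added assumption that the confounder U can be observed: the causal effect then becomes identifiable through the standard back-door adjustment formula P_x(y) = Σ_u P(u) P(y | x, u), and it differs from naive conditioning on X. The numbers are those of Model 1 above.

```python
# Model 1 again, assuming (hypothetically) that U is observed.
p_u = {0: 0.5, 1: 0.5}
p_x0_given_u = {0: 0.6, 1: 0.4}                # P(X=0 | U=u)
p_y0_given_x0_u = {0: 0.7, 1: 0.2}             # P(Y=0 | X=0, U=u)

# Observational quantity ("seeing" X=0): P(Y=0 | X=0) = P(X=0, Y=0) / P(X=0)
p_x0_y0 = sum(p_u[u] * p_x0_given_u[u] * p_y0_given_x0_u[u] for u in (0, 1))
p_x0 = sum(p_u[u] * p_x0_given_u[u] for u in (0, 1))
print(p_x0_y0 / p_x0)      # 0.5

# Interventional quantity ("doing" X=0): back-door adjustment over the observed U
p_do_x0_y0 = sum(p_u[u] * p_y0_given_x0_u[u] for u in (0, 1))
print(p_do_x0_y0)          # 0.45
```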