CS 540 - Fall 2016 (Shavlik©), Lecture 27, Week 15 — Presentation transcript

1 Today's Topics
- Exam: two pages of notes and a simple calculator (log, e, * / + -) allowed
- Final List of Topics Covered this Term (to various levels of depth, of course)
- Review of Rest of Fall 2015 Final
- The Turing Test
- Strong vs. Weak AI Hypotheses
- Searle's Chinese Room Story
- Future of AI?
  - As a science/technology
  - Its impact on society

2 An Informal Survey
- First, be sure to do the on-line course eval!
- Does the 'singularity' seem NEARER or farther now than it did on Day 1 of the class?

3 Main Topics Covered Since Midterm (incomplete sublists)
But don't forget that ML and search played a major role in the second half of the class as well!
- Bayesian Networks and Bayes' Rule
  Full joint, Naïve Bayes, odds ratios, statistical inference
- Artificial Neural Networks
  Perceptrons, gradient descent, HUs, linear separability, deep nets
- Support Vector Machines
  Large margins, penalties for outliers, kernels (for non-linearity)
- First-Order Predicate Calculus
  Representation of English sentences, logical deduction, prob logic
- Unsupervised ML, ILP, MLNs, AI & Philosophy

4 Detailed List of Course Topics: Final Version
Learning from labeled data
- Experimental methodologies for choosing parameter settings and estimating future accuracy
- Decision trees and random forests
- Probabilistic models
- Nearest-neighbor methods
- Genetic algorithms
- Neural networks
- Support vector machines
- Reinforcement learning (reinforcements are 'indirect' labels)
- Inductive logic programming
- Markov logic networks
- Theory refinement
Learning from unlabeled data
- K-means
- Hierarchical clustering
- Expectation maximization
- Auto-association neural networks
Searching for solutions
- Heuristically finding (shortest) solutions
- Algorithms for playing games like chess
- Simulated annealing
Reasoning probabilistically
- Probabilistic inference
- Bayes' rule, Bayesian networks, Naïve Bayes, MLNs
Reasoning from concrete cases
- Case-based reasoning
- Nearest-neighbor algorithm
- Kernels
Reasoning logically
- First-order predicate calculus
- Representing domain knowledge using mathematical logic
- Logical inference
- Probabilistic logic
Problem-solving methods based on the biophysical world
- Genetic algorithms
- Simulated annealing
- Neural networks
- Reinforcement learning
Philosophical aspects
- Turing test
- Searle's Chinese Room thought experiment
- The coming singularity
- Strong vs. weak AI
- Societal impact and future of AI

5 Suggestions
- Be sure to carefully review all the HW solutions, especially HWs 3, 4, & 5 (out soon)
- Imagine a HW 6 on the last few lectures (and see worked examples in lecture notes)
- My old cs540 exams are highly predictive of my future cs540 exams
- Some things to know only at the '2 pt' level:
  - Calculus: have an intuitive sense of slope in non-linear curves (only need to know well: algebra, exp & log's, arithmetic, (weighted) sums and products: Σ and Π)
  - Matrices and using linear programming to solve SVMs (do know dot products well)
  - How to build your own walking-talking robot :-)

6 Bayesian Networks
[Bayes net diagram over nodes A, B, C, D]
a) P(A, B, ¬C, D) = P(A) × P(B | A) × (1 − P(C | A, B)) × P(D | ¬C, B) = 0.028
b) P(A, ¬C, D) = P(A, ¬C, D, B) + P(A, ¬C, D, ¬B)  // Create complete world states!
   = 0.129
c) P(¬B | A, ¬C, D) = P(¬B, A, ¬C, D) / P(A, ¬C, D)  // Get rid of the GIVEN!
   = 0.783
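A minimal sketch of the two manipulations above (marginalizing over complete world states, then dividing away the GIVEN). The factorization matches step (a), but the slide's actual CPT values did not survive transcription, so the numbers below are made up:

```python
# Hypothetical Bayes net over A, B, C, D with the factorization
# P(A) P(B|A) P(C|A,B) P(D|C,B), as in step (a). CPT values are invented.
p_A = 0.7                                   # P(A = true)
p_B_given_A  = {True: 0.8, False: 0.3}      # P(B | A)
p_C_given_AB = {(True, True): 0.9, (True, False): 0.4,
                (False, True): 0.5, (False, False): 0.1}
p_D_given_CB = {(True, True): 0.6, (True, False): 0.2,
                (False, True): 0.7, (False, False): 0.5}

def p(value, prob_true):
    """Probability that a Boolean variable takes `value`."""
    return prob_true if value else 1.0 - prob_true

def joint(a, b, c, d):
    """P(A=a, B=b, C=c, D=d) via the chain rule over the network."""
    return (p(a, p_A) * p(b, p_B_given_A[a]) *
            p(c, p_C_given_AB[(a, b)]) * p(d, p_D_given_CB[(c, b)]))

# (b) Marginalize: sum complete world states over the unmentioned variable B.
p_A_notC_D = sum(joint(True, b, False, True) for b in (True, False))

# (c) Condition: divide the joint by the probability of the GIVEN.
p_notB_given = joint(True, False, False, True) / p_A_notC_D
print(p_A_notC_D, p_notB_given)
```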

7 Naïve Bayes
Ex # | A    | B     | Output
  1  | True | False |   -
  2  |  ?   |  ?    |   ?
  3  |  ?   |  ?    |   +
[values for examples 2 and 3 were lost in transcription]
Pseudo examples: FF+, FF-, TT+, TT-
Odds(+ | A, ¬B) = P(+ | A, ¬B) / P(- | A, ¬B)
                = [ P(A | +) × P(¬B | +) × P(+) ] / [ P(A | -) × P(¬B | -) × P(-) ]
                = 2/9
We also know P(+ | A, ¬B) + P(- | A, ¬B) = 1
Two equations, two unknowns
Solve to get P(+ | A, ¬B) = 2/11
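A quick sketch of the last step, using only the odds value from the slide. Substituting P(- | A, ¬B) = 1 − P(+ | A, ¬B) into the odds ratio and solving gives P(+) = odds / (1 + odds):

```python
# Convert the odds ratio from the slide back into a probability.
odds = 2 / 9                  # Odds(+ | A, not-B), from the slide
p_plus = odds / (1 + odds)    # substitute P(-) = 1 - P(+) and solve
print(p_plus)                 # 2/11 = 0.1818...
```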

8 Neural Networks (learning rate = 0.1)
[Figure: a small feed-forward network with example inputs, weights, and Teacher = 1; the numeric labels (4, 2, -1, -5, 2.4, 2, 1.5, 1.9, ...) belong to the lost diagram]
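Since the network diagram itself did not survive transcription, here is a hedged sketch of one gradient-descent weight update with the slide's learning rate of 0.1 and Teacher = 1. The single sigmoid unit, its inputs, and its starting weights are assumptions, not the slide's actual network:

```python
import math

eta = 0.1                         # learning rate, from the slide title
teacher = 1.0                     # desired output, from the slide
x = [4.0, 2.0]                    # hypothetical inputs
w = [0.5, -0.3]                   # hypothetical starting weights
b = 0.1                           # hypothetical bias

net = sum(wi * xi for wi, xi in zip(w, x)) + b
out = 1.0 / (1.0 + math.exp(-net))            # sigmoid activation

# Delta rule for a sigmoid output unit: error gradient is (t - o) o (1 - o) x_i
delta = (teacher - out) * out * (1.0 - out)
w = [wi + eta * delta * xi for wi, xi in zip(w, x)]
b += eta * delta
print(out, w, b)
```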

9 Value of HUs
[Figure: two scatter plots of + and - examples, contrasting the linear decision boundary a perceptron can learn with the non-linear boundary learnable with three HUs]

10 SVMs
Ex # | A | B | C | Output
  1  | ? | ? | ? |   +
  2  | ? | ? | ? |   ?
  3  | ? | ? | ? |   -
New Data Set (each example re-represented by its kernel value against every training example):
Ex # | Kex1 | Kex2 | Kex3 | Output
  1  |  ?   |  ?   |  ?   |   +
  2  |  ?   |  ?   |  ?   |   ?
  3  |  ?   |  ?   |  ?   |   -
[the numeric feature and kernel values were lost in transcription]
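A sketch of how the 'new data set' above is built: feature j of example i becomes K(ex_i, ex_j), so each row of the kernel matrix is the new feature vector. The feature values and the choice of a polynomial kernel are assumptions; the slide's numbers did not survive transcription:

```python
def poly_kernel(u, v, degree=2):
    """A polynomial kernel: (u.v + 1)^degree — one common non-linear choice."""
    return (sum(ui * vi for ui, vi in zip(u, v)) + 1) ** degree

examples = [[1.0, 0.0, 2.0],      # hypothetical A, B, C values for ex 1
            [0.0, 1.0, 1.0],      # ... for ex 2
            [2.0, 1.0, 0.0]]      # ... for ex 3

# Row i becomes the new feature vector (Kex1, Kex2, Kex3) for example i.
kernel_features = [[poly_kernel(xi, xj) for xj in examples] for xi in examples]
for i, row in enumerate(kernel_features, start=1):
    print(f"ex{i}:", row)
```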

11 SVMs
Ex # | P | Q | R | Output
  1  | ? | ? | ? |   -
  2  | ? | ? | ? |   -
  3  | ? | ? | ? |   +
[feature values were lost in transcription; the weighted sums below survived]
Assume the model is: if (4 P - 3 Q + 1 R ≥ 5) then + else -
Cost? Sum of abs values of weights + 2 × (sum of slacks)
Unslacked weighted sums:
  ex1: -1 (no slack needed)
  ex2:  3 (no slack needed)
  ex3:  3 (needs slack of 3 BECAUSE WE NEED TO EXCEED THE THRESHOLD BY 1)
cost = (4 + 3 + 1) + 2 × 3 = 14
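A sketch of the cost computation above: sum of |weights| plus 2 times the total slack, where a + example must beat the threshold by 1 and a - example must fall at least 1 below it. The feature vectors below are hypothetical (chosen only to reproduce the slide's weighted sums of -1, 3, and 3), but the slack of 3 and the final cost of 14 match the slide:

```python
weights = [4, -3, 1]
threshold = 5

def slack(features, label):
    """Slack needed for one example under the margin-of-1 rule."""
    s = sum(w * f for w, f in zip(weights, features))
    if label == '+':
        return max(0, (threshold + 1) - s)   # must reach threshold + 1
    return max(0, s - (threshold - 1))       # must stay at threshold - 1 or below

examples = [([0, 0, -1], '-'),    # weighted sum -1, no slack needed
            ([0, -1, 0], '-'),    # weighted sum  3, no slack needed
            ([0, -1, 0], '+')]    # weighted sum  3, slack of 3 needed

total_slack = sum(slack(f, y) for f, y in examples)
cost = sum(abs(w) for w in weights) + 2 * total_slack
print(total_slack, cost)          # 3, 14
```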

12 Misc Questions
- Assume we have a binary classification problem where Feature A has 3 values, Feature B has 5 values, and Feature C has 4 values. How big is a full joint probability table for this problem?
  2 x 3 x 5 x 4 = 120
- An unavoidable weakness of SVMs is that the kernel matrix produced when using kernels is of size N^2, where N is the number of examples: FALSE (can sample columns)
- Often data sets have missing values for some features in some examples. Circle the method below that is the best way to 'fill in' these missing values:
  Drop In / Expectation-Maximization / K-Means / Transfer Learning
- A 'complete world state' that makes every WFF in a set of WFFs true is called a/an:
  Interpretation / Model / Skolemizer / Tautology

13 Another Worked MLN Example
Given these three rules, what is the prob of P?
  wgt = 2   P → R
  wgt = 3   R → Q   [same as ¬R ∨ Q]
  wgt = 1   Q

P | Q | R | Unnormalized Prob
F | F | F | exp(2 + 3)     = exp(5)
F | F | T | exp(2)         = exp(2)
F | T | F | exp(2 + 3 + 1) = exp(6)
F | T | T | exp(2 + 3 + 1) = exp(6)
T | F | F | exp(3)         = exp(3)
T | F | T | exp(2)         = exp(2)
T | T | F | exp(3 + 1)     = exp(4)
T | T | T | exp(2 + 3 + 1) = exp(6)

To get Z, sum all the unnormalized probs
Then divide all the probs by Z to normalize
Finally, sum the probs of those cells where P is true
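A compact sketch of the whole computation: a world's unnormalized probability is exp(sum of weights of the rules it satisfies), Z is the sum over all eight worlds, and P(P) sums the normalized probabilities of the worlds where P is true:

```python
import math
from itertools import product

rules = [(2, lambda P, Q, R: (not P) or R),   # wgt 2: P -> R
         (3, lambda P, Q, R: (not R) or Q),   # wgt 3: R -> Q
         (1, lambda P, Q, R: Q)]              # wgt 1: Q

def unnorm(world):
    """exp(sum of weights of satisfied rules) for one (P, Q, R) world."""
    return math.exp(sum(w for w, rule in rules if rule(*world)))

worlds = list(product([False, True], repeat=3))   # all (P, Q, R) assignments
Z = sum(unnorm(w) for w in worlds)                # normalizer
prob_P = sum(unnorm(w) for w in worlds if w[0]) / Z
print(prob_P)
```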

14 The Turing Test
- Says intelligence is judged based on behavior (rather than by inspecting internal data structures)
- Focus is on cognition, rather than perception, so use a simple ASCII interface
- If a human judge interacting (via a teletype) with two 'entities' cannot accurately say which is the human and which is the computer, then the computer is intelligent (visualized on the next slide)

15 The Turing Test
[Illustration: a human judge communicating via teletype with a hidden human and a hidden computer]
For nearly all AI researchers, passing the Turing Test is not a serious concern

16 Strong vs. Weak AI Hypotheses
Weak AI Hypothesis: we can accurately simulate animal/human intelligence in a computer
STRONG AI Hypothesis: we can create algorithms that are intelligent (conscious?)

17 Searle's Chinese Room
What does it mean to 'understand'?
- Assume a non-Chinese speaker is in a room with a bunch of IF-THEN rules written in Chinese (see next slide)
- Questions come in, written in Chinese
- The human inside the room matches symbols, adding intermediate deductions to some 'scratch space'
- Some rules say (in English) in their THEN part, 'send this Chinese symbol … out to the user'

18 Searle’s Chinese Room (1980)
If the person inside does a great job of answering questions, can we say he or she understands? Even if she or he is only blindly following rules?
(Of course the 'person inside' is acting like an AI program)

19 Some Debate
Your thoughts/comments?
- Is the room + the human intelligent? After all, no one part of an airplane has the property 'flies', but the whole thing does. This is called 'the systems reply'
- The 'robot reply' says that the problem is that the person doesn't sense/interact with the real world – 'symbols' would be grounded to actual physical things and thereby become meaningful

20 Future of AI? (Remember, everyone's bad at predicting the future!)
Your comments/questions?
- ML, ML, and more ML? Ditto for data? Scaling up even more?
- Specialized h/w for AI/ML algorithms?
- Personalized s/w that learns from our every action? Our watches predict heart attacks N minutes in advance? Etc.
- Will 'knowledge' (ever) play a bigger role? Eg, can we train our s/w agents and robots by talking to them, like humans teach humans?
- Robots becoming ubiquitous? (Eg, self-driving cars)
- More natural interaction with computers? Language (text, voice), gestures, sketches, images, brain waves?

21 Robots Teaching Robots
How much time did we, as a group, spend on me teaching you a fraction of what I (and the authors of the assigned readings) know about AI?
  40 hrs of class time × 85 humans ≈ 150 days
  Plus, no doubt, 10x that time outside of class :-)
How long will it take one robot that has learned 'a lot' to teach 84 robots? 8M robots? 8B? A few seconds?
Or will robots+owners have 'individual differences' that preclude direct brain-to-brain copying?
Remember: predictions (a) for nuclear power leading to "electricity too cheap to meter" and (b) 'the war to end all wars'

22 Societal Impact of AI?
- When will the majority of highway miles be robot-driven?
- When will most of the 'exciting new' S/W come from ML?
- When will half of current jobs be fully automated?
- For every job automated by AI/ML, how many new jobs will be created? 0.8? 1.5?
- Will there be a minimal 'basic' income? (Proposed in Finland.) For 'industrialized' countries? All countries?
- Do we really all want to retire at 30? Humanities majors victorious?

23 Societal Impact of AI? (2)
Ever watch "Planet of the Apes"?
- When will owning a car be a hobby?
- When will communication between human speakers of any two natural (and non-rare) languages be as easy as communication in the same one?
- When will our 'digital doubles' and robots do all our travel planning? Entertainment planning? Financial decision making? Medical decision making? Shopping? Cooking? Cleaning?
- When will the average human life span grow faster than one year per year? (Will AI drive medicine?)
- Robot care and engagement in nursing homes?
- AI and war? AI and privacy? AI and income distribution? Others?
Comments or questions? What is the prob we will all look back at these questions in 25 years and see them as naively optimistic? Seems likely (but other things will happen faster than we expect)

24 Final Comments
- Even if you don't work in AI in the future, hopefully the class helps you understand AI news, technological opportunities, and social impacts
- If you do/will work in AI, it seems to be an exciting time! (Hope there's no lurking 'AI Winter 2' due to over-hyped expectations)
- Good luck on the exam, and keep in touch, especially if you end up working in AI!

