
1  Today’s Topics
Exam (comprehensive, with focus on material since the midterm), Thurs 5:30-7:30pm, in this room; two pages of notes and a simple calculator (log, e, * / + -) allowed
The Turing Test
Strong vs Weak AI Hypotheses
Searle’s Chinese Room Story
High-Level Recap of Topics since Midterm
Final List of Topics Covered this Term (to various levels of depth, of course)
Review of Fall 2014 Final (recall: another review tomorrow of the Spring 2013 final)
Future of AI? [Not on Final]
– As a science/technology
– Its impact on society

2  An Informal Survey
First, be sure to do the on-line course evals!
– More programming? More (but shorter) HWs? Would you prefer TWICE WEEKLY classes?
How many use/do AI (ML? Other?) at work?
– Not in the sense of simply using Google, Siri, etc
How many expect to? Within 2 yrs? 5?
Does the ‘singularity’ seem NEARER or farther now than it did on Day 1 of the class?

3  The Turing Test
Says intelligence is judged based on behavior (rather than by inspecting internal data structures)
Focus on cognition, rather than perception, so use a simple ‘ascii’ interface
If a human judge interacting (via a teletype) with two ‘entities’ cannot accurately say which is the human and which is the computer, then the computer is intelligent (visualized on next slide)

4  The Turing Test
[Figure: the Turing Test setup – a judge at a teletype conversing with a hidden human and a hidden computer]
Passing it is not a serious concern for most AI researchers

5  Strong vs. Weak AI Hypotheses
Weak AI Hypothesis: we can accurately simulate animal/human intelligence in a computer
Strong AI Hypothesis: we can create algorithms that are intelligent (conscious?)

6  Searle’s Chinese Room
What does it mean to ‘understand’?
Assume a non-Chinese speaker is in a room with a bunch of IF-THEN rules written in Chinese (see next slide)
– Questions come in, written in Chinese
– The human inside the room matches symbols, adding intermediate deductions to some ‘scratch space’
– Some rules say (in English) in their THEN part, ‘send this Chinese symbol … out to the user’

7  Searle’s Chinese Room (1980)
If the person inside does a great job of answering the questions, can we say he or she understands? Even if she or he is only blindly following rules?
(Of course the ‘person inside’ is acting like an AI program)

8  Some Debate
Your thoughts/comments? Is the room + the human intelligent?
– After all, no one part of an airplane has the property flies, but the whole thing does
– This is called ‘the systems reply’ (see http://plato.stanford.edu/entries/chinese-room/)
The ‘robot reply’ says that the problem is that the person doesn’t sense/interact with the real world – ‘symbols’ would then be grounded to actual physical things and thereby become meaningful

9  Main Topics Covered Since Midterm (incomplete sublists)
But don’t forget that ML and search played a major role in the second half of the class as well!
Bayesian Networks and Bayes’ Rule
– Full joint, Naïve Bayes, odds ratios, statistical inference
Artificial Neural Networks
– Perceptrons, gradient descent, HUs, linear separability, deep nets
Support Vector Machines
– Large margins, penalties for outliers, kernels (for non-linearity)
First-Order Predicate Calculus
– Representation of English sentences, logical deduction, prob logic
Unsupervised ML, RL, ILP, COLT, AI & Philosophy

10  Detailed List of Course Topics: Final Version
Reasoning probabilistically
– Probabilistic inference
– Bayes’ rule, Bayesian networks, Naïve Bayes
Reasoning from concrete cases
– Case-based reasoning
– Nearest-neighbor algorithm
– Kernels
Reasoning logically
– First-order predicate calculus
– Representing domain knowledge using mathematical logic
– Logical inference
– Probabilistic logic
Problem-solving methods based on the biophysical world
– Genetic algorithms
– Simulated annealing
– Neural networks
– Reinforcement learning
Philosophical aspects
– Turing test
– Searle’s Chinese Room thought experiment
– The coming singularity
– Strong vs. weak AI
– Societal impact and future of AI
Learning from labeled data
– Experimental methodologies for choosing parameter settings and estimating future accuracy
– Decision trees and random forests
– Probabilistic models
– Nearest-neighbor methods
– Genetic algorithms
– Neural networks
– Support vector machines
– Reinforcement learning (reinforcements are ‘indirect’ labels)
– Inductive logic programming
– Computational learning theory
– Variations: incremental, active, and transfer learning
Learning from unlabeled data
– K-means
– Expectation maximization
– Auto-association neural networks
Searching for solutions
– Heuristically finding shortest paths
– Algorithms for playing games like chess
– Simulated annealing
– Genetic algorithms

11  Suggestions
Be sure to carefully review all the HW solutions, especially HWs 3, 4, & 5
Imagine a HW 6 on MLNs and RL (and see the worked examples in the lecture notes)
My old cs540 exams are highly predictive of my future cs540 exams
ILP: understand the search space when predicates have k arguments
Some things to only know at the ‘2 pt’ level
– Calculus: have an intuitive sense of slope in non-linear curves (only need to know well: algebra, exponentials & logs, arithmetic, (weighted) sums and products: Σ and Π)
– Matrices and using linear programming to solve SVMs (do know the dot product well)
– Active, transfer, and incremental learning
– ‘Generalizing across state’ in RL
– ‘Covering’ algorithms for learning a set of rules (covered in the ILP lecture)
– Won’t be on the final: using variable types to control search in ILP
– COLT: only need to understand the role of epsilon and delta (ε and δ)
– How to build your own walking-talking robot :-)

12  An “On Your Own” RL HW (Solution)
Consider the deterministic reinforcement environment drawn below. Let γ = 0.5. Immediate rewards are indicated inside the nodes. Once the agent reaches the ‘end’ state the current episode ends and the agent is magically transported to the ‘start’ state.
(a) A one-step, Q-table learner follows the path Start → B → C → End. On the graph below, show the Q values that have changed, and show your work. Assume that for all legal actions (ie, for all the arcs on the graph), the initial values in the Q table are 4, as shown above (feel free to copy the above 4’s below, but somehow highlight the changed values).
[Figure: graph with nodes Start (r=0), A (r=2), B (r=5), C (r=3), End (r=5), shown twice – once with every arc’s Q initialized to 4, and once with the changed entries 7, 5, and 5 after the episode]
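Worked arithmetic for part (a), assuming the standard deterministic one-step update Q(s, a) ← r(s′) + γ · max_a′ Q(s′, a′), with the reward collected on entering the next node and End treated as terminal (no future value) – the convention that reproduces the slide’s numbers:

  Q(Start, B) = r(B) + γ · max_a Q(B, a) = 5 + 0.5 · 4 = 7
  Q(B, C)     = r(C) + γ · max_a Q(C, a) = 3 + 0.5 · 4 = 5
  Q(C, End)   = r(End) + γ · 0           = 5 + 0       = 5

These match the changed entries 7, 5, 5; all other arcs keep their initial Q of 4.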

13  An “On Your Own” RL HW (Solution)
(b) Starting with the Q table you produced in Part (a), again follow the path Start → B → C → End and show the Q values below that have changed from Part (a). Show your work.
(c) What would the final Q values be in the limit of trying all possible arcs ‘infinitely’ often? Ie, what is the Bellman-optimal Q table? Explain your answer.
(d) What is the optimal path between Start and End? Explain.
Start → B → C → End
The policy is: take the arc with the highest Q out of each node
[Figure: the same graph, with updated Q values including 7.5, 5.5, and 5 after the second episode, and entries such as 7.75 and 5.875 in the Bellman-optimal table]
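A minimal Python sketch of these one-step updates (the update rule, γ, rewards, and initial Q of 4 are from the slide; arcs involving node A are omitted because only the Start→B, B→C, and C→End arcs are explicit in the transcript, and all names are illustrative):

  GAMMA = 0.5
  reward = {"Start": 0, "A": 2, "B": 5, "C": 3, "End": 5}  # reward inside each node
  Q = {("Start", "B"): 4.0, ("B", "C"): 4.0, ("C", "End"): 4.0}  # initial Q = 4 on each arc

  def best_future(state):
      """Max Q over arcs leaving `state`; 0 if none (End is terminal)."""
      return max((q for (s, _a), q in Q.items() if s == state), default=0.0)

  def replay(path):
      """Apply the one-step update Q(s,a) = r(s') + GAMMA * max_a' Q(s',a') along a path."""
      for s, s2 in zip(path, path[1:]):
          Q[(s, s2)] = reward[s2] + GAMMA * best_future(s2)

  replay(["Start", "B", "C", "End"])
  print(Q)  # part (a): Start->B is 7.0, B->C is 5.0, C->End is 5.0
  replay(["Start", "B", "C", "End"])
  print(Q)  # part (b): Start->B is 7.5, B->C is 5.5, C->End is 5.0

Note that the updates are applied in path order, so the Start→B update in the first pass still sees the old Q(B, C) = 4, which is why it yields 7 rather than 7.5.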

14  Another Worked MLN Example
Given these three rules, what is the prob of P?
wgt = 2:  P ∧ R
wgt = 3:  R ⇒ Q   [same as ¬R ∨ Q]
wgt = 1:  Q   [shorthand for the rule ‘true ⇒ Q’]

P Q R   Unnormalized Prob
F F F   exp(0 + 3 + 0)
F F T   exp(0 + 0 + 0)
F T F   exp(0 + 3 + 1)
F T T   exp(0 + 3 + 1)
T F F   exp(0 + 3 + 0)
T F T   exp(2 + 0 + 0)
T T F   exp(0 + 3 + 1)
T T T   exp(2 + 3 + 1)

To get Z, sum all the unnormalized probs
Then divide all the probs by Z to normalize
Finally, sum the probs of those cells where P is true
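A minimal Python sketch of this computation (names are illustrative; the three lambdas encode the rules as written above, and the connectives are the ones consistent with the slide’s table):

  from itertools import product
  from math import exp

  formulas = [                            # (weight, formula over a world)
      (2, lambda p, q, r: p and r),       # wgt 2: P ∧ R
      (3, lambda p, q, r: (not r) or q),  # wgt 3: R ⇒ Q, ie ¬R ∨ Q
      (1, lambda p, q, r: q),             # wgt 1: Q
  ]

  # Unnormalized prob of each world = exp(sum of weights of satisfied formulas)
  unnorm = {w: exp(sum(wgt for wgt, f in formulas if f(*w)))
            for w in product([True, False], repeat=3)}   # worlds are (P, Q, R)
  Z = sum(unnorm.values())                               # normalizer
  prob_P = sum(u for (p, _q, _r), u in unnorm.items() if p) / Z
  print(prob_P)  # = (e**3 + e**2 + e**4 + e**6) / Z ≈ 0.788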

15  Break to Review Fall 2014 Final

16  Future of AI?  (Remember, everyone’s bad at predicting the future!)
Your comments/questions?
ML, ML, and more ML? Ditto for data? Scaling up even more?
Specialized h/w for AI/ML algorithms (GPUs++)?
Personalized s/w that learns from our every action? Our watches predict heart attacks N minutes in advance? Etc
Will ‘knowledge’ (ever) play a bigger role? Eg, can we train our s/w agents and robots by talking to them, like humans teach humans? [Bonus lecture if time today]
Robots becoming ubiquitous? (Eg, self-driving cars)
More natural interaction with computers? Language, gestures, sketches, images, brain waves?

17  Robots Teaching Robots
How much time did we, as a group, spend on me teaching you a fraction of what I (and the authors of the assigned readings) know about AI?
– 50 hrs of class time × 70 humans ≈ 150 days
– Plus, no doubt, 10x that time outside of class :-)
How long will it take one robot that has learned ‘a lot’ to teach 70 robots? 7M robots? 7B?
– A few seconds?
– Or will robots+owners have ‘individual differences’ that preclude direct brain-to-brain copying?
– Remember: the predictions (a) of nuclear power leading to “electricity too cheap to meter” and (b) of ‘the war to end all wars’

18  Societal Impact of AI?
When will the majority of highway miles be robot driven?
When will most of the ‘exciting new’ s/w come from ML?
When will half of current jobs be fully automated?
For every job automated by AI/ML, how many new jobs will be created? 0.8? 1.5?
Will there be a minimal guaranteed income (proposed in Finland)? For ‘industrialized’ countries? All countries?
Do we really all want to retire at 30? Humanities majors victorious?

19  Societal Impact of AI? (2)
When will owning a car be a hobby?
When will communication between human speakers of any two natural (and non-rare) languages be as easy as communication in the same one?
When will our ‘digital doubles’ and robots do all our travel planning? Entertainment planning? Financial decision making? Medical decision making? Shopping? Cooking? Cleaning?
When will the average human life span grow faster than one year per year? (Will AI drive medicine?)
Robot care and engagement in nursing homes?
AI and war? AI and privacy? AI and income distribution? Others?
Comments or questions?
What is the prob we will all look back at these questions in 25 years and see them as naïvely optimistic? Seems likely (but other things will happen faster than we expect)
Ever watch “Planet of the Apes”?

20  Final Comments
Even if you don’t work in AI in the future, hopefully the class helps you understand AI news, technological opportunities, and social impacts
If you do/will work in AI, it seems to be an exciting time! (Hope there’s no lurking ‘AI Winter 2’ due to over-hyped expectations)
Good luck on the exam, and keep in touch, especially if you’re working in AI!

