CS 540 - Fall 2016 (Shavlik©), Lecture 27, Week 15 (12/15/16)


Today's Topics
- Exam: two pages of notes and a simple calculator (log, e, *, /, +, -) allowed
- Final list of topics covered this term (to various levels of depth, of course)
- Review of the rest of the Fall 2015 final
- The Turing Test
- Strong vs. Weak AI hypotheses
- Searle's Chinese Room story
- Future of AI?
  - As a science/technology
  - Its impact on society

An Informal Survey
- First, be sure to do the on-line course eval!
- Does the 'singularity' seem NEARER or farther now than it did on Day 1 of the class?

Main Topics Covered Since Midterm (incomplete sublists)
But don't forget that ML and search played a major role in the second half of the class as well!
- Bayesian Networks and Bayes' Rule: full joint, Naïve Bayes, odds ratios, statistical inference
- Artificial Neural Networks: perceptrons, gradient descent, HUs, linear separability, deep nets
- Support Vector Machines: large margins, penalties for outliers, kernels (for non-linearity)
- First-Order Predicate Calculus: representation of English sentences, logical deduction, probabilistic logic
- Unsupervised ML, ILP, MLNs, AI & Philosophy

Detailed List of Course Topics: Final Version
- Learning from labeled data
  - Experimental methodologies for choosing parameter settings and estimating future accuracy
  - Decision trees and random forests
  - Probabilistic models
  - Nearest-neighbor methods
  - Genetic algorithms
  - Neural networks
  - Support vector machines
  - Reinforcement learning (reinforcements are 'indirect' labels)
  - Inductive logic programming
  - Markov logic networks
  - Theory refinement
- Learning from unlabeled data
  - K-means
  - Hierarchical clustering
  - Expectation maximization
  - Auto-association neural networks
- Searching for solutions
  - Heuristically finding (shortest) solutions
  - Algorithms for playing games like chess
  - Simulated annealing
- Reasoning probabilistically
  - Probabilistic inference
  - Bayes' rule, Bayesian networks, Naïve Bayes, MLNs
- Reasoning from concrete cases
  - Case-based reasoning
  - Nearest-neighbor algorithm
  - Kernels
- Reasoning logically
  - First-order predicate calculus
  - Representing domain knowledge using mathematical logic
  - Logical inference
  - Probabilistic logic
- Problem-solving methods based on the biophysical world
  - Genetic algorithms
  - Simulated annealing
  - Neural networks
  - Reinforcement learning
- Philosophical aspects
  - Turing test
  - Searle's Chinese Room thought experiment
  - The coming singularity
  - Strong vs. weak AI
  - Societal impact and future of AI

Suggestions
- Be sure to carefully review all the HW solutions, especially HWs 3, 4, & 5 (out soon)
- Imagine a HW 6 on the last few lectures (and see worked examples in lecture notes)
- My old cs540 exams are highly predictive of my future cs540 exams
- Some things to know only at the '2 pt' level:
  - Calculus: have an intuitive sense of slope in non-linear curves (only need to know well: algebra, exp & log's, arithmetic, (weighted) sums and products: Σ and Π)
  - Matrices and using linear programming to solve SVMs (do know dot products well)
  - How to build your own walking-talking robot :-)

Bayesian Networks (network over A, B, C, D; structure as drawn on the slide)
a) P(A, B, ¬C, D) = P(A) × P(B | A) × (1 − P(C | A, B)) × P(D | ¬C, B) = 0.028
b) P(A, ¬C, D) = P(A, ¬C, D, B) + P(A, ¬C, D, ¬B)   // Create complete world states! = 0.129
c) P(¬B | A, ¬C, D) = P(¬B, A, ¬C, D) / P(A, ¬C, D)   // Get rid of the GIVEN! = 0.783
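The slide's recipe (chain-rule factorization, summing complete world states, then dividing out the GIVEN) can be sketched in a few lines of Python. The network structure is taken from the factorization in part (a); the CPT numbers below are made up for illustration, so the resulting probabilities differ from the exam's 0.028 / 0.129 / 0.783.

```python
from itertools import product

# Hypothetical CPTs (NOT the exam's numbers) for a network whose joint
# factors as on the slide: P(A) P(B|A) P(C|A,B) P(D|C,B)
p_a = 0.4
p_b = {True: 0.7, False: 0.2}                      # P(B=true | A)
p_c = {(True, True): 0.9, (True, False): 0.5,
       (False, True): 0.3, (False, False): 0.1}    # P(C=true | A, B)
p_d = {(True, True): 0.8, (True, False): 0.6,
       (False, True): 0.4, (False, False): 0.05}   # P(D=true | C, B)

def joint(a, b, c, d):
    """P(A=a, B=b, C=c, D=d); use 1 - p when a variable is false."""
    pa = p_a if a else 1 - p_a
    pb = p_b[a] if b else 1 - p_b[a]
    pc = p_c[(a, b)] if c else 1 - p_c[(a, b)]
    pd = p_d[(c, b)] if d else 1 - p_d[(c, b)]
    return pa * pb * pc * pd

# (b) "create complete world states": marginalize out B
p_acd = joint(True, True, False, True) + joint(True, False, False, True)

# (c) "get rid of the GIVEN": condition by dividing joints
p_notb_given = joint(True, False, False, True) / p_acd

# sanity check: the 16 complete world states form a distribution
total = sum(joint(*w) for w in product([True, False], repeat=4))
```

The same three steps work for any query on any Bayes net small enough to enumerate.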

Naïve Bayes
Training examples (only part of the table survives in this transcript):
Ex # | A    | B     | Output
1    | True | False | −
2    |      |       |
3    |      |       | +
Pseudo examples: FF+, FF−, TT+, TT−
Odds(+ | A, ¬B) = P(+ | A, ¬B) / P(− | A, ¬B)
  = [ P(A | +) × P(¬B | +) × P(+) ] / [ P(A | −) × P(¬B | −) × P(−) ] = 2/9
We also know P(+ | A, ¬B) + P(− | A, ¬B) = 1
Two equations, two unknowns; solve to get P(+ | A, ¬B) = 2/11
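The final "two equations, two unknowns" step is mechanical, and a tiny sketch makes it concrete: given Odds(+|x) = P(+|x)/P(−|x) and the fact that the two conditional probabilities sum to 1, the probability falls out as odds / (1 + odds).

```python
from fractions import Fraction

def odds_to_prob(odds):
    """Odds(+|x) = P(+|x) / P(-|x) together with P(+|x) + P(-|x) = 1
    gives P(+|x) = odds / (1 + odds)."""
    return odds / (1 + odds)

p_plus = odds_to_prob(Fraction(2, 9))   # the slide's Odds(+ | A, not-B)
# p_plus == Fraction(2, 11), matching the slide
```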

Neural Networks (learning rate = 0.1)
[Diagram: a worked weight-update example on a small network with numeric inputs and weights, Teacher = 1; the figure itself did not survive the transcript.]
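The update step this slide works through can be sketched as follows, assuming the standard perceptron training rule with a step activation; the inputs and weights here are hypothetical stand-ins, since only the learning rate (0.1) and Teacher = 1 survive from the diagram.

```python
# One perceptron weight update, learning rate 0.1, teacher output 1.
# Inputs and weights are hypothetical, not the (lost) slide values.
eta = 0.1
teacher = 1
inputs  = [4, 2, -1]        # hypothetical inputs (last one acts as a bias input)
weights = [2, -5, 1.5]      # hypothetical current weights

net = sum(w * x for w, x in zip(weights, inputs))
out = 1 if net >= 0 else 0  # step activation

# perceptron rule: w_i <- w_i + eta * (teacher - out) * x_i
weights = [w + eta * (teacher - out) * x for w, x in zip(weights, inputs)]
```

Here net = 8 − 10 − 1.5 = −3.5, so the unit outputs 0 while the teacher says 1, and every weight moves by 0.1 × 1 × (its input).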

Value of HUs
[Diagram: side-by-side labelings of + and − points, contrasting what perceptrons can separate with what a network with three HUs can represent; the point layouts did not survive the transcript.]

SVMs: New Data Set
Original examples (Ex # | A | B | C | Output; only ex 1's label + and ex 3's label − survive the transcript).
Kernelized data set: each example is re-represented by its kernel value against every training example (Ex # | Kex1 | Kex2 | Kex3 | Output).
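The "new data set" construction can be sketched directly: re-represent each example by its kernel values against every training example. The polynomial kernel and the three tiny examples below are hypothetical stand-ins, not the slide's (partially lost) table.

```python
def kernel(x, y):
    """A simple polynomial kernel, (x . y + 1)^2 (hypothetical choice)."""
    return (sum(a * b for a, b in zip(x, y)) + 1) ** 2

# hypothetical training examples: (features, label)
train = [([1, 0, 1], '+'), ([0, 1, 0], '+'), ([1, 1, 0], '-')]

# Row i of the new data set is (K(x_i, ex1), K(x_i, ex2), K(x_i, ex3), label_i)
new_data = [tuple(kernel(x, xj) for xj, _ in train) + (label,)
            for x, label in train]
```

Note the new data set always has N kernel features for N training examples, regardless of how many raw features the original examples had.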

SVMs
Assume the model is: if (4 P − 3 Q + 1 R ≥ 5) then + else −
Cost? Sum of the absolute values of the weights + 2 × (sum of the slacks)
Ex # | P  | Q  | R  | Output
1    | 0  | 1  | 2  | −
2    | −1 | −1 | 4  | −
3    | 1  | 0  | −1 | +
Unslacked weighted sums:
- ex1: 0 − 3 + 2 = −1 (no slack needed)
- ex2: −4 + 3 + 4 = 3 (no slack needed)
- ex3: 4 + 0 − 1 = 3 (needs slack of 3, BECAUSE WE NEED TO EXCEED THE THRESHOLD BY 1)
Cost = 4 + 3 + 1 + 2 × 3 = 14
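The slide's bookkeeping can be checked mechanically. In this sketch the feature values are reconstructed from the three weighted sums shown above (0 − 3 + 2, −4 + 3 + 4, 4 + 0 − 1); the margin of 1 and slack penalty of 2 are as stated.

```python
# Cost of the fixed model "if (4P - 3Q + 1R >= 5) then + else -"
weights, threshold = [4, -3, 1], 5
examples = [([0, 1, 2], '-'), ([-1, -1, 4], '-'), ([1, 0, -1], '+')]

def slack(x, label):
    """Slack needed for one example with a margin of 1 around the threshold."""
    s = sum(w * xi for w, xi in zip(weights, x))
    if label == '+':
        return max(0, (threshold + 1) - s)   # positives must reach threshold + 1
    return max(0, s - (threshold - 1))       # negatives must stay below threshold - 1

cost = sum(abs(w) for w in weights) + 2 * sum(slack(x, y) for x, y in examples)
# cost == 14, matching the slide
```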

Misc Questions
1. Assume we have a binary classification problem where Feature A has 3 values, Feature B has 5 values, and Feature C has 4 values. How big is a full joint probability table for this problem?  2 × 3 × 5 × 4 = 120
2. An unavoidable weakness of SVMs is that the kernel matrix produced when using kernels is of size N², where N is the number of examples: FALSE (can sample columns)
3. Often data sets have missing values for some features in some examples. Circle the method below that is the best way to 'fill in' these missing values: Drop-In, Expectation-Maximization, K-Means, Transfer Learning
4. A 'complete world state' that makes every WFF in a set of WFFs true is called a/an: Interpretation, Model, Skolemizer, Tautology

Another Worked MLN Example
Given these three rules, what is the prob of P?
  wgt = 2:  P ∧ R
  wgt = 3:  R → Q   [same as ¬R ∨ Q]
  wgt = 1:  Q
P | Q | R | Unnormalized Prob
F | F | F | exp(0 + 3 + 0)
F | F | T | exp(0 + 0 + 0)
F | T | F | exp(0 + 3 + 1)
F | T | T | exp(0 + 3 + 1)
T | F | F | exp(0 + 3 + 0)
T | F | T | exp(2 + 0 + 0)
T | T | F | exp(0 + 3 + 1)
T | T | T | exp(2 + 3 + 1)
To get Z, sum all the unnormalized probs; then divide all the probs by Z to normalize.
Finally, sum the probs of those cells where P is true.
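The whole computation fits in a few lines. This sketch follows the slide's table: score each world by exp(sum of the weights of the satisfied formulas), normalize by Z, then sum the worlds where P holds.

```python
from itertools import product
from math import exp

def world_score(p, q, r):
    """Unnormalized weight of one world under the three rules."""
    s = 0
    if p and r:
        s += 2          # wgt 2: P ^ R
    if (not r) or q:
        s += 3          # wgt 3: R -> Q  (same as not-R v Q)
    if q:
        s += 1          # wgt 1: Q
    return exp(s)

worlds = list(product([False, True], repeat=3))      # all (P, Q, R) assignments
Z = sum(world_score(*w) for w in worlds)             # normalizer
prob_P = sum(world_score(*w) for w in worlds if w[0]) / Z
```

With these weights, prob_P works out to roughly 0.79; doubling a rule's weight sharpens the distribution toward worlds satisfying that rule.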

The Turing Test
- Says intelligence is judged based on behavior (rather than by inspecting internal data structures)
- Focus is on cognition, rather than perception, so use a simple 'ascii' interface
- If a human judge interacting (via a teletype) with two 'entities' cannot accurately say which is the human and which is the computer, then the computer is intelligent (visualized on the next slide)

The Turing Test
[Diagram: a judge at a teletype conversing with a hidden human and a hidden computer.]
Passing it is not a serious concern of nearly all AI researchers.

Strong vs. Weak AI Hypotheses
- Weak AI Hypothesis: we can accurately simulate animal/human intelligence in a computer
- STRONG AI Hypothesis: we can create algorithms that are intelligent (conscious?)

Searle's Chinese Room
What does it mean to 'understand'?
- Assume a non-Chinese speaker is in a room with a bunch of IF-THEN rules written in Chinese (see next slide)
- Questions come in, written in Chinese
- The human inside the room matches symbols, adding intermediate deductions to some 'scratch space'
- Some rules say (in English) in their THEN part, 'send this Chinese symbol … out to the user'

Searle's Chinese Room (1980)
- If the person inside does a great job of answering questions, can we say he or she understands?
- Even if she or he is only blindly following rules?
- (Of course the 'person inside' is acting like an AI program)

Some Debate
Your thoughts/comments?
- Is the room + the human intelligent? After all, no one part of an airplane has the property 'flies', but the whole thing does. This is called 'the systems reply' (see http://plato.stanford.edu/entries/chinese-room/)
- The 'robot reply' says that the problem is that the person doesn't sense/interact with the real world; 'symbols' would be grounded to actual physical things and thereby become meaningful

Future of AI? (Remember, everyone's bad at predicting the future!)
Your comments/questions?
- ML, ML, and more ML? Ditto for data?
  - Scaling up even more? Specialized h/w for AI/ML algorithms?
  - Personalized s/w that learns from our every action? Our watches predict heart attacks N minutes in advance? Etc.
- Will 'knowledge' (ever) play a bigger role? Eg, can we train our s/w agents and robots by talking to them, like humans teach humans?
- Robots becoming ubiquitous? (Eg, self-driving cars)
- More natural interaction with computers? Language (text, voice), gestures, sketches, images, brain waves?

Robots Teaching Robots
- How much time did we, as a group, spend on me teaching you a fraction of what I (and the authors of the assigned readings) know about AI?
  - 40 hrs of class time × 85 humans ≈ 150 days
  - Plus, no doubt, 10× that time outside of class :-)
- How long will it take one robot that has learned 'a lot' to teach 84 robots? 8M robots? 8B? A few seconds?
- Or will robots+owners have 'individual differences' that preclude direct brain-to-brain copying?
- Remember: the predictions (a) that nuclear power would make electricity 'too cheap to meter' and (b) of 'the war to end all wars'

Societal Impact of AI?
- When will the majority of highway miles be robot driven?
- When will most of the 'exciting new' s/w come from ML?
- When will half of current jobs be fully automated?
- For every job automated by AI/ML, how many new jobs will be created? 0.8? 1.5?
- Will there be a minimal 'basic' income? (Proposed in Finland.) For 'industrialized' countries? All countries?
- Do we really all want to retire at 30? Humanities majors victorious?

Societal Impact of AI? (2)
Ever watch "Planet of the Apes"?
- When will owning a car be a hobby?
- When will communication between human speakers of any two natural (and non-rare) languages be as easy as communication in the same one?
- When will our 'digital doubles' and robots do all our travel planning? Entertainment planning? Financial decision making? Medical decision making? Shopping? Cooking? Cleaning?
- When will the average human life span grow faster than one year per year? (Will AI drive medicine?)
- Robot care and engagement in nursing homes?
- AI and war? AI and privacy? AI and income distribution? Others?
Comments or questions? What is the probability we will all look back at these questions in 25 years and see them as naively optimistic? Seems likely (but other things will happen faster than we expect)

Final Comments
- Even if you don't work in AI in the future, hopefully the class helps you understand AI news, technological opportunities, and social impacts
- If you do/will work in AI, it seems to be an exciting time! (Hope there's no lurking 'AI Winter 2' due to over-hyped expectations)
- Good luck on the exam, and keep in touch, especially if you're working in AI!