Intro to Data Science Recitation #3 Tel Aviv University 2016/2017 Slava Novgorodov

Today's lesson
Introduction to Data Science: Naïve Bayes
- What is a learning system?
- A simple example and its implementation
- The Naïve Bayes algorithm
- Use cases

Learning (simple AI) - example
- The system starts out knowing only the rules of the game
- Initially it makes only random moves
- It learns from the outcome of each game and adjusts its strategy
- After several iterations the system makes smarter moves
- Eventually it is able to win every game

Learning (simple AI) - example
Running example: the 11-matchstick game.
Rules: 2 players take 1 or 2 matchsticks, turn by turn, until none are left. The winner is the player who takes the last matchstick.

Analysis of the game
- Which player wins the game?
- Is there an "always win" strategy?
- How do we find it?

Analysis of the game
Let's look at the game from the end (a state is the number of matchsticks left for the player about to move).
- States 1 and 2 are winning states: in both cases the player can take the last matchstick.
- Hence state 3 is a losing state: any move puts the opponent in a winning state.
- Continuing, state 4 is a winning state, because taking 1 puts the opponent in the losing state 3, and so on.
In general, a state is losing exactly when the number of matchsticks left is a multiple of 3.
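As a quick sanity check, here is a minimal Python sketch (mine, not from the recitation) that runs the same backward induction:

# win[s] is True if the player to move with s matchsticks left can force a win
def winning_states(n):
    win = {0: False}  # your turn with 0 left: the opponent took the last matchstick
    for s in range(1, n + 1):
        # s is winning if some legal move leaves the opponent in a losing state
        win[s] = any(not win[s - take] for take in (1, 2) if take <= s)
    return win

print([s for s in range(1, 12) if not winning_states(11)[s]])  # prints [3, 6, 9]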

Analysis of the game
As we can see, Player 1 always wins (with correct play): 11 is not a multiple of 3, so the game starts in a winning state. Let's see how we can play against the computer.

Teaching the computer to play
Let's give the computer a way to decide how many matchsticks to take in every state. We initialize 10 "boxes" (decision units), one in charge of each of the states 2 to 11 (state 1 is trivial). We initialize each box with two notes: a black one (with "one" written on it) and a white one (with "two").

Teaching the computer to play
"The brain": the ten boxes for states 2 to 11, each holding its notes (figure not preserved in this transcript).

Teaching the computer to play - strategy
1. The computer checks how many matchsticks are left and goes to the box for the corresponding state.
2. It randomly draws one of the notes inside the box and moves according to what is written on it ("one": take 1 matchstick, "two": take 2).
3. At each step it remembers which note was drawn from which box.
4. If the box is empty, the computer resigns (defeat).

Teaching the computer to play - learning
- If the computer wins, add a copy of each chosen note to the box it came from.
- If the computer loses, remove the chosen note from the last box it used.

Simulation
One game, step by step; the moves are listed in the order they were made (Computer | Player columns on the original slides):
Initial: (no moves yet)
Step 1: 1
Step 2: 1 2
Step 3: 1 2 2
Step 4: 1 2 2 1
Step 5: 1 2 2 1 2
Step 6: 1 2 2 1
Step 7: 1 2 2 1 1
Step 8: 1 2 2 1 1 1
Result: the computer loses, so the last note it drew is removed from its box.

The slides then replay steps 6 and 7 with a different draw:
Step 6: 1 2 2 1
Step 7: 1 2 2 1 2
Result: the computer wins, so a copy of each chosen note is added to its box.

After several games…
This is the "brain" of a well-trained system that never loses.

Let’s code!
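A minimal sketch of the matchbox learner described above. The recitation's original code is not preserved in this transcript, so the names here are mine, and I assume training against a random opponent:

import random

N = 11
# one box per state 2..11; each starts with a "one" note (1) and a "two" note (2)
boxes = {s: [1, 2] for s in range(2, N + 1)}

def computer_move(left, used):
    if left == 1:
        return 1                    # state 1 is trivial: take the last matchstick
    if not boxes[left]:
        return None                 # empty box: the computer resigns
    note = random.choice(boxes[left])
    used.append((left, note))       # remember which note was drawn from which box
    return note

def play_one_game():
    left, used = N, []
    while True:
        take = computer_move(left, used)
        if take is None:
            return False, used      # resignation counts as a defeat
        left -= take
        if left == 0:
            return True, used       # the computer took the last matchstick
        take = random.choice([1, 2]) if left >= 2 else 1   # random opponent
        left -= take
        if left == 0:
            return False, used      # the opponent took the last matchstick

def learn(won, used):
    if won:
        for state, note in used:
            boxes[state].append(note)   # add a copy of each chosen note to its box
    elif used:
        state, note = used[-1]
        boxes[state].remove(note)       # remove the chosen note from the last box used

for _ in range(10000):
    won, used = play_one_game()
    learn(won, used)

print({s: sorted(notes) for s, notes in boxes.items()})   # the trained "brain"

After enough games the boxes for the losing states (3, 6, 9) tend to empty out, while the boxes for winning states fill up with copies of the correct note, matching the analysis above.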

Naïve Bayes classifier
A classifier based on Bayes' theorem: P(class | document) = P(document | class) * P(class) / P(document)
Widely used in Data Science for:
- Sentiment analysis
- Document categorization
- Spam detection
- Fraud detection

Naïve Bayes classifier - assumptions
- Bag of words: the position of a word in the document doesn't matter.
- Conditional independence: P(w1, ..., wn | class) = P(w1 | class) * P(w2 | class) * ... * P(wn | class)

Example – Sentiment Analysis
The task: given a review, decide whether it is positive or negative.
- The movie is awesome!
- Avoid this place!
- Recommended! Will come back again!
- The worst place ever!!!
- Great place to visit with kids!
- Never again! Terrible! Very bad…
- Best food ever! Must visit!

Sentiment Analysis
- Use already classified reviews for learning
- Represent each review as a vector of words
- Calculate probabilities

Review | Class
Never visit again | NEGATIVE
Great place – visit! | POSITIVE
Good food | POSITIVE
Awesome | POSITIVE
Bad place | NEGATIVE
Will visit again | POSITIVE
Great food – must visit! | ???

Sentiment Analysis
The same reviews represented as word vectors (1 = the word occurs in the review):

Review | never | visit | again | great | place | good | food | awesome | bad | must | Class
Never visit again | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | NEG
Great place – visit! | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | POS
Good food | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | POS
Awesome | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | POS
Bad place | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | NEG
Will visit again | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | POS

Sentiment Analysis
P(POS) = 4/6, P(NEG) = 2/6
With add-one (Laplace) smoothing, P(w|c) = (count(w,c) + 1) / (tokens(c) + |V|); the positive reviews contain 9 word tokens, the negative ones 5, and the vocabulary has 10 words:
P(great|POS) = (1+1)/(9+10) = 2/19   P(great|NEG) = (0+1)/(5+10) = 1/15
P(food|POS) = (1+1)/(9+10) = 2/19   P(food|NEG) = (0+1)/(5+10) = 1/15
P(must|POS) = (0+1)/(9+10) = 1/19   P(must|NEG) = (0+1)/(5+10) = 1/15
P(visit|POS) = (2+1)/(9+10) = 3/19   P(visit|NEG) = (1+1)/(5+10) = 2/15
P(POS|"great food must visit") ~ P(POS) * P("great"|POS) * P("food"|POS) * P("must"|POS) * P("visit"|POS) = 4/6 * 2/19 * 2/19 * 1/19 * 3/19 ≈ 0.00006138688
P(NEG|"great food must visit") ~ P(NEG) * P("great"|NEG) * P("food"|NEG) * P("must"|NEG) * P("visit"|NEG) = 2/6 * 1/15 * 1/15 * 1/15 * 2/15 ≈ 0.00001316872
ArgMax({P(POS|"…"), P(NEG|"…")}) = POSITIVE
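The arithmetic is easy to verify with exact fractions (a small snippet of my own):

from fractions import Fraction as F

# prior times the smoothed likelihood of each word in "great food must visit"
pos = F(4, 6) * F(2, 19) * F(2, 19) * F(1, 19) * F(3, 19)
neg = F(2, 6) * F(1, 15) * F(1, 15) * F(1, 15) * F(2, 15)
print(float(pos), float(neg), "POSITIVE" if pos > neg else "NEGATIVE")
# ~6.139e-05  ~1.317e-05  POSITIVE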

Let’s code!
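A minimal multinomial Naïve Bayes classifier with add-one smoothing, built on the toy reviews above; this is my own sketch, not the recitation's original code:

from collections import Counter

train = [
    ("never visit again", "NEG"),
    ("great place visit", "POS"),
    ("good food", "POS"),
    ("awesome", "POS"),
    ("bad place", "NEG"),
    ("will visit again", "POS"),
]

class_counts = Counter(c for _, c in train)           # number of reviews per class
word_counts = {c: Counter() for c in class_counts}    # word frequencies per class
for text, c in train:
    word_counts[c].update(text.split())
vocab = {w for text, _ in train for w in text.split()}

def predict(text):
    best_class, best_score = None, 0.0
    for c in class_counts:
        score = class_counts[c] / len(train)          # prior P(c)
        total = sum(word_counts[c].values())          # word tokens in class c
        for w in text.split():
            # add-one (Laplace) smoothed likelihood P(w|c)
            score *= (word_counts[c][w] + 1) / (total + len(vocab))
        if best_class is None or score > best_score:
            best_class, best_score = c, score
    return best_class

print(predict("great food must visit"))   # prints POS

A real implementation would add log-probabilities instead of multiplying raw probabilities, to avoid floating-point underflow on longer documents.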

Questions?