1er. Escuela Red ProTIC - Tandil, 18-28 de Abril, 2006

Presentation transcript:

1. Introduction
How to program computers to learn? Learning: improving automatically with experience.
Example: computers learning from medical records which treatments are most effective for new diseases.
Added value: a better understanding of human learning abilities.

1.1 Well-Posed Learning Problems
–Definition: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.

Example
–Task T: playing checkers
–Training experience E: playing games against itself
–Performance measure P: percentage of games won against opponents
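To make the performance measure P concrete, the sketch below (not from the slides; all names are hypothetical) estimates P for the checkers task by playing a number of games and reporting the percentage won. The game itself is stubbed out with a coin flip so the sketch stays runnable.

    import random

    def play_game(learner, opponent):
        # Stand-in for a full game of checkers; the outcome is a coin flip here
        # only so the sketch runs end to end.
        return random.random() < 0.5

    def estimate_performance(learner, opponent, n_games=1000):
        # P = percentage of games won against the opponent.
        wins = sum(play_game(learner, opponent) for _ in range(n_games))
        return 100.0 * wins / n_games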

1.2 Designing a Learning System
–Choosing the training experience:
Direct (teacher)
Indirect (credit assignment)
Distribution of examples

–Choosing the target function
Legal moves are known a priori, but the best search strategy is not.
Target function: ChooseMove : B → M, where B is the set of legal board states and M the optimal legal move.
Alternatively: a real-valued function V : B → ℝ.
Learning task: discover an operational description of the ideal target function V (function approximation).
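As a minimal sketch of how this works in practice (not from the slides; legal_moves, apply_move, and v_hat are hypothetical helpers), ChooseMove can be realized indirectly through a learned evaluation function V by scoring the board that each legal move produces and picking the highest-scoring move:

    def choose_move(board, v_hat, legal_moves, apply_move):
        # ChooseMove: B -> M, obtained indirectly from an evaluation V: B -> R:
        # evaluate the successor board of every legal move and keep the best one.
        return max(legal_moves(board), key=lambda m: v_hat(apply_move(board, m)))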

–Choosing a Representation for the Target Function
V(b) = w0 + w1X1 + w2X2 + … + w6X6
X1, X2: number of black/red pieces on the board
X3, X4: number of black/red kings on the board
X5, X6: number of black/red pieces threatened (i.e., that can be captured on red's/black's next turn)
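A minimal sketch of this linear representation (not from the slides; the dictionary-based board and the feature names are assumptions) evaluates V(b) as a weighted sum of the six board features:

    def extract_features(board):
        # Hypothetical feature extractor: returns [X1, ..., X6] for a board
        # represented here as a dict of pre-computed counts.
        return [board[k] for k in ("black_pieces", "red_pieces",
                                   "black_kings", "red_kings",
                                   "black_threatened", "red_threatened")]

    def v_hat(board, weights):
        # V(b) = w0 + w1*X1 + ... + w6*X6, with weights = [w0, ..., w6].
        xs = extract_features(board)
        return weights[0] + sum(w * x for w, x in zip(weights[1:], xs))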

–Choosing a Function Approximation Algorithm
Training examples: (b, Vtrain(b))
Rule for estimating training values: Vtrain(b) ← V(Successor(b))
–Adjusting the Weights
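Mitchell's Chapter 1 adjusts the weights with the LMS (least mean squares) rule, wi ← wi + η (Vtrain(b) − V(b)) Xi. A minimal sketch of one such step (not from the slides), reusing the hypothetical extract_features and v_hat defined above:

    def lms_update(board, v_train, weights, eta=0.01):
        # One LMS step: wi <- wi + eta * (Vtrain(b) - V(b)) * Xi, with X0 = 1.
        xs = [1.0] + extract_features(board)   # prepend X0 = 1 for the bias w0
        error = v_train - v_hat(board, weights)
        return [w + eta * error * x for w, x in zip(weights, xs)]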

–Design Choices

1.3 Some Issues in Machine Learning
–What algorithms can approximate functions well, and when?
–How does the number of training examples influence accuracy?
–How does the complexity of the hypothesis representation impact it?
–How does noisy data influence accuracy?
–What clues can we get from biological learning systems?