Presentation transcript:

Chapter 1: Introduction

2 Contents
- Definition and Applications of Machine Learning
- Designing a Learning System
  - Choosing the Training Experience
  - Choosing the Target Function
  - Choosing a Representation for the Target Function
  - Choosing a Function Approximation Algorithm
  - The Final Design
- Perspectives and Issues in Machine Learning
- Organization of the Book
- Summary and Bibliography

3 Applications of Machine Learning
- Recognizing spoken words
  - Phoneme and word recognition, signal interpretation
  - Neural networks, hidden Markov models
- Driving an autonomous vehicle
  - Driverless vehicle control; also applicable to other sensor-based control problems
- Classifying new astronomical structures
  - Classification of celestial objects, using decision tree learning
- Playing world-class backgammon
  - Learns strategy by playing practice games; applicable to problems with very large search spaces

4 Disciplines Related to Machine Learning
- Artificial intelligence
  - Learning symbolic representations, search problems, problem solving, use of prior knowledge
- Bayesian methods
  - Basis for computing hypothesis probabilities; the naive Bayes classifier; estimating values of unobserved variables
- Computational complexity theory
  - Theoretical bounds on computational effort, the amount of training data required, the number of errors, etc.
- Control theory
  - Learning control processes that optimize a predefined objective and predict the next state

5 Disciplines Related to Machine Learning (2)
- Information theory
  - Measures of entropy and information content; minimum description length; the relationship between optimal codes and optimal training sequences
- Philosophy
  - Occam's razor; analysis of the justification for generalizing beyond observed data
- Psychology and neurobiology
  - Neural network models
- Statistics
  - Characterization of the error made when estimating a hypothesis's accuracy; confidence intervals; statistical tests

6 Well-Posed Learning Problems
- Definition
  - A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.
- A class of tasks T
- Experience E
- Performance measure P

7 A Checkers Learning Problem
- Three features that define a learning problem:
  - The class of tasks
  - The measure of performance to be improved
  - The source of experience
- Example
  - Task T: playing checkers
  - Performance measure P: percent of games won against opponents
  - Training experience E: playing practice games against itself

8 1.2 Designing a Learning System
- Choosing the Training Experience
- Choosing the Target Function
- Choosing a Representation for the Target Function
- Choosing a Function Approximation Algorithm

9 Choosing the Training Experience
- Key attributes
  - Direct or indirect feedback
    - Direct feedback: individual checkers board states and the correct move for each
    - Indirect feedback: move sequences and the final outcomes of games
  - Degree of control over the sequence of training examples
    - The extent to which the learner relies on a teacher when obtaining training information
  - Distribution of examples
    - Should reflect the distribution of examples over which the system's final performance will be measured

10 Choosing the Target Function
- A function that chooses the best move M for any board state B
  - ChooseMove : B -> M
  - Difficult to learn directly
- It is useful to reduce the problem of improving performance P at task T to the problem of learning some particular target function.
- An evaluation function that assigns a numerical score to any board state B
  - V : B -> R

11 Target Function for the Checkers Problem
- Definition
  - If b is a final board state that is won, then V(b) = 100
  - If b is a final board state that is lost, then V(b) = -100
  - If b is a final board state that is drawn, then V(b) = 0
  - If b is not a final state, then V(b) = V(b'), where b' is the best final board state reachable from b, assuming both players play optimally until the end of the game
- This definition is nonoperational, i.e., not efficiently computable
- An operational description of V requires function approximation
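To make the recursion concrete, here is a minimal Python sketch of this nonoperational definition; is_final, outcome, and legal_successors are hypothetical helpers (the slides do not fix a board representation), and the exhaustive search to the end of the game is exactly why the definition is not efficiently computable.

    def ideal_v(b, black_to_move=True):
        # Final states get their true game value (from black's point of view).
        if is_final(b):
            return {'won': 100, 'lost': -100, 'drawn': 0}[outcome(b)]
        # Optimal play by both sides: black maximizes the score, red minimizes it.
        scores = [ideal_v(nxt, not black_to_move)
                  for nxt in legal_successors(b, black_to_move)]
        return max(scores) if black_to_move else min(scores)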

12 Choosing a Representation for the Target Function
- Describing the function
  - Tables
  - Rules
  - Polynomial functions
  - Neural nets
- Trade-offs in the choice
  - Expressive power
  - Size of the training data required

13 Linear Combination as Representation

  V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

where
- x1: number of black pieces on the board
- x2: number of red pieces on the board
- x3: number of black kings on the board
- x4: number of red kings on the board
- x5: number of black pieces threatened by red
- x6: number of red pieces threatened by black
- w0, ..., w6: weights to be learned
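As an illustration, a minimal Python sketch of this linear evaluation function; the weight and feature values in the example are made up purely for illustration, and a real program would compute x1..x6 from an actual board representation.

    def v_hat(weights, features):
        """V̂(b) = w0 + w1*x1 + ... + w6*x6, with weights = [w0, ..., w6]."""
        w0, ws = weights[0], weights[1:]
        return w0 + sum(w * x for w, x in zip(ws, features))

    weights = [0.0, 1.0, -1.0, 3.0, -3.0, -0.4, 0.4]   # [w0, w1, ..., w6]
    features = [12, 12, 0, 0, 1, 2]                    # [x1, ..., x6]
    print(v_hat(weights, features))                    # ≈ 0.4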

14 Partial Design of a Checkers Learning Program
- Task T: playing checkers
- Performance measure P: percent of games won in the world tournament
- Training experience E: games played against itself
- Target function: V : Board -> R
- Target function representation:
  V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

15 Choosing a Function Approximation Algorithm
- A training example is represented as an ordered pair ⟨b, Vtrain(b)⟩
  - b: board state
  - Vtrain(b): the training value for b
- Example: ⟨b, +100⟩ for a board state b in which black has won the game (x2 = 0, i.e., red has no pieces left)
- Estimating training values for intermediate board states
  - Vtrain(b) ← V̂(Successor(b))
  - V̂: the current approximation to V
  - Successor(b): the next board state following b at which it is again the program's turn to move
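A minimal sketch of this training-value estimate, reusing v_hat from the previous sketch; is_final, outcome, successor, and extract_features are hypothetical helpers, since the slides do not define a board representation.

    def training_value(b, weights):
        # Final states get their true game outcome rather than an estimate.
        if is_final(b):
            return {'won': 100, 'lost': -100, 'drawn': 0}[outcome(b)]
        # Vtrain(b) <- V̂(Successor(b)): bootstrap the target for b from the
        # current approximation, evaluated at the next state where the
        # program gets to move.
        return v_hat(weights, extract_features(successor(b)))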

16 Adjusting the Weights
- Choose the weights wi that best fit the training examples
- Minimize the squared error E between the training values and the values predicted by the current hypothesis:

  E ≡ Σ (Vtrain(b) − V̂(b))², summed over all training examples ⟨b, Vtrain(b)⟩

- LMS weight update rule: for each training example ⟨b, Vtrain(b)⟩
  1. Use the current weights to calculate V̂(b)
  2. Update each weight wi as

     wi ← wi + η (Vtrain(b) − V̂(b)) xi

  where η is a small constant (e.g., 0.1) that moderates the size of the weight update
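A minimal sketch of one LMS step for the linear representation, reusing v_hat from the earlier sketch; the learning rate 0.1 is an arbitrary choice.

    ETA = 0.1  # learning rate η

    def lms_update(weights, features, v_train):
        """wi <- wi + η (Vtrain(b) - V̂(b)) xi, with x0 = 1 for the bias w0."""
        error = v_train - v_hat(weights, features)
        weights[0] += ETA * error                 # bias term: its "feature" is 1
        for i, x in enumerate(features, start=1):
            weights[i] += ETA * error * x
        return weights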

17 The Final Design
- Four modules make up the final design: the Performance System (plays practice games using the learned V̂), the Critic (produces training examples from the game traces), the Generalizer (fits V̂ to the training examples, e.g., with the LMS rule), and the Experiment Generator (proposes the next practice problem)
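A rough, illustrative sketch of one training iteration tying the four modules together, reusing the earlier sketches; play_game is another hypothetical helper, and this loop is only a reading of the design, not Mitchell's literal implementation.

    def train_one_iteration(weights):
        trace = play_game(weights)          # Performance System: one practice game
        for b in trace:                     # Critic: derive a training example per state
            v_train = training_value(b, weights)
            lms_update(weights, extract_features(b), v_train)  # Generalizer
        # An Experiment Generator would now pick the next practice problem.
        return weights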