Symbolic Learning
CSI 4106, Winter 2005

Points:
- Definitions
- Representation in logic
- What is an arch?
- Version spaces
- Candidate elimination
- Learning decision trees
- Explanation-based learning

Definitions

Learning is a change that helps improve future performance (a paraphrase of Herbert Simon's definition).

Broad categories of machine learning methods and systems:
- Symbolic (our focus in this brief presentation)
- Statistical
- Neural / Connectionist
- Genetic / Evolutionary

Definitions (2)

Various characterizations of symbolic learning:
- Learning can be supervised (works on pre-classified data) or unsupervised, as in conceptual clustering (works on raw data).
- Data for supervised learning can be many positive and negative examples, or a theory and a single example.
- The goal of learning can be concept discovery (section 10.6), generalization from examples (inductive learning), a heuristic, a procedure, and so on.

Representation in logic

The AI techniques in symbolic machine learning: search, graph matching, theorem proving.

Knowledge can be represented in logic:
- a list of elementary properties and facts about things;
- examples are conjunctive formulae with constants;
- generalized concepts are formulae with variables.

Representation in logic (2)

Two instances of a concept:
size( i1, small ) ∧ colour( i1, red ) ∧ shape( i1, ball )
size( i2, large ) ∧ colour( i2, red ) ∧ shape( i2, brick )

A generalization of these instances:
size( X, Y ) ∧ colour( X, red ) ∧ shape( X, Z )

Another representation:
obj( small, red, ball )
obj( large, red, brick )

A generalization of these instances:
obj( X, red, Z )
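To make the obj(...) representation concrete, here is a minimal Python sketch (an illustration, not from the slides): a description is a 3-tuple in which the string '?' plays the role of a logical variable, and a concept covers an instance when every position is either a variable or an exact match. The function and variable names are assumptions.

    # Minimal sketch: instances and concepts as 3-tuples (size, colour, shape);
    # '?' plays the role of a logical variable.

    def covers(concept, instance):
        """A concept covers an instance if every position is '?' or equal."""
        return all(c == "?" or c == i for c, i in zip(concept, instance))

    i1 = ("small", "red", "ball")    # obj( small, red, ball )
    i2 = ("large", "red", "brick")   # obj( large, red, brick )
    g  = ("?", "red", "?")           # obj( X, red, Z )

    print(covers(g, i1), covers(g, i2))   # True True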

What is an arch? Positive and negative examples

Learning the concept of an arch (section 10.1).

An arch: [figure]
A hypothesis: an arch has three bricks as parts.

Another example, not an arch: [figure]
Another hypothesis: two bricks support a third brick.

What is an arch? (2)

Another arch: [figure] Two bricks support a pyramid. We can generalize both positive examples if we have a taxonomy of blocks: the supported object is a polygon.

Not an arch: [figure] We have to specialize the hypothesis: two bricks that do not touch support a polygon.

Version spaces

This is a method of learning a concept from positive and negative examples. In the first stage, we have to select features that characterize the concepts, and their values. In our example:
size = {large, small}
colour = {red, white, blue}
shape = {ball, brick, cube}

We will represent a feature "bundle" like this: obj( Size, Colour, Shape )

Version spaces (2)

There are 18 specific objects (2 sizes × 3 colours × 3 shapes) and 30 classes of objects, expressed by variables instead of constants and variously generalized: allowing a variable in each position gives 3 × 4 × 4 = 48 descriptions, of which 48 − 18 = 30 contain at least one variable. They are all arranged into a graph called a version space. Here is part of this space for our example: [figure]

Version spaces (3)

Generalization and specialization in this very simple representation are also simple.

To generalize, replace a constant with a variable:
obj( small, red, ball ) → obj( small, X, ball )
Further generalization requires introducing more (unique) variables:
obj( small, X, ball ) → obj( small, X, Y )

To specialize, replace a variable with a constant (it must come from the set of allowed feature values):
obj( small, X, Y ) → obj( small, blue, Y )
obj( small, blue, Y ) → obj( small, blue, cube )
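A sketch of these two operators on the '?'-tuple representation from the earlier sketch; VALUES and the function names are assumptions, not code from the slides.

    # Illustrative sketch of the two operators. VALUES records the allowed
    # feature values per position (size, colour, shape).

    VALUES = (("large", "small"),
              ("red", "white", "blue"),
              ("ball", "brick", "cube"))

    def generalize(concept, pos):
        """Replace the constant at position `pos` with a variable."""
        return concept[:pos] + ("?",) + concept[pos + 1:]

    def specializations(concept, pos):
        """Replace the variable at position `pos` with each allowed constant."""
        return [concept[:pos] + (v,) + concept[pos + 1:] for v in VALUES[pos]]

    print(generalize(("small", "red", "ball"), 1))     # ('small', '?', 'ball')
    print(specializations(("small", "?", "ball"), 1))  # red, white, blue variants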

Version spaces (4)

Other generalization operators (illustrated with a different representation) include the following.

Drop a conjunct:
size( i1, small ) ∧ colour( i1, red ) ∧ shape( i1, ball ) → colour( i1, red ) ∧ shape( i1, ball )

Add a disjunct:
colour( i2, red ) ∧ shape( i2, ball ) → (colour( i2, red ) ∨ colour( i2, blue )) ∧ shape( i2, ball )

Use a taxonomy (assuming that you have it!). Suppose we have a hierarchy of colours where "red" is a subclass of "primaryColour". We can generalize:
colour( i3, red ) → colour( i3, primaryColour )
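Two of these operators (drop a conjunct, climb the taxonomy) are easy to sketch if a conjunctive description is represented as a set of (attribute, value) literals; adding a disjunct needs a richer representation and is omitted. The set representation, the toy TAXONOMY, and all names here are assumptions for illustration.

    # Sketch of two generalization operators on a conjunction represented
    # as a frozenset of (attribute, value) literals.

    TAXONOMY = {"red": "primaryColour", "blue": "primaryColour"}

    def drop_conjunct(description, literal):
        """Generalize by removing one literal from the conjunction."""
        return description - {literal}

    def climb_taxonomy(description, literal):
        """Generalize by replacing a value with its parent in the taxonomy."""
        attr, value = literal
        return (description - {literal}) | {(attr, TAXONOMY[value])}

    d = frozenset({("size", "small"), ("colour", "red"), ("shape", "ball")})
    print(drop_conjunct(d, ("size", "small")))   # colour=red ∧ shape=ball
    print(climb_taxonomy(d, ("colour", "red")))  # colour=primaryColour ∧ ...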

Version spaces (5)

The candidate elimination algorithm

There are three variants: general-to-specific, specific-to-general, and a combination of both directions. All three methods work with sets of hypotheses (classes of concepts). We consider, one by one, a series of examples, both positive and negative. In the specific-to-general method, the set S is the (evolving) target concept and the set N stores negative examples.

Version spaces (6)

Initialize the concept set S to the first positive example. Initialize the concept set N to Ø. Then repeat:

For a positive example p:
- Replace every s ∈ S that does not match p with the minimal (most specific) generalization that matches p.
- Remove any s ∈ S more general than another s' ∈ S.
- Remove any s ∈ S that matches some n ∈ N.

For a negative example n:
- Remove any s ∈ S that matches n.
- Add n to N for future use.
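A runnable sketch of this specific-to-general method, transcribing the steps above onto the '?'-tuple representation used earlier; all function names are assumptions.

    def matches(h, ex):
        """A hypothesis matches an example if every position is '?' or equal."""
        return all(hv == "?" or hv == ev for hv, ev in zip(h, ex))

    def more_general(h1, h2):
        """True if h1 is at least as general as h2."""
        return all(a == "?" or a == b for a, b in zip(h1, h2))

    def min_generalization(s, p):
        """Minimal (most specific) generalization of s that matches p."""
        return tuple(sv if sv == pv else "?" for sv, pv in zip(s, p))

    def specific_to_general(examples):
        """examples: list of (object, is_positive); the first must be positive."""
        (first, _), rest = examples[0], examples[1:]
        S, N = {first}, []
        for ex, positive in rest:
            if positive:
                S = {s if matches(s, ex) else min_generalization(s, ex) for s in S}
                S = {s for s in S
                     if not any(more_general(s, s2) and s != s2 for s2 in S)}
                S = {s for s in S if not any(matches(s, n) for n in N)}
            else:
                S = {s for s in S if not matches(s, ex)}
                N.append(ex)
        return S

Running it on the example of the next slide gives the expected result:

    print(specific_to_general([
        (("small", "white", "ball"), True),
        (("small", "red", "ball"), True),
    ]))   # {('small', '?', 'ball')}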

Version spaces (7)

Example: obj(small, X, ball) minimally generalizes obj(small, white, ball) and obj(small, red, ball).

[figure: the concept of a ball]

Version spaces (8)

Now, general-to-specific. Initialize the concept set G to the most general concept. Initialize the concept set P to Ø. Then repeat:

For a negative example n:
- Replace every g ∈ G that matches n with the minimal (most general) specializations that do not match n.
- Remove any g ∈ G more specific than another g' ∈ G.
- Remove any g ∈ G that does not match some p ∈ P.

For a positive example p:
- Remove any g ∈ G that does not match p.
- Add p to P for future use.
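A runnable sketch of the general-to-specific method under the same '?'-tuple representation; VALUES lists the allowed feature values per position, and all names are assumptions.

    VALUES = (("large", "small"),
              ("red", "white", "blue"),
              ("ball", "brick", "cube"))

    def matches(h, ex):
        return all(hv == "?" or hv == ev for hv, ev in zip(h, ex))

    def more_general(h1, h2):
        return all(a == "?" or a == b for a, b in zip(h1, h2))

    def min_specializations(g, n):
        """Most general specializations of g that no longer match n."""
        return [g[:i] + (v,) + g[i + 1:]
                for i, gv in enumerate(g) if gv == "?"
                for v in VALUES[i] if v != n[i]]

    def general_to_specific(examples):
        G, P = {("?", "?", "?")}, []
        for ex, positive in examples:
            if positive:
                G = {g for g in G if matches(g, ex)}
                P.append(ex)
            else:
                G = {h for g in G
                     for h in (min_specializations(g, ex) if matches(g, ex) else [g])}
                G = {g for g in G
                     if not any(more_general(g2, g) and g != g2 for g2 in G)}
                G = {g for g in G if all(matches(g, p) for p in P)}
        return G

A single negative example obj(small, red, cube) specializes obj(X, Y, Z) into obj(large, Y, Z), obj(X, white, Z), obj(X, blue, Z), obj(X, Y, ball), and obj(X, Y, brick), matching the example on the next slide:

    print(general_to_specific([(("small", "red", "cube"), False)]))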

Version spaces (9)

Example: obj(large, Y, Z), obj(X, Y, ball), ... minimally specialize obj(X, Y, Z).

[figure: the concept of a ball]

Version spaces (10)

Initialize G to the most general concept, S to the first positive example.

For a positive example p:
- Remove any g ∈ G that does not match p.
- Replace every s ∈ S that does not match p with the most specific generalization that matches p.
- Remove any s ∈ S more general than another s' ∈ S.
- Remove any s ∈ S more general than some g ∈ G.

For a negative example n:
- Remove any s ∈ S that matches n.
- Replace every g ∈ G that matches n with the most general specializations that do not match n.
- Remove any g ∈ G more specific than another g' ∈ G.
- Remove any g ∈ G more specific than some s ∈ S.

If G = S = {c}, the learning of c succeeds. If G = S = Ø, learning fails.
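A runnable sketch of the combined algorithm, transcribing the steps above literally (including the simplified boundary pruning as stated on this slide) onto the '?'-tuple representation; VALUES and all names are assumptions.

    VALUES = (("large", "small"),
              ("red", "white", "blue"),
              ("ball", "brick", "cube"))

    def matches(h, ex):
        return all(hv == "?" or hv == ev for hv, ev in zip(h, ex))

    def more_general(h1, h2):
        """True if h1 is at least as general as h2."""
        return all(a == "?" or a == b for a, b in zip(h1, h2))

    def min_generalization(s, p):
        return tuple(sv if sv == pv else "?" for sv, pv in zip(s, p))

    def min_specializations(g, n):
        return [g[:i] + (v,) + g[i + 1:]
                for i, gv in enumerate(g) if gv == "?"
                for v in VALUES[i] if v != n[i]]

    def candidate_elimination(examples):
        """examples: list of (object, is_positive); the first must be positive."""
        (first, _), rest = examples[0], examples[1:]
        G, S = {("?", "?", "?")}, {first}
        for ex, positive in rest:
            if positive:
                G = {g for g in G if matches(g, ex)}
                S = {s if matches(s, ex) else min_generalization(s, ex) for s in S}
                S = {s for s in S
                     if not any(more_general(s, s2) and s != s2 for s2 in S)
                     and not any(more_general(s, g) and s != g for g in G)}
            else:
                S = {s for s in S if not matches(s, ex)}
                G = {h for g in G
                     for h in (min_specializations(g, ex) if matches(g, ex) else [g])}
                G = {g for g in G
                     if not any(more_general(g2, g) and g != g2 for g2 in G)
                     and not any(more_general(s, g) and s != g for s in S)}
        return G, S

After a few examples, S has converged while G still holds several candidates; more examples are needed before G = S:

    G, S = candidate_elimination([
        (("small", "red", "ball"), True),
        (("small", "red", "cube"), False),
        (("large", "red", "ball"), True),
    ])
    print(S)   # {('?', 'red', 'ball')}
    print(G)   # {('large', '?', '?'), ('?', '?', 'ball')}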

Version spaces (11)

[figure: the concept of a red ball]