LEARNING FROM OBSERVATIONS Yılmaz KILIÇASLAN

Definition Learning takes place as the agent observes its interactions with the world and its own decision-making processes. Learning can range from trivial memorization of experience to the creation of entire scientific theories.

Forms of Learning A learning agent can be thought of as containing:
1. a performance element that decides what actions to take, and
2. a learning element that modifies the performance element so that it makes better decisions.
The design of a learning element is affected by three major issues:
1. which components of the performance element are to be learned;
2. what feedback is available to learn these components; and
3. what representation is used for the components.
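As a rough illustration of this split, here is a minimal Python sketch; all class and method names (PerformanceElement, LearningElement, choose_action, update) are hypothetical illustrations, not taken from the slides:

```python
# A minimal sketch of the performance/learning element split.
# All names here are hypothetical, not from the slides.

class PerformanceElement:
    """Decides what actions to take, based on what it currently knows."""
    def __init__(self):
        self.policy = {}                      # state -> preferred action

    def choose_action(self, state, default="wait"):
        return self.policy.get(state, default)

class LearningElement:
    """Modifies the performance element so it makes better decisions."""
    def update(self, performance, state, better_action):
        performance.policy[state] = better_action

# Feedback about a past decision improves future ones.
perf, learner = PerformanceElement(), LearningElement()
print(perf.choose_action("red_light"))        # 'wait' (nothing learned yet)
learner.update(perf, "red_light", "brake")
print(perf.choose_action("red_light"))        # 'brake'
```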

Components of Agents The components of agents include the following:
1. A direct mapping from conditions on the current state to actions.
2. A means to infer relevant properties of the world from the percept sequence.
3. Information about the way the world evolves and about the results of possible actions the agent can take.
4. Utility information indicating the desirability of world states.
5. Action-value information indicating the desirability of actions.
6. Goals that describe classes of states whose achievement maximizes the agent's utility.

Types of Feedback - I The type of feedback available for learning is usually the most important factor in determining the nature of the learning problem. The field of machine learning usually distinguishes three cases: 1. supervised, 2. unsupervised, and 3. reinforcement learning.

Types of Feedback - II The problem of supervised learning involves learning a function from examples of its inputs and outputs. The problem of unsupervised learning involves learning patterns in the input when no specific output values are supplied. Rather than being told what to do by a teacher, a reinforcement learning agent must learn from reinforcement, that is, from occasional rewards or punishments.
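To make the contrast concrete, here is a small, self-contained Python sketch of the three settings; the toy data and the trivial learners are our own illustrations, not part of the original slides:

```python
# Toy illustration of the three feedback types (all names are ours).

# 1. Supervised: learn a function from (input, output) examples.
examples = [(1, 2), (2, 4), (3, 6)]                 # pairs (x, f(x))
slope = sum(y / x for x, y in examples) / len(examples)
print(f"supervised: learned f(x) = {slope:.1f} * x")

# 2. Unsupervised: only inputs; discover structure (here, two clusters).
inputs = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]
low  = [x for x in inputs if x < 3]
high = [x for x in inputs if x >= 3]
print(f"unsupervised: cluster means {sum(low)/len(low):.1f}, "
      f"{sum(high)/len(high):.1f}")

# 3. Reinforcement: no correct answers given, only rewards for actions.
def reward(action):                                 # environment feedback
    return 1.0 if action == "stop" else -1.0
best = max(["go", "stop"], key=reward)
print(f"reinforcement: best action by trial = {best!r}")
```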

Representation The representation of the learned information also plays a very important role in determining how the learning algorithm must work. Some examples:
- Linear weighted polynomials
- Propositional and first-order logical sentences
- Probabilistic descriptions such as Bayesian networks

Linear Weighted Polynomials A weighted linear function can be expressed as:
Eval(s) = w_1 f_1(s) + w_2 f_2(s) + ... + w_n f_n(s)
Figure: Two slightly different chess positions, both with White to move. In (a), Black has an advantage of a knight and two pawns and will win the game. In (b), Black will lose after White captures the queen.
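The evaluation function above can be coded directly. The following sketch uses made-up material-count features for the chess example; the feature functions and weights are our assumptions, not values from the slides:

```python
# A minimal sketch of the weighted linear evaluation function
# Eval(s) = w_1*f_1(s) + ... + w_n*f_n(s), with hypothetical features.

def eval_position(weights, features, state):
    """Weighted linear evaluation: sum of w_i * f_i(state)."""
    return sum(w * f(state) for w, f in zip(weights, features))

# Hypothetical feature functions: material balance from White's view.
material_features = [
    lambda s: s["pawns"],      # f1: pawn balance
    lambda s: s["knights"],    # f2: knight balance
    lambda s: s["queens"],     # f3: queen balance
]
weights = [1.0, 3.0, 9.0]      # classic material weights (our choice)

# Black up a knight and two pawns (negative = advantage for Black):
position = {"pawns": -2, "knights": -1, "queens": 0}
print(eval_position(weights, material_features, position))  # -5.0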

Propositional Logic See the slides for Week 2.

First-Order (Predicate) Logic See the slides for Week 3.

Prior Knowledge The majority of learning research in AI, computer science, and psychology has studied the case in which the agent begins with no knowledge at all about what it is trying to learn. Most human learning takes place in the context of a good deal of background knowledge. Some psychologists and linguists claim that even newborn babies exhibit knowledge of the world.

Universal Grammar For example, the theory of Universal Grammar holds that children are born with innate knowledge of the structure of language.

Inductive Learning An example is a pair (x, f(x)), where x is the input and f(x) is the output of the function applied to x. The task of pure inductive inference (or induction) is this: given a collection of examples of f, return a function h that approximates f. The function h is called a hypothesis. The reason that learning is difficult, from a conceptual point of view, is that it is not easy to tell whether any particular h is a good approximation of f. A good hypothesis will generalize well; that is, it will predict unseen examples correctly. This is the fundamental problem of induction.
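A toy sketch of this task, assuming a linear target function and a deliberately simple learner (both our own choices for illustration):

```python
# Sketch of pure inductive inference: given examples (x, f(x)) of an
# unknown f, return a hypothesis h and check how well it generalizes.

def f(x):                      # the "unknown" target function
    return 3 * x + 1

train = [(x, f(x)) for x in range(5)]          # observed examples
test  = [(x, f(x)) for x in range(10, 15)]     # unseen examples

def induce_line(examples):
    """Fit h(x) = a*x + b exactly from two examples (toy induction)."""
    (x0, y0), (x1, y1) = examples[0], examples[-1]
    a = (y1 - y0) / (x1 - x0)
    b = y0 - a * x0
    return lambda x: a * x + b

h = induce_line(train)
errors = [abs(h(x) - y) for x, y in test]
print(f"max error on unseen examples: {max(errors)}")  # 0.0: h generalizes
```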

Hypothesis Space - I Fig. 1. (a) Example (x, f(x)) pairs and a consistent, linear hypothesis. (b) A consistent, degree-7 polynomial hypothesis for the same data set. (c) A different data set that admits an exact degree-6 polynomial fit or an approximate linear fit. (d) A simple, exact sinusoidal fit to the same data set.

Hypothesis Space - II For nondeterministic functions, there is an inevitable tradeoff between the complexity of the hypothesis and the degree of fit to the data. We say that a learning problem is realizable if the hypothesis space contains the true function; otherwise, it is unrealizable. Unfortunately, we cannot always tell whether a given learning problem is realizable, because the true function is not known. One way to get around this barrier is to use prior knowledge to derive a hypothesis space in which we know the true function must lie.
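The realizability issue can be seen in a small experiment: if the hypothesis space contains only lines but the true function is quadratic, some error remains no matter how many examples we observe. A sketch (assuming NumPy; the quadratic target is our choice):

```python
# Sketch of an unrealizable problem: the hypothesis space H is the set
# of lines, the true function is quadratic, so no h in H matches f.
import numpy as np

def f(x):
    return x ** 2                       # true function: not in H

for n in (10, 100, 1000):               # more and more examples
    x = np.linspace(-1, 1, n)
    a, b = np.polyfit(x, f(x), deg=1)   # best line in hypothesis space
    err = np.mean((a * x + b - f(x)) ** 2)
    print(f"n={n:4d}  mean squared error = {err:.3f}")
# The error plateaus (about 0.089) instead of going to zero:
# the problem is unrealizable in this hypothesis space.
```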

Hypothesis Space - III Another approach is to use the largest possible hypothesis space. The problem with this idea is that it does not take into account the computational complexity of learning: There is a tradeoff between the expressiveness of a hypothesis space and the complexity of finding simple, consistent hypotheses within that space.

Hypothesis Space - IV An expressive language makes it possible for a simple theory to fit the data, whereas restricting the expressiveness of the language means that any consistent theory must be very complex. For example, the rules of chess can be written in a page or two of first-order logic, but require thousands of pages when written in propositional logic. In such cases, it should be possible to learn much faster by using the more expressive language.

Underfitting versus Overfitting Fig. 2. Toy example of binary classification, solved using three models.
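Since the original figure is not reproduced here, the same tradeoff can be shown with a small experiment. This sketch substitutes polynomial regression for the figure's classification example (our substitution, assuming NumPy):

```python
# Underfitting vs. overfitting: a degree-1 model is too simple,
# a degree-9 model memorizes the noise, degree 3 balances the two.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 10)
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                 # noise-free truth

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.3f}, "
          f"test MSE {test_err:.3f}")
# Typical pattern: degree 1 has high train AND test error (underfit);
# degree 9 drives train error near zero but test error up (overfit).
```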

Summary Learning takes place as the agent observes its interactions with the world and its own decision-making processes. Various components of the performance element, represented in various ways, can be learned on the basis of supervised, unsupervised, or reinforcement feedback. The task of pure inductive inference is to find a hypothesis function that approximates the target function from examples. A good hypothesis will generalize well; finding one is the fundamental problem of induction. There is an inevitable tradeoff between the complexity of the hypothesis and the degree of fit to the data.