15-859(B) Machine Learning: Learning finite state environments. Avrim Blum, 03/29/07.

Presentation transcript:

Consider the following setting. Say we are a baby trying to figure out the effects our actions have on our environment...
– Perform actions
– Get observations
– Try to make an internal model of what is happening.

A model: learning a finite state environment. Let's model the world as a DFA. We perform actions, we get observations. Our actions can also change the state of the world. The # of states is finite.

Another way to put it: we have a box with buttons and lights. We can press the buttons and observe the lights. lights = f(current state); next state = g(button, current state). Goal: learn a predictive model of the device.
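
To make the setup concrete, here is a minimal sketch in Python of such a button-and-lights box; the class and method names (DFAEnvironment, press, observe, reset) are illustrative choices, not anything from the lecture:

```python
# A minimal sketch of the button-and-lights box modeled as a DFA.
# The class and method names here are illustrative, not from the lecture.

class DFAEnvironment:
    def __init__(self, start_state, transition, lights):
        self.start_state = start_state   # where RESET puts us
        self.transition = transition     # g: (button, current state) -> next state
        self.lights = lights             # f: state -> observation
        self.state = start_state

    def reset(self):
        """Press the RESET button: go back to the start state."""
        self.state = self.start_state

    def press(self, button):
        """Press a button: the world changes state and we see the lights."""
        self.state = self.transition[(button, self.state)]
        return self.lights[self.state]

    def observe(self):
        """Read the lights without pressing anything."""
        return self.lights[self.state]


# Example: a 2-state device whose light is on exactly in state 1.
env = DFAEnvironment(
    start_state=0,
    transition={('a', 0): 1, ('b', 0): 0, ('a', 1): 1, ('b', 1): 0},
    lights={0: False, 1: True},
)
print(env.press('a'))   # True: pressing 'a' moves to state 1 and the light turns on
```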

Learning a DFA. In the language of our standard models, we are asking whether we can learn a DFA from Membership Queries.
– Issue of whether we have counterexamples (Equivalence Queries) or not. [for the moment, assume not]
– Also an issue of whether or not we have a reset button. [for today, assume yes]

Learning DFAs. This seems really hard: we can't tell for sure when the world state has changed. Let's look at an easier problem first: state = observation.

An example w/o hidden state. 2 actions: a, b. Generic algorithm for lights = state: build a model; while not done, find an unexplored edge and take it (sketched below). Now, let's try the harder problem!
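
Here is a rough sketch of that easy-case exploration procedure, assuming the observation uniquely identifies the state (lights = state) and a RESET button, and using the hypothetical DFAEnvironment interface above:

```python
def explore(env, actions):
    """Learn the transition structure when the observation IS the state.

    Keeps, for each discovered state, an action sequence known to reach it
    from the start; repeatedly replays such a sequence and takes one
    unexplored edge.
    """
    env.reset()
    start = env.observe()
    path_to = {start: []}               # state -> actions that reach it from start
    model = {}                          # (state, action) -> next state
    frontier = [(start, a) for a in actions]

    while frontier:
        state, action = frontier.pop()
        if (state, action) in model:
            continue
        env.reset()
        for a in path_to[state]:        # get back to `state`...
            env.press(a)
        next_state = env.press(action)  # ...then take the unexplored edge
        model[(state, action)] = next_state
        if next_state not in path_to:   # found a brand-new state
            path_to[next_state] = path_to[state] + [action]
            frontier.extend((next_state, a) for a in actions)
    return model
```

With no hidden state, each edge costs one reset-and-replay walk, so the whole model is built with about (# states) x (# actions) walks.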

Some examples: Example #1 (3 states), Example #2 (3 states).

Can we design a procedure to do this in general? One problem: what if we always see the same thing? How do we know there isn't something else out there? [Figure: our model is a single state with self-loops on a and b; the real world is a long chain of states that only behaves differently after one particular sequence of presses.] This is called a "combination-lock automaton".

Can we design a procedure to do this in general? [Same combination-lock figure as above.] A combination-lock automaton is basically simulating a conjunction. This means we can't hope to efficiently come up with an exact model of the world from just our own experimentation (i.e., MQs only).
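
For concreteness, one way such a combination-lock automaton might be built (the helper name and the "any wrong press resets the lock" convention are choices made for this sketch, not taken from the lecture):

```python
def combination_lock(code, actions=('a', 'b')):
    """Build a combination-lock DFA over the given actions.

    States 0..n track how much of `code` has been entered so far; the light
    is on only in state n. A correct press advances, any wrong press (and,
    in this sketch, any press once the lock is open) goes back to state 0.
    """
    n = len(code)
    transition, lights = {}, {}
    for i in range(n + 1):
        lights[i] = (i == n)                 # light on only after the full code
        for a in actions:
            if i < n and a == code[i]:
                transition[(a, i)] = i + 1   # right symbol: make progress
            else:
                transition[(a, i)] = 0       # wrong symbol: back to the start
    return transition, lights


# env = DFAEnvironment(0, *combination_lock('aababb'))  # looks dead until 'aababb' is pressed
```

Over {a, b}, only one of the 2^n length-n press sequences ever turns the light on, so from membership queries alone the device is indistinguishable from a one-state "always off" model until we stumble on that exact sequence.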

How to get around this? Assume we can propose a model and get a counterexample (MQ+EQ). Equivalently, the goal is to be predictive: any time we make a mistake, we think and perform experiments (MQ+MB). The goal is not to have to do this too many times. For our algorithm, the total # of mistakes will be at most the # of states.
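
As a crude stand-in for "propose a model and field-test it", here is a hypothetical tester that compares a conjectured model against the real device on random action sequences and returns the first disagreement as a counterexample; by the combination-lock argument above, random probing is of course weaker than a true equivalence query and may miss counterexamples:

```python
import random

def field_test(env, transitions, outputs, actions, start_state, trials=1000, max_len=20):
    """Run random action sequences on the device and on the conjectured model
    (given as transition and output dictionaries); return the prefix where the
    two first disagree, or None if the model survives every trial."""
    for _ in range(trials):
        seq = [random.choice(actions) for _ in range(random.randint(1, max_len))]
        env.reset()
        state = start_state
        for i, a in enumerate(seq):
            real_obs = env.press(a)
            state = transitions[(state, a)]
            if outputs[state] != real_obs:
                return seq[:i + 1]      # model and world disagree after this prefix
    return None
```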

Algorithm by Dana Angluin (with extensions by Rivest & Schapire). To simplify things, let's assume we have a RESET button. [Back to the basic DFA problem.] Next time, we'll see how to get rid of that.

The problem (recap). We have a DFA [figure: a small example DFA with the start state marked and transitions labeled a and b]:
– observation = f(current state)
– next state = g(button, prev state)
We can feed in a sequence of actions and get observations; then it resets to the start. We can also propose/field-test a model and get a counterexample.

Key Idea. Represent the DFA using a state/experiment table. [Figure: the example DFA alongside a table whose rows are the candidate states (access strings such as λ, a, b) plus their one-action extensions (the transitions, e.g. aa, ab, ba, bb), and whose columns are experiments; each entry is the observation seen after executing the row's string followed by the column's experiment.]

Key Idea (cont.). Same state/experiment table as above. The guarantee will be: either this model is correct, or else the world has > n states. In that case, we need a way of using counterexamples to add a new state to the model.
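
A sketch of how that state/experiment table might be maintained in code, using the hypothetical DFAEnvironment interface from earlier; rows are access strings (tuples of actions), columns are experiments (also tuples), and each entry costs one membership query. The function names are mine:

```python
def row(env, access_string, experiments):
    """One row of the table: the observation after running the access string
    followed by each experiment, starting from RESET (one query per entry)."""
    observations = []
    for e in experiments:
        env.reset()
        for a in access_string + e:
            env.press(a)
        observations.append(env.observe())
    return tuple(observations)


def is_closed(env, states, experiments, actions):
    """Check that every one-action extension of a known state looks like some
    known state under the current experiments; if not, return the extension
    that should be promoted to a brand-new state."""
    known_rows = {row(env, s, experiments) for s in states}
    for s in states:
        for a in actions:
            extension = s + (a,)
            if row(env, extension, experiments) not in known_rows:
                return extension         # behaves like none of our states
    return None
```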

The algorithm. We'll do it by example... [Figure: a small example DFA with the start state marked and transitions labeled a and b.]

Algorithm (formally):
1. Fill in the state/experiment table using membership queries, starting from the single state λ and the empty experiment.
2. If some transition row matches no state row, promote it to a new state and go to 1.
3. Otherwise, conjecture the model read off from the table and field-test it. If we get a counterexample, use it to find an experiment that splits one of our current states into two, add the resulting new state, and go to 1.
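
Putting the pieces together, a hedged sketch of the overall loop built on the helpers above. The counterexample handling shown here (add every suffix of the counterexample as a new experiment, which forces at least one transition row to split off as a new state) is one standard variant; Rivest & Schapire's refinement instead finds a single such experiment by binary search:

```python
def learn_dfa(env, actions, get_counterexample):
    """Angluin-style learning with membership queries, a RESET button, and an
    oracle get_counterexample(transitions, outputs) that returns an action
    sequence the model mispredicts, or None if the model is good enough."""
    states = {()}            # access strings; () reaches the start state
    experiments = [()]       # distinguishing suffixes; () means "just look at the lights"

    while True:
        # Steps 1-2: close the table, promoting unexplained extensions to new states.
        while True:
            new_state = is_closed(env, states, experiments, actions)
            if new_state is None:
                break
            states.add(new_state)

        # Read the conjectured model off the closed table.
        by_row = {row(env, s, experiments): s for s in states}
        transitions = {(s, a): by_row[row(env, s + (a,), experiments)]
                       for s in states for a in actions}
        outputs = {s: row(env, s, [()])[0] for s in states}

        # Step 3: field-test the conjecture.
        counterexample = get_counterexample(transitions, outputs)
        if counterexample is None:
            return transitions, outputs
        for i in range(len(counterexample)):        # add every suffix as an experiment
            suffix = tuple(counterexample[i:])
            if suffix not in experiments:
                experiments.append(suffix)
```

Any counterexample oracle with that shape will do; for instance, the random field_test above could be wrapped as lambda t, o: field_test(env, t, o, ['a', 'b'], start_state=()), keeping in mind that random probing cannot substitute for a true equivalence query on a combination lock.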