Characterization of state merging strategies which ensure identification in the limit from complete data
Cristina Bibire

History • Motivation • Preliminaries • RPNI • Further Research • Bibliography

History

In the second half of the 1960s, Gold was the first to formalize the process of learning formal languages. Motivated by observing children's learning, he proposed the idea that learning is an infinite process of guessing grammars: it does not terminate in finitely many steps, but is only able to converge to a correct grammar in the limit. Gold's algorithm for learning regular languages from both positive and negative examples finds the correct automaton whenever a characteristic sample is included in the data. The problem of learning the minimum-state DFA consistent with a given sample has been actively studied for over two decades. Many algorithms have been developed: RPNI (Regular Positive and Negative Inference), ALERGIA, MDI (Minimum Divergence Inference), DDSM (Data Driven State Merging) and many others. Even though there is no guarantee of identification from the available data, the existence of the associated characteristic sets means that these algorithms converge towards the correct solution.

Motivation

Given two sets of strings, how can we decide whether or not they contain a characteristic sample for a given algorithm? How do we decide which algorithm to apply? How many consistent DFAs can we find? Which is the best search strategy: exhaustive search, beam search, greedy search, etc.? The importance of learning regular languages (or, equivalently, identifying the corresponding DFA) is justified by the fact that algorithms treating the inference problem for DFA can be nicely adapted to larger classes of grammars, for instance: even linear grammars (Takada 88 & 94; Sempere & Garcia 94; Makinen 96), subsequential functions (Oncina, Garcia & Vidal 93), tree automata (Knuutila) or context-free grammars from skeletons (Sakakibara 90). The problem of exactly learning the target DFA from an arbitrary set of labeled examples and the problem of approximating the target DFA from labeled examples are both known to be hard. Thus the question of whether DFA are efficiently learnable under some restricted, but fairly general and practically useful, classes of distributions is clearly of interest.

Preliminaries

We will assume that the target DFA A being learned is a canonical DFA. Let S+ and S− denote the sets of positive and negative examples of A, respectively. A is consistent with a sample (S+, S−) if it accepts all positive examples and rejects all negative examples. A set S+ is said to be structurally complete with respect to a DFA A if it covers each transition of A and uses each final state of A. Given a set S+, let PTA(S+) denote the prefix tree automaton for S+. PTA(S+) is a DFA that contains a path from the start state to an accepting state for each string in S+, modulo common prefixes. Ex: [figure: the prefix tree automaton of a small sample, with start state λ]. The states of the PTA(S+) are labeled based on the standard order of the set Pr(S+).
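
A minimal sketch of the PTA construction in Python (the dict-based automaton representation and the helper name build_pta are my own, not from the slides); states are identified with the prefixes themselves, so the standard-order labeling mentioned above comes for free:

def build_pta(positive):
    """Build the prefix tree acceptor PTA(S+) for a set of positive strings.
    States are the prefixes of S+ (the start state is the empty string,
    i.e. lambda), so state labels follow the standard order of Pr(S+)."""
    prefixes = {w[:i] for w in positive for i in range(len(w) + 1)}
    states = sorted(prefixes, key=lambda p: (len(p), p))   # standard order
    delta = {(p[:-1], p[-1]): p for p in states if p}      # parent -> child edges
    return states, delta, '', set(positive)                # (Q, delta, q0, F)

# Example: PTA for S+ = {'10', '010'} has six states, labeled in standard order.
states, delta, q0, final = build_pta({'10', '010'})
print(states)   # ['', '0', '1', '01', '10', '010']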

Preliminaries

Given a DFA A = (Q, Σ, δ, q0, F), π = {B1, B2, …, Bk} is a partition of Q iff:
1. each Bi is nonempty,
2. Bi ∩ Bj = ∅ for all i ≠ j,
3. B1 ∪ B2 ∪ … ∪ Bk = Q.
Ex: [figure: a DFA with states p, q, r over the alphabet {0, 1}; its five partitions π1, …, π5; and the lattice of partitions, in which there is an edge from πi down to πj iff πi covers πj, i.e. πj ≤ πi]
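
The three conditions translate directly into code; a small sketch with a hypothetical helper is_partition:

def is_partition(blocks, Q):
    """Check conditions 1-3: nonempty blocks, pairwise disjoint, covering Q."""
    return (all(len(B) > 0 for B in blocks)               # 1. each B_i is nonempty
            and sum(len(B) for B in blocks) == len(Q)     # 2. blocks are pairwise disjoint
            and set().union(*blocks) == set(Q))           # 3. the blocks cover Q

Q = {'p', 'q', 'r'}
print(is_partition([{'p', 'q'}, {'r'}], Q))        # True
print(is_partition([{'p'}, {'p', 'q', 'r'}], Q))   # False: the blocks overlap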

Preliminaries

Given a DFA A and a partition π on the set of states Q of A, we define the quotient automaton A_π as the automaton obtained by merging the states of A that belong to the same block of the partition π. Note that a quotient automaton of a DFA might be an NFA and, vice versa, a quotient of an NFA might be a DFA. Ex: [figure: a DFA M with states p, q, r over {0, 1}; a structurally complete set for M; and the quotient automaton obtained by merging p and r into the block {p, r}, with q in a block of its own]
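
A sketch of the quotient construction under the same dict representation (names are my own); since two states in one block can carry different transitions on the same symbol, the result maps each (block, symbol) pair to a set of blocks, i.e. it may be non-deterministic:

def quotient(delta, q0, final, partition):
    """A_pi: merge the states that belong to the same block of pi."""
    block_of = {q: frozenset(B) for B in partition for q in B}
    new_delta = {}
    for (q, a), q2 in delta.items():
        new_delta.setdefault((block_of[q], a), set()).add(block_of[q2])
    return new_delta, block_of[q0], {block_of[q] for q in final}

def is_dfa(new_delta):
    """The quotient of a DFA need not be deterministic."""
    return all(len(targets) == 1 for targets in new_delta.values())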

Preliminaries

Search space comprising the π-quotient automata of A: [figure: the quotient automata of the three-state DFA with states p, q, r, one for each partition: the DFA itself, {p,q}/{r}, {q,r}/{p}, {p,r}/{q}, and the single-state automaton {p,q,r} looping on 0 and 1]
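
The whole search space can be enumerated by generating every partition of Q and feeding each one to the quotient construction above; a sketch (the recursive generator set_partitions is my own):

def set_partitions(elems):
    """Yield every partition of a list of elements (Bell-number many)."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):   # put `first` into an existing block ...
            yield smaller[:i] + [block | {first}] + smaller[i + 1:]
        yield [{first}] + smaller             # ... or into a block of its own

for pi in set_partitions(['p', 'q', 'r']):    # the 5 partitions of {p, q, r}
    print(pi)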

Preliminaries

The set of all derived automata obtained by systematically merging the states of A forms a lattice of finite state automata. Given a canonical DFA M and a set S+ that is structurally complete with respect to M, the lattice derived from PTA(S+) is guaranteed to contain M (Pao & Carr, 1978; Parekh & Honavar, 1993; Dupont et al., 1994).
Pr(α) – the set of prefixes of α
Pr(L) – the set of prefixes of L
T_α(L) = {γ : αγ ∈ L} – the set of tails of α in L
The standard order of strings over the alphabet Σ is denoted by <: shorter strings come first, strings of equal length are ordered lexicographically. The standard enumeration of strings over Σ = {a, b} is λ, a, b, aa, ab, ba, bb, …
Sp(L) – the short prefixes of L: for each state of the canonical DFA, the first string in standard order that reaches it
N(L) = {λ} ∪ {αa : α ∈ Sp(L), a ∈ Σ, αa ∈ Pr(L)} – the kernel of L
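
Sp(L) and N(L) can be read off the canonical DFA directly: a breadth-first walk that visits children in alphabetical order discovers each state through its first string in standard order. A sketch, assuming the DFA is trim (every state reaches an accepting state), so that a transition exists iff it extends a prefix of L:

from collections import deque

def short_prefixes_and_kernel(delta, q0, alphabet):
    """Compute Sp(L) and N(L) from a trim canonical DFA (dict representation)."""
    sp = {q0: ''}                        # state -> first string reaching it
    queue = deque([q0])                  # BFS visits states in standard order
    while queue:
        q = queue.popleft()
        for a in sorted(alphabet):
            q2 = delta.get((q, a))
            if q2 is not None and q2 not in sp:
                sp[q2] = sp[q] + a
                queue.append(q2)
    Sp = set(sp.values())
    N = {''} | {sp[q] + a for q in sp for a in sorted(alphabet)
                if (q, a) in delta}      # ua in Pr(L) iff the transition exists
    return Sp, N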

Preliminaries

Definition: A sample (S+, S−) is said to be characteristic with respect to a regular language L (with the canonical DFA A) if it satisfies the following two conditions:
1. for every α ∈ N(L): if α ∈ L then α ∈ S+, otherwise there exists γ such that αγ ∈ S+;
2. for every α ∈ Sp(L) and every β ∈ N(L): if T_α(L) ≠ T_β(L), then there exists γ such that either αγ ∈ S+ and βγ ∈ S−, or αγ ∈ S− and βγ ∈ S+.
Intuitively, condition 1 implies structural completeness with respect to A, and condition 2 implies that for any two distinct states of A there is a suffix γ that correctly distinguishes them. Notice that:
- if you add more strings to a characteristic sample, it is still characteristic;
- there can be many different characteristic samples.
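
Condition 2 is easy to test for a concrete pair of prefixes: some suffix γ must send α and β to opposite labels. A minimal sketch (the helper name is mine; s_plus and s_minus are sets of strings):

def distinguishing_suffix(alpha, beta, s_plus, s_minus):
    """Return some gamma with alpha+gamma and beta+gamma labeled oppositely,
    or None if the sample does not distinguish alpha from beta."""
    suffixes = {w[i:] for w in s_plus | s_minus for i in range(len(w) + 1)}
    for gamma in sorted(suffixes, key=lambda s: (len(s), s)):
        if ((alpha + gamma in s_plus and beta + gamma in s_minus) or
                (alpha + gamma in s_minus and beta + gamma in s_plus)):
            return gamma
    return None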

RPNI

The Regular Positive and Negative Inference (RPNI) algorithm [Oncina & Garcia, 1992] is a polynomial-time algorithm for identifying a DFA consistent with a given sample. It can be shown that, given a characteristic sample for the target DFA, the algorithm is guaranteed to return a canonical representation of the target DFA [Oncina & Garcia, 1992; Dupont, 1996].
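
A compact sketch of the algorithm under the representation used in the earlier sketches. This is my own condensed rendering, not the authors' pseudocode; it reuses build_pta from the Preliminaries sketch, and all helper names are mine:

def rpni(s_plus, s_minus, alphabet=('0', '1')):
    """RPNI sketch: the states of PTA(S+) are considered in standard order;
    each is merged with the first promoted block that keeps the hypothesis
    consistent with S-, and is promoted itself if no such merge exists."""
    states, delta, q0, final = build_pta(s_plus)

    def try_merge(part, q1, q2):
        """Merge the blocks of q1 and q2, folding recursively so that the
        quotient automaton stays deterministic."""
        part = dict(part)
        pending = [(q1, q2)]
        while pending:
            a, b = pending.pop()
            if part[a] == part[b]:
                continue
            keep, drop = sorted((part[a], part[b]), key=lambda s: (len(s), s))
            for s in part:
                if part[s] == drop:
                    part[s] = keep
            for sym in alphabet:      # two targets on one symbol must merge too
                targets = [delta[(s, sym)] for s in states
                           if part[s] == keep and (s, sym) in delta]
                pending += [(targets[0], t) for t in targets[1:]]
        return part

    def accepts(part, w):
        block = part[q0]
        for sym in w:
            nxt = {part[delta[(s, sym)]] for s in states
                   if part[s] == block and (s, sym) in delta}
            if not nxt:
                return False
            block = nxt.pop()         # unique: the quotient is deterministic
        return block in {part[f] for f in final}

    part = {s: s for s in states}     # start from the PTA itself
    promoted = [q0]
    for q in states[1:]:              # standard order
        if part[q] != q:
            continue                  # q was already merged into an earlier block
        for r in promoted:
            candidate = try_merge(part, r, q)
            if not any(accepts(candidate, w) for w in s_minus):
                part = candidate
                break
        else:
            promoted.append(q)        # no consistent merge: promote q
    dfa = {(part[s], a): part[t] for (s, a), t in delta.items()}
    return dfa, part[q0], {part[f] for f in final}

# Hypothetical tiny sample; with a characteristic sample the result is canonical.
dfa, start, finals = rpni({'10', '010', '101'}, {'', '0', '1', '11'})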

RPNI

Ex: Suppose our language L is the set of all words which, read as binary numbers, are congruent to 2 (mod 3). A canonical automaton for this language is: [figure: a three-state DFA, one state per residue class mod 3, with start state λ]. It can be easily verified that the sample (S+, S−) used in the run below is a characteristic sample.
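
Reading strings as binary numbers is my interpretation of "congruent with 2 (mod 3)", consistent with the recoverable examples in the run below. Under that reading the canonical automaton simply tracks the value mod 3; a sketch:

# Canonical DFA for L = {w in {0,1}* : the value of w in binary is 2 (mod 3)}.
# State i means "the bits read so far are congruent to i (mod 3)".
delta = {(i, b): (2 * i + int(b)) % 3 for i in range(3) for b in '01'}
q0, final = 0, {2}

def accepts(w):
    q = q0
    for b in w:
        q = delta[(q, b)]
    return q in final

print([w for w in ['10', '010', '101', '11', '0'] if accepts(w)])
# ['10', '010', '101']   (10 = 2, 010 = 2, 101 = 5, all congruent to 2 mod 3)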

RPNI

[figures: a step-by-step run of RPNI on the characteristic sample, one slide per step; each slide shows the current hypothesis automaton together with the set K of already promoted state labels and the frontier Fr of states still to be examined; merges that make the hypothesis accept a negative example are rejected and all others are kept, until the canonical three-state automaton is reached]

RPNI

The convergence of the RPNI algorithm relies on the fact that, sooner or later, the set of labeled examples seen by the learner will include a characteristic set. If the stream of examples provided to the learner is drawn according to a simple distribution, the characteristic set will become available relatively early (during learning) with sufficiently high probability, and hence the algorithm will converge quickly to the desired target. RPNI is an optimistic algorithm: at each step two states are compared, and the question is: can they be merged? No positive evidence for a merge can be produced; a merge takes place whenever it does not produce an inconsistency. Obviously, an early mistake can have disastrous effects, and Lang showed that a breadth-first exploration of the lattice is likely to do better.

Further Research
o The RPNI complexity bound is not tight. Find the exact complexity.
o Are DFAs PAC-identifiable if examples are drawn from the uniform distribution, or from some other known simple distribution?
o The study of data-independent algorithms (which do not use the state merging strategy).
o The development of software which would facilitate the merging of states in any given algorithm (any merging strategy).

Bibliography

Colin de la Higuera, José Oncina, Enrique Vidal. “Identification of DFA: Data-Dependent versus Data-Independent Algorithms”. Grammatical Inference: Learning Syntax from Sentences, Lecture Notes in Artificial Intelligence 1147, 1996.
Rajesh Parekh, Vasant Honavar. “Learning DFA from Simple Examples”. Algorithmic Learning Theory, Lecture Notes in Artificial Intelligence 1316, 1997.
Satoshi Kobayashi. Lecture notes for the 3rd International PhD School on Formal Languages and Applications, Tarragona, Spain.
Colin de la Higuera. Lecture notes for the 3rd International PhD School on Formal Languages and Applications, Tarragona, Spain.
Michael J. Kearns, Umesh V. Vazirani. “An Introduction to Computational Learning Theory”.