Learning DFA from corrections
Leonor Becerra-Bonache, Cristina Bibire, Adrian Horia Dediu
Research Group on Mathematical Linguistics, Rovira i Virgili University
Pl. Imperial Tarraco 1, 43005, Tarragona, Spain
Outline
o Learning from queries
o Learning from corrections
o Comparative results
o Concluding remarks
o Further research
o Bibliography
Learning from queries
In the last four decades, three important formal models have been developed within Computational Learning Theory: Gold's model of identification in the limit [4], the query learning model of Angluin [1,2] and the PAC learning model of Valiant [7]. Our paper focuses on learning DFA within the framework of query learning.
Learning from queries was introduced by Dana Angluin in 1987 [1]. She gave an algorithm for learning DFA from membership and equivalence queries, and she was the first to prove the learnability of DFA via queries. Later, Rivest and Schapire in 1993 [6], Hellerstein et al. in 1995 [5] and Balcázar et al. in 1997 [3] developed more efficient versions of the same algorithm, trying to increase the level of parallelism, reduce the number of equivalence queries, etc.
Learning from queries
In query learning, there is a teacher that knows the language and has to answer correctly specific kinds of queries asked by the learner. In Angluin's algorithm, the learner asks two kinds of queries:
o membership query - consists of a string s; the answer is YES or NO, depending on whether s is a member of the unknown language or not.
o equivalence query - is a conjecture, consisting of a description of a regular set U. The answer is YES if U is equal to the unknown language, and a string s in the symmetric difference of U and the unknown language otherwise.
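To illustrate the query protocol, here is a minimal Python sketch of a teacher for a hidden target DFA. The names (DFA, Teacher, membership_query, equivalence_query, max_len) and the naive bounded counterexample search are my own assumptions for illustration, not anything prescribed by the paper.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class DFA:
    """Small DFA container used only for these sketches (not from the paper)."""
    states: set
    alphabet: tuple
    delta: dict        # (state, symbol) -> state
    start: object
    finals: set

    def accepts(self, word):
        q = self.start
        for a in word:
            q = self.delta[(q, a)]
        return q in self.finals

class Teacher:
    """Answers membership and equivalence queries about a hidden target DFA."""
    def __init__(self, target, max_len=8):
        self.target = target
        self.max_len = max_len          # search bound for the naive equivalence check

    def membership_query(self, s):
        # YES/NO: is s a member of the unknown language?
        return self.target.accepts(s)

    def equivalence_query(self, hypothesis):
        # (True, None) if the hypothesis is correct, else (False, counterexample).
        # Naive check over all strings up to max_len; a real teacher would
        # compare the two automata directly.
        for n in range(self.max_len + 1):
            for w in map("".join, product(self.target.alphabet, repeat=n)):
                if self.target.accepts(w) != hypothesis.accepts(w):
                    return False, w
        return True, None
```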
Learning from corrections
In Angluin's algorithm, when the learner asks about a word in the language, the teacher's answer is very simple: YES or NO. Our idea was to introduce a new type of query:
o correction query - consists of a string s; the teacher has to return the smallest string s' such that s.s' belongs to the target language, or the special answer φ if no such string exists.
Formally, for a string s ∈ Σ*, C(s) = min(s⁻¹L) if s⁻¹L ≠ ∅ and C(s) = φ otherwise, where s⁻¹L = {s' ∈ Σ* : s.s' ∈ L} is the left quotient of L by s (equivalently, the language accepted from the state reached on s in an automaton A accepting L).
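As an illustration of how a teacher could answer a correction query, here is a sketch reusing the DFA helper from the previous snippet. It assumes "smallest" means smallest in the length-lexicographic order and uses PHI (None) for the answer φ; both choices are my assumptions for the sketch.

```python
from collections import deque

PHI = None   # stands for the answer φ: no correcting string exists

def correction_query(target, s):
    """Smallest (length-lexicographic) s' with s.s' in L(target), or PHI."""
    # run the target DFA on the queried prefix s
    q = target.start
    for a in s:
        q = target.delta[(q, a)]
    # breadth-first search over suffixes, expanding symbols in sorted order,
    # so the first accepting state reached yields the smallest suffix
    seen = {q}
    queue = deque([(q, "")])
    while queue:
        state, suffix = queue.popleft()
        if state in target.finals:
            return suffix
        for a in sorted(target.alphabet):
            nxt = target.delta[(state, a)]
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, suffix + a))
    return PHI
```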
Learning from corrections
Observation table. An observation table (S, E, C) consists of:
o a non-empty prefix-closed set S of strings,
o a non-empty suffix-closed set E of strings, and
o the restriction of the mapping C to the set (S ∪ S·Σ)·E.
The table has one row for each string in S ∪ S·Σ (the rows for S·Σ − S placed below those for S), one column for each e ∈ E, and the entry in row s and column e is C(s·e).
Learning from corrections
Closed, consistent observation tables
For any s ∈ S ∪ S·Σ, row(s) denotes the finite function from E to Σ* ∪ {φ} defined by row(s)(e) = C(s·e).
An observation table is called closed if for every t ∈ S·Σ − S there exists s ∈ S such that row(t) = row(s).
An observation table is called consistent if whenever s1, s2 ∈ S are such that row(s1) = row(s2), then row(s1·a) = row(s2·a) for all a ∈ Σ.
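A sketch of the observation table with the closedness and consistency tests just described. The class and method names (ObservationTable, boundary, is_closed, is_consistent) are mine, and `correction` stands for any function that answers correction queries (for instance a wrapper around correction_query above).

```python
class ObservationTable:
    """(S, E, C): S prefix-closed, E suffix-closed, C the correction answers."""
    def __init__(self, alphabet, correction):
        self.alphabet = sorted(alphabet)
        self.correction = correction      # string -> correcting suffix, or PHI
        self.S = {""}                     # prefix-closed set of access strings
        self.E = {""}                     # suffix-closed set of experiments
        self.C = {}                       # cached answers C(s.e)
        self._fill()

    def boundary(self):
        # the strings of S.Sigma that are not already in S
        return {s + a for s in self.S for a in self.alphabet} - self.S

    def _fill(self):
        # ask a correction query for every missing entry of (S u S.Sigma).E
        for s in self.S | self.boundary():
            for e in self.E:
                if (s, e) not in self.C:
                    self.C[(s, e)] = self.correction(s + e)

    def row(self, s):
        # row(s): the finite function E -> corrections, as a tuple over sorted(E)
        return tuple(self.C[(s, e)] for e in sorted(self.E))

    def is_closed(self):
        rows_S = {self.row(s) for s in self.S}
        return all(self.row(t) in rows_S for t in self.boundary())

    def is_consistent(self):
        S = sorted(self.S)
        for i, s1 in enumerate(S):
            for s2 in S[i + 1:]:
                if self.row(s1) == self.row(s2):
                    if any(self.row(s1 + a) != self.row(s2 + a)
                           for a in self.alphabet):
                        return False
        return True
```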
Example: Learning from corrections
Σ = {a}, S = {λ, a, aa, aaa}, E = {λ}, so S·Σ − S = {aaaa}.

  s    | C(s·λ)
  λ    | a
  a    | λ
  aa   | λ
  aaa  | φ
  -----+-------
  aaaa | φ

Is it closed? Yes: row(aaaa) = row(aaa).
Is it consistent? No: row(a) = row(aa), but row(a·a) ≠ row(aa·a).
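A quick check of this example with the ObservationTable sketch above, assuming the target language is {a, aa} over the one-letter alphabet {a} (which matches the corrections in the table):

```python
# corrections from the table: λ -> a, a -> λ, aa -> λ, aaa -> φ, aaaa -> φ
answers = {"": "a", "a": "", "aa": "", "aaa": PHI, "aaaa": PHI}

tab = ObservationTable({"a"}, lambda w: answers[w])
tab.S = {"", "a", "aa", "aaa"}          # then S.Sigma - S = {"aaaa"}
tab._fill()

print(tab.is_closed())       # True:  row(aaaa) = row(aaa)
print(tab.is_consistent())   # False: row(a) = row(aa) but row(a.a) != row(aa.a)
```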
Learning from corrections
Remark 1. C(α) = βγ implies C(αβ) = γ.
Remark 2. C(α) = φ implies C(αβ) = φ.
Learning from corrections
Lemma 3. Assume that (S,E,C) is a closed, consistent observation table. Then the automaton A(S,E,C) is consistent with C.
Sketch of the proof: … To conclude, we show that if t is the smallest string such that s·e·t is accepted by A(S,E,C), then t = C(s·e).
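Below is a sketch of how A(S,E,C) can be built from a closed, consistent table, following the standard L*-style construction adapted to corrections: the states are the distinct rows of S, the start state is row(λ), transitions follow row(s·a), and a state is accepting exactly when its entry for the suffix λ is λ (the access string needs the empty correction). These details are my reading of the construction, not a verbatim transcription of the paper.

```python
def build_hypothesis(tab):
    """Build A(S,E,C) from a closed, consistent ObservationTable (sketch)."""
    # one representative access string per distinct row of S
    access = {}
    for s in sorted(tab.S, key=lambda x: (len(x), x)):
        access.setdefault(tab.row(s), s)

    states = set(access)                 # the rows themselves act as state names
    lam = sorted(tab.E).index("")        # position of the suffix λ inside a row
    start = tab.row("")
    finals = {r for r in states if r[lam] == ""}
    delta = {}
    for r, s in access.items():
        for a in tab.alphabet:
            # closedness guarantees row(s.a) equals the row of some string in S
            delta[(r, a)] = tab.row(s + a)
    return DFA(states=states, alphabet=tuple(tab.alphabet),
               delta=delta, start=start, finals=finals)
```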
Learning from corrections
Lemma 4. Assume that (S,E,C) is a closed, consistent observation table. Suppose the automaton A(S,E,C) has n states. If A' is any automaton consistent with C that has n or fewer states, then A' is isomorphic to A(S,E,C).
Sketch of the proof: We define a mapping from the states of A(S,E,C) to the states of A' and show that:
1. the mapping is well defined;
2. the mapping is bijective.
The proof of Theorem 1 follows, since Lemma 3 shows that A(S,E,C) is consistent with C, and Lemma 4 shows that any other automaton consistent with C is either isomorphic to A(S,E,C) or contains at least one more state. Thus, A(S,E,C) is the unique smallest automaton consistent with C.
Learning from corrections
o Correctness: If the teacher answers correctly, then whenever LCA terminates it outputs the target automaton.
o Termination: Lemma 5. Let (S,E,C) be an observation table. Let n denote the number of different values of row(s) for s in S. Any automaton consistent with C must have at least n states.
o Time analysis: The total running time of LCA can be bounded by a polynomial in n and m, where n is the number of states of the minimum DFA accepting the target language and m is the maximum length of a counterexample returned by the teacher.
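For concreteness, here is a sketch of the overall learning loop in the spirit of Angluin's L*, with membership queries replaced by correction queries. It reuses the ObservationTable, build_hypothesis and Teacher sketches above; the counterexample handling (adding all prefixes of the counterexample to S) is one common variant, not necessarily the authors' exact LCA.

```python
def find_inconsistent_suffix(tab):
    """Return a suffix a.e witnessing an inconsistency, or None if consistent."""
    S = sorted(tab.S)
    for i, s1 in enumerate(S):
        for s2 in S[i + 1:]:
            if tab.row(s1) != tab.row(s2):
                continue
            for a in tab.alphabet:
                for e in sorted(tab.E):
                    if tab.C[(s1 + a, e)] != tab.C[(s2 + a, e)]:
                        return a + e
    return None

def lca_sketch(alphabet, correction, equivalence, max_rounds=100):
    """L*-style loop driven by correction and equivalence queries (sketch only)."""
    tab = ObservationTable(alphabet, correction)
    for _ in range(max_rounds):
        # repair the table until it is closed and consistent
        while True:
            if not tab.is_closed():
                rows_S = {tab.row(s) for s in tab.S}
                t = next(t for t in sorted(tab.boundary())
                         if tab.row(t) not in rows_S)
                tab.S.add(t)                               # promote a new row into S
            elif not tab.is_consistent():
                tab.E.add(find_inconsistent_suffix(tab))   # add a distinguishing suffix
            else:
                break
            tab._fill()
        hypothesis = build_hypothesis(tab)
        ok, counterexample = equivalence(hypothesis)
        if ok:
            return hypothesis
        for i in range(len(counterexample) + 1):           # add all prefixes to S
            tab.S.add(counterexample[:i])
        tab._fill()
    raise RuntimeError("round limit reached in this sketch")
```

With the earlier sketches, a run would look like lca_sketch(target.alphabet, lambda s: correction_query(target, s), Teacher(target).equivalence_query).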
Comparative results

Id | Alphabet  | Linear transition table                            | Final states | L* (EQ, MQ) | LCA (EQ, CQ)
L1 | {0,1}     | (1,2,1,2,3,4,3,3,1,3)                              | {1}          | 3, 44       | 2, 8
L2 | {0,1}     | (1,2,0,3,3,0,2,1)                                  | {0}          | 2, 19       | 1, 6
L3 | {0,1}     | (1,2,3,4,4,4,1,4,4,4)                              | {2,3}        |             |
L4 | {0,1,a,b} | (1,2,2,2,2,3,2,2,2,2,2,2,0,0,4,2,2,2,2,5,0,0,3,3)  | {3,5}        |             |
L5 | {0,1}     | (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,0)            | {1,2,4,8}    | 3, 24       | 3, 8
L6 | {0,1}     | (1,2,3,2,2,2,4,2,5,2,1,2)                          | {5}          | 3, 65       | 1, 7

Comparative results for different languages using L* and LCA
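To make the "Linear transition table" column concrete, here is a small decoder built on the DFA helper from the first sketch. It assumes the entries list δ(q, σ) row by row, with states numbered from 0 and symbols taken in the order shown; that reading of the notation is my assumption, not something stated on the slide.

```python
def dfa_from_linear_table(alphabet, linear, finals, start=0):
    """Decode the linear transition table notation used above (assumed row-major)."""
    alphabet = tuple(alphabet)
    n = len(linear) // len(alphabet)
    delta = {(q, a): linear[q * len(alphabet) + i]
             for q in range(n) for i, a in enumerate(alphabet)}
    return DFA(states=set(range(n)), alphabet=alphabet,
               delta=delta, start=start, finals=set(finals))

# e.g. the automaton listed as L2, under the assumptions above:
L2 = dfa_from_linear_table("01", (1, 2, 0, 3, 3, 0, 2, 1), finals={0})
```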
Concluding remarks
o We have improved Angluin's query learning algorithm by replacing MQs with CQs. This approach allowed us to use a smaller number of queries, and in this way the learning time is reduced.
o One of the reasons for this reduction is that the answer to a CQ embeds much more information.
o Another advantage of our approach is that we can differentiate better between states.
o Beyond the improvements discussed above, we would like to mention the adequacy of CQs in a real learning process. They reflect more accurately the process of children's language acquisition. We are aware that this formalism assumes an ideal teacher who knows everything and always gives the correct answers; the learning of a natural language is an infinite process.
Further research
o To prove the following conjectures:
  - the number of CQs is always smaller than the number of MQs;
  - the number of EQs is always less than or equal to the number of EQs used by L*.
o To show that we have improved on the running time.
o CQs are more expensive to answer than MQs. How much does this affect the total running time?
Bibliography
[1] D. Angluin, Learning regular sets from queries and counterexamples. Information and Computation 75, 1987.
[2] D. Angluin, Queries and concept learning. Machine Learning 2, 1988.
[3] J. L. Balcázar, J. Díaz, R. Gavaldà, O. Watanabe, Algorithms for learning finite automata from queries: A unified view. In Advances in Algorithms, Languages and Complexity. Kluwer Academic Publishers, 1997.
[4] E. M. Gold, Identification in the limit. Information and Control 10, 1967.
[5] L. Hellerstein, K. Pillaipakkamnatt, V. Raghavan, D. Wilkins, How many queries are needed to learn? Proc. 27th Annual ACM Symposium on the Theory of Computing. ACM Press, 1995.
[6] R. L. Rivest, R. E. Schapire, Inference of finite automata using homing sequences. Information and Computation 103(2), 1993.
[7] L. G. Valiant, A theory of the learnable. Communications of the ACM 27, 1984.