Learning Restricted Restarting Automata
Peter Černo
Presentation for the ABCD workshop in Prague, March 27–29, 2009
About the presentation
PART I: We present a specialized program that allows easy design and testing of restarting automata and provides specialized tools for learning finite automata and defining languages.
PART II: We introduce two restricted models of restarting automata, RA_CL and RA_ΔCL, together with their properties and limitations.
PART III: We demonstrate the learning of these restricted models with another specialized program.
PART IV: We give a list of open problems and topics for future investigation.
Restarting Automaton
A restarting automaton is a system M = (Σ, Γ, I), where:
Σ is an input alphabet and Γ is a working alphabet,
I is a finite set of meta-instructions:
a rewriting meta-instruction (E_L, x → y, E_R), where x, y ∊ Γ* such that |x| > |y|, and E_L, E_R ⊆ Γ* are regular languages called the left and right constraints;
an accepting meta-instruction (E, Accept), where E ⊆ Γ* is a regular language.
Language of Restarting Automaton
The rewriting meta-instructions of M induce a reduction relation ⊢_M ⊆ Γ* × Γ*: for u, v ∊ Γ*, u ⊢_M v if and only if there exist an instruction i = (E_L, x → y, E_R) in I and words u_1, u_2 ∊ Γ* such that u = u_1 x u_2, v = u_1 y u_2, u_1 ∊ E_L and u_2 ∊ E_R.
The accepting meta-instructions of M define the set of simple sentential forms S_M = the set of words u ∊ Γ* for which there exists an instruction i = (E, Accept) in I such that u ∊ E.
The input language of M is defined as L(M) = {u ∊ Σ* | ∃v ∊ S_M : u ⊢_M* v}, where ⊢_M* is the reflexive and transitive closure of ⊢_M.
Example
We construct a restarting automaton that recognizes the language L = {a^i b^i c^j d^j | i, j > 0}.
Accepting meta-instruction:
A0: accepting language ^abcd$
Rewriting meta-instructions:
R0: left language ^a*$, rewrite ab → λ, right language ^b*c*d*$
R1: left language ^a*b*c*$, rewrite cd → λ, right language ^d*$
Example
Starting from the word aaabbbccdd:
aaabbbccdd ‒R0→ aabbccdd ‒R1→ aabbcd ‒R0→ abcd
abcd is accepted by A0, so the whole word aaabbbccdd is accepted. Note that abcd is a simple sentential form.
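The reduction relation of this example can be simulated directly. Below is a minimal sketch in Python (not the presented tool), encoding the constraints as regular expressions; the helper names `reductions` and `accepted` are our own.

```python
import re

# Meta-instructions for L = {a^i b^i c^j d^j | i, j > 0} from the example:
# a rewriting meta-instruction (E_L, x -> y, E_R) applies to u x v when
# u matches E_L and v matches E_R.
REWRITES = [
    ("R0", "a*", "ab", "", "b*c*d*"),
    ("R1", "a*b*c*", "cd", "", "d*"),
]
ACCEPT = ["abcd"]  # accepting meta-instruction A0

def reductions(w):
    """All words reachable from w by one rewriting meta-instruction."""
    out = []
    for name, el, x, y, er in REWRITES:
        for i in range(len(w) - len(x) + 1):
            if w[i:i+len(x)] == x \
               and re.fullmatch(el, w[:i]) \
               and re.fullmatch(er, w[i+len(x):]):
                out.append((name, w[:i] + y + w[i+len(x):]))
    return out

def accepted(w):
    """w is in L(M) iff some reduction sequence reaches a simple sentential form."""
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        if any(re.fullmatch(e, u) for e in ACCEPT):
            return True
        for _, v in reductions(u):
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

print(accepted("aaabbbccdd"))  # True, via the derivation on this slide
print(accepted("aabbbcd"))     # False
```

The search over all reduction sequences mirrors what the presented tool does when it lists all words obtainable by reductions from a given word.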
PART I: RestartingAutomaton.exe
Capabilities and features
Design a restarting automaton: the design consists of a stepwise design of accepting and rewriting meta-instructions. You can save (load) a restarting automaton to (from) an XML file.
Test a correctly defined restarting automaton: the system can give you a list of all words obtainable by reductions from a given word w, and a list of all reduction paths from one given word to another.
Start a server mode, in which client applications can use services provided by the server application.
Use specialized tools to define formal languages. You can also save, load, copy, paste and view an XML representation of the current state of every tool.
Learning Languages
Several tools are available for defining languages:
DFA Modeler: allows you to enter a regular language by specifying its underlying deterministic finite automaton.
LStar Algorithm: encapsulates Dana Angluin's L* algorithm, which learns a deterministic finite automaton using membership and equivalence queries.
RPNI Algorithm: encapsulates a machine learning algorithm that infers a deterministic finite automaton from a given set of labeled examples.
Regular Expression: allows you to enter a regular language by specifying a regular expression.
SLT Language: allows you to design a regular language by specifying a positive integer k and positive examples, using the algorithm for learning k-SLT (strictly locally testable) languages.
Pros and Cons
Pros: The application is written in C# using .NET Framework 2.0 and works both on Win32 and UNIX platforms. It demonstrates that it is easy to design and work with restarting automata. Any component of the application can easily be reused in other projects. It saves your work.
Cons: The application is a toy: it only allows you to design simple restarting automata recognizing simple formal languages over small alphabets of a few letters. On large inputs the computation can take a long time and produce a huge output.
PART II: RA_CL
k-local Clearing Restarting Automaton (k-RA_CL): M = (Σ, I), where
Σ is a finite nonempty alphabet, ¢, $ ∉ Σ,
I is a finite set of instructions (x, z, y) with x ∊ LC_k, y ∊ RC_k, z ∊ Σ+,
left contexts LC_k = Σ^k ∪ ¢.Σ^{≤k-1}, right contexts RC_k = Σ^k ∪ Σ^{≤k-1}.$.
A word w = uzv can be rewritten to uv (uzv → uv) if and only if there exists an instruction i = (x, z, y) ∊ I such that x ⊒ ¢.u (x is a suffix of ¢.u) and y ⊑ v.$ (y is a prefix of v.$).
A word w is accepted if and only if w →* λ, where →* is the reflexive and transitive closure of →.
We define the class RA_CL as ⋃_{k≥1} k-RA_CL.
Why RA_CL?
This model was inspired by the Associative Language Descriptions (ALD) model of Alessandra Cherubini, Stefano Crespi-Reghizzi, Matteo Pradella and Pierluigi San Pietro; see http://home.dei.polimi.it/sanpietr/ALD/ALD.html.
The more restricted the model, the easier the investigation of its properties. Moreover, the learning methods are much simpler and more straightforward.
Example
Language L = {a^n b^n | n ≥ 0}.
1-RA_CL M = ({a, b}, I), where the instructions in I are:
R1 = (a, ab, b)
R2 = (¢, ab, $)
For instance: aaaabbbb ‒R1→ aaabbb ‒R1→ aabb ‒R1→ ab ‒R2→ λ.
The word aaaabbbb is accepted because aaaabbbb →* λ. Note that λ is always accepted, because λ →* λ.
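These two instructions can be checked mechanically. A small sketch, assuming a plain-string encoding of the instruction triples (x, z, y) with the sentinels ¢ and $ (the function names are ours):

```python
# 1-RA_CL instructions for {a^n b^n | n >= 0}: R1 = (a, ab, b), R2 = (¢, ab, $).
INSTRUCTIONS = [("a", "ab", "b"), ("¢", "ab", "$")]

def steps(w):
    """All words obtainable from w by clearing one occurrence of some z."""
    out = set()
    for x, z, y in INSTRUCTIONS:
        for i in range(len(w) - len(z) + 1):
            if w[i:i+len(z)] == z \
               and ("¢" + w[:i]).endswith(x) \
               and (w[i+len(z):] + "$").startswith(y):
                out.add(w[:i] + w[i+len(z):])
    return out

def accepted(w):
    """w is accepted iff w ->* lambda (the empty word)."""
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        if u == "":
            return True
        for v in steps(u) - seen:
            seen.add(v)
            stack.append(v)
    return False

print([n for n in range(6) if accepted("a"*n + "b"*n)])  # [0, 1, 2, 3, 4, 5]
print(accepted("aab"))  # False
```

Appending the sentinels before the suffix/prefix tests implements x ⊒ ¢.u and y ⊑ v.$ literally.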
Some Theorems
Theorem: For every finite L ⊆ Σ* there exists a 1-RA_CL M such that L(M) = L.
Proof. Suppose L = {w_1, …, w_n}. Consider I = {(¢, w_1, $), …, (¢, w_n, $)}. ∎
Theorem: For all k ≥ 1: ℒ(k-RA_CL) ⊆ ℒ((k+1)-RA_CL).
Theorem: For each regular language L there exists a k-RA_CL M such that L(M) = L ∪ {λ}.
Proof. Based on the pumping lemma for regular languages. For each z ∊ Σ* with |z| = n there exist u, v, w such that z = uvw, |v| ≥ 1 and δ(q_0, uv) = δ(q_0, u); the word v can be crossed out. We add the corresponding instruction i_z = (¢u, v, w). For each accepted z ∊ Σ^{<n} we add the instruction i_z = (¢, z, $). ∎
Some Theorems
Lemma: Let M be a RA_CL, i = (x, z, y) one of its instructions, and w = uv a word such that x ⊒ ¢.u and y ⊑ v.$. Then uv ∊ L(M) ⇒ uzv ∊ L(M).
Proof. uzv ―i→ uv →* λ. ∎
Theorem: The languages L_1 ∪ L_2 and L_1.L_2, where L_1 = {a^n b^n | n ≥ 0} and L_2 = {a^n b^{2n} | n ≥ 0}, are not accepted by any RA_CL.
Proof by contradiction, based on the previous lemma.
Corollary: RA_CL is not closed under union and concatenation.
Corollary: RA_CL is not closed under homomorphism. Consider {a^n b^n | n ≥ 0} ∪ {c^n d^{2n} | n ≥ 0} and the homomorphism defined by a ↦ a, b ↦ b, c ↦ a, d ↦ b. ∎
Some Theorems
Theorem: The language L_1 = {a^n c b^n | n ≥ 0} ∪ {λ} is not accepted by any RA_CL.
Theorem: The languages
L_2 = {a^n c b^n | n ≥ 0} ∪ {a^m b^m | m ≥ 0},
L_3 = {a^n c b^m | n, m ≥ 0} ∪ {λ},
L_4 = {a^m b^m | m ≥ 0}
are recognized by 1-RA_CL.
Corollary: RA_CL is not closed under intersection. Proof. L_1 = L_2 ∩ L_3. ∎
Corollary: RA_CL is not closed under intersection with a regular language. Proof. L_3 is a regular language. ∎
Corollary: RA_CL is not closed under difference. Proof. L_1 = (L_2 – L_4) ∪ {λ}. ∎
Parentheses
The following single instruction of a 1-RA_CL M is enough to recognize the language of correct parentheses:
(λ, ( ), λ)
Note: This schematic instruction represents the set of instructions ({¢} ∪ Σ, ( ), Σ ∪ {$}), where Σ = {(, )} and (A, w, B) = {(a, w, b) | a ∊ A, b ∊ B}.
Note: In the following we use the tabular notation A | w | B for (A, w, B).
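Since the contexts of this instruction are unrestricted, the order in which occurrences of ( ) are cleared does not matter, so a simple greedy loop suffices (a sketch, not the presented tool):

```python
def balanced(w):
    """Accept iff w ->* lambda using the single clearing instruction (λ, ( ), λ)."""
    while "()" in w:
        w = w.replace("()", "", 1)   # clear one occurrence of ( )
    return w == ""

print(balanced("(()())"))  # True
print(balanced(")("))      # False
```

Every nonempty balanced word contains the factor ( ), and clearing it preserves balancedness, which is why no backtracking is needed here.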
Arithmetic expressions
Suppose that we want to check the correctness of arithmetic expressions over the alphabet Σ = {α, +, *, (, )}. For example, α+(α*α+α) is correct, α*+α is not. The priority of the operations is taken into account.
The following 1-RA_CL M is sufficient (each row lists the left-context set A, the cleared words w, and the right-context set B):
A = {¢, +, (} | w = α+ or ( )+ | B = {α, (}
A = {¢, +, *, (} | w = α* or ( )* | B = {α, (}
A = {α, )} | w = +α or +( ) | B = {$, +, )}
A = {α, )} | w = *α or *( ) | B = {$, +, *, )}
A = {¢} | w = α or ( ) | B = {$}
A = {(} | w = α or ( ) | B = {)}
Arithmetic expressions: Example
Expression | Instruction applied
α*α + ((α + α) + (α + α*α))*α | (¢, α*, α)
α + ((α + α) + (α + α*α))*α | (α, +α, ) )
α + ((α) + (α + α*α))*α | ( ), *α, $)
α + ((α) + (α + α*α)) | (+, α*, α)
α + ((α) + (α + α)) | ( (, α+, α)
α + ((α) + (α)) | ( (, α, ) )
α + (( ) + (α)) | ( (, ( )+, ( )
α + ((α)) | ( (, α, ) )
α + (( )) | ( (, ( ), ) )
α + ( ) | (¢, α+, ( )
( ) | (¢, ( ), $)
λ | accept
Nondeterminism
Assume the instructions
R1 = (bb, a, bbbb)
R2 = (bb, bb, $)
R3 = (¢, cbb, $)
and the word cbbabbbb.
Then cbbabbbb ―R1→ cbbbbbb ―R2→ cbbbb ―R2→ cbb ―R3→ λ.
But if we had started with R2, cbbabbbb ―R2→ cbbabb, it would not be possible to continue.
⇒ The order of the applied instructions matters!
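Because of this nondeterminism, deciding acceptance in general means exploring all reduction sequences, e.g. by a breadth-first search of the reduction graph. A sketch with the three instructions above (the helper names are ours):

```python
from collections import deque

# Instruction triples (x, z, y): clear z in w = uzv when x is a suffix of ¢u
# and y is a prefix of v$.
INSTRUCTIONS = [("bb", "a", "bbbb"), ("bb", "bb", "$"), ("¢", "cbb", "$")]

def steps(w):
    out = set()
    for x, z, y in INSTRUCTIONS:
        for i in range(len(w) - len(z) + 1):
            if w[i:i+len(z)] == z \
               and ("¢" + w[:i]).endswith(x) \
               and (w[i+len(z):] + "$").startswith(y):
                out.add(w[:i] + w[i+len(z):])
    return out

def accepted(w):
    """Search the whole reduction graph instead of committing to one order."""
    seen, queue = {w}, deque([w])
    while queue:
        u = queue.popleft()
        if u == "":
            return True
        for v in steps(u) - seen:
            seen.add(v)
            queue.append(v)
    return False

print(sorted(steps("cbbabbbb")))  # ['cbbabb', 'cbbbbbb']: two choices
print(steps("cbbabb"))            # set(): the R2-first choice is a dead end
print(accepted("cbbabbbb"))       # True: via R1 first
```

The search is always finite here, since every step shortens the word.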
Hardest CFL H
Due to S. A. Greibach; definition from Section 10.5 of M. Harrison, Introduction to Formal Language Theory, Addison-Wesley, Reading, MA, 1978.
Let D_2 be the semi-Dyck set on {a_1, a_2, a_1', a_2'} generated by the grammar S → a_1 S a_1' S | a_2 S a_2' S | λ.
Let Σ = {a_1, a_2, a_1', a_2', b, c}, d ∉ Σ.
Then H = {λ} ∪ {∏_{i=1..n} x_i c y_i c z_i d | n ≥ 1, y_1 y_2 … y_n ∊ bD_2, x_i, z_i ∊ Σ*}.
Every CFL can be represented as an inverse homomorphic image of H.
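Membership in the semi-Dyck set D_2 itself is easy to test with a stack. A sketch, using a hypothetical one-character encoding a = a_1, b = a_2, A = a_1', B = a_2':

```python
# Hypothetical single-character encoding of the alphabet {a1, a2, a1', a2'}.
OPEN = {"a": "A", "b": "B"}    # letter -> its matching "closing" letter
CLOSE = {"A": "a", "B": "b"}   # closing letter -> required top of stack

def in_D2(w):
    """Membership in the semi-Dyck set D2: well-nested, type-matched pairs."""
    stack = []
    for c in w:
        if c in OPEN:
            stack.append(c)
        elif c in CLOSE:
            if not stack or stack.pop() != CLOSE[c]:
                return False
        else:
            return False
    return not stack

print(in_D2("abBAaA"))  # a1 a2 a2' a1' a1 a1' -> True
print(in_D2("aB"))      # mismatched pair -> False
```

This corresponds to the grammar above: a word is in D_2 iff every a_i is eventually closed by a matching a_i', with proper nesting.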
Hardest CFL H
Theorem: H is not accepted by any RA_CL.
Proof by contradiction.
However, if we slightly extend the definition of RA_CL, we become able to recognize H.
RA_ΔCL
k-local Δ-Clearing Restarting Automaton (k-RA_ΔCL): M = (Σ, I), where
Σ is a finite nonempty alphabet, ¢, $, Δ ∉ Σ, Γ = Σ ∪ {Δ},
I is a finite set of instructions of the forms
(1) (x, z → λ, y)
(2) (x, z → Δ, y)
where x ∊ LC_k, y ∊ RC_k, z ∊ Γ+,
left contexts LC_k = Γ^k ∪ ¢.Γ^{≤k-1}, right contexts RC_k = Γ^k ∪ Γ^{≤k-1}.$.
RA_ΔCL
A word w = uzv can be rewritten to uv (uzv → uv) if there exists an instruction i = (x, z → λ, y) ∊ I, or to uΔv (uzv → uΔv) if there exists an instruction i = (x, z → Δ, y) ∊ I, such that x ⊒ ¢.u and y ⊑ v.$.
A word w is accepted if and only if w →* λ, where →* is the reflexive and transitive closure of →.
We define the class RA_ΔCL as ⋃_{k≥1} k-RA_ΔCL.
Hardest CFL H revisited
Theorem: H is recognized by a 1-RA_ΔCL.
Idea. Suppose that we have w ∊ H:
w = ¢ x_1 c y_1 c z_1 d x_2 c y_2 c z_2 d … x_n c y_n c z_n d $
In the first phase we delete letters (from the alphabet Σ = {a_1, a_2, a_1', a_2', b, c}) to the right of ¢ and to the left and right of the letters d. As soon as we think we have reached the word
¢ c y_1 c d c y_2 c d … c y_n c d $
we introduce the Δ symbols:
¢ Δ y_1 Δ y_2 Δ … Δ y_n Δ $
In the second phase we check whether y_1 y_2 … y_n ∊ bD_2.
Instructions of M recognizing the CFL H
Suppose Σ = {a_1, a_2, a_1', a_2', b, c}, d ∉ Σ, Γ = Σ ∪ {d, Δ}.
Instructions for the first phase:
(1) (¢, Σ → λ, Σ)
(2) (Σ, Σ → λ, d)
(3) (d, Σ → λ, Σ)
(4) (¢, c → Δ, Σ ∪ {Δ})
(5) (Σ ∪ {Δ}, cdc → Δ, Σ ∪ {Δ})
(6) (Σ ∪ {Δ}, cd → Δ, $)
Instructions for the second phase:
(7) (Γ, a_1 a_1' → λ, Γ – {b})
(8) (Γ, a_2 a_2' → λ, Γ – {b})
(9) (Γ, a_1 Δ a_1' → Δ, Γ – {b})
(10) (Γ, a_2 Δ a_2' → Δ, Γ – {b})
(11) (Σ – {c}, Δ → λ, Δ)
(12) (¢, Δ b Δ → λ, $)
In fact, there is no such thing as a first phase or a second phase; we only have instructions.
Theorem: H ⊆ L(M).
Theorem: H ⊇ L(M). Idea. We describe all words that are generated by the instructions.
The power of RA_CL
Theorem: There exists a k-RA_CL M recognizing a language that is not context-free.
Idea. We construct a k-RA_CL M such that L(M) ∩ {(ab)^n | n > 0} = {(ab)^{2^m} | m ≥ 0}.
If L(M) were a CFL, then its intersection with a regular language would also be a CFL. In our case the intersection is not a CFL.
How does it work
Example:
¢ abababababababab $ → ¢ abababababababb $ → ¢ abababababbabb $ → ¢ abababbabbabb $ → ¢ abbabbabbabb $
→ ¢ abbabbabbab $ → ¢ abbabbabab $ → ¢ abbababab $ → ¢ abababab $
→ ¢ abababb $ → ¢ abbabb $ → ¢ abbab $ → ¢ abab $
→ ¢ abb $ → ¢ ab $ → ¢ λ $ → accept.
The power of RA_CL: Instructions
If we infer instructions from the previous example, then for k = 4 we get the following 4-RA_CL M (left-context set A, cleared word w, right-context set B):
A = {¢ab, abab} | w = a | B = {b$, babb}
A = {¢a, abba} | w = b | B = {b$, bab$, baba}
A = {¢} | w = ab | B = {$}
Theorem: L(M) ∩ {(ab)^n | n > 0} = {(ab)^{2^m} | m ≥ 0}.
Idea. We describe the whole language L(M).
Note that this technique does not work for k < 4.
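Reading the slide's instruction table as three rows of (left-context set, cleared word, right-context set) — our reconstruction of its layout — an exhaustive search of the reduction graph agrees with the theorem for small n:

```python
# Reconstructed 4-RA_CL instructions: (left-context set, word, right-context set).
RULES = [
    ({"¢ab", "abab"}, "a", {"b$", "babb"}),
    ({"¢a", "abba"}, "b", {"b$", "bab$", "baba"}),
    ({"¢"}, "ab", {"$"}),
]

def steps(w):
    """All words obtainable from w by one clearing under some rule."""
    out = set()
    for lefts, z, rights in RULES:
        for i in range(len(w) - len(z) + 1):
            if w[i:i+len(z)] != z:
                continue
            u, v = "¢" + w[:i], w[i+len(z):] + "$"
            if any(u.endswith(x) for x in lefts) and any(v.startswith(y) for y in rights):
                out.add(w[:i] + w[i+len(z):])
    return out

def accepted(w):
    seen, stack = {w}, [w]
    while stack:
        u = stack.pop()
        if u == "":
            return True
        for v in steps(u) - seen:
            seen.add(v)
            stack.append(v)
    return False

# Among (ab)^1 .. (ab)^8, exactly the powers of two are accepted.
print([n for n in range(1, 9) if accepted("ab" * n)])  # [1, 2, 4, 8]
```

The case n = 8 retraces the sixteen-letter derivation of the previous slide step by step.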
PART III: RACL.exe
RACL.exe: Reduce/Generate
PART IV: Open Problems
We can restrict our model to a k-RA_SIMPLE, which is the same as the k-RA_CL except that it does not use the sentinels ¢ and $. I think that this model is useless, because it is not able to recognize even all finite languages.
We can generalize our model to a k-RA_Δ^nCL, which is the same as the k-RA_ΔCL except that it uses n symbols Δ_1, Δ_2, …, Δ_n. This model is able to recognize every CFL.
We can study closure properties of these models.
We can study decidability: L(M) = ∅, L(M) = Σ*, etc.
We can study the differences between the language classes of RA_CL and RA_ΔCL (for different values of k), etc.
We can study whether these models are applicable to real problems: for example, whether we can recognize the Pascal language.
References
A. Cherubini, S. Crespi Reghizzi, P. L. San Pietro: Associative Language Descriptions. Theoretical Computer Science 270, 2002, 463–491.
P. Jančar, F. Mráz, M. Plátek, J. Vogel: On Monotonic Automata with a Restart Operation. Journal of Automata, Languages and Combinatorics 4(4), 1999, 287–311.
F. Mráz, F. Otto, M. Plátek: Learning Analysis by Reduction from Positive Data. In: Y. Sakakibara, S. Kobayashi, K. Sato, T. Nishino, E. Tomita (eds.): Proceedings ICGI 2006, LNCS 4201, Springer, Berlin, 2006, 125–136.
http://www.petercerno.wz.cz/ra.html