The time complexity for e-closure(T).


The time complexity for e-closure(T) (page 119, Fig. 3.26):

    push all states in T onto stack;
    initialize e-closure(T) to T;
    while stack is not empty do begin
        pop t, the top element, off the stack;
        for each state u with an edge from t to u labeled e do
            if u is not in e-closure(T) do begin
                add u to e-closure(T);
                push u onto stack
            end
    end

Time complexity?
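As a concrete sketch of the same algorithm (not the textbook's code), here is a Python version; the dictionary `eps`, mapping each NFA state to its e-successors, is an assumed representation of the transition table.

```python
# Hypothetical representation: eps[state] -> list of states reachable on one e-edge.
def e_closure(T, eps):
    closure = set(T)          # initialize e-closure(T) to T
    stack = list(T)           # push all states in T onto the stack
    while stack:              # while stack is not empty
        t = stack.pop()       # pop t, the top element, off the stack
        for u in eps.get(t, []):      # each state u with an edge t --e--> u
            if u not in closure:      # u is not yet in e-closure(T)
                closure.add(u)
                stack.append(u)
    return closure
```

Each state is pushed at most once and each e-edge is examined at most once, so with constant-time set membership the work is proportional to the number of states and e-edges reachable from T.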

The complexity of the algorithm that recognizes the language accepted by an NFA (revisited).
Input: an NFA (transition table) and a string x (terminated by eof). Output: "yes" if x is accepted, "no" otherwise.

    S := e-closure({s0});
    a := nextchar;
    while a != eof do begin
        S := e-closure(move(S, a));
        a := nextchar
    end;
    if (intersect(S, F) != empty) then return "yes"
    else return "no"

Time complexity?? Space complexity??
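A minimal Python sketch of this simulation, reusing the hypothetical `e_closure` and `eps` from above and assuming `move` is a dictionary from (state, symbol) pairs to lists of successor states, `s0` is the start state, and `F` is the set of accepting states:

```python
def nfa_accepts(x, s0, F, move, eps):
    S = e_closure({s0}, eps)
    for a in x:                      # a := nextchar, until end of input
        # move(S, a): all NFA states reachable from some state in S on symbol a
        S = e_closure({u for s in S for u in move.get((s, a), [])}, eps)
    return bool(S & F)               # intersect(S, F) != empty
```

Each input character touches at most every NFA state and edge, so the simulation runs in time roughly proportional to |x| times the size of the NFA, while storing only a single state set at any moment.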

Algorithm to convert an NFA to a DFA that accepts the same language (Algorithm 3.2, page 118):

    initially, e-closure(s0) is the only state in Dstates, and it is unmarked;
    while there is an unmarked state T in Dstates do begin
        mark T;
        for each input symbol a do begin
            U := e-closure(move(T, a));
            if U is not in Dstates then
                add U as an unmarked state to Dstates;
            Dtran[T, a] := U
        end
    end

Initial state = e-closure(s0). Final states = ?
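The subset construction in the same hypothetical Python representation (the names `move`, `eps`, `symbols`, and the helper `e_closure` are carried over from the sketches above, not from the textbook):

```python
def subset_construction(s0, F, symbols, move, eps):
    start = frozenset(e_closure({s0}, eps))
    Dstates = {start}                  # all DFA states created so far
    unmarked = [start]                 # DFA states not yet expanded
    Dtran = {}                         # DFA transition table
    while unmarked:                    # while there is an unmarked state T
        T = unmarked.pop()             # mark T
        for a in symbols:              # for each input symbol a
            U = frozenset(e_closure(
                {u for s in T for u in move.get((s, a), [])}, eps))
            if U not in Dstates:       # add U as an unmarked state to Dstates
                Dstates.add(U)
                unmarked.append(U)
            Dtran[(T, a)] = U
    final = {T for T in Dstates if T & F}   # subsets containing an accepting NFA state
    return start, Dstates, final, Dtran
```

The final states of the DFA are exactly the subsets that contain at least one accepting NFA state, which answers the question above.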

Question: for an NFA with |S| states, at most how many states can its corresponding DFA have?
Should we use a DFA or an NFA? There is a trade-off between space and time!

The number of states determines the space complexity, and a DFA can potentially have a very large number of states. Moreover, converting an NFA to a DFA may not produce the minimum-state DFA. In the final product, we would like a DFA with the minimum number of states that still recognizes the same language.
Basic idea: assuming every state has a transition on every input symbol (what if this is not the case??), find all groups of states that can be distinguished by some input string. An input string w distinguishes two states s and t if, starting from s and feeding w, we end up in a nonaccepting state, while starting from t and feeding w, we end up in an accepting state, or vice versa.

Algorithm (3.6, page 142). Input: a DFA M. Output: a minimum-state DFA M'.

    if some states in M have no transition on some inputs, add transitions to a "dead" state;
    let P = { all accepting states, all nonaccepting states };
    loop:
        let P' = {};
        for each group G in P do begin
            partition G into subgroups so that s and t (in G) belong to the same
                subgroup if and only if each input a moves s and t into states of
                the same group of P;
            put the new subgroups in P'
        end;
        if (P != P') then begin P := P'; goto loop end;
    remove any dead states and unreachable states.
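A minimal Python sketch of the partition-refinement loop (again using the hypothetical representation from above, with a total transition table `Dtran[(state, symbol)]`, i.e., the dead state has already been added):

```python
def minimize_groups(states, symbols, accepting, Dtran):
    # P = { all accepting states, all nonaccepting states }
    P = [g for g in (frozenset(accepting), frozenset(states - accepting)) if g]
    while True:
        P_new = []
        for G in P:
            # s and t stay together iff every input a sends them to the same P-group
            def group_index(s):
                return tuple(
                    next(i for i, g in enumerate(P) if Dtran[(s, a)] in g)
                    for a in symbols)
            buckets = {}
            for s in G:
                buckets.setdefault(group_index(s), set()).add(s)
            P_new.extend(frozenset(b) for b in buckets.values())
        if len(P_new) == len(P):       # P == P': no group was split this round
            return P_new
        P = P_new
```

Each group of the resulting partition becomes one state of M'; the dead state's group and any unreachable groups are then removed, as the last step of the algorithm says.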

Example: minimize the DFA for (ab|ba)a*.
Example: minimize the DFA of Fig. 3.29 (page 121).

Question: how can we implement Lex? For example, given a specification such as:

    %%
    BEGIN   {return(BEGINNUMBER);}
    END     {return(ENDNUMBER);}
    IF      {return(IFNUMBER);}

Lex internals:
- Construct an NFA that recognizes the sum (union) of all patterns.
- Convert the NFA to a DFA, recording for each DFA state the accepting NFA states of the individual patterns.
- Minimize the DFA, keeping accepting states of distinct patterns separate in the initial partition.
- Simulate the DFA until it terminates (that is, until no further transition is possible).
- Find the last DFA state entered that holds an accepting NFA state; this picks the longest match.
- If there is no such state, the input is an invalid token.
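A minimal Python sketch of the longest-match loop in the last three steps; `start`, `Dtran`, and `token_of` (a map from each accepting DFA state to the pattern it recognizes, with ties already broken in favor of the pattern listed first) are assumed inputs, not part of the slide:

```python
def next_token(text, pos, start, Dtran, token_of):
    state = start
    last_accept = None                # (token, end) of the longest match so far
    i = pos
    while i < len(text) and (state, text[i]) in Dtran:
        state = Dtran[(state, text[i])]
        i += 1
        if state in token_of:         # this DFA state holds an accepting NFA state
            last_accept = (token_of[state], i)
    if last_accept is None:
        raise ValueError("invalid token at position %d" % pos)
    token, end = last_accept
    return token, text[pos:end], end  # token name, lexeme, position after the match
```

In a real scanner the caller would resume from the returned position and repeat, emitting one token per call.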