Fall 2003 CS416 Compiler Design, slide 1: Lexical Analyzer

The lexical analyzer reads the source program character by character to produce tokens. Normally a lexical analyzer does not return a list of tokens in one shot; it returns a token each time the parser asks for the next one.

(Diagram: source program → Lexical Analyzer → token → Parser; the parser requests tokens via "get next token".)

Fall 2003 CS416 Compiler Design, slide 2: Token

A token represents a set of strings described by a pattern.
– Identifier represents the set of strings which start with a letter and continue with letters and digits.
– The actual string (e.g. newval) is called a lexeme.
– Tokens: identifier, number, addop, delimiter, ...

Since a token can represent more than one lexeme, additional information should be held for that specific lexeme. This additional information is called the attribute of the token. For simplicity, a token may have a single attribute which holds the required information for that token.
– For identifiers, this attribute is a pointer to the symbol table, and the symbol table holds the actual attributes for that token.

Some attributes:
– <id, attr> where attr is a pointer to the symbol table
– <assgop> – no attribute is needed (if there is only one assignment operator)
– <num, val> where val is the actual value of the number

A token type and its attribute uniquely identify a lexeme. Regular expressions are widely used to specify patterns.
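
As a concrete, made-up illustration of a token as a name/attribute pair (not something fixed by the slides), here is a small Python sketch; the token names and the statement newval := oldval + 12 are examples only.

    from typing import NamedTuple, Any

    class Token(NamedTuple):
        """A token: a token name plus an optional attribute value."""
        name: str              # e.g. "id", "num", "assgop", "addop"
        attribute: Any = None  # e.g. a symbol-table position for "id", the value for "num"

    # Tokens a lexer might produce for the statement  newval := oldval + 12
    tokens = [
        Token("id", attribute=0),       # 0 = position of "newval" in the symbol table
        Token("assgop"),                # single assignment operator -> no attribute needed
        Token("id", attribute=1),       # 1 = position of "oldval" in the symbol table
        Token("addop", attribute="+"),
        Token("num", attribute=12),     # attribute: the actual value of the number
    ]
    print(tokens[0])                    # Token(name='id', attribute=0)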

Fall 2003 CS416 Compiler Design, slide 3: Terminology of Languages

Alphabet: a finite set of symbols (e.g. the ASCII characters).

String:
– a finite sequence of symbols over an alphabet
– "sentence" and "word" are also used in place of "string"
– ε is the empty string
– |s| is the length of string s

Language: a set of strings over some fixed alphabet
– ∅, the empty set, is a language
– {ε}, the set containing only the empty string, is a language
– the set of well-formed C programs is a language
– the set of all possible identifiers is a language

Operations on strings:
– Concatenation: xy represents the concatenation of strings x and y;  sε = εs = s
– s^n = s s ... s (n times);  s^0 = ε

Fall 2003 CS416 Compiler Design, slide 4: Operations on Languages

Concatenation:
– L1 L2 = { s1 s2 | s1 ∈ L1 and s2 ∈ L2 }

Union:
– L1 ∪ L2 = { s | s ∈ L1 or s ∈ L2 }

Exponentiation:
– L^0 = {ε},  L^1 = L,  L^2 = L L

Kleene closure:
– L* = union of L^i for i = 0, 1, 2, ...

Positive closure:
– L+ = union of L^i for i = 1, 2, 3, ...

Fall 2003 CS416 Compiler Design, slide 5: Example

L1 = {a, b, c, d}    L2 = {1, 2}

L1 L2   = {a1, a2, b1, b2, c1, c2, d1, d2}
L1 ∪ L2 = {a, b, c, d, 1, 2}
L1^3    = all strings of length three (over a, b, c, d)
L1*     = all strings over the letters a, b, c, d, including the empty string
L1+     = the same, but not including the empty string
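
A small illustrative sketch (not from the slides) of these operations on finite languages using Python sets; concat, exponent and the bounded kleene_upto helper are made-up names, and since L* is infinite only strings up to a chosen length are enumerated.

    from itertools import product

    def concat(L1, L2):
        """Concatenation: { s1 s2 | s1 in L1 and s2 in L2 }."""
        return {s1 + s2 for s1, s2 in product(L1, L2)}

    def exponent(L, n):
        """L^n = L L ... L (n times); L^0 = {""} (just the empty string)."""
        result = {""}
        for _ in range(n):
            result = concat(result, L)
        return result

    def kleene_upto(L, max_power):
        """Finite approximation of L*: the union of L^0 .. L^max_power."""
        result = set()
        for i in range(max_power + 1):
            result |= exponent(L, i)
        return result

    L1 = {"a", "b", "c", "d"}
    L2 = {"1", "2"}

    print(concat(L1, L2))        # {'a1', 'a2', 'b1', 'b2', 'c1', 'c2', 'd1', 'd2'}
    print(L1 | L2)               # union: {'a', 'b', 'c', 'd', '1', '2'}
    print(len(exponent(L1, 3)))  # 64 strings of length three over {a, b, c, d}
    print(kleene_upto(L1, 2))    # strings of length 0, 1, 2 over {a, b, c, d}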

Fall 2003 CS416 Compiler Design, slide 6: Regular Expressions

We use regular expressions to describe the tokens of a programming language.
A regular expression is built up from simpler regular expressions (using a set of defining rules).
Each regular expression denotes a language.
A language denoted by a regular expression is called a regular set.

Fall 2003 CS416 Compiler Design, slide 7: Regular Expressions (Rules)

Regular expressions over an alphabet Σ:

  Reg. Expr        Language it denotes
  ε                {ε}
  a ∈ Σ            {a}
  (r1) | (r2)      L(r1) ∪ L(r2)
  (r1) (r2)        L(r1) L(r2)
  (r)*             (L(r))*
  (r)              L(r)

Shorthands:
  (r)+ = (r)(r)*
  (r)? = (r) | ε

Fall 2003 CS416 Compiler Design, slide 8: Regular Expressions (cont.)

We may remove parentheses by using precedence rules:
– * (highest)
– concatenation (next)
– | (lowest)
So ab*|c means (a(b)*)|(c).

Ex: Σ = {0, 1}
– 0|1        => {0, 1}
– (0|1)(0|1) => {00, 01, 10, 11}
– 0*         => {ε, 0, 00, 000, 0000, ...}
– (0|1)*     => all strings over 0 and 1, including the empty string

Fall 2003 CS416 Compiler Design, slide 9: Regular Definitions

Writing a regular expression for some languages can be difficult, because their regular expressions can become quite complex. In those cases we may use regular definitions.

We can give names to regular expressions, and use those names as symbols when defining other regular expressions.

A regular definition is a sequence of definitions of the form:
  d1 → r1
  d2 → r2
  ...
  dn → rn
where each di is a distinct name, and each ri is a regular expression over the symbols in Σ ∪ {d1, d2, ..., di-1} (basic symbols and previously defined names).

Fall 2003 CS416 Compiler Design, slide 10: Regular Definitions (cont.)

Ex: Identifiers in Pascal
  letter → A | B | ... | Z | a | b | ... | z
  digit  → 0 | 1 | ... | 9
  id     → letter ( letter | digit )*

If we tried to write the regular expression for identifiers without regular definitions, it would be complex:
  (A|...|Z|a|...|z) ( (A|...|Z|a|...|z) | (0|...|9) )*

Ex: Unsigned numbers in Pascal
  digit        → 0 | 1 | ... | 9
  digits       → digit+
  opt-fraction → ( . digits )?
  opt-exponent → ( E (+|-)? digits )?
  unsigned-num → digits opt-fraction opt-exponent
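
As an aside (not from the slides), the same regular definitions translate almost directly into a library regex; the pattern strings below are a sketch using Python's re module, and the variable names are just illustrative.

    import re

    # Pascal-style identifiers: a letter followed by letters or digits.
    letter = r"[A-Za-z]"
    digit = r"[0-9]"
    identifier = re.compile(rf"{letter}(?:{letter}|{digit})*")

    # Pascal-style unsigned numbers: digits, optional fraction, optional exponent.
    digits = rf"{digit}+"
    opt_fraction = rf"(?:\.{digits})?"
    opt_exponent = rf"(?:E[+-]?{digits})?"
    unsigned_num = re.compile(rf"{digits}{opt_fraction}{opt_exponent}")

    print(bool(identifier.fullmatch("newval")))     # True
    print(bool(identifier.fullmatch("2bad")))       # False (must start with a letter)
    print(bool(unsigned_num.fullmatch("6.02E+23"))) # True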

Fall 2003 CS416 Compiler Design, slide 11: Finite Automata

A recognizer for a language is a program that takes a string x and answers "yes" if x is a sentence of that language, and "no" otherwise.

We call the recognizer of the tokens a finite automaton.
A finite automaton can be deterministic (DFA) or non-deterministic (NFA). This means we may use either a deterministic or a non-deterministic automaton as a lexical analyzer.
Both deterministic and non-deterministic finite automata recognize regular sets.

Which one?
– deterministic: faster recognizer, but it may take more space
– non-deterministic: slower, but it may take less space
– Deterministic automata are widely used in lexical analyzers.

First we define regular expressions for the tokens; then we convert them into a DFA to get a lexical analyzer for our tokens.
– Algorithm 1: Regular Expression → NFA → DFA (two steps: first to an NFA, then to a DFA)
– Algorithm 2: Regular Expression → DFA (directly convert a regular expression into a DFA)

Fall 2003 CS416 Compiler Design, slide 12: Non-Deterministic Finite Automaton (NFA)

A non-deterministic finite automaton (NFA) is a mathematical model that consists of:
– S, a set of states
– Σ, a set of input symbols (the alphabet)
– move, a transition function mapping state-symbol pairs to sets of states
– s0, a start (initial) state
– F, a set of accepting (final) states

ε-transitions are allowed in NFAs. In other words, we can move from one state to another without consuming any symbol.

An NFA accepts a string x if and only if there is a path from the start state to one of the accepting states such that the edge labels along this path spell out x.

Fall 2003 CS416 Compiler Design, slide 13: NFA (Example)

(Transition graph of the NFA: state 0 loops to itself on a and b and goes to state 1 on a; state 1 goes to state 2 on b.)

0 is the start state s0; {2} is the set of final states F; Σ = {a, b}; S = {0, 1, 2}.

Transition function:
        a        b
  0     {0,1}    {0}
  1     –        {2}
  2     –        –

The language recognized by this NFA is (a|b)* a b.

Fall 2003 CS416 Compiler Design, slide 14: Deterministic Finite Automaton (DFA)

A deterministic finite automaton (DFA) is a special form of an NFA:
– no state has an ε-transition
– for each symbol a and state s, there is at most one edge labeled a leaving s; i.e. the transition function maps a state-symbol pair to a single state (not to a set of states)

(Transition graph of a DFA over {a, b} with states 0, 1, 2.)

The language recognized by this DFA is also (a|b)* a b.

Fall 2003 CS416 Compiler Design, slide 15: Implementing a DFA

Let us assume that the end of a string is marked with a special symbol (say eos). The recognition algorithm is as follows (an efficient implementation):

  s ← s0                      { start from the initial state }
  c ← nextchar                { get the next character from the input string }
  while (c != eos) do         { repeat until the end of the string }
  begin
      s ← move(s, c)          { transition function }
      c ← nextchar
  end
  if (s in F) then            { if s is an accepting state }
      return "yes"
  else
      return "no"
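
A minimal runnable version of this loop (a sketch, not the course's implementation), using a dictionary as the transition table for a DFA that recognizes (a|b)*ab with states 0, 1, 2; the particular transition table below is this example's own, since the slide's diagram is not reproduced here.

    # DFA for (a|b)*ab: state 0 = start, state 2 = accepting.
    dfa_move = {
        (0, "a"): 1, (0, "b"): 0,
        (1, "a"): 1, (1, "b"): 2,
        (2, "a"): 1, (2, "b"): 0,
    }
    start_state = 0
    accepting = {2}

    def run_dfa(text):
        """Simulate the DFA on text; reject on any undefined transition."""
        s = start_state
        for c in text:
            if (s, c) not in dfa_move:
                return False
            s = dfa_move[(s, c)]
        return s in accepting

    print(run_dfa("abab"))   # True  -- ends with "ab"
    print(run_dfa("abba"))   # False
    print(run_dfa("ab"))     # True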

Fall 2003 CS416 Compiler Design, slide 16: Implementing an NFA

  S ← ε-closure({s0})             { the set of all states reachable from s0 by ε-transitions }
  c ← nextchar
  while (c != eos) do
  begin
      S ← ε-closure(move(S, c))   { the set of all states reachable from a state in S by a transition on c }
      c ← nextchar
  end
  if (S ∩ F != ∅) then            { if S contains an accepting state }
      return "yes"
  else
      return "no"

This algorithm is not efficient.
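
A runnable sketch of the same set-of-states simulation (again not from the slides): ε-closure via a worklist, then the loop above. Encoding the NFA as a dict with None standing for ε is an assumption of this example.

    def epsilon_closure(states, nfa_move):
        """All states reachable from `states` using only epsilon (None) transitions."""
        closure = set(states)
        stack = list(states)
        while stack:
            s = stack.pop()
            for t in nfa_move.get((s, None), set()):
                if t not in closure:
                    closure.add(t)
                    stack.append(t)
        return closure

    def move(states, symbol, nfa_move):
        """Union of the transitions on `symbol` from every state in `states`."""
        result = set()
        for s in states:
            result |= nfa_move.get((s, symbol), set())
        return result

    def run_nfa(text, nfa_move, start, accepting):
        S = epsilon_closure({start}, nfa_move)
        for c in text:
            S = epsilon_closure(move(S, c, nfa_move), nfa_move)
        return bool(S & accepting)

    # The example NFA for (a|b)*ab from slide 13 (it happens to have no epsilon edges).
    nfa_move = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
    print(run_nfa("aabab", nfa_move, start=0, accepting={2}))  # True
    print(run_nfa("aba", nfa_move, start=0, accepting={2}))    # False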

Fall 2003 CS416 Compiler Design, slide 17: Converting a Regular Expression into an NFA (Thompson's Construction)

This is one way to convert a regular expression into an NFA; there are other (more efficient) ways.
Thompson's Construction is a simple and systematic method. It guarantees that the resulting NFA has exactly one final state and one start state.
Construction starts from the simplest parts (the alphabet symbols). To create an NFA for a complex regular expression, the NFAs of its sub-expressions are combined.

Fall 2003 CS416 Compiler Design, slide 18: Thompson's Construction (cont.)

To recognize the empty string ε: a start state i with a single ε-edge to a final state f.

To recognize a symbol a of the alphabet: a start state i with a single a-edge to a final state f.

If N(r1) and N(r2) are the NFAs for the regular expressions r1 and r2, then for the regular expression r1 | r2: a new start state i has ε-edges to the start states of N(r1) and N(r2), and the final states of N(r1) and N(r2) have ε-edges to a new final state f.

Fall 2003 CS416 Compiler Design, slide 19: Thompson's Construction (cont.)

For the regular expression r1 r2: N(r1) and N(r2) are placed in sequence, the final state of N(r1) being merged with the start state of N(r2). The final state of N(r2) becomes the final state of N(r1 r2).

For the regular expression r*: a new start state i and a new final state f are added around N(r), with ε-edges from i to the start of N(r), from the final state of N(r) to f, from i directly to f (to allow the empty string), and from the final state of N(r) back to its start (to allow repetition).

Fall 2003 CS416 Compiler Design, slide 20: Thompson's Construction (Example: (a|b)* a)

The NFA is built bottom-up:
– NFAs for a and for b (a single a-edge and a single b-edge)
– NFA for (a|b): new start and final states joined to the a- and b-NFAs by ε-edges
– NFA for (a|b)*: the star construction wrapped around the (a|b) NFA
– NFA for (a|b)* a: the (a|b)* NFA concatenated with the NFA for a

(The diagrams of these intermediate NFAs are not reproduced here; the final NFA is the one used in the subset-construction example on slide 22.)
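
As a sketch of how these rules compose (not the slides' code), the fragment combinators below build a Thompson-style NFA for (a|b)* a; the state numbering, the None-for-ε convention, and the NFABuilder name are this example's own assumptions, and concatenation is glued with an ε-edge here instead of merging the two states.

    from collections import defaultdict

    class NFABuilder:
        """Thompson-construction fragments; each fragment is a (start, final) state pair."""
        def __init__(self):
            self.move = defaultdict(set)   # (state, symbol or None) -> set of states; None = epsilon
            self.next_state = 0

        def new_state(self):
            s = self.next_state
            self.next_state += 1
            return s

        def symbol(self, a):
            i, f = self.new_state(), self.new_state()
            self.move[(i, a)].add(f)
            return i, f

        def union(self, frag1, frag2):
            i, f = self.new_state(), self.new_state()
            for start, final in (frag1, frag2):
                self.move[(i, None)].add(start)   # epsilon from the new start into each alternative
                self.move[(final, None)].add(f)   # epsilon from each alternative's final to the new final
            return i, f

        def concat(self, frag1, frag2):
            s1, f1 = frag1
            s2, f2 = frag2
            self.move[(f1, None)].add(s2)         # glue the fragments with an epsilon edge
            return s1, f2

        def star(self, frag):
            s, f = frag
            i, new_f = self.new_state(), self.new_state()
            self.move[(i, None)] |= {s, new_f}    # enter the loop, or accept the empty string
            self.move[(f, None)] |= {s, new_f}    # repeat the loop, or leave it
            return i, new_f

    builder = NFABuilder()
    a_b = builder.union(builder.symbol("a"), builder.symbol("b"))   # (a|b)
    nfa = builder.concat(builder.star(a_b), builder.symbol("a"))    # (a|b)* a
    start, final = nfa
    print("start =", start, " final =", final)
    print(dict(builder.move))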

Fall 2003 CS416 Compiler Design, slide 21: Converting an NFA into a DFA (Subset Construction)

  put ε-closure({s0}) as an unmarked state into the set of DFA states (DS)
  while (there is an unmarked S1 in DS) do
  begin
      mark S1
      for each input symbol a do
      begin
          S2 ← ε-closure(move(S1, a))
          if (S2 is not in DS) then
              add S2 into DS as an unmarked state
          transfunc[S1, a] ← S2
      end
  end

– move(S1, a) is the set of states to which there is a transition on a from some state s in S1.
– ε-closure({s0}) is the set of all states reachable from s0 by ε-transitions.
– A state S in DS is an accepting state of the DFA if some state in S is an accepting state of the NFA.
– The start state of the DFA is ε-closure({s0}).
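
A runnable sketch of subset construction (illustrative, not the course's code); the NFA is encoded as a dict from (state, symbol) pairs to sets with None standing for ε, frozensets play the role of the DFA's subset-states, and the ε-closure and move helpers are repeated from the previous sketch so this one stands alone.

    def epsilon_closure(states, nfa_move):
        """All states reachable from `states` using only epsilon (None) transitions."""
        closure, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for t in nfa_move.get((s, None), set()):
                if t not in closure:
                    closure.add(t)
                    stack.append(t)
        return closure

    def move(states, a, nfa_move):
        """Union of the NFA transitions on symbol a from every state in `states`."""
        return set().union(*(nfa_move.get((s, a), set()) for s in states))

    def subset_construction(nfa_move, alphabet, start, nfa_accepting):
        """Convert an NFA (dict: (state, symbol) -> set of states) into a DFA."""
        start_set = frozenset(epsilon_closure({start}, nfa_move))
        dfa_states = {start_set}
        unmarked = [start_set]
        transfunc = {}
        while unmarked:
            S1 = unmarked.pop()                                   # mark S1
            for a in alphabet:
                S2 = frozenset(epsilon_closure(move(S1, a, nfa_move), nfa_move))
                if not S2:
                    continue                                      # no transition on a
                if S2 not in dfa_states:
                    dfa_states.add(S2)
                    unmarked.append(S2)                           # add S2 as an unmarked state
                transfunc[(S1, a)] = S2
        accepting = {S for S in dfa_states if S & nfa_accepting}
        return start_set, transfunc, accepting

    # NFA for (a|b)*ab from the earlier example (no epsilon transitions).
    nfa_move = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
    start, transfunc, accepting = subset_construction(nfa_move, "ab", 0, {2})
    print(start)       # frozenset({0})
    print(accepting)   # {frozenset({0, 2})} -- the only DFA state containing NFA state 2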

Fall 2003 CS416 Compiler Design, slide 22: Converting an NFA into a DFA (Example)

(The NFA is the Thompson-construction NFA for (a|b)* a from slide 20.)

S0 = ε-closure({0}) = {0,1,2,4,7};  put S0 into DS as an unmarked state.

Mark S0:
  ε-closure(move(S0, a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1   → S1 into DS
  ε-closure(move(S0, b)) = ε-closure({5})   = {1,2,4,5,6,7}   = S2   → S2 into DS
  transfunc[S0, a] ← S1    transfunc[S0, b] ← S2

Mark S1:
  ε-closure(move(S1, a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1
  ε-closure(move(S1, b)) = ε-closure({5})   = {1,2,4,5,6,7}   = S2
  transfunc[S1, a] ← S1    transfunc[S1, b] ← S2

Mark S2:
  ε-closure(move(S2, a)) = ε-closure({3,8}) = {1,2,3,4,6,7,8} = S1
  ε-closure(move(S2, b)) = ε-closure({5})   = {1,2,4,5,6,7}   = S2
  transfunc[S2, a] ← S1    transfunc[S2, b] ← S2

Fall 2003 CS416 Compiler Design, slide 23: Converting an NFA into a DFA (Example, cont.)

S0 is the start state of the DFA, since 0 is a member of S0 = {0,1,2,4,7}.
S1 is an accepting state of the DFA, since 8 is a member of S1 = {1,2,3,4,6,7,8}.

(Resulting DFA: S0 → S1 on a, S0 → S2 on b; S1 → S1 on a, S1 → S2 on b; S2 → S1 on a, S2 → S2 on b.)

Fall 2003 CS416 Compiler Design, slide 24: Converting Regular Expressions Directly to DFAs

We may convert a regular expression into a DFA without creating an NFA first.

First we augment the given regular expression by concatenating it with a special symbol #:
  r → (r)#    (the augmented regular expression)

Then we create a syntax tree for this augmented regular expression. In this syntax tree, all alphabet symbols (plus # and the empty string) of the augmented regular expression are at the leaves, and all inner nodes are the operators of the augmented regular expression.

Then each alphabet symbol (plus #) is numbered (given a position number).

Fall 2003 CS416 Compiler Design, slide 25: Regular Expression → DFA (cont.)

(a|b)* a  →  (a|b)* a #     (augmented regular expression)

Syntax tree of (a|b)* a #: the root is a concatenation node whose children are (a|b)* a and #; below that is a concatenation of (a|b)* and a; the star node is over a union node with leaves a and b.
– each alphabet symbol is numbered (positions): a = 1, b = 2, a = 3, # = 4
– each symbol is at a leaf
– inner nodes are operators

Fall 2003 CS416 Compiler Design, slide 26: followpos

We define the function followpos for the positions (the numbers assigned to the leaves).

followpos(i) is the set of positions which can follow position i in the strings generated by the augmented regular expression.

For example, for ( a | b )* a # with positions a = 1, b = 2, a = 3, # = 4:
  followpos(1) = {1,2,3}
  followpos(2) = {1,2,3}
  followpos(3) = {4}
  followpos(4) = {}

followpos is defined only for leaves; it is not defined for inner nodes.

Fall 2003 CS416 Compiler Design, slide 27: firstpos, lastpos, nullable

To evaluate followpos, we need three more functions, defined for all nodes of the syntax tree (not just for leaves):

firstpos(n) – the set of positions of the first symbols of strings generated by the sub-expression rooted at n.
lastpos(n)  – the set of positions of the last symbols of strings generated by the sub-expression rooted at n.
nullable(n) – true if the empty string is a member of the strings generated by the sub-expression rooted at n, false otherwise.

Fall 2003 CS416 Compiler Design, slide 28: How to evaluate firstpos, lastpos, nullable

For a leaf labeled ε:
  nullable = true;  firstpos = ∅;  lastpos = ∅
For a leaf labeled with position i:
  nullable = false;  firstpos = {i};  lastpos = {i}
For a | node with children c1 and c2:
  nullable = nullable(c1) or nullable(c2)
  firstpos = firstpos(c1) ∪ firstpos(c2)
  lastpos  = lastpos(c1) ∪ lastpos(c2)
For a concatenation node with children c1 and c2:
  nullable = nullable(c1) and nullable(c2)
  firstpos = if nullable(c1) then firstpos(c1) ∪ firstpos(c2) else firstpos(c1)
  lastpos  = if nullable(c2) then lastpos(c1) ∪ lastpos(c2) else lastpos(c2)
For a * node with child c1:
  nullable = true;  firstpos = firstpos(c1);  lastpos = lastpos(c1)

Fall 2003 CS416 Compiler Design, slide 29: How to evaluate followpos

Two rules define the function followpos:
1. If n is a concatenation node with left child c1 and right child c2, and i is a position in lastpos(c1), then all positions in firstpos(c2) are in followpos(i).
2. If n is a star node, and i is a position in lastpos(n), then all positions in firstpos(n) are in followpos(i).

If firstpos and lastpos have been computed for each node, followpos of each position can be computed by one depth-first traversal of the syntax tree.

Fall 2003 CS416 Compiler Design, slide 30: Example — ( a | b )* a #

(Syntax tree of (a|b)* a # annotated with firstpos (green) and lastpos (blue) at each node; e.g. firstpos of the root is {1,2,3} and lastpos of the root is {4}.)

Then we can calculate followpos:
  followpos(1) = {1,2,3}
  followpos(2) = {1,2,3}
  followpos(3) = {4}
  followpos(4) = {}

After we calculate the follow positions, we are ready to create the DFA for the regular expression.
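
An illustrative sketch (not the slides' code) that computes nullable, firstpos, lastpos, and followpos over a hard-coded syntax tree for (a|b)* a #; the tuple encoding of the tree ("leaf", "or", "cat", "star") is just this example's convention.

    # Syntax tree nodes: ("leaf", pos), ("or", c1, c2), ("cat", c1, c2), ("star", c1).
    tree = ("cat",
            ("cat", ("star", ("or", ("leaf", 1), ("leaf", 2))),   # (a|b)  with a = 1, b = 2
                    ("leaf", 3)),                                 # a = 3
            ("leaf", 4))                                          # # = 4

    followpos = {1: set(), 2: set(), 3: set(), 4: set()}

    def analyze(n):
        """Return (nullable, firstpos, lastpos) of node n, filling followpos on the way."""
        kind = n[0]
        if kind == "leaf":
            return False, {n[1]}, {n[1]}
        if kind == "or":
            null1, first1, last1 = analyze(n[1])
            null2, first2, last2 = analyze(n[2])
            return null1 or null2, first1 | first2, last1 | last2
        if kind == "cat":
            null1, first1, last1 = analyze(n[1])
            null2, first2, last2 = analyze(n[2])
            for i in last1:                       # rule 1: lastpos(c1) -> firstpos(c2)
                followpos[i] |= first2
            first = first1 | first2 if null1 else first1
            last = last1 | last2 if null2 else last2
            return null1 and null2, first, last
        if kind == "star":
            _, first1, last1 = analyze(n[1])
            for i in last1:                       # rule 2: lastpos(n) -> firstpos(n)
                followpos[i] |= first1
            return True, first1, last1

    nullable, firstpos_root, lastpos_root = analyze(tree)
    print(firstpos_root)   # {1, 2, 3}
    print(followpos)       # {1: {1,2,3}, 2: {1,2,3}, 3: {4}, 4: set()}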

Fall 2003 CS416 Compiler Design, slide 31: Algorithm (RE → DFA)

Create the syntax tree of (r)#.
Calculate the functions followpos, firstpos, lastpos, nullable.
Put firstpos(root) into the states of the DFA as an unmarked state.

while (there is an unmarked state S in the states of the DFA) do
– mark S
– for each input symbol a do
    let s1, ..., sn be the positions in S whose symbols are a
    S' ← followpos(s1) ∪ ... ∪ followpos(sn)
    move(S, a) ← S'
    if (S' is not empty and not already in the states of the DFA) then
      put S' into the states of the DFA as an unmarked state

The start state of the DFA is firstpos(root).
The accepting states of the DFA are all states containing the position of #.

Fall 2003 CS416 Compiler Design, slide 32: Example — ( a | b )* a #

followpos(1) = {1,2,3}   followpos(2) = {1,2,3}   followpos(3) = {4}   followpos(4) = {}

S1 = firstpos(root) = {1,2,3}

Mark S1:
  a: followpos(1) ∪ followpos(3) = {1,2,3,4} = S2    move(S1, a) = S2
  b: followpos(2) = {1,2,3} = S1                     move(S1, b) = S1

Mark S2:
  a: followpos(1) ∪ followpos(3) = {1,2,3,4} = S2    move(S2, a) = S2
  b: followpos(2) = {1,2,3} = S1                     move(S2, b) = S1

start state: S1        accepting states: {S2}

(Resulting DFA: S1 → S2 on a, S1 → S1 on b, S2 → S2 on a, S2 → S1 on b.)

Fall 2003 CS416 Compiler Design, slide 33: Example — ( a | ε ) b c* #

followpos(1) = {2}   followpos(2) = {3,4}   followpos(3) = {3,4}   followpos(4) = {}

S1 = firstpos(root) = {1,2}

Mark S1:
  a: followpos(1) = {2} = S2       move(S1, a) = S2
  b: followpos(2) = {3,4} = S3     move(S1, b) = S3

Mark S2:
  b: followpos(2) = {3,4} = S3     move(S2, b) = S3

Mark S3:
  c: followpos(3) = {3,4} = S3     move(S3, c) = S3

start state: S1        accepting states: {S3}

(Resulting DFA: S1 → S2 on a, S1 → S3 on b, S2 → S3 on b, S3 → S3 on c.)

Fall 2003 CS416 Compiler Design, slide 34: Minimizing the Number of States of a DFA

Partition the set of states into two groups:
– G1: the set of accepting states
– G2: the set of non-accepting states

For each new group G:
– partition G into subgroups such that states s1 and s2 are in the same subgroup iff for every input symbol a, s1 and s2 have transitions into states in the same group.

The start state of the minimized DFA is the group containing the start state of the original DFA.
The accepting states of the minimized DFA are the groups containing the accepting states of the original DFA.
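
A small runnable sketch of this partition refinement (not the course's code), applied to the three-state DFA S0/S1/S2 obtained by subset construction a few slides back; minimize_dfa and the dict-based transition table are made-up names for illustration.

    def minimize_dfa(states, alphabet, move, accepting):
        """Partition-refinement DFA minimization (a simple, unoptimized sketch)."""
        # Initial partition: accepting vs. non-accepting states.
        partition = [g for g in (set(accepting), set(states) - set(accepting)) if g]

        def group_of(s, groups):
            for g in groups:
                if s in g:
                    return frozenset(g)
            return None   # undefined transition target

        changed = True
        while changed:
            changed = False
            new_partition = []
            for g in partition:
                # Split g by the tuple of groups its states move into, one entry per symbol.
                buckets = {}
                for s in g:
                    key = tuple(group_of(move.get((s, a)), partition) for a in alphabet)
                    buckets.setdefault(key, set()).add(s)
                if len(buckets) > 1:
                    changed = True
                new_partition.extend(buckets.values())
            partition = new_partition
        return partition

    # DFA from the subset-construction example: states S0, S1, S2 for (a|b)*a, accepting {S1}.
    move = {("S0", "a"): "S1", ("S0", "b"): "S2",
            ("S1", "a"): "S1", ("S1", "b"): "S2",
            ("S2", "a"): "S1", ("S2", "b"): "S2"}
    print(minimize_dfa({"S0", "S1", "S2"}, "ab", move, {"S1"}))
    # e.g. [{'S1'}, {'S0', 'S2'}] -- S0 and S2 turn out to be equivalent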

Fall 2003 CS416 Compiler Design, slide 35: Minimizing a DFA — Example

(DFA with states 1, 2, 3 over {a, b}; state 2 is the accepting state.)

G1 = {2}    G2 = {1,3}

G2 cannot be partitioned further, because
  move(1, a) = 2    move(1, b) = 3
  move(3, a) = 2    move(3, b) = 3
i.e. on each input symbol, states 1 and 3 move into the same group.

So the minimized DFA (with the minimum number of states) has the two states {1,3} and {2}.

Fall 2003 CS416 Compiler Design, slide 36: Minimizing a DFA — Another Example

(DFA with states 1, 2, 3, 4 over {a, b}; state 4 is the accepting state.)

Groups: {1,2,3} and {4}

         a         b
  1 -> 2         1 -> 3
  2 -> 2         2 -> 3
  3 -> 4         3 -> 3

State 3 moves into a different group on a, so {1,2,3} is split into {1,2} and {3}; after that there is no more partitioning.

So the minimized DFA has the three states {1,2}, {3}, and {4}.

Fall 2003 CS416 Compiler Design, slide 37: Some Other Issues in the Lexical Analyzer

The lexical analyzer has to recognize the longest possible string.
– Ex: for the identifier newval, the candidate prefixes are n, ne, new, newv, newva, newval; the lexer must return newval.

What is the end of a token? Is there any character which marks the end of a token?
– It is normally not defined explicitly.
– If the number of characters in a token is fixed, there is no problem (e.g. + and -).
– But for identifiers (e.g. in Pascal), the end of an identifier is marked by the first character that cannot appear in an identifier.
– We may need lookahead. In Prolog, for example:
    p :- X is 1.
    p :- X is 1.5.
  A dot followed by a white-space character can mark the end of a number; but if that is not the case, the dot must be treated as part of the number.
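
A toy sketch (not from the slides) of the longest-match rule, using Python regexes; the token names and patterns are illustrative only, and the number pattern only consumes a dot when digits follow it, which mirrors the lookahead discussion above.

    import re

    # Longest match: try every pattern at the current position and keep the longest lexeme.
    token_specs = [
        ("num", re.compile(r"[0-9]+(\.[0-9]+)?")),    # the dot is taken only if digits follow
        ("id", re.compile(r"[A-Za-z][A-Za-z0-9]*")),
        ("assgop", re.compile(r":=")),
        ("addop", re.compile(r"\+|-")),
    ]
    skip = re.compile(r"[ \t\n]+")

    def scan(text):
        pos, tokens = 0, []
        while pos < len(text):
            ws = skip.match(text, pos)
            if ws:
                pos = ws.end()
                continue
            best = max(((name, p.match(text, pos)) for name, p in token_specs if p.match(text, pos)),
                       key=lambda nm: nm[1].end(), default=None)
            if best is None:
                raise ValueError(f"illegal character at position {pos}")
            name, m = best
            tokens.append((name, m.group()))
            pos = m.end()                    # the longest possible lexeme is consumed
        return tokens

    print(scan("newval := oldval + 12"))
    # [('id', 'newval'), ('assgop', ':='), ('id', 'oldval'), ('addop', '+'), ('num', '12')]
    print(scan("x := 1.5"))
    # the "." is part of the number only because a digit follows: ('num', '1.5')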

Fall 2003 CS416 Compiler Design, slide 38: Some Other Issues in the Lexical Analyzer (cont.)

Skipping comments
– Normally we do not return a comment as a token.
– We skip a comment and return the next token (which is not a comment) to the parser.
– So comments are processed only by the lexical analyzer, and they do not complicate the syntax of the language.

Symbol table interface
– The symbol table holds information about tokens (at least the lexemes of identifiers).
– How to implement the symbol table, and what kind of operations: a hash table (open addressing or chaining); inserting into the hash table; finding the entry of a token from its lexeme.
– Positions of the tokens in the source file (for error handling).
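
To round this off, a minimal sketch (not from the slides) of such a symbol-table interface, backed by Python's built-in hash table (dict); the insert/lookup method names are made up.

    class SymbolTable:
        """Maps identifier lexemes to symbol-table positions (indices)."""
        def __init__(self):
            self._index = {}    # lexeme -> position (dict is itself a hash table)
            self._entries = []  # position -> attributes gathered later by other phases

        def insert(self, lexeme):
            """Return the position of lexeme, inserting it if it is not yet present."""
            if lexeme not in self._index:
                self._index[lexeme] = len(self._entries)
                self._entries.append({"lexeme": lexeme})
            return self._index[lexeme]

        def lookup(self, lexeme):
            """Return the position of lexeme, or None if it has never been seen."""
            return self._index.get(lexeme)

    symtab = SymbolTable()
    print(symtab.insert("newval"))   # 0
    print(symtab.insert("oldval"))   # 1
    print(symtab.insert("newval"))   # 0 again -- same identifier, same position
    print(symtab.lookup("missing"))  # None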