Lexical Analysis

Outline

- Role of the lexical analyzer
- Function of the lexical analyzer
- Specification of tokens
- Recognition of tokens
- Lexical analyzer generator
- Finite automata
- Design of a lexical analyzer generator

The role of the lexical analyzer

The lexical analyzer sits between the source program and the parser: the parser issues getNextToken, the lexical analyzer reads characters and returns the next token, and the parser's output goes on to semantic analysis. Both phases share the symbol table. Regular expressions (REs) are used for writing the token rules; NFAs and DFAs are used for recognizing the tokens. A small C sketch of this interface follows.
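The sketch below is an illustration only, not part of the slides: the names Token, TokenName, and the stub body of getNextToken are assumptions; a real lexer would scan the source program.

#include <stdio.h>

typedef enum { TOK_ID, TOK_NUM, TOK_IF, TOK_EOF } TokenName;

typedef struct {
    TokenName name;        /* token class: identifier, keyword, ... */
    const char *lexeme;    /* the matched source text */
} Token;

/* Stub lexer (assumption): a real one would read the input buffer. */
Token getNextToken(void) {
    Token eof = { TOK_EOF, "" };
    return eof;
}

/* The parser drives the lexer, one token at a time. */
void parse(void) {
    Token t = getNextToken();
    while (t.name != TOK_EOF) {
        /* ... consume the token, build the parse tree ... */
        t = getNextToken();
    }
}

int main(void) {
    parse();
    return 0;
}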

Tokenizing

Breaking the program down into words or "tokens":

- Input: stream of characters
- Output: stream of names, keywords, punctuation marks
- Side effect: discards whitespace and comments

Source code:   if (b==0) a = "Hi";
Token stream:  if ( b == 0 ) a = "Hi" ;

Lexical analysis produces the token stream; parsing consumes it.

Breaking up text

Given the input elsex=0;, should the scanner produce else x = 0 ; or elsex = 0 ;? REs alone are not enough: we need rules for choosing between the alternatives.

- Most languages: the longest matching token wins.
- Ties in length are resolved by prioritizing tokens.

REs + priorities + the longest-matching-token rule = a lexer definition. A sketch of the rule in code follows.
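A minimal sketch of the longest-match rule with just two hand-written recognizers; match_else and match_id are hypothetical names, not part of any library. Each recognizer reports how many characters it can match at the start of the input; the longest match wins, and ties keep the earlier (higher-priority) rule, so the keyword beats the identifier only on an exact tie.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int match_else(const char *s) {   /* the keyword "else" */
    return strncmp(s, "else", 4) == 0 ? 4 : 0;
}

static int match_id(const char *s) {     /* identifiers: [a-z]+ */
    int n = 0;
    while (islower((unsigned char)s[n])) n++;
    return n;
}

int main(void) {
    const char *input = "elsex";
    int (*rules[])(const char *) = { match_else, match_id };  /* priority order */
    const char *names[] = { "ELSE", "ID" };
    int best = 0, which = -1;
    for (int i = 0; i < 2; i++) {
        int n = rules[i](input);
        if (n > best) { best = n; which = i; }  /* longer wins; ties keep earlier rule */
    }
    if (which >= 0)
        printf("token %s, lexeme \"%.*s\"\n", names[which], best, input);
    return 0;  /* prints: token ID, lexeme "elsex" */
}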

Function of the lexical analyzer

- Produces a stream of tokens
- Eliminates blank spaces
- Keeps track of line numbers
- Reports errors encountered while generating tokens
- Generates the symbol table, which stores information about identifiers and constants encountered in the input

Tokens, patterns, and lexemes

- Token: the class or category of an input string, represented as a pair (token name, token value). E.g., identifiers, keywords, and constants are tokens.
- Pattern: the set of rules that describes a token.
- Lexeme: a sequence of characters in the source program that matches the pattern for a token, e.g. int, i, num, ans.

Example

Token        Informal description                      Sample lexemes
if           the characters i, f                       if
else         the characters e, l, s, e                 else
comparison   < or > or <= or >= or == or !=            <=, !=
id           a letter followed by letters and digits   pi, score, D2
number       any numeric constant                      3.14159, 0, 6.02e23
literal      anything but ", surrounded by "           "core dumped"

Example input: printf("total = %d\n", score); here printf and score are lexemes matching id, and "total = %d\n" is a lexeme matching literal.

Block schematic of the lexical analyzer

Lexemes are read from the input buffer into the lexical analyzer, which consists of a finite state machine (a finite automata simulator) driven by the patterns and a pattern-matching algorithm; its output is the stream of tokens.

Lexical errors

Some errors are beyond the power of the lexical analyzer to recognize, e.g. fi (a == f(x)) ..., since fi is itself a valid lexeme for the token id. However, it may be able to recognize errors like d = 2r. Such errors are recognized when no pattern for tokens matches a character sequence.

Attributes for tokens

For E = M * C ** 2 the lexer produces:

<id, pointer to symbol-table entry for E>
<assign-op>
<id, pointer to symbol-table entry for M>
<mult-op>
<id, pointer to symbol-table entry for C>
<exp-op>
<number, integer value 2>

Specification of tokens

In the theory of compilation, regular expressions are used to formalize the specification of tokens. Regular expressions are a means of specifying regular languages. Example: letter_ (letter_ | digit)*. Each regular expression is a pattern specifying the form of strings.

Regular expressions

- ε is a regular expression, L(ε) = {ε}
- If a is a symbol in Σ, then a is a regular expression, L(a) = {a}
- (r) | (s) is a regular expression denoting the language L(r) ∪ L(s)
- (r)(s) is a regular expression denoting the language L(r)L(s)
- (r)* is a regular expression denoting (L(r))*
- (r) is a regular expression denoting L(r)

Regular definitions

d1 -> r1
d2 -> r2
...
dn -> rn

Example:

letter_ -> A | B | ... | Z | a | b | ... | z | _
digit   -> 0 | 1 | ... | 9
id      -> letter_ (letter_ | digit)*

Extensions

- One or more instances: (r)+
- Zero or one instance: (r)?
- Character classes: [abc]

Example:

letter_ -> [A-Za-z_]
digit   -> [0-9]
id      -> letter_ (letter_ | digit)*

Recognition of tokens

The starting point is the language grammar, to understand the tokens:

stmt -> if expr then stmt
      | if expr then stmt else stmt
      | ε
expr -> term relop term
      | term
term -> id
      | number

Recognition of tokens (cont.)

The next step is to formalize the patterns:

digit  -> [0-9]
Digits -> digit+
number -> Digits (. Digits)? (E [+-]? Digits)?
letter -> [A-Za-z_]
id     -> letter (letter | digit)*
If     -> if
Then   -> then
Else   -> else
Relop  -> < | > | <= | >= | = | <>

We also need to handle whitespace:

ws -> (blank | tab | newline)+

The number pattern can be tested directly against sample lexemes, as sketched below.
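This sketch is an assumption, not part of the slides: it uses the POSIX <regex.h> library to check the number pattern above, written as an extended regular expression.

#include <regex.h>
#include <stdio.h>

int main(void) {
    regex_t re;
    /* the "number" pattern from the slide, anchored at both ends */
    const char *pattern = "^[0-9]+(\\.[0-9]+)?(E[+-]?[0-9]+)?$";
    const char *samples[] = { "3.14159", "0", "6.02E23", "2r" };
    if (regcomp(&re, pattern, REG_EXTENDED) != 0) return 1;
    for (int i = 0; i < 4; i++)
        printf("%-8s %s\n", samples[i],
               regexec(&re, samples[i], 0, NULL, 0) == 0 ? "number" : "no match");
    regfree(&re);
    return 0;   /* "2r" is the only sample that fails to match */
}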

Transition diagrams

[Diagram: transition diagram for relop]

Transition diagrams (cont.)

[Diagram: transition diagram for reserved words and identifiers]

Transition diagrams (cont.)

[Diagram: transition diagram for unsigned numbers]

Transition diagrams (cont.)

[Diagram: transition diagram for whitespace]
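A transition diagram maps directly onto code: each state becomes a point in the control flow, and each labeled edge becomes a test on the next character. Below is a sketch (with illustrative names, an assumption rather than the slides' code) of the relop diagram as a C function; the classic "retract one character" states correspond to not advancing the pointer past the lookahead.

#include <stdio.h>

typedef enum { LT, LE, EQ, NE, GT, GE, NONE } Relop;

/* Scans a relational operator (< > <= >= = <>) at *s; advances *s past it. */
Relop relop(const char **s) {
    const char *p = *s;
    switch (*p++) {                      /* state 0: read one character */
    case '<':
        if (*p == '=') { *s = p + 1; return LE; }
        if (*p == '>') { *s = p + 1; return NE; }
        *s = p; return LT;               /* retract: lookahead not consumed */
    case '=':
        *s = p; return EQ;
    case '>':
        if (*p == '=') { *s = p + 1; return GE; }
        *s = p; return GT;               /* retract */
    default:
        return NONE;                     /* not a relop */
    }
}

int main(void) {
    const char *src = "<=";
    printf("%d\n", relop(&src));         /* prints 1 (LE) */
    return 0;
}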

LEX Tool

Lexical analyzer generator - Lex

A Lex specification file x.l is fed to the Lex compiler, which produces lex.yy.c; lex.yy.c is compiled by a C compiler into a.out; running a.out on an input stream yields the sequence of tokens.

Lexer generator specification

Input to the lexer generator:
- a list of regular expressions, in priority order
- an associated action for each RE (generates the appropriate kind of token)

Output: a program that reads an input stream and breaks it up into tokens according to the REs (or reports a lexical error: "unexpected character"). Lex produces a C program from a lexical specification.

Structure of Lex programs

%{ declarations %}
%%
translation rules, each of the form:  pattern  { action }
%%
auxiliary functions

Example %{ /* definitions of manifest constants LT, LE, EQ, NE, GT, GE, IF, THEN, ELSE, ID, NUMBER, RELOP */ %} /* regular definitions delim [ \t\n] ws {delim}+ letter [A-Za-z] digit [0-9] id {letter}({letter}|{digit})* number {digit}+(\.{digit}+)?(E[+-]?{digit}+)? %% {ws} {/* no action and no return */} if {return(IF);} then {return(THEN);} else {return(ELSE);} {id} {yylval = (int) installID(); return(ID); } {number} {yylval = (int) installNum(); return(NUMBER);} … Int installID() {/* funtion to install the lexeme, whose first character is pointed to by yytext, and whose length is yyleng, into the symbol table and return a pointer thereto */ } Int installNum() { /* similar to installID, but puts numerical constants into a separate table */

Finite automata

Regular expressions = specification; finite automata = implementation.

A finite automaton consists of:
- an input alphabet Σ
- a set of states S
- a start state n ∈ S
- a set of accepting states F ⊆ S
- a set of transitions: state ->(input) state

Finite automata: transitions

A transition s1 ->(a) s2 is read: "in state s1, on input a, go to state s2."

- At the end of the input: if in an accepting state => accept, otherwise => reject.
- If no transition is possible => reject.

Finite automata state graphs

[Diagram: the notation used in state graphs: the start state, an accepting state (double circle), and a transition labeled a.]

A simple example

A finite automaton that accepts only "1". A finite automaton accepts a string if we can follow transitions labeled with the characters in the string from the start state to some accepting state.

Another simple example

A finite automaton accepting any number of 1's followed by a single 0. Alphabet: {0, 1}.

Epsilon moves

Another kind of transition: ε-moves. On an ε-edge from state A to state B, the machine can move from A to B without reading any input.

Deterministic and nondeterministic automata

Deterministic finite automata (DFA):
- one transition per input symbol per state
- no ε-moves

Nondeterministic finite automata (NFA):
- can have multiple transitions for one input in a given state
- can have ε-moves

Finite automata have finite memory: they only need to encode the current state.

Execution of finite automata

A DFA can take only one path through the state graph, completely determined by the input. NFAs can choose whether to make ε-moves and which of multiple transitions to take for a single input.

Acceptance of NFAs

An NFA can be in multiple states at once while reading its input. Rule: an NFA accepts a string if it can end up in a final state.

NFA vs. DFA (1)

NFAs and DFAs recognize the same set of languages (the regular languages). DFAs are easier to implement: there are no choices to consider.

NFA vs. DFA (2)

For a given language the NFA can be simpler than the DFA, and the DFA can be exponentially larger than the NFA.

Regular expressions to finite automata

High-level sketch: lexical specification -> regular expressions -> NFA -> DFA -> table-driven implementation of the DFA.

Regular expressions to NFA (1)

For each kind of regular expression, define an NFA (written: the NFA for regular expression A).

- For ε: a single ε-transition from the start state to the accepting state.
- For input a: a single transition labeled a from the start state to the accepting state.

Regular expressions to NFA (2)

- For AB: the NFA for A connected by an ε-transition to the NFA for B.
- For A | B: a new start state with ε-transitions into the NFAs for A and for B, and ε-transitions from both into a new accepting state.

Regular expressions to NFA (3)

- For A*: ε-transitions into the NFA for A, from A's accepting state back around for repetition, and bypassing A entirely for the empty string.

Example of RegExp -> NFA conversion

Consider the regular expression (1 | 0)*1. [Diagram: the resulting NFA, with states A through J connected by ε-transitions and transitions on 0 and 1.]

Next step

Within the pipeline (lexical specification -> regular expressions -> NFA -> DFA -> table-driven implementation), the next step is NFA -> DFA.

NFA to DFA: the trick

Simulate the NFA:
- Each state of the resulting DFA is a non-empty subset of the states of the NFA.
- The start state is the set of NFA states reachable through ε-moves from the NFA start state.
- Add a transition S ->(a) S' to the DFA iff S' is the set of NFA states reachable from the states in S after seeing the input a, considering ε-moves as well.

A small code sketch of this simulation follows.
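A minimal sketch of the simulation, using bitmasks to represent subsets of NFA states and computing DFA transitions on the fly. The example NFA is an assumption for illustration (not the slides' A-J automaton): it recognizes (0|1)*1 with two states and has no ε-moves, so the ε-closure step is trivial here.

#include <stdio.h>

#define NSTATES 2   /* NFA states 0 and 1; state 1 accepts */

/* nfa[s][c] = bitmask of NFA states reachable from s on symbol c.
   State 0: on 0 -> {0}; on 1 -> {0,1}.  State 1: no outgoing edges. */
static const unsigned nfa[NSTATES][2] = {
    { 1u << 0, (1u << 0) | (1u << 1) },
    { 0u,      0u                    },
};
static const unsigned accepting = 1u << 1;

/* One DFA transition: the union of NFA moves from every state in the
   subset (with ε-moves we would also take the ε-closure here). */
static unsigned dfa_move(unsigned subset, int c) {
    unsigned next = 0;
    for (int s = 0; s < NSTATES; s++)
        if (subset & (1u << s))
            next |= nfa[s][c];
    return next;
}

int main(void) {
    const char *input = "0101";
    unsigned subset = 1u << 0;   /* start: ε-closure of the NFA start state */
    for (int i = 0; input[i]; i++)
        subset = dfa_move(subset, input[i] - '0');
    printf("%s\n", (subset & accepting) ? "accept" : "reject");  /* accept */
    return 0;
}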

NFA -> DFA example

[Diagram: the subset construction applied to the NFA for (1 | 0)*1, with NFA states A-J. The resulting DFA states are the subsets ABCDHI (the start state), FGABCDHI, and EJGABCDHI, with transitions on 0 and 1 between them.]

NFA to DFA: remark

An NFA may be in many states at any time. How many different states? If there are N states, the NFA must be in some subset of those N states. How many non-empty subsets are there? 2^N - 1: finitely many, but exponentially many.

Implementation

A DFA can be implemented by a 2D table T:
- one dimension is the states
- the other dimension is the input symbols
- for every transition Si ->(a) Sk, define T[i, a] = k

DFA "execution": if in state Si on input a, read T[i, a] = k and move to state Sk. Very efficient. A sketch in C follows.
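As a sketch, here is the earlier automaton ("any number of 1's followed by a single 0") driven by such a table. The concrete transitions are filled in by assumption, with -1 marking "no transition".

#include <stdio.h>

enum { S0, S1 };   /* S0 = start state, S1 = accepting state */

/* T[state][symbol]: next state, or -1 if there is no transition. */
static const int T[2][2] = {
    /*        on '0'  on '1' */
    /* S0 */ { S1,     S0 },   /* loop on 1's; a single 0 leads to accept */
    /* S1 */ { -1,     -1 },   /* no input allowed after the 0 */
};

static int run(const char *input) {
    int state = S0;
    for (int i = 0; input[i]; i++) {
        state = T[state][input[i] - '0'];
        if (state < 0) return 0;   /* no transition possible: reject */
    }
    return state == S1;            /* accept iff we end in an accepting state */
}

int main(void) {
    printf("%d %d %d\n", run("1110"), run("0"), run("101"));  /* 1 1 0 */
    return 0;
}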

Table implementation of a DFA

[Table: a transition table T with rows for the states S, T, U and columns for the inputs 0 and 1; each entry holds the next state.]

Implementation (cont.)

NFA -> DFA conversion is at the heart of tools such as flex or jflex. But DFAs can be huge. In practice, flex-like tools trade off speed for space in the choice of NFA and DFA representations.

Thank you