Prof. Hilfinger CS 164 Lecture 21 Lexical Analysis Lecture 2-4 Notes by G. Necula, with additions by P. Hilfinger

Prof. Hilfinger CS 164 Lecture 22 Administrivia Moving to 60 Evans on Wednesday HW1 available Pyth manual available on line. Please log into your account and electronically register today. Register your team with “make-team”. See class announcement page. Project #1 available Friday. Use “submit hw1” to submit your homework this week. Section 101 (9AM) is gone.

Prof. Hilfinger CS 164 Lecture 23 Outline Informal sketch of lexical analysis –Identifies tokens in input string Issues in lexical analysis –Lookahead –Ambiguities Specifying lexers –Regular expressions –Examples of regular expressions

Prof. Hilfinger CS 164 Lecture 24 The Structure of a Compiler Source → [Lexical analysis] → Tokens → [Parsing] → Interm. Language → [Optimization, Code Gen.] → Machine Code. Today we start with lexical analysis.

Prof. Hilfinger CS 164 Lecture 25 Lexical Analysis What do we want to do? Example: if (i == j) z = 0; else z = 1; The input is just a sequence of characters: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; Goal: Partition input string into substrings –And classify them according to their role

Prof. Hilfinger CS 164 Lecture 26 What’s a Token? Output of lexical analysis is a stream of tokens A token is a syntactic category –In English: noun, verb, adjective, … –In a programming language: Identifier, Integer, Keyword, Whitespace, … Parser relies on the token distinctions: –E.g., identifiers are treated differently than keywords

Prof. Hilfinger CS 164 Lecture 27 Tokens Tokens correspond to sets of strings: –Identifiers: strings of letters or digits, starting with a letter –Integers: non-empty strings of digits –Keywords: “else” or “if” or “begin” or … –Whitespace: non-empty sequences of blanks, newlines, and tabs –OpenPars: left-parentheses

Prof. Hilfinger CS 164 Lecture 28 Lexical Analyzer: Implementation An implementation must do two things: 1.Recognize substrings corresponding to tokens 2.Return the type or syntactic category of the token, and the value or lexeme of the token (the substring itself)

Prof. Hilfinger CS 164 Lecture 29 Example Our example again: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; Token-lexeme pairs returned by the lexer: –(Whitespace, “\t”) –(Keyword, “if”) –(OpenPar, “(“) –(Identifier, “i”) –(Relation, “==“) –(Identifier, “j”) –…
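To make the interface concrete, here is a minimal Python sketch of the token-pair idea; the Token class and its field names are illustrative assumptions, not part of any course-provided library:

```python
from typing import NamedTuple

class Token(NamedTuple):
    kind: str    # syntactic category: "Keyword", "Identifier", "Relation", ...
    lexeme: str  # the matched substring itself

# The start of the example input would be reported roughly as:
tokens = [Token("Whitespace", "\t"),
          Token("Keyword", "if"),
          Token("OpenPar", "("),
          Token("Identifier", "i"),
          Token("Relation", "=="),
          Token("Identifier", "j")]
```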

Prof. Hilfinger CS 164 Lecture 210 Lexical Analyzer: Implementation The lexer usually discards “uninteresting” tokens that don’t contribute to parsing. Examples: Whitespace, Comments Question: What happens if we remove all whitespace and all comments prior to lexing?

Prof. Hilfinger CS 164 Lecture 211 Lookahead. Two important points: 1.The goal is to partition the string. This is implemented by reading left-to-right, recognizing one token at a time 2.“Lookahead” may be required to decide where one token ends and the next token begins –Even our simple example has lookahead issues: i vs. if, = vs. ==

Prof. Hilfinger CS 164 Lecture 212 Next We need –A way to describe the lexemes of each token –A way to resolve ambiguities Is if two variables i and f ? Is == two equal signs = = ?

Prof. Hilfinger CS 164 Lecture 213 Regular Languages There are several formalisms for specifying tokens Regular languages are the most popular –Simple and useful theory –Easy to understand –Efficient implementations

Prof. Hilfinger CS 164 Lecture 214 Languages Def. Let Σ be a set of characters. A language over Σ is a set of strings of characters drawn from Σ (Σ is called the alphabet)

Prof. Hilfinger CS 164 Lecture 215 Examples of Languages Alphabet = English characters Language = English sentences Not every string on English characters is an English sentence Alphabet = ASCII Language = C programs Note: ASCII character set is different from English character set

Prof. Hilfinger CS 164 Lecture 216 Notation Languages are sets of strings. Need some notation for specifying which sets we want For lexical analysis we care about regular languages, which can be described using regular expressions.

Prof. Hilfinger CS 164 Lecture 217 Regular Expressions and Regular Languages Each regular expression is a notation for a regular language (a set of words) If A is a regular expression then we write L(A) to refer to the language denoted by A

Prof. Hilfinger CS 164 Lecture 218 Atomic Regular Expressions Single character: ‘c’ L(‘c’) = { “c” } (for any c ∈ Σ) Concatenation: AB (where A and B are reg. exp.) L(AB) = { ab | a ∈ L(A) and b ∈ L(B) } Example: L(‘i’ ‘f’) = { “if” } (we will abbreviate ‘i’ ‘f’ as ‘if’)

Prof. Hilfinger CS 164 Lecture 219 Compound Regular Expressions Union L(A | B) = L(A) ∪ L(B) = { s | s ∈ L(A) or s ∈ L(B) } Examples: ‘if’ | ‘then‘ | ‘else’ = { “if”, “then”, “else” } ‘0’ | ‘1’ | … | ‘9’ = { “0”, “1”, …, “9” } (note the … are just an abbreviation) Another example: L((‘0’ | ‘1’) (‘0’ | ‘1’)) = { “00”, “01”, “10”, “11” }

Prof. Hilfinger CS 164 Lecture 220 More Compound Regular Expressions So far we do not have a notation for infinite languages Iteration: A * L(A * ) = { “” } ∪ L(A) ∪ L(AA) ∪ L(AAA) ∪ … Examples: ‘0’ * = { “”, “0”, “00”, “000”, …} ‘1’ ‘0’ * = { strings starting with 1 and followed by 0’s } Epsilon: ε L(ε) = { “” }
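These operators map directly onto the practical regular-expression syntax of, say, Python's re module (| for union, juxtaposition for concatenation, * for iteration); a small sketch:

```python
import re

# Union: 'if' | 'then' | 'else'
keyword = re.compile(r"if|then|else")
assert keyword.fullmatch("then")

# Concatenation is juxtaposition: ('0' | '1') ('0' | '1')
two_bits = re.compile(r"(0|1)(0|1)")
assert two_bits.fullmatch("10")

# Iteration: '1' '0'*  -- a 1 followed by any number of 0's
one_then_zeros = re.compile(r"10*")
assert one_then_zeros.fullmatch("1")       # zero repetitions of '0'
assert one_then_zeros.fullmatch("1000")
assert not one_then_zeros.fullmatch("")    # the empty string is not in L('1' '0'*)
```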

Prof. Hilfinger CS 164 Lecture 221 Example: Keyword –Keyword: “else” or “if” or “begin” or … ‘else’ | ‘if’ | ‘begin’ | … (‘else’ abbreviates ‘e’ ‘l’ ‘s’ ‘e’ )

Prof. Hilfinger CS 164 Lecture 222 Example: Integers Integer: a non-empty string of digits digit = ‘0’ | ‘1’ | ‘2’ | ‘3’ | ‘4’ | ‘5’ | ‘6’ | ‘7’ | ‘8’ | ‘9’ number = digit digit * Abbreviation: A + = A A *

Prof. Hilfinger CS 164 Lecture 223 Example: Identifier Identifier: strings of letters or digits, starting with a letter letter = ‘A’ | … | ‘Z’ | ‘a’ | … | ‘z’ identifier = letter (letter | digit) * Is (letter * | digit * ) the same as (letter | digit) * ?
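The closing question can be checked experimentally; a quick sketch using Python's re as a stand-in for the formal notation (the character classes used for “letter” and “digit” are an assumption of this sketch):

```python
import re

letter, digit = "[A-Za-z]", "[0-9]"

interleaved = re.compile(f"({letter}|{digit})*")   # (letter | digit)*
segregated  = re.compile(f"({letter}*|{digit}*)")  # (letter* | digit*)

print(bool(interleaved.fullmatch("a1b2")))  # True: letters and digits may alternate
print(bool(segregated.fullmatch("a1b2")))   # False: only all-letter or all-digit strings
```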

Prof. Hilfinger CS 164 Lecture 224 Example: Whitespace Whitespace: a non-empty sequence of blanks, newlines, and tabs (‘ ‘ | ‘\t’ | ‘\n’) + (Can you spot a subtle omission?)

Prof. Hilfinger CS 164 Lecture 225 Example: Phone Numbers Regular expressions are all around you! Consider a phone number of the form (510) xxx-xxxx Σ = { 0, 1, 2, 3, …, 9, (, ), - } area = digit³ exchange = digit³ phone = digit⁴ number = ‘(‘ area ‘)’ exchange ‘-’ phone

Prof. Hilfinger CS 164 Lecture 226 Example: Addresses Consider Σ = letters ∪ { ‘.’, ‘@’ } name = letter + address = name ‘@’ name (‘.’ name) *

Prof. Hilfinger CS 164 Lecture 227 Summary Regular expressions describe many useful languages Next: Given a string s and a R.E. R, is s ∈ L(R)? But a yes/no answer is not enough! Instead: partition the input into lexemes We will adapt regular expressions to this goal

Prof. Hilfinger CS 164 Lecture 228 Next: Outline Specifying lexical structure using regular expressions Finite automata –Deterministic Finite Automata (DFAs) –Non-deterministic Finite Automata (NFAs) Implementation of regular expressions RegExp => NFA => DFA => Tables

Prof. Hilfinger CS 164 Lecture 229 Regular Expressions => Lexical Spec. (1) 1.Select a set of tokens Number, Keyword, Identifier,... 2.Write a R.E. for the lexemes of each token Number = digit + Keyword = ‘if’ | ‘else’ | … Identifier = letter (letter | digit)* OpenPar = ‘(‘ …

Prof. Hilfinger CS 164 Lecture 230 Regular Expressions => Lexical Spec. (2) 3.Construct R, matching all lexemes for all tokens R = Keyword | Identifier | Number | … = R1 | R2 | R3 | … Facts: If s ∈ L(R) then s is a lexeme –Furthermore s ∈ L(Ri) for some “i” –This “i” determines the token that is reported

Prof. Hilfinger CS 164 Lecture 231 Regular Expressions => Lexical Spec. (3) 4.Let the input be x1…xn (x1... xn are characters in the language alphabet) For 1 ≤ i ≤ n check x1…xi ∈ L(R) ? 5.It must be that x1…xi ∈ L(Rj) for some i and j 6.Remove x1…xi from input and go to (4)

Prof. Hilfinger CS 164 Lecture 232 Lexing Example R = Whitespace | Integer | Identifier | ‘+’ Parse “f+3 +g” –“f” matches R, more precisely Identifier –“+“ matches R, more precisely ‘+’ – … –The token-lexeme pairs are (Identifier, “f”), (‘+’, “+”), (Integer, “3”) (Whitespace, “ “), (‘+’, “+”), (Identifier, “g”) We would like to drop the Whitespace tokens –after matching Whitespace, continue matching

Prof. Hilfinger CS 164 Lecture 233 Ambiguities (1) There are ambiguities in the algorithm Example: R = Whitespace | Integer | Identifier | ‘+’ Parse “foo+3” –“f” matches R, more precisely Identifier –But also “fo” matches R, and “foo”, but not “foo+” How much input is used? What if x1…xi ∈ L(R) and also x1…xk ∈ L(R) –“Maximal munch” rule: Pick the longest possible substring that matches R

Prof. Hilfinger CS 164 Lecture 234 More Ambiguities R = Whitespace | ‘new’ | Integer | Identifier Parse “new foo” –“new” matches R, more precisely ‘new’ –but also Identifier, which one do we pick? In general, if x1…xi ∈ L(Rj) and x1…xi ∈ L(Rk) –Rule: use rule listed first (j if j < k) We must list ‘new’ before Identifier

Prof. Hilfinger CS 164 Lecture 235 Error Handling R = Whitespace | Integer | Identifier | ‘+’ Parse “=56” –No prefix matches R: not “=“, nor “=5”, nor “=56” Problem: Can’t just get stuck … Solution: –Add a rule matching all “bad” strings; and put it last Lexer tools allow the writing of: R = R1 | ... | Rn | Error –Token Error matches if nothing else matches
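Putting the last few slides together, here is a sketch (not the course's reference implementation) of the matching loop with maximal munch, first-listed-rule tie-breaking, and a catch-all Error rule, using Python regular expressions for the individual token classes:

```python
import re

# Token rules in priority order; the catch-all Error rule is listed last.
RULES = [
    ("Whitespace", re.compile(r"[ \t\n]+")),
    ("Keyword",    re.compile(r"new|if|else")),
    ("Integer",    re.compile(r"[0-9]+")),
    ("Identifier", re.compile(r"[A-Za-z][A-Za-z0-9]*")),
    ("Plus",       re.compile(r"\+")),
    ("Error",      re.compile(r".", re.DOTALL)),   # matches any single bad character
]

def lex(text):
    tokens, i = [], 0
    while i < len(text):
        # Maximal munch: take the longest match at position i;
        # max() keeps the first (earliest-listed) rule on ties.
        kind, lexeme = max(
            ((name, m.group()) for name, rx in RULES if (m := rx.match(text, i))),
            key=lambda pair: len(pair[1]))
        if kind != "Whitespace":               # drop "uninteresting" tokens
            tokens.append((kind, lexeme))
        i += len(lexeme)
    return tokens

print(lex("f+3 +g"))   # [('Identifier', 'f'), ('Plus', '+'), ('Integer', '3'),
                       #  ('Plus', '+'), ('Identifier', 'g')]
print(lex("=56"))      # [('Error', '='), ('Integer', '56')]
```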

Prof. Hilfinger CS 164 Lecture 236 Summary Regular expressions provide a concise notation for string patterns Use in lexical analysis requires small extensions –To resolve ambiguities –To handle errors Good algorithms known (next) –Require only single pass over the input –Few operations per character (table lookup)

Prof. Hilfinger CS 164 Lecture 237 Finite Automata Regular expressions = specification Finite automata = implementation A finite automaton consists of –An input alphabet Σ –A set of states S –A start state n –A set of accepting states F ⊆ S –A set of transitions state –input→ state

Prof. Hilfinger CS 164 Lecture 238 Finite Automata Transition s1 –a→ s2 is read: in state s1 on input “a” go to state s2 If end of input –If in accepting state => accept, otherwise => reject If no transition possible => reject

Prof. Hilfinger CS 164 Lecture 239 Finite Automata State Graphs [Diagram legend: a circle is a state, an arrow into a circle marks the start state, a double circle is an accepting state, and an arrow labeled a between states is a transition]

Prof. Hilfinger CS 164 Lecture 240 A Simple Example A finite automaton that accepts only “1” A finite automaton accepts a string if we can follow transitions labeled with the characters in the string from the start to some accepting state [Diagram: start state with a single transition on 1 to an accepting state]

Prof. Hilfinger CS 164 Lecture 241 Another Simple Example A finite automaton accepting any number of 1’s followed by a single 0 Alphabet: {0,1} Check that “1110” is accepted but “110…” is not [Diagram: start state with a self-loop on 1 and a transition on 0 to an accepting state]
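A sketch of how acceptance could be simulated for this automaton; the state names and the transition encoding are chosen for the example, not fixed by the slides:

```python
# DFA for "any number of 1's followed by a single 0" over the alphabet {0, 1}.
# State A is the start state; B is the only accepting state.
DELTA = {("A", "1"): "A",   # stay in A while reading 1's
         ("A", "0"): "B"}   # the single 0 moves to the accepting state
ACCEPTING = {"B"}

def accepts(s: str) -> bool:
    state = "A"
    for ch in s:
        if (state, ch) not in DELTA:    # no transition possible => reject
            return False
        state = DELTA[(state, ch)]
    return state in ACCEPTING           # at end of input, accept only in an accepting state

print(accepts("1110"))    # True
print(accepts("1100"))    # False: input continues past the accepting state
```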

Prof. Hilfinger CS 164 Lecture 242 And Another Example Alphabet {0,1} [Diagram omitted] What language does this recognize?

Prof. Hilfinger CS 164 Lecture 243 And Another Example Alphabet still { 0, 1 } The operation of the automaton is not completely defined by the input –On input “11” the automaton could be in either state [Diagram: a state with two outgoing transitions both labeled 1]

Prof. Hilfinger CS 164 Lecture 244 Epsilon Moves Another kind of transition: ε-moves Machine can move from state A to state B without reading input [Diagram: states A and B connected by a transition labeled ε]

Prof. Hilfinger CS 164 Lecture 245 Deterministic and Nondeterministic Automata Deterministic Finite Automata (DFA) –One transition per input per state –No ε-moves Nondeterministic Finite Automata (NFA) –Can have multiple transitions for one input in a given state –Can have ε-moves Finite automata have finite memory –Need only to encode the current state

Prof. Hilfinger CS 164 Lecture 246 Execution of Finite Automata A DFA can take only one path through the state graph –Completely determined by input NFAs can choose –Whether to make ε-moves –Which of multiple transitions for a single input to take

Prof. Hilfinger CS 164 Lecture 247 Acceptance of NFAs An NFA can get into multiple states [Diagram: an example NFA and input, showing the set of possible states after each character] Rule: NFA accepts if it can get in a final state
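One way to implement this rule is to track the whole set of states the NFA could be in, taking ε-closures as you go; a sketch follows (the encoding of transitions, with None standing for ε, is my own):

```python
# delta maps (state, symbol) -> set of successor states; symbol None means an epsilon-move.
def eps_closure(states, delta):
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, None), set()) - closure:
            closure.add(t)
            stack.append(t)
    return closure

def nfa_accepts(s, delta, start, accepting):
    current = eps_closure({start}, delta)
    for ch in s:
        moved = set()
        for q in current:
            moved |= delta.get((q, ch), set())
        current = eps_closure(moved, delta)
    return bool(current & accepting)     # accepts if it *can* end in a final state

# Tiny example: an epsilon-move from A to B, a 1-loop on A, and B reads 0 into accepting C.
delta = {("A", None): {"B"}, ("A", "1"): {"A"}, ("B", "0"): {"C"}}
print(nfa_accepts("110", delta, "A", {"C"}))   # True
print(nfa_accepts("11", delta, "A", {"C"}))    # False
```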

Prof. Hilfinger CS 164 Lecture 248 NFA vs. DFA (1) NFAs and DFAs recognize the same set of languages (regular languages) DFAs are easier to implement –There are no choices to consider

Prof. Hilfinger CS 164 Lecture 249 NFA vs. DFA (2) For a given language the NFA can be simpler than the DFA [Diagrams: an NFA and the corresponding DFA for the same language] DFA can be exponentially larger than NFA

Prof. Hilfinger CS 164 Lecture 250 Regular Expressions to Finite Automata High-level sketch: Lexical Specification → Regular expressions → NFA → DFA → Table-driven Implementation of DFA

Prof. Hilfinger CS 164 Lecture 251 Regular Expressions to NFA (1) For each kind of rexp, define an NFA –Notation: a box labeled A stands for the NFA for rexp A For ε: a single ε-transition from the start state to an accepting state For input a: a single transition labeled a from the start state to an accepting state

Prof. Hilfinger CS 164 Lecture 252 Regular Expressions to NFA (2) For AB: connect the accepting state of A’s NFA to the start state of B’s NFA with an ε-move For A | B: a new start state with ε-moves into the NFAs for A and B, and ε-moves from each of their accepting states into a new accepting state

Prof. Hilfinger CS 164 Lecture 253 Regular Expressions to NFA (3) For A*: a new start state with an ε-move into A’s NFA and an ε-move directly to a new accepting state; A’s accepting state has ε-moves back to A’s start state and on to the new accepting state
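The three slides above amount to Thompson's construction; a compact sketch follows (the fragment representation, a (start, accept, edges) triple with None marking ε-moves, is an assumption of this sketch):

```python
import itertools

_ids = itertools.count()
def _new_state():
    return next(_ids)

def sym(a):                    # NFA for a single input character a
    s, f = _new_state(), _new_state()
    return (s, f, [(s, a, f)])

def epsilon():                 # NFA for the empty string
    s, f = _new_state(), _new_state()
    return (s, f, [(s, None, f)])

def concat(A, B):              # AB: epsilon-move from A's accepting state to B's start
    return (A[0], B[1], A[2] + B[2] + [(A[1], None, B[0])])

def union(A, B):               # A | B: new start/accept states joined by epsilon-moves
    s, f = _new_state(), _new_state()
    return (s, f, A[2] + B[2] + [(s, None, A[0]), (s, None, B[0]),
                                 (A[1], None, f), (B[1], None, f)])

def star(A):                   # A*: loop back through A, plus an epsilon bypass
    s, f = _new_state(), _new_state()
    return (s, f, A[2] + [(s, None, A[0]), (A[1], None, A[0]),
                          (A[1], None, f), (s, None, f)])

# The example on the next slide, (1 | 0)*1:
nfa = concat(star(union(sym("1"), sym("0"))), sym("1"))
```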

Prof. Hilfinger CS 164 Lecture 254 Example of RegExp -> NFA conversion Consider the regular expression (1 | 0)*1 [NFA diagram with states A through J, built from the constructions above, omitted in this transcript]

Prof. Hilfinger CS 164 Lecture 255 Next: Lexical Specification → Regular expressions → NFA → DFA → Table-driven Implementation of DFA

Prof. Hilfinger CS 164 Lecture 256 NFA to DFA. The Trick Simulate the NFA Each state of resulting DFA = a non-empty subset of states of the NFA Start state = the set of NFA states reachable through ε-moves from NFA start state Add a transition S –a→ S’ to DFA iff –S’ is the set of NFA states reachable from the states in S after seeing the input a, considering ε-moves as well
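A sketch of the subset construction, built on the same (state, symbol) → set-of-states encoding used in the NFA-simulation sketch earlier; frozensets of NFA states serve as the DFA state names:

```python
def eps_closure(states, delta):            # same helper as in the NFA-simulation sketch
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, None), set()) - closure:
            closure.add(t)
            stack.append(t)
    return closure

def nfa_to_dfa(delta, start, accepting, alphabet):
    start_set = frozenset(eps_closure({start}, delta))
    dfa_delta, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            moved = set()
            for q in S:
                moved |= delta.get((q, a), set())
            T = frozenset(eps_closure(moved, delta))
            if not T:
                continue                   # no a-transition out of S
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    dfa_accepting = {S for S in seen if S & accepting}
    return dfa_delta, start_set, dfa_accepting
```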

Prof. Hilfinger CS 164 Lecture 257 NFA -> DFA Example [Diagram omitted: applying the subset construction to the NFA for (1 | 0)*1 with states A–J gives DFA states {A,B,C,D,H,I}, {F,G,A,B,C,D,H,I}, and {E,J,G,A,B,C,D,H,I}]

Prof. Hilfinger CS 164 Lecture 258 NFA to DFA. Remark An NFA may be in many states at any time How many different states? If there are N states, the NFA must be in some subset of those N states How many non-empty subsets are there? –2^N − 1 = finitely many, but exponentially many

Prof. Hilfinger CS 164 Lecture 259 Implementation A DFA can be implemented by a 2D table T –One dimension is “states” –Other dimension is “input symbols” –For every transition Si –a→ Sk define T[i,a] = k DFA “execution” –If in state Si and input a, read T[i,a] = k and move to state Sk –Very efficient

Prof. Hilfinger CS 164 Lecture 260 Table Implementation of a DFA (rows are the states S, T, U; columns are the inputs 0 and 1):
      0   1
S     T   U
T     T   U
U     T   U
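A sketch of executing that table, with states encoded as row indices S=0, T=1, U=2 and U assumed to be the accepting state (as in the (1 | 0)*1 example):

```python
# Rows are states S, T, U; columns are the inputs '0' and '1'.
TABLE = [
    [1, 2],   # S: on '0' go to T, on '1' go to U
    [1, 2],   # T: on '0' go to T, on '1' go to U
    [1, 2],   # U: on '0' go to T, on '1' go to U
]
ACCEPTING = {2}              # U, assumed accepting as in the (1 | 0)*1 example

def run(s: str) -> bool:
    state = 0                # start in S
    for ch in s:
        state = TABLE[state][int(ch)]    # one table lookup per input character
    return state in ACCEPTING

print(run("0101"))   # True: the string ends in a 1
print(run("0100"))   # False
```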

Prof. Hilfinger CS 164 Lecture 261 Implementation (Cont.) NFA -> DFA conversion is at the heart of tools such as flex or jflex But, DFAs can be huge In practice, flex-like tools trade off speed for space in the choice of NFA and DFA representations

Prof. Hilfinger CS 164 Lecture 262 Perl’s “Regular Expressions” Some kind of pattern-matching feature now common in programming languages. Perl’s is widely copied (cf. Java, Python). Not regular expressions, despite name. –E.g., pattern /A (\S+) is a \1/ matches “A spade is a spade” and “A deal is a deal”, but not “A spade is a shovel” –But no regular expression recognizes this language! –Capturing substrings with (…) itself is an extension
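The same experiment is easy to reproduce in Python, whose re module copies this feature; within the pattern the backreference is written \1:

```python
import re

pat = re.compile(r"A (\S+) is a \1")
print(bool(pat.fullmatch("A spade is a spade")))   # True
print(bool(pat.fullmatch("A deal is a deal")))     # True
print(bool(pat.fullmatch("A spade is a shovel")))  # False: \1 must repeat the captured word
```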

Prof. Hilfinger CS 164 Lecture 263 Implementing Perl Patterns (Sketch) Can use NFAs, with some modification Implement an NFA as one would a DFA + use backtracking search to deal with states with nondeterministic choices. Add extra states (with  transitions) for parentheses. – “(“ state records place in input as side effect. –“)” state saves string started at matching “(“ –$n matches input with stored value. Backtracking much slower than DFA implementation.