Copyright © 2009 Elsevier Chapter 2 :: Programming Language Syntax Programming Language Pragmatics Michael L. Scott

Copyright © 2009 Elsevier Parsing: recap There are large classes of grammars for which we can build parsers that run in linear time –The two most important classes are called LL and LR LL stands for 'Left-to-right, Leftmost derivation'. LR stands for 'Left-to-right, Rightmost derivation'

Copyright © 2009 Elsevier Parsing LL parsers are also called 'top-down' or 'predictive' parsers; LR parsers are also called 'bottom-up' or 'shift-reduce' parsers There are several important sub-classes of LR parsers –SLR –LALR (We won't be going into detail on the differences between them.)

Copyright © 2009 Elsevier Parsing You commonly see LL or LR (or whatever) written with a number in parentheses after it –This number indicates how many tokens of look-ahead are required in order to parse –Almost all real compilers use one token of look-ahead The expression grammar (with precedence and associativity) you saw before is LR(1), but not LL(1)

Copyright © 2009 Elsevier Parsing Every LL(1) grammar is also LR(1), though right recursion in productions tends to require very deep stacks and complicate semantic analysis Every CFL that can be parsed deterministically has an SLR(1) grammar (which is LR(1)) Every deterministic CFL with the prefix property (no valid string is a prefix of another valid string) has an LR(0) grammar

Copyright © 2009 Elsevier LL Parsing Here is an LL(1) grammar that we saw last time in class (based on Fig 2.15 in the book):
1. program → stmt_list $$$
2. stmt_list → stmt stmt_list
3.           | ε
4. stmt → id := expr
5.      | read id
6.      | write expr
7. expr → term term_tail
8. term_tail → add_op term term_tail
9.           | ε

Copyright © 2009 Elsevier LL Parsing LL(1) grammar (continued):
10. term → factor factor_tail
11. factor_tail → mult_op factor factor_tail
12.             | ε
13. factor → ( expr )
14.        | id
15.        | number
16. add_op → +
17.        | -
18. mult_op → *
19.         | /
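To make the predictive flavor of this grammar concrete, here is a minimal recursive-descent sketch in Python (an illustration, not code from the book; the token spellings and the Parser interface are assumptions):

```python
# Hypothetical recursive-descent parser for the calculator grammar above.
# Tokens are plain strings; "id" and "number" stand in for scanned tokens.

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens + ["$$$"]
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def match(self, expected):
        if self.peek() != expected:
            raise SyntaxError(f"expected {expected}, saw {self.peek()}")
        self.pos += 1

    def stmt(self):                 # stmt -> id := expr | read id | write expr
        if self.peek() == "id":
            self.match("id"); self.match(":="); self.expr()
        elif self.peek() == "read":
            self.match("read"); self.match("id")
        elif self.peek() == "write":
            self.match("write"); self.expr()
        else:
            raise SyntaxError("expected a statement")

    def expr(self):                 # expr -> term term_tail
        self.term(); self.term_tail()

    def term_tail(self):            # term_tail -> add_op term term_tail | eps
        if self.peek() in ("+", "-"):   # PREDICT set of the non-epsilon rule
            self.match(self.peek()); self.term(); self.term_tail()

    def term(self):                 # term -> factor factor_tail
        self.factor(); self.factor_tail()

    def factor_tail(self):          # factor_tail -> mult_op factor factor_tail | eps
        if self.peek() in ("*", "/"):
            self.match(self.peek()); self.factor(); self.factor_tail()

    def factor(self):               # factor -> ( expr ) | id | number
        if self.peek() == "(":
            self.match("("); self.expr(); self.match(")")
        elif self.peek() in ("id", "number"):
            self.match(self.peek())
        else:
            raise SyntaxError("expected a factor")

# e.g. Parser(["id", ":=", "id", "+", "number"]).stmt()
```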

Copyright © 2009 Elsevier LL Parsing Like the bottom-up grammar, this one captures associativity and precedence, but most people don't find it as pretty –for one thing, the operands of a given operator aren't in an RHS together! –however, the simplicity of the parsing algorithm makes up for this weakness How do we parse a string with this grammar? –by building the parse tree incrementally

Copyright © 2009 Elsevier LL Parsing Example (average program):
read A
read B
sum := A + B
write sum
write sum / 2
We start at the top and predict needed productions on the basis of the current left-most non-terminal in the tree and the current input token

Copyright © 2009 Elsevier LL Parsing Parse tree for the average program (Figure 2.17)

Copyright © 2009 Elsevier LL Parsing: actual implementation Table-driven LL parsing: you have a big loop in which you repeatedly look up an action in a two-dimensional table based on current leftmost non-terminal and current input token. The actions are (1) match a terminal (2) predict a production (3) announce a syntax error
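A minimal Python sketch of that loop (the table layout — a dict from (non-terminal, token) pairs to right-hand sides — is an illustrative assumption, not the book's exact representation):

```python
# Table-driven LL(1) driver sketch. table[(nonterm, token)] yields the
# predicted RHS as a list of symbols; an epsilon production is [].

def ll_parse(tokens, table, start):
    tokens = tokens + ["$$$"]
    stack = ["$$$", start]              # expected symbols; top is end of list
    pos = 0
    while stack:
        expected = stack.pop()
        tok = tokens[pos]
        if expected == tok:             # action 1: match a terminal
            pos += 1
        elif (expected, tok) in table:  # action 2: predict a production
            stack.extend(reversed(table[(expected, tok)]))
        else:                           # action 3: announce a syntax error
            raise SyntaxError(f"unexpected {tok} while expecting {expected}")
    return True
```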

Copyright © 2009 Elsevier LL Parsing LL(1) parse table for the calculator language

Copyright © 2009 Elsevier LL Parsing To keep track of the left-most non-terminal, you push the as-yet-unseen portions of productions onto a stack –for details see Figure 2.20 The key thing to keep in mind is that the stack contains all the stuff you expect to see between now and the end of the program –what you predict you will see

Copyright © 2009 Elsevier LL Parsing: when it isn’t LL Problems trying to make a grammar LL(1) –left recursion example:
id_list → id
        | id_list , id
equivalently
id_list → id id_list_tail
id_list_tail → , id id_list_tail
             | epsilon
we can get rid of all left recursion mechanically in any grammar (a sketch of the transformation follows)
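Here is a sketch of the mechanical transformation for immediate left recursion (illustrative, under an assumed grammar representation; indirect left recursion needs an additional pass not shown):

```python
# A -> A alpha | beta  becomes  A -> beta A_tail ; A_tail -> alpha A_tail | eps.
# A grammar is a dict from non-terminal to a list of RHSs (lists of symbols).

def remove_immediate_left_recursion(grammar):
    result = {}
    for nt, rhss in grammar.items():
        recursive = [rhs[1:] for rhs in rhss if rhs and rhs[0] == nt]
        others = [rhs for rhs in rhss if not rhs or rhs[0] != nt]
        if not recursive:
            result[nt] = rhss
            continue
        tail = nt + "_tail"                    # fresh name, assumed unique
        result[nt] = [rhs + [tail] for rhs in others]
        result[tail] = [a + [tail] for a in recursive] + [[]]  # [] is epsilon
    return result

# id_list -> id | id_list , id
print(remove_immediate_left_recursion(
    {"id_list": [["id"], ["id_list", ",", "id"]]}))
# yields id_list -> id id_list_tail ; id_list_tail -> , id id_list_tail | eps
```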

Copyright © 2009 Elsevier LL Parsing Problems trying to make a grammar LL(1) –common prefixes: another thing that LL parsers can't handle solved by "left-factoring" example:
stmt → id := expr
     | id ( arg_list )
equivalently
stmt → id id_stmt_tail
id_stmt_tail → := expr
             | ( arg_list )
we can left-factor mechanically as well (sketched below)
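And a sketch of one left-factoring step under the same assumed representation (it factors a single shared first symbol; handling longest common prefixes would iterate this):

```python
from collections import defaultdict

def left_factor_once(nt, rhss):
    by_first = defaultdict(list)
    for rhs in rhss:
        by_first[rhs[0] if rhs else None].append(rhs)
    result = {nt: []}
    for first, group in by_first.items():
        if first is None or len(group) == 1:   # nothing to factor
            result[nt].extend(group)
        else:
            tail = f"{nt}_{first}_tail"        # fresh hypothetical name
            result[nt].append([first, tail])
            result[tail] = [rhs[1:] for rhs in group]
    return result

# stmt -> id := expr | id ( arg_list )
print(left_factor_once("stmt",
      [["id", ":=", "expr"], ["id", "(", "arg_list", ")"]]))
# yields stmt -> id stmt_id_tail ; stmt_id_tail -> := expr | ( arg_list )
```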

Copyright © 2009 Elsevier LL Parsing Note that eliminating left recursion and common prefixes does NOT make a grammar LL –there are infinitely many non-LL LANGUAGES, and the mechanical transformations work on them just fine –the few that arise in practice, however, can generally be handled with kludges

Copyright © 2009 Elsevier LL Parsing Problems trying to make a grammar LL(1) –the "dangling else" problem prevents grammars from being LL(1) (or in fact LL(k) for any k) –the following natural grammar fragment is ambiguous (from Pascal)
stmt → if cond then_clause else_clause
     | other_stuff
then_clause → then stmt
else_clause → else stmt
            | epsilon

Copyright © 2009 Elsevier LL Parsing The less natural grammar fragment can be parsed bottom-up (so LR) but not top-down (so not LL)
stmt → balanced_stmt
     | unbalanced_stmt
balanced_stmt → if cond then balanced_stmt else balanced_stmt
              | other_stuff
unbalanced_stmt → if cond then stmt
                | if cond then balanced_stmt else unbalanced_stmt

Copyright © 2009 Elsevier LL Parsing The usual approach, whether top-down OR bottom-up, is to use the ambiguous grammar together with a disambiguating rule that says –else goes with the closest then or –more generally, the first of two possible productions is the one to predict (or reduce)

Copyright © 2009 Elsevier LL Parsing Better yet, languages (since Pascal) generally employ explicit end-markers, which eliminate this problem In Modula-2, for example, one says:
if A = B then
    if C = D then E := F end
else G := H end
Ada says 'end if'; other languages say 'fi'

Copyright © 2009 Elsevier LL Parsing One problem with end markers is that they tend to bunch up. In Pascal you say
if A = B then …
else if A = C then …
else if A = D then …
else if A = E then …
else ...;
With end markers this becomes
if A = B then …
else if A = C then …
else if A = D then …
else if A = E then …
else ...
end; end; end; end;

Copyright © 2009 Elsevier LL Parsing The algorithm to build predict sets is tedious (for a "real" sized grammar), but relatively simple It consists of three stages: –(1) compute FIRST sets for symbols –(2) compute FOLLOW sets for non-terminals (this requires computing FIRST sets for some strings) –(3) compute predict sets or table for all productions

Copyright © 2009 Elsevier LL Parsing It is conventional in general discussions of grammars to use –lower case letters near the beginning of the alphabet for terminals –lower case letters near the end of the alphabet for strings of terminals –upper case letters near the beginning of the alphabet for non-terminals –upper case letters near the end of the alphabet for arbitrary symbols –greek letters for arbitrary strings of symbols

Copyright © 2009 Elsevier LL Parsing Algorithm First/Follow/Predict:
–FIRST(α) == {a : α →* a β} ∪ (if α →* ε then {ε} else ∅)
–FOLLOW(A) == {a : S →+ α A a β} ∪ (if S →* α A then {ε} else ∅)
–PREDICT(A → X1 … Xm) == (FIRST(X1 … Xm) − {ε}) ∪ (if X1 … Xm →* ε then FOLLOW(A) else ∅)
Details following…
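As an illustration of stage (1), here is a sketch of the usual fixed-point computation of FIRST sets (an assumption-laden sketch, not the book's pseudocode):

```python
# Grammar: dict from non-terminal to a list of RHSs (lists of symbols).
EPS = "ε"

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    def first_of(sym):                    # a terminal's FIRST set is itself
        return first[sym] if sym in grammar else {sym}
    changed = True
    while changed:                        # iterate to a fixed point
        changed = False
        for nt, rhss in grammar.items():
            for rhs in rhss:
                before = len(first[nt])
                nullable = True
                for sym in rhs:
                    first[nt] |= first_of(sym) - {EPS}
                    if EPS not in first_of(sym):
                        nullable = False
                        break
                if nullable:              # every symbol derives epsilon
                    first[nt].add(EPS)    # (including the empty RHS)
                changed |= len(first[nt]) != before
    return first

# FOLLOW and PREDICT sets are built with similar fixed-point passes.
```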

Copyright © 2009 Elsevier LL Parsing

Copyright © 2009 Elsevier LL Parsing

Copyright © 2009 Elsevier LL Parsing If any token belongs to the predict set of more than one production with the same LHS, then the grammar is not LL(1) A conflict can arise because –the same token can begin more than one RHS –it can begin one RHS and can also appear after the LHS in some valid program, and one possible RHS is ε

Copyright © 2009 Elsevier LR Parsing LR parsers are almost always table-driven: –like a table-driven LL parser, an LR parser uses a big loop in which it repeatedly inspects a two-dimensional table to find out what action to take –unlike the LL parser, however, the LR driver has non-trivial state (like a DFA), and the table is indexed by current input token and current state –the stack contains a record of what has been seen SO FAR (NOT what is expected)

Copyright © 2009 Elsevier LR Parsing A scanner is a DFA –it can be specified with a state diagram An LL or LR parser is a push-down automaton, or PDA –a PDA can be specified with a state diagram and a stack the state diagram looks just like a DFA state diagram, except the arcs are labeled with pairs, and in addition to moving to a new state the PDA has the option of pushing or popping a finite number of symbols onto/off the stack

Copyright © 2009 Elsevier LR Parsing An LL(1) PDA has only one state! –well, actually two; it needs a second one to accept with, but that's all –all the arcs are self loops; the only difference between them is the choice of whether to push or pop –the final state is reached by a transition that sees EOF on both the input and the stack

Copyright © 2009 Elsevier LR Parsing An LR (or SLR/LALR) PDA has multiple states –it is a "recognizer," not a "predictor" –it builds a parse tree from the bottom up –the states keep track of which productions we might be in the middle of Parsing with the Characteristic Finite State Machine (CFSM) is based on two actions: –Shift –Reduce
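A sketch of the shift-reduce driver loop follows (the table layout is an illustrative assumption: action[(state, token)] is ("shift", s), ("reduce", lhs, rhs_len), or ("accept",), and goto_[(state, lhs)] gives the state entered after a reduction):

```python
def lr_parse(tokens, action, goto_, start_state=0):
    tokens = tokens + ["$$$"]
    stack = [start_state]               # records what has been seen so far
    pos = 0
    while True:
        state, tok = stack[-1], tokens[pos]
        act = action.get((state, tok))
        if act is None:
            raise SyntaxError(f"unexpected {tok} in state {state}")
        if act[0] == "shift":
            stack.append(act[1]); pos += 1
        elif act[0] == "reduce":
            _, lhs, rhs_len = act
            del stack[len(stack) - rhs_len:]       # pop the handle's states
            stack.append(goto_[(stack[-1], lhs)])  # enter the GOTO state
        else:                                      # accept
            return True
```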

Copyright © 2009 Elsevier LR Parsing To illustrate LR parsing, consider the grammar (from Figure 2.24):
1. program → stmt_list $$$
2. stmt_list → stmt_list stmt
3.           | stmt
4. stmt → id := expr
5.      | read id
6.      | write expr
7. expr → term
8.      | expr add_op term

Copyright © 2009 Elsevier LR Parsing LR grammar (continued):
9. term → factor
10.      | term mult_op factor
11. factor → ( expr )
12.        | id
13.        | number
14. add_op → +
15.        | -
16. mult_op → *
17.         | /

Copyright © 2009 Elsevier LR Parsing This grammar is SLR(1), a particularly nice class of bottom-up grammar –it isn't exactly what we saw originally –we've eliminated the epsilon production to simplify the presentation When parsing, we mark the current position with a ".", and use a similar sort of table to decide what state to go to

Copyright © 2009 Elsevier LR Parsing

Copyright © 2009 Elsevier LR Parsing

Copyright © 2009 Elsevier LR Parsing

Copyright © 2009 Elsevier Syntax Errors When parsing a program, the parser will often detect a syntax error –Generally when the next token/input doesn’t form a valid possible transition. What should we do? –Halt, and find the closest rule that does match. –Recover and continue parsing if possible. Most compilers don’t just halt; this would mean ignoring all code past the error. –Instead, the goal is to find and report as many errors as possible.

Copyright © 2009 Elsevier Syntax Errors: approaches Method 1: Panic mode: Define a small set of “safe symbols”. –In C++, start from just after the next semicolon –In Python, jump to the next newline and continue When an error occurs, the compiler discards input until it reaches the next safe symbol, and tries to compile from there on. –(Ever notice that errors often point to the line before or after the actual error?)
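A minimal sketch of panic-mode recovery (illustrative only; parse_stmt is a hypothetical statement parser supplied by the caller, and the safe-symbol set is an assumption):

```python
SAFE_SYMBOLS = {";", "end", "$$$"}       # assumed "safe symbols"

def parse_stmt_list(tokens, parse_stmt):
    """parse_stmt(tokens, pos) returns a new position or raises SyntaxError."""
    errors, pos = [], 0
    while tokens[pos] != "$$$":
        try:
            pos = parse_stmt(tokens, pos)
        except SyntaxError as e:
            errors.append(f"at token {pos}: {e}")
            while tokens[pos] not in SAFE_SYMBOLS:
                pos += 1                 # panic: discard input
            if tokens[pos] != "$$$":
                pos += 1                 # consume the safe symbol, resume
    return errors
```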

Copyright © 2009 Elsevier Syntax Errors: approaches Method 2: Phase-level recovery –Refine panic mode with different safe symbols for different states –Ex: ')' for an expression, ';' for a statement Method 3: Context-specific look-ahead: –Improves on 2 by checking the various contexts in which the production might appear in a parse tree –Improves error messages, but costs in terms of speed and complexity

Copyright © 2009 Elsevier Beyond Parsing: Ch. 4 We also need to define rules that connect the productions to actual operations and concepts. Example grammar:
E → E + T
E → E – T
E → T
T → T * F
T → T / F
T → F
F → - F
Question: Is it LL or LR?

Copyright © 2009 Elsevier Attribute Grammars We can turn this into an attribute grammar as follows (similar to Figure 4.1):
E → E + T      E1.val = E2.val + T.val
E → E – T      E1.val = E2.val - T.val
E → T          E.val = T.val
T → T * F      T1.val = T2.val * F.val
T → T / F      T1.val = T2.val / F.val
T → F          T.val = F.val
F → - F        F1.val = - F2.val
F → ( E )      F.val = E.val
F → const      F.val = C.val

Copyright © 2009 Elsevier Attribute Grammars The attribute grammar serves to define the semantics of the input program Attribute rules are best thought of as definitions, not assignments They are not necessarily meant to be evaluated at any particular time, or in any particular order, though they do define their left-hand side in terms of the right-hand side

Copyright © 2009 Elsevier Evaluating Attributes The process of evaluating attributes is called annotation, or DECORATION, of the parse tree [see next slide] –When a parse tree under this grammar is fully decorated, the value of the expression will be in the val attribute of the root The code fragments for the rules are called SEMANTIC FUNCTIONS –Strictly speaking, they should be cast as functions, e.g., E1.val = sum(E2.val, T.val); cf. Figure 4.1

Copyright © 2009 Elsevier Evaluating Attributes

Copyright © 2009 Elsevier Evaluating Attributes This is a very simple attribute grammar: –Each symbol has at most one attribute –the punctuation marks have no attributes These attributes are all so-called SYNTHESIZED attributes: –They are calculated only from the attributes of things below them in the parse tree (a bottom-up evaluation sketch follows)
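A sketch of bottom-up decoration for this S-attributed grammar: a post-order walk that computes val at each node (the tuple-based tree shape is an assumption for illustration):

```python
# Leaves are ("const", value); unary negation is ("neg", child);
# binary nodes are (op, left_child, right_child).

def decorate(node):
    kind = node[0]
    if kind == "const":
        return node[1]                  # the token's synthesized attribute
    if kind == "neg":                   # F -> - F
        return -decorate(node[1])
    left, right = decorate(node[1]), decorate(node[2])
    if kind == "+": return left + right
    if kind == "-": return left - right
    if kind == "*": return left * right
    if kind == "/": return left / right
    raise ValueError(f"unknown node kind: {kind}")

# (1 + 3) * 2
print(decorate(("*", ("+", ("const", 1), ("const", 3)), ("const", 2))))  # 8
```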

Copyright © 2009 Elsevier Evaluating Attributes In general, we are allowed both synthesized and INHERITED attributes: –Inherited attributes may depend on things above or to the side of them in the parse tree –Tokens have only synthesized attributes, initialized by the scanner (name of an identifier, value of a constant, etc.). –Inherited attributes of the start symbol constitute run-time parameters of the compiler

Copyright © 2009 Elsevier Evaluating Attributes The grammar above is called S-ATTRIBUTED because it uses only synthesized attributes Its ATTRIBUTE FLOW (attribute dependence graph) is purely bottom-up –It is SLR(1), but not LL(1) An equivalent LL(1) grammar requires inherited attributes:

Copyright © 2009 Elsevier Evaluating Attributes – Example Attribute grammar in Figure 4.3:
E → T TT          E.v = TT.v         TT.st = T.v
TT1 → + T TT2     TT1.v = TT2.v      TT2.st = TT1.st + T.v
TT1 → - T TT2     TT1.v = TT2.v      TT2.st = TT1.st - T.v
TT → ε            TT.v = TT.st
T → F FT          T.v = FT.v         FT.st = F.v

Copyright © 2009 Elsevier Evaluating Attributes – Example Attribute grammar in Figure 4.3 (continued):
FT1 → * F FT2     FT1.v = FT2.v      FT2.st = FT1.st * F.v
FT1 → / F FT2     FT1.v = FT2.v      FT2.st = FT1.st / F.v
FT → ε            FT.v = FT.st
F1 → - F2         F1.v = - F2.v
F → ( E )         F.v = E.v
F → const         F.v = C.v
Figure 4.4 – parse tree for (1+3)*2

Copyright © 2009 Elsevier Evaluating Attributes – Example

Copyright © 2009 Elsevier Evaluating Attributes – Example Attribute grammar in Figure 4.3: –This attribute grammar is a good bit messier than the first one, but it is still L-ATTRIBUTED, which means that the attributes can be evaluated in a single left-to-right pass over the input –In fact, they can be evaluated during an LL parse –Each synthesized attribute of an LHS symbol (by definition of synthesized) depends only on attributes of its RHS symbols

Copyright © 2009 Elsevier Evaluating Attributes – Example Attribute grammar in Figure 4.3: –Each inherited attribute of an RHS symbol (by definition of L-attributed) depends only on inherited attributes of the LHS symbol, or synthesized or inherited attributes of symbols to its left in the RHS –L-attributed grammars are the most general class of attribute grammars that can be evaluated during an LL parse (a sketch follows)
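A sketch of evaluating the Figure 4.3 attributes during a recursive-descent (LL) parse: each inherited attribute st flows down as a parameter, and each synthesized attribute v comes back as a return value (the token representation is an assumption):

```python
def parse_E(tokens, pos=0):
    v, pos = parse_T(tokens, pos)
    return parse_TT(tokens, pos, st=v)           # TT.st = T.v

def parse_TT(tokens, pos, st):                   # st: inherited attribute
    if pos < len(tokens) and tokens[pos] in ("+", "-"):
        op = tokens[pos]
        v, pos = parse_T(tokens, pos + 1)
        st = st + v if op == "+" else st - v     # TT2.st = TT1.st op T.v
        return parse_TT(tokens, pos, st)
    return st, pos                               # TT -> eps: TT.v = TT.st

def parse_T(tokens, pos):
    v, pos = parse_F(tokens, pos)
    return parse_FT(tokens, pos, st=v)           # FT.st = F.v

def parse_FT(tokens, pos, st):
    if pos < len(tokens) and tokens[pos] in ("*", "/"):
        op = tokens[pos]
        v, pos = parse_F(tokens, pos + 1)
        st = st * v if op == "*" else st / v
        return parse_FT(tokens, pos, st)
    return st, pos                               # FT -> eps: FT.v = FT.st

def parse_F(tokens, pos):
    if tokens[pos] == "-":                       # F1 -> - F2
        v, pos = parse_F(tokens, pos + 1)
        return -v, pos
    if tokens[pos] == "(":                       # F -> ( E )
        v, pos = parse_E(tokens, pos + 1)
        return v, pos + 1                        # skip ")"
    return tokens[pos], pos + 1                  # F -> const (a number token)

print(parse_E(["(", 1, "+", 3, ")", "*", 2]))    # (8, 7): value 8, 7 tokens
```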

Copyright © 2009 Elsevier Evaluating Attributes There are certain tasks, such as generation of code for short-circuit Boolean expression evaluation, that are easiest to express with non-L-attributed attribute grammars. Because of the potential cost of complex traversal schemes, however, most real-world compilers insist that the grammar be L-attributed