Parsing VII The Last Parsing Lecture

LR(1) Table Construction

High-level overview

1. Build the canonical collection of sets of LR(1) items, I
   a. Begin in an appropriate state, s0
      [S' → • S, EOF], along with any equivalent items
      Derive the equivalent items as closure(s0)
   b. Repeatedly compute, for each sk and each grammar symbol X, goto(sk, X)
      If the resulting set is not already in the collection, add it
      Record all the transitions created by goto()
   This eventually reaches a fixed point
2. Fill in the table from the collection of sets of LR(1) items

The canonical collection completely encodes the transition diagram for the handle-finding DFA
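
A rough Python sketch of step 1's fixed-point loop (illustrative only, not the lecture's code); closure_fn and goto_fn stand in for the closure() and goto() operations sketched with the later slides.

    # Hypothetical sketch of the canonical-collection driver described above.
    # closure_fn(items) and goto_fn(items, X) each return a frozenset of items.
    def build_canonical_collection(start_item, symbols, closure_fn, goto_fn):
        s0 = closure_fn(frozenset({start_item}))
        collection = [s0]            # item sets, indexed by state number
        index = {s0: 0}              # item set -> state number
        transitions = {}             # (state number, symbol) -> state number
        worklist = [0]
        while worklist:              # loop until no new sets appear (fixed point)
            k = worklist.pop()
            for x in symbols:
                target = goto_fn(collection[k], x)
                if not target:
                    continue
                if target not in index:          # a new set: add it to the collection
                    index[target] = len(collection)
                    collection.append(target)
                    worklist.append(index[target])
                transitions[(k, x)] = index[target]   # record the goto() transition
        return collection, transitions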

Example (grammar & sets)

A simplified, right-recursive expression grammar:

  Goal   → Expr
  Expr   → Term – Expr
  Expr   → Term
  Term   → Factor * Term
  Term   → Factor
  Factor → ident
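
For the sketches accompanying these slides, one possible Python encoding of this grammar and its FIRST sets is shown below; the names GRAMMAR, FIRST, and compute_first are illustrative, and the ASCII '-' stands in for the grammar's '–' terminal.

    # A hypothetical encoding: a production is (lhs, rhs), rhs a tuple of symbols.
    EOF = "<EOF>"

    GRAMMAR = [
        ("Goal",   ("Expr",)),
        ("Expr",   ("Term", "-", "Expr")),
        ("Expr",   ("Term",)),
        ("Term",   ("Factor", "*", "Term")),
        ("Term",   ("Factor",)),
        ("Factor", ("ident",)),
    ]

    NONTERMINALS = {lhs for lhs, _ in GRAMMAR}
    TERMINALS = {sym for _, rhs in GRAMMAR for sym in rhs
                 if sym not in NONTERMINALS} | {EOF}

    def compute_first(grammar, nonterminals, terminals):
        """Iteratively compute FIRST sets; with no epsilon productions,
        FIRST(lhs) only grows from the first symbol of each right-hand side."""
        first = {t: {t} for t in terminals}
        first.update({nt: set() for nt in nonterminals})
        changed = True
        while changed:
            changed = False
            for lhs, rhs in grammar:
                before = len(first[lhs])
                first[lhs] |= first[rhs[0]]
                changed |= len(first[lhs]) != before
        return first

    FIRST = compute_first(GRAMMAR, NONTERMINALS, TERMINALS)
    # Here FIRST["Expr"] == FIRST["Term"] == FIRST["Factor"] == {"ident"}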

Example (building the collection)

Initialization step

  s0 ← closure( { [Goal → • Expr, EOF] } )
     = { [Goal → • Expr, EOF],
         [Expr → • Term – Expr, EOF],   [Expr → • Term, EOF],
         [Term → • Factor * Term, EOF], [Term → • Factor * Term, –],
         [Term → • Factor, EOF],        [Term → • Factor, –],
         [Factor → • ident, EOF],       [Factor → • ident, –],   [Factor → • ident, *] }

  S ← { s0 }
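
A closure() consistent with this step can be sketched as follows, using the hypothetical grammar/FIRST encoding above; an item is a tuple (lhs, rhs, dot, lookahead), and the code relies on this grammar having no epsilon productions.

    # Hypothetical closure() sketch: grammar is a list of (lhs, rhs) productions,
    # first maps every grammar symbol to its FIRST set.
    def closure(items, grammar, first):
        result = set(items)
        worklist = list(items)
        while worklist:
            lhs, rhs, dot, la = worklist.pop()
            if dot >= len(rhs):
                continue                      # dot at the end: nothing to expand
            c = rhs[dot]                      # the symbol right after the dot
            if not any(p_lhs == c for p_lhs, _ in grammar):
                continue                      # c is a terminal
            rest = rhs[dot + 1:]
            # Lookaheads come from FIRST(rest + lookahead); with no epsilon
            # productions that is FIRST(rest[0]), or {la} when rest is empty.
            lookaheads = first[rest[0]] if rest else {la}
            for p_lhs, p_rhs in grammar:
                if p_lhs != c:
                    continue
                for b in lookaheads:
                    item = (p_lhs, p_rhs, 0, b)
                    if item not in result:
                        result.add(item)
                        worklist.append(item)
        return frozenset(result)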

Example (building the collection)

Iteration 1
  s1 ← goto(s0, Expr)
  s2 ← goto(s0, Term)
  s3 ← goto(s0, Factor)
  s4 ← goto(s0, ident)

Iteration 2
  s5 ← goto(s2, –)
  s6 ← goto(s3, *)

Iteration 3
  s7 ← goto(s5, Expr)
  s8 ← goto(s6, Term)
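
goto() is simply "advance the dot over X, then take the closure"; a matching sketch, together with one hypothetical way to drive the whole construction, could look like this.

    # Hypothetical goto() sketch: advance the dot over symbol x, then close.
    def goto(items, x, grammar, first):
        moved = {(lhs, rhs, dot + 1, la)
                 for lhs, rhs, dot, la in items
                 if dot < len(rhs) and rhs[dot] == x}
        return closure(moved, grammar, first) if moved else frozenset()

    # Possible use with the earlier sketches (all names are illustrative):
    #   symbols = NONTERMINALS | (TERMINALS - {EOF})
    #   start = ("Goal", ("Expr",), 0, EOF)
    #   collection, transitions = build_canonical_collection(
    #       start, symbols,
    #       lambda s: closure(s, GRAMMAR, FIRST),
    #       lambda s, x: goto(s, x, GRAMMAR, FIRST))
    # For this grammar the loop reaches its fixed point with nine sets, s0 through s8.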

Example (summary)

  s0: { [Goal → • Expr, EOF],
        [Expr → • Term – Expr, EOF],   [Expr → • Term, EOF],
        [Term → • Factor * Term, EOF], [Term → • Factor * Term, –],
        [Term → • Factor, EOF],        [Term → • Factor, –],
        [Factor → • ident, EOF],       [Factor → • ident, –],   [Factor → • ident, *] }

  s1: { [Goal → Expr •, EOF] }

  s2: { [Expr → Term • – Expr, EOF], [Expr → Term •, EOF] }

  s3: { [Term → Factor • * Term, EOF], [Term → Factor • * Term, –],
        [Term → Factor •, EOF],        [Term → Factor •, –] }

  s4: { [Factor → ident •, EOF], [Factor → ident •, –], [Factor → ident •, *] }

  s5: { [Expr → Term – • Expr, EOF],
        [Expr → • Term – Expr, EOF],   [Expr → • Term, EOF],
        [Term → • Factor * Term, –],   [Term → • Factor, –],
        [Term → • Factor * Term, EOF], [Term → • Factor, EOF],
        [Factor → • ident, *],         [Factor → • ident, –],   [Factor → • ident, EOF] }

Example (summary)

  s6: { [Term → Factor * • Term, EOF], [Term → Factor * • Term, –],
        [Term → • Factor * Term, EOF], [Term → • Factor * Term, –],
        [Term → • Factor, EOF],        [Term → • Factor, –],
        [Factor → • ident, EOF],       [Factor → • ident, –],   [Factor → • ident, *] }

  s7: { [Expr → Term – Expr •, EOF] }

  s8: { [Term → Factor * Term •, EOF], [Term → Factor * Term •, –] }

Example (summary)

The goto relationship (from the construction); blank entries are empty sets:

  State   Expr   Term   Factor   ident    –    *
    0       1      2       3       4
    1
    2                                      5
    3                                           6
    4
    5       7      2       3       4
    6              8       3       4
    7
    8

Filling in the ACTION and GOTO Tables

The algorithm

  ∀ set sx ∈ S
    ∀ item i ∈ sx
      if i is [A → β • a δ, b] and goto(sx, a) = sk, with a ∈ T
        then ACTION[x, a] ← "shift k"
      else if i is [S' → S •, EOF]
        then ACTION[x, EOF] ← "accept"
      else if i is [A → β •, a]
        then ACTION[x, a] ← "reduce A → β"
    ∀ n ∈ NT
      if goto(sx, n) = sk
        then GOTO[x, n] ← k

  (x is the number of the state for sx)

Many items generate no table entry
  e.g., [A → β • B δ, a] does not, but closure ensures that all the rhs' for B are in sx
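
A Python rendering of this table-filling pass might look like the following; collection and transitions are assumed to come from the hypothetical build_canonical_collection sketch shown earlier, and items are (lhs, rhs, dot, lookahead) tuples.

    # Hypothetical sketch: fill ACTION and GOTO from the canonical collection.
    # collection[x] is the item set for state x; transitions maps (x, symbol) -> k.
    def fill_tables(collection, transitions, nonterminals, goal, eof):
        action, goto_table = {}, {}
        for x, item_set in enumerate(collection):
            for lhs, rhs, dot, la in item_set:
                if dot < len(rhs) and rhs[dot] not in nonterminals:
                    a = rhs[dot]                        # terminal after the dot
                    action[(x, a)] = ("shift", transitions[(x, a)])
                elif dot == len(rhs) and lhs == goal and la == eof:
                    action[(x, eof)] = ("accept",)      # the [Goal -> S •, EOF] item
                elif dot == len(rhs):
                    action[(x, la)] = ("reduce", lhs, rhs)
            for n in nonterminals:
                if (x, n) in transitions:
                    goto_table[(x, n)] = transitions[(x, n)]
        return action, goto_table

    # e.g., action, goto_table = fill_tables(collection, transitions,
    #                                        NONTERMINALS, "Goal", EOF)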

Example (filling in the tables)

With the productions numbered 1: Goal → Expr, 2: Expr → Term – Expr, 3: Expr → Term,
4: Term → Factor * Term, 5: Term → Factor, 6: Factor → ident, the algorithm produces
the following table (s k = shift to state k, r n = reduce by production n, acc = accept,
blank = error):

          ACTION                           GOTO
  State   ident    –      *      EOF      Expr   Term   Factor
    0     s 4                               1      2      3
    1                            acc
    2              s 5           r 3
    3              r 5    s 6    r 5
    4              r 6    r 6    r 6
    5     s 4                               7      2      3
    6     s 4                                      8      3
    7                            r 2
    8              r 4           r 4

It plugs into the skeleton LR(1) parser.
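
The skeleton LR(1) parser is the usual table-driven shift-reduce loop; one hypothetical rendering, driven by the action and goto_table dictionaries from the fill_tables sketch above, is:

    # Hypothetical skeleton LR(1) parser: a table-driven shift-reduce loop.
    # tokens is a list of terminal names ending with eof.
    def parse(tokens, action, goto_table, eof="<EOF>"):
        stack = [0]                              # stack of state numbers
        pos = 0
        while True:
            state, lookahead = stack[-1], tokens[pos]
            entry = action.get((state, lookahead))
            if entry is None:
                raise SyntaxError(f"no action for state {state} on {lookahead!r}")
            if entry[0] == "shift":
                stack.append(entry[1])           # push the next state, consume a token
                pos += 1
            elif entry[0] == "reduce":
                _, lhs, rhs = entry
                del stack[len(stack) - len(rhs):]            # pop |rhs| states
                stack.append(goto_table[(stack[-1], lhs)])   # goto on the lhs
            else:                                # "accept"
                return True

    # Example use with the earlier sketches:
    #   parse(["ident", "-", "ident", "*", "ident", "<EOF>"], action, goto_table)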

What can go wrong?

What if set s contains [A → β • a γ, b] and [B → β •, a]?
  The first item generates "shift", the second generates "reduce"
  Both define ACTION[s, a]; the parser cannot do both actions
  This is a fundamental ambiguity, called a shift/reduce error
  Modify the grammar to eliminate it (if-then-else)
  Shifting will often resolve it correctly

What if set s contains [A → γ •, a] and [B → δ •, a]?
  Each generates "reduce", but with a different production
  Both define ACTION[s, a]; the parser cannot do both reductions
  This fundamental ambiguity is called a reduce/reduce error
  Modify the grammar to eliminate it (PL/I's overloading of (...))

In either case, the grammar is not LR(1)

EaC includes a worked example
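
A table generator can catch both kinds of conflict mechanically by refusing to overwrite an existing ACTION entry; a small hypothetical helper in the spirit of the fill_tables sketch:

    # Hypothetical conflict check: record an ACTION entry only if it does not
    # clash with one already present for the same (state, terminal) pair.
    def set_action(action, state, terminal, new_entry):
        old = action.get((state, terminal))
        if old is not None and old != new_entry:
            kind = ("shift/reduce" if "shift" in (old[0], new_entry[0])
                    else "reduce/reduce")
            raise ValueError(f"{kind} conflict in state {state} on {terminal!r}: "
                             f"{old} vs {new_entry}")
        action[(state, terminal)] = new_entry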

Shrinking the Tables

Three options:

Combine terminals such as number & identifier, + & -, * & /
  Directly removes a column, may remove a row
  For the expression grammar, 198 (vs. 384) table entries

Combine rows or columns (table compression)
  Implement identical rows once & remap states
  Requires extra indirection on each lookup
  Use a separate mapping for ACTION & for GOTO

Use another construction algorithm
  Both LALR(1) and SLR(1) produce smaller tables
  Implementations are readily available
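
As an illustration of the row-sharing idea (a sketch, not taken from the slides), identical ACTION rows can be stored once behind a single level of indirection:

    # Hypothetical ACTION-row compression: identical rows are stored once and
    # each state is remapped to a shared row index.
    def compress_rows(action, num_states, terminals):
        columns = sorted(terminals)              # fix a column order
        rows, row_index, state_to_row = [], {}, []
        for x in range(num_states):
            row = tuple(action.get((x, t)) for t in columns)
            if row not in row_index:             # first occurrence of this row
                row_index[row] = len(rows)
                rows.append(row)
            state_to_row.append(row_index[row])
        return columns, rows, state_to_row

    # Each lookup now costs one extra indirection:
    #   entry = rows[state_to_row[x]][columns.index(a)]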

Summary

                        Advantages                 Disadvantages
  Top-down              Fast                       Hand-coded
  recursive descent     Good locality              High maintenance
                        Simplicity                 Right associativity
                        Good error detection

  LR(1)                 Fast                       Large working sets
                        Deterministic langs.       Poor error messages
                        Automatable                Large table sizes
                        Left associativity

Left Recursion versus Right Recursion

Right recursion
  Required for termination in top-down parsers
  Uses (on average) more stack space
  Produces right-associative operators

Left recursion
  Works fine in bottom-up parsers
  Limits required stack space
  Produces left-associative operators

Rule of thumb
  Left recursion for bottom-up parsers
  Right recursion for top-down parsers

For w * x * y * z, right recursion builds the tree for w * ( x * ( y * z ) ),
while left recursion builds the tree for ( ( w * x ) * y ) * z.

Associativity

What difference does it make?
  Can change answers in floating-point arithmetic
  Exposes a different set of common subexpressions

Consider x + y + z
  An ideal operator would apply + to x, y, z at once; left association evaluates
  ( x + y ) + z, while right association evaluates x + ( y + z )
  What if y + z occurs elsewhere? Or x + y? Or x + z?
  What if x = 2 & z = 17? Neither left nor right association exposes 19 (the folded
  value of x + z) as a subexpression
  Best choice is a function of the surrounding context
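
The floating-point claim is easy to check; a tiny illustrative Python example:

    # Floating-point addition is not associative, so the association chosen by
    # the grammar (and hence the evaluation order) can change the answer.
    x, y, z = 0.1, 0.2, 0.3
    left_assoc  = (x + y) + z         # what a left-recursive grammar produces
    right_assoc = x + (y + z)         # what a right-recursive grammar produces
    print(left_assoc == right_assoc)  # False
    print(left_assoc, right_assoc)    # 0.6000000000000001 0.6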

Hierarchy of Context-Free Languages

The inclusion hierarchy for context-free languages:
  the deterministic languages (the LR(k) languages) sit inside the context-free
  languages, and the LL(k), LL(1), simple precedence, and operator precedence
  languages are further subclasses, with LL(1) inside LL(k).

Note: LR(k) ≡ LR(1) (as classes of languages)

Extra Slides Start Here

Beyond Syntax

There is a level of correctness that is deeper than grammar

  fie(a,b,c,d)
    int a, b, c, d;
  { ... }

  fee() {
    int f[3], g[0], h, i, j, k;
    char *p;
    fie(h, i, "ab", j, k);
    k = f * i + j;
    h = g[17];
    printf(".\n", p, q);
    p = 10;
  }

What is wrong with this program?
(let me count the ways ...)

Beyond Syntax

There is a level of correctness that is deeper than grammar

  fie(a,b,c,d)
    int a, b, c, d;
  { ... }

  fee() {
    int f[3], g[0], h, i, j, k;
    char *p;
    fie(h, i, "ab", j, k);
    k = f * i + j;
    h = g[17];
    printf(".\n", p, q);
    p = 10;
  }

What is wrong with this program? (let me count the ways ...)
  declared g[0], used g[17]
  wrong number of args to fie()
  "ab" is not an int
  wrong dimension on use of f
  undeclared variable q
  10 is not a character string

All of these are "deeper than syntax"
To generate code, we need to understand its meaning!

Beyond Syntax

To generate code, the compiler needs to answer many questions
  Is "x" a scalar, an array, or a function? Is "x" declared?
  Are there names that are not declared? Declared but not used?
  Which declaration of "x" does each use reference?
  Is the expression "x * y + z" type-consistent?
  In "a[i,j,k]", does a have three dimensions?
  Where can "z" be stored? (register, local, global, heap, static)
  In "f ← 15", how should 15 be represented?
  How many arguments does "fie()" take? What about "printf()"?
  Does "*p" reference the result of a "malloc()"?
  Do "p" & "q" refer to the same memory location?
  Is "x" defined before it is used?

These cannot be expressed in a CFG

Beyond Syntax

These questions are part of context-sensitive analysis
  Answers depend on values, not parts of speech
  Questions & answers involve non-local information
  Answers may involve computation

How can we answer these questions?
  Use formal methods
    Context-sensitive grammars?
    Attribute grammars? (attributed grammars?)
  Use ad-hoc techniques
    Symbol tables
    Ad-hoc code (action routines)

In scanning & parsing, formalism won; it is a different story here.
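
As a hint of the ad-hoc side, a symbol table is often little more than a scoped map from names to declaration information; a minimal illustrative sketch (not the course's implementation):

    # Hypothetical scoped symbol table, an ad-hoc way to answer questions such
    # as "is x declared?" and "which declaration does this use refer to?"
    class SymbolTable:
        def __init__(self):
            self.scopes = [{}]                    # innermost scope is scopes[-1]

        def enter_scope(self):
            self.scopes.append({})

        def exit_scope(self):
            self.scopes.pop()

        def declare(self, name, info):
            self.scopes[-1][name] = info          # e.g. info = {"type": "int"}

        def lookup(self, name):
            for scope in reversed(self.scopes):   # nearest enclosing declaration wins
                if name in scope:
                    return scope[name]
            return None                           # undeclared

    # Usage: table.declare("x", {"type": "int"}); table.lookup("q") is None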

Beyond Syntax

Telling the story
  The attribute grammar formalism is important
    It succinctly makes many points clear
    It sets the stage for actual, ad-hoc practice
  The problems with attribute grammars motivate practice
    Non-local computation
    Need for centralized information
  Some folks in the community still argue for attribute grammars
    Knowledge is power
    Information is immunization

We will cover attribute grammars, then move on to ad-hoc ideas