Exam 1 Review (With answers) EECS 483 – Lecture 15 University of Michigan Monday, October 30, 2006

Logistics
- When, where:
  - Wednesday, Nov 1, 10:40am – 12:30pm
  - Room TBA
- Type:
  - Open book/note
- What to bring:
  - Textbook, reference books, lecture notes (but you must print these out)
  - Pencils
  - No laptops or cell phones

Topics Covered
- Lexical analysis: ~25%
- Syntax analysis: ~50%
- Semantic analysis: ~25%
  - Percentages are VERY approximate
- Not covered:
  - MIRV-specific stuff
  - Detailed flex/bison syntax
    - However, project 1 and 2 material is fair game
    - General questions about flex/bison are possible

Textbook
- What we have covered: Ch.
- Things you should know / can ignore:
  - Ch 1, 2 – just overviews, don't worry about them
  - Ch 3 – lexical analysis
  - Ch 4 – syntax analysis
  - Ch – syntax-directed translation
    - No
    - Detailed algorithms in 5.5, 5.6 are not important
  - Ch , 7.6
    - You are not responsible for the rest of Ch 6/7

Open Book Exams
- OPEN BOOK != DON'T STUDY
- Open book means:
  - Memorizing things is not that important
  - If you forget something, you can look it up
- But:
  - If you are trying to learn (or re-learn) material during the test, you won't finish
  - Assume the test is not open book
    - Don't rely on the book/notes
    - Treat them as a backup

How to Study
- Re-familiarize yourself with all the material
  - Know where to find things if you get confused
- Practice solving problems
  - Do them without looking at the answer!
  - Class problems/examples from lectures
  - Fall 2004 exam 1
  - Examples in the book
    - If you are ambitious, exercises at the end of each chapter
  - Practice so that you can do them without thinking much

Exam Format
- Short answer: ~40%
  - Explain something
  - Short problems to work out
- Longer design problems: ~60%
  - E.g., construct a parse table
- Range of questions
  - Simple – were you conscious in class?
  - Grind it out – can you solve problems?
  - Challenging – how well do you really understand things?
- My tests tend to be long – so move on if you get stuck!

Lexical Analysis (aka Scanning)
- Ch 3
- Regular expressions
  - How to write an RE from a textual description
- NFAs
  - RE to NFA (Thompson construction)
- DFAs
  - NFA to DFA
- State minimization
- How lex/flex works

Regular Expressions
Construct a regular expression over the alphabet {a,b,c} that matches all strings at least 3 characters in length that end with an 'a'.

Answer: [a-c][a-c]+a
(Note: + is the '+' operator, i.e., one or more instances.)
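
As a quick sanity check (not part of the original slides), the answer can be tested with Python's re module; the pattern below is just a transcription of the regex above, and the helper name is my own.

    import re

    # [a-c][a-c]+a : one symbol, then one or more symbols, then a final 'a'
    # => total length >= 3 and the string ends in 'a'
    PATTERN = re.compile(r"[a-c][a-c]+a")

    def matches(s: str) -> bool:
        """Return True if s is in the language described on the slide."""
        return PATTERN.fullmatch(s) is not None

    assert matches("aba")        # length 3, ends in 'a'
    assert matches("cbcba")      # longer string, ends in 'a'
    assert not matches("ba")     # too short
    assert not matches("abc")    # does not end in 'a'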

NFAs
Construct an NFA for the following regular expression: a*(b|a)*c+

For the answer, see the notes from class, as this is too messy to draw in PowerPoint.

Recall: Convert the NFA to a DFA
- Problem 1: Multiple transitions
  - Move(S,a) is relabeled to target a new state whenever a single input goes to multiple states
- Problem 2: ε transitions
  - Any state reachable by an ε transition is "part of the state"
  - ε-closure – any state reachable from S by ε transitions is in the ε-closure; treat the ε-closure as one big state
[Slide figures: example NFA/DFA diagrams for a+b* and a*b*, not reproducible in text]
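
As a supplementary illustration (not from the slides), here is a minimal sketch of the subset construction in Python, assuming an NFA stored as a dict mapping (state, symbol) to a set of states; the names EPS, eps_closure, and subset_construct are my own, and accepting states are omitted for brevity.

    from collections import deque

    EPS = ""  # marker for epsilon transitions (my own convention)

    def eps_closure(states, trans):
        """All NFA states reachable from `states` via epsilon transitions."""
        closure, work = set(states), list(states)
        while work:
            s = work.pop()
            for t in trans.get((s, EPS), set()):
                if t not in closure:
                    closure.add(t)
                    work.append(t)
        return frozenset(closure)

    def subset_construct(start, trans, alphabet):
        """Subset construction: each DFA state is a frozenset of NFA states.
        The dead (empty) state is skipped to keep the sketch short."""
        d_start = eps_closure({start}, trans)
        d_trans, seen, work = {}, {d_start}, deque([d_start])
        while work:
            S = work.popleft()
            for a in alphabet:
                move = set()
                for s in S:
                    move |= trans.get((s, a), set())
                if not move:
                    continue
                T = eps_closure(move, trans)
                d_trans[(S, a)] = T
                if T not in seen:
                    seen.add(T)
                    work.append(T)
        return d_start, d_trans

    # Tiny example: NFA for a*b with states 1, 2; 1 --a--> 1, 1 --b--> 2.
    nfa_trans = {(1, "a"): {1}, (1, "b"): {2}}
    dfa_start, dfa_trans = subset_construct(1, nfa_trans, {"a", "b"})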

NFA to DFA Conversion
Convert the previous NFA into a DFA.

For the answer, see the notes from class, as this is too messy to draw in PowerPoint.

Syntax Analysis (aka Parsing)
- Ch. 4
- Context-free grammars
  - Derivations, ambiguity, associativity
- Parsing
  - Top-down
    - LL(1), building parse tables (FIRST, FOLLOW)
  - Bottom-up
    - LR(0), LR(1), SLR, LALR
    - Building parse tables (Closure, Goto), shift/reduce
- Abstract syntax trees

Context-Free Grammars
A context-free grammar is capable of representing more languages than a regular expression. What is the major reason for this?

Answer: A CFG has memory (a stack / recursion), so it can match arbitrarily deep nested structure; for example, { a^n b^n : n >= 0 } is context-free but not regular.

Context-Free Grammars
Write a grammar to parse all strings consisting of the symbols {a,b,c,d,e} of the form a^n b^(2m) c^m d^n e^o, where m, n, o >= 0.

Answer:
  S → S e | X
  X → a X d | Y
  Y → b b Y c | ε
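
A small, hedged recursive-descent recognizer for this language is sketched below (not course code). Since S → S e | X is left-recursive, the sketch treats S as "X followed by any number of e's", which generates the same strings; the function names are my own.

    def parse_X(s, i):
        # X -> a X d | Y ; returns the index after the matched X, or None on failure
        if i < len(s) and s[i] == "a":
            j = parse_X(s, i + 1)
            if j is not None and j < len(s) and s[j] == "d":
                return j + 1
            return None
        return parse_Y(s, i)

    def parse_Y(s, i):
        # Y -> b b Y c | epsilon
        if i + 1 < len(s) and s[i] == "b" and s[i + 1] == "b":
            j = parse_Y(s, i + 2)
            if j is not None and j < len(s) and s[j] == "c":
                return j + 1
            return None
        return i

    def in_language(s):
        """Recognize a^n b^(2m) c^m d^n e^o."""
        j = parse_X(s, 0)
        return j is not None and all(ch == "e" for ch in s[j:])

    assert in_language("aadd") and in_language("abbcdee") and in_language("")
    assert not in_language("aad") and not in_language("bbcc")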

LL(1) Grammars
  S → E $
  E → E (E) | E [E] | id

Is the above grammar LL(1)? Explain your answer.

Answer: It is not LL(1), because LL(1) parsers cannot handle left-recursive grammars with only 1 token of lookahead. For the string "id(id)", you don't know which production to apply: all three E productions make sense when the lookahead is "id".

Recall: Computing FIRST/FOLLOW
- Determining FIRST(X)
  1. If X is a terminal, then add X to FIRST(X).
  2. If X → ε, then add ε to FIRST(X).
  3. If X is a nonterminal and X → Y1 Y2 ... Yk, then a is in FIRST(X) if a is in FIRST(Yi) and ε is in FIRST(Yj) for j = 1 ... i-1.
  4. If ε is in FIRST(Y1 Y2 ... Yk), then ε is in FIRST(X).
- Determining FOLLOW(X)
  1. If S is the start symbol, then $ is in FOLLOW(S).
  2. If A → α B β, then add everything in FIRST(β) except ε to FOLLOW(B).
  3. If A → α B, or A → α B β and ε is in FIRST(β), then add FOLLOW(A) to FOLLOW(B).

FIRST/FOLLOW Computation
Construct the FIRST and FOLLOW sets for each non-terminal (S, A, B, C) of the following LL(1) grammar:

  S → A B
  A → b C
  B → a A B | ε
  C → ( S ) | c

Answer:
  FIRST(S) = {b}        FOLLOW(S) = {$, )}
  FIRST(A) = {b}        FOLLOW(A) = {a, $, )}
  FIRST(B) = {a, ε}     FOLLOW(B) = {), $}
  FIRST(C) = {(, c}     FOLLOW(C) = {a, $, )}
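
As an illustrative, hedged sketch (not from the slides), the FIRST/FOLLOW rules above can be computed as a fixed point in Python; the EPS marker, the dict encoding of the grammar, and the function names are my own choices.

    EPS = "eps"  # stands in for epsilon

    # Grammar from this slide; nonterminals are the dict keys, everything else is a terminal.
    GRAMMAR = {
        "S": [["A", "B"]],
        "A": [["b", "C"]],
        "B": [["a", "A", "B"], [EPS]],
        "C": [["(", "S", ")"], ["c"]],
    }
    START = "S"
    NONTERMS = set(GRAMMAR)

    def first_of_seq(seq, first):
        """FIRST of a sequence of grammar symbols (rules 3/4 on the previous slide)."""
        out = set()
        for sym in seq:
            if sym == EPS:
                continue
            if sym not in NONTERMS:          # terminal
                out.add(sym)
                return out
            out |= first[sym] - {EPS}
            if EPS not in first[sym]:
                return out
        out.add(EPS)                         # every symbol in seq can derive epsilon
        return out

    def compute_first_follow():
        first = {n: set() for n in NONTERMS}
        follow = {n: set() for n in NONTERMS}
        follow[START].add("$")               # FOLLOW rule 1
        changed = True
        while changed:                       # iterate until nothing new is added
            changed = False
            for a, prods in GRAMMAR.items():
                for prod in prods:
                    f = first_of_seq(prod, first)
                    if not f <= first[a]:
                        first[a] |= f
                        changed = True
                    for i, b in enumerate(prod):
                        if b not in NONTERMS:
                            continue
                        tail = first_of_seq(prod[i + 1:], first)
                        add = (tail - {EPS}) | (follow[a] if EPS in tail else set())
                        if not add <= follow[b]:
                            follow[b] |= add  # FOLLOW rules 2 and 3
                            changed = True
        return first, follow

    first, follow = compute_first_follow()
    # Expect: first["B"] == {"a", EPS} and follow["A"] == {"a", "$", ")"}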

Recall: Closure and Goto
- Closure of a parser state:
  - Start with Closure(S) = S
  - Then for each item in S of the form X → α . Y β, add items for all productions Y → γ to the closure of S: Y → . γ
- Goto operation: describes transitions between parser states, which are sets of items
  - If the item [X → α . Y β] is in I, then
  - Goto(I, Y) = Closure( { [X → α Y . β] } )
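
A minimal sketch of these two operations in Python follows (not course code). An LR(0) item is represented as (lhs, rhs-tuple, dot position), and the grammar dict encodes the augmented grammar from the next slide; the representation is my own.

    # Augmented grammar from the LR(0) exercise on the following slide.
    PRODUCTIONS = {
        "S": [("E", "$")],
        "E": [("E", "+", "T"), ("T",)],
        "T": [("T", "F"), ("F",)],
        "F": [("F", "*"), ("a",), ("b",)],
    }

    def closure(items):
        """Add [Y -> . gamma] for every nonterminal Y appearing right after a dot."""
        items, work = set(items), list(items)
        while work:
            lhs, rhs, dot = work.pop()
            if dot < len(rhs) and rhs[dot] in PRODUCTIONS:   # dot before a nonterminal
                for prod in PRODUCTIONS[rhs[dot]]:
                    item = (rhs[dot], prod, 0)
                    if item not in items:
                        items.add(item)
                        work.append(item)
        return frozenset(items)

    def goto(items, symbol):
        """Advance the dot over `symbol` in every item where it applies, then close."""
        moved = {(lhs, rhs, dot + 1)
                 for (lhs, rhs, dot) in items
                 if dot < len(rhs) and rhs[dot] == symbol}
        return closure(moved)

    # Example: the start state of the LR(0) DFA for the grammar above.
    start_state = closure({("S", ("E", "$"), 0)})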

DFA Construction for LR(0)
Construct the LR(0) DFA for the following grammar:

  E → E + T | T
  T → T F | F
  F → F * | a | b

Are there any shift/reduce conflicts in the table that you would construct from the DFA? How do you know? (Note: here * is not the closure operator!)

DFA Construction for LR(0) - solution
Augmented grammar:

  S → E $
  E → E + T | T
  T → T F | F
  F → F * | a | b

[Slide figure: the LR(0) item-set DFA, starting from Closure({S → .E$}) = { S → .E$, E → .E+T, E → .T, T → .TF, T → .F, F → .F*, F → .a, F → .b }; the full diagram is not reproducible in text.]

The parse table will have shift/reduce conflicts: states 3, 6, 7, and 9 (in the slide's numbering) all have shift/reduce conflicts, because in each of them both shift and reduce actions are possible.

Bottom-up Parsing
Which is capable of parsing a larger set of languages, LR(0) or SLR?

Answer: SLR, because it uses 1 token of lookahead (the FOLLOW sets) to decide when to reduce, which resolves some of the conflicts an LR(0) parser would have.

Abstract Syntax Trees
Draw the AST for the following C function:

  int foo() {
    int a, b, c, i;
    a = 20;
    b = a * a;
    for (i = 0; i < 10; i++) {
      c = (10 + a) * b - (c / (a * i - b));
    }
  }

Abstract Syntax Trees - answer
[Slide figure: the AST drawing, with a "function foo" root whose children are the return type (int), the arguments (void), the declarations (int a, b, c, i), and the list of body statements; the drawing itself is not reproducible in text.]
Something close to this was all that was needed.
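
Since the slide's drawing does not survive as text, here is a hedged textual sketch of roughly the tree the answer shows, written as nested Python tuples; the node labels and tuple encoding are my own, not the course's AST format.

    # Each node is (label, child, child, ...); leaves are plain strings or ints.
    ast = (
        "function", "foo",
        ("rettype", "int"),
        ("args", "void"),
        ("decls", "int", "a", "b", "c", "i"),
        ("body",
         ("=", "a", 20),
         ("=", "b", ("*", "a", "a")),
         ("for",
          ("=", "i", 0),                      # init
          ("<", "i", 10),                     # test
          ("=", "i", ("+", "i", 1)),          # update
          ("body",
           ("=", "c",
            ("-",
             ("*", ("+", 10, "a"), "b"),
             ("/", "c", ("-", ("*", "a", "i"), "b"))))))),
    )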

Semantic Analysis
- Ch 5, 6
- Syntax-directed translation
  - Semantic actions, building the AST
- Scope information
  - Hierarchy of symbol tables
- Type checking
  - Static/dynamic, strong/weak
  - Static semantics, type judgments, proof trees

Static Semantics
Consider the following language that manipulates ints, bools, and stacks:

  T → int | bool | stack(T)
  E → id | num | E + E
  E → top(E) | pop(E) | push(E,E) | newstack(T) | empty(E)

Notes:
1. Add (+) can only be applied to ints or to stacks: int1 + int2 is the sum; stack1 + stack2 is the stack formed by concatenating their elements (the element types must be the same).
2. top(E) = the top element of stack E
3. pop(E) = the resulting stack after removing the top element
4. push(E, E') = adds E' to stack E and returns the resulting stack
5. newstack(T) = produces a new, empty stack with elements of type T
6. empty(E) = true if E has no elements, false otherwise

Static Semantics (2)
a) Write the static type-checking rules for all expressions in this language.
b) Is the following well typed in any context? pop(push(x, y+5))
c) Is the following well typed in any context? push(newstack(int), top(x)) + push(y, empty(z))

Static Semantics - answer
a) Write the static type-checking rules for all expressions in this language. All rules are in the environment A.

Axioms:
  num : int
  id : T                     if (id : T) ∈ A

Newstack:
  newstack(T) : stack(T)

Add:
  E1 : int     E2 : int            E1 : stack(T)    E2 : stack(T)
  ---------------------            -----------------------------
  E1 + E2 : int                    E1 + E2 : stack(T)

Push:
  E1 : stack(T)    E2 : T
  -----------------------
  push(E1, E2) : stack(T)

Pop:
  E : stack(T)
  -----------------
  pop(E) : stack(T)

Top:
  E : stack(T)
  ------------
  top(E) : T

Empty:
  E : stack(T)
  ---------------
  empty(E) : bool

Static Semantics - answer
b) Is the following well typed in any context? pop(push(x, y+5))

   Yes, this is well typed. 5 is an int, so y must be an int; this also implies x is stack(int). Then we have:
     push(x, y+5) : stack(int)
     pop(push(x, y+5)) : stack(int)

c) Is the following well typed in any context? push(newstack(int), top(x)) + push(y, empty(z))

   No. push(y, empty(z)) : stack(bool), while push(newstack(int), top(x)) : stack(int). Thus the '+' operator is not well typed, as it is adding stacks of different element types.
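
As a supplementary, hedged sketch (not course code), the typing rules above can be implemented over a small expression tree in Python; the tuple-based expression encoding and the function names are my own invention.

    # Types: "int", "bool", or ("stack", elem_type).
    # Expressions (my encoding): ("num", n), ("id", name), ("+", e1, e2),
    # ("top", e), ("pop", e), ("push", e1, e2), ("newstack", T), ("empty", e).

    class StackTypeError(Exception):
        """Raised when an expression does not type-check."""

    def _expect_stack(t):
        if not (isinstance(t, tuple) and t[0] == "stack"):
            raise StackTypeError("expected a stack type")

    def typecheck(e, env):
        """Return the type of e in environment env (a dict: id -> type)."""
        tag = e[0]
        if tag == "num":
            return "int"
        if tag == "id":
            return env[e[1]]                       # (id : T) in A
        if tag == "+":
            t1, t2 = typecheck(e[1], env), typecheck(e[2], env)
            if t1 == t2 == "int":
                return "int"
            if t1 == t2 and isinstance(t1, tuple) and t1[0] == "stack":
                return t1                          # stack(T) + stack(T) : stack(T)
            raise StackTypeError("'+' needs two ints or two stacks of the same type")
        if tag == "top":
            t = typecheck(e[1], env)
            _expect_stack(t)
            return t[1]                            # top(E) : T
        if tag == "pop":
            t = typecheck(e[1], env)
            _expect_stack(t)
            return t                               # pop(E) : stack(T)
        if tag == "push":
            t1, t2 = typecheck(e[1], env), typecheck(e[2], env)
            _expect_stack(t1)
            if t1[1] != t2:
                raise StackTypeError("pushed element does not match stack element type")
            return t1
        if tag == "newstack":
            return ("stack", e[1])
        if tag == "empty":
            _expect_stack(typecheck(e[1], env))
            return "bool"
        raise StackTypeError("unknown expression")

    # Part (b): pop(push(x, y+5)) with x : stack(int), y : int  ->  stack(int)
    env = {"x": ("stack", "int"), "y": "int"}
    assert typecheck(("pop", ("push", ("id", "x"),
                              ("+", ("id", "y"), ("num", 5)))), env) == ("stack", "int")
    # Part (c) would raise StackTypeError for, e.g., y : stack(bool),
    # because '+' would be adding stack(int) and stack(bool).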

Important Notes
- Just because I did not go over a problem on a topic does not mean it's not important. There is limited time, so I cannot go over everything!
- Extended office hours
  - Tomorrow, Tuesday, Oct 31, 4–5:30pm