Cube Pruning as Heuristic Search
Mark Hopkins and Greg Langmead, Language Weaver, Inc.




Similar presentations
Heuristic Searches. Feedback: Tutorial 1 Describing a state. Entire state space vs. incremental development. Elimination of children. Closed and the solution.

Review: Search problem formulation
Heuristic Search techniques
Informed search strategies
An Introduction to Artificial Intelligence
A* Search. 2 Tree search algorithms Basic idea: Exploration of state space by generating successors of already-explored states (a.k.a.~expanding states).
Sandro Spina, John Abela Department of CS & AI, University of Malta. Mutually compatible and incompatible merges for the search of the smallest consistent.
Iterative Deepening A* & Constraint Satisfaction Problems Lecture Module 6.
Informed Search Methods How can we improve searching strategy by using intelligence? Map example: Heuristic: Expand those nodes closest in “as the crow.
Lecture 24 Coping with NPC and Unsolvable problems. When a problem is unsolvable, that's generally very bad news: it means there is no general algorithm.
Exact algorithms for the minimum latency problem 吳邦一 黃正男 詹富傑 (Dept. of Computer Science, Shu-Te University).
Branch & Bound Algorithms
Chapter 4 Search Methodologies.
S. J. Shyu Chap. 1 Introduction 1 The Design and Analysis of Algorithms Chapter 1 Introduction S. J. Shyu.
Introduction to Trees. Tree example Consider this program structure diagram as itself a data structure. main readinprintprocess sortlookup.
Best-First Search: Agendas
Slide 1 Search: Advanced Topics Jim Little UBC CS 322 – Search 5 September 22, 2014 Textbook § 3.6.
Algorithm Strategies Nelson Padua-Perez Chau-Wen Tseng Department of Computer Science University of Maryland, College Park.
1 Introduction to Computability Theory Lecture11: Variants of Turing Machines Prof. Amos Israeli.
CPSC 322, Lecture 9Slide 1 Search: Advanced Topics Computer Science cpsc322, Lecture 9 (Textbook Chpt 3.6) January, 22, 2010.
CS5371 Theory of Computation Lecture 11: Computability Theory II (TM Variants, Church-Turing Thesis)
1 Chapter 4 Search Methodologies. 2 Chapter 4 Contents l Brute force search l Depth-first search l Breadth-first search l Properties of search methods.
Sag et al., Chapter 4 Complex Feature Values 10/7/04 Michael Mulyar.
MAE 552 – Heuristic Optimization Lecture 27 April 3, 2002
Computational Methods for Management and Economics Carla Gomes
SubSea: An Efficient Heuristic Algorithm for Subgraph Isomorphism Vladimir Lipets Ben-Gurion University of the Negev Joint work with Prof. Ehud Gudes.
ICS-271:Notes 6: 1 Notes 6: Game-Playing ICS 271 Fall 2006.
Approximate Factoring for A* Search Aria Haghighi, John DeNero, and Dan Klein Computer Science Division University of California Berkeley.
Backtracking.
1 Combinatorial Problems in Cooperative Control: Complexity and Scalability Carla Gomes and Bart Selman Cornell University Muri Meeting March 2002.
Cmpt-225 Simulation. Application: Simulation Simulation  A technique for modeling the behavior of both natural and human-made systems  Goal Generate.
 Optimal Packing of High- Precision Rectangles By Eric Huang & Richard E. Korf 25 th AAAI Conference, 2011 Florida Institute of Technology CSE 5694 Robotics.
Busby, Dodge, Fleming, and Negrusa. Backtracking Algorithm Is used to solve problems for which a sequence of objects is to be selected from a set such.
Department of Computer Science and Engineering, HKUST 1 HKUST Summer Programming Course 2008 Recursion.
BackTracking CS335. N-Queens The object is to place queens on a chess board in such as way as no queen can capture another one in a single move –Recall.
A Language Independent Method for Question Classification COLING 2004.
Exact methods for ALB ALB problem can be considered as a shortest path problem The complete graph need not be developed since one can stop as soon as in.
+ Simulation Design. + Types event-advance and unit-time advance. Both these designs are event-based but utilize different ways of advancing the time.
Informed search algorithms Chapter 4. Outline Best-first search Greedy best-first search A * search Heuristics.
Computer Science CPSC 322 Lecture 9 (Ch , 3.7.6) Slide 1.
Informed search strategies Idea: give the algorithm “hints” about the desirability of different states – Use an evaluation function to rank nodes and select.
Informed searching. Informed search Blind search algorithms do not consider any information about the states and the goals Often there is extra knowledge.
Coarse-to-Fine Efficient Viterbi Parsing Nathan Bodenstab OGI RPE Presentation May 8, 2006.
1 More About Turing Machines “Programming Tricks” Restrictions Extensions Closure Properties.
2. Regular Expressions and Automata. March 31, 2007, Artificial Intelligence Lab, 이경택. Text: Speech and Language Processing, pp. 33–56.
What’s in a translation rule? Paper by Galley, Hopkins, Knight & Marcu Presentation By: Behrang Mohit.
Review: Tree search Initialize the frontier using the starting state While the frontier is not empty – Choose a frontier node to expand according to search.
Lecture 3: Uninformed Search
15.053Tuesday, April 9 Branch and Bound Handouts: Lecture Notes.
1 Branch and Bound Searching Strategies Updated: 12/27/2010.
Online Social Networks and Media
Problem Reduction So far we have considered search strategies for OR graph. In OR graph, several arcs indicate a variety of ways in which the original.
Slides by: Eric Ringger, adapted from slides by Stuart Russell of UC Berkeley. CS 312: Algorithm Design & Analysis Lecture #36: Best-first State- space.
Ricochet Robots Mitch Powell Daniel Tilgner. Abstract Ricochet robots is a board game created in Germany in A player is given 30 seconds to find.
Heuristic Functions. A Heuristic is a function that, when applied to a state, returns a number that is an estimate of the merit of the state, with respect.
CSCE350 Algorithms and Data Structure Lecture 21 Jianjun Hu Department of Computer Science and Engineering University of South Carolina
Heuristic Functions.
Optimization Problems The previous examples simply find a single solution What if we have a cost associated with each solution and we want to find the.
G5AIAI Introduction to AI Graham Kendall Heuristic Searches.
Artificial Intelligence Lecture No. 8 Dr. Asad Ali Safi ​ Assistant Professor, Department of Computer Science, COMSATS Institute of Information Technology.
1 GRAPHS – Definitions A graph G = (V, E) consists of –a set of vertices, V, and –a set of edges, E, where each edge is a pair (v,w) s.t. v,w  V Vertices.
Computer Science and Engineering Jianye Yang 1, Ying Zhang 2, Wenjie Zhang 1, Xuemin Lin 1 Influence based Cost Optimization on User Preference 1 The University.
Review: Tree search Initialize the frontier using the starting state
BackTracking CS255.
Artificial Intelligence Problem solving by searching CSC 361
Analysis and design of algorithm
Heuristics Definition – a heuristic is an inexact algorithm that is based on intuitive and plausible arguments which are “likely” to lead to reasonable.
CMSC 471 Fall 2011 Class #4 Tue 9/13/11 Uninformed Search
Presentation transcript:

Cube Pruning as Heuristic Search
Mark Hopkins and Greg Langmead, Language Weaver, Inc.

Huang and Chiang (2007), DeNero et al. (2009), Moore and Quirk (2007), Iglesias et al. (2009), Roark and Hollingshead (2008), Li and Khudanpur (2008), Chiang (2007), Zhang and Gildea (2008), Petrov et al. (2008), Pauls and Klein (2009), Pust and Knight (2009)

distinct terminologies + specialized use cases = good?

!? #$!%&!!!

cube pruning (Chiang, 2007) = A* search …on a specific search space …with specific heuristics

Motivation: Speed vs Accuracy
Accuracy (e.g., BLEU) is very important. But time is money:
– for the customer: throughput
– for LW: CPU time on servers for the SaaS solution
Our customers expect words per minute in one thread, and linear speedup with multiple threads. That's seconds per sentence. Can syntax be viable at product speeds?

Cube pruning targets CKY decoding.
[CKY chart for the French sentence "il ne va pas": one cell per span [i,j], each cell holding a list of items]

An item encodes a (set of) translation(s) of a partial sentence: here, the items of span [1,4] encode translations of "ne va pas".

How are items created?
[chart detail: spans [2,3], [2,4], [3,5], and [4,5] already hold items; the cell for span [2,5] is empty: ????]

Items are created by combining items from complementary spans: for example, [2,4] item3 + [4,5] item2 → [2,5] item1.

What is an item?
An item consists of three parts: a span, a postcondition, and a carry.
[ 2,4, NP, the*car ]
span = [2,4]; postcondition = NP; carry = the*car

CKY Item Creation
[2,3,A,a*b] + [3,5,B,b*a] →(B) [2,5,B,b*b]
The subitems' postconditions (A, B) must match the rule's preconditions; the rule's postcondition (B) becomes the new item's postcondition, and a new carry (b*b) is computed.
cost( new item ) = cost( subitem1 ) + cost( subitem2 ) + cost( rule ) + interaction( rule, subitems )
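To make the cost computation concrete, here is a minimal Python sketch of items, rules, and the combine step. All of the names (`Item`, `Rule`, `combine`) and the toy `interaction` and `carry_of` stand-ins are ours, not the talk's; in a real decoder the carry and the interaction cost would come from the language model.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass(frozen=True)
class Item:
    start: int          # span start, e.g. 2
    end: int            # span end, e.g. 5
    postcondition: str  # e.g. the nonterminal "B"
    carry: str          # e.g. LM boundary words "b*b"
    cost: float

@dataclass(frozen=True)
class Rule:
    postcondition: str              # postcondition of the item it creates
    preconditions: Tuple[str, str]  # required subitem postconditions, e.g. ("A", "B")
    cost: float

def combine(left: Item, right: Item, rule: Rule,
            interaction: Callable[[Rule, Item, Item], float],
            carry_of: Callable[[Rule, str, str], str]) -> Item:
    """cost(new) = cost(subitem1) + cost(subitem2) + cost(rule) + interaction."""
    assert left.end == right.start                  # complementary spans
    assert (left.postcondition, right.postcondition) == rule.preconditions
    cost = (left.cost + right.cost + rule.cost
            + interaction(rule, left, right))
    # the new carry depends on the rule's target side; carry_of abstracts that
    return Item(left.start, right.end, rule.postcondition,
                carry_of(rule, left.carry, right.carry), cost)

# Toy usage mirroring the slide: [2,3,A,a*b] + [3,5,B,b*a] -> [2,5,B,b*b]
if __name__ == "__main__":
    a = Item(2, 3, "A", "a*b", 3.0)
    b = Item(3, 5, "B", "b*a", 4.0)
    r = Rule("B", ("A", "B"), 1.0)
    new = combine(a, b, r,
                  interaction=lambda rule, l, rt: 0.5,  # stand-in for LM cost
                  carry_of=lambda rule, lc, rc: "b*b")  # stand-in for real carry logic
    print(new)  # Item(start=2, end=5, postcondition='B', carry='b*b', cost=8.5)
```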

The Item Creation Problem
Even if we just store 1000 items in one subspan's cell, and 1000 items in the complementary subspan's cell, and we have only 1000s of grammar rules, item creation can still take a very long time.

The Item Creation Problem
Is there a better way to enumerate the 1000 lowest-cost items for this span, without going through the millions of candidate items and taking the best 1000? This is the problem that cube pruning addresses.
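For contrast, here is what the naive strategy looks like, reusing the `Item`/`Rule`/`combine` sketch above (`split_pairs` and `rules_by_precond` are illustrative names of ours): build every candidate and keep the cheapest n.

```python
import heapq

def brute_force_n_best(split_pairs, rules_by_precond, interaction, carry_of, n=1000):
    """Build every candidate item for the span, then keep the n cheapest.
    With ~1000 items per subspan and 1000s of matching rules this touches
    millions of candidates per span; cube pruning exists to avoid this."""
    candidates = []
    for left_items, right_items in split_pairs:  # e.g. ([2,3],[3,5]) and ([2,4],[4,5])
        for l in left_items:
            for r in right_items:
                key = (l.postcondition, r.postcondition)
                for rule in rules_by_precond.get(key, ()):
                    candidates.append(combine(l, r, rule, interaction, carry_of))
    return heapq.nsmallest(n, candidates, key=lambda item: item.cost)
```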

A demonstration of incremental CKY item creation for span [2,5].
We want: [2,3,A,a*b] + [3,5,B,b*a] →(B) [2,5,B,b*b]
So far we have: [?,?,?,?] + [?,?,?,?] →(?) [?,?,?,?]

First, choose the subspans: [2,3] and [3,5] (rather than [2,4] and [4,5]).
So far we have: [2,3,?,?] + [3,5,?,?] →(?) [2,5,?,?]

Next, choose the first precondition: A.
So far we have: [2,3,A,?] + [3,5,?,?] →(?) [2,5,?,?]

Then choose the second precondition: B.
So far we have: [2,3,A,?] + [3,5,B,?] →(?) [2,5,?,?]

Accept rule(A,B,1)? (rule(A,B,k) is the kth lowest-cost rule whose preconditions are (A, B).) Yes: the rule's postcondition B is now fixed.
So far we have: [2,3,A,?] + [3,5,B,?] →(B) [2,5,B,?]

Accept item(2,3,A,1)? (item(2,3,A,k) is the kth lowest-cost item of span [2,3] whose postcondition is A.) Yes.
So far we have: [2,3,A,a*b] + [3,5,B,?] →(B) [2,5,B,?*b]

Accept item(3,5,B,1)? No. Accept item(3,5,B,2)? Yes.
So far we have: [2,3,A,a*b] + [3,5,B,b*a] →(B) [2,5,B,b*b], which is the item we wanted.

This is a search space.
The Item Creation Problem, rephrased: find the n lowest-cost goal nodes of this search space.

So if we can come up with lower bounds on the best-cost goal node reachable from each node of the tree, then we can just run A* on this search space to find the n goal nodes of lowest cost, without searching the entire space.
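In code, the rephrased problem is ordinary best-first search with an early exit after n goals. A generic sketch (the names are ours; `successors`, `is_goal`, and `h` encode the decision tree above):

```python
import heapq
from itertools import count

def astar_n_best(start, successors, is_goal, h, n):
    """Pop nodes in increasing order of h and collect the first n goal
    nodes popped. h(node) estimates the cost of the best goal reachable
    from node (for a goal node, h is its actual cost). If h never
    overestimates, the n goals returned are exactly the n cheapest."""
    ticket = count()  # tie-breaker so incomparable nodes are never compared
    frontier = [(h(start), next(ticket), start)]
    goals = []
    while frontier and len(goals) < n:
        _, _, node = heapq.heappop(frontier)
        if is_goal(node):
            goals.append(node)
            continue
        for child in successors(node):
            heapq.heappush(frontier, (h(child), next(ticket), child))
    return goals
```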

Cube pruning's heuristics: h = -infinity at the early nodes (the subspan and precondition choices), and h = greedy lookahead cost at the later nodes (the rule and item choices).
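A sketch of the lookahead heuristic, reusing `combine` from earlier. The partial-decision `state`, with fields `rule`, `left`, and `right` (None when undecided), is a hypothetical node representation of ours; the input lists are assumed sorted by cost:

```python
def greedy_lookahead(state, rules, left_items, right_items, interaction, carry_of):
    """Fill each still-undecided slot with its cheapest candidate, build
    the resulting goal item, and use that item's cost as h. Cheap to
    compute, but not a lower bound: the interaction term can make some
    other completion cheaper than the greedy one."""
    rule = state.rule if state.rule is not None else rules[0]
    left = state.left if state.left is not None else left_items[0]
    right = state.right if state.right is not None else right_items[0]
    return combine(left, right, rule, interaction, carry_of).cost
```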

For example: at a node where the rule has been accepted, the greedy lookahead completes the remaining choices, producing the item [2,5,B,b*b]; the cost of that item is taken as the node's h.

Continuing the walkthrough: one branch gets h = 7.48; on another, the lookahead below accept item(2,3,A,2)? completes to the item [2,5,A,a*b], giving h = 12.26.

But h = 12.26 is not a lower bound: below that node there is a goal item [2,5,A,a*b] whose cost is less than 12.26.

h = -infinity: admissible. h = greedy lookahead cost: not admissible. Therefore A* will not find the n best solutions; it will only find n good solutions.

Cube pruning vs. A* search

Cube pruning begins by forming cubes, one for each choice of subspans and preconditions:
([2,3] A, [3,5] A), ([2,3] A, [3,5] B), ([2,3] B, [3,5] A), ([2,3] B, [3,5] B),
([2,4] A, [4,5] A), ([2,4] A, [4,5] B), ([2,4] B, [4,5] A), ([2,4] B, [4,5] B)

A* search visits nodes in increasing order of heuristic value; therefore, it will begin by visiting all the nodes with h = -infinity: exactly the subspan and precondition choices that define the cubes.


What is a cube? A cube is a set of three axes.

For the cube ([2,3] A, [3,5] B), the three axes, each sorted by increasing cost, are:
rules: rule(A,B,1), rule(A,B,2), rule(A,B,3), rule(A,B,4)
items of span [2,3] with postcondition A: item(2,3,A,1), item(2,3,A,2), item(2,3,A,3)
items of span [3,5] with postcondition B: item(3,5,B,1), item(3,5,B,2), item(3,5,B,3)
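In code, a cube is just three cost-sorted lists plus a way to build the item at a coordinate; a small sketch (our names, reusing `combine`):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Cube:
    """One cube = one choice of subspans and preconditions,
    e.g. ([2,3], A) x ([3,5], B). Every axis is sorted by increasing cost."""
    rules: List        # rule(A,B,1), rule(A,B,2), ...
    left_items: List   # item(2,3,A,1), item(2,3,A,2), ...
    right_items: List  # item(3,5,B,1), item(3,5,B,2), ...

    def item_at(self, i, j, k, interaction, carry_of):
        """The item produced by coordinate (i, j, k) of this cube."""
        return combine(self.left_items[j], self.right_items[k],
                       self.rules[i], interaction, carry_of)
```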

Thus each choice of one object from each axis (here item(2,3,A,2), item(3,5,B,1), and rule(A,B,4))…

…creates a new item for span [2,5]: item(2,3,A,2) + item(3,5,B,1) + rule(A,B,4) gives [2,3,A,a*b] + [3,5,B,b*a] →(B) [2,5,B,b*b].

If we take the best representative from each axis (i.e., the "1-1-1"), then we expect the resulting item to have a low cost, since:
cost( new item ) = cost( subitem1 ) + cost( subitem2 ) + cost( rule ) + interaction( rule, subitems )

Though we are not guaranteed this, because this cost is not monotonic: the interaction term does not have to grow as we move along an axis.
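A toy demonstration of that non-monotonicity, with invented numbers and the earlier sketch: item(3,5,B,2) costs more than item(3,5,B,1) on its own axis, yet a language-model-style interaction makes the combination that uses it cheaper.

```python
a1 = Item(2, 3, "A", "a*b", 3.0)
b1 = Item(3, 5, "B", "c*a", 4.0)   # item(3,5,B,1): cheapest on its axis
b2 = Item(3, 5, "B", "b*a", 4.2)   # item(3,5,B,2): slightly worse alone
r1 = Rule("B", ("A", "B"), 1.0)

def lm_interaction(rule, left, right):
    # invented stand-in for an LM: matching boundary words get a discount
    return 0.1 if left.carry.split("*")[1] == right.carry.split("*")[0] else 2.0

carry = lambda rule, lc, rc: "a*a"
print(combine(a1, b1, r1, lm_interaction, carry).cost)  # 3.0+4.0+1.0+2.0 = 10.0
print(combine(a1, b2, r1, lm_interaction, carry).cost)  # 3.0+4.2+1.0+0.1 = 8.3
```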

Cube pruning proceeds by creating the "1-1-1" item of every cube and scoring them.

Meanwhile, A* search has scored its frontier nodes: accept rule(A,B,1)?, accept item(2,3,A,1)?, accept item(3,5,B,1)?, …

At this point, cube pruning takes the best item it has created. It keeps this item (KEPT) and generates its "one-off" items, the items one step farther along each of the cube's three axes.
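Putting the pieces together, here is a sketch of the per-span loop (our rendering of the standard algorithm, not code from the talk), reusing `Cube`: seed an agenda with every cube's 1-1-1 item, repeatedly keep the cheapest item on the agenda, and push the popped coordinate's one-off neighbours.

```python
import heapq
from itertools import count

def cube_pruning_span(cubes, interaction, carry_of, n):
    """Return up to n kept items for one span."""
    ticket = count()
    agenda, kept, seen = [], [], set()

    def push(ci, coord):
        cube, (i, j, k) = cubes[ci], coord
        if (ci, coord) in seen:
            return
        if i < len(cube.rules) and j < len(cube.left_items) and k < len(cube.right_items):
            seen.add((ci, coord))
            item = cube.item_at(i, j, k, interaction, carry_of)
            heapq.heappush(agenda, (item.cost, next(ticket), ci, coord, item))

    for ci in range(len(cubes)):
        push(ci, (0, 0, 0))                    # the "1-1-1" of every cube
    while agenda and len(kept) < n:
        _, _, ci, (i, j, k), item = heapq.heappop(agenda)
        kept.append(item)                      # KEPT
        for neighbour in ((i + 1, j, k), (i, j + 1, k), (i, j, k + 1)):
            push(ci, neighbour)                # generate its one-off items
    return kept
```

Because the cost is not monotonic along the axes, the n items this keeps are not guaranteed to be the n cheapest; they are n good items, exactly as the A* framing predicts.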

Meanwhile, A* search continues to visit nodes in increasing order of heuristic value: it expands the accept/reject decisions for the rule and the two subitems, scoring branches with the lookahead (h = 4.8, then h = 7.1, …) until it pops a goal node. A goal node corresponds to a completed item, so A* keeps this item (KEPT) too, mirroring the item that cube pruning just kept.

Cube pruning = A* search… almost (+ node tying).

Cube pruning was specifically designed for hierarchical phrase-based MT, which uses only a small number of distinct postconditions. But say our use case is string-to-tree MT, in the style of Galley et al. (2006).

Average number of search nodes visited, per sentence (Arabic-English NIST 2008)

Nodes by type    Cube Pruning
subspan          12936
precondition
rule             33734
item
goal             74618
TOTAL
BLEU             38.33

The early nodes with h = -infinity dominate the search time.

Idea: replace h = -infinity at the early nodes with something better!
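One natural candidate (ours; not necessarily the exact augmented heuristic reported below) is to lower-bound the additive part of the cost with the cheapest object still reachable on each axis, given the choices made so far:

```python
def h_early(left_items, right_items, rules, interaction_lb=0.0):
    """A finite h for subspan/precondition nodes: cheapest consistent
    item on each side, plus cheapest consistent rule, plus a lower
    bound on the interaction term. Admissible only if interaction_lb
    really lower-bounds the interaction; compare with h = -infinity,
    which forces A* to visit every early node first."""
    if not (left_items and right_items and rules):
        return float("inf")   # no goal below this node
    return (min(item.cost for item in left_items)
            + min(item.cost for item in right_items)
            + min(rule.cost for rule in rules)
            + interaction_lb)
```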

Number of search nodes visited (Arabic-English NIST 2008)

Nodes by type    Cube Pruning    Augmented CP
subspan          12936
precondition
rule             33734
item
goal             74618
TOTAL
BLEU             38.33

Tradeoff curves, Arabic-English NIST 2008 [figure]

With an admissible h, cube pruning becomes exact. We found that our exact version of cube pruning had a similar time/quality curve to the original inexact version. However, it was not as effective as our "augmented" version of cube pruning. This is all interesting, but somewhat beside the point.

Lesson: what does cube pruning teach us?

It tells us that it is useful to frame the CKY Item Creation Problem as a heuristic search problem. Once this realization is made, we suddenly have many more avenues available to us when implementing a CKY decoder for a particular use case.

We can change the heuristics: replace h = -infinity and the greedy lookahead cost with something better.

We can change the search algorithm.

For instance, instead of A*, we could try a depth-first strategy like depth-first branch-and-bound, and take advantage of its anytime properties.
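A sketch of that alternative, for the 1-best case (ours, not from the talk). It assumes a `bound` function that never overestimates the best goal cost below a node; the incumbent solution improves over time, which is what makes the strategy anytime.

```python
import math

def dfbnb(start, successors, is_goal, cost, bound):
    """Depth-first branch-and-bound over the same item-creation space.
    cost(goal) is a goal's actual cost; bound(node) must lower-bound
    the cost of every goal below node for the pruning to be safe."""
    best_cost, best_goal = math.inf, None

    def visit(node):
        nonlocal best_cost, best_goal
        if is_goal(node):
            if cost(node) < best_cost:
                best_cost, best_goal = cost(node), node   # new incumbent
            return
        # most promising children first, to tighten the bound early
        for child in sorted(successors(node), key=bound):
            if bound(child) < best_cost:                  # otherwise prune
                visit(child)

    visit(start)
    return best_cost, best_goal
```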

We can change the search space.

Lesson: we end up with a speedup technique which is simple, general, well-studied, and easily adaptable to new use cases.

Thank you. Questions?