
1 Some common assumptions behind Computational Generation of Referring Expressions (GRE) (Introductory remarks at the start of the workshop)

2 GRE is one sub-area of Natural Language Generation (NLG). Objective: allowing a computer to refer to an object (or set of objects) in natural language, by producing a referring expression such as a definite description. Frequent assumption: all that counts is finding a distinguishing description.

3 Example Situation [figure: five objects a–e, with prices a: £100, b: £150, c: £100, d: £150, e: £?; some labelled Swedish, some Italian]

4 Formalized in a KB
Type: furniture (abcde), desk (ab), chair (cde)
Origin: Sweden (ac), Italy (bde)
Colour: dark (ade), light (bc), grey (a)
Price: 100 (ac), 150 (bd), 250 ({})
Contains: wood ({}), metal (abcde), cotton (d)
Assumption: this knowledge is shared between speaker and hearer, and directly accessible.
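A minimal sketch of how this shared KB might be encoded, assuming (as an illustration, not part of the slides) a dict that maps each attribute to its value-to-extension sets:

```python
# Hedged sketch of the slide's KB: each attribute maps values to the
# set of objects that carry them. Lowercase names are assumptions.
KB = {
    "type":     {"furniture": {"a", "b", "c", "d", "e"},
                 "desk": {"a", "b"},
                 "chair": {"c", "d", "e"}},
    "origin":   {"sweden": {"a", "c"}, "italy": {"b", "d", "e"}},
    "colour":   {"dark": {"a", "d", "e"}, "light": {"b", "c"}, "grey": {"a"}},
    "price":    {"100": {"a", "c"}, "150": {"b", "d"}, "250": set()},
    "contains": {"wood": set(), "metal": {"a", "b", "c", "d", "e"}, "cotton": {"d"}},
}

# Shared, directly accessible knowledge: both speaker and hearer can
# look up, e.g., which objects are Swedish.
print(KB["origin"]["sweden"])  # the set {'a', 'c'} (display order may vary)
```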

5 Some possible solutions for referring to a, e.g.:
– "the Swedish desk"
– "the dark-coloured desk from Sweden"
Much work in GRE cares little about the actual words (e.g., "dark-coloured" instead of "dark"). The focus is on semantics: Content Determination. From this point of view, a description is a set of properties (Attribute: Value pairs), e.g. {Origin: Sweden, Colour: dark}.

6 Other typical assumptions
All information in the KB is atomic (no negations, no disjunctions, no quantifiers).
Context outside the KB is disregarded:
– no anaphora
– no deixis
If a distinguishing description of the referent exists, then one must be generated ("logical completeness").

7 All of these assumptions have recently been challenged. This workshop looks set to continue that trend.

8 One well-known algorithm: the Incremental Algorithm (Dale & Reiter 1995). Accumulate properties until a distinguishing description is reached (incremental, hence no backtracking). Linguistic realisation (in words) is done by later modules of the NLG system.

9 Incremental Algorithm (informal):
Properties are considered in a fixed order P.
A property is included if it is useful: true of the target, false of some distractors.
Stop when a distinguishing description is found, or when the end of the list has been reached.
Earlier properties have a greater chance of being included (e.g., a perceptually salient property); the order is therefore called the preference order.
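The informal procedure above can be sketched in code. This is a simplified illustration over the slides' example KB, not Dale & Reiter's exact formulation (their algorithm also handles value subsumption, e.g. always including a type); the preference order and lowercase attribute names are assumptions:

```python
# Sketch of the Incremental Algorithm over the slides' example KB.
KB = {
    "type":   {"desk": {"a", "b"}, "chair": {"c", "d", "e"}},
    "origin": {"sweden": {"a", "c"}, "italy": {"b", "d", "e"}},
    "colour": {"dark": {"a", "d", "e"}, "light": {"b", "c"}},
    "price":  {"100": {"a", "c"}, "150": {"b", "d"}},
}

def incremental(target, distractors, preference_order):
    description = []
    distractors = set(distractors)
    for attr in preference_order:          # fixed preference order, no backtracking
        for value, objs in KB[attr].items():
            # A property is useful if it is true of the target
            # and false of at least one remaining distractor.
            if target in objs and distractors - objs:
                description.append((attr, value))
                distractors &= objs        # rule out the excluded distractors
                break                      # at most one value per attribute
        if not distractors:                # distinguishing description found
            return description
    return None                            # no distinguishing description exists

# Referring to a among {b, c, d, e} yields "the Swedish desk".
print(incremental("a", {"b", "c", "d", "e"}, ["type", "origin", "colour", "price"]))
# [('type', 'desk'), ('origin', 'sweden')]
```

Note how incrementality shows: once ("type", "desk") is added, it is never reconsidered, even if a later property alone would have sufficed.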

10 These introductory remarks have focussed on computational GRE. We'll see various other perspectives today, particularly psycholinguistic ones. These introductory remarks were aimed at levelling the playing field.

11 We can start playing!

