1 Data Collection and Normalization for the Scenario-Based Lexical Knowledge Resource of a Text-to-Scene Conversion System
Margit Bowler
2 Who I Am
Rising senior at Reed College in Portland
Linguistics major, concentration in Russian
3 Overview
WordsEye & Scenario-Based Lexical Knowledge Resource (SBLR)
Use of Amazon's Mechanical Turk (AMT) for data collection
Manual normalization of the AMT data and definition of semantic relations
Automatic normalization techniques of AMT data with respect to building the SBLR
Future automatic normalization techniques
4 WordsEye Text-to-Scene Conversion
Example input: "the humongous white shiny bear is on the american mountain range. the mountain range is 100 feet tall. the ground is water. the sky is partly cloudy. the airplane is 90 feet in front of the nose of the bear. the airplane is facing right."
5 Scenario-Based Lexical Knowledge Resource (SBLR)
Information on semantic categories of words
Semantic relations between predicates (verbs, nouns, adjectives, prepositions) and their arguments
Contextual, common-sense knowledge about the visual scenes in which various actions and items occur
6 How to build the SBLR… efficiently?
Manual construction of the SBLR is time-consuming and expensive
Past methods have included mining information from external semantic resources (e.g. WordNet, FrameNet, PropBank) and information extraction techniques applied to other corpora; a minimal WordNet lookup is sketched below
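To make "mining information from external semantic resources" concrete, here is a minimal sketch that pulls hypernyms (rough semantic categories) and part meronyms (parts) for a noun out of WordNet via NLTK. The function and the choice of relations are illustrative assumptions, not the project's actual extraction pipeline.

```python
# Sketch: mining candidate SBLR facts from WordNet with NLTK (illustrative only).
# Requires: pip install nltk; then nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def wordnet_facts(word):
    """Collect hypernyms (semantic categories) and part meronyms (parts) for a noun."""
    facts = {"hypernyms": set(), "parts": set()}
    for synset in wn.synsets(word, pos=wn.NOUN):
        for hyper in synset.hypernyms():
            facts["hypernyms"].update(hyper.lemma_names())
        for part in synset.part_meronyms():
            facts["parts"].update(part.lemma_names())
    return facts

# Depending on the WordNet version, parts of 'scissors' may include 'blade'.
print(wordnet_facts("scissors"))
```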
7 Amazon's Mechanical Turk (AMT)
Online marketplace for work
Anyone can work on AMT; however, it is possible to screen workers by various criteria. We screened ours by:
Located in the USA
99%+ approval rating
8 AMT Tasks
In each task, we asked for up to 10 responses. A comment box was provided for >10 responses.
Task 1: Given the object X, name 10 locations where you would find X. (Locations)
Task 2: Given the object X, name 10 objects found near X. (Nearby Objects)
Task 3: Given the object X, list 10 parts of X. (Part-Whole)
9 AMT Task Results
17,200 total responses
Spent $106.90 for all three tasks
It took approximately 5 days to complete each task

Task        Target Words   User Inputs   Reward
Locations   342            6,850         $0.05
Objects     245            3,500         $0.07
Parts       342            6,850         $0.05
10 Goal: How can data collected from AMT be automatically normalized in such a way that it is useful for building the Scenario-Based Lexical Knowledge Resource (SBLR)?
11 Manual Normalization of AMT Data
Removal of uninformative target item-response item pairs between which no relevant semantic relationship held
Definition of the semantic relations held between the remaining target item-response item pairs
This manually normalized set of data was used as the standard against which we measured the various automatic normalization techniques.
12 Rejected Target-Response Pairs
Misinterpretation of an ambiguous target item (e.g. mobile)
Viable interpretation of the target item that was not contained within the SBLR (e.g. crawfish as food rather than as a living animal)
Overly generic responses (e.g. store in response to turntable)
13 Examples of Approved AMT Responses
Locations: mural - gallery; lizard - desert
Nearby Objects: ambulance - stretcher; cauldron - fire
Part-Whole: scissors - blade; monument - granite
14 Semantic Relations
Defined a total of 34 relations
Focused on defining concrete, graphically depictable relationships
"Generic" relations accounted for most of the labeled pairs (e.g. containing.r, next-to.r)
Finer distinctions were made within these generic semantic relations (e.g. habitat.r and residence.r within the overarching containing.r relation)
15 Example Semantic Relations
Locations: mural - gallery - containing.r; lizard - desert - habitat.r
Nearby Objects: ambulance - stretcher - next-to.r; cauldron - fire - above.r
Part-Whole: scissors - blade - object-part.r; monument - granite - stuff-object.r
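For illustration, the labeled pairs above can be thought of as (target, response, relation) triples. The sketch below is an assumed in-memory representation for exposition, not the SBLR's actual storage format; the relation names come from the slides.

```python
# Sketch: one possible representation of the manually labeled AMT pairs.
from collections import Counter, namedtuple

LabeledPair = namedtuple("LabeledPair", ["target", "response", "relation"])

labeled_pairs = [
    LabeledPair("mural", "gallery", "containing.r"),
    LabeledPair("lizard", "desert", "habitat.r"),
    LabeledPair("ambulance", "stretcher", "next-to.r"),
    LabeledPair("cauldron", "fire", "above.r"),
    LabeledPair("scissors", "blade", "object-part.r"),
    LabeledPair("monument", "granite", "stuff-object.r"),
]

# Counting pairs per relation is how the frequency tables on the next slides arise.
relation_counts = Counter(pair.relation for pair in labeled_pairs)
print(relation_counts.most_common())
```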
16 Semantic Relations within Locations Task
We collected 6,850 locations for 342 target objects from our 3D library.

Relation                  Number of occurrences   Percentage of total scored pairs
containing.r              1194                    38.01%
habitat.r                 346                     11.02%
on-surface.r              333                     10.6%
geographical-location.r   306                     9.74%
group.r                   183                     5.83%
17 Semantic Relations within Nearby Objects Task
We collected 6,850 nearby objects for 342 target objects from our 3D library.

Relation        Number of occurrences   Percentage of total scored pairs
next-to.r       4988                    75.66%
on-surface.r    375                     5.69%
containing.r    293                     4.44%
habitat.r       243                     3.69%
object-part.r   153                     2.32%
18 Semantic Relations within Part-Whole Task
We collected 3,500 parts of 245 objects.

Relation         Number of occurrences   Percentage of total scored pairs
object-part.r    2675                    79.12%
stuff-object.r   552                     16.33%
containing.r     50                      1.48%
habitat.r        36                      1.06%
stuff-mass.r     17                      0.5%
19 Automatic Normalization Techniques
Collected AMT data was classified into higher-scoring versus lower-scoring sets by:
Log-likelihood and log-odds of sentential co-occurrences in the Gigaword English corpus
WordNet path similarity
Resnik similarity
WordNet average pair-wise similarity
WordNet matrix similarity
Accuracy evaluated by comparison against manually normalized data
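For the two WordNet-based measures named above, a minimal sketch with NLTK is given below. Taking the maximum over noun-synset pairs and using Brown-corpus information content are assumptions for illustration, not necessarily the project's settings.

```python
# Sketch: WordNet path similarity and Resnik similarity for a target-response pair.
# Requires: nltk.download('wordnet'); nltk.download('wordnet_ic')
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')  # information-content counts from the Brown corpus

def best_similarity(word1, word2, measure="path"):
    """Best similarity over all noun-synset pairs of the two words (0.0 if none)."""
    best = 0.0
    for s1 in wn.synsets(word1, pos=wn.NOUN):
        for s2 in wn.synsets(word2, pos=wn.NOUN):
            if measure == "path":
                score = s1.path_similarity(s2)
            else:  # Resnik similarity needs an information-content dictionary
                score = s1.res_similarity(s2, brown_ic)
            if score is not None and score > best:
                best = score
    return best

# Pairs scoring above some threshold go into the higher-scoring set.
print(best_similarity("lizard", "desert", "path"))
print(best_similarity("scissors", "blade", "resnik"))
```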
20 Precision & Recall
AMT data is quite cheap to collect, so we were concerned predominantly with precision (obtaining highly accurate data) rather than recall (avoiding loss of some data).
In order to achieve more accurate data (high precision), we will lose a portion of our AMT data (low recall).
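Concretely, precision and recall here compare the pairs an automatic technique keeps against the manually normalized (gold) pairs. The variable names in the sketch below are illustrative.

```python
# Sketch: precision and recall of an automatic filter against the manual gold set.
def precision_recall(kept_pairs, gold_pairs):
    """kept_pairs: target-response pairs the automatic technique accepted;
       gold_pairs: pairs approved during manual normalization."""
    kept, gold = set(kept_pairs), set(gold_pairs)
    true_positives = len(kept & gold)
    precision = true_positives / len(kept) if kept else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

kept = {("lizard", "desert"), ("turntable", "store")}
gold = {("lizard", "desert"), ("mural", "gallery")}
print(precision_recall(kept, gold))  # (0.5, 0.5)
```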
21 Locations Task
Achieved best precision with log-odds.
Within the high-scoring set, responses that were too general (e.g. turntable - store) were rejected.
Within the low-scoring set, extremely specific locations that were unlikely to occur within a corpus or WordNet's synsets were approved (e.g. caliper - architect's briefcase).

            Baseline   Log-likel.   Log-odds   WN Path Sim.   Resnik   WN Avg. PW   WN Matrix Sim.
Precision   0.5527     0.7502       0.7715     0.5462         0.5562   0.6014       0.4782
Recall      1.0        0.7945       0.6486     0.9649         0.9678   0.3454       1.0
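As a rough sketch of the corpus-based measure that did best here, the snippet below computes a smoothed log-odds ratio from sentence-level co-occurrence counts. The counts, the 0.5 smoothing, and the Gigaword preprocessing are assumptions for illustration, not the project's exact statistic.

```python
# Sketch: log-odds-ratio association from a 2x2 sentence co-occurrence table.
import math

def log_odds(n_both, n_target_only, n_response_only, n_neither):
    """Sentences containing both words, only the target, only the response, or neither.
       The +0.5 correction avoids division by zero and is an illustrative choice."""
    a, b, c, d = (x + 0.5 for x in (n_both, n_target_only, n_response_only, n_neither))
    return math.log((a * d) / (b * c))

# Hypothetical counts for the pair (lizard, desert) over a sentence collection.
print(log_odds(n_both=120, n_target_only=880, n_response_only=2400, n_neither=996600))
```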
22 Nearby Objects Task
Relatively few target-response pairs were discarded, resulting in high recall.
High precision was due to the open-ended nature of the task; responses often fell under some relation, even if not next-to.r.

            Baseline   Log-likel.   Log-odds   WN Path Sim.   Resnik   WN Avg. PW   WN Matrix Sim.
Precision   0.8934     0.8947       0.9048     0.9076         0.9085   0.9764       0.8795
Recall      1.0        1.0          0.8917     1.0            1.0      0.2659       1.0
23 Part-Whole Task
Rejected target-response pairs from the high-scoring set were often due to responses that named attributes, rather than parts, of the target item (e.g. croissant - flaky).
Approved pairs from the low-scoring set were mainly due to obvious, "common sense" responses that would usually be inferred, not explicitly stated (e.g. bunny - brain).

            Baseline   Log-likel.   Log-odds   WN Path Sim.   Resnik   WN Avg. PW   WN Matrix Sim.
Precision   0.7887     0.7832       0.8231     0.7963         0.7974   0.8823       0.8935
Recall      1.0        0.4129       0.4622     1.0            1.0      0.2621       0.2367
24 Future Automatic Normalization Techniques
Computing word association measures on much larger corpora (e.g. Google's 1 trillion word corpus)
WordNet synonyms and hypernyms
Latent Semantic Analysis to build word similarity matrices
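As a sketch of the last idea, the snippet below builds a small LSA space with scikit-learn and compares two word vectors by cosine similarity. The toy corpus, dimensionality, and matrix construction are placeholders rather than the planned setup.

```python
# Sketch: Latent Semantic Analysis for word-word similarity (toy example).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "the lizard basked in the desert sun",
    "a mural hung in the gallery",
    "the ambulance carried a stretcher",
    "scissors have a sharp blade",
]

vectorizer = CountVectorizer()
doc_term = vectorizer.fit_transform(corpus)          # documents x terms
term_doc = doc_term.T                                # terms x documents

svd = TruncatedSVD(n_components=2, random_state=0)   # low-rank "semantic" space
term_vectors = svd.fit_transform(term_doc)           # one dense vector per term

vocab = vectorizer.vocabulary_
similarity = cosine_similarity(term_vectors[vocab["lizard"]].reshape(1, -1),
                               term_vectors[vocab["desert"]].reshape(1, -1))
print(similarity[0][0])
```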
25 In Summary…
WordsEye & Scenario-Based Lexical Knowledge Resource (SBLR)
Amazon's Mechanical Turk & our tasks
Manual normalization of AMT data
Automatic normalization techniques used on AMT data and results
Possible future automatic normalization methods
26 Thanks to…
Richard Sproat
Masoud Rouhizadeh
All the CSLU interns
27 Questions?