Slide 1: Automating Readers’ Advisory to Make Book Recommendations for K-12 Readers
by Alicia Wood
Slide 2: Problem
- Existing book recommenders fail to offer adequate choices for K-12 readers
- It is important to put good reading material in the hands of K-12 students
- It is not easy to find the right books for the right audience
Slide 3: Who cares?
- Parents – only 32% of American 4th graders are proficient in reading
- Children
Slide 4: Previous Work
- Previous book recommenders extract features, opinions, and feature/opinion pairs
  – Techniques: bootstrapping, NLP, machine learning, extraction rules, latent semantic analysis, statistical analysis, and information retrieval
- Information extraction approaches have been applied to product reviews (e.g., Amazon)
- Limitations of prior work:
  – Require historical data
  – Require an ontology
  – Do not consider the readability level of users
Slide 5: Proposed Solution
- Rabbit – a multi-dimensional approach that requires no feedback from users
- ABET (Appeal-Based Extraction Tool)
Slide 6: Readers’ Advisory
- Offers materials of potential interest with “the help of knowledgeable and non-judgmental library staff”
- Based on:
  – Reasons behind preferences
  – Topical areas
  – Content descriptions
  – Appeal factors (pacing, description of characters, tone, etc.)
Slide 7: Appeal Factors & Terms
(Table of appeal factors and their associated appeal terms; not reproduced in this transcript.)
Slide 8: ABET
- Extracts appeal-term descriptions of books from available reviews
- It is imperative to properly associate appeal terms with appeal factors
  – so that factor–term pairs can be correctly extracted to generate an accurate appeal-term description for the book
Slide 9: Extraction Rules for ABET
(Table of ABET’s extraction rules; not reproduced in this transcript.)
Slide 10: Example
- SA = “The narrative of the book is dramatic”
  – Subject: narrative (an appeal factor); linked word: dramatic
- SB = “He creates believable characters”
  – Object: characters (an appeal factor); linked word: believable
- If the subject/object is an appeal factor, the word semantically linked to that subject/object is often an appeal term (Rules 1 and 2)
Slide 11: Example
- “The characters are not simple”
  – Rule 4 handles the negation
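The full rule set did not survive this transcript, but the two example slides suggest the shape of Rules 1, 2, and 4. Below is a minimal sketch, assuming spaCy's dependency parser and a small hypothetical appeal-factor lexicon; the paper's actual rules and lexicon are richer.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical lexicon: lemmas that name an appeal factor.
APPEAL_FACTORS = {"narrative": "pacing", "character": "characterization"}

def extract_pairs(sentence):
    """Return (appeal factor, appeal term) pairs from one review sentence."""
    pairs = []
    for token in nlp(sentence):
        # Rules 1 and 2 (sketch): a subject or object naming an appeal
        # factor takes its semantically linked word as the appeal term.
        if token.dep_ in ("nsubj", "dobj") and token.lemma_ in APPEAL_FACTORS:
            head = token.head
            # Term linked through a copula ("narrative ... is dramatic")
            # or directly modifying the factor ("believable characters").
            terms = [c for c in head.children if c.dep_ == "acomp"]
            terms += [c for c in token.children if c.dep_ == "amod"]
            for term in terms:
                # Rule 4 (sketch): mark negated terms ("not simple").
                negated = any(c.dep_ == "neg"
                              for c in list(term.children) + list(head.children))
                label = ("not " if negated else "") + term.lemma_
                pairs.append((APPEAL_FACTORS[token.lemma_], label))
    return pairs

for s in ("The narrative of the book is dramatic",
          "He creates believable characters",
          "The characters are not simple"):
    print(s, "->", extract_pairs(s))
```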
Slide 12: ABET
- Creates the appeal-term description for a book by applying the extraction rules to its reviews
- Frequency of occurrence indicates degree of significance
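A sketch of how the extracted pairs could be aggregated into a weighted description, assuming (per the slide) that frequency of occurrence stands in for degree of significance; the normalization is an illustrative choice, not necessarily the paper's.

```python
from collections import Counter

def appeal_term_description(reviews, extract_pairs):
    """Aggregate (factor, term) pairs across a book's reviews into weights."""
    counts = Counter(p for review in reviews for p in extract_pairs(review))
    total = sum(counts.values()) or 1
    # More frequent pairs get larger weights (degree of significance).
    return {pair: n / total for pair, n in counts.items()}

# Toy usage with a canned extractor.
fake = lambda r: [("tone", "dramatic")] if "dramatic" in r else []
print(appeal_term_description(["so dramatic", "dramatic again", "meh"], fake))
```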
Slide 13: Rabbit
1. Analyze the profile of a reader
2. Determine readability level
3. Select candidate books
4. Compute a ranking score
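A high-level, runnable sketch of these four steps; every helper and parameter here (grade, score, tolerance) is a hypothetical stand-in for the components detailed on the following slides, not the paper's actual interface.

```python
def rabbit_recommend(profile_books, repository, score, grade,
                     tolerance=1.0, top_k=10):
    # 1. Analyze the reader's profile: here, simply the books they have read.
    # 2. Determine readability level: average grade level of the profile.
    level = sum(grade(b) for b in profile_books) / len(profile_books)
    # 3. Select candidate books near that readability level.
    candidates = [b for b in repository if abs(grade(b) - level) <= tolerance]
    # 4. Compute a ranking score for each candidate and rank.
    ranked = sorted(candidates, key=lambda b: score(b, profile_books),
                    reverse=True)
    return ranked[:top_k]

# Toy usage with made-up grade levels and a trivial score.
grades = {"A": 4.0, "B": 5.0, "C": 8.0}
print(rabbit_recommend(["A"], ["B", "C"],
                       score=lambda b, p: grades[b], grade=grades.get))
```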
Slide 14: Candidate Books
- CB – a candidate book available in a book repository Rep
- PB – each book in R’s profile P
- |P| – number of books in R’s profile
- TRoLL(CB), TRoLL(PB) – grade level of CB/PB as determined by TRoLL
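The formula this notation supports did not survive the transcript. A plausible sketch, assuming the readability match is based on how far TRoLL(CB) sits from the average TRoLL grade level of the |P| profile books; the paper's actual measure may differ.

```python
def readability_match(troll_cb, troll_pbs, max_grade=12.0):
    """troll_cb: grade level of the candidate book CB;
    troll_pbs: grade levels of the |P| books in the reader's profile."""
    avg = sum(troll_pbs) / len(troll_pbs)         # mean grade level of P
    return 1.0 - abs(troll_cb - avg) / max_grade  # 1 = same level, 0 = far

print(readability_match(5.0, [4.0, 5.0, 6.0]))    # 1.0: exact match
```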
Slide 15: Topical Similarity Measure
- CB – vector of weights over subject headings, where a weight is set if the corresponding subject heading is assigned to CB
- P – vector of weights over subject headings, where the weight of heading Pi is the proportion of books in P that have been assigned Pi
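A minimal sketch of the topical similarity under the definitions above: a binary subject-heading vector for CB, proportion weights for the profile P, and cosine similarity between them; the paper's exact formula may differ.

```python
import math

def topical_similarity(cb_headings, profile_headings):
    """cb_headings: set of CB's subject headings;
    profile_headings: one set of headings per book in the profile P."""
    headings = sorted(cb_headings | {h for hs in profile_headings for h in hs})
    cb = [1.0 if h in cb_headings else 0.0 for h in headings]
    n = len(profile_headings)
    p = [sum(h in hs for hs in profile_headings) / n for h in headings]
    dot = sum(a * b for a, b in zip(cb, p))
    norm = math.sqrt(sum(a * a for a in cb)) * math.sqrt(sum(b * b for b in p))
    return dot / norm if norm else 0.0

# Toy usage: the candidate shares "adventure" with both profile books.
print(topical_similarity({"adventure", "fantasy"},
                         [{"adventure"}, {"adventure", "humor"}]))
```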
Slide 16: Content Similarity
- An enhanced version of cosine similarity
- CB = vector of weights Wcb1…Wcbn
- P = vector of weights Wp1…Wpn
- Wpi, Wcbi = weights of keyword i in P and CB, respectively
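The slide calls this an enhanced cosine similarity, but the enhancement itself is not spelled out in this transcript; the sketch below shows plain cosine over keyword weights to illustrate the shape of the measure.

```python
import math

def content_similarity(w_cb, w_p):
    """w_cb, w_p: dicts mapping keyword -> weight for CB and profile P."""
    keys = set(w_cb) | set(w_p)
    dot = sum(w_cb.get(k, 0.0) * w_p.get(k, 0.0) for k in keys)
    norm = math.sqrt(sum(v * v for v in w_cb.values())) * \
           math.sqrt(sum(v * v for v in w_p.values()))
    return dot / norm if norm else 0.0

print(content_similarity({"dragon": 0.7, "magic": 0.3},
                         {"dragon": 0.5, "school": 0.5}))
```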
Slide 17: Appeal Term Similarity
- F = set of appeal factors in the appeal-term descriptions
- CBf and Pf = n-dimensional vector representations of the appeal-term distribution for appeal factor f in CB and P, respectively
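A minimal sketch, assuming each appeal factor f in F contributes the cosine similarity between its term-distribution vectors CBf and Pf, averaged over F; the paper's aggregation may differ.

```python
import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def appeal_similarity(cb_desc, p_desc):
    """cb_desc, p_desc: dicts factor -> {term: weight} for CB and P."""
    factors = set(cb_desc) & set(p_desc)          # F: shared appeal factors
    if not factors:
        return 0.0
    return sum(cosine(cb_desc[f], p_desc[f]) for f in factors) / len(factors)

print(appeal_similarity({"tone": {"dramatic": 0.8, "dark": 0.2}},
                        {"tone": {"dramatic": 0.6, "funny": 0.4}}))
```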
Slide 18: Ranking Candidate Books
- Multiple linear regression combines the individual similarity scores into one ranking score
- Trained using the Ordinary Least Squares method on a training dataset (Tset)
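A minimal sketch of this step, assuming the regression combines the per-dimension scores (readability, topical, content, and appeal similarity) and is fit by Ordinary Least Squares via numpy's least-squares solver; the features and labels below are illustrative, not from Tset.

```python
import numpy as np

# Illustrative training rows: [readability, topical, content, appeal]
# similarity scores for books whose relevance (y) is known.
X = np.array([[0.9, 0.6, 0.7, 0.5],
              [0.4, 0.2, 0.3, 0.1],
              [0.8, 0.7, 0.6, 0.6],
              [0.3, 0.1, 0.2, 0.2]])
y = np.array([1.0, 0.0, 1.0, 0.0])

X1 = np.hstack([X, np.ones((len(X), 1))])      # append an intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # Ordinary Least Squares fit

def ranking_score(features):
    """Score a candidate book from its four similarity values."""
    return float(np.append(features, 1.0) @ coef)

print(ranking_score([0.85, 0.65, 0.70, 0.55]))
```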
Slide 19: Experimental Results
1. Compute precision and recall of the appeal factor–appeal term pairs extracted from book reviews
2. Analyze the correctness of the appeal-term descriptions created by ABET
3. Compare the appeal-term descriptions generated by ABET with those extracted from Novelist
Slide 20: Experimental Results – 1
- 100 books manually annotated and compared with ABET’s output
- Precision: 0.85, Recall: 0.82, F-measure: 0.83
- High accuracy for ABET in generating appeal factor–appeal term pairs
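As a quick consistency check, the F-measure reported above is the harmonic mean of precision and recall:

\[
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.85 \times 0.82}{0.85 + 0.82} \approx 0.83
\]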
Slide 21: Experimental Results – 2
- Surveys used to assess the correctness of ABET’s descriptions
- Overall accuracy: 94%
Slide 22: Experimental Results – 3
- Surveys used to compare the two sources of descriptions
- The appeal-term descriptions generated by ABET were favored over those from Novelist
Slide 23: Validation
- Conducted empirical studies
- Assessed the performance of Rabbit on Eset in terms of Normalized Discounted Cumulative Gain (nDCG)
- Rabbit locates more relevant books and outperforms GoodReads and Novelist
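For reference, a minimal implementation of the nDCG metric cited above; the graded relevance judgments from Eset are not reproduced here, so the usage line is a toy example.

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0

# Toy usage: a ranking that places a highly relevant book second.
print(ndcg([1, 3, 0, 2]))
```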