1
WEB BAR 2004 Advanced Retrieval and Web Mining Lecture 14
2
Today’s Topics Latent Semantic Indexing / Dimension reduction Interactive information retrieval / User interfaces Evaluation of interactive retrieval
3
How LSI is used for Text Search: LSI is a technique for dimension reduction, similar to Principal Component Analysis (PCA). It addresses (near-)synonymy (car/automobile) and attempts to enable concept-based retrieval. Pre-process docs using a technique from linear algebra called Singular Value Decomposition, then reduce dimensionality: fewer dimensions mean more "collapsing of axes", better recall, worse precision; more dimensions mean less collapsing, worse recall, better precision. Queries are handled in this new (reduced) vector space.
4
Input: Term-Document Matrix. A is an m × n matrix of terms × documents, with w_{i,j} = (normalized) weighted count of term t_i in document d_j. Key idea: factorize this matrix.
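A minimal sketch of how such a matrix might be built, using scikit-learn's TfidfVectorizer (not part of the lecture, just an illustration); the toy documents are invented, and scikit-learn returns a documents × terms matrix, so we transpose to get the m × n term-document matrix A.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus, invented for illustration.
docs = [
    "car insurance for a new car",
    "automobile insurance quotes",
    "best car repair shop",
    "truck and automobile repair",
]

vectorizer = TfidfVectorizer()
# scikit-learn gives a documents x terms matrix; transpose to get
# the m x n term-document matrix A (terms as rows, docs as columns).
A = vectorizer.fit_transform(docs).T.toarray()

print(A.shape)                               # (m terms, n docs)
print(vectorizer.get_feature_names_out())    # the terms t_i, one per row of A
```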
5
Matrix Factorization: A = W × H, where A is m × n, W is m × k (the basis), and H is k × n (the representation). h_j is the representation of d_j in terms of the basis W. If rank(W) ≥ rank(A) then we can always find H so that A = WH. Notice the duality of the problem. More "semantic" dimensions -> LSI (latent semantic indexing).
6
Minimization Problem: minimize information loss, i.e. the difference between A and its factorization, given a norm (for SVD, the 2-norm) and constraints on W, S, V (for SVD: W and V are orthonormal, S is diagonal).
7
Matrix Factorizations: SVD. A = W × S × V^T, where W (m × k) is the basis, S (k × k) contains the singular values, and V^T (k × n) is the representation. Restrictions on the factorization: W, V orthonormal; S diagonal.
8
Dimension Reduction: For some s << rank(A), zero out all but the s biggest singular values in S. Denote by S_s this new version of S. Typically s is in the hundreds, while r (the rank) could be in the (tens of) thousands. Before: A = W S V^t. Let A_s = W S_s V^t = W_s S_s V_s^t. A_s is a good approximation to A: the best rank-s approximation according to the 2-norm.
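A small numpy sketch of the rank-s truncation described above; the random matrix is just a stand-in for a real term-document matrix. It also checks the 2-norm claim: the approximation error equals the (s+1)-th singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 5))                # stand-in for a real m x n term-doc matrix

# Full SVD: A = W S V^t (numpy returns the singular values as a vector).
W, sing_vals, Vt = np.linalg.svd(A, full_matrices=False)

s = 2                                 # keep only the s largest singular values
A_s = W[:, :s] @ np.diag(sing_vals[:s]) @ Vt[:s, :]    # A_s = W_s S_s V_s^t

# Best rank-s approximation in the 2-norm: the error is the (s+1)-th singular value.
print(np.linalg.norm(A - A_s, 2))     # ~ sing_vals[s]
print(sing_vals[s])
```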
9
Dimension Reduction (diagram): A_s = W × S_s × V^T, where S_s keeps only the s largest singular values and zeros out the rest. The columns of A_s represent the docs, but in s << m dimensions; A_s is the best rank-s approximation according to the 2-norm.
10
More on W and V: Recall the m × n term-document matrix A. Define the term-term correlation matrix T = AA^t, where A^t denotes the transpose of A; T is a square, symmetric m × m matrix. The doc-doc correlation matrix is D = A^t A, a square, symmetric n × n matrix. Why?
11
Eigenvectors: Denote by W the m × r matrix of eigenvectors of T, and by V the n × r matrix of eigenvectors of D. Denote by S the diagonal matrix of the square roots of the eigenvalues of T = AA^t, in sorted order. It turns out that A = WSV^t is the SVD of A. Semi-precise intuition: the new dimensions are the principal components of term correlation space.
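A quick numerical sanity check of this relationship, using numpy on an invented matrix: the singular values of A are the square roots of the eigenvalues of T = AA^t.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((6, 4))                           # invented m x n matrix

sing_vals = np.linalg.svd(A, compute_uv=False)   # singular values, descending

# Eigenvalues of T = A A^t (eigvalsh returns them in ascending order);
# their square roots, sorted descending, match the singular values of A.
eigvals = np.linalg.eigvalsh(A @ A.T)
sqrt_eigs = np.sqrt(np.clip(eigvals, 0.0, None))[::-1][: len(sing_vals)]

print(np.allclose(sing_vals, sqrt_eigs))         # True
```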
12
Query processing Exercise: How do you map the query into the reduced space?
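One standard answer, shown as a hedged numpy sketch: fold the query into the reduced space with q_s = S_s^{-1} W_s^T q, then rank documents by cosine similarity to their reduced representations (the rows of V_s). The small matrix and query below are invented for illustration.

```python
import numpy as np

def fold_in_query(q, W, sing_vals, s):
    """Map a term-space query vector into the s-dimensional LSI space:
    q_s = S_s^{-1} W_s^T q."""
    return (W[:, :s].T @ q) / sing_vals[:s]

# Tiny runnable demo on an invented 6-term x 4-doc matrix.
rng = np.random.default_rng(2)
A = rng.random((6, 4))
W, sing_vals, Vt = np.linalg.svd(A, full_matrices=False)

s = 2
q = np.zeros(6)
q[0] = q[3] = 1.0                                # query containing terms 0 and 3
q_s = fold_in_query(q, W, sing_vals, s)

doc_reps = Vt[:s, :].T                           # row j = document d_j in the reduced space
scores = (doc_reps @ q_s) / (
    np.linalg.norm(doc_reps, axis=1) * np.linalg.norm(q_s) + 1e-12)
print(np.argsort(-scores))                       # documents ranked by cosine similarity
```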
13
Take Away: LSI is optimal: the optimal solution for a given dimensionality. Caveat: mathematically optimal is not necessarily "semantically" optimal. LSI is unique, except for signs and for singular values with the same value. Key benefits of LSI: enhances recall and addresses the synonymy problem, but can decrease precision. Maintenance challenges: changing collections; recompute at intervals? Performance challenges. Cheaper alternatives for recall enhancement exist, e.g. pseudo-feedback. Use of LSI in deployed systems? Why?
14
Resources: LSI Random projection theorem: http://citeseer.nj.nec.com/dasgupta99elementary.html Faster random projection: http://citeseer.nj.nec.com/frieze98fast.html Latent semantic indexing: http://citeseer.nj.nec.com/deerwester90indexing.html http://cs276a.stanford.edu/handouts/fsnlp-svd.pdf Books: FSNLP 15.4, MG 4.6, MIR 2.7.2.
15
Interactive Information Retrieval User Interfaces
16
The User in Information Access (flow diagram): the user has an information need, finds a starting point, formulates or reformulates a query, sends it to the system, receives results, explores the results, and asks "Done?": if no, reformulate; if yes, stop.
17
Main Focus of Information Retrieval (the same flow diagram, annotated): the query-to-results step (send query to system, receive results) is the focus of most IR!
18
Information Access in Context (flow diagram): starting from a high-level goal, the user engages in information access, analyzes and synthesizes what was found, and asks "Done?": if no, continue; if yes, stop.
19
The User in Information Access (the flow diagram repeated as a signpost for the next stage).
20
Queries on the Web Most Frequent on 2002/10/26
21
Queries on the Web (2000) Why only 9% sex?
22
Intranet Queries (Aug 2000) 3351 bearfacts 3349 telebears 1909 extension 1874 schedule+of+classes 1780 bearlink 1737 bear+facts 1468 decal 1443 infobears 1227 calendar 989 career+center 974 campus+map 920 academic+calendar 840 map 773 bookstore 741 class+pass 738 housing 721 tele-bears 716 directory 667 schedule 627 recipes 602 transcripts 582 tuition 577 seti 563 registrar 550 info+bears 543 class+schedule 470 financial+aid Source: Ray Larson
23
Intranet Queries Summary of sample data from 3 weeks of UCB queries 13.2% Telebears/BearFacts/InfoBears/BearLink (12297) 6.7% Schedule of classes or final exams (6222) 5.4% Summer Session (5041) 3.2% Extension (2932) 3.1% Academic Calendar (2846) 2.4% Directories (2202) 1.7% Career Center (1588) 1.7% Housing (1583) 1.5% Map (1393) Source: Ray Larson
24
Types of Information Needs Need answer to question (who won the superbowl?) Re-find a particular document Find a good recipe for tonight’s dinner Exploration of new area (browse sites about Mexico City) Authoritative summary of information (HIV review) In most cases, only one interface! Cell phone / pda / camera / mp3 analogy
25
The User in Information Access (the flow diagram repeated as a signpost: next, finding a starting point).
26
Find Starting Point by Browsing (diagram: from an entry point, the user follows links through the collection to a starting point for search, or perhaps directly to the answer).
27
Hierarchical browsing (diagram of a category hierarchy with levels 0, 1, and 2).
29
Visual Browsing: Hyperbolic Tree
31
Visual Browsing: Themescape
32
Scatter/Gather: Scatter/Gather allows the user to find a set of documents of interest through browsing. It iterates: Scatter: take the collection and scatter it into n clusters. Gather: pick the clusters of interest and merge them.
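A rough sketch of one Scatter/Gather iteration, assuming TF-IDF vectors and k-means as the clustering step (the original system used faster, purpose-built clustering algorithms); the function names and workflow comments are illustrative only.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def scatter(doc_texts, n_clusters=5, seed=0):
    """Scatter: cluster the current document set into n_clusters groups."""
    X = TfidfVectorizer().fit_transform(doc_texts)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X)
    return [[d for d, lab in zip(doc_texts, labels) if lab == c] for c in range(n_clusters)]

def gather(clusters, chosen):
    """Gather: merge the clusters the user picked into one working set."""
    return [d for c in chosen for d in clusters[c]]

# One iteration; the user would look at cluster summaries and pick, say, clusters 0 and 2:
# clusters = scatter(collection, n_clusters=5)
# working_set = gather(clusters, chosen=[0, 2])
# clusters = scatter(working_set, n_clusters=5)   # repeat until the set is small enough
```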
33
Scatter/Gather
34
Browsing vs. Searching Browsing and searching are often interleaved. Information need dependent Open-ended (find information about mexico city) -> browsing Specific (who won the superbowl) -> searching User dependent Some users prefer searching, others browsing (confirmed in many studies: some hate to type) Advantage of browsing: You don’t need to know the vocabulary of the collection Compare to physical world Browsing vs. searching in a grocery store
35
Browsers vs. Searchers: 1/3 of users do not search at all; 1/3 rarely search, or type URLs only; only 1/3 understand the concept of search (ISP data from 2000). Why?
36
Starting Points Methods for finding a starting point Select collections from a list Highwire press Google! Hierarchical browsing, directories Visual browsing Hyperbolic tree Themescape, Kohonen maps Browsing vs searching
37
The User in Information Access (the flow diagram repeated as a signpost: next, formulating the query).
38
Form-based Query Specification (Infoseek) Credit: Marti Hearst
39
Boolean Queries: Boolean logic is difficult for the average user. Some interfaces for average users support the formulation of boolean queries. The current view is that non-expert users are best served with non-boolean or simple +/- boolean queries (pioneered by AltaVista). But boolean queries remain the standard for certain groups of expert users (e.g., lawyers).
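A toy sketch of the simple +/- boolean style mentioned above: "+term" is required, "-term" is excluded, and bare terms only affect ranking. The document collection and the scoring rule are invented for illustration.

```python
def simple_plus_minus_search(query_terms, docs):
    """Filter docs by a simple +/- query: '+term' must appear, '-term' must not,
    bare terms are optional and only affect ranking (counted as matches)."""
    required = {t[1:] for t in query_terms if t.startswith("+")}
    excluded = {t[1:] for t in query_terms if t.startswith("-")}
    optional = {t for t in query_terms if t[0] not in "+-"}

    results = []
    for doc_id, text in docs.items():
        words = set(text.lower().split())
        if required <= words and not (excluded & words):
            results.append((len(optional & words), doc_id))
    return [doc_id for _, doc_id in sorted(results, reverse=True)]

# Invented toy collection for illustration:
docs = {1: "jaguar car dealer", 2: "jaguar habitat in the rainforest", 3: "used car prices"}
print(simple_plus_minus_search(["+jaguar", "-rainforest", "car"], docs))   # -> [1]
```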
40
Direct Manipulation Spec. VQUERY (Jones 98) Credit: Marti Hearst
41
One Problem With Boolean Queries: Feast or Famine. Specifying a well-targeted query is hard, and it is a bigger problem for Boolean queries. Google example: 1860 hits for "standard user dlink 650", but 0 hits after adding "no card found". How general is the query?
42
Boolean Queries Summary: Complex boolean queries are difficult for the average user. Feast-or-famine problem. Prior to Google, many IR researchers thought boolean queries were a bad idea. Google queries are strict conjunctions. Why does this work well?
43
Parametric search example: notice that the output is a (large) table. Various parameters in the table (column headings) may be clicked on to effect a sort.
44
We can add text search.
45
Parametric search Each document has, in addition to text, some “meta-data” e.g., Make, Model, City, Color A parametric search interface allows the user to combine a full-text query with selections on these parameters
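A minimal sketch of combining a full-text query with selections on metadata; the record fields (make, model, city, color) follow the example above, but the data and the simple overlap-count scoring are invented.

```python
def parametric_search(docs, text_query, **filters):
    """Combine a full-text query with exact-match selections on metadata fields.
    A minimal sketch: a real system would use an index for both parts."""
    terms = set(text_query.lower().split())
    hits = []
    for doc in docs:
        if all(doc.get(field) == value for field, value in filters.items()):
            score = len(terms & set(doc["text"].lower().split()))
            if score:
                hits.append((score, doc))
    return [doc for _, doc in sorted(hits, key=lambda h: -h[0])]

# Invented car-listing records for illustration:
docs = [
    {"make": "Honda", "model": "Civic", "city": "Berkeley", "color": "red",
     "text": "low mileage, one owner, sunroof"},
    {"make": "Honda", "model": "Civic", "city": "Oakland", "color": "blue",
     "text": "sunroof and new tires"},
]
print(parametric_search(docs, "sunroof low mileage", make="Honda", city="Berkeley"))
```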
46
Interfaces for term browsing
48
Re/Formulate Query Single text box (google, stanford intranet) Command-based (socrates) Boolean queries Parametric search Term browsing Other methods Relevance feedback Query expansion Spelling correction Natural language, question answering
49
The User in Information Access (the flow diagram repeated as a signpost: next, exploring results).
50
Category Labels to Support Exploration. Example: ODP categories on Google. Advantages: interpretable; capture summary information; describe multiple facets of content; domain dependent, and so descriptive. Disadvantages: domain dependent, so costly to acquire; may mismatch users' interests. Credit: Marti Hearst
51
Evaluate Results Context in Hierarchy: Cat-a-Cone
52
Summarization to Support Exploration Query-dependent summarization KWIC (keyword in context) lines (a la google) Query-independent summarization Summary written by author (if available) Automatically generated summary.
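A rough sketch of query-dependent, KWIC-style snippet generation: show each occurrence of a query term with a few words of context on either side. The window size and whitespace tokenization are simplistic, invented choices.

```python
def kwic_snippet(text, query_terms, window=3):
    """Query-dependent summary: show each query-term occurrence with `window`
    words of context on either side (a rough keyword-in-context sketch)."""
    terms = {t.lower() for t in query_terms}
    words = text.split()
    lines = []
    for i, w in enumerate(words):
        if w.lower().strip(".,") in terms:
            left = max(0, i - window)
            lines.append("... " + " ".join(words[left:i + window + 1]) + " ...")
    return lines

text = ("Latent semantic indexing reduces the dimensionality of the term-document "
        "matrix and can improve recall for synonymous query terms.")
print(kwic_snippet(text, ["recall", "dimensionality"]))
```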
53
Visualize Document Structure for Exploration
54
Result Exploration User Goal: Do these results answer my question? Methods Category labels Summarization Visualization of document structure Other methods Metadata: URL, date, file size, author Hypertext navigation: Can I find the answer by following a link? Browsing in general Clustering of results (jaguar example)
55
Exercise Current information retrieval user interfaces are designed for typical computer screens. How would you design a user interface for a wall-sized screen? Observe your own information seeking behavior Examples WWW University library Grocery store Are you a searcher or a browser? How do you reformulate your query? Read bad hits, then minus terms Read good hits, then plus terms Try a completely different query …
56
Take Away (the flow diagram once more): the query-to-results step is the focus of most IR, but it is only one stage of the user's full information access loop.
57
Evaluation of Interactive Retrieval
58
Recap: Relevance Feedback User sends query Search system returns results User marks some results as relevant and resubmits query plus relevant results Search system now has better description of the information need and returns more relevant results. One method: Rocchio algorithm
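A short sketch of the Rocchio update mentioned above, with one common choice of weights (alpha=1.0, beta=0.75, gamma=0.15): move the query toward the centroid of the relevant documents and away from the centroid of the non-relevant ones. The toy vectors are invented for illustration.

```python
import numpy as np

def rocchio(query_vec, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio update: q' = alpha*q + beta*mean(relevant) - gamma*mean(nonrelevant)."""
    q = alpha * query_vec
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0, None)          # negative term weights are usually dropped

# Toy 4-term vectors, invented for illustration:
q0 = np.array([1.0, 0.0, 0.0, 0.0])
rel = np.array([[1.0, 1.0, 0.0, 0.0], [1.0, 0.5, 0.0, 0.0]])
nonrel = np.array([[0.0, 0.0, 1.0, 1.0]])
print(rocchio(q0, rel, nonrel))
```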
59
Why Evaluate Relevance Feedback? Simulated interactive retrieval consistently outperforms non-interactive retrieval (70% here).
60
Relevance Feedback Evaluation Case Study Example of evaluation of interactive information retrieval Koenemann & Belkin 1996 Goal of study: show that relevance feedback improves retrieval effectiveness
61
Details on the User Study: 64 novice searchers (43 female, 21 male, native English speakers). TREC test bed: Wall Street Journal subset. Two search topics: Automobile Recalls; Tobacco Advertising and the Young. Relevance judgments came from TREC and the experimenter. The system was INQUERY (vector space with some bells and whistles). Subjects had a tutorial session to learn the system. Their goal was to keep modifying the query until they had developed one that achieved high precision. The reweighting of terms was similar to, but different from, Rocchio.
62
Credit: Marti Hearst
63
Evaluation Criterion: p@30 (precision at 30 documents) Compare: p@30 for users with relevance feedback p@30 for users without relevance feedback Goal: show that users with relevance feedback do better
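A tiny sketch of the p@k computation being compared here; the ranked list and relevance judgments are invented.

```python
def precision_at_k(ranked_doc_ids, relevant_ids, k=30):
    """p@k: fraction of the top k returned documents that are relevant."""
    top_k = ranked_doc_ids[:k]
    return sum(1 for d in top_k if d in relevant_ids) / k

# Hypothetical example: 12 of the first 30 results are relevant -> 0.4
ranked = list(range(100))
relevant = set(range(0, 24, 2))            # docs 0, 2, ..., 22 judged relevant
print(precision_at_k(ranked, relevant, k=30))
```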
64
Precision vs. RF condition (from Koenemann & Belkin 96) Credit: Marti Hearst
65
Result Subjects with relevance feedback had, on average, 17-34% better performance than subjects without relevance feedback. Does this show conclusively that relevance feedback is better?
66
But … Difference in precision numbers not statistically significant. Search times approximately equal.
67
Take Away Evaluating interactive systems is harder than evaluating algorithms. Experiments involving humans have many confounding variables: Age Level of education Prior experience with search Search style (browsing vs searching) Mac vs linux vs MS user Mood, level of alertness, chemistry with experimenter etc. Showing statistical significance becomes harder as the number of confounding variables increases. Also: human subject studies are resource-intensive It’s hard to “scientifically prove” the superiority of relevance feedback.
68
Other Evaluation Issues Query variability Always compare methods on query-by-query basis Methods with the same average performance can differ a lot in user friendliness Inter-judge variability In general, judges disagree often Big impact on relevance assessment of a single document Little impact on ranking of systems Redundancy A highly relevant document with no new information is useless Most IR measures don’t measure redundancy
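A small sketch of why the query-by-query comparison in the first bullet matters: two systems with identical average p@30 can behave very differently per query. The numbers are invented.

```python
def per_query_comparison(scores_a, scores_b):
    """Compare two systems query by query (e.g., on p@30 per query)
    instead of only comparing their averages."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    ties = len(scores_a) - wins - losses
    return wins, losses, ties

# Hypothetical per-query p@30 values: identical averages, very different behaviour.
a = [0.9, 0.1, 0.9, 0.1, 0.9, 0.1]
b = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
print(sum(a) / len(a), sum(b) / len(b))       # same mean (0.5)
print(per_query_comparison(a, b))             # (3, 3, 0): A helps some queries, hurts others
```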
69
Resources: FOA 4.3; MIR Ch. 10.8-10.10. Ellen Voorhees, Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, ACM SIGIR 1998. Harman, D.K., Overview of the Third Text REtrieval Conference (TREC-3). In: Overview of the Third Text REtrieval Conference (TREC-3), Harman, D.K. (Ed.), NIST Special Publication 500-225, 1995, pp. 1-19. Marti A. Hearst, Jan O. Pedersen, Reexamining the Cluster Hypothesis: Scatter/Gather on Retrieval Results, Proceedings of SIGIR-96, 1996. Paul Over, TREC-6 Interactive Track Report, NIST, 1998.
70
Resources: MIR Ch. 10.0-10.7. Donna Harman, Overview of the Fourth Text REtrieval Conference (TREC-4), National Institute of Standards and Technology. Cutting, Karger, Pedersen, Tukey, Scatter/Gather, ACM SIGIR. http://citeseer.nj.nec.com/cutting92scattergather.html Hearst, Cat-a-Cone: An Interactive Interface for Specifying Searches and Viewing Retrieval Results Using a Large Category Hierarchy, ACM SIGIR. http://www.acm.org/sigchi/chi96/proceedings/papers/Koenemann/jk1_txt.htm http://otal.umd.edu/olive