SLIDE 1 IS 202 – FALL 2002
SIMS 202: Information Organization and Retrieval
Lecture 18: Vector Representation
Prof. Ray Larson & Prof. Marc Davis, UC Berkeley SIMS
Tuesday and Thursday 10:30 am - 12:00 pm, Fall 2002

SLIDE 2 IS 202 – FALL 2002
Lecture Overview
Review:
– Content Analysis
– Statistical Properties of Text (Zipf Distribution; Statistical Dependence)
– Indexing and Inverted Files
Vector Representation
Term Weights
Vector Matching
Clustering
Credit for some of the slides in this lecture goes to Marti Hearst

SLIDE 3 IS 202 – FALL 2002
Lecture Overview
Review:
– Content Analysis
– Statistical Properties of Text (Zipf Distribution; Statistical Dependence)
– Indexing and Inverted Files
Vector Representation
Term Weights
Vector Matching
Clustering
Credit for some of the slides in this lecture goes to Marti Hearst

SLIDE 4 IS 202 – FALL 2002
Techniques for Content Analysis
Statistical: Single Document; Full Collection
Linguistic: Syntactic; Semantic; Pragmatic
Knowledge-Based (Artificial Intelligence)
Hybrid (Combinations)

SLIDE 5 IS 202 – FALL 2002
Content Analysis Areas
[Diagram of the retrieval pipeline: Text Input, Parse, Pre-Process, Index, Collections, Rank, Query, Information Need]
How is the text processed? How is the query constructed?

SLIDE 6 Document Processing Steps From “Modern IR” Textbook

SLIDE 7IS 202 – FALL 2002 Errors Generated by Porter Stemmer From Krovetz ‘93

SLIDE 8 IS 202 – FALL 2002
A Small Collection (Stems)
Rank / Freq / Term:
1 / 37 / system
2 / 32 / knowledg
3 / 24 / base
4 / 20 / problem
5 / 18 / abstract
6 / 15 / model
7 / 15 / languag
8 / 15 / implem
9 / 13 / reason
10–18 / (frequencies not recovered) / inform, expert, analysi, rule, program, oper, evalu, comput, case
19 / 9 / gener
20 / 9 / form
Lower-frequency stems: enhanc, energi, emphasi, detect, desir, date, critic, content, consider, concern, compon, compar, commerci, clause, aspect, area, aim, affect

SLIDE 9 IS 202 – FALL 2002
The Corresponding Zipf Curve
[Plot of frequency against rank for the top 20 stems listed on the previous slide]

SLIDE 10 IS 202 – FALL 2002
Zipf Distribution
The Important Points:
– A few elements occur very frequently
– A medium number of elements have medium frequency
– Many elements occur very infrequently
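A quick way to see this empirically is to count word frequencies in any sizable text and check that rank times frequency stays roughly constant. A minimal sketch (the corpus file name is a placeholder):

```python
from collections import Counter
import re

# Count word frequencies in an arbitrary text file (path is a placeholder).
with open("corpus.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z]+", f.read().lower())

freqs = Counter(words)
ranked = freqs.most_common()

# Under Zipf's law, rank * frequency is roughly constant.
for rank, (word, freq) in enumerate(ranked[:20], start=1):
    print(f"{rank:>4} {word:<15} freq={freq:<6} rank*freq={rank * freq}")
```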

SLIDE 11
Zipf Distribution
[Rank-frequency plots shown on a linear scale and on a logarithmic scale]

SLIDE 12 IS 202 – FALL 2002
Related Distributions/"Laws"
Bradford's Law of Scattering
Lotka's Law of Productivity
De Solla Price's Urn Model for "Cumulative Advantage Processes"
[Urn-model illustration: Pick, Replace +1; ½ = 50%, 2/3 ≈ 66%, ¾ = 75%]

SLIDE 13 IS 202 – FALL 2002
Frequent Words on the WWW
the, a, to, of, and, in, s, for, on, this, is, by, with, or, at, all, are, from, e, you, be, that, not, an, as, home, it, i, have, if, new, t, your, page, about, com, information, will, can, more, has, no, other, one, c, d, m, was, copyright, us (see

SLIDE 14IS 202 – FALL 2002 Word Frequency vs. Resolving Power The most frequent words are not the most descriptive (from van Rijsbergen 79)

SLIDE 15IS 202 – FALL 2002 Statistical Independence Two events x and y are statistically independent if the product of the probabilities of their happening individually equals the probability of their happening together

SLIDE 16 IS 202 – FALL 2002
Lexical Associations
Subjects write the first word that comes to mind – doctor/nurse; black/white (Palermo & Jenkins 64)
Text corpora can yield similar associations
One measure: Mutual Information (Church and Hanks 89): I(x, y) = log2 [ P(x, y) / (P(x) P(y)) ]
If word occurrences were independent, the numerator and denominator would be equal (if measured across a large collection)
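For concreteness, a minimal sketch of that measure computed from raw counts; the counts below are invented for illustration and this is not the exact Church & Hanks windowing procedure:

```python
import math

def mutual_information(count_xy, count_x, count_y, n_tokens):
    """Pointwise mutual information I(x, y) = log2( P(x, y) / (P(x) P(y)) ).

    count_xy : number of times x and y co-occur (e.g., within a window)
    count_x, count_y : individual occurrence counts
    n_tokens : total number of tokens in the corpus
    """
    p_xy = count_xy / n_tokens
    p_x = count_x / n_tokens
    p_y = count_y / n_tokens
    return math.log2(p_xy / (p_x * p_y))

# Illustrative numbers only: a strongly associated pair scores well above 0,
# while a pair that co-occurs only by chance scores near 0.
print(mutual_information(count_xy=150, count_x=2_000, count_y=3_000,
                         n_tokens=15_000_000))
```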

SLIDE 17IS 202 – FALL 2002 Interesting Associations with “Doctor” AP Corpus, N=15 million, Church & Hanks 89

SLIDE 18 IS 202 – FALL 2002
Un-Interesting Associations with "Doctor" (AP Corpus, N = 15 million, Church & Hanks 89)
These associations were likely to happen because the non-doctor words shown here are very common and therefore likely to co-occur with any noun

SLIDE 19 IS 202 – FALL 2002
Content Analysis Summary
Content Analysis: transforming raw text into more computationally useful forms
Words in text collections exhibit interesting statistical properties:
– Word frequencies have a Zipf distribution
– Word co-occurrences exhibit dependencies
Text documents are transformed to vectors:
– Pre-processing includes tokenization, stemming, collocations/phrases
– Documents occupy multi-dimensional space

SLIDE 20IS 202 – FALL 2002 Inverted Indexes We have seen “Vector files” conceptually –An Inverted File is a vector file “inverted” so that rows become columns and columns become rows

SLIDE 21IS 202 – FALL 2002 How Inverted Files are Created Dictionary Postings

SLIDE 22 IS 202 – FALL 2002
Inverted Indexes
Permit fast search for individual terms
For each term, you get a list consisting of:
– Document ID
– Frequency of term in doc (optional)
– Position of term in doc (optional)
These lists can be used to solve Boolean queries:
country -> d1, d2
manor -> d2
country AND manor -> d2
Also used for statistical ranking algorithms

SLIDE 23 IS 202 – FALL 2002
How Inverted Files are Used
[Dictionary and postings diagram]
Query on "time" AND "dark":
– 2 docs with "time" in the dictionary -> IDs 1 and 2 from the postings file
– 1 doc with "dark" in the dictionary -> ID 2 from the postings file
Therefore, only doc 2 satisfies the query
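A toy sketch of how such an index supports a Boolean AND; the two documents and their wording are made up for illustration:

```python
from collections import defaultdict

# Hypothetical two-document collection.
docs = {
    1: "it was a bright and quiet time in the country",
    2: "a dark time fell over the old country manor",
}

# Build the inverted index: term -> set of document IDs.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def boolean_and(*terms):
    """Intersect the postings lists of all query terms."""
    postings = [index.get(t, set()) for t in terms]
    return set.intersection(*postings) if postings else set()

print(boolean_and("time", "dark"))       # -> {2}
print(boolean_and("country", "manor"))   # -> {2}
```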

SLIDE 24 IS 202 – FALL 2002
Lecture Overview
Review:
– Content Analysis
– Statistical Properties of Text (Zipf Distribution; Statistical Dependence)
– Indexing and Inverted Files
Vector Representation
Term Weights
Vector Matching
Clustering
Credit for some of the slides in this lecture goes to Marti Hearst

SLIDE 25 IS 202 – FALL 2002
Document Vectors
Documents are represented as "bags of words"
Represented as vectors when used computationally:
– A vector is like an array of floating-point numbers
– Has direction and magnitude
– Each vector holds a place for every term in the collection
– Therefore, most vectors are sparse

SLIDE 26 IS 202 – FALL 2002
Vector Space Model
Documents are represented as vectors in term space:
– Terms are usually stems
– Documents represented by binary or weighted vectors of terms
Queries represented the same as documents
Query and document weights are based on length and direction of their vector
A vector distance measure between the query and documents is used to rank retrieved documents

SLIDE 27IS 202 – FALL 2002 Vector Representation Documents and Queries are represented as vectors Position 1 corresponds to term 1, position 2 to term 2, position t to term t The weight of the term is stored in each position

SLIDE 28IS 202 – FALL 2002 Document Vectors “Nova” occurs 10 times in text A “Galaxy” occurs 5 times in text A “Heat” occurs 3 times in text A (Blank means 0 occurrences.)

SLIDE 29IS 202 – FALL 2002 Document Vectors “Hollywood” occurs 7 times in text I “Film” occurs 5 times in text I “Diet” occurs 1 time in text I “Fur” occurs 3 times in text I
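The two examples above can be turned into vectors over a shared vocabulary. A minimal sketch using the counts given on these slides (any term not mentioned is assumed to be 0):

```python
# Term counts taken from the two example slides; every other term is 0.
text_a = {"nova": 10, "galaxy": 5, "heat": 3}
text_i = {"hollywood": 7, "film": 5, "diet": 1, "fur": 3}

# Fixed term ordering for the whole (tiny) collection.
vocabulary = sorted(set(text_a) | set(text_i))

def to_vector(term_counts):
    """Return a dense vector with one position per vocabulary term."""
    return [term_counts.get(term, 0) for term in vocabulary]

print(vocabulary)
print("A:", to_vector(text_a))
print("I:", to_vector(text_i))
```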

SLIDE 30IS 202 – FALL 2002 Document Vectors

SLIDE 31 IS 202 – FALL 2002
We Can Plot the Vectors
[2-D plot over the terms "Star" and "Diet", showing a doc about astronomy, a doc about movie stars, and a doc about mammal behavior]

SLIDE 32IS 202 – FALL 2002 Documents in 3D Space Primary assumption of the Vector Space Model: Documents that are “close together” in space are similar in meaning

SLIDE 33 IS 202 – FALL 2002
Vector Space Documents and Queries
[Table and plot of documents D1–D11 over terms t1, t2, t3, with Boolean term combinations]
Q is a query – also represented as a vector

SLIDE 34 IS 202 – FALL 2002
Documents in Vector Space
[3-D plot of documents D1–D11 along the term axes t1, t2, t3]

SLIDE 35 IS 202 – FALL 2002
Lecture Overview
Review:
– Content Analysis
– Statistical Properties of Text (Zipf Distribution; Statistical Dependence)
– Indexing and Inverted Files
Vector Representation
Term Weights
Vector Matching
Clustering
Credit for some of the slides in this lecture goes to Marti Hearst

SLIDE 36 IS 202 – FALL 2002
Assigning Weights to Terms
Binary Weights
Raw term frequency
tf*idf:
– Recall the Zipf distribution
– Want to weight terms highly if they are frequent in relevant documents … BUT infrequent in the collection as a whole
Automatically derived thesaurus terms

SLIDE 37IS 202 – FALL 2002 Binary Weights Only the presence (1) or absence (0) of a term is included in the vector

SLIDE 38IS 202 – FALL 2002 Raw Term Weights The frequency of occurrence for the term in each document is included in the vector

SLIDE 39IS 202 – FALL 2002 Assigning Weights tf*idf measure: –Term frequency (tf) –Inverse document frequency (idf) A way to deal with some of the problems of the Zipf distribution Goal: Assign a tf*idf weight to each term in each document

SLIDE 40IS 202 – FALL 2002 tf*idf

SLIDE 41 IS 202 – FALL 2002
Inverse Document Frequency
IDF provides high values for rare words and low values for common words
For a collection of documents (N = 10000) – see the worked values in the sketch below
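The tf*idf formula slides (40–41) were images and did not survive the transcript. A common formulation is w_ik = tf_ik * log(N / n_k); the sketch below assumes that form, with a base-10 log and illustrative document frequencies:

```python
import math

N = 10_000  # documents in the collection

def idf(n_k, n_docs=N):
    """Inverse document frequency: log(N / n_k), here with a base-10 log."""
    return math.log10(n_docs / n_k)

def tf_idf(tf, n_k, n_docs=N):
    """Weight of a term in a document: term frequency times idf."""
    return tf * idf(n_k, n_docs)

# Rare terms get high idf, common terms get low idf.
for n_k in (1, 100, 5_000, 10_000):
    print(f"term in {n_k:>6} docs -> idf = {idf(n_k):.3f}")

print("tf=3, term in 20 docs ->", round(tf_idf(3, 20), 3))
```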

SLIDE 42 IS 202 – FALL 2002
Similarity Measures
Simple matching (coordination level match)
Dice's Coefficient
Jaccard's Coefficient
Cosine Coefficient
Overlap Coefficient

SLIDE 43IS 202 – FALL 2002 tf*idf Normalization Normalize the term weights (so longer vectors are not unfairly given more weight) –Normalize usually means force all values to fall within a certain range, usually between 0 and 1, inclusive

SLIDE 44 IS 202 – FALL 2002
Vector Space Similarity
Now, the similarity of two documents is the inner product of their term-weight vectors
This is also called the cosine, or normalized inner product
– The normalization was done when weighting the terms

SLIDE 45IS 202 – FALL 2002 Vector Space Similarity Measure Combine tf and idf into a similarity measure
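The combined measure was shown as an equation image in the original deck; written out as a reconstruction in the usual cosine form, with w denoting the tf*idf weights of the query and of document D_i:

```latex
\mathrm{sim}(Q, D_i) = \cos\theta =
  \frac{\sum_{j=1}^{t} w_{q,j}\, w_{i,j}}
       {\sqrt{\sum_{j=1}^{t} w_{q,j}^{2}}\;\sqrt{\sum_{j=1}^{t} w_{i,j}^{2}}}
```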

SLIDE 46IS 202 – FALL 2002 Computing Similarity Scores

SLIDE 47IS 202 – FALL 2002 What’s Cosine Anyway? “One of the basic trigonometric functions encountered in trigonometry. Let theta be an angle measured counterclockwise from the x-axis along the arc of the unit circle. Then cos(theta) is the horizontal coordinate of the arc endpoint. As a result of this definition, the cosine function is periodic with period 2pi.” From

SLIDE 48 IS 202 – FALL 2002
Cosine vs. Degrees
[Plot of the cosine value against the angle in degrees]

SLIDE 49IS 202 – FALL 2002 Computing a Similarity Score

SLIDE 50 IS 202 – FALL 2002
Vector Space Matching
[2-D plot of Q, D1, and D2 over Term A and Term B]
D_i = (d_i1, w_di1; d_i2, w_di2; …; d_it, w_dit)
Q = (q_i1, w_qi1; q_i2, w_qi2; …; q_it, w_qit)
Q = (0.4, 0.8)   D1 = (0.8, 0.3)   D2 = (0.2, 0.7)
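Working through the numbers on this slide with the cosine measure (a quick check, not part of the original deck):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

Q  = (0.4, 0.8)
D1 = (0.8, 0.3)
D2 = (0.2, 0.7)

print("sim(Q, D1) =", round(cosine(Q, D1), 3))  # roughly 0.73
print("sim(Q, D2) =", round(cosine(Q, D2), 3))  # roughly 0.98 -> D2 ranks first
```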

SLIDE 51IS 202 – FALL 2002 Weighting Schemes We have seen something of –Binary –Raw term weights –TF*IDF There are many other possibilities –IDF alone –Normalized term frequency

SLIDE 52IS 202 – FALL 2002 Term Weights in SMART SMART is an experimental IR system developed by Gerard Salton (and continued by Chris Buckley) at Cornell Designed for laboratory experiments in IR –Easy to mix and match different weighting methods –Really terrible user interface –Intended for use by code hackers (and even they have trouble using it)

SLIDE 53 IS 202 – FALL 2002
Term Weights in SMART
In SMART, weights are decomposed into three factors: a term-frequency component, a collection-frequency (idf-like) component, and a length-normalization component (detailed on the next three slides)

SLIDE 54 IS 202 – FALL 2002
SMART Freq Components: binary, max-norm, augmented, log

SLIDE 55 IS 202 – FALL 2002
Collection Weighting in SMART: inverse, squared, probabilistic, frequency

SLIDE 56 IS 202 – FALL 2002
Term Normalization in SMART: sum, cosine, fourth, max
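The component formulas on slides 54–56 were images and are not in this transcript. As a rough illustration only, here are commonly cited versions of a few of them; these are generic textbook formulations, not necessarily the exact SMART definitions:

```python
import math

# --- Frequency (tf) components, commonly cited forms ---
def tf_binary(tf):            return 1 if tf > 0 else 0
def tf_augmented(tf, max_tf): return 0.5 + 0.5 * tf / max_tf      # "augmented"
def tf_log(tf):               return 1 + math.log(tf) if tf > 0 else 0

# --- Collection (idf-like) components ---
def idf_inverse(N, n):        return math.log(N / n)              # "inverse"
def idf_probabilistic(N, n):  return math.log((N - n) / n)        # "probabilistic"

# --- Normalization components ---
def norm_cosine(weights):     return math.sqrt(sum(w * w for w in weights))
def norm_sum(weights):        return sum(weights)

# A document weight is (tf component) * (collection component) / (normalization).
weights = [tf_augmented(tf, 5) * idf_inverse(10_000, 20) for tf in (5, 2, 1)]
normalized = [w / norm_cosine(weights) for w in weights]
print([round(w, 3) for w in normalized])
```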

SLIDE 57IS 202 – FALL 2002 To Think About How does the tf*idf ranking algorithm behave? –Make a set of hypothetical documents consisting of terms and their weights –Create some hypothetical queries –How are the documents ranked, depending on the weights of their terms and the queries’ terms?

SLIDE 58IS 202 – FALL 2002 Document Space Has High Dimensionality What happens beyond 2 or 3 dimensions? Similarity still has to do with how many tokens are shared in common More terms -> harder to understand which subsets of words are shared among similar documents One approach to handling high dimensionality: Clustering

SLIDE 59IS 202 – FALL 2002 Vector Space Visualization

SLIDE 60IS 202 – FALL 2002 Text Clustering Finds overall similarities among groups of documents Finds overall similarities among groups of tokens Picks out some themes, ignores others

SLIDE 61 IS 202 – FALL 2002
Text Clustering
Clustering is "The art of finding groups in data." – Kaufman and Rousseeuw
[Scatter plot of documents over Term 1 and Term 2]

SLIDE 62 IS 202 – FALL 2002
Text Clustering
Clustering is "The art of finding groups in data." – Kaufman and Rousseeuw
[Scatter plot of documents over Term 1 and Term 2, with the groups indicated]

SLIDE 63IS 202 – FALL 2002 Pair-Wise Document Similarity How to compute document similarity?

SLIDE 64IS 202 – FALL 2002 Pair-Wise Document Similarity (no normalization for simplicity)

SLIDE 65IS 202 – FALL 2002 Document/Document Matrix

SLIDE 66 IS 202 – FALL 2002
Agglomerative Clustering
[Dendrogram over documents A–I]

SLIDE 67 IS 202 – FALL 2002
Agglomerative Clustering
[Dendrogram over documents A–I]

SLIDE 68 IS 202 – FALL 2002
Agglomerative Clustering
[Dendrogram over documents A–I]
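A minimal sketch of bottom-up (single-link) agglomerative clustering over a pairwise similarity matrix; the similarity values here are invented for illustration:

```python
def agglomerative(sim, n_clusters):
    """Single-link agglomerative clustering.

    sim : dict mapping frozenset({i, j}) -> similarity between items i and j
    Returns a list of clusters (sets of item indices).
    """
    items = {i for pair in sim for i in pair}
    clusters = [{i} for i in sorted(items)]

    def link(c1, c2):
        # Single link: similarity of the closest pair across the two clusters.
        return max(sim[frozenset({a, b})] for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        # Merge the two most similar clusters.
        i, j = max(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda p: link(clusters[p[0]], clusters[p[1]]),
        )
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Invented similarities among 4 documents (0-3): 0-1 and 2-3 are close pairs.
sim = {
    frozenset({0, 1}): 0.9, frozenset({0, 2}): 0.1, frozenset({0, 3}): 0.2,
    frozenset({1, 2}): 0.2, frozenset({1, 3}): 0.1, frozenset({2, 3}): 0.8,
}
print(agglomerative(sim, 2))  # -> [{0, 1}, {2, 3}]
```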

SLIDE 69 IS 202 – FALL 2002
Clustering
Agglomerative methods: polythetic; exclusive or overlapping; unordered; clusters are order-dependent
Rocchio's method:
1. Select initial centers (i.e., seed the space)
2. Assign docs to highest-matching centers and compute centroids
3. Reassign all documents to centroid(s)

SLIDE 70 IS 202 – FALL 2002
Automatic Class Assignment: polythetic; exclusive or overlapping; usually ordered; clusters are order-independent; usually based on an intellectually derived scheme
[Figure: a document submitted to a search engine over the class pseudo-documents]
1. Create pseudo-documents representing intellectually derived classes
2. Search using document contents
3. Obtain ranked list
4. Assign document to N categories ranked over threshold, OR assign to top-ranked category

SLIDE 71 IS 202 – FALL 2002
K-Means Clustering
1. Create a pair-wise similarity measure
2. Find K centers using agglomerative clustering:
– Take a small sample
– Group bottom up until K groups found
3. Assign each document to nearest center, forming new clusters
4. Repeat 3 as necessary
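A minimal K-means sketch over term vectors. For brevity it seeds the centers from the first K documents rather than from an agglomerative pass over a sample as the slide suggests; the documents and distance choice are illustrative:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(vectors):
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def k_means(docs, k, iterations=10):
    """docs: list of equal-length term vectors. Returns a list of k clusters."""
    centers = [list(d) for d in docs[:k]]           # simple seeding (see note above)
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for d in docs:                               # assign each doc to nearest center
            nearest = min(range(k), key=lambda i: euclidean(d, centers[i]))
            clusters[nearest].append(d)
        centers = [centroid(c) if c else centers[i]  # recompute centroids
                   for i, c in enumerate(clusters)]
    return clusters

docs = [[10, 5, 0, 0], [8, 4, 1, 0], [0, 1, 7, 5], [1, 0, 6, 4]]
for cluster in k_means(docs, k=2):
    print(cluster)
```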

SLIDE 72 IS 202 – FALL 2002
Scatter/Gather
Cutting, Pedersen, Tukey & Karger 92, 93; Hearst & Pedersen 95
Cluster sets of documents into general "themes", like a table of contents
Display the contents of the clusters by showing topical terms and typical titles
User chooses subsets of the clusters and re-clusters the documents within
Resulting new groups have different "themes"

SLIDE 73 IS 202 – FALL 2002
S/G Example: Query on "star" (Encyclopedia text)
14 sports; 8 symbols; 47 film, tv; 68 film, tv (p); 7 music; 97 astrophysics; 67 astronomy (p); 12 stellar phenomena; 10 flora/fauna; 49 galaxies, stars; 29 constellations; 7 miscellaneous
Clustering and re-clustering is entirely automated

SLIDE 77 IS 202 – FALL 2002
Clustering Result Sets
Advantages:
– See some main themes
Disadvantage:
– Many ways documents could group together are hidden
Thinking point: What is the relationship to classification systems and facets?

SLIDE 78IS 202 – FALL 2002 Next Time Probabilistic Models Relevance Feedback