Introduction to Information Retrieval
The problem of IR
Goal: find documents relevant to an information need from a large document set.
[Diagram: an information need is expressed as a query to the IR system; the system performs retrieval over the document collection and returns an answer list.]
Example: Google web search
[Screenshot: a Google web search.]
Unstructured (text) vs. structured (database) data in 1996
Unstructured (text) vs. structured (database) data in 2009
Definitions
Information retrieval: subfield of computer science that deals with the automated retrieval of documents (especially text) based on their content and context.
Searching: seeking specific information within a body of information. The result of a search is a set of hits.
Browsing: unstructured exploration of a body of information.
Linking: moving from one item to another by following links, such as citations and references.
The Basics of Information Retrieval
Query: a string of text describing the information that the user is seeking. Each word of the query is called a search term. A query can be a single search term, a string of terms, a natural-language phrase, or a stylized expression using special symbols.
Full-text searching: methods that compare the query with every word in the text, without distinguishing the function of the various words.
Fielded searching: methods that search on specific bibliographic or structural fields, such as author or heading.
Sorting and ranking hits
When a user submits a query to a search system, the system returns a set of hits. With a large collection of documents, the set of hits may be very large. The value to the user depends on the order in which the hits are presented. Three main methods:
Sorting the hits, e.g., by date
Ranking the hits by similarity between query and document
Ranking the hits by the importance of the documents
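To make the three orderings concrete, here is a minimal sketch in Python; the Hit record and its date, similarity, and importance fields are hypothetical illustrations, not any particular system's API.

```python
from dataclasses import dataclass
import datetime

@dataclass
class Hit:
    doc_id: str
    date: datetime.date   # used for sorting by date
    similarity: float     # query-document similarity (e.g., a cosine score)
    importance: float     # document importance (e.g., a link-analysis score)

hits = [
    Hit("d1", datetime.date(2009, 5, 1), 0.42, 0.90),
    Hit("d2", datetime.date(2008, 1, 7), 0.81, 0.10),
    Hit("d3", datetime.date(2009, 2, 3), 0.63, 0.55),
]

by_date       = sorted(hits, key=lambda h: h.date, reverse=True)        # newest first
by_similarity = sorted(hits, key=lambda h: h.similarity, reverse=True)  # best match first
by_importance = sorted(hits, key=lambda h: h.importance, reverse=True)  # most important first
```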
Examples of Search Systems
Find a file on a computer system (Spotlight for Macintosh).
Library catalog for searching bibliographic records about books and other objects (Library of Congress catalog).
Abstracting and indexing system for finding research information about specific topics (Medline for medical information).
Web search service for finding web pages (Google).
Possible approaches
1. String matching (linear search in documents)
   - Slow
   - Difficult to improve
2. Indexing (*)
   - Fast
   - Flexible to further improvement
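A minimal sketch of the contrast, assuming a toy in-memory corpus; the function names are illustrative. String matching scans every document on every query, while the inverted index is built once and then answers term lookups directly.

```python
from collections import defaultdict

docs = {1: "information retrieval systems",
        2: "database systems",
        3: "retrieval of text"}

def string_match(term):
    """Approach 1: linear search through every document on every query."""
    return [d for d, text in docs.items() if term in text.split()]

# Approach 2: build an inverted index once (term -> set of doc ids).
index = defaultdict(set)
for d, text in docs.items():
    for word in text.split():
        index[word].add(d)

def index_lookup(term):
    """Answer a query with a single dictionary lookup."""
    return sorted(index.get(term, set()))

print(string_match("retrieval"))  # [1, 3]
print(index_lookup("retrieval"))  # [1, 3]
```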
Indexing-based IR
[Diagram: documents undergo indexing to a keyword representation; the query undergoes indexing (query analysis) to a keyword representation; query evaluation compares the two representations.]
Main problems in IR
Document and query indexing: how to best represent their contents?
Query evaluation (or retrieval process): to what extent does a document correspond to a query?
System evaluation: how good is a system? Are the retrieved documents relevant? (precision) Are all the relevant documents retrieved? (recall)
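A minimal sketch of the two evaluation measures just defined, using made-up document-ID sets:

```python
retrieved = {1, 2, 3, 5}   # documents the system returned
relevant  = {1, 3, 4, 6}   # documents that are actually relevant

precision = len(retrieved & relevant) / len(retrieved)  # fraction of retrieved docs that are relevant
recall    = len(retrieved & relevant) / len(relevant)   # fraction of relevant docs that were retrieved
print(precision, recall)  # 0.5 0.5
```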
Document indexing
Goal: find the important meanings and create an internal representation.
Factors to consider:
Accuracy in representing meanings (semantics)
Exhaustiveness (cover all the contents)
Facility for the computer to manipulate
What is the best representation of contents?
Character string (character trigrams): not precise enough
Word: good coverage, not precise
Phrase: poor coverage, more precise
Concept: poor coverage, precise
[Chart: coverage (recall) vs. accuracy (precision) trade-off along the string → word → phrase → concept spectrum.]
Keyword selection and weighting
How to select important keywords? A simple method: use middle-frequency words. Very frequent words are mostly function words that carry little content, while very rare words are unreliable indicators.
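A minimal sketch of that simple method, assuming ad hoc frequency thresholds (the slide gives none):

```python
from collections import Counter

def middle_frequency_keywords(text, low=2, high=50):
    """Keep words whose frequency falls in the middle band; the thresholds
    low and high are arbitrary assumptions chosen for illustration."""
    counts = Counter(text.lower().split())
    return {word for word, c in counts.items() if low <= c <= high}
```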
Boolean queries: Exact match (Sec. 1.3)
The Boolean retrieval model lets the user pose any query that is a Boolean expression: terms combined with AND, OR, and NOT.
Views each document as a set of words.
Is precise: a document either matches the condition or it does not.
Perhaps the simplest model to build an IR system on.
The primary commercial retrieval tool for three decades.
Many search systems you still use are Boolean: library catalogs, Mac OS X Spotlight.
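A minimal sketch of the Boolean model, with each document as a set of words; the tiny corpus is made up for illustration:

```python
docs = {
    1: {"brutus", "caesar", "calpurnia"},
    2: {"brutus", "caesar"},
    3: {"caesar"},
}

# Query: brutus AND caesar AND NOT calpurnia
hits = [d for d, words in docs.items()
        if "brutus" in words and "caesar" in words and "calpurnia" not in words]
print(hits)  # [2] -- exact match: a document either satisfies the expression or not
```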
Example: WestLaw (http://www.westlaw.com/) (Sec. 1.4)
Largest commercial (paying subscribers) legal search service (started 1975; ranking added 1992).
Tens of terabytes of data; 700,000 users.
The majority of users still use Boolean queries.
Example query: What is the statute of limitations in cases involving the federal tort claims act?
LIMIT! /3 STATUTE ACTION /S FEDERAL /2 TORT /3 CLAIM
(/3 = within 3 words, /S = in same sentence)
What’s ahead in IR? Beyond term search
What about phrases, e.g., Stanford University?
Proximity: find Gates NEAR Microsoft. Need the index to capture position information in documents.
Zones in documents: find documents with (author = Ullman) AND (text contains automata).
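A minimal sketch of a positional index answering a NEAR query, assuming NEAR means "within k word positions"; the corpus and names are illustrative:

```python
from collections import defaultdict

docs = {1: "bill gates founded microsoft",
        2: "gates of the city"}

# Positional index: term -> doc id -> list of word positions.
positions = defaultdict(lambda: defaultdict(list))
for d, text in docs.items():
    for i, word in enumerate(text.split()):
        positions[word][d].append(i)

def near(t1, t2, k=3):
    """Documents where t1 and t2 occur within k positions of each other."""
    hits = []
    for d in positions[t1].keys() & positions[t2].keys():
        if any(abs(i - j) <= k for i in positions[t1][d] for j in positions[t2][d]):
            hits.append(d)
    return hits

print(near("gates", "microsoft"))  # [1]
```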
Ranking search results
Boolean queries give only inclusion or exclusion of documents. Often we want to rank or group results:
Need to measure the proximity (similarity) of each document to the query.
Need to decide whether the documents presented to the user are singletons, or a group of documents covering various aspects of the query.
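A minimal sketch of ranking by query-document similarity, here plain cosine similarity over raw term counts (no tf-idf weighting); the corpus is made up:

```python
import math
from collections import Counter

def cosine(query, doc):
    """Cosine similarity between two texts, viewed as term-count vectors."""
    qv, dv = Counter(query.split()), Counter(doc.split())
    dot = sum(qv[t] * dv[t] for t in qv)
    norm = (math.sqrt(sum(v * v for v in qv.values()))
            * math.sqrt(sum(v * v for v in dv.values())))
    return dot / norm if norm else 0.0

docs = {1: "drug abuse prevention", 2: "abuse of process", 3: "cooking recipes"}
query = "drug abuse"
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked)  # [1, 2, 3] -- best match first instead of a bare include/exclude
```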
IR vs. databases: Structured vs. unstructured data
Structured data tends to refer to information in “tables”:

Employee  Manager  Salary
Smith     Jones    50000
Chang     Smith    60000
Ivy       Smith    50000

Typically allows numerical-range and exact-match (for text) queries, e.g., Salary < … AND Manager = Smith.
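A minimal sketch of such a structured query, expressed directly in Python over the table above (a real database would use SQL); the threshold 60000 is an illustrative assumption, since the slide's value is missing:

```python
employees = [
    {"employee": "Smith", "manager": "Jones", "salary": 50000},
    {"employee": "Chang", "manager": "Smith", "salary": 60000},
    {"employee": "Ivy",   "manager": "Smith", "salary": 50000},
]

# e.g. Salary < 60000 AND Manager = Smith (60000 chosen for illustration)
matches = [e for e in employees
           if e["salary"] < 60000 and e["manager"] == "Smith"]
print(matches)  # only Ivy qualifies
```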
Unstructured data
Typically refers to free text. Allows:
Keyword queries, including operators
More sophisticated “concept” queries, e.g., find all web pages dealing with drug abuse
The classic model for searching text documents.
Semi-structured data
In fact, almost no data is truly “unstructured”. E.g., this slide has distinctly identified zones such as the Title and Bullets.
This facilitates “semi-structured” search such as: Title contains data AND Bullets contain search
… to say nothing of linguistic structure.
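A minimal sketch of that zone-restricted search; the zone names mirror the slide's Title and Bullets, and the toy documents are made up:

```python
docs = [
    {"title": "data structures", "bullets": ["search trees", "hashing"]},
    {"title": "cooking basics",  "bullets": ["knife skills"]},
]

# Query: Title contains "data" AND Bullets contain "search"
hits = [d for d in docs
        if "data" in d["title"].split()
        and any("search" in b.split() for b in d["bullets"])]
print(hits)  # matches only the first document
```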
More sophisticated semi-structured search
Title is about Object Oriented Programming AND Author something like stro*rup, where * is the wildcard operator.
Issues: how do you process “about”? How do you rank results?
This is the focus of XML search.
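A minimal sketch of the wildcard part: translate stro*rup into an anchored regular expression and test candidate author names (the candidates are made up):

```python
import re

def wildcard_match(pattern, candidates):
    """Turn a *-wildcard pattern into an anchored regex and filter with it."""
    regex = re.compile("^" + re.escape(pattern).replace(r"\*", ".*") + "$")
    return [c for c in candidates if regex.match(c)]

print(wildcard_match("stro*rup", ["stroustrup", "strachey", "strosrup"]))
# ['stroustrup', 'strosrup']
```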
Clustering, classification and ranking
Clustering: given a set of documents, group them into clusters based on their contents.
Classification: given a set of topics plus a new document D, decide which topic(s) D belongs to.
Ranking: can we learn how to best order a set of documents, e.g., a set of search results?
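A minimal sketch of content-based clustering, using scikit-learn (an assumption: the slide names no library) on a tiny made-up corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["drug abuse treatment", "drug addiction help",
        "stock market prices", "market trading stocks"]

X = TfidfVectorizer().fit_transform(docs)                        # docs -> tf-idf vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # expected: the two drug docs share one label, the two market docs the other
```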
The web and its challenges
Unusual and diverse documents
Unusual and diverse users, queries, and information needs
Beyond terms: exploit ideas from social networks (link analysis, clickstreams, ...)
How do search engines work? And how can we make them better?
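As a taste of link analysis, here is a minimal PageRank sketch by power iteration on a tiny made-up link graph (the damping factor 0.85 is the customary choice):

```python
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}   # page -> pages it links to
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                                  # power iteration
    new = {p: (1 - 0.85) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += 0.85 * rank[p] / len(outs)     # p shares its rank among its out-links
    rank = new

print(rank)  # "c" ends up highest: both "a" and "b" link to it
```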
More sophisticated information retrieval
Cross-language information retrieval
Question answering
Summarization
Text mining
…