Mining the Web for Structured Data


CS345 Data Mining
Mining the Web for Structured Data

Our view of the web so far…
Web pages as atomic units
Great for some applications, e.g., conventional web search
But not always the right model

Going beyond web pages
Question answering: What is the height of Mt Everest? Who killed Abraham Lincoln?
Relation extraction: Find all <company, CEO> pairs
Virtual databases: Answer database-like queries over web data, e.g., find all software engineering jobs in Fortune 500 companies

Question Answering
E.g., Who killed Abraham Lincoln?
Naïve algorithm:
Find all web pages containing the terms “killed” and “Abraham Lincoln” in close proximity
Extract k-grams from a small window around the terms
Find the most commonly occurring k-grams
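The naïve algorithm can be sketched in a few lines of Python. Everything here is a stand-in: the page texts, window size, and k would come from a search-engine query in a real system.

```python
import re
from collections import Counter

def answer_candidates(pages, terms, window=10, k=2):
    """Naive QA sketch: count k-grams that occur near the query terms."""
    counts = Counter()
    for text in pages:
        tokens = re.findall(r"\w+", text.lower())
        # positions where any query term occurs
        hits = [i for i, tok in enumerate(tokens) if tok in terms]
        for i in hits:
            lo, hi = max(0, i - window), min(len(tokens), i + window)
            # all k-grams inside the window around this hit
            counts.update(zip(*(tokens[lo + j:hi] for j in range(k))))
    # drop k-grams consisting solely of the query terms themselves
    return [(g, n) for g, n in counts.most_common() if not set(g) <= terms]

pages = [
    "John Wilkes Booth killed Abraham Lincoln in 1865.",
    "Abraham Lincoln was killed by John Wilkes Booth.",
]
terms = {"killed", "abraham", "lincoln"}
top = answer_candidates(pages, terms)
```

With enough pages, the answer bigrams (here “john wilkes”, “wilkes booth”) rise to the top of the count simply by redundancy, which is the point the next slide makes.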

Question Answering
The naïve algorithm works fairly well! Some improvements:
Use sentence structure, e.g., restrict to noun phrases only
Rewrite questions before matching: “What is the height of Mt Everest” becomes “The height of Mt Everest is <blank>”
For simple questions, the number of pages analyzed matters more than the sophistication of the NLP
Reference: Dumais et al.

Relation Extraction
Find pairs (title, author), where title is the name of a book, e.g., (Foundation, Isaac Asimov)
Find pairs (company, hq), e.g., (Microsoft, Redmond)
Find pairs (abbreviation, expansion), e.g., (ADA, American Dental Association)
Can also have tuples with >2 components

Relation Extraction
Assumptions:
No single source contains all the tuples
Each tuple appears on many web pages
Components of a tuple appear “close” together, e.g., “Foundation, by Isaac Asimov” or “Isaac Asimov’s masterpiece, the <em>Foundation</em> trilogy”
There are repeated patterns in the way tuples are represented on web pages

Naïve approach
Study a few websites and come up with a set of patterns, e.g., regular expressions:
letter = [A-Za-z. ]
title = letter{5,40}
author = letter{10,30}
<b>(title)</b> by (author)
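As a rough sketch, the slide’s pattern translates directly into a Python regular expression; the character class and length bounds are taken from the slide, and the sample HTML line is from a later slide.

```python
import re

# letter = [A-Za-z. ], title = letter{5,40}, author = letter{10,30}
LETTER = r"[A-Za-z. ]"
BOOK_PATTERN = re.compile(rf"<b>({LETTER}{{5,40}})</b> by ({LETTER}{{10,30}})")

m = BOOK_PATTERN.search("<li><b> Foundation </b> by Isaac Asimov (1951)")
title, author = m.group(1).strip(), m.group(2).strip()
```

The author group simply stops at the first character outside the class (here the “(” of the year), which is why such hand-written patterns are brittle across sites.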

Problems with the naïve approach
A pattern that works on one web page might produce nonsense when applied to another
So patterns need to be page-specific, or at least site-specific
It is impossible for a human to exhaustively enumerate patterns for every relevant website
This will result in low coverage

Better approach (Brin)
Exploit the duality between patterns and tuples:
Find tuples that match a set of patterns
Find patterns that match a lot of tuples
DIPRE (Dual Iterative Pattern Relation Extraction)
(diagram: patterns match tuples; tuples generate patterns)

DIPRE Algorithm
1. R ← SampleTuples, e.g., a small set of <title, author> pairs
2. O ← FindOccurrences(R): occurrences of tuples on web pages; keep some surrounding context
3. P ← GenPatterns(O): look for patterns in the way tuples occur; make sure patterns are not too general!
4. R ← MatchingTuples(P)
5. Return, or go back to Step 2
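The loop itself is tiny; the heavy lifting is in the three operations, which are passed in as callables here because their real implementations run over a web crawl (all names below are hypothetical stand-ins, not Brin’s code):

```python
def dipre(seed_tuples, find_occurrences, gen_patterns, matching_tuples,
          iterations=3):
    """Sketch of the DIPRE loop from the slide."""
    relation = set(seed_tuples)
    for _ in range(iterations):
        occurrences = find_occurrences(relation)    # tuples + context on pages
        patterns = gen_patterns(occurrences)        # reject overly general ones
        relation |= set(matching_tuples(patterns))  # new tuples from the web
    return relation
```

Each pass grows the relation, which in turn yields more occurrences and patterns on the next pass; the later slides on specificity and drift are about keeping this feedback loop from running away.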

Occurrences
E.g., titles and authors: restrict to cases where author and title appear in close proximity on a web page
<li><b> Foundation </b> by Isaac Asimov (1951)
url = http://www.scifi.org/bydecade/1950.html
order = [title, author] (or [author, title]); denote as 0 or 1
prefix = “<li><b> ” (limit to, e.g., 10 characters)
middle = “</b> by ”
suffix = “(1951) ”
occurrence = (’Foundation’, ’Isaac Asimov’, url, order, prefix, middle, suffix)

Patterns
<li><b> Foundation </b> by Isaac Asimov (1951)
<p><b> Nightfall </b> by Isaac Asimov (1941)
order = [title, author] (say 0)
shared prefix = <b>
shared middle = </b> by
shared suffix = (19
pattern = (order, shared prefix, shared middle, shared suffix)

URL Prefix
Patterns may be specific to a website, or even to parts of it
Add a urlprefix component to the pattern
http://www.scifi.org/bydecade/1950.html
occurrence: <li><b> Foundation </b> by Isaac Asimov (1951)
http://www.scifi.org/bydecade/1940.html
occurrence: <p><b> Nightfall </b> by Isaac Asimov (1941)
shared urlprefix = http://www.scifi.org/bydecade/19
pattern = (urlprefix, order, prefix, middle, suffix)

Generating Patterns
Group occurrences by order and middle
Let O = a set of occurrences with the same order and middle
pattern.order = O.order
pattern.middle = O.middle
pattern.urlprefix = longest common prefix of all urls in O
pattern.prefix = longest shared ending of the occurrence prefixes in O (the part adjacent to the tuple)
pattern.suffix = longest shared beginning of the occurrence suffixes in O
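A sketch of this step, assuming occurrences are stored as the 7-tuples shown two slides back. Note the asymmetry: the shared context adjacent to the tuple is the common ending of the prefix strings but the common beginning of the suffix strings.

```python
import os
from collections import defaultdict

def common_prefix(strings):
    # character-wise longest common prefix
    return os.path.commonprefix(list(strings))

def common_suffix(strings):
    return common_prefix([s[::-1] for s in strings])[::-1]

def gen_patterns(occurrences):
    """Each occurrence is the 7-tuple from the earlier slide:
    (title, author, url, order, prefix, middle, suffix)."""
    groups = defaultdict(list)
    for _, _, url, order, prefix, middle, suffix in occurrences:
        groups[(order, middle)].append((url, prefix, suffix))
    patterns = []
    for (order, middle), members in groups.items():
        urls, prefixes, suffixes = zip(*members)
        patterns.append({
            "order": order,
            "middle": middle,
            "urlprefix": common_prefix(urls),
            # common *ending* of the prefixes, common *beginning*
            # of the suffixes: the parts adjacent to the tuple
            "prefix": common_suffix(prefixes),
            "suffix": common_prefix(suffixes),
        })
    return patterns
```

On the Foundation/Nightfall example this yields urlprefix http://www.scifi.org/bydecade/19, middle “</b> by ”, a prefix ending in “<b> ”, and suffix “(19”.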

Example
http://www.scifi.org/bydecade/1950.html
occurrence: <li><b> Foundation </b> by Isaac Asimov (1951)
http://www.scifi.org/bydecade/1940.html
occurrence: <p><b> Nightfall </b> by Isaac Asimov (1941)
order = [title, author]
middle = “ </b> by ”
urlprefix = http://www.scifi.org/bydecade/19
prefix = “<b> ”
suffix = “ (19”

Example
http://www.scifi.org/bydecade/1950.html
occurrence: Foundation, by Isaac Asimov, has been hailed…
http://www.scifi.org/bydecade/1940.html
occurrence: Nightfall, by Isaac Asimov, tells the tale of…
order = [title, author]
middle = “, by ”
urlprefix = http://www.scifi.org/bydecade/19
prefix = “”
suffix = “, ”

Pattern Specificity
We want to avoid generating patterns that are too general
One approach: for a pattern p, define specificity(p) = |urlprefix| · |middle| · |prefix| · |suffix|
Let n(p) = the number of occurrences that match pattern p
Discard patterns p where n(p) < nmin
Discard patterns p where specificity(p) · n(p) < threshold

Pattern Generation Algorithm
Group occurrences by order and middle
Let O = a set of occurrences with the same order and middle
p = GeneratePattern(O)
If p meets the specificity requirements, add p to the set of patterns
Otherwise, try to split O into multiple subgroups by extending the urlprefix by one character
If all occurrences in O are from the same URL, we cannot extend the urlprefix, so we discard O

Extending the URL prefix
Suppose O contains occurrences from urls of the form
http://www.scifi.org/bydecade/195?.html
http://www.scifi.org/bydecade/194?.html
urlprefix = http://www.scifi.org/bydecade/19
When we extend the urlprefix, we split O into two subsets:
urlprefix = http://www.scifi.org/bydecade/194
urlprefix = http://www.scifi.org/bydecade/195
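Putting the specificity test and the splitting step together, under assumptions of my own: occurrences are stored with the url first, `make_pattern` builds a pattern dict for a group (as in the generation slide), and `n_min`/`threshold` are illustrative values.

```python
def specificity(p):
    # product of the component lengths, per the specificity slide
    return (len(p["urlprefix"]) * len(p["middle"])
            * len(p["prefix"]) * len(p["suffix"]))

def accept_or_split(group, make_pattern, n_min=2, threshold=100):
    """Accept the group's pattern, or split the group by extending
    the urlprefix by one character (a sketch, not Brin's code)."""
    p = make_pattern(group)
    n = len(group)
    if n >= n_min and specificity(p) * n >= threshold:
        return [p]
    if len({occ[0] for occ in group}) == 1:
        return []          # single URL: cannot extend the urlprefix
    # split on the first character after the shared urlprefix
    k = len(p["urlprefix"])
    subgroups = {}
    for occ in group:
        subgroups.setdefault(occ[0][:k + 1], []).append(occ)
    patterns = []
    for sub in subgroups.values():
        patterns.extend(accept_or_split(sub, make_pattern, n_min, threshold))
    return patterns
```

Because the shared urlprefix is maximal, at least two urls must disagree on the next character, so every split produces strictly smaller subgroups and the recursion terminates.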

Finding occurrences and matches
Finding occurrences: use an inverted index on web pages; examine the resulting pages to extract occurrences
Finding matches: use the urlprefix to restrict the set of pages to examine; scan each page using a regex constructed from the pattern

Relation Drift
Small contaminations can easily lead to huge divergences
Need to tightly control the process
Snowball (Agichtein and Gravano):
Trust only tuples that match many patterns
Trust only patterns with high “support” and “confidence”

Pattern support
Similar to DIPRE: eliminate patterns not supported by at least nmin known good tuples (either seed tuples or tuples generated in a prior iteration)

Pattern Confidence
Suppose tuple t matches pattern p
What is the probability that tuple t is valid?
Call this probability the confidence of pattern p, denoted conf(p)
Assume it is independent of the other patterns
How can we estimate conf(p)?

Categorizing pattern matches
Given pattern p, suppose we can partition its matching tuples into groups p.positive, p.negative, and p.unknown
The grouping methodology is application-specific

Categorizing Matches
E.g., organizations and headquarters:
A tuple that exactly matches a known pair (org, hq) is positive
A tuple that matches the org of a known tuple but a different hq is negative (assume org is a key for the relation)
A tuple whose hq is not a known city is negative (assume we have a list of valid city names)
All other tuples are unknown

Categorizing Matches
Books and authors, one possibility:
A tuple that matches a known tuple is positive
A tuple that matches the title of a known tuple but has a different author is negative (assume title is a key for the relation)
All other tuples are unknown
Can come up with other schemes if we have more information, e.g., a list of valid person names
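The books-and-authors scheme can be sketched directly; `known` maps each known title to its author, and title is treated as the key (the function name and representation are mine).

```python
def categorize(matches, known):
    """Partition a pattern's matching (title, author) tuples into
    positive / negative / unknown, per the slide's scheme."""
    pos, neg, unk = [], [], []
    for title, author in matches:
        if known.get(title) == author:
            pos.append((title, author))       # matches a known tuple
        elif title in known:
            neg.append((title, author))       # known title, wrong author
        else:
            unk.append((title, author))       # title not in the relation
    return pos, neg, unk
```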

Example
Suppose we know the tuples (Foundation, Isaac Asimov) and (Startide Rising, David Brin)
Suppose pattern p matches those two tuples, plus (Foundation, Doubleday) and (Rendezvous with Rama, Arthur C. Clarke)
Then |p.positive| = 2, |p.negative| = 1, |p.unknown| = 1

Pattern Confidence (1)
pos(p) = |p.positive|
neg(p) = |p.negative|
un(p) = |p.unknown|
conf(p) = pos(p) / (pos(p) + neg(p))

Pattern Confidence (2)
Another definition penalizes patterns with many unknown matches:
conf(p) = pos(p) / (pos(p) + neg(p) + α·un(p)), where 0 ≤ α ≤ 1

Tuple confidence
Suppose candidate tuple t matches patterns p1 and p2
What is the probability that t is a valid tuple?
Assume matches of different patterns are independent events

Tuple confidence
Pr[t matches p1 and t is not valid] = 1 − conf(p1)
Pr[t matches {p1, p2} and t is not valid] = (1 − conf(p1))(1 − conf(p2))
Pr[t matches {p1, p2} and t is valid] = 1 − (1 − conf(p1))(1 − conf(p2))
If tuple t matches a set of patterns P:
conf(t) = 1 − ∏p∈P (1 − conf(p))
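Both confidence formulas translate directly into code (a minimal sketch; the function and argument names are mine, and α = 0 recovers the first pattern-confidence definition):

```python
def pattern_conf(pos, neg, unk=0, alpha=0.0):
    # conf(p) = pos / (pos + neg + alpha * unknown), with 0 <= alpha <= 1
    return pos / (pos + neg + alpha * unk)

def tuple_conf(pattern_confs):
    # conf(t) = 1 - product over matching patterns p of (1 - conf(p)):
    # t is invalid only if every independent pattern match "fails"
    not_valid = 1.0
    for c in pattern_confs:
        not_valid *= 1.0 - c
    return 1.0 - not_valid
```

For the earlier example with pos = 2, neg = 1, this gives conf(p) = 2/3; a tuple matching two patterns of confidence 0.5 each gets conf(t) = 0.75.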

Snowball algorithm
1. Start with a seed set R of tuples
2. Generate a set P of patterns from R
3. Compute the support and confidence of each pattern in P; discard patterns with low support or confidence
4. Generate a new set T of tuples matching the patterns P
5. Compute the confidence of each tuple in T
6. Add to R the tuples t ∈ T with conf(t) > threshold
7. Go back to Step 2
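The steps above can be sketched as a loop; as with the DIPRE sketch, every callable and threshold here is a hypothetical stand-in for the web-scale operation named on the slide.

```python
def snowball(seed, gen_patterns, support, confidence, match_tuples,
             tuple_conf, n_min=2, conf_min=0.5, t_thresh=0.8, iterations=3):
    """Sketch of the Snowball loop: like DIPRE, but patterns are
    filtered by support/confidence and tuples by confidence."""
    R = set(seed)
    for _ in range(iterations):
        P = [p for p in gen_patterns(R)                 # steps 2-3
             if support(p, R) >= n_min and confidence(p) >= conf_min]
        T = match_tuples(P)                             # step 4
        R |= {t for t in T                              # steps 5-6
              if tuple_conf(t, P) > t_thresh}
    return R
```

The two filters are what distinguish this from DIPRE: low-support or low-confidence patterns never get to propose tuples, and low-confidence tuples never get to seed the next iteration, which limits relation drift.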

Some refinements
Give more weight to tuples found earlier
Approximate pattern matches
Entity tagging

Tuple confidence
If tuple t matches a set of patterns P exactly:
conf(t) = 1 − ∏p∈P (1 − conf(p))
Suppose we allow tuples that match patterns only approximately; weight each pattern by its degree of match:
conf(t) = 1 − ∏p∈P (1 − conf(p)·match(t, p))