
Classifying and Searching "Hidden-Web" Text Databases


1 Classifying and Searching "Hidden-Web" Text Databases
Panos Ipeirotis, Department of Information Systems, New York University

2 Motivation: "Surface" Web vs. "Hidden" Web
"Surface" Web:
Link structure
Crawlable
Documents indexed by search engines
"Hidden" Web:
No link structure
Documents "hidden" in databases
Documents not indexed by search engines
Need to query each collection individually

3 Hidden-Web Databases: Examples
Search on the U.S. Patent and Trademark Office (USPTO) database: [wireless network] → 29,051 matches (the USPTO database is at patft.uspto.gov)
Search on Google restricted to the USPTO database site: [wireless network site:patft.uspto.gov] → 0 matches

Database              Query              Database Matches   Site-Restricted Google Matches
USPTO                 wireless network   29,051             0
Library of Congress   visa regulations   >10,000
PubMed                thrombopenia       26,887             221
(as of Oct 28th, 2004)

4 Interacting With Hidden-Web Databases
Browsing: Yahoo!-like directories (InvisibleWeb.com, SearchEngineGuide.com)
Searching: Metasearchers
Populated manually

5 Outline of Talk
Classification of Hidden-Web Databases
Search over Hidden-Web Databases

6 Hierarchically Classifying the ACM Digital Library
ACM DL ?

7 Hierarchically Classifying the ACM Digital Library
ACM DL ?

8 Hierarchically Classifying the ACM Digital Library
ACM DL

9 Text Database Classification: Definition
For a text database D and a category C:
Coverage(D, C) = number of docs in D about C
Specificity(D, C) = fraction of docs in D about C
Assign a text database to a category C if:
Database coverage for C is at least Tc (coverage threshold, e.g., > 100 docs in C)
Database specificity for C is at least Ts (specificity threshold, e.g., > 40% of docs in C)
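A minimal sketch of this classification decision, with hypothetical category counts and the example thresholds Tc = 100 and Ts = 40% from the slide:

```python
def classify_database(doc_counts, total_docs, tc=100, ts=0.4):
    """doc_counts: {category: number of docs in D about that category}."""
    assigned = []
    for category, coverage in doc_counts.items():
        specificity = coverage / total_docs if total_docs else 0.0
        if coverage >= tc and specificity >= ts:   # both thresholds must hold
            assigned.append(category)
    return assigned

# Example: a 1,000-document health-oriented database
print(classify_database({"Health": 850, "Sports": 40}, total_docs=1000))
# -> ['Health']
```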

10 Brute-Force Classification “Strategy”
Extract all documents from the database
Classify documents by topic (using state-of-the-art classifiers: SVMs, C4.5, RIPPER, …)
Classify database according to topic distribution
Problem: No direct access to full contents of Hidden-Web databases

11 Classification: Goal & Challenges
Goal: Discover database topic distribution
Challenges:
No direct access to full contents of Hidden-Web databases
Only limited search interfaces available
Should not overload databases
Key observation: Only queries "about" the database topic(s) generate a large number of matches
For example, in a health-related database a query on "cancer" will generate a large number of matches, while a query on "michael jordan" will generate few or zero matches. We will see how to generate such queries and how to exploit the returned results.

12 Query-based Database Classification: Overview
Train document classifier
Extract queries from classifier (e.g., Sports: +nba +knicks, Health: +sars)
Adaptively issue queries to database (e.g., +sars → 1,254 matches)
Identify topic distribution based on adjusted number of query matches
Classify database

13 Training a Document Classifier
Get training set (set of pre-classified documents)
Select best features to characterize documents (Zipf's law + information-theoretic feature selection) [Koller and Sahami 1996]
Train classifier (SVM, C4.5, RIPPER, …)
Output: A "black-box" model for classifying documents
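A minimal sketch of this training step, assuming scikit-learn; the tiny training set, the use of mutual information for feature selection, and the parameter choices are illustrative assumptions, not the talk's exact setup:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_docs = [
    "the knicks won the nba game",
    "nba playoffs: the knicks beat the bulls",
    "sars outbreak reported in the hospital",
    "doctors monitor new sars cases in hospital",
]
train_labels = ["Sports", "Sports", "Health", "Health"]

classifier = make_pipeline(
    CountVectorizer(stop_words="english"),     # words as features
    SelectKBest(mutual_info_classif, k=4),     # keep the most informative words
    LinearSVC(),                               # linear SVM: the "black-box" model
)
classifier.fit(train_docs, train_labels)
print(classifier.predict(["nba knicks tickets"]))   # -> ['Sports']
```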

15 Extracting Query Probes
ACM TOIS 2003
Transform classifier model into queries:
Trivial for "rule-based" classifiers (RIPPER)
Easy for decision-tree classifiers (C4.5), for which rule generators exist (C4.5rules)
Trickier for other classifiers: we devised rule-extraction methods for linear classifiers (linear-kernel SVMs, Naïve Bayes, …)
Example query for Sports: +nba +knicks
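A minimal sketch of turning extracted classification rules into conjunctive query probes; the rule format below is a hypothetical simplification (one conjunction of terms per rule):

```python
rules = {
    "Sports": [["nba", "knicks"], ["soccer"]],   # IF nba AND knicks THEN Sports, ...
    "Health": [["sars"]],
}

def rules_to_query_probes(rules):
    probes = []
    for category, conjunctions in rules.items():
        for terms in conjunctions:
            query = " ".join("+" + t for t in terms)   # e.g. "+nba +knicks"
            probes.append((category, query))
    return probes

print(rules_to_query_probes(rules))
# [('Sports', '+nba +knicks'), ('Sports', '+soccer'), ('Health', '+sars')]
```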

16 Querying Database with Extracted Queries
SIGMOD 2001, ACM TOIS 2003
Issue each query to the database to obtain the number of matches, without retrieving any documents
Increase the coverage of the rule's category accordingly (e.g., #Sports = #Sports + 706)

17 Identifying Topic Distribution from Query Results
Query-based estimates of topic distribution are not perfect:
Document classifiers not perfect: rules for one category match documents from other categories
Querying not perfect: queries for the same category might overlap; queries do not match all documents in a category
Solution: Learn to adjust the results of query probes

18 Confusion Matrix Adjustment of Query Probe Results
Correct (but unknown) topic distribution (Real Coverage): comp 1000, sports 5000, health 50
Confusion matrix M (rows: assigned class, columns: correct class):
          comp   sports   health
comp      0.80   0.10     0.00
sports    0.08   0.85     0.04
health    0.02   0.15     0.96
(e.g., 10% of "sports" documents match queries for "computers")
Incorrect topic distribution derived from query probing (Estimated Coverage = M × Real Coverage): comp 1300, sports 4332, health 818
This "multiplication" can be inverted to get a better estimate of the real topic distribution from the probe results

19 Confusion Matrix Adjustment of Query Probe Results
Coverage(D) ≈ M⁻¹ · ECoverage(D)
(adjusted estimate of topic distribution = inverse of confusion matrix × probing results)
M is usually diagonally dominant for "reasonable" document classifiers, hence invertible
Compensates for errors in query-based estimates of topic distribution
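A minimal sketch of this adjustment, using the numbers from the previous slide:

```python
import numpy as np

M = np.array([[0.80, 0.10, 0.00],     # rows/columns: comp, sports, health
              [0.08, 0.85, 0.04],
              [0.02, 0.15, 0.96]])
estimated_coverage = np.array([1300, 4332, 818])   # from query probing

adjusted_coverage = np.linalg.inv(M) @ estimated_coverage
print(np.round(adjusted_coverage))                 # -> [1000. 5000.   50.]
```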

20 Classification Algorithm (Again)
One-time process:
Train document classifier
Extract queries from classifier (e.g., Sports: +nba +knicks, Health: +sars)
For every database:
Adaptively issue queries to database (e.g., +sars → 1,254 matches)
Identify topic distribution based on adjusted number of query matches
Classify database

21 Experimental Setup
72-node, 4-level topic hierarchy from InvisibleWeb/Yahoo! (54 leaf nodes)
500,000 Usenet articles (April-May 2000): newsgroups (e.g., comp.hardware, rec.music.classical, rec.photo.*) assigned by hand to hierarchy nodes
RIPPER trained with 54,000 articles (1,000 articles per leaf); 27,000 articles used to construct the confusion matrix
500 "controlled" databases built using 419,000 newsgroup articles (to run detailed experiments)
130 real Web databases picked from InvisibleWeb (first 5 under each topic)

22 Experimental Results: Controlled Databases
Accuracy (using F-measure):
Above 80% for most <Tc, Ts> threshold combinations tried
Degrades gracefully with hierarchy depth
Confusion-matrix adjustment helps
Efficiency:
Relatively small number of queries (<500) needed for most <Tc, Ts> threshold combinations tried

23 Experimental Results: Web Databases
Accuracy (using F-measure):
~70% for best <Tc, Ts> combination
Learned thresholds that reproduce human classification
Tested threshold choice using 3-fold cross-validation
Efficiency:
120 queries per database on average needed for choice of thresholds; no documents retrieved
Only a small part of the hierarchy "explored"
Queries are short: 1.5 words on average, 4 words maximum (easily handled by most Web databases)

24 Other Experiments
Effect of choice of document classifiers: RIPPER, C4.5, Naïve Bayes, SVM
Benefits of feature selection
Effect of search-interface heterogeneity: Boolean vs. vector-space retrieval models
Effect of query-overlap elimination step
Over crawlable databases: query-based classification is orders of magnitude faster than "brute-force" crawling-based classification
ACM TOIS 2003, IEEE Data Engineering Bulletin 2002

25 Hidden-Web Database Classification: Summary
Handles autonomous Hidden-Web databases accurately and efficiently:
~70% F-measure
Only 120 queries issued on average, with no documents retrieved
Handles a large family of document classifiers (and can hence exploit future advances in machine learning)

26 Outline of Talk
Classification of Hidden-Web Databases
Search over Hidden-Web Databases

27 Interacting With Hidden-Web Databases
Browsing: Yahoo!-like directories
Searching: Metasearchers
Content not accessible through Google
(Diagram: a query goes to a metasearcher, which queries NYTimes Archives, PubMed, USPTO, and the Library of Congress)

28 Metasearchers Provide Access to Distributed Databases
Database selection relies on simple content summaries: vocabulary, word frequencies
PubMed (11,868,552 documents):
aids          …,491
cancer        1,562,477
heart         691,360
hepatitis     121,129
thrombopenia  24,826
(Diagram: the query "thrombopenia" matches 24,826 documents in PubMed, and 18 and 0 documents in the other databases, NYTimes Archives and USPTO)
Databases typically do not export such summaries!

29 Extracting Content Summaries from Autonomous Hidden-Web Databases
[Callan & Connell 2001]
1. Send random queries to the database
2. Retrieve top matching documents
3. If 300 documents retrieved, stop; else go to step 1
Content summary contains the words in the sample and the document frequency of each word
Problems:
Random sampling retrieves non-representative documents
Frequencies in summary are "compressed" to the sample-size range
Summaries from small samples are highly incomplete
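A minimal sketch of this query-based sampling loop in the spirit of [Callan & Connell 2001]; `search` and `random_word` are hypothetical stand-ins for the database's search interface and a source of query words, not anything from the talk:

```python
from collections import Counter

def sample_content_summary(search, random_word, sample_size=300, max_queries=1000):
    """search(query) -> list of top matching document texts; random_word() -> a word."""
    sample, doc_freq = [], Counter()
    for _ in range(max_queries):
        for doc in search(random_word()):               # top matching documents
            sample.append(doc)
            doc_freq.update(set(doc.lower().split()))   # document frequency of each word
        if len(sample) >= sample_size:                   # stop once ~300 docs are sampled
            break
    return doc_freq
```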

30 Extracting Representative Document Sample
Problem 1: Random sampling retrieves non-representative documents
Train a document classifier
Create queries from the classifier
Adaptively issue queries to the database
Retrieve top-k matching documents for each query
Save #matches for each one-word query
Identify topic distribution based on adjusted number of query matches
Categorize the database
Generate content summary from the document sample
Sampling retrieves documents only from "topically dense" areas of the database

31 Sample Frequencies vs. Actual Frequencies
Problem 2: Frequencies in summary are "compressed" to the sample-size range
PubMed (11,868,552 docs): cancer 1,562,477; heart 691,360; …
PubMed sample (300 documents): cancer 45; heart 16; …
Key observation: Query matches reveal frequency information

32 Adjusting Document Frequencies
VLDB 2002
Zipf's law empirically connects word frequency f and rank r: f = A (r + B)^c
(Plot: frequency vs. rank)

33 Adjusting Document Frequencies
VLDB 2002
Zipf's law empirically connects word frequency f and rank r: f = A (r + B)^c
We know the document frequency and rank r of the words in the sample
(Plot: frequency in sample vs. rank in sample)

34 Adjusting Document Frequencies
VLDB 2002
Zipf's law empirically connects word frequency f and rank r: f = A (r + B)^c
We know the document frequency and rank r of the words in the sample
We know the real document frequency f of some words from one-word queries
(Plot: frequency in database vs. rank in sample)

35 Adjusting Document Frequencies
VLDB 2002
Zipf's law empirically connects word frequency f and rank r: f = A (r + B)^c
We know the document frequency and rank r of the words in the sample
We know the real document frequency f of some words from one-word queries
We use curve fitting to estimate the absolute frequency of all words in the sample
(Plot: estimated frequency in database vs. rank)
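A minimal sketch of the curve-fitting step with made-up numbers: fit f = A (r + B)^c to the words whose real database frequency is known from one-word query probes, then extrapolate to every word in the sample. The ranks, frequencies, and starting values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def zipf(r, A, B, c):
    return A * (r + B) ** c

known_ranks = np.array([1, 5, 20, 100])                        # ranks within the sample
known_freqs = np.array([1_500_000, 300_000, 60_000, 9_000])    # matches from one-word queries

params, _ = curve_fit(zipf, known_ranks, known_freqs,
                      p0=[2_000_000, 1.0, -1.0],
                      bounds=([0, 0, -5], [np.inf, np.inf, 0]))

all_ranks = np.arange(1, 301)                  # every sampled word, by rank
estimated_freqs = zipf(all_ranks, *params)     # estimated frequency in the full database
print(np.round(estimated_freqs[:5]))
```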

36 Extracted PubMed Content Summary (vs. Actual)
Number of documents: 8,691,360 (Actual: 11,868,552)
Category: Health, Diseases
cancer       1,562,477
heart        …,506   (Actual: 691,360)
aids         …,491
hepatitis    …,481   (Actual: 121,129)
basketball           (Actual: 1,063)
cpu
Extracted automatically:
~27,500 words in the extracted content summary
Fewer than 200 queries sent
At most 4 documents retrieved per query
(heart, hepatitis, basketball were not in the 1-word probes)

37 Sampling and Incomplete Content Summaries
Problem 3: Summaries from small samples are highly incomplete
(Plot: log(frequency) vs. rank for the 10% most frequent words in the PubMed database; 95% of the words appear in < 0.1% of the database)
Many words appear in "relatively few" documents (Zipf's law)

38 Sampling and Incomplete Content Summaries
Problem 3: Summaries from small samples are highly incomplete
(Plot: log(frequency) vs. rank for the 10% most frequent words in the PubMed database; e.g., "endocarditis" appears in ~10,000 docs, ~0.1% of the database)
Many words appear in "relatively few" documents (Zipf's law)
Low-frequency words are often important

39 Sampling and Incomplete Content Summaries
Problem 3: Summaries from small samples are highly incomplete
(Plot: log(frequency) vs. rank for the 10% most frequent words in the PubMed database, with a sample of 300 documents overlaid; "endocarditis", ~9,000 docs / ~0.1% of the database, is missed)
Many words appear in "relatively few" documents (Zipf's law)
Low-frequency words are often important
Small document samples miss many low-frequency words

40 Sample-based Content Summaries
Challenge: Improve content summary quality without increasing sample size
Main idea: Database classification helps
Similar topics ↔ similar content summaries
Extracted content summaries complement each other

41 Databases with Similar Topics
CANCERLIT contains "metastasis", not found during sampling
CancerBACUP contains "metastasis"
Databases under the same category have similar vocabularies, and can complement each other

42 Content Summaries for Categories
Databases under the same category share similar vocabulary
Higher-level category content summaries provide additional useful estimates
All estimates in the category path are potentially useful

43 Enhancing Summaries Using “Shrinkage”
SIGMOD 2004
Estimates from database content summaries can be unreliable
Category content summaries are more reliable (based on larger samples) but less specific to the database
By combining estimates from category and database content summaries we get better estimates

44 Shrinkage-based Estimations
Adjust the estimate for "metastasis" in D by mixing estimates along its category path:
Pr[metastasis | D] = λ1 · Pr[metastasis | Root] + λ2 · Pr[metastasis | Health] + λ3 · 0.092 + λ4 · 0.000
Select the λi weights to maximize the probability that the summary of D is from a database under all its parent categories
Avoids the "sparse data" problem and decreases estimation risk

45 Computing Shrinkage-based Summaries
Category path: Root → Health → Cancer → D
Pr[metastasis | D] = λ1 · Pr[metastasis | Root] + λ2 · Pr[metastasis | Health] + λ3 · 0.092 + λ4 · 0.000
Pr[treatment | D] = λ1 · Pr[treatment | Root] + λ2 · Pr[treatment | Health] + λ3 · 0.179 + λ4 · 0.184
Automatic computation of the λi weights using an EM algorithm
Computation performed offline → no query overhead
Avoids the "sparse data" problem and decreases estimation risk
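A minimal sketch of the shrinkage combination, using the λ weights and per-node estimates shown on the next slide for the CANCERLIT example:

```python
def shrink(word_probs, lambdas):
    """word_probs/lambdas: aligned lists over [Root, Health, Cancer, CANCERLIT]."""
    return sum(l * p for l, p in zip(lambdas, word_probs))

lambdas = [0.02, 0.13, 0.20, 0.65]        # λroot, λhealth, λcancer, λcancerlit
metastasis = [0.002, 0.05, 0.092, 0.0]    # per-node estimates of Pr[metastasis]

print(round(shrink(metastasis, lambdas), 3))   # -> 0.025 (2.5%, vs. 0% unshrunk)
```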

46 Shrinkage Weights and Summary
Word         Shrinkage-based (new)   Root (λ=0.02)   Health (λ=0.13)   Cancer (λ=0.20)   CANCERLIT (λ=0.65, old)
metastasis   2.5%                    0.2%            5%                9.2%              0%
aids         14.3%                   0.8%            7%                2%                20%
football     0.17%                   (1% in one parent category)
Shrinkage:
Increases estimates for underestimates (e.g., metastasis)
Decreases word-probability estimates for overestimates (e.g., aids)
…it also introduces (with small probabilities) spurious words (e.g., football)

47 Is Shrinkage Always Necessary?
Shrinkage is used to reduce the uncertainty (variance) of estimations
Small samples of large databases → high variance
In sample: 10 out of 100 documents contain "metastasis"
In database: ? out of 10,000,000 documents
Small samples of small databases → small variance
In database: ? out of 200 documents
Shrinkage is less useful (or even harmful) when uncertainty is low

48 Adaptive Application of Shrinkage
Database selection algorithms assign scores to databases for each query
When word-frequency estimates are uncertain, the assigned score has high variance → shrinkage improves score estimates
When word-frequency estimates are reliable, the assigned score has small variance → shrinkage is unnecessary (and might hurt)
(Plots: probability vs. database score for a query, for an unreliable and a reliable score estimate)
Solution: Use shrinkage adaptively, in a query- and database-specific manner

49 Searching Algorithm
One-time process:
Classify databases and extract document samples
Adjust frequencies in samples
For every query:
For each database D:
Assign score to database D (using the extracted content summary)
Examine uncertainty of score
If uncertainty is high, apply shrinkage and assign a new score; else keep the existing score
Query only the top-K scoring databases
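A minimal sketch of the per-query part of this algorithm; the scoring helpers (`score`, `score_variance`, `score_with_shrinkage`) and the variance threshold are hypothetical placeholders for the underlying database selection machinery:

```python
def select_databases(query, databases, top_k=3, variance_threshold=0.1):
    scored = []
    for db in databases:
        score = db.score(query)                        # from the extracted content summary
        if db.score_variance(query) > variance_threshold:
            score = db.score_with_shrinkage(query)     # uncertain score -> use shrunk summary
        scored.append((score, db))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [db for _, db in scored[:top_k]]            # query only the top-K databases
```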

50 Extracting Content Summaries: Problems Solved
Problem 1: Random sampling may retrieve non-representative documents
Solution: Focus querying on "topically dense" areas of the database
Problem 2: Frequencies are "compressed" to the sample-size range
Solution: Exploit the number of matches for each query and adjust estimates using curve fitting
Problem 3: Summaries based on small samples are highly incomplete
Solution: Exploit database classification and augment summaries using samples from topically similar databases

51 Evaluation: Goals
Examine quality of shrinkage-based summaries
Examine effect of shrinkage on database selection

Word         CANCERLIT Correct   CANCERLIT Shrinkage-based   CANCERLIT Unshrunk
metastasis   12%                 2.5%                        0%
aids         8%                  14.3%                       20%
football                         0.17%
regression                       1%

52 Experimental Setup
Three data sets:
Two standard testbeds from TREC ("Text Retrieval Conference"): 200 databases, 100 queries with associated human-assigned document relevance judgments
315 real Web databases
Two sets of experiments:
Content summary quality
Database selection accuracy

53 Experimental Results
Content summary quality:
Shrinkage improves quality of content summaries without increasing sample size
Frequency estimation gives accurate (within ±20%) estimates of actual frequencies
Database selection accuracy:
Frequency estimation: improves performance by 20%-30%
Focused sampling: improves performance by 40%-50%
Adaptive application of shrinkage: improves performance by up to 100%
Shrinkage is robust: improved performance consistently across many different configurations

54 Results: Database Selection
Metric: R(K) = X / Y, where
X = # of relevant documents in the selected K databases
Y = # of relevant documents in the best K databases
Results shown for CORI (a state-of-the-art database selection algorithm) with stemming, over one TREC testbed
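A minimal sketch of the R(K) metric with hypothetical relevance counts:

```python
def r_at_k(relevant_per_db, selected, k):
    """relevant_per_db: {db: #relevant docs}; selected: ranked list of chosen databases."""
    x = sum(relevant_per_db.get(db, 0) for db in selected[:k])        # selected K databases
    y = sum(sorted(relevant_per_db.values(), reverse=True)[:k])       # best possible K databases
    return x / y if y else 0.0

relevant = {"PubMed": 40, "USPTO": 0, "NYTimes": 10, "LoC": 5}
print(r_at_k(relevant, selected=["NYTimes", "LoC"], k=2))   # -> 15/50 = 0.3
```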

55 Other Experiments
Choice of database selection algorithm (CORI, bGlOSS, Language Modeling)
Universal vs. adaptive application of shrinkage
Effect of stemming
Effect of stop-word elimination

56 Classification & Search: Overall Contributions
Support for browsing and searching Hidden-Web databases
No need for cooperation: works with autonomous Hidden-Web databases
Scalable: works with large numbers of databases
Not restricted to "Hidden"-Web databases: works with any searchable text database
Classification and content summary extraction implemented and available for download

57 Future Work: Integrated Access to Hidden-Web Databases
Query: [good drama movies playing in toronto tomorrow]
Current top Google result: [screenshot, as of Oct 28th, 2004]

58 Future Work: Integrated Access to Hidden-Web Databases
Query: [good drama movies playing in toronto tomorrow]
(Diagram: the query is decomposed and sent to review databases, movie databases, and ticket databases)
All the information is already available on the web:
Review databases: Rotten Tomatoes, NY Times, TONY, …
Movie databases: All Movie Guide, IMDB
Tickets: Moviefone, Fandango, …
Privacy: Should the database know that it was selected for one of my queries?
Authorization: If a query is sent to a database that I do not have access to, I know that it contains something relevant

59 Future Work: Integrated Access to Hidden-Web Databases
Query: [good drama movies playing in toronto tomorrow]
Challenges:
Short term: Learn to interface with different databases; adapt database selection algorithms
Long term: Understand the semantics of the query; extract "query plans" and optimize for distributed execution; personalization; security and privacy
Privacy: Should the database know that it was selected for one of my queries?
Authorization: If a query is sent to a database that I do not have access to, I know that it contains something relevant

60 Thank you!

61 Outline of Talk
Classification of Hidden-Web Databases
Search over Hidden-Web Databases
SDARTS: Protocol and Toolkit for Metasearching

62 SDARTS: Protocol and Toolkit for Metasearching
(Diagram: a query goes through SDARTS to Harrison's Online, the British Medical Journal, PubMed, unstructured text documents, the DLI2 Corpus, and XML documents — both local and Web sources)

63 SDARTS: Protocol and Toolkit for Metasearching
ACM+IEEE JCDL Conference 2001, 2002
Accomplishments:
Combines the strengths of existing Digital Library protocols (SDLIP, STARTS)
Enables indexing and wrapping of "local" collections of text and XML documents
Enables "declarative" wrapping of Hidden-Web databases, with no programming
Extracts content summary, topical focus, and technical level of each database
Interfaces with the Open Archives Initiative, an emerging Digital Library interoperability protocol
Critical building block for the search component of Columbia's PERSIVAL project (5-year, $5M NSF Digital Libraries – Phase 2 project)
Open source; ~1,000 downloads since Jan 2003
Supervised and coordinated eight students during development

64 Current Work: Updating Content Summaries
Databases are not static; their content changes. When should we refresh the content summary?
Examined 150 real Web databases over 52 weeks
Modeled changes using "survival analysis" techniques (Cox proportional hazards model)
Currently developing updating algorithms:
Contact the database only when necessary
Improve quality of summaries by exploiting history
Joint work with Junghoo Cho and Alex Ntoulas (UCLA)

65 Other Work: Approximate Text Matching
VLDB'01, WWW'03
Matching similar strings within a relational DBMS is important: the data resides there
Service A: Jenny Stamatopoulou; John Paul McDougal; Aldridge Rodriguez; Panos Ipeirotis; John Smith
Service B: Panos Ipirotis; Jonh Smith; Stamatopulou, Jenny; John P. McDougal; Al Dridge Rodriguez
Exact joins are not enough: typing mistakes, abbreviations, different conventions
Introduced algorithms for mapping approximate text joins into SQL:
No need for import/export of data
Provides a crucial building block for data cleaning applications
Identifies many interesting matches
Joint work with Divesh Srivastava, Nick Koudas (AT&T Labs-Research) and others

66 No Good Category for Database
General "problem" with supervised learning
Example: English vs. Chinese databases
Devised a technique to analyze whether we can work with a given database:
Find candidate textfields
Send top-level queries
Examine results and construct a similarity matrix
If the "matrix rank" is small → many "similar" pages returned:
The Web form is not a search interface
The textfield is not a "keyword" field
The database is in a different language
The database is on an "unknown" topic

67 Database not Category Focused
Extract one content summary per topic:
Focused queries retrieve documents about a known topic
Each database is represented multiple times in the hierarchy

68 Near Future Work: Definition and analysis of query-based algorithms
WebDB 2003
Currently query-based algorithms are evaluated only empirically
Possible to model the querying process using random graph theory and:
Analyze thoroughly the properties of the algorithms
Understand better why, when, and how the algorithms work
Interested in exploring similar directions:
Adapt hyperlink-based ranking algorithms
Use results in graph theory to design sampling algorithms

69 Database Selection (CORI, TREC6)
More results in … Stemming/No Stemming, CORI/LM/bGlOSS, QBS/FPS/RS/CMPL, Stopwords

70 3-Fold Cross-Validation
These charts are not included in the paper. They show the F-measure values for the three disjoint sets of the Web set. Their behavior is essentially the same for varying thresholds, strongly confirming that we are not overfitting the data.

71 Crawling- vs. Query-based Classification for CNN Sports
IEEE DEB – March 2002
Efficiency statistics:
Crawling-based: 1,325 min, 270,202 files, 8 GB
Query-based: 2 min (-99.8%), 112 queries, 357 KB (-99.9%)
Accuracy statistics: With crawling-based classification, CNN-Sports is classified correctly only after downloading 70% of its documents

72 Experiments: Precision of Database Selection Algorithms
VLDB 2002 (extended version)
Content Summary Generation Technique — CORI precision (Hierarchical / Flat):
FP-SVM-Documents: 0.270 / 0.170
FP-SVM-Snippets: 0.200 / 0.183
Random Sampling: 0.177
QPilot (backlinks + front page): 0.050

73 F-measure vs. Hierarchy Depth
ACM TOIS 2003

74 Real Confusion Matrix for Top Node of Hierarchy
Categories: Health, Sports, Science, Computers, Arts
Matrix values (flattened): 0.753 0.018 0.124 0.021 0.017 0.006 0.171 0.016 0.064 0.024 0.255 0.047 0.004 0.042 0.080 0.610 0.031 0.027 0.298

75 Overlap Elimination

76 No Support for Conjunctive Queries (Boolean vs. Vector-space)

77 Outline of Talk
Classification of Hidden-Web Databases
Search over Hidden-Web Databases
Modeling and Managing Changes in Hidden-Web Databases

78 Do Content Summaries Change Over Time?
Databases are not static; their content changes. Should we refresh the content summary?
Examined summaries of 152 real Web databases over 52 weeks
Summary quality declines as age increases

79 Updating Content Summaries
Summaries change → need to refresh them to capture changes
To devise an update policy → need to know the frequency of "change"
A summary changes at time T if dist(current, old(T)) > τ (τ: change-sensitivity threshold)
Survival analysis estimates the probability S(t) that T > t
Common model: S(t) = e^(-λt) (λ defines the frequency of change)
Problems:
No access to content summaries
Even if we know the summaries, it takes a long time to estimate λ
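A minimal sketch of the exponential survival model with made-up numbers: under S(t) = e^(-λt), λ can be estimated from observed times between summary changes, and S(t) gives the chance that the summary is still unchanged after t weeks.

```python
import math

observed_change_times = [6.0, 11.0, 15.0, 8.0]   # weeks between detected summary changes
lam = 1.0 / (sum(observed_change_times) / len(observed_change_times))   # rate estimate

def survival(t, lam):
    return math.exp(-lam * t)        # Pr[summary unchanged after t weeks]

print(round(lam, 3))                 # estimated changes per week
print(round(survival(4, lam), 2))    # chance of no change within 4 weeks
```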

80 Cox Proportional Hazards Regression
We want to estimate the frequency of change for each database
The Cox PH model examines the effect of database characteristics on the frequency of change
E.g., "if you double the size of a database, it changes twice as fast"
The Cox PH model effectively uses "censored" data (i.e., the database did not change within time T)

81 Cox PH Regression Results
Examined the effect of:
Change-sensitivity threshold τ
Topic
Domain
Size
Number of words
Differences of summaries extracted in consecutive weeks
Devised a concrete "change model" based on database characteristics (formula in thesis)

82 Scheduling Updates
Using our change model, we schedule updates according to the available resources (using the Lagrange-multiplier method)

D                λ        average time between updates
                          10 weeks        40 weeks
Tom's Hardware   0.088    5 weeks         46 weeks
USPS             0.023    12 weeks        34 weeks

83 Scheduling Results
With clever scheduling we improve the quality of summaries according to a variety of metrics (precision, recall, KL-divergence)

84 Updating Content Summaries: Contributions
Extensive experimental study showing that the quality of summaries deteriorates as summary age increases
Change-frequency model that uses database characteristics to predict frequency of change
Derived scheduling algorithms that define the update frequency for each database according to the available resources

