CPT-S Advanced Databases


1 CPT-S 580-06 Advanced Databases
Yinghui Wu, EME 49

2 Information retrieval and Database systems

3 Information Retrieval: a brief overview
Relevance Ranking Using Terms
Relevance Using Hyperlinks
Synonyms, Homonyms, and Ontologies
Indexing of Documents
Measuring Retrieval Effectiveness
Web Search Engines
Information Retrieval and Structured Data

4 Information Retrieval Systems
Information retrieval (IR) systems use a simpler data model than database systems:
Information is organized as a collection of documents
Documents are unstructured and have no schema
Information retrieval locates relevant documents on the basis of user input such as keywords or example documents
E.g., find documents containing the words "database systems"
Can be used even on textual descriptions provided with non-textual data such as images
Web search engines are the most familiar example of IR systems

5 Information Retrieval Systems (Cont.)
Differences from database systems:
IR systems don't deal with transactional updates (including concurrency control and recovery)
Database systems deal with structured data, with schemas that define the data organization
IR systems deal with some querying issues not generally addressed by database systems:
Approximate searching by keywords
Ranking of retrieved answers by estimated degree of relevance

6 Keyword Search
In full text retrieval, all the words in each document are considered to be keywords; we use the word term to refer to the words in a document
Information-retrieval systems typically allow query expressions formed using keywords and the logical connectives and, or, and not
Ands are implicit, even if not explicitly specified
Ranking of documents on the basis of estimated relevance to a query is critical
Relevance ranking is based on factors such as:
Term frequency: frequency of occurrence of a query keyword in the document
Inverse document frequency: how many documents the query keyword occurs in; the fewer, the more importance is given to the keyword
Hyperlinks to documents: the more links to a document, the more important the document is considered
What else? Popularity? Conciseness? Diversity? Interestingness? Surprisingness?

7 Relevance Ranking Using Terms
TF-IDF (term frequency / inverse document frequency) ranking. Let
n(d) = number of terms in the document d
n(d, t) = number of occurrences of term t in the document d
Relevance of a document d to a term t:
TF(d, t) = log(1 + n(d, t) / n(d))
The log factor is to avoid excessive weight to frequent terms
Relevance of document d to query Q:
r(d, Q) = sum over t in Q of TF(d, t) / n(t)
where n(t) is the number of documents that contain term t
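A minimal sketch of the TF-IDF formulas above in Python, assuming the corpus is given as a dict of {doc_id: list of terms} (the documents and identifiers are illustrative only):

```python
import math
from collections import Counter

def build_doc_frequency(docs):
    """n(t): number of documents that contain term t."""
    df = Counter()
    for terms in docs.values():
        df.update(set(terms))
    return df

def tf(terms, t):
    """TF(d, t) = log(1 + n(d, t) / n(d))."""
    if not terms:
        return 0.0
    return math.log(1 + terms.count(t) / len(terms))

def relevance(terms, query_terms, df):
    """r(d, Q) = sum over t in Q of TF(d, t) / n(t)."""
    return sum(tf(terms, t) / df[t] for t in query_terms if df[t] > 0)

docs = {"d1": ["database", "systems", "store", "data"],
        "d2": ["motorcycle", "repair", "manual"]}
df = build_doc_frequency(docs)
print(relevance(docs["d1"], ["database", "systems"], df))
```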

8 Relevance Ranking Using Terms (Cont.)
Most systems add to the above model:
Words that occur in the title, author list, section headings, etc. are given greater importance
Words whose first occurrence is late in the document are given lower importance
Very common words such as "a", "an", "the", "it", etc. are eliminated; these are called stop words
Proximity: if the keywords in a query occur close together in the document, the document is ranked higher than if they occur far apart
Documents are returned in decreasing order of relevance score; usually only the top few documents are returned, not all

9 Similarity Based Retrieval
Similarity based retrieval: retrieve documents similar to a given document
Similarity may be defined on the basis of common words, e.g., find the k terms in A with the highest TF(d, t) / n(t) and use these terms to find the relevance of other documents
Relevance feedback: similarity can be used to refine the answer set of a keyword query
The user selects a few relevant documents from those retrieved by the keyword query, and the system finds other documents similar to these
Vector space model: define an n-dimensional space, where n is the number of words in the document set
The vector for document d goes from the origin to a point whose i-th coordinate is TF(d, t_i) / n(t_i)
The cosine of the angle between the vectors of two documents is used as a measure of their similarity
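A minimal sketch of the vector space model above: each document becomes a vector of per-term weights and similarity is the cosine of the angle between two vectors. The simple count-based weighting and the toy documents are illustrative assumptions:

```python
import math
from collections import Counter

def doc_vector(terms, vocabulary):
    """Weight of each vocabulary term in the document (here: plain term frequency)."""
    counts = Counter(terms)
    return [counts[t] / len(terms) for t in vocabulary]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

doc_a = ["motorcycle", "repair", "manual"]
doc_b = ["motorcycle", "maintenance", "guide"]
vocabulary = sorted(set(doc_a) | set(doc_b))
print(cosine_similarity(doc_vector(doc_a, vocabulary), doc_vector(doc_b, vocabulary)))
```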

10 Relevance Using Hyperlinks
The number of documents relevant to a query can be enormous if only term frequencies are taken into account
Using term frequencies also makes "spamming" easy: e.g., a travel agency can add many occurrences of the word "travel" to its page to make its rank very high
Most of the time people are looking for pages from popular sites
Idea: use the popularity of a Web site (e.g., how many people visit it) to rank site pages that match the given keywords
Problem: it is hard to find the actual popularity of a site
Solution: next slide

11 Relevance Using Hyperlinks (Cont.)
Solution: use the number of hyperlinks to a site as a measure of the popularity or prestige of the site
Count only one hyperlink from each site (why? see the previous slide)
The popularity measure is for the site, not for individual pages
But most hyperlinks are to the root of a site
Also, the concept of "site" is difficult to define, since a URL prefix like cs.yale.edu contains many unrelated pages of varying popularity
Refinement: when computing prestige based on links to a site, give more weight to links from sites that themselves have higher prestige
The definition is circular: set up and solve a system of simultaneous linear equations
The above idea is the basis of the Google PageRank ranking mechanism
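A simplified sketch of PageRank-style prestige: each page's score depends on the scores of the pages linking to it, and the circular definition is resolved by iteration (equivalent to solving the linear system). The damping factor of 0.85 and the tiny link graph are illustrative assumptions, not Google's actual implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling pages distribute no mass in this sketch
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(links))
```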

12 Relevance Using Hyperlinks (Cont.)
Connections to social networking theories that rank the prestige of people
E.g., the president of the U.S.A. has high prestige since many people know him
Someone known by multiple prestigious people has high prestige
Hub and authority based ranking:
A hub is a page that stores links to many pages (on a topic)
An authority is a page that contains actual information on a topic
Each page gets a hub prestige based on the prestige of the authorities that it points to
Each page gets an authority prestige based on the prestige of the hubs that point to it
Again, the prestige definitions are cyclic and can be obtained by solving linear equations
Use authority prestige when ranking answers to a query
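A minimal sketch of hub/authority (HITS-style) ranking: authority scores come from the hubs pointing at a page, hub scores from the authorities a page points to, iterated with normalization. The link graph is an illustrative assumption:

```python
import math

def hits(links, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    hub = {p: 1.0 for p in pages}
    auth = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score: sum of hub scores of pages linking to the page.
        auth = {p: sum(hub[q] for q, targets in links.items() if p in targets)
                for p in pages}
        # Hub score: sum of authority scores of the pages it links to.
        hub = {p: sum(auth[t] for t in links.get(p, [])) for p in pages}
        # Normalize so the scores do not grow without bound.
        for scores in (auth, hub):
            norm = math.sqrt(sum(v * v for v in scores.values())) or 1.0
            for p in scores:
                scores[p] /= norm
    return hub, auth

links = {"hub1": ["site_a", "site_b"], "hub2": ["site_a"], "site_a": [], "site_b": []}
print(hits(links))
```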

13 Synonyms and Homonyms
Synonyms: e.g., document: "motorcycle repair", query: "motorcycle maintenance"
The system needs to realize that "maintenance" and "repair" are synonyms, and can extend the query to "motorcycle and (repair or maintenance)"
Homonyms: e.g., "object" has different meanings as a noun and as a verb
Meanings can be disambiguated (to some extent) from the context
Extending queries automatically using synonyms can be problematic:
Need to understand the intended meaning in order to infer synonyms, or verify synonyms with the user
Synonyms may have other meanings as well

14 Concept-Based Querying
Approach: for each word, determine the concept it represents from context
Use one or more ontologies: hierarchical structures showing relationships between concepts
E.g., the ISA relationship that we saw in the E-R model
This approach can be used to standardize terminology in a specific field
Ontologies can link multiple languages
Foundation of the Semantic Web (not covered here)

15 Indexing of Documents
An inverted index maps each keyword Ki to the set Si of documents that contain the keyword
Documents are identified by identifiers
The inverted index may also record:
Keyword locations within documents, to allow proximity based ranking
Counts of the number of occurrences of each keyword, to compute TF
and operation: finds documents that contain all of K1, K2, ..., Kn by computing the intersection S1 ∩ S2 ∩ ... ∩ Sn
or operation: finds documents that contain at least one of K1, K2, ..., Kn by computing the union S1 ∪ S2 ∪ ... ∪ Sn
Each Si is kept sorted to allow efficient intersection/union by merging
"not" can also be efficiently implemented by merging of sorted lists
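A minimal sketch of an inverted index with sorted posting lists and an "and" query implemented by merging (intersecting) the lists. The document identifiers and contents are illustrative:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """docs: dict mapping a doc id (int) to its list of terms."""
    index = defaultdict(set)
    for doc_id, terms in docs.items():
        for term in terms:
            index[term].add(doc_id)
    # Keep each posting list sorted so queries can merge efficiently.
    return {term: sorted(ids) for term, ids in index.items()}

def intersect(list_a, list_b):
    """Merge two sorted posting lists, keeping only the common doc ids."""
    result, i, j = [], 0, 0
    while i < len(list_a) and j < len(list_b):
        if list_a[i] == list_b[j]:
            result.append(list_a[i]); i += 1; j += 1
        elif list_a[i] < list_b[j]:
            i += 1
        else:
            j += 1
    return result

def and_query(index, keywords):
    lists = [index.get(k, []) for k in keywords]
    result = lists[0] if lists else []
    for postings in lists[1:]:
        result = intersect(result, postings)
    return result

docs = {1: ["database", "systems"], 2: ["database", "design"], 3: ["systems"]}
index = build_inverted_index(docs)
print(and_query(index, ["database", "systems"]))  # -> [1]
```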

16 Measuring Retrieval Effectiveness
Information-retrieval systems save space by using index structures that support only approximate retrieval. May result in: false negative (false drop) - some relevant documents may not be retrieved. false positive - some irrelevant documents may be retrieved. For many applications a good index should not permit any false drops, but may permit a few false positives. Relevant performance metrics: precision - what percentage of the retrieved documents are relevant to the query. recall - what percentage of the documents relevant to the query were retrieved.
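A minimal sketch of the two metrics above: precision is the fraction of retrieved documents that are relevant, recall the fraction of relevant documents that were retrieved. The document id sets are illustrative:

```python
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5]))  # (0.5, 0.666...)
```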

17 Measuring Retrieval Effectiveness (Cont.)
Recall vs. precision tradeoff: Can increase recall by retrieving many documents (down to a low level of relevance ranking), but many irrelevant documents would be fetched, reducing precision Measures of retrieval effectiveness: Recall as a function of number of documents fetched, or Precision as a function of recall Equivalently, as a function of number of documents fetched E.g. “precision of 75% at recall of 50%, and 60% at a recall of 75%” Problem: which documents are actually relevant, and which are not

18 Web Search Engines
Web crawlers are programs that locate and gather information on the Web
They recursively follow hyperlinks present in known documents to find other documents, starting from a seed set of documents
Fetched documents are handed over to an indexing system, and can be discarded after indexing or stored as a cached copy
Crawling the entire Web would take a very large amount of time
Search engines typically cover only a part of the Web, not all of it, and take months to perform a single crawl

19 Web Crawling (Cont.)
Crawling is done by multiple processes on multiple machines, running in parallel
The set of links to be crawled is stored in a database; new links found in crawled pages are added to this set, to be crawled later
The indexing process also runs on multiple machines
It creates a new copy of the index instead of modifying the old index; the old index is used to answer queries, and after a crawl is "completed" the new index becomes the "old" index
Multiple machines are used to answer queries
Indices may be kept in memory
Queries may be routed to different machines for load balancing
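A minimal single-process sketch of the crawling loop: fetch a page, extract links, and add unseen links back to the frontier. Real crawlers run many such workers in parallel and respect robots.txt; the regex-based link extraction, the page limit, and the commented-out seed URL are simplifying assumptions:

```python
import re
import urllib.request
from collections import deque

def crawl(seed_urls, max_pages=10):
    frontier = deque(seed_urls)   # links still to be crawled
    seen = set(seed_urls)
    fetched = {}                  # url -> raw HTML, handed to the indexer
    while frontier and len(fetched) < max_pages:
        url = frontier.popleft()
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # skip pages that fail to download
        fetched[url] = html
        # Naive absolute-link extraction; a real crawler would parse the HTML properly.
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return fetched

# pages = crawl(["https://example.org/"])  # example seed, illustrative only
```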

20 Information Retrieval and Structured Data
Information retrieval systems originally treated documents as a collection of words
Information extraction systems infer structure from documents, e.g.:
Extraction of house attributes (size, address, number of bedrooms, etc.) from a text advertisement
Extraction of the topic and the people named in a news article
Relations or XML structures are used to store the extracted data
The system seeks connections among data to answer queries
Question answering systems

21 Case study: ambiguous graph search

22 Queries transform to inexact answers
Example: "find information about the patients with eye tumor, and doctors who cured them" (cf. IBM Watson, Facebook Graph Search, Apple Siri, Wolfram Alpha Search, ...)
Query analysis techniques can transform such a keyword query into a small graph pattern with nodes for patient, eye tumor and doctor, which is more intuitive to write than SPARQL
The data graph may contain a good but inexact match: "eye tumor" does not literally match "choroid neoplasm", yet Jane (patient) and Alex Smith (primary care provider) form the intended answer
The gap can be bridged with external ontologies, e.g., doctor SameAs physician, physician superclassOf primary care provider, eye tumor synonym eye neoplasm, which in turn relates to choroid neoplasm
If no ontology is available in the first place, one can be constructed, as Google, IBM, Microsoft and Yahoo are doing
Idea: use ontologies to capture semantically related matches

23 Ontology-based graph querying
Given a data graph, a query graph and an ontology graph, identify the K best matches under a semantic closeness metric
The semantic closeness of a match is computed as the sum of the closeness between each query node and its match in the data graph
Per-node closeness can come from existing ontology similarity metrics, which typically assume that two entities that are closer in the ontology are more closely related
In general, a match whose nodes are close to the query nodes in the ontology is considered more relevant to the query; this is the central idea of ontology-based matching
Example: query node "eye tumor" may match "choroid neoplasm", and "doctor" may match "primary care provider"
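An illustrative sketch of the semantic-closeness idea above: score a candidate match by summing per-node distances in the ontology (a shorter ontology distance means the entities are semantically closer). The toy ontology edges and the shortest-path distance metric are assumptions for illustration, not the paper's exact metric:

```python
from collections import deque

ontology_edges = {  # undirected "related-to" edges in a toy ontology
    "doctor": ["physician"],
    "physician": ["doctor", "primary care provider"],
    "primary care provider": ["physician"],
    "eye tumor": ["eye neoplasm"],
    "eye neoplasm": ["eye tumor", "choroid neoplasm"],
    "choroid neoplasm": ["eye neoplasm"],
}

def ontology_distance(source, target):
    """Breadth-first-search distance between two ontology terms."""
    if source == target:
        return 0
    frontier, seen = deque([(source, 0)]), {source}
    while frontier:
        node, dist = frontier.popleft()
        for nxt in ontology_edges.get(node, []):
            if nxt == target:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return float("inf")

def match_cost(query_to_match):
    """Sum of per-node ontology distances; smaller means semantically closer."""
    return sum(ontology_distance(q, m) for q, m in query_to_match.items())

print(match_cost({"doctor": "primary care provider", "eye tumor": "choroid neoplasm"}))  # 4
```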

24 A framework based on query rewriting
Pipeline: query + ontology → query rewriting → evaluation over the database → ranked query results
The idea is simple, but finding the matches is not trivial
A single graph query may have an exponential number of possible interpretations (one per combination of interpretations of its nodes), and each rewritten query may in turn have an exponential number of matches
This is clearly not practical for large graphs

25 Direct querying
Idea: bypass query interpretation altogether
Offline, construct an ontology index from the ontology and the data graph
Online, use the index to extract a small subgraph that contains all possible matches (filtering), then directly evaluate the query on this subgraph (verification) to produce ranked results
How? See the next slides

26 Ontology-based Indexing
Idea: summarize the data graph with ontologies
Ontology index: a set of concept graphs
The ontology is first partitioned: several entities are selected as concepts, and the remaining entities are grouped with the concept they are most closely related to
Each partition induces a summary of the data graph in terms of its concepts, called a concept graph
The ontology index (the set of concept graphs) is computed once for all queries
Example: ontology terms such as doctor, nurse practitioner, physician, primary care provider and eye tumor, eye neoplasm, choroid neoplasm summarize data-graph nodes such as patient 1, patient 2, patient 3 and their primary care provider

27 Ontology-based Subgraph Matching
Idea: filtering (with concept graphs) + verification (on the extracted view graph)
Filtering: the query is matched against each concept graph in the ontology index (concept graph 1, 2, ..., n), and the concept-level results are intersected to obtain candidate matches
Verification: the candidates are verified against the extracted subgraph, and the verified matches are ranked (result 1, result 2, result 3, ...) to return the top answers

28 Ontology-based Subgraph Matching
Offline index construction: O(|E| log |V|) for data graph G(V, E)
Online query processing (top-K matches):
Concept-level matching: O(|Q||I|) for query Q and index I
Subgraph extraction: O(|Q||I|)
Verification: O(|Q||I| + |Gv|^|Q|), where Gv is the graph extracted from the concept-level matches
This worst case is acceptable in practice since Q is small and |Gv| << |G|
Machine learning techniques (discussed later) are used to further reduce the size of the index I

29 More than one way to pick a leaf…
Transformations between query labels and data-graph labels (transformation, category: example):
First/Last token, String: "Barack Obama" -> "Obama"
Abbreviation, String: "Jeffrey Jacob Abrams" -> "J. J. Abrams"
Prefix, String: "Doctor" -> "Dr"
Acronym, String: "Bank of America" -> "BOA"
Synonym, Semantic: "tumor" -> "neoplasm"
Ontology, Semantic: "teacher" -> "educator"
Range, Numeric: "1980" -> "~30"
Unit Conversion, Numeric: "3 mi" -> "4.8 km"
Distance, Topology: "Pine" - "M:I" -> "Pine" - "J.J. Abrams" - "M:I"
Going back to the example: a combination of transformations (e.g., abbreviation matching via "J. J. Abrams") leads to a top ranked answer; the question is how to integrate the transformations to automatically find such ranked answers
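An illustrative sketch of two of the string transformations in the table (abbreviation and acronym); the matching rules here are simplified assumptions, not the system's actual implementation:

```python
def abbreviation(name):
    """ "Jeffrey Jacob Abrams" -> "J. J. Abrams": initials for all but the last token."""
    tokens = name.split()
    return " ".join([t[0] + "." for t in tokens[:-1]] + tokens[-1:])

def acronym(name):
    """ "Bank of America" -> "BOA": first letter of each token."""
    return "".join(t[0].upper() for t in name.split())

def matches(query_label, data_label):
    """A data label matches a query label if some transformation maps one to the other."""
    candidates = {query_label, abbreviation(query_label), acronym(query_label)}
    return data_label in candidates

print(abbreviation("Jeffrey Jacob Abrams"))              # J. J. Abrams
print(acronym("Bank of America"))                        # BOA
print(matches("Jeffrey Jacob Abrams", "J. J. Abrams"))   # True
```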

30 Schema-less Querying
Users want to freely post queries without possessing any knowledge of the underlying data; the querying system should automatically find the matches through a set of transformations
Example query: Actor, ~30 yrs; M : I; UCB. A match: Chris Pine (1980); Mission: Impossible; University of California, Berkeley; J. J. Abrams
Acronym transformation matches 'UCB' to 'University of California, Berkeley'
Abbreviation transformation matches 'M : I' to 'Mission: Impossible'
Numeric transformation matches '~30' to '1980'
Structural transformation matches an edge to a path
Problem 1: too many candidates - how to find the best?
Problem 2: different transformations deserve different weights - how to determine them?
(The idea of IBM Watson's Jeopardy! system is very close)

31 5/21/2018 Ranking Function With a set of transformations , given a query Q and its match result R, our ranking model considers the node matching: from a query node v to its match the edge matching: from query edge e to its match Overall ranking model: Database: simple aggregation like sum; We leverage ML techniques to train the ranking model. To this end we provide a representation in terms of a likelihood function
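A hedged sketch of the idea above: score a match by aggregating per-node and per-edge transformation scores with learned weights, and turn scores into a likelihood over rival candidates. The transformation names, weights, and the softmax-style likelihood form are illustrative assumptions, not the actual trained model:

```python
import math

weights = {"exact": 2.0, "acronym": 1.2, "abbreviation": 1.0,
           "numeric": 0.8, "edge_to_path": 0.5}

def match_score(node_transformations, edge_transformations):
    """Lists of transformation names used to map query nodes/edges to the match."""
    total = sum(weights.get(t, 0.0) for t in node_transformations)
    total += sum(weights.get(t, 0.0) for t in edge_transformations)
    return total

def likelihood(score, rival_scores):
    """Softmax-style probability of this match among rival candidate matches."""
    all_scores = [score] + rival_scores
    z = sum(math.exp(s) for s in all_scores)
    return math.exp(score) / z

s = match_score(["acronym", "abbreviation", "numeric"], ["edge_to_path"])
print(s, likelihood(s, rival_scores=[1.0, 2.5]))
```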

32 Graph Querying: a Machine Learning Approach
A query can be interpreted with conditional random fields (CRFs), a common graphical model
The query nodes are the input variables, a possible assignment of matches is an output configuration, and the CRF defines their joint probability
A good match = an assignment with the highest probability under the CRF
Finding a good ranking function = learning a good CRF model
Example: the query (Actor, ~30 yrs; M : I; UCB) is matched to (Chris Pine (1980); Mission: Impossible; University of California, Berkeley; J. J. Abrams)

33 Parameter Learning
The parameters of the ranking model need to be determined appropriately
Classic DB/IR method: tuned manually by domain experts, but specific domain knowledge is not sufficient for big graph data
Supervised method: learning to rank, but user query logs are not easy to acquire at the beginning, and manually labeling the answers is not practical or scalable
Our unsupervised approach: automatically generate training data, so the system does not need a cold start (once the system has run for a while, supervised methods become applicable)

34 Automatically Generate Training Data
1. Sampling: a set of subgraphs is randomly extracted from the data graph
2. Query generation: queries are generated by randomly adding transformations to the extracted subgraphs (e.g., "Tom Cruise" -> "Tom")
3. Searching: the generated queries are run on the data graph
4. Labeling: the results are labeled based on the original subgraph
5. Training: the queries, with the labeled results, are used to estimate the parameters of the ranking model (L-BFGS [Liu89]), i.e., to learn parameters that rank the labeled good matches as high as possible
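An illustrative sketch of the unsupervised training-data generation loop above. The sampling, transformation, and search functions are toy placeholders standing in for the system's real components, and the string-based "subgraphs" are assumptions for illustration:

```python
import random

def generate_training_data(data_graph, sample_subgraph, add_transformation,
                           search, num_examples=100):
    training_set = []
    for _ in range(num_examples):
        subgraph = sample_subgraph(data_graph)           # 1. sampling
        query = add_transformation(subgraph)             # 2. query generation
        results = search(data_graph, query)              # 3. searching
        labeled = [(r, r == subgraph) for r in results]  # 4. labeling against the original
        training_set.append((query, labeled))
    return training_set  # 5. fed to the parameter estimator (e.g., L-BFGS)

# Toy placeholders so the sketch runs end to end:
toy_graph = ["Tom Cruise", "Tom Hanks", "Samuel"]
sample = lambda g: random.choice(g)
transform = lambda s: s.split()[0]                       # e.g. "Tom Cruise" -> "Tom"
search_fn = lambda g, q: [n for n in g if n.startswith(q)]
print(generate_training_data(toy_graph, sample, transform, search_fn, num_examples=2))
```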

35 A knowledge retrieval system over graphs
Example query: "find history of jaguar in America" - 14 types, 85 matches (SIGMOD 2014 demo, VLDB 2014)
Goal: access, search and explore big graphs without training

