1
Learning to Create Data-Integration Queries
Partha Pratim Talukdar, Marie Jacob, Muhammad Salman Mehmood, Koby Crammer, Zachary G. Ives, Fernando Pereira, Sudipto Guha (VLDB 2008)
Seminar presented by Noel Gunasekar, CSE Department, SUNY Buffalo
2
Learning to Create Data-Integration Queries
Introduction
Motivation
Example
Existing Solutions
Q-System Solution
Q-System Architecture
Query and Query Answers
Executing Query
Learning from Feedback
Conclusion
Experimental Results
Future Work
3
Introduction
4
Motivation
The need for non-expert users to pose queries across multiple data sources. Non-expert user: not familiar with query languages. Multiple sources: databases, data warehouses, virtual integrated schemas.
5
The Bio-Science Field
Many "standardized" databases with overlapping and cross-referenced information; each site is being independently extended, corrected, and analyzed; differing levels of data quality/confidence.
Protein databases:
PDB – Protein DataBase: information and service listings at Brookhaven National Laboratory [BNL]
PIR – Protein Identification Resource database at [JHU]
PRF – Protein Research Foundation database at GenomeNet
SwissProt – protein database at ExPASy [Switzerland]
6
Example
A life-sciences researcher querying across data sources such as genomics, disease studies, and pharmacology: "What are the proteins and genes associated with the disease Narcolepsy?"
http://www.expasy.org/uniprot/P04049
7
Existing Solution
Keyword-based queries on web forms: match the keywords with terms in the tuples and form the query by joining different databases using foreign keys. The cost of a query is fixed and does not accommodate the context of the query.
http://www.expasy.org/uniprot/P0C852
8
Proposed Solution – Q System
Automatically generate web forms for a given set of keywords. Pose queries across multiple data sources using the generated web form.
9
Proposed Solution – Q System
[Diagram: an author gives keywords (protein, gene, disease) to the Q System, which creates a reusable web form for querying; the author and other users then supply parameters through the web form and receive query results.]
10
Q System Architecture
11
Architecture of the Q System
Four components: Initial Schema Loader, Query Template Creation, Query Execution, Learning Through Feedback.
12
Architecture of the Q System
13
Initial Setup – Schema Loader
Input: a set of data sources, each with its own schema; foreign keys and links; schema mappings; record links.
Output: schema graph.
14
Initial Setup – Example Schema Graph
Node: databases and their attributes (e.g., the UniProt database, the Entrez GeneInfo db, a term).
Edge: a relation based on foreign keys/cross-references (e.g., UniProt to PIR).
Cost: reliability, completeness.
[Figure: small schema graph fragment with edge costs 0.07, 0.1, 0.04.]
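For illustration only (not part of the original slides), such a schema graph can be kept as a weighted graph; the relation names and cost values below are hypothetical stand-ins for the sources in the figure.

```python
# Sketch: a schema graph with per-edge costs (hypothetical names and values).
import networkx as nx

schema = nx.Graph()
# Nodes: relations/attributes exposed by the integrated sources.
schema.add_nodes_from(["UniProt", "PIR", "Entrez_GeneInfo", "Term", "Term2Term"])
# Edges: foreign keys / cross-references; lower cost = more reliable/complete.
schema.add_edge("UniProt", "PIR", cost=0.07)
schema.add_edge("UniProt", "Entrez_GeneInfo", cost=0.10)
schema.add_edge("Entrez_GeneInfo", "Term", cost=0.04)
schema.add_edge("Term", "Term2Term", cost=0.10)
```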
15
Query Template Creation
16
Query Template Creation – Example
Input: "protein", "plasma membrane", "gene", and "disease".
Output: [screenshot of the generated query template / web form]
17
Query Template Creation – Example
Find trees connecting the keyword ("red") nodes in the schema graph.
[Figure: schema graph over nodes a–f with edge costs 0.07, 0.1, 0.04, 0.1; query keywords a, e, f; two candidate trees: Q1 with rank 1 and cost 0.4, Q2 with rank 2 and cost 0.41.]
18
Query Formulation
Trees can easily be written as executable queries. Steiner tree → conjunctive query: a(x,y), b(y,z), d(z,w), e(w,u), f(w,v).
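As a toy illustration (not the Q System's actual query generator), a tree's edges can be rewritten as a SQL join; the column names below are hypothetical stand-ins for the shared variables of the conjunctive query.

```python
# Sketch: turn a Steiner tree's edges into a SQL join (hypothetical schema;
# each edge is assumed to carry the join columns of its foreign key).
def tree_to_sql(edges):
    # edges: list of (rel1, col1, rel2, col2) tuples
    relations = sorted({r for r1, _, r2, _ in edges for r in (r1, r2)})
    joins = [f"{r1}.{c1} = {r2}.{c2}" for r1, c1, r2, c2 in edges]
    return f"SELECT * FROM {', '.join(relations)} WHERE {' AND '.join(joins)}"

# Mirrors the tree behind a(x,y), b(y,z), d(z,w), e(w,u), f(w,v):
print(tree_to_sql([("a", "y", "b", "y"), ("b", "z", "d", "z"),
                   ("d", "w", "e", "w"), ("d", "w", "f", "w")]))
```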
19
View Refinement
20
Web-Form
21
Query Execution
22
Input: Web-Form
23
Output: Result Answers
Each answer is labeled with the queries that produced it (Q1, Q1,2, Q2); the system determines these "producer" queries using provenance.
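As a toy illustration (not ORCHESTRA's actual API), the association from answer tuples back to their producer queries can be thought of as a simple provenance map:

```python
# Sketch: track which queries produced each answer tuple (toy provenance only).
from collections import defaultdict

provenance = defaultdict(set)        # answer tuple -> set of producer query ids

def record(answer_tuple, query_id):
    provenance[answer_tuple].add(query_id)

record(("P04049", "Narcolepsy"), "Q1")
record(("P04049", "Narcolepsy"), "Q2")   # produced by both Q1 and Q2 -> "Q1,2"
record(("P0C852", "Narcolepsy"), "Q2")

print(provenance[("P04049", "Narcolepsy")])   # {'Q1', 'Q2'}
```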
24
Query Execution
Requires a query processing engine with support for querying remote data sources and for recording data provenance. Solution: ORCHESTRA (http://www.cis.upenn.edu/~zives/orchestra/).
25
Orchestra Project
The ORCHESTRA project focuses on the challenges of data-sharing scenarios in the sciences. Bioinformatics scenario: many "standardized" databases with overlapping information, similar but not identical data, and differing levels of data quality/confidence; each site is being independently extended, corrected, and analyzed. The ORCHESTRA collaborative data sharing system (CDSS) focuses on how to support reconciliation across different schemas, with disagreeing users.
26
Orchestra Project – Data Provenance
http://www.cis.upenn.edu/~zives/research/exchange.pdf
27
Learning through Feedback
28
Learning through Feedback
Input: ranked results with provenance (each answer labeled with its producer queries Q1, Q1,2, Q2).
29
Learning through Feedback
The user provides feedback on the ranked answers (each labeled with its producer queries Q1, Q1,2, Q2).
30
Query Formulation – Recap
Find trees connecting the keyword ("red") nodes in the schema graph.
[Figure, repeated from the earlier example: schema graph over nodes a–f with edge costs 0.07, 0.1, 0.04, 0.1; query keywords a, e, f; candidate trees Q1 (rank 1, cost 0.4) and Q2 (rank 2, cost 0.41).]
31
Learning through Feedback
Change the edge weights so that Q2 becomes "cheaper" than Q1.
[Figure: before the update, Q1 has rank 1 (cost 0.4) and Q2 has rank 2 (cost 0.41); after lowering one edge weight from 0.07 to 0.05, Q2 has rank 1 (cost 0.39) and Q1 has rank 2 (cost 0.4).]
32
Iteration!
33
Q-System: Challenges
Computation of ranked queries, which in turn produce ranked tuples: K-best Steiner tree generation.
Predicting new query rankings based on user feedback over tuples, and generalizing that feedback: learning.
Maintaining associations between tuples and queries: query answers with provenance.
Everything at interactive speed!
34
Cost of a Query
Query cost = sum of the edge costs in the tree. Edge cost = sum of the weights of the features defined over the edge. Features are properties of the edges (e.g., which nodes they connect), and each feature has a corresponding weight. Feature example: a feature f that is 1 if the edge connects the Term and Synonym tables (else 0), with associated weight w8.
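A minimal sketch of this cost model (feature ids and weight values are illustrative, not taken from the paper):

```python
# Sketch: an edge's cost is the sum of the weights of the features that fire
# on it; a query's (tree's) cost is the sum of its edge costs.
weights = {8: 0.06, 25: 0.01}                 # illustrative values for w8, w25

def edge_cost(active_features, weights):
    """Cost of one edge, given the ids of the features that are 1 on it."""
    return sum(weights.get(f, 0.0) for f in active_features)

def tree_cost(tree, weights):
    """Cost of a tree, given the set of active feature ids for each edge."""
    return sum(edge_cost(feats, weights) for feats in tree)

print(edge_cost({8, 25}, weights))            # 0.07 with these illustrative weights
```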
35
Steiner Trees: Finding Lowest-Cost Queries
A Steiner tree is a tree of minimal cost in a graph G that includes all of the required nodes S. The cost of a Steiner tree is the sum of the costs of the edges in the tree. The Steiner tree problem is a generalization of the Minimum Spanning Tree (MST) problem [equivalent when S = all vertices of G].
[Figure: schema graph over nodes a–f with edge costs 0.07, 0.1, 0.04, 0.1.]
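For intuition only, a Steiner tree over the required nodes can be computed with an off-the-shelf 2-approximation routine; this is a generic approximation, not the paper's algorithm, and the graph and edge costs below are illustrative rather than an exact reconstruction of the slide's figure.

```python
# Sketch: an (approximate) Steiner tree over terminals {a, e, f} using
# networkx's built-in 2-approximation (not the paper's exact ILP).
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from(
    [("a", "b", 0.1), ("b", "c", 0.07), ("b", "d", 0.1),
     ("d", "e", 0.04), ("d", "f", 0.1)], weight="cost")

T = steiner_tree(G, ["a", "e", "f"], weight="cost")
print(sorted(T.edges()), sum(d["cost"] for _, _, d in T.edges(data=True)))
```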
36
K-Best Steiner Tree Algorithms
Exact (practical for ~100 nodes and edges): an Integer Linear Program (ILP) formulation for finding the K best Steiner trees in a graph; the ILP uses ideas from multi-commodity network flows.
Approximate (for 100s+ nodes and edges): a novel Shortest Paths Complete Subgraph heuristic; significantly faster and, in practice, often gives the optimal solution.
37
Multi-Commodity Flow Problem
38
MIP for min-cost Steiner Tree
39
MIP for K min-cost Steiner Tree
40
Constraints
C1: flow of commodity k starts at the root r.
C2: flow of commodity k terminates at node k.
C3: conservation of flow at Steiner nodes.
C4: flow on an edge is allowed only if that edge is included (Yij = 1).
C5: non-negativity constraint.
C6: defines the values of Y.
C7: ensures no incoming active edge into the root.
C8: ensures that all nodes have at most one incoming active edge.
C9: flow of at least one commodity on all edges in I (edges required to be included).
C10: ensures no flow on edges in X (edges required to be excluded).
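The formulation itself appears only as an image in the slides; the following is a hedged LaTeX reconstruction of the standard single-source multi-commodity-flow MIP to which constraints C1–C6 correspond (C7–C10 add the root/in-degree conditions and the inclusion/exclusion sets used when enumerating the K best trees). The notation may differ slightly from the paper's.

```latex
% Sketch of the flow-based MIP for a min-cost Steiner tree rooted at r,
% with one unit commodity per terminal k; details may differ from the paper.
\begin{align*}
\min_{y,f}\;& \sum_{(i,j)\in E} c_{ij}\, y_{ij} \\
\text{s.t.}\;
& \sum_{j} f^{k}_{rj} - \sum_{j} f^{k}_{jr} = 1 && \forall k &\text{(C1)}\\
& \sum_{j} f^{k}_{jk} - \sum_{j} f^{k}_{kj} = 1 && \forall k &\text{(C2)}\\
& \sum_{j} f^{k}_{ij} - \sum_{j} f^{k}_{ji} = 0 && \forall k,\; i \neq r,k &\text{(C3)}\\
& f^{k}_{ij} \le y_{ij} && \forall k,\,(i,j) &\text{(C4)}\\
& f^{k}_{ij} \ge 0 && \forall k,\,(i,j) &\text{(C5)}\\
& y_{ij} \in \{0,1\} && \forall (i,j) &\text{(C6)}
\end{align*}
```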
41
Finding K-Best Steiner Trees
The 2 best Steiner trees connecting the terminal nodes.
[Figure: schema graph over nodes a–f with edge costs 0.07, 0.1, 0.04, 0.1; the rank-1 tree has cost 0.4 and the rank-2 tree has cost 0.41.]
42
K-Best Steiner Tree Algorithms
Approximate (for 100s+ nodes and edges): the novel Shortest Paths Complete Subgraph heuristic. It builds a much smaller graph from the shortest paths between each pair of nodes in the terminal set S and sends this shortest-path graph as input. Significantly faster; in practice, it often gives the optimal solution.
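A rough sketch of this idea, under the assumption that the smaller problem is then solved on the graph formed by the pairwise shortest paths; the paper's exact procedure may differ in detail.

```python
# Sketch: build the shortest-paths subgraph over the terminal set S; the
# (much smaller) K-best Steiner problem is then solved on this subgraph.
import itertools
import networkx as nx

def shortest_paths_subgraph(G, terminals, weight="cost"):
    H = nx.Graph()
    for s, t in itertools.combinations(terminals, 2):
        path = nx.shortest_path(G, s, t, weight=weight)
        # copy every edge along the shortest path, with its attributes
        H.add_edges_from((u, v, G[u][v]) for u, v in zip(path, path[1:]))
    return H
```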
43
Q Challenge: Getting User Feedback
[Figure: the current top tree T with its query Q and tuples, versus the user-preferred tree T* with its query Q* and tuples, ranked from top to bottom.]
T and T* differ in 3 edges; this difference is termed the loss L(T, T*).
44
Learning: Update Weights – Edge Cost Update
[Example: an edge between Term2Term and Term(T1) has features with weights w8 = 0.06 and w25 = 0.01, giving edge cost 0.07; after the update, w8 = 0.04 and w25 = 0.01, so the edge cost drops to 0.05.]
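The paper's online learner is MIRA-based; the following is only a simplified perceptron-style sketch of the update direction: lower the weights of features on edges that appear only in the user-preferred tree T*, and raise those on edges that appear only in the current top tree T, so that T* becomes cheaper.

```python
# Simplified perceptron-style sketch of the feedback update (the actual
# learner is MIRA-style; this only illustrates the direction of the change).
def update_weights(weights, T, T_star, eta=0.02):
    # T and T_star map each edge to the set of feature ids active on it.
    loss = len(set(T) ^ set(T_star))   # L(T, T*): number of differing edges
    for edge, feats in T_star.items():
        if edge not in T:              # edge only in the user-preferred tree
            for f in feats:
                weights[f] = weights.get(f, 0.0) - eta
    for edge, feats in T.items():
        if edge not in T_star:         # edge only in the current top tree
            for f in feats:
                weights[f] = weights.get(f, 0.0) + eta
    return weights, loss

# With eta = 0.02, lowering w8 from 0.06 to 0.04 drops the example edge's
# cost from 0.07 to 0.05, as on this slide.
```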
45
Re-ranked Steiner Trees after the Weight Update
[Figure: the schema graph before (edge costs 0.07, 0.1, 0.04, 0.1) and after (0.05, 0.1, 0.04, 0.1) the update, with the resulting rank-1 and rank-2 Steiner trees.]
46
Experimental Results – The Key Questions
Can the algorithm start with uninitialized weights and learn the expert ("gold standard") ranking of queries?
Can the results be generated at interactive speeds?
Does the approach scale to larger graphs?
47
Results: Learning Expert Weights
Graph: start with the BioGuide bio sources, with 28 vertices and 96 edges.
Goal: learn the queries corresponding to the expert-set weights in BioGuide.
Methodology: all weights are set to default; over a sequence of 25 searches, user feedback identifies and promotes a tuple from the gold-standard answer for each search.
After feedback on 40–60% of the searches, the system finds the top query immediately; for each individual search, a single piece of feedback is enough to learn the top query.
[Chart: number of gold queries absent from the top-3 predictions, over the sequence of searches.]
48
Results: Time to Generate the K Best Queries
K    Time (s)
1    0.11
5    2.00
10   4.02
20   8.42
Schema graph of size (28 vertices, 96 edges) from BioGuide (Boulakia et al., 2007). It is possible to generate the top query in under 1 second and the top 5 queries in about 2 seconds, all within interactive range. Query execution is pipelined.
49
Results: Scalability to Larger Graphs
K    Speedup    Error
1    12         0
2    14.6       0
3    20.3       0
5    72.4       0
Larger schema graph of size (408 vertices, 1366 edges) from real sources: GUS, GO, BioSQL. It is possible to do K-best inference in larger graphs quickly and with little or no loss (none in this case).
50
Experimental Results – "Gold Standard"
Uses BioGuide, a biomedical information integration system. BioGuide generates the schema graph based on keywords, and the edge costs in the schema graph are manually assigned by experts. This expert-weighted schema graph is called the "gold standard". The experiment compares the results produced by the Q System with the results produced by the gold standard.
51
Learning Against Expert Costs
Started with an expert query template: "What are the related proteins in [DB1] and genes in [DB2] associated with the disease Narcolepsy in [DB3]?" By instantiating the template, 25 queries were formed. For each query, the lowest-cost Steiner tree is computed; the "gold standard" is used as feedback, and learning is performed.
52
Future Work
Work on other approximation algorithms for computing Steiner trees. Evaluation against real biological applications. Incorporating data-level keyword matches.