Slide 1
TopX @ INEX ’05

Martin Theobald, Ralf Schenkel, Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken
Slide 2
An Efficient and Versatile Query Engine for TopX Search

Example NEXI query:

//article[ //sec[about(.//, "XML retrieval")]
           //par[about(.//, "native XML database")] ]
//bib[about(.//item, "W3C")]

[Figure: two example article trees matched approximately by this query. The first article contains a sec titled "Current Approaches to XML Data Management" with pars on "Native XML databases" and "XML-QL: A Query Language for XML", plus a bib item "Proc. Query Languages Workshop, W3C, 1998". The second article contains secs titled "The XML Files", "The Ontology Game", and "The Dirty Little Secret" ("What does XML add for retrieval? It adds formal ways …"), and a bib item with the URL w3c.org/xml.]
Slide 3
TopX: Efficient XML-IR [VLDB ’05]

- Extends top-k query processing algorithms for sorted lists [Buckley ’85; Güntzer, Balke & Kießling ’00; Fagin ’01] to XML data
- Non-schematic, heterogeneous data sources
- Combined inverted index for content & structure
- Avoids full index scans; postpones expensive random accesses to large disk-resident data structures
- Exploits cheap disk space for redundant indexing

Goal: efficiently retrieve the best results of a similarity query.
Slide 4
XML-IR: History and Related Work

[Timeline figure, ca. 1995–2005, grouped as on the slide:]

- Web query languages: W3QS (Technion Haifa), WebSQL (U Toronto), Lorel (Stanford U), Araneus (U Roma)
- XML query languages: XML-QL (AT&T Labs), XPath 1.0 (W3C), XPath 2.0 (W3C), XQuery (W3C), TeXQuery (AT&T Labs), NEXI (INEX benchmark), XPath & XQuery Full-Text (W3C)
- IR on structured docs (SGML): HySpirit (U Dortmund), HyperStorM (GMD Darmstadt), WHIRL (CMU), OED etc. (U Waterloo)
- IR on XML: XIRQL (U Dortmund / Essen), XXL & TopX (U Saarland / MPII), ApproXQL (U Berlin / U Munich), ELIXIR (U Dublin), JuruXML (IBM Haifa), XSearch (Hebrew U), Timber (U Michigan), XRank & Quark (Cornell U), FleXPath (AT&T Labs), XKeyword (UCSD)
- Commercial software: MarkLogic, Verity?, IBM?, Oracle?, ...
Slide 5
Computational Model

Precomputed content scores
- score(t_i, e), e.g., from term/element frequencies or probabilistic models (Okapi BM25)
- Typically normalized to score(t_i, e) ∈ [0, 1]

Monotone score aggregation
- aggr: D_1 × … × D_m → ℝ⁺, e.g., sum, max, product (using log), cosine (using the L2 norm)

Structural query conditions
- Complex query DAGs
- A constant score c is aggregated for each matched structural condition (edge)

Similarity queries (aka "andish" queries)
- Non-conjunctive query evaluation; weak content matches can be compensated
- Vague structural matches

Access model (see the sketch below)
- Disk-resident inverted index
- Inexpensive sequential accesses (SA) to inverted lists: getNextItem()
- More expensive random accesses (RA): getItemBy(Id)
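The two access primitives can be pictured as a small interface. The following is a minimal, illustrative Python sketch only (method names adapted from the slide; an in-memory list stands in for the disk-resident inverted list):

```python
from typing import List, NamedTuple, Optional

class IndexEntry(NamedTuple):
    eid: int      # element id
    docid: int    # document id
    score: float  # precomputed content score

class InvertedList:
    """Toy stand-in for one disk-resident inverted list."""

    def __init__(self, entries: List[IndexEntry]):
        # Kept in descending score order, as required for sorted access.
        self._entries = sorted(entries, key=lambda e: -e.score)
        self._pos = 0
        self._by_eid = {e.eid: e for e in entries}

    def get_next_item(self) -> Optional[IndexEntry]:
        """Inexpensive sequential access (SA): next entry in score order."""
        if self._pos == len(self._entries):
            return None
        entry = self._entries[self._pos]
        self._pos += 1
        return entry

    def get_item_by_id(self, eid: int) -> Optional[IndexEntry]:
        """More expensive random access (RA): score lookup for a known element."""
        return self._by_eid.get(eid)
```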
Slide 6
Data Model

- Simplified XML model, disregarding IDRef and XLink/XPointer
- Redundant full-contents: each element's full-content concatenates all terms in its subtree
- Per-element full-content term frequencies ftf(t_i, e)
- Pre/postorder labels for each tag-term pair (see the sketch below)

[Figure: an example article_1 with title "XML-IR", abstract "IR techniques for XML", and a sec titled "Clustering on XML" containing a par "Evaluation". The article's stemmed full-content is "xml ir ir technique xml clustering xml evaluation", so ftf("xml", article_1) = 3; nested elements carry their own full-contents, e.g., the sec's full-content "clustering xml evaluation", and every node is annotated with its pre/postorder labels.]
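The pre/postorder labels reduce ancestor/descendant tests to two integer comparisons. A small illustrative helper (not from the talk, but the standard test these labels support):

```python
from collections import namedtuple

Element = namedtuple("Element", ["eid", "docid", "pre", "post"])

def is_descendant(anc: Element, desc: Element) -> bool:
    """desc lies below anc iff it comes later in preorder but earlier in postorder."""
    return (anc.docid == desc.docid
            and anc.pre < desc.pre
            and desc.post < anc.post)

# Sample labels from the navigational index on slide 9 (document 2):
sec = Element(eid=46, docid=2, pre=2, post=15)
title = Element(eid=51, docid=2, pre=4, post=12)
assert is_descendant(sec, title)
```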
Slide 7
Full-Content Scoring Model

- Full-content scores are cast into an Okapi BM25 probabilistic model with element-specific parameterization (see the sketch below)
- The basic scoring idea stays within the IR-style family of TF·IDF ranking functions

Per-element statistics:

tag      N          avglength  k1    b
article  12,223     2,903      10.5  0.75
sec      96,709     413        10.5  0.75
par      1,024,907  32         10.5  0.75
fig      109,230    13         10.5  0.75

- An additional static score mass c is added for relaxable structural conditions
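The slide gives only the statistics; one standard way to instantiate an element-specific BM25 score from them is sketched below. The talk's exact normalization is defined in the VLDB ’05 paper, so treat the formula details here as an assumption:

```python
import math

# Per-tag statistics from the table above: element count N, average
# full-content length, and the BM25 parameters k1 and b.
STATS = {
    "article": dict(N=12_223,    avglen=2_903, k1=10.5, b=0.75),
    "sec":     dict(N=96_709,    avglen=413,   k1=10.5, b=0.75),
    "par":     dict(N=1_024_907, avglen=32,    k1=10.5, b=0.75),
    "fig":     dict(N=109_230,   avglen=13,    k1=10.5, b=0.75),
}

def bm25_fc_score(tag: str, ftf: int, ef: int, length: int) -> float:
    """BM25 full-content score of one term for one element with the given tag.

    ftf    -- full-content term frequency of the term in the element
    ef     -- number of elements with this tag that contain the term
    length -- full-content length of the element
    """
    s = STATS[tag]
    K = s["k1"] * ((1 - s["b"]) + s["b"] * length / s["avglen"])
    tf_part = (s["k1"] + 1) * ftf / (K + ftf)
    idf_part = math.log((s["N"] - ef + 0.5) / (ef + 0.5))
    return tf_part * idf_part
```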
Slide 8
Inverted Block-Index for Content & Structure

- Inverted index over tag-term pairs (full-contents)
- Benefits from the increased selectivity of combined tag-term pairs
- Accelerates the child-or-descendant axis, e.g., sec//"clustering"
- Stored as inverted files or database tables (B+-tree indexes)

Sequential block-scans:
- Elements are re-ordered in descending order of (maxscore, docid, score) per list (see the sketch below)
- All tag-term pairs per doc are fetched in one sequential block-access
- docid limits the range of the in-memory structural joins

sec[clustering]:
eid  docid  score  pre  post  max-score
46   2      0.9    2    15    0.9
9    2      0.5    10   8     0.9
171  5      0.85   1    20    0.85
84   3      0.1    1    12    0.1

title[xml]:
eid  docid  score  pre  post  max-score
216  17     0.9    2    15    0.9
72   3      0.8    10   8     0.8
51   2      0.5    4    12    0.5
671  31     0.4    12   23    0.4

par[evaluation]:
eid  docid  score  pre  post  max-score
3    1      1.0    1    21    1.0
28   2      0.8    8    14    0.8
182  5      0.75   3    7     0.75
96   4      0.75   6    4     0.75
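A minimal sketch of that (maxscore, docid, score) block ordering, using the sec[clustering] entries above; the real index keeps this order materialized on disk rather than sorting at query time:

```python
# (eid, docid, score) entries of the sec[clustering] list from the tables above.
entries = [(46, 2, 0.90), (9, 2, 0.50), (171, 5, 0.85), (84, 3, 0.10)]

# Group entries into per-document blocks and attach each block's max-score.
by_doc = {}
for eid, docid, score in entries:
    by_doc.setdefault(docid, []).append((eid, score))

blocks = [(max(s for _, s in es), docid, sorted(es, key=lambda e: -e[1]))
          for docid, es in by_doc.items()]

# Block-scan order: descending max-score, then docid; within a block,
# descending score. One block = one sequential block-access.
for maxscore, docid, es in sorted(blocks, key=lambda b: (-b[0], b[1])):
    print(f"doc {docid} (max-score {maxscore}): {es}")
# doc 2 (0.9) -> [(46, 0.9), (9, 0.5)]; doc 5 (0.85) -> [(171, 0.85)];
# doc 3 (0.1) -> [(84, 0.1)]
```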
Slide 9
Navigational Index

- Additional navigational index: a non-redundant element directory
- Supports element paths and branching path queries
- Random accesses use (docid, tag) as the key (see the sketch below)
- Schema-oblivious indexing & querying

Entries (eid, docid, pre, post):

sec:   (46, 2, 2, 15), (9, 2, 10, 8), (171, 5, 1, 20), (84, 3, 1, 12)
title: (216, 17, 2, 15), (72, 3, 10, 8), (51, 2, 4, 12), (671, 31, 12, 23)
par:   (3, 1, 1, 21), (28, 2, 8, 14), (182, 5, 3, 7), (96, 4, 6, 4)
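A hedged sketch of such a (docid, tag) lookup, filled with a few of the sample entries above (in the real system a database B+-tree plays this role):

```python
# (docid, tag) -> [(eid, pre, post), ...], built from the entries above.
nav_index = {
    (2, "sec"):    [(46, 2, 15), (9, 10, 8)],
    (5, "sec"):    [(171, 1, 20)],
    (2, "title"):  [(51, 4, 12)],
    (17, "title"): [(216, 2, 15)],
    (1, "par"):    [(3, 1, 21)],
}

def elements_of(docid: int, tag: str):
    """Random access: all (eid, pre, post) labels of `tag` elements in a document."""
    return nav_index.get((docid, tag), [])

# E.g., testing a structural condition for a candidate in document 2:
print(elements_of(2, "title"))  # [(51, 4, 12)]
```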
Slide 10
TopX Query Processing

Adapts the Threshold Algorithm (TA) paradigm [Buckley & Lewit, SIGIR ’85; Güntzer et al., VLDB ’00; Fagin et al., PODS ’01]
- Focus on inexpensive sequential/sorted accesses; postpone expensive random accesses

Candidate d = connected sub-pattern with element ids and scores
- Path constraints are evaluated incrementally using pre/postorder labels
- In-memory structural joins (nested-loops, staircase, or holistic twig joins)

Upper/lower score guarantees per candidate (see the sketch below)
- Remember the set of evaluated dimensions E(d)
- worstscore(d) = ∑_{i ∈ E(d)} score(t_i, e)
- bestscore(d) = worstscore(d) + ∑_{i ∉ E(d)} high_i

Early threshold termination
- Candidate queuing
- Stop if bestscore(d) ≤ min-k for every remaining candidate d, where min-k is the worstscore of the current rank-k result

Extensions
- Batching of sorted accesses & efficient queue management
- Cost model for random-access scheduling
- Probabilistic candidate pruning for approximate top-k results [VLDB ’04]
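A compact, illustrative Python sketch of the score bookkeeping and the stopping test (names are my own; high[i] is the score of the last entry seen sequentially in list i, an upper bound on everything still unseen there):

```python
from typing import Dict, Iterable, List

def worstscore(evaluated: Dict[int, float]) -> float:
    """Sum of scores over the already evaluated dimensions E(d)."""
    return sum(evaluated.values())

def bestscore(evaluated: Dict[int, float], high: List[float]) -> float:
    """worstscore plus the most the unseen dimensions could still contribute."""
    unseen = sum(h for i, h in enumerate(high) if i not in evaluated)
    return worstscore(evaluated) + unseen

def can_stop(candidates: Iterable[Dict[int, float]],
             high: List[float], topk_worstscores: List[float], k: int) -> bool:
    """Terminate early once no queued candidate can still beat the rank-k result."""
    min_k = min(topk_worstscores) if len(topk_worstscores) >= k else 0.0
    return all(bestscore(c, high) <= min_k for c in candidates)
```

The next slide traces exactly this interval bookkeeping on three example lists: after doc 2's first sec match (score 0.9) with high = [0.9, 1.0, 1.0], its interval is [0.9, 2.9].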
Slide 11
TopX Query Processing by Example

Query: sec[clustering], title[xml], par[evaluation]; top-2 results requested.

Inverted lists (eid, docid, score, pre, post):

sec[clustering]:  (46, 2, 0.9, 2, 15), (9, 2, 0.5, 10, 8), (171, 5, 0.85, 1, 20), (84, 3, 0.1, 1, 12)
title[xml]:       (216, 17, 0.9, 2, 15), (72, 3, 0.8, 10, 8), (51, 2, 0.5, 4, 12), (671, 31, 0.4, 12, 23)
par[evaluation]:  (3, 1, 1.0, 1, 21), (28, 2, 0.8, 8, 14), (182, 5, 3, 7, 0.75 → score 0.75, pre 3, post 7), (96, 4, 0.75, 6, 4)

[Animated figure: the slide steps through round-robin sorted accesses over these three lists while maintaining a candidate queue. Each document's candidate starts as a pseudo-candidate with worstscore 0.0 and carries a [worstscore, bestscore] interval that tightens with every access; doc 2, for instance, evolves from [0.9, 2.9] after its first sec match to a final score of 2.2 once elements 46, 28, and 51 are joined. The min-2 threshold (worstscore of the current rank-2 result) rises 0.0 → 0.5 → 0.9 → 1.6, pruning every candidate whose bestscore falls below it. Final top-2 results: doc 2 (elements 46, 28, 51; score 2.2) and doc 5 (elements 171, 182; score 1.6).]
Slide 12
CO.Thorough

- Element granularity
- Turn the CO query into a pseudo-CAS query using "//*"
- No post-filtering on specific element types

Results: nxCG@10 = 0.0379 (rank 22 of 55); MAP = 0.008 (rank 37 of 55); old INEX_eval: MAP = 0.058 (rank 3)
Slide 13
COS.Fetch&Browse

- Document granularity: rank documents according to their best target element
- Strict evaluation of support & target elements
- Return all target elements per doc using the document score (no overlap)

Results: MAP = 0.0601 (rank 4 of 19)
Slide 14
SSCAS

- Element granularity with strict support & target elements (no overlap)

Results: nxCG@10 = 0.45 (ranks 1 & 2 of 25); MAP = 0.0322 & 0.0272 (ranks 1 & 6)
Slide 15
Top-k Efficiency

Method           k      ε    #SA        #RA      CPU sec  P@k   relPrec
TopX – MinProbe  10     0.0  635,507    64,807   0.03     –     –
TopX – BenProbe  10     0.0  723,169    84,424   0.07     –     –
TopX – BenProbe  1,000  0.0  1,902,427  882,929  0.35     0.03  1.00
StructIndex      10     n/a  761,970    325,068  0.37     0.09  0.17
StructIndex+     10     n/a  5,074,384  77,482   1.87     0.34  1.00
Join&Sort        10     n/a  9,122,318  –        0.26     –     –
Slide 16
Probabilistic Pruning

TopX – MinProbe, k = 10, with pruning threshold ε:

ε     #SA      #RA     CPU sec  P@10  relPrec
0.00  635,507  64,807  0.03     0.34  1.00
0.25  392,395  56,952  0.05     0.34  0.77
0.50  231,109  48,963  0.02     0.31  0.65
0.75  102,118  42,174  0.01     0.33  0.51
1.00  36,936   35,327  0.01     0.30  0.38

MAP@10 stays between 0.07 and 0.09 across these runs.
Slide 17
Conclusions & Ongoing Work

- Efficient and versatile TopX query processor: an extensible framework for text, semi-structured & structured data
- Probabilistic extensions: probabilistic cost model for random-access scheduling; very good precision/runtime ratio for probabilistic candidate pruning
- Full NEXI support: phrase matching, mandatory terms "+", negation "-", attributes "@"; query weights (e.g., relevance feedback, ontological similarities)
- Scalability: optimized for runtime, exploits cheap disk space (redundancy factor 4–5 for INEX); participated in the TREC Terabyte efficiency task
- Dynamic and self-tuning query expansions [SIGIR ’05]: incrementally merges inverted lists for a set of active expansions
- Vague Content & Structure (VCAS) queries (maybe next year...)