INEX ’05
An Efficient and Versatile Query Engine for TopX Search
Martin Theobald, Ralf Schenkel, Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken
Example NEXI query:
//article[ //sec[about(.//, “XML retrieval”)]
           //par[about(.//, “native XML database”)] ]
//bib[about(.//item, “W3C”)]
[Figure: two example article documents with sec, par, title, bib, and item elements whose text fragments match or miss the query's about conditions]
TopX: Efficient XML-IR [VLDB ’05]
- Extend top-k query processing algorithms for sorted lists [Buckley ’85; Güntzer, Balke & Kießling ’00; Fagin ’01] to XML data
- Non-schematic, heterogeneous data sources
- Combined inverted index for content & structure
- Avoid full index scans, postpone expensive random accesses to large disk-resident data structures
- Exploit cheap disk space for redundant indexing
- Goal: efficiently retrieve the best results of a similarity query
XML-IR: History and Related Work
- IR on structured docs (SGML): HySpirit (U Dortmund), HyperStorM (GMD Darmstadt), WHIRL (CMU), OED etc. (U Waterloo)
- IR on XML: XIRQL (U Dortmund / Essen), XXL & TopX (U Saarland / MPII), ApproXQL (U Berlin / U Munich), ELIXIR (U Dublin), JuruXML (IBM Haifa), XSearch (Hebrew U), Timber (U Michigan), XRank & Quark (Cornell U), FleXPath (AT&T Labs), XKeyword (UCSD)
- Commercial software: MarkLogic, Verity?, IBM?, Oracle?, ...
- XML query languages: XML-QL (AT&T Labs), XPath 1.0 (W3C), XPath 2.0 (W3C), XQuery (W3C), TeXQuery (AT&T Labs), XPath & XQuery Full-Text (W3C), NEXI (INEX Benchmark)
- Web query languages: Lorel (Stanford U), Araneus (U Roma), W3QS (Technion Haifa), WebSQL (U Toronto)
Computational Model
- Precomputed content scores score(t_i, e), e.g., from term/element frequencies or probabilistic models (Okapi BM25); typically normalized to score(t_i, e) ∈ [0,1]
- Monotonic score aggregation aggr: (D_1 × … × D_m) → ℝ⁺, e.g., sum, max, product (using log), cosine (using the L2 norm)
- Structural query conditions: complex query DAGs; aggregate a constant score c for each matched structural condition (edge)
- Similarity queries (a.k.a. “andish”): non-conjunctive query evaluation; weak content matches can be compensated; vague structural matches
- Access model: disk-resident inverted index; inexpensive sequential accesses (SA) to inverted lists: “getNextItem()”; more expensive random accesses (RA): “getItemBy(Id)” (sketched below)
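To make the access model concrete, here is a minimal in-memory sketch of an inverted list with cheap sequential accesses and more expensive random accesses, plus a monotonic sum aggregation. The class and method names (InvertedList, get_next_item, get_item_by_id) are illustrative counterparts of the slide's getNextItem()/getItemBy(Id), not TopX's actual interface.

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    eid: int       # element id
    docid: int     # document id
    score: float   # precomputed content score, normalized to [0,1]

class InvertedList:
    """One inverted list per query condition (e.g., a tag-term pair);
    entries are assumed to be pre-sorted for cheap sequential scans."""
    def __init__(self, entries):
        self.entries = list(entries)
        self.by_eid = {e.eid: e for e in self.entries}  # lookup table for random access
        self.pos = 0

    def get_next_item(self):
        """Inexpensive sequential access (SA): next entry in sort order."""
        if self.pos >= len(self.entries):
            return None
        item = self.entries[self.pos]
        self.pos += 1
        return item

    def get_item_by_id(self, eid):
        """More expensive random access (RA): the entry for a given element id."""
        return self.by_eid.get(eid)

def aggregate(partial_scores):
    """Monotonic score aggregation (summation): raising any partial score
    can never lower the aggregated score, which is what top-k pruning relies on."""
    return sum(partial_scores)
```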
Data Model
- Simplified XML model, disregarding IDRef & XLink/XPointer
- Redundant full-contents: each element's full-content is the concatenated text of its whole subtree
- Per-element term frequencies ftf(t_i, e) over full-contents
- Pre/postorder labels for each tag-term pair (see the sketch below)
[Figure: example article with title, abs, sec, title, and par elements; the article's full-content is “xml ir ir technique xml clustering xml evaluation”, so ftf(“xml”, article_1) = 3]
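The pre/postorder labels and the redundant full-content statistics can be sketched as follows; the Element class and the tokenization are illustrative, but the ancestor/descendant test via pre/post containment is the standard property these labels provide.

```python
import re
from collections import Counter

class Element:
    def __init__(self, tag, text="", children=None):
        self.tag = tag
        self.text = text
        self.children = children or []
        self.pre = self.post = None
        self.ftf = Counter()    # full-content term frequencies of the subtree

def label_and_collect(root):
    """Assign pre/postorder labels in one DFS pass and roll the full-content
    term frequencies ftf(t, e) up from the leaves (redundant full-contents)."""
    pre = post = 0
    def visit(e):
        nonlocal pre, post
        e.pre = pre; pre += 1
        e.ftf.update(re.findall(r"[a-z0-9]+", e.text.lower()))
        for child in e.children:
            visit(child)
            e.ftf.update(child.ftf)       # every ancestor sees the child's terms
        e.post = post; post += 1
    visit(root)

def is_descendant(anc, desc):
    """desc lies in anc's subtree iff anc.pre < desc.pre and anc.post > desc.post."""
    return anc.pre < desc.pre and anc.post > desc.post

# Toy document in the spirit of the slide's example: "xml" occurs three times
# in the article's full-content, so ftf("xml", article) == 3.
article = Element("article", children=[
    Element("title", "XML-IR"),
    Element("abs", "IR techniques for XML"),
    Element("sec", children=[Element("par", "Clustering on XML"),
                             Element("title", "Evaluation")]),
])
label_and_collect(article)
print(article.ftf["xml"], is_descendant(article, article.children[2].children[0]))
```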
Full-Content Scoring Model
- Full-content scores cast into an Okapi BM25 probabilistic model with element-specific parameterization (sketched below)
- Basic scoring idea stays within the IR-style family of TF·IDF ranking functions
- Additional static score mass c for relaxable structural conditions
[Table: per-element statistics (N, avg. length, k1, b) for the tags article, sec, par, and fig]
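A sketch of an element-specific Okapi BM25 score in the spirit of this slide, parameterized per tag with (N, avg. length, k1, b) as in the statistics table. The concrete numbers and the exact BM25 variant below are placeholders and assumptions, not the values or formula reported in the TopX paper.

```python
import math

# Hypothetical per-tag statistics, one row per element type as in the slide's
# table: N elements with this tag, average full-content length, k1, b.
# All concrete numbers are placeholders, not the INEX values.
TAG_STATS = {
    "article": {"N": 12_000,    "avglength": 2_500.0, "k1": 2.0, "b": 0.75},
    "sec":     {"N": 100_000,   "avglength":   400.0, "k1": 2.0, "b": 0.75},
    "par":     {"N": 1_000_000, "avglength":    30.0, "k1": 2.0, "b": 0.75},
}

def bm25_element_score(ftf, ef, element_length, tag):
    """BM25-style score for a (term, element) pair with tag-specific parameters:
    ftf = full-content term frequency of the term in the element,
    ef  = number of elements of this tag whose full-content contains the term."""
    s = TAG_STATS[tag]
    K = s["k1"] * ((1.0 - s["b"]) + s["b"] * element_length / s["avglength"])
    tf_part = (s["k1"] + 1.0) * ftf / (K + ftf)
    idf_part = math.log(1.0 + (s["N"] - ef + 0.5) / (ef + 0.5))
    return tf_part * idf_part

# e.g., a score for "xml" occurring 3 times in a sec element of 350 terms
print(bm25_element_score(ftf=3, ef=20_000, element_length=350, tag="sec"))
```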
Inverted Block-Index for Content & Structure
- Inverted index over tag-term pairs (full-contents)
- Benefits from the increased selectivity of combined tag-term pairs
- Accelerates the child-or-descendant axis, e.g., sec//“clustering”
- Sequential block-scans: re-order elements in descending order of (maxscore, docid, score) per list; fetch all tag-term pairs per doc in one sequential block access (see the sketch below)
- docid limits the range of in-memory structural joins
- Stored as inverted files or database tables (B+-tree indexes)
[Figure: inverted lists for sec[clustering], title[xml], and par[evaluation] with columns (eid, docid, score, pre, post, maxscore)]
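A sketch of the block ordering described above: a per-document maxscore is attached to every entry, the list is sorted by (maxscore, docid, score) descending so that each document's tag-term entries form one contiguous block, and a scan yields one document block at a time. Storage is simplified to an in-memory list; the column names follow the slide.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class TagTermEntry:
    eid: int         # element id
    docid: int       # document id
    score: float     # content score of this element for the tag-term pair
    pre: int         # preorder label
    post: int        # postorder label
    maxscore: float = 0.0   # best score of this tag-term pair in the document

def build_block_index(entries):
    """Attach the per-document maxscore and sort by (maxscore, docid, score)
    descending, so that all entries of one document are stored contiguously."""
    best = {}
    for e in entries:
        best[e.docid] = max(best.get(e.docid, 0.0), e.score)
    for e in entries:
        e.maxscore = best[e.docid]
    entries.sort(key=lambda e: (-e.maxscore, e.docid, -e.score))
    return entries

def block_scan(entries):
    """Sequential block scan: yield all tag-term entries of one document at a time."""
    for docid, block in groupby(entries, key=lambda e: e.docid):
        yield docid, list(block)
```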
Navigational Index
- Additional navigational index: a non-redundant element directory
- Supports element paths and branching path queries
- Random accesses using (docid, tag) as key (sketched below)
- Schema-oblivious indexing & querying
[Figure: navigational lists for sec, title, and par with columns (eid, docid, pre, post)]
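A minimal in-memory sketch of such an element directory keyed by (docid, tag), together with a helper that tests an unresolved descendant condition via pre/post containment; the names and dictionary-based storage are illustrative assumptions, not TopX's actual implementation.

```python
from collections import defaultdict

class NavigationalIndex:
    """Non-redundant element directory: (docid, tag) -> [(eid, pre, post), ...]."""
    def __init__(self):
        self.directory = defaultdict(list)

    def add(self, eid, docid, tag, pre, post):
        self.directory[(docid, tag)].append((eid, pre, post))

    def lookup(self, docid, tag):
        """Random access by (docid, tag): all elements with that tag in the document."""
        return self.directory.get((docid, tag), [])

def satisfies_descendant(index, docid, anc_pre, anc_post, desc_tag):
    """Check an unresolved anc//desc_tag condition inside one document via
    pre/post containment: anc.pre < d.pre and anc.post > d.post."""
    return any(anc_pre < pre and anc_post > post
               for _, pre, post in index.lookup(docid, desc_tag))
```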
TopX Query Processing
- Adapts the Threshold Algorithm (TA) paradigm [Buckley & Lewit, SIGIR ’85; Güntzer et al., VLDB ’00; Fagin et al., PODS ’01]
- Focus on inexpensive sequential/sorted accesses; postpone expensive random accesses
- Candidate d = connected sub-pattern with element ids and scores
- Incrementally evaluate path constraints using pre/postorder labels
- In-memory structural joins (nested-loop, staircase, or holistic twig joins)
- Upper/lower score guarantees per candidate: remember the set of evaluated dimensions E(d)
  worstscore(d) = Σ_{i ∈ E(d)} score(t_i, e_i)
  bestscore(d) = worstscore(d) + Σ_{i ∉ E(d)} high_i
- Early threshold termination with candidate queuing: stop if bestscore(d) ≤ min-k (the worstscore of the current rank-k result) for every remaining candidate d (see the sketch below)
- Extensions: batching of sorted accesses & efficient queue management; cost model for random-access scheduling; probabilistic candidate pruning for approximate top-k results [VLDB ’04]
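A condensed sketch of the bookkeeping behind this slide, assuming sorted accesses only and plain docid candidates (structural joins and random-access scheduling omitted): every candidate accumulates worstscore over its evaluated dimensions E(d), bestscore adds the current high_i of the lists it has not been seen in, and scanning stops once neither a queued nor an unseen candidate can still beat min-k. Function and variable names are illustrative, not TopX's actual API, and the final ranking uses the guaranteed worstscore part only.

```python
import heapq

def topx_sorted_access_only(lists, k):
    """Simplified TA-style top-k over m inverted lists of (docid, score) pairs,
    each sorted by score descending, scanned round-robin with sorted accesses only."""
    m = len(lists)
    high = [lst[0][1] if lst else 0.0 for lst in lists]   # current high_i per list
    pos = [0] * m
    evaluated = {}   # docid -> {i: score}, i.e. the evaluated dimensions E(d)

    def worstscore(d):
        return sum(evaluated[d].values())

    def bestscore(d):
        return worstscore(d) + sum(high[i] for i in range(m) if i not in evaluated[d])

    while any(pos[i] < len(lists[i]) for i in range(m)):
        for i in range(m):                                # one round of sorted accesses
            if pos[i] < len(lists[i]):
                docid, score = lists[i][pos[i]]
                pos[i] += 1
                high[i] = score                           # lists are sorted descending
                evaluated.setdefault(docid, {})[i] = score
        # min-k = worstscore of the current rank-k candidate
        topk = heapq.nlargest(k, evaluated, key=worstscore)
        min_k = worstscore(topk[-1]) if len(topk) == k else 0.0
        # early threshold termination: no queued candidate and no still-unseen
        # document (bestscore = sum of all high_i) can overtake the top-k
        queued_done = all(bestscore(d) <= min_k for d in evaluated if d not in topk)
        if len(topk) == k and queued_done and sum(high) <= min_k:
            break
    return sorted(evaluated, key=worstscore, reverse=True)[:k]
```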
TopX Query Processing by Example
[Animated example: round-robin sorted accesses on the inverted lists for sec[clustering], title[xml], and par[evaluation]; candidates such as doc 1, doc 2, doc 3, doc 5, and doc 17 are kept in a candidate queue with [worstscore, bestscore] bounds (plus a pseudo-candidate for unseen documents), the min-2 threshold grows from 0.0 over 0.5 and 0.9 to 1.6, and candidates whose bestscore falls below min-2 are pruned until only the top-2 results remain]
CO.Thorough
- Element granularity
- Turn the query into a pseudo-CAS query using “//*”
- No post-filtering on specific element types
- Rank 22 of 55; MAP: rank 37 of 55
- Old INEX_eval: MAP = 0.058 (rank 3)
COS.Fetch&Browse
- Document granularity: rank documents according to their best target element
- Strict evaluation of support & target elements
- Return all target elements per doc using the document score (no overlap)
- MAP: rank 4 of 19
SSCAS
- Element granularity with strict support & target elements (no overlap)
- 0.45 (ranks 1 & 2 of 25); MAP: ranks 1 & 6
Top-k Efficiency
[Table: number of sorted accesses (# SA), random accesses (# RA), CPU seconds, epsilon, and relative precision (relPrec) at k = 10 and k = 1,000, comparing TopX – MinProbe, TopX – BenProbe, StructIndex, and Join&Sort]
Probabilistic Pruning
[Table: # SA, # RA, CPU seconds, and relative precision (relPrec) for TopX – MinProbe under increasingly aggressive pruning (growing epsilon)]
Conclusions & Ongoing Work
- Efficient and versatile TopX query processor: extensible framework for text, semi-structured & structured data
- Probabilistic extensions: probabilistic cost model for random-access scheduling; very good precision/runtime ratio for probabilistic candidate pruning
- Full NEXI support: phrase matching, mandatory terms (“+”), negation (“-”), attributes; query weights (e.g., from relevance feedback or ontological similarities)
- Scalability: optimized for runtime, exploits cheap disk space (redundancy factor 4-5 for INEX); participated in the TREC Terabyte efficiency task
- Dynamic and self-tuning query expansion [SIGIR ’05]: incrementally merges inverted lists for a set of active expansions
- Vague Content & Structure (VCAS) queries (maybe next year...)