SemRank: Ranking Complex Relationship Search Results on the Semantic Web. Kemafor Anyanwu, Angela Maduko, Amit Sheth. LSDIS lab, University of Georgia.


1 SemRank: Ranking Complex Relationship Search Results on the Semantic Web. Kemafor Anyanwu, Angela Maduko, Amit Sheth. LSDIS lab, University of Georgia. Paper presentation at WWW2005, Chiba, Japan: Kemafor Anyanwu, Angela Maduko, and Amit Sheth. SemRank: Ranking Complex Relationship Search Results on the Semantic Web. Proceedings of the 14th International World Wide Web Conference (WWW2005), Chiba, Japan, May 10-14, 2005, pp. 117-127. This work is funded by NSF-ITR-IDM Award #0325464, 'SemDIS: Discovering Complex Relationships in the Semantic Web', and NSF-ITR-IDM Award #0219649, 'Semantic Association Identification and Knowledge Discovery for National Security Applications.'

2 Outline
–The Problem
–The SemRank relevance model
–SemRank computational issues in the SSARK system
–Evaluating SemRank: strategy and issues
–Related Work
–Conclusion and Future Work

3 The Problem
–[Anyanwu et al WWW2003] proposed a query operator for finding complex relationships between entities
–[Angles et al ESWC05] surveys graph-based query operations that should be enabled on the Semantic Web
Question: How can the results of relationship query operations be ranked?

4 The Relationship Ranking Problem. Given a query q = (1, 3) (a pair of nodes): 1. Find the subgraph that covers q. 2. List the results in order of relevance (this could be done with step 1 or as a separate step). [Figure: an example graph with nodes 1-8 and edge labels a-h, and the numbered result paths 1..2n it yields between nodes 1 and 3.]

5 Things to think about
–Relevance as best match vs. ????
–Homogeneous (hyperlink) vs. heterogeneous relationships
–Should relevance be fixed for all situations?
–The size of the result set is potentially large

6 This paper has relationships to
–Semantic searching
–Graph theory
–Database systems: path expression queries, ranked queries, query processing, join algorithms, indexing, etc.
–Data mining
–Linear algebra
But… are all these relationships equally relevant when presented to this audience?

7 The SemRank Model

8 SemRank’s Design Philosophy
–Tenet 1: Thou shall support variable rankings
–Tenet 2: Thou must not burden the user with complex query specification
–Tenet 3: Thou shall support mainstream search paradigms

9 SemRank’s Key Concepts
–Modulative ranking. Relevance = search mode + predictability
–Refraction count: how far does the result vary from what is expected from the schema?
–Information gain: how much information does a user gain by being informed about a result?
–S-Match: best semantic match with the user’s need (if provided)

10 [Figure: an adjustable search-mode slider. At one end: high information gain, high refraction count, high S-Match. At the other: low information gain, low refraction count, high S-Match.]

11 Modulative Rank Function. A typical preference or rank function has the form Rank_i = Σ_j w_ij · attr_ij, with fixed weights. What we want instead: given a weight-function parameter µ and attributes attr_1, attr_2, …, attr_k (e.g. length), select for each attribute an appropriate weight function from g_1, g_2, …, g_m (e.g. g_i(µ) = µ), where each g_i is some function of µ. Then Rank_i(µ) = Σ_k g_j(µ) · attr_ik, where g_j is the weight function selected for attr_k.
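The modulative rank function above can be sketched in a few lines. This is a hypothetical illustration: the function and variable names (`modulative_rank`, `weight_fns`) are not from SSARK, only the idea that each attribute's weight is itself a function of the search-mode parameter µ.

```python
# Hypothetical sketch of a modulative rank function: instead of fixed
# weights, the weight on each attribute is itself a function g_j of the
# search-mode parameter mu. All names here are illustrative.

def modulative_rank(mu, attrs, weight_fns):
    """attrs: attribute values attr_1..attr_k for one result;
    weight_fns: the selected weight function g_j for each attribute."""
    return sum(g(mu) * a for g, a in zip(weight_fns, attrs))

# One attribute favoured as mu -> 1 (discovery), the other as mu -> 0
# (conventional search).
fns = [lambda mu: mu, lambda mu: 1 - mu]
rank_discovery = modulative_rank(1.0, [3.0, 5.0], fns)     # 3.0
rank_conventional = modulative_rank(0.0, [3.0, 5.0], fns)  # 5.0
```

Varying µ smoothly re-weights the same attributes, which is what lets one rank function serve both conventional and discovery search modes.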

12 Refraction as a measure of predictability

13 Refraction. The path "enrolled_in → taught_by → married_to" doesn’t exist anywhere at the schema layer; we say that the path refracts at node 3. A high refraction count in a path implies low predictability. [Figure: schema Student –enrolled_in→ Course –taught_by→ Professor –married_to→ Spouse, and an instance path 1 –enrolled_in→ 2 –taught_by→ 3 –married_to→ 4.]

14 Semantic Summary. [Figure: an ontology with classes C1–C5 and properties p1–p5 as arcs, shown alongside its semantic summary, in which classes are merged into representative ontology classes, e.g. C1 with C3 and C2 with C4.]

15 Semantic Summary & Refraction. A semantic summary is a graph of representative ontology classes with the appropriate relations as arcs. For a path p = r_1, p_1, r_2, p_2, r_3, there is a refraction at r_2 if p_1 ∈ (ROC_i, ROC_j) but p_2 ∉ (ROC_j, ROC_k) (or vice versa), where ROC_i, ROC_j, ROC_k are the representative ontology classes of r_1, r_2, r_3 respectively.
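The refraction test above lends itself to a short sketch. This is a hedged illustration, not SSARK code: the dict-based summary encoding and all names (`refraction_count`, `roc_of`) are assumptions.

```python
# Hedged sketch of counting refractions along a path, given a semantic
# summary that maps a pair of representative ontology classes (ROCs) to
# the set of properties the schema allows between them. All names and
# the dict-based summary encoding are illustrative, not from SSARK.

def refraction_count(path_nodes, path_props, roc_of, summary):
    """path_nodes: [r1, ..., rn]; path_props: [p1, ..., p(n-1)];
    roc_of: node -> its representative ontology class;
    summary: dict (ROC_i, ROC_j) -> set of allowed properties."""
    count = 0
    for i in range(len(path_props) - 1):
        a = roc_of[path_nodes[i]]
        b = roc_of[path_nodes[i + 1]]
        c = roc_of[path_nodes[i + 2]]
        in_first = path_props[i] in summary.get((a, b), set())
        in_second = path_props[i + 1] in summary.get((b, c), set())
        if in_first != in_second:  # one hop fits the schema, the other doesn't
            count += 1
    return count
```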

16 Information content and Information gain

17 Measuring the Information Content of a Property. Information content is related to the uncertainty removed, and is typically measured as some function of probability: high probability → low information content. For p ∈ P, where P is the set of property types, the information content I_SP can be measured as I_SP(p_k) = log2(1 / Pr(p = p_k)) = −log2([[p_k]] / [[P]]). I_SP(p) reaches its maximum, log2 [[P]], when Pr_i = 1 / [[P]] (a uniform distribution).

18 Information Content of a Property Sequence – Global Perspective. The information content of a sequence of properties p_1 → p_2 → … → p_k is max(I_SP(p_i)), 1 ≤ i ≤ k. [Figure: a three-hop path p1 → p2 → p3 with probabilities high, low, high; the information content is determined by p_2, the weak point.]

19 Information Content – Local Perspective. A property can have high global information content but low local information content. Given a triple (a, p_1, b), compute information content with respect to only the valid possibilities between a and b: valid(p_1) = P′ = the properties in (ROC_i, ROC_j), where a ∈ ROC_i and b ∈ ROC_j, together with their superproperties. Recompute the probabilities based on P′ (local); the combined measure is I′ = min(NI(p_i)) + the average of the other NI values.

20 Total Information Content Total information content = Information content from global perspective + Information content from local perspective

21 S-Match Relevance Specification as keywords

22 [Figure: keywords such as published_in and located_in supplied as the relevance specification.]

23 S-Match. Uses the "best semantic match" paradigm. For a keyword k_i and a property p_j on a path: SemMatch(k_i, p_j) = (2^d)^−1, with 0 < SemMatch(k_i, p_j) ≤ 1, where d is the minimum distance between the properties in a property hierarchy. For a path ps, its S-Match value is the sum over keywords of max_j(SemMatch(k_i, p_j)).
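The S-Match computation above can be sketched as follows. The `distance` callable (minimum distance between two properties in the property hierarchy, as PHIX provides) is an assumed helper; the function names are illustrative.

```python
# Sketch of S-Match under the "best semantic match" paradigm: each
# keyword is matched to its closest property on the path, and the
# per-pair score is SemMatch = 1 / 2^d. The `distance` callable (minimum
# distance in the property hierarchy) is an assumed helper.

def sem_match(keyword, prop, distance):
    return 1.0 / (2 ** distance(keyword, prop))

def s_match(keywords, path_props, distance):
    """Sum over keywords of the best SemMatch against any path property."""
    return sum(max(sem_match(k, p, distance) for p in path_props)
               for k in keywords)
```

Note that an exact match (d = 0) gives SemMatch = 1, and the score halves with every extra step in the hierarchy.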

24 Putting it all together …

25 SemRank. For a search mode µ and a path ps: modulated information gain I_µ(ps) = (1 − µ) · (I(ps))^−1 + µ · I(ps); modulated refraction count RC_µ(ps) = µ · RC(ps); SEMRANK(ps) = I_µ(ps) × (1 + RC_µ(ps)) × (1 + S-Match(ps)).
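The combined formula can be written out directly. This is a hedged reading of the slide: at µ = 0 the reciprocal term rewards predictable (low-information) paths and the refraction count is switched off; at µ = 1 high information content and high refraction count are rewarded.

```python
# A hedged reading of the SEMRANK formula above: mu = 0 rewards highly
# predictable (low information) paths, mu = 1 rewards surprising ones.

def semrank(mu, I, RC, smatch):
    """I: total information content; RC: refraction count;
    smatch: S-Match value; mu in [0, 1] is the search mode."""
    I_mu = (1 - mu) / I + mu * I      # modulated information gain
    RC_mu = mu * RC                   # modulated refraction count
    return I_mu * (1 + RC_mu) * (1 + smatch)
```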

26 Computing SemRank in SSARK

27 The SSARK system. [Architecture figure.] The user subsystem (query & result interface) feeds a query processor; a preprocessor loads RDF documents. Components: Ranking Engine (pipelined top-k results), Index Manager (FDIX, PHIX, ROIX), Storage Manager (LtStore, UtStore), Loader, LAC (Look-Ahead Cache), RC (Result Cache). Phases: preprocessing, query processing, ranking.

28 Approach. [Figure: an example graph and the operator tree the query processor builds over its edges.] The query processor assembles candidate paths; the ranking engine assigns SemRank* values to the leaves of the tree, i.e. to the edges on the path (* – without the refraction count).

29 The Index Subsystem
–FDIX (Frequency Distribution IndeX): stores the frequency distribution of properties
–ROIX (Representative Ontology IndeX): maps classes to representative ontology classes; stores the semantic summary graph
–PHIX (Property Hierarchy IndeX): uses the Dewey Decimal labeling scheme to encode the hierarchical relationships in a property hierarchy; used for computing S-Match (the match between keywords and properties in a path)

30 Index Subsystem contd. PHIX example: given Dewey labels 1, 1.1, 1.2, 1.2.1, {1.2.2, 2.1}, 2, if the keyword is labeled 1 and the property in the path is labeled 1.2.1, then the distance d = 2 and S-Match = 1/2².
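The distance computation behind the PHIX example can be sketched from the Dewey labels alone: count the edges from each label up to the longest common prefix. The function name and string encoding are assumptions for illustration.

```python
# Sketch of the PHIX distance: with Dewey labels, the hierarchy distance
# between two properties is the number of edges from each label up to
# their longest common prefix. Label strings follow the slide's example.

def dewey_distance(label_a, label_b):
    a, b = label_a.split("."), label_b.split(".")
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return (len(a) - common) + (len(b) - common)

# Slide example: keyword at "1", path property at "1.2.1"
# -> d = 2, so S-Match = 1 / 2**2
```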

31   ∙ ∙ a, 3b, 2  ∙ c, 4d, 1e, 2f, 5  h, 1i, 6 g, 3   ∙ ∙ a, 3b, 2  ∙ c, 4d, 1e, 2f, 5  h, 1i, 6 g, 3 h, 1 i, 6, e, 2 f, 5, d, 1 c, 4, a  b, 5 g .i, 9, h, 1 i, 6, i, 6 ∙   ∙ a, 3b, 2  ∙ c, 4d, 1e, 2f, 5  h, 1 g, 3 g  h, 4 d, 1 a  b, 5 c  f, 9, c  e, 6 e, 2 f, 5, c  4, c.f, 9, h, 1 i, 6, a  b, 5 ∙   ∙ a, 3b, 2  ∙ c, 4d, 1e, 2f, 5  h, 1i, 6 g, 3 g  h, 4 g.i, 9 d1d1 c  e, 6 e, 2 f, 5, c  4, g  i, 9, c  f, 9, c  e, 6... Top-K Evaluation Final Top_k: 1. g.i, 18 2. c. f, 9

32 Top-K Evaluation phase 2 – refraction count. The total refraction count for a path is not known until the whole path has been assembled at the root node, so it is not used in the first phase. In phase 2, we integrate the refraction count into the top-k results at the root node and rerank. The final ordering is not an exact SemRank ordering, but it is a reasonable tradeoff.
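The phase-2 rerank above can be sketched in a few lines. The (path, score, refraction-count) tuple layout and the function name are assumptions; the idea is only that phase-1 scores, computed without RC, are rescored at the root by folding RC back in.

```python
# Illustrative sketch of phase 2: phase-1 scores (computed without the
# refraction count) are rescored at the root by folding RC back in, then
# reordered. The (path, score, rc) tuple layout is an assumption.

def rerank_with_refraction(topk, mu):
    """topk: list of (path, score_without_rc, refraction_count) tuples."""
    rescored = [(path, score * (1 + mu * rc)) for path, score, rc in topk]
    return sorted(rescored, key=lambda t: t[1], reverse=True)
```

At µ = 0 the refraction term vanishes and the phase-1 order is kept, which matches the modulated RC_µ(ps) = µ·RC(ps) in the SemRank formula.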

33 Evaluation Issues. Data set needs: entities described with a variety of relationships; richly connected hierarchies; realistic frequency distributions. We synthetically generated a realistic small data set using human-defined rules, e.g. |(p = "audits")| ≤ 0.1 × |(p = "enrolls")|.

34 µ = 0

35 µ = 1

36 Related Work
–Semantic searching and ranking of entities on the Semantic Web: Rocha et al WWW2004, Guha et al WWW2003, Stojanovic et al ISWC2003, Zhuge et al WWW2003
–Semantic ranking of relationships: Halaschek et al VLDB demo 2004, Aleman-Meza et al SWDB03

37 Future Work
–Comprehensive evaluation
–Including measures for the importance of nodes in the paths
–Revising the modulation function
–Optimizing top-k evaluation: decreasing the height of the tree; estimation techniques for a closer approximation to the SemRank ordering

38 Data, demos, and more publications at the SemDis project web site (Google: semdis). Thank You!

