1 CIS607, Fall 2005 Semantic Information Integration Presentation by Amanda Hosler Week 6 (Nov. 2)

2 Questions from Homework 4
Some concepts:
– What is the frame-based model mentioned in the paper? (A small frame/slot sketch follows this slide.) – Shiwoong
– Is this very different from a scheduling problem, i.e., construct a dependency graph, non-deterministically choose a candidate action, apply the candidate, and evaluate the result? The comparison is between two ontologies; does the approach improve when merging N of them? – Zebin
– The medical vocabularies provide a rich field for data exchange. Why? – DongHwi
– What is the definition of a "slot," and what are "knowledge-based operations"? – Dayi
– Is there a difference between ontology merging and schema matching? – Dayi
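To make the "frame" and "slot" terminology concrete, here is a minimal, hypothetical Python sketch of a frame-based model: a frame is a named unit of knowledge whose slots hold attribute values or links to other frames. The class names and the medical-vocabulary example are illustrative assumptions, not PROMPT's or Protégé's actual data structures.

```python
# Hypothetical sketch of a frame-based model: each frame is a named unit of
# knowledge whose attributes ("slots") hold literal values or references to
# other frames. Names and structure are illustrative only.

class Frame:
    def __init__(self, name):
        self.name = name
        self.slots = {}          # slot name -> list of values (literals or Frames)

    def add_slot_value(self, slot, value):
        self.slots.setdefault(slot, []).append(value)

# Example: a small medical-vocabulary fragment expressed as frames and slots.
disease = Frame("Disease")
flu = Frame("Influenza")
flu.add_slot_value("is-a", disease)         # taxonomic slot linking to another frame
flu.add_slot_value("symptom", "fever")      # attribute slot with a literal value
flu.add_slot_value("symptom", "cough")

print(flu.name, flu.slots["symptom"])       # Influenza ['fever', 'cough']
```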

3 Questions from Homework 4
About the algorithms in PROMPT:
– Section 6.2 mentions that the user using PROMPT + Protege 2000 only had to perform 16 operations, versus the 60 operations that the user using vanilla Protege 2000 had to perform. However, wouldn't a more interesting metric be the time it took to complete the merging of the ontologies, rather than the number of operations performed? It is difficult to compare efficiency using the current metric, since it is unknown how long an "average" operation takes in either environment. In general, many algorithms and systems in AI are not evaluated quantitatively. Why is that? – Shiwoong
– How much overhead is there in building the class hierarchy in each ontology before the merge? – Zebin
– How does Protégé's component-based architecture allow the user to plug in any term-matching algorithm? (A sketch of such a plug-in interface follows this slide.) – Jiawei
– Is there a standard way to do ontology merging, so that we can compare with other approaches? – Dayi
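As a rough illustration of the component idea behind Jiawei's question, the sketch below assumes a simple interface-based plug-in model: the merge tool depends only on a TermMatcher interface, so any matching algorithm that implements it can be swapped in. This is not Protégé's real plug-in API (Protégé plug-ins are Java components); the names TermMatcher, ExactNameMatcher, PrefixMatcher, and suggest_merges are hypothetical.

```python
# Illustrative sketch of a pluggable term-matching interface. NOT Protege's
# actual plug-in API; it only shows the architectural idea that any matcher
# implementing the same interface can be plugged into the merge tool.

from abc import ABC, abstractmethod

class TermMatcher(ABC):
    @abstractmethod
    def similarity(self, term_a: str, term_b: str) -> float:
        """Return a score in [0, 1] for how likely two terms denote the same concept."""

class ExactNameMatcher(TermMatcher):
    def similarity(self, term_a, term_b):
        return 1.0 if term_a.lower() == term_b.lower() else 0.0

class PrefixMatcher(TermMatcher):
    def similarity(self, term_a, term_b):
        a, b = term_a.lower(), term_b.lower()
        return 1.0 if a.startswith(b) or b.startswith(a) else 0.0

def suggest_merges(terms_a, terms_b, matcher: TermMatcher, threshold=0.9):
    # The merge tool depends only on the TermMatcher interface, so lexical,
    # structural, or statistical matchers can all be plugged in unchanged.
    return [(a, b) for a in terms_a for b in terms_b
            if matcher.similarity(a, b) >= threshold]

print(suggest_merges(["Disease", "Symptom"], ["disease", "Finding"],
                     ExactNameMatcher()))   # [('Disease', 'disease')]
```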

4 Questions from Homework 4 (cont'd)
Other questions about this paper:
– The algorithm is evaluated based on what percentage of its suggestions were followed by human experts using the system. However, is it not possible that the users (at least in some cases) are biased towards following the suggestions given by the system? Also, the measure itself is similar to precision (the ratio of relevant records retrieved to the total number of records retrieved). When precision goes up, recall tends to go down (recall, in this case, would be the percentage of relevant suggestions made out of all relevant suggestions possible). Is it more important for a system to make as many relevant suggestions as possible, or to make a few suggestions that are mostly correct? (See the precision/recall sketch after this slide.) – Shiwoong
– How does the PROMPT algorithm, or rather the interaction with the user, end? Who decides the termination? – Jiawei
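To spell out the precision/recall trade-off in Shiwoong's question, here is a small sketch that computes both metrics for a set of merge suggestions. The sample lists are invented for illustration and are not data from the PROMPT evaluation.

```python
# Sketch of the precision/recall distinction applied to merge suggestions.
# The example data is made up for illustration only.

def precision_recall(suggested, relevant):
    suggested, relevant = set(suggested), set(relevant)
    followed = suggested & relevant                 # suggestions the expert accepted
    precision = len(followed) / len(suggested)      # accepted / all suggestions made
    recall = len(followed) / len(relevant)          # accepted / all merges that should be found
    return precision, recall

# Hypothetical example: the tool suggests 4 merges, 3 of which are correct,
# but 6 correct merges exist in total.
suggested = ["A=B", "C=D", "E=F", "G=H"]
relevant  = ["A=B", "C=D", "E=F", "I=J", "K=L", "M=N"]
print(precision_recall(suggested, relevant))        # (0.75, 0.5)
```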

5 Questions from Homework 4 (cont'd)
Other questions about this paper:
– Have they tested PROMPT on complex sources, and if so, what kind of results did they achieve? Would an ideal system be one that gets the same result as merging by hand, but with less workload? – Enrico
– It seems the "false negatives" rate is rather high. That is acceptable to me, since the user is heavily involved in all the choices made, but shouldn't we expect a system with something like a 90% success rate? – Enrico

