1 CIS607, Fall 2005: Semantic Information Integration
Presentation by Amanda Hosler, Week 6 (Nov. 2)

2 Questions from Homework 4
Some concepts:
– What is the frame-based model mentioned in the paper? – Shiwoong
– Is this very different from a scheduling problem? That is: construct a dependency graph, non-deterministically choose a candidate action, apply the candidate, and evaluate the result. The comparison here is between two ontologies; does the approach still improve if it is among N? – Zebin
– Why do the medical vocabularies provide such a rich field for data exchange? – DongHwi
– What is the definition of a "slot", and what are "knowledge-based operations"? – Dayi
– Is there a difference between ontology merging and schema matching? – Dayi
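The scheduling analogy in Zebin's question can be made concrete. Below is a minimal sketch, not PROMPT's actual code or API, of that kind of user-in-the-loop cycle: suggest candidate operations, let the user pick one (or stop), apply it, then re-evaluate conflicts and suggestions. All function and variable names are invented for illustration.

```python
# Minimal sketch of the user-in-the-loop merge cycle from the scheduling analogy.
# This is NOT PROMPT's implementation; all names here are made up for illustration.

def suggest_operations(classes_a, classes_b, merged):
    """Toy matcher: suggest merging classes whose names appear in both ontologies."""
    already_done = {name for (_, name) in merged}
    return [("merge-class", name)
            for name in sorted((set(classes_a) & set(classes_b)) - already_done)]

def find_conflicts(merged):
    """Placeholder: a real tool would flag dangling references, name clashes, etc."""
    return []

def interactive_merge(classes_a, classes_b, choose):
    merged = set()
    candidates = suggest_operations(classes_a, classes_b, merged)
    while candidates:
        operation = choose(candidates)      # the human picks the next action
        if operation is None:               # ...or decides to stop
            break
        merged.add(operation)               # apply the chosen operation
        # Re-evaluate, much like re-planning after scheduling one task:
        candidates = (suggest_operations(classes_a, classes_b, merged)
                      + find_conflicts(merged))
    return merged

# Example run: always follow the tool's first suggestion.
print(interactive_merge({"Person", "Paper"}, {"Person", "Article"}, lambda c: c[0]))
```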

3 Questions from Homework 4
About the algorithms in PROMPT:
– Section 6.2 mentions that the user using PROMPT + Protégé 2000 only had to perform 16 operations, versus the 60 operations that the user using vanilla Protégé 2000 had to perform. However, wouldn't a more interesting metric be the time it took to complete the merging of the ontologies, rather than the number of operations performed? It is difficult to compare efficiency using the current metric, since it is unknown how long an "average" operation takes in either environment. In general, many algorithms and systems in AI are not evaluated quantitatively. Why is that? – Shiwoong
– How much overhead is there in building the class hierarchy in each ontology before the merge? – Zebin
– How does Protégé's component-based architecture allow the user to plug in any term-matching algorithm? – Jiawei
– Is there a standard way to do ontology merging, so that we can compare with other approaches? – Dayi
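On Jiawei's plug-in question: the core idea of a component-based architecture is that the merge logic depends only on a matcher interface, so any term-matching algorithm can be swapped in without touching the rest of the tool. The sketch below is not Protégé's actual plug-in API, just a hypothetical illustration of that pattern.

```python
# Hypothetical illustration of a pluggable term-matcher interface.
# This is not Protégé's plug-in API; names and signatures are invented.

from difflib import SequenceMatcher
from typing import Callable

TermMatcher = Callable[[str, str], float]   # returns a similarity score in [0, 1]

def exact_matcher(a: str, b: str) -> float:
    return 1.0 if a.lower() == b.lower() else 0.0

def edit_distance_matcher(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def suggest_merges(terms_a, terms_b, matcher: TermMatcher, threshold: float = 0.8):
    """The merge logic is written against the interface, not a concrete algorithm."""
    suggestions = []
    for a in terms_a:
        for b in terms_b:
            score = matcher(a, b)
            if score >= threshold:
                suggestions.append((a, b, score))
    return suggestions

# Swapping algorithms is a one-argument change:
print(suggest_merges(["Author", "Paper"], ["Authors", "Article"], exact_matcher))
print(suggest_merges(["Author", "Paper"], ["Authors", "Article"], edit_distance_matcher))
```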

4 Questions from Homework 4 (cont'd)
Other questions about this paper:
– The algorithm is evaluated based on what percentage of its suggestions were followed by the human experts using the system. However, might the users (at least in some cases) be biased towards following the suggestions given by the system? Also, the measure itself is similar to precision (the ratio of relevant records retrieved to the total number of records retrieved). When precision goes up, recall tends to go down (recall, in this case, would be the percentage of relevant suggestions made out of all relevant suggestions possible). Is it more important for a system to make as many relevant suggestions as possible, or to make a few suggestions that are mostly correct? – Shiwoong
– How does the PROMPT algorithm, or rather the interaction with the user, end? Who decides on termination? – Jiawei
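Shiwoong's precision/recall trade-off can be made concrete with a small worked example; the counts below are invented for illustration and are not figures from the PROMPT paper.

```python
# Worked example of the precision/recall trade-off for merge suggestions,
# using made-up counts (not figures from the PROMPT paper).

def suggestion_precision(followed, suggested):
    """Share of the tool's suggestions that the expert actually followed."""
    return followed / suggested

def suggestion_recall(followed, relevant_total):
    """Share of all operations the expert needed that the tool suggested."""
    return followed / relevant_total

# A cautious tool: few suggestions, almost all correct.
print(suggestion_precision(18, 20), suggestion_recall(18, 60))   # 0.90, 0.30
# An eager tool: many suggestions, more noise, but better coverage.
print(suggestion_precision(45, 90), suggestion_recall(45, 60))   # 0.50, 0.75
```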

5 Questions from Homework 4 (cont'd)
Other questions about this paper:
– Have they tested PROMPT on complex sources, and if so, what kind of results did they achieve? An ideal system would produce the same result as merging by hand, but with less workload. – Enrico
– It seems the "false negatives" rate is rather high. That is acceptable to me, since the user is heavily involved in every choice made, but shouldn't we expect a system with something like a 90% success rate? – Enrico
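To make Enrico's false-negative point concrete: the rate of missed matches is simply 1 minus recall, and every miss is an operation the user must still find and perform by hand. The counts below are invented for illustration, not taken from the paper.

```python
# Enrico's "false negatives" concern with made-up counts (not numbers from the paper):
# every match the tool misses is one the user must add manually.

def false_negative_rate(missed, found):
    """Missed matches as a fraction of all true matches; equals 1 - recall."""
    return missed / (missed + found)

true_matches_found_by_tool = 35      # true positives
true_matches_missed_by_tool = 25     # false negatives

fnr = false_negative_rate(true_matches_missed_by_tool, true_matches_found_by_tool)
print(f"False-negative rate: {fnr:.2f}")                       # 0.42
print(f"Matches left to the user: {true_matches_missed_by_tool}")
```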