
Slide 1: Benchmarking the interoperability of ontology development tools
Raúl García-Castro, Asunción Gómez-Pérez
April 7th 2005

Slide 2: Table of Contents
– The interoperability problem
– Benchmarking framework
– Experiment to perform
– Participating in the benchmarking

Slide 3: The ontology development tool interoperability problem
The problem arises when ontologies are reused across tools.
[Figure: five ontology development tools (Tool 1 to Tool 5), contrasting the tools' potential functionalities with their real functionalities.]

Slide 4: The ontology development tool interoperability problem
Why is it difficult?
– Different KR formalisms: frames, description logic, conceptual graphs, first-order logic, semantic networks.
– Different modelling components inside the same KR formalism.
Some results:
– It is difficult to preserve the semantics and the intended meaning of the ontology.
– Interoperability decisions are taken at many different levels and are usually hidden in the programming code of ontology exporters/importers.
Reference: O. Corcho. A Layered Declarative Approach to Ontology Translation with Knowledge Preservation. Frontiers in Artificial Intelligence and Applications, Volume 116, January 2005.

Slide 5: Knowledge model comparison
[Figure: comparison of the knowledge models of RDF(S) and Protégé-2000, showing which elements each model covers.]
Elements compared include: classes, template slots/properties/instance attributes, instances, data types, subclass-of and subproperty-of; RDF(S) constructs such as literals, containers, collections and statements; and tool constructs such as own slots/class attributes, PAL constraints/WAB axioms, concept groups, disjoint decompositions, exhaustive decompositions, partitions, constants, relation properties, synonyms, abbreviations, bibliographic references and metaclasses.

Slide 6: Elements outside the RDF(S) knowledge model
Example: a Thesis concept with MSc Thesis and PhD Thesis as a disjoint decomposition of subclasses.
On export to RDF(S), a tool can:
– Not export the decomposition at all (total loss).
– Export plain subclass-of relations, dropping the disjointness (partial loss).
– Insert ad hoc RDF(S) code that encodes the disjointness.
On import, a tool that does not understand the ad hoc code does not import it, so the disjointness is lost anyway.
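The slides do not show the RDF(S) each option produces. The sketch below (Python with rdflib; the example.org namespace and class identifiers are invented for illustration) shows the partial-loss option, where only plain rdfs:subClassOf statements survive; plain RDF(S) has no construct for the disjointness itself.

    # A minimal sketch, assuming rdflib and an invented example.org namespace.
    # A "partial loss" export keeps the taxonomy but drops the disjointness,
    # because plain RDF(S) cannot express disjoint subclasses.
    import rdflib

    PARTIAL_LOSS_EXPORT = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/theses">
      <rdfs:Class rdf:ID="Thesis"/>
      <rdfs:Class rdf:ID="MScThesis">
        <rdfs:subClassOf rdf:resource="#Thesis"/>
      </rdfs:Class>
      <rdfs:Class rdf:ID="PhDThesis">
        <rdfs:subClassOf rdf:resource="#Thesis"/>
      </rdfs:Class>
    </rdf:RDF>"""

    g = rdflib.Graph().parse(data=PARTIAL_LOSS_EXPORT, format="xml")
    print(len(g))  # 5 triples: three rdf:type statements, two subclass-of

A tool choosing the ad hoc option would instead add further tool-specific triples to state the disjointness; a second tool that does not know that vocabulary ignores or rejects them on import, which is exactly the interoperability problem being benchmarked.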

Slide 7: Example: WebODE and Protégé-2000
[Figure: the same ontology fragment, "the distance from the hotel to a ski resort", modelled in WebODE, serialized as RDF(S), and loaded into Protégé-2000.]

Slide 8: Example: WebODE and Protégé-2000
[Figure: the same exchange between WebODE and Protégé-2000 through RDF(S), showing the serialization produced by Protégé-2000.]
Protégé-2000 generates ad hoc RDF(S) code.

Slide 9: Table of Contents
– The interoperability problem
– Benchmarking framework
– Experiment to perform
– Participating in the benchmarking

Slide 10: Benchmark and benchmarking
Benchmarking involves systematic evaluation, comparison with the best tools, and extraction of best practices.

              Benchmark           Benchmarking
    IS A      Test                Continuous process
    PURPOSE   Measure, evaluate   Search for best practices (measure, evaluate), improve
    TARGET    Method, system      Process, product, service

Slide 11: General framework for benchmarking
A benchmarking iteration comprises three phases, followed by a recalibration task:
– Plan phase: 1. Benchmarking goals identification; 2. Benchmarking subject identification; 3. Participant identification; 4. Benchmarking proposal writing; 5. Management involvement; 6. Benchmarking partner selection; 7. Benchmarking planning and resource allocation.
– Experiment phase: 8. Experiment definition; 9. Experiment execution; 10. Experiment results analysis.
– Improve phase: 11. Benchmarking report writing; 12. Benchmarking findings communication; 13. Improvement planning; 14. Improvement; 15. Monitor.
General evaluation criteria: interoperability, scalability, robustness.
Benchmark suites for: interoperability, scalability, robustness.
Benchmarking supporting tools: testing frameworks, workload generators, monitoring tools, statistical packages.
Reference: García-Castro, Maynard, Wache, Foxvog and González-Cabero. Knowledge Web Deliverable 2.1.4: Specification of a methodology, general criteria, and benchmark suites for benchmarking ontology tools. December 2004.

Slide 12: Plan phase
[Diagram: the plan-phase tasks with their inputs and outputs.]
– Benchmarking goals identification. Inputs: the need for benchmarking and the organisation's goals and strategies. Here: improve the interoperability of ontology development tools.
– Benchmarking subject identification. Here: RDF(S) import and export capabilities; identify the ontology components that are exported/imported.
– Participant identification: the organisation's tools and tools from outside the organisation.
– Benchmarking proposal writing. Output: a benchmarking proposal covering the benchmarking goals, benefits and costs; the benchmarking subject, tool functionalities and evaluation criteria; and the list of involved members and the benchmarking team.
– Management involvement. Output: management support.
– Benchmarking partner selection. Output: the benchmarking partners and an updated benchmarking proposal.
– Benchmarking planning and resource allocation. Input: the organisation's planning. Output: the benchmarking planning.

Slide 13: Experiment phase
– Experiment definition. Inputs: the benchmarking planning and the benchmarking proposal. Output: the experiment definition and experimentation planning.
– Experiment execution. Output: the experiment results.
– Experiment analysis. Output: the experiment report.
[Diagram: the experiment results as RDF(S) import and export benchmark suites, each a list of tests (test 1, test 2, test 3, ...) with an OK/NO outcome per tool.]

Slide 14: Improve phase
– Benchmarking report writing. Inputs: the updated benchmarking proposal and the experiment report. Output: the benchmarking report, which includes a comparative analysis, compliance with standards, weaknesses, recommendations on tools, and recommendations on practices.
– Benchmarking findings communication. Output: an updated benchmarking report.
– Improvement planning. Outputs: the necessary changes, the improvement planning, and an improvement forecast.
– Improvement. Input: organisation support. Output: the improved tool.
– Monitor. Output: the monitoring report.

Slide 15: Table of Contents
– The interoperability problem
– Benchmarking framework
– Experiment to perform
– Participating in the benchmarking

Slide 16: Benchmarking goals
– Goal 1: To assess and improve the interoperability of ontology development tools, using RDF(S) for ontology exchange.
– Goal 2: To identify the subset of RDF(S) elements that ontology development tools can use to interoperate correctly.
– Goal 3: As a next step, to do the same for OWL.

Slide 17: Experiment to perform
1. Export to RDF(S):
– Check if ontology development tools can export the core elements of RDF(S).
– Check if ontology development tools can export other elements of their knowledge models.
2. Import from RDF(S):
– Check if ontology development tools can import the core elements of RDF(S).
– Check if ontology development tools can import the non-core elements of RDF(S).
– Check if ontology development tools can import the other elements of the knowledge model that the tools exported to RDF(S).

Slide 18: Sample export benchmarks
The export benchmark suite contains core benchmarks, common to every tool, and extended benchmarks, particular to each tool.
Benchmark 3: Export concept.
NL description: Export an ontology containing 3 concepts and no further property or relation between them.
Graphical representation: Concept 1, Concept 2, Concept 3 (unrelated).
Expected result:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"> ...
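The body of the expected RDF/XML above is cut off after the namespace declarations. A plausible sketch under the stated definition, three classes and no relations, with the concept identifiers and example.org namespace invented:

    import rdflib

    # Hypothetical reconstruction of the benchmark 3 expected result.
    EXPECTED_BENCHMARK_3 = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/benchmark3">
      <rdfs:Class rdf:ID="Concept1"/>
      <rdfs:Class rdf:ID="Concept2"/>
      <rdfs:Class rdf:ID="Concept3"/>
    </rdf:RDF>"""

    expected = rdflib.Graph().parse(data=EXPECTED_BENCHMARK_3, format="xml")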

Slide 19: Sample export benchmarks
Benchmark 6: Export linear concept taxonomy.
NL description: Export an ontology containing 3 concepts with a subclass-of relation between each consecutive pair, forming a linear taxonomy.
Graphical representation: Concept 1, Concept 2, Concept 3 in a chain.
Expected result:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"> ...
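Again the expected RDF/XML is truncated. A sketch assuming the chain runs from Concept 1 at the top, so Concept 3 is a subclass of Concept 2 and Concept 2 of Concept 1; the direction is an assumption, as is the namespace:

    import rdflib

    # Hypothetical reconstruction of the benchmark 6 expected result.
    EXPECTED_BENCHMARK_6 = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/benchmark6">
      <rdfs:Class rdf:ID="Concept1"/>
      <rdfs:Class rdf:ID="Concept2">
        <rdfs:subClassOf rdf:resource="#Concept1"/>
      </rdfs:Class>
      <rdfs:Class rdf:ID="Concept3">
        <rdfs:subClassOf rdf:resource="#Concept2"/>
      </rdfs:Class>
    </rdf:RDF>"""

    expected = rdflib.Graph().parse(data=EXPECTED_BENCHMARK_6, format="xml")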

Slide 20: Sample export benchmarks
– Benchmark 7: Export a concept with multiple children. Export an ontology containing four concepts, three of them subclasses of the fourth.
– Benchmark 8: Export a concept with multiple parents. Export an ontology containing four concepts, one of them a subclass of the other three.
– Benchmark 9: Export two concepts subclass of each other. Export an ontology containing two concepts, each one a subclass of the other.
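Benchmark 9 is worth spelling out: a subclass cycle is legal RDF(S), since rdfs:subClassOf is reflexive and transitive and the two classes simply end up with the same extension, but many tool knowledge models forbid cycles in the taxonomy. A sketch of the expected graph, names invented:

    import rdflib

    # Hypothetical reconstruction of the benchmark 9 expected result:
    # two classes, each declared a subclass of the other.
    EXPECTED_BENCHMARK_9 = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/benchmark9">
      <rdfs:Class rdf:ID="Concept1">
        <rdfs:subClassOf rdf:resource="#Concept2"/>
      </rdfs:Class>
      <rdfs:Class rdf:ID="Concept2">
        <rdfs:subClassOf rdf:resource="#Concept1"/>
      </rdfs:Class>
    </rdf:RDF>"""

    expected = rdflib.Graph().parse(data=EXPECTED_BENCHMARK_9, format="xml")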

Slide 21: Export process
1. Load the ontology into the tool (e.g. an ontology with Concept 1 and Concept 2).
2. Export the ontology to an RDF(S) document.
3. Compare the exported document with the expected result: equal or not (YES/NO).
Steps can be manual or automatic.
Export strategy: minimal knowledge loss in exports.
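The comparison step could be automated along the following lines. This is a sketch assuming rdflib; graph isomorphism stands in for whatever equivalence criterion the benchmarking actually adopts, and the slide notes the steps may also be performed manually. The file names in the usage comment are hypothetical.

    import rdflib
    from rdflib.compare import isomorphic

    def check_export(exported_path: str, expected_path: str) -> bool:
        """Return True if the tool's RDF(S) export matches the expected graph."""
        exported = rdflib.Graph().parse(exported_path, format="xml")
        expected = rdflib.Graph().parse(expected_path, format="xml")
        # Compare at the graph level, not the file level: isomorphism ignores
        # triple order, namespace prefixes and blank-node labels, all of which
        # legitimately vary between exporters.
        return isomorphic(exported, expected)

    # Hypothetical usage over an export benchmark suite:
    # results = {name: check_export(f"{name}_out.rdf", f"{name}_expected.rdf")
    #            for name in ["benchmark1", "benchmark2", "benchmark3"]}

Comparing graphs rather than raw text matters here, because two exporters that emit the same triples in different serializations should both pass the benchmark.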

Slide 22: Export results
[Table: one row per export benchmark (Benchmark 1 to Benchmark 12, ...), one column per tool (WebODE, Protégé, Tool A, Tool B, Tool C, ...); each cell records YES or NO depending on whether the tool passes the benchmark.]

Slide 23: Sample import benchmarks
The import benchmark suite contains RDF(S) core benchmarks, common to every tool, and RDF(S) extended benchmarks, also common to every tool and derived from the export extended benchmarks.
Benchmark 2: Import class.
NL description: Import a graph containing 3 classes and no further property between them.
Graphical representation: Class 1, Class 2, Class 3 (unrelated).
RDF(S) source:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"> ...
Expected result: the classes Class 1, Class 2 and Class 3 in the tool's knowledge model.
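The RDF(S) source above is also truncated. A plausible reconstruction follows, shown in two equivalent RDF/XML spellings since an importer may encounter either; the class names and namespace are invented:

    import rdflib
    from rdflib.compare import isomorphic

    # The same three-class graph written two equivalent ways; a robust
    # importer has to accept both.
    WITH_ID = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/b2">
      <rdfs:Class rdf:ID="Class1"/>
      <rdfs:Class rdf:ID="Class2"/>
      <rdfs:Class rdf:ID="Class3"/>
    </rdf:RDF>"""

    WITH_ABOUT = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
      <rdfs:Class rdf:about="http://example.org/b2#Class1"/>
      <rdfs:Class rdf:about="http://example.org/b2#Class2"/>
      <rdfs:Class rdf:about="http://example.org/b2#Class3"/>
    </rdf:RDF>"""

    g1 = rdflib.Graph().parse(data=WITH_ID, format="xml")
    g2 = rdflib.Graph().parse(data=WITH_ABOUT, format="xml")
    assert isomorphic(g1, g2)  # identical triples, different surface syntax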

Slide 24: Sample import benchmarks
Benchmark 5: Import classes with a property.
NL description: Import a graph containing 2 classes and a property between them.
Graphical representation: Person and Book, connected by the is_author property.
RDF(S) source:
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"> ...
Expected result: the classes Person and Book with the is_author property between them.
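A plausible reconstruction of the truncated source, assuming is_author is declared with Person as domain and Book as range; the direction, identifiers and namespace are assumptions:

    import rdflib

    # Hypothetical reconstruction of the benchmark 5 RDF(S) source.
    SOURCE_BENCHMARK_5 = """<?xml version="1.0"?>
    <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
             xml:base="http://example.org/b5">
      <rdfs:Class rdf:ID="Person"/>
      <rdfs:Class rdf:ID="Book"/>
      <rdf:Property rdf:ID="is_author">
        <rdfs:domain rdf:resource="#Person"/>
        <rdfs:range rdf:resource="#Book"/>
      </rdf:Property>
    </rdf:RDF>"""

    source = rdflib.Graph().parse(data=SOURCE_BENCHMARK_5, format="xml")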

Slide 25: Import process
1. Load the RDF(S) graph (e.g. a graph with Class 1 and Class 2).
2. Import the graph into the tool.
3. Compare the imported ontology with the expected result: equal or not (YES/NO).
Steps can be manual or automatic.
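Unlike the export check, the import check compares against the tool's internal knowledge model rather than against an RDF file, so automating it needs a tool-specific adapter. A hypothetical sketch over a minimal abstraction, all names invented:

    def check_import(imported_classes, imported_subclass_pairs,
                     expected_classes, expected_subclass_pairs):
        """Compare the model a tool built on import against the expected one.

        The arguments are plain sets: class names, and (subclass, superclass)
        name pairs, extracted from the tool through its own API.
        """
        return (set(imported_classes) == set(expected_classes) and
                set(imported_subclass_pairs) == set(expected_subclass_pairs))

    # Hypothetical usage for the two-class graph in the import process above:
    ok = check_import({"Class1", "Class2"}, set(),
                      {"Class1", "Class2"}, set())
    print("OK" if ok else "NO")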

Slide 26: Import results
[Table: one row per import benchmark (Benchmark 1 to Benchmark 12, ...), one column per tool (WebODE, Protégé, Tool A, Tool B, Tool C, ...); each cell records YES or NO depending on whether the tool passes the benchmark.]

Slide 27: Table of Contents
– The interoperability problem
– Benchmarking framework
– Experiment to perform
– Participating in the benchmarking

Slide 28: Benchmarking benefits
For the participants:
– To know in detail the interoperability of their ODTs.
– To know the set of terms in which interoperability between their ODTs can be achieved.
– To show the rest of the world that their ODTs are able to interoperate and are among the best ODTs.
For the Semantic Web community:
– To obtain a significant improvement in the interoperability of ODTs.
– To know the best practices followed when implementing interoperability in ontology development tools.
– To obtain instruments to assess the interoperability of ODTs.
– To know the best-in-class ODTs regarding interoperability.

Slide 29: Participating in the benchmarking
Every organisation is invited to participate:
– If you are a developer, with your own tool.
– If you are a user, with your preferred tool.
The benchmarking is supported by the Knowledge Web NoE, and the results will be presented at the EON 2005 workshop.

Slide 30: Timeline
April 22nd 2005: First definition of the export and import benchmarks.
May 6th 2005: Agreement on the benchmark definitions and the experimentation.
July 1st 2005: Results from the export experiments.
September 16th 2005: Results from the import experiments.
If you want to participate in the benchmarking or have any further questions or comments about it, please contact: Raúl García-Castro

Slide 31: KW Deliverable 1.2.2
Deliverable 1.2.2, "Semantic Web Framework Requirements Analysis":
– Analyses applications and tools to identify the set of requirements for the interoperation and exchange of ontologies.
– Identifies the main components that a unified Semantic Web framework should have.
It should contain:
– The main systems developed in the field.
– For each application or system: its architecture, design criteria, main components, and how it interoperates with other systems or exchanges its ontologies.
These studies should make it possible to identify:
– The main and additional requirements for each type of tool/system.
– The main and additional functionalities.
– Results on the evaluation of the interoperation and exchange requirements.
– Results on the evaluation of other criteria, such as scalability.
Such criteria are summarized in a table in order to give a clear picture of the field.
The deliverable needs contributions from tool developers in order to obtain accurate descriptions of their tools.

Slide 32: Benchmarking the interoperability of ontology development tools
Raúl García-Castro, Asunción Gómez-Pérez
April 7th 2005

