Dejing Dou
Computer and Information Science
University of Oregon, Eugene, Oregon
September 2010 @ Kent State University
Where is Eugene, Oregon?
Outline
- Introduction
- Ontology and the Semantic Web
- Biomedical Ontology Development
- Challenges for Data-driven Approaches
- The NEMO Project
  - Mining ERP Ontologies (KDD'07)
  - Modeling NEMO Ontology Databases (SSDBM'08, JIIS'10)
  - Mapping ERP Metrics (PAKDD'10)
- Ongoing Work
What is an Ontology?
A formal specification of a vocabulary of domain concepts and of the relationships that relate them.
A Genealogy Ontology
[Diagram: classes Individual, Family, Event (with subclasses Male, Female, MarriageEvent, DivorceEvent, DeathEvent, BirthEvent) and Gender, linked by properties such as husband, wife, childIn, marriage, divorce, birth, and sex]
Classes: Individual, Male, Female, Family, MarriageEvent, ...
Properties: sex, husband, wife, birth, ...
Axioms: if there is a MarriageEvent, there will be a Family related to it through the husband and wife properties.
Ontology languages: OWL, KIF, OBO, ...
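For instance, the marriage axiom could be written in first-order logic roughly as follows (a sketch; predicate names follow the class and property names above, and the exact rendering depends on the ontology language):

\[
\forall m\,\big(\mathit{MarriageEvent}(m) \rightarrow \exists f, h, w\;(\mathit{Family}(f) \wedge \mathit{marriage}(f, m) \wedge \mathit{husband}(f, h) \wedge \mathit{wife}(f, w))\big)
\]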
Current WWW
The majority of data resources on the WWW are in human-readable formats only (e.g., HTML).
[Figure: humans reading the WWW]
The Semantic Web
One major goal of the Semantic Web is that web-based agents can process and "understand" data [Berners-Lee et al. 2001]. Ontologies formally describe the semantics of data, and web-based agents can take web documents (e.g., in RDF, OWL) as a set of assertions and draw inferences from them.
[Figure: humans and web-based agents using the Semantic Web]
Biomedical Ontologies
The Gene Ontology (GO): standardizes the formal representation of gene and gene product attributes across all species and gene databases (e.g., zebrafish, mouse, fruit fly).
- Classes: cellular component, molecular function, biological process, ...
- Properties: is_a, part_of
The Unified Medical Language System (UMLS): a comprehensive thesaurus and ontology of biomedical concepts.
The National Center for Biomedical Ontology (NCBO) at Stanford University: >200 ontologies (hundreds to thousands of concepts each) and about 4 million mappings.
Biomedical Ontology Development
Typically knowledge driven: a top-down process. Some basic steps and principles:
- Discussions among domain experts and ontology engineers
- Select basic (root) classes and properties (i.e., terms)
- Work downward to sub-concepts and relationships; consider modularization if the ontology is expected to be large
- Add constraints (axioms)
- Add unique IDs (e.g., URLs) and textual definitions for terms
- Consistency checking
- Updating and evolution (e.g., GO is updated every 15 minutes)
Challenges: Knowledge Sharing Does Not Automatically Enable Data Sharing
Annotation (like tagging) helps search over text (e.g., papers), but it is not good for experimental data (e.g., numerical values). Three main challenges for knowledge/data sharing:
- Heterogeneity: different labs use different analysis methods, spreadsheet attributes, and DB schemas.
- Reusability: knowledge mined from different experimental data may not be consistent and sharable.
- Scalability: the size of experimental data grows much larger than the size of ontologies, and ontology-based reasoning (e.g., ABox reasoning) over large data is a headache.
Case Study: EEG Data
Observing brain function through the electroencephalogram (EEG): brain activity occurs in the cortex, and cortical activity generates the scalp EEG. Dense-array EEG data (256 channels) has high temporal resolution (1 ms) but poor spatial resolution (2D), while MR imaging (fMRI, PET) has good spatial resolution (3D) but poor temporal resolution (~1.0 s).
ERP Data and Pattern Analysis
Event-related potentials (ERPs) are created by averaging across segments of EEG data from different trials, time-locked to stimulus events or responses (e.g., one segment every 2 seconds). Some existing tools (e.g., Net Station, EEGLAB, APECS, the Dien PCA Toolbox) can process ERP data and perform pattern analysis.
[Figure: (A) 128-channel ERPs to visual word and nonword stimuli. (B) Time course for the P100 pattern extracted by PCA. (C) Scalp topography (spatial distribution) of the P100 pattern.]
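To make the averaging step concrete, here is a minimal numpy sketch (illustrative only, not the Net Station/EEGLAB implementation; all shapes and values below are invented):

```python
import numpy as np

# Hypothetical epochs of EEG segmented around each stimulus event:
# epochs[trial, channel, sample] in microvolts.
n_trials, n_channels, n_samples = 100, 256, 375  # 375 samples = 1500 ms at 250 Hz
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 10.0, size=(n_trials, n_channels, n_samples))

# The ERP is the average across trials: activity time-locked to the event
# survives, while activity not phase-locked averages toward zero.
erp = epochs.mean(axis=0)          # shape: (n_channels, n_samples)

# Optional baseline correction using a pre-stimulus window (first 50 samples).
baseline = erp[:, :50].mean(axis=1, keepdims=True)
erp_corrected = erp - baseline
print(erp_corrected.shape)         # (256, 375)
```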
NEMO: NeuroElectroMagnetic Ontologies
Some challenges in ERP studies:
- Patterns can be difficult to identify, and definitions vary across research labs.
- Methods for ERP analysis differ across research sites.
- It is hard to compare and share results across experiments and across labs.
The NEMO (NeuroElectroMagnetic Ontologies) project addresses these challenges by developing ontologies to support ERP data and pattern representation, sharing, and meta-analysis. It has been funded by the NIH as an R01 project since 2009.
Architecture
[Diagram: NEMO system architecture]
Progress in Data-Driven Approaches
- Mining ERP Ontologies (KDD'07) -- Reusability
- Modeling NEMO Ontology Databases (SSDBM'08, JIIS'10) -- Scalability
- Mapping ERP Metrics (PAKDD'10) -- Heterogeneity
Ontology Mining
Ontology mining is a process for learning an ontology, including classes, class taxonomy, properties, and axioms, from data. Existing ontology mining approaches focus on text mining or web mining (web content, usage, structure, user profiles); clustering and association rule mining have been used for classes and properties [Li & Zhong @ TKDE 18(4); Maedche & Staab @ EKAW'00; Reinberger et al. @ ODBASE'03]. The NetAffx Gene Ontology mining tool has been applied to microarray data [Cheng et al. @ Bioinformatics 20(9)]. Our approach is novel in using hierarchical clustering and classification to mine the class taxonomy, properties, and axioms of a first-generation ERP data-specific ontology from spreadsheets.
Knowledge Reuse in KDD
[Diagram: the KDD pipeline -- data cleaning, data integration, databases, data warehouse, task-relevant data selection, data mining, pattern evaluation -- with a question mark at the knowledge-reuse step: lack of formal semantics]
Our Framework (KDD'07)
[Diagram: a semi-automatic framework for mining ontologies]
Four General Procedures
- Classes <= clustering-based classification
- Class taxonomy <= hierarchical clustering
- Properties <= classification
- Axioms <= association rule mining and classification
Experiments on ERP Data
- Preprocessing data with temporal PCA
- Mining ERP classes with clustering-based classification
- Mining the ERP class taxonomy with hierarchical clustering
- Mining properties and axioms (rules) with classification
- Discovering axioms among properties with association rule mining
Input Raw ERP Data

Subject | Condition | Channel# | Time1 (µV) | Time2 (µV) | Time3 (µV) | Time4 (µV) | Time5 (µV) | Time6 (µV)
S01 | A | 1 | 0.077 | 0.136 | 0.075 | 0.095 | 0.188 | 0.097
S01 | A | 2 | 0.891 | 1.780 | 0.895 | 0.805 | 1.612 | 0.813
S01 | A | 3 | 0.014 | 0.018 | 0.013 | 0.040 | 0.066 | 0.035
S01 | A | 4 | 0.657 | 1.309 | 0.657 | 0.789 | 1.571 | 0.785
S01 | A | 5 | 0.437 | 0.864 | 0.432 | 1.007 | 2.002 | 1.003
S01 | B | 1 | 0.303 | 0.603 | 0.303 | 0.128 | 0.250 | 0.123
S01 | B | 2 | 0.477 | 0.951 | 0.483 | 0.418 | 0.841 | 0.418
S01 | B | 3 | 0.538 | 0.073 | 0.038 | 0.029 | 0.043 | 0.022
S01 | B | 4 | 0.509 | 1.061 | 0.533 | 0.628 | 1.254 | 0.626
S01 | B | 5 | 1.497 | 1.024 | 0.510 | 0.218 | 0.434 | 0.219
S02 | A | 1 | 1.275 | 2.987 | 1.500 | 0.382 | 0.769 | 0.386
S02 | A | 2 | 0.666 | 2.555 | 1.281 | 0.326 | 0.648 | 0.329
S02 | A | 3 | 0.673 | 1.321 | 0.666 | 1.026 | 2.051 | 1.029
S02 | A | 4 | 0.284 | 1.341 | 0.678 | 1.966 | 3.914 | 1.966
S02 | A | 5 | 0.980 | 0.564 | 0.292 | 0.511 | 1.012 | 0.507
S02 | B | 1 | 0.367 | 1.960 | 0.978 | 1.741 | 3.486 | 1.739
S02 | B | 2 | 0.864 | 0.721 | 0.365 | 1.470 | 2.934 | 1.472
S02 | B | 3 | 0.568 | 1.729 | 0.866 | 1.342 | 2.680 | 1.337
S02 | B | 4 | 0.149 | 1.134 | 0.575 | 0.210 | 0.423 | 0.215
S02 | B | 5 | 0.042 | 0.287 | 0.151 | 0.433 | 0.860 | 0.433

Sampling rate: 250 Hz for 1500 ms (375 samples)
Experiments 1-2: 89 subjects and 6 experiment conditions
Experiment 3: 36 subjects and 4 experiment conditions
Data Preprocessing (1)
Temporal PCA decomposition.
[Figure: component 1 + component 2 = complex waveform]
PCA extracts as many factors (components) as there are variables (i.e., the number of time samples). We retain the first 15 PCA factors, accounting for most of the variance (>75%); the remaining factors are assumed to contain "noise".
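A minimal sketch of this retention step, assuming scikit-learn's PCA as a stand-in for the ERP toolboxes named earlier (the data below is synthetic and purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: one row per (subject, condition, channel) waveform,
# one column per time sample (375 samples = 1500 ms at 250 Hz).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.5, 375)
waveforms = (rng.normal(size=(500, 1)) * np.sin(2 * np.pi * 3 * t)
             + rng.normal(scale=0.3, size=(500, 375)))

# Temporal PCA treats time samples as variables; it can extract as many
# factors as there are samples, but only the first 15 are retained.
pca = PCA(n_components=15)
factor_scores = pca.fit_transform(waveforms)   # shape: (500, 15)

print(f"retained variance: {pca.explained_variance_ratio_.sum():.1%}")  # expected > 75%
# The remaining factors are assumed to contain noise and are discarded.
```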
Data Preprocessing (2)
Intensity, spatial, temporal, and functional metrics (attributes) are computed for each factor.
ERP Factors after PCA Decomposition

TI-max (µs) | IN-mean (ROI) (µV) | IN-mean (ROCC) (µV) | ... | SP-min (channel#)
128 | 4.2823 | 4.7245 | ... | 24
96 | 1.2223 | 1.3955 | ... | 62
164 | -6.6589 | -4.7608 | ... | 59
220 | -3.635 | -2.0782 | ... | 58
244 | -0.81322 | 0.29263 | ... | 65

For Experiment 1 data, number of factors = (474) (594)
For Experiment 2 data, number of factors = (588) (598)
For Experiment 3 data, number of factors = 708
Mining ERP Classes with Clustering (1)
We use EM (Expectation-Maximization) clustering. E.g., for Experiment 1, group 2 data:

Cluster/Pattern | 0 | 1 | 2 | 3
P100 | 0 | 76 | 0 | 2
N100 | 117 | 10 | 5 | 4
late N1/N2 | 13 | 14 | 0 | 104
P300 | 0 | 61 | 110 | 42
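The deck does not name the EM implementation; as an assumed equivalent, here is a sketch with scikit-learn's EM-fitted Gaussian mixture (synthetic data, invented shapes):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical metric matrix: one row per PCA factor, one column per
# intensity/spatial/temporal/functional metric (TI-max, IN-mean(ROI), ...).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(120, 4)) for c in (-3, 0, 3, 6)])

# EM clustering: fit a Gaussian mixture and assign each factor to its most
# probable component; k = 4 matches the four target patterns
# (P100, N100, late N1/N2, P300).
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)

# Cross-tabulating these labels against expert pattern judgments yields a
# contingency table like the one above.
print(np.bincount(labels))   # roughly 120 factors per cluster
```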
Mining ERP Classes with Clustering (2)
We use OWL to represent ERP classes.
[Figure: OWL snippet]
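The OWL itself appears only as an image in the source. As an assumed illustration, such classes can be emitted programmatically, e.g. with Python's rdflib (the nemo namespace URL below is hypothetical):

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL

NEMO = Namespace("http://example.org/nemo#")   # hypothetical namespace
g = Graph()
g.bind("nemo", NEMO)

# Declare an ERP pattern class discovered by clustering.
g.add((NEMO.P100, RDF.type, OWL.Class))
g.add((NEMO.P100, RDFS.label, Literal("P100 pattern")))

print(g.serialize(format="turtle"))
```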
Mining ERP Class Taxonomy with Hierarchical Clustering
We use EM clustering in both divisive and agglomerative ways. E.g., for Experiment 3 data:
[Figure]
Mining ERP Class Taxonomy with Hierarchical Clustering
We use OWL to represent the class taxonomy.
[Figure: OWL snippet]
Mining Properties and Axioms with Clustering-based Classification (1)
We use decision tree learning (C4.5) to perform classification, with the training data labeled by the clustering results.
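A sketch of this step, using scikit-learn's entropy-criterion decision tree as a stand-in for C4.5 (the data and feature names below are invented):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training data: rows are PCA factors, columns are four of the
# metric attributes; labels stand in for the EM clustering results above.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 4, size=200)

# C4.5 analogue: information-gain (entropy) splits over the metric attributes.
tree = DecisionTreeClassifier(criterion="entropy", max_depth=3, random_state=0)
tree.fit(X, y)

# Attributes chosen near the root have high information gain; they become
# datatype properties, and the branches suggest candidate axioms.
print(export_text(tree, feature_names=["TI-max", "IN-mean(ROI)", "IN-mean(ROCC)", "SP-min"]))
```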
Mining Properties and Axioms with Clustering-based Classification (2)
We use OWL to represent datatype properties, which are based on the attributes with high information gain (e.g., the top 6).
Mining Properties and Axioms with Clustering-based Classification (3)
We use SWRL to represent axioms. In FOL (the formula appears as an image in the source):
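Purely to illustrate the general shape of such an axiom (the pattern name and thresholds below are invented, not NEMO's):

\[
\forall x, t\;\big(\mathit{Factor}(x) \wedge \mathit{TImax}(x, t) \wedge 60 \le t \le 150\big) \rightarrow \mathit{P100}(x)
\]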
Discovering Axioms among Properties with Association Rule Mining
We use the Apriori algorithm to find association rules among properties. The split points are determined by the classification rules. In FOL, they look like (the formulas appear as images in the source):
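As an invented illustration of the shape, such a rule might read: for all x, TImax(x) in [60, 150] implies INmean(x) > 0. A sketch of the mining step itself, using the Apriori implementation in the mlxtend library (an assumed stand-in; the one-hot columns below are invented discretizations):

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical one-hot table: one row per factor, one boolean column per
# discretized property interval (split points from the decision-tree rules).
df = pd.DataFrame({
    "TI-max in [60,150]": [True, True, False, True],
    "IN-mean(ROI) > 0":   [True, True, False, True],
    "SP-min <= 30":       [False, True, True, True],
})

itemsets = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```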
Rule Optimization
Idea: (A → B) and (A ∧ B → C) => (A → C). If A implies B, then A already entails A ∧ B, which entails C; so the two rules can be merged into the single rule A → C.
A Partial View of the Mined ERP Data Ontology
[Diagram: partial view of the mined ontology]
Our first-generation ERP ontology consists of 16 classes, 57 properties, and 23 axioms.
Ontology-based Data Modeling (SSDBM'08, JIIS'10)
In general, ontologies can be treated as a kind of conceptual model. Because the data (e.g., PCA factors) can be large, we propose to store it in relational databases instead of building a knowledge base. We designed database schemas based on our ERP ontologies, which include temporal, spatial, and functional concepts.
Ontology Databases
[Diagram: bridging ontology constructs to relational constructs]
- Classes => relations (tables)
- Datatypes => datatypes
- Objects and facts => tuples
- Axioms => keys, constraints, views, and triggers
Now we have bridged these.
Loading Time on the Lehigh University Benchmark
[Chart: load time for 1.5 million facts (10 universities, 20 departments)]
Query Time
[Chart: query performance (logarithmic time scale)]
Ontology-based Data Modeling
For example, for the important subsumption axioms (e.g., subClassOf) of the current ERP ontologies, we use SQL triggers and foreign keys to represent them.
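A minimal sketch of the idea, with hypothetical table names and SQLite syntax (the actual NEMO schemas are richer): a subclass table whose inserts propagate to its superclass table, so the subsumption axiom is enforced inside the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
-- Hypothetical fragment: ERPPattern subClassOf ScientificObservation.
CREATE TABLE ScientificObservation (id INTEGER PRIMARY KEY);
CREATE TABLE ERPPattern (
    id INTEGER PRIMARY KEY REFERENCES ScientificObservation(id)
);
-- Subsumption axiom as a trigger: every ERPPattern is an observation.
CREATE TRIGGER erppattern_isa_observation
BEFORE INSERT ON ERPPattern
WHEN NOT EXISTS (SELECT 1 FROM ScientificObservation WHERE id = NEW.id)
BEGIN
    INSERT INTO ScientificObservation (id) VALUES (NEW.id);
END;
""")

conn.execute("INSERT INTO ERPPattern (id) VALUES (1)")
# The inferred superclass fact is materialized automatically:
print(conn.execute("SELECT id FROM ScientificObservation").fetchall())  # [(1,)]
```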
Ontology-based Data Modeling
The ER diagram for the ERP ontology database shows tables (boxes) and foreign-key constraints (arrows). The concepts pattern, factor, and channel are the most densely connected (toward the right side of the image), as expected.
NEMO Data Mapping (PAKDD'10)
Motivation: lack of meta-analysis across experiments, because different labs may use different metrics.
Goal of the study: mapping alternative sets of ERP spatial and temporal metrics.
Problem Definition
Alternative sets of ERP metrics
Challenges
- Semi-structured data
- Uninformative column headers (string-similarity matching does not work)
- Numerical values
Grouping and reordering
Sequence post-processing
Cross-spatial Join
- Process all point-sequence curves
- Calculate the Euclidean distance between sequences in the Cartesian product set (cross-spatial join)
[Figure: curves from Metric Set 1 matched against curves from Metric Set 2]
Cross-spatial Join
[Figure]
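A sketch of the join with scipy (sizes and data invented): every point-sequence curve from metric set 1 is compared against every curve from metric set 2, and the nearest pairs become candidate mappings.

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical point-sequence curves: one row per metric, resampled to a
# common length so Euclidean distance is well-defined.
rng = np.random.default_rng(0)
set1 = rng.normal(size=(13, 50))                           # metric set 1
set2 = set1[::-1] + rng.normal(scale=0.1, size=(13, 50))   # noisy, reordered copies

# Cross-spatial join: pairwise Euclidean distances over the Cartesian product.
dist = cdist(set1, set2, metric="euclidean")   # shape: (13, 13)

# Map each curve in set 1 to its nearest curve in set 2.
for i, j in enumerate(dist.argmin(axis=1)):
    print(f"set1 metric {i} -> set2 metric {j} (distance {dist[i, j]:.3f})")
```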
Assumptions and Heuristics
The two datasets contain the same or similar ERP patterns if they come from the same paradigms (e.g., the visual/auditory oddball paradigm: watching or listening for uncommon or fake words among common words).
Wrong Mappings: Precision = 9/13
[Matrix of candidate metric mappings] The gold-standard mappings fall along the diagonal cells; off-diagonal matches are wrong mappings.
Experiment
Design of experiment data:
- 2 simulated "subject groups" (samples): SG1 = sample 1, SG2 = sample 2
- 2 data decompositions: tPCA = temporal PCA decomposition; sICA = spatial ICA (Independent Component Analysis) decomposition
- 2 sets of alternative metrics: m1 = metric set 1; m2 = metric set 2
Experiment Result
Overall precision: 84.6%
NEMO-related Ongoing Work
- Application of our framework to other domains: microRNA, medical informatics, gene databases
- Mapping discovery and integration across ontologies related to different modalities (e.g., EEG vs. fMRI)
Joint EEG-fMRI Data Mapping
Joint work with: Gwen Frishkoff, Jiawei Rong, Robert Frank, Paea LePendu, Haishan Liu, Allen Malony, and Don Tucker
Thanks for your attention! Any questions?