Automating Schema Matching for Data Integration

Presentation transcript:

Automating Schema Matching for Data Integration David W. Embley Brigham Young University Funded by NSF

Information Exchange [Diagram: information flows from a Source schema to a Target schema; leverage Information Extraction to do Schema Matching]

Presentation Outline Information Extraction Schema Matching for HTML Tables Direct Schema Matching Indirect Schema Matching Conclusions and Future Work

Information Extraction

Extracting Pertinent Information from Documents

A Conceptual Modeling Solution [Diagram: conceptual-model graph relating Car to Year, Make, Model, Mileage, Price, Feature, and PhoneNr (with Extension), with participation constraints such as 0..1, 0..*, and 1..*]

Car-Ads Ontology
Car [-> object];
Car [0..1] has Year [1..*];
Car [0..1] has Make [1..*];
Car [0..1] has Model [1..*];
Car [0..1] has Mileage [1..*];
Car [0..*] has Feature [1..*];
Car [0..1] has Price [1..*];
PhoneNr [1..*] is for Car [0..*];
PhoneNr [0..1] has Extension [1..*];
Year matches [4] constant
  { extract "\d{2}";
    context "([^\$\d]|^)[4-9]\d,[^\d]";
    substitute "^" -> "19"; },
  …
End;
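
To make the data-frame mechanics concrete, here is a minimal Python sketch (not the DEG group's actual implementation) of how an extract/context/substitute rule like the Year rule above might be applied to an ad's text. The function name recognize and the sample ad text are illustrative assumptions.

# A minimal sketch of applying a data-frame-style recognizer rule.
import re

def recognize(text, extract, context, substitutions):
    """Find value strings in `text` using extract/context/substitute rules."""
    values = []
    for ctx in re.finditer(context, text):       # locate an allowed context
        m = re.search(extract, ctx.group(0))     # pull the value out of that context
        if m:
            value = m.group(0)
            for pattern, repl in substitutions:  # normalize, e.g. "89" -> "1989"
                value = re.sub(pattern, repl, value)
            values.append(value)
    return values

# Rule corresponding to:  Year matches [4] constant
#   { extract "\d{2}"; context "([^\$\d]|^)[4-9]\d,[^\d]"; substitute "^" -> "19"; }
print(recognize("Subaru SW, 89, auto, ac, $1900. Call 835-8597.",
                extract=r"\d{2}",
                context=r"([^\$\d]|^)[4-9]\d,[^\d]",
                substitutions=[(r"^", "19")]))    # -> ['1989']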

Recognition and Extraction

Car   Year  Make    Model      Mileage  Price  PhoneNr
0001  1989  Subaru  SW                  $1900  (363)835-8597
0002  1998          Elandra                    (336)526-5444
0003  1994  HONDA   ACCORD EX  100K            (336)526-1081

Car   Feature
0001  Auto
0001  AC
0002  Black
0002  4 door
0002  tinted windows
0002  Auto
0002  pb
0002  ps
0002  cruise
0002  am/fm
0002  cassette stero
0002  a/c
0003  Auto
0003  jade green
0003  gold

Schema Matching for HTML Tables

Table-Schema Matching (Basic Idea) Many tables exist on the Web. Ontology-based extraction works well for unstructured or semistructured data; what about structured data, i.e. tables? Method: form attribute-value pairs, do extraction, infer mappings from extraction patterns.

Problem: Different Schemas Target Database Schema: {Car, Year, Make, Model, Mileage, Price, PhoneNr}, {PhoneNr, Extension}, {Car, Feature}. Different Source Table Schemas: {Run #, Yr, Make, Model, Tran, Color, Dr}; {Make, Model, Year, Colour, Price, Auto, Air Cond., AM/FM, CD}; {Vehicle, Distance, Price, Mileage}; {Year, Make, Model, Trim, Invoice/Retail, Engine, Fuel Economy}

Problem: Attribute is Value

Problem: Attribute-Value is Value

Problem: Value is not Value

Problem: Implied Values

Problem: Missing Attributes

Problem: Compound Attributes

Problem: Merged Values

Problem: Values not of Interest

Problem: Factored Values

Problem: Split Values

Problem: Information Behind Links Table extending over several pages; single-column table (formatted as a list)

Solution Form attribute-value pairs (adjust if necessary) Do extraction Infer mappings from extraction patterns

Solution: Remove Internal Factoring Discover the nesting: Make, (Model, (Year, Colour, Price, Auto, Air Cond, AM/FM, CD)*)*. Then unnest, so that every row carries its own Make and Model values (e.g. the factored ACURA heading is copied down into each of its rows).
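
A minimal Python sketch of the unnesting step, assuming the factored value appears once and the cells beneath it are left blank (as in the ACURA example); the Integra/Legend rows and prices are illustrative, not taken from the slide.

# Copy a factored value down into the blank cells below it.
def unnest(rows, factored_cols=("Make", "Model")):
    current = {col: None for col in factored_cols}
    flat = []
    for row in rows:
        for col in factored_cols:
            if row.get(col):                 # a new factored value starts here
                current[col] = row[col]
            else:                            # blank cell: reuse the value above
                row[col] = current[col]
        flat.append(row)
    return flat

table = [
    {"Make": "ACURA", "Model": "Integra", "Year": "1998", "Price": "$13,500"},
    {"Make": "",      "Model": "",        "Year": "1996", "Price": "$9,988"},
    {"Make": "",      "Model": "Legend",  "Year": "1994", "Price": "$12,900"},
]
for row in unnest(table):
    print(row)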

Solution: Replace Boolean Values In Boolean columns such as Auto, Air Cond., AM/FM, and CD, replace each "Yes" with the column's attribute name (e.g. "Yes" under CD becomes "CD").
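
A small sketch of this replacement step; the column list is the one shown in the slides, and the sample row is illustrative.

# Replace "Yes" in a Boolean column with the column's attribute name.
BOOLEAN_COLS = ["Auto", "Air Cond.", "AM/FM", "CD"]

def replace_booleans(row):
    for col in BOOLEAN_COLS:
        if col not in row:
            continue
        if row[col].strip().lower() == "yes":
            row[col] = col            # e.g. "Yes" under CD becomes "CD"
        else:
            row[col] = None           # "No" or blank: the feature is absent
    return row

print(replace_booleans({"Model": "Civic EX", "Auto": "Yes", "CD": "No"}))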

Solution: Form Attribute-Value Pairs <Make, Honda>, <Model, Civic EX>, <Year, 1995>, <Colour, White>, <Price, $6300>, <Auto, Auto>, <Air Cond., Air Cond.>, <AM/FM, AM/FM>

Solution: Adjust Attribute-Value Pairs <Make, Honda>, <Model, Civic EX>, <Year, 1995>, <Colour, White>, <Price, $6300>, <Auto>, <Air Cond>, <AM/FM>
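
A sketch covering both the pair-forming step above and this adjustment step in one pass: Boolean columns keep only the attribute name, matching the adjusted pairs <Auto>, <Air Cond>, <AM/FM>. The header and row values are the ones shown in the slides.

# Form attribute-value pairs from a table row, adjusting Boolean columns.
BOOLEAN_COLS = {"Auto", "Air Cond.", "AM/FM", "CD"}

def attribute_value_pairs(header, row):
    pairs = []
    for attr, value in zip(header, row):
        if not value:                      # empty cell: nothing to pair
            continue
        if attr in BOOLEAN_COLS:
            pairs.append((attr,))          # adjusted pair: <Auto>, <CD>, ...
        else:
            pairs.append((attr, value))    # ordinary pair: <Make, Honda>
    return pairs

header = ["Make", "Model", "Year", "Colour", "Price", "Auto", "Air Cond.", "AM/FM", "CD"]
row    = ["Honda", "Civic EX", "1995", "White", "$6300", "Auto", "Air Cond.", "AM/FM", ""]
print(attribute_value_pairs(header, row))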

Solution: Do Extraction <Make, Honda>, <Model, Civic EX>, <Year, 1995>, <Colour, White>, <Price, $6300>, <Auto>, <Air Cond>, <AM/FM>

Solution: Infer Mappings Each source column whose values are recognized by a target attribute's extraction patterns maps to that attribute of the target schema {Car, Year, Make, Model, Mileage, Price, PhoneNr}, {PhoneNr, Extension}, {Car, Feature}; each row is a car. Note: mappings produce sets for attributes. Joining the sets to form records is trivial because we have OIDs for table rows (e.g. one per Car).
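
A rough sketch of inferring a mapping for a single source column by running target-attribute recognizers over its values; the patterns here are simplified stand-ins for the ontology's data frames, and the 0.8 acceptance threshold is an assumption.

# Infer column -> attribute mappings from extraction hit ratios.
import re

RECOGNIZERS = {                     # simplified stand-ins for ontology data frames
    "Year":  r"^(19|20)\d{2}$",
    "Price": r"^\$[\d,]+$",
}

def infer_column_mapping(column_values, threshold=0.8):
    nonempty = [v for v in column_values if v]
    best = None
    for attr, pattern in RECOGNIZERS.items():
        hits = sum(bool(re.match(pattern, v)) for v in nonempty)
        ratio = hits / max(1, len(nonempty))
        if ratio >= threshold and (best is None or ratio > best[1]):
            best = (attr, ratio)
    return best     # None suggests an indirect mapping (e.g. union into Feature)

print(infer_column_mapping(["1995", "1996", "1994"]))   # -> ('Year', 1.0)
print(infer_column_mapping(["$6300", "$9,988"]))        # -> ('Price', 1.0)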

Solution: Do Extraction Model(Year, Colour, Price, Auto, Air Cond, AM/FM, CD)*Table AM/FM Air Cond. Auto CD ACURA ACURA Legend {Car, Year, Make, Model, Mileage, Price, PhoneNr}, {PhoneNr, Extension}, {Car, Feature}

Solution: Do Extraction The Price column's values populate Price in the target schema {Car, Year, Make, Model, Mileage, Price, PhoneNr}, {PhoneNr, Extension}, {Car, Feature}.

Solution: Do Extraction The Colour column and the Boolean columns Auto, Air Cond., AM/FM, and CD all map to Feature; the union of their adjusted values populates Feature in the target schema {Car, Year, Make, Model, Mileage, Price, PhoneNr}, {PhoneNr, Extension}, {Car, Feature}.

Experiment Tables from 60 sites: 10 “training” tables and 50 test tables. 357 mappings (from all 60 sites): 172 direct mappings (same attribute and meaning) and 185 indirect mappings (29 attribute synonyms, 5 “Yes/No” columns, 68 unions over columns for Feature, 19 factored values, and 89 columns of merged values that needed to be split).

Results 10 “training” tables: 100% of the 57 mappings (no false mappings); 94.6% of the values in linked pages (5.4% false declarations). 50 test tables: 94.7% of the 300 mappings (no false mappings); on the basis of sampling 3,000 values in linked pages, we obtained 97% recall and 86% precision. 16 missed mappings: 4 partial (not all unions included), 6 non-U.S. car ads (unrecognized makes and models), 2 U.S. unrecognized makes and models, 3 prices (missing $ or found MSRP instead), 1 mileage (mileages less than 1,000).

Direct Schema Matching

Attribute Matching for Populated Schemas Central Idea: Exploit all data and metadata matching possibilities (facets): Attribute Names, Data-Value Characteristics, Expected Data Values, Data-Dictionary Information, Structural Properties

Approach Given a Target Schema T and a Source Schema S, the framework performs Individual Facet Matching, Combining Facets, and Best-First Match Iteration.

Example [Diagram: Target Schema T and Source Schema S, each a Car object set with attributes such as Year, Make, Model, Mileage/Miles, Cost, Style, and Phone, connected by has relationships with 0:1 and 0:* constraints]

Individual Facet Matching Attribute Names Data-Value Characteristics Expected Data Values

Attribute Names Compare a target attribute A and a source attribute B using WordNet. A C4.5 decision tree (feature selection, trained on schemas in DB books) uses the features: f0: same word; f1: synonym; f2: sum of distances to a common hypernym root; f3: number of different common hypernym roots; f4: sum of the number of senses of A and B.
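
A rough sketch, using NLTK's WordNet interface, of the kind of f0-f4 features fed to the decision tree; the exact feature definitions used in the BYU work may differ, and the simplification of using only the first sense of each word is an assumption.

# WordNet-based name-matching features (requires: nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def name_features(a, b):
    a, b = a.lower(), b.lower()
    syn_a, syn_b = wn.synsets(a), wn.synsets(b)
    f0 = int(a == b)                                  # same word
    f1 = int(bool(set(syn_a) & set(syn_b)))           # share a synset (synonyms)
    f4 = len(syn_a) + len(syn_b)                      # total number of senses
    f2 = f3 = None
    if syn_a and syn_b:
        s1, s2 = syn_a[0], syn_b[0]                   # most common sense of each word
        common = s1.lowest_common_hypernyms(s2)
        f3 = len(common)                              # common hypernym roots
        if common:
            f2 = (s1.shortest_path_distance(common[0]) +
                  s2.shortest_path_distance(common[0]))   # summed distances to that root
    return f0, f1, f2, f3, f4

print(name_features("Mileage", "Miles"))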

WordNet Rule [Decision tree over three features: the number of different common hypernym roots of A and B, the sum of distances of A and B to a common hypernym, and the sum of the number of senses of A and B]

Confidence Measures

Data-Value Characteristics C4.5 decision-tree features: numeric data (mean, variation, standard deviation, …); alphanumeric data (string length, numeric ratio, space ratio)
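
A sketch of the value-characteristic features; the specific statistics and the heuristic test for deciding that a column is numeric are assumptions.

# Summarize a column of values by numeric or alphanumeric characteristics.
import statistics

def value_features(values):
    if not values:
        return {}
    numeric = []
    for v in values:
        try:
            numeric.append(float(v.replace(",", "").lstrip("$")))
        except ValueError:
            pass
    if numeric and len(numeric) == len(values):        # treat as a numeric column
        return {"mean": statistics.mean(numeric),
                "stdev": statistics.pstdev(numeric)}
    joined = "".join(values)
    return {"avg_length": sum(len(v) for v in values) / len(values),
            "numeric_ratio": sum(c.isdigit() for c in joined) / len(joined),
            "space_ratio": sum(c.isspace() for c in joined) / len(joined)}

print(value_features(["23000", "68000", "100000"]))
print(value_features(["Honda Civic EX", "ACURA Legend"]))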

Confidence Measures

Expected Data Values Given target schema T and source schema S, apply the regular-expression recognizer for attribute A in T to the data instances of attribute B in S. Hit Ratio = N'/N for an (A, B) match, where N' is the number of B data instances recognized by the regular expressions of A, and N is the total number of B data instances.
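
A sketch of the hit-ratio computation; the mileage-style patterns are illustrative, not the ontology's actual regular expressions.

# Hit ratio N'/N: how many of B's instances do A's recognizers accept?
import re

def hit_ratio(target_patterns, source_instances):
    if not source_instances:
        return 0.0
    recognized = sum(
        any(re.search(p, v) for p in target_patterns)
        for v in source_instances)
    return recognized / len(source_instances)

# e.g. do the Source "Miles" values look like Target "Mileage" values?
print(hit_ratio([r"\b\d{1,3}[kK]\b", r"\b\d{1,3},\d{3}\b"],
                ["100K", "23,000", "red"]))        # -> 0.666...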

Confidence Measures

Combined Measures (threshold: 0.5)
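
A sketch of one way to combine the facet confidences and iterate best-first; averaging the facet scores is an assumption (the transcript does not say how the measures are combined), while the 0.5 threshold is from the slide.

# Combine facet confidences and commit matches best-first above a threshold.
def best_first_matches(scores, threshold=0.5):
    """scores: {(target_attr, source_attr): [facet confidences]}"""
    combined = {pair: sum(v) / len(v) for pair, v in scores.items()}
    matches, used_t, used_s = [], set(), set()
    for (t, s), c in sorted(combined.items(), key=lambda kv: -kv[1]):
        if c >= threshold and t not in used_t and s not in used_s:
            matches.append((t, s, c))
            used_t.add(t)
            used_s.add(s)
    return matches

scores = {("Mileage", "Miles"): [0.9, 0.8, 0.95],
          ("Cost", "Cost"):     [1.0, 0.7, 0.9],
          ("Mileage", "Cost"):  [0.2, 0.6, 0.4]}
print(best_first_matches(scores))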

Final Confidence Measures

Experimental Results This schema plus 6 other schemas: 32 matched attributes and 376 unmatched attributes. Matched: 100%; Unmatched: 99.5% (the two errors were "Feature"-"Color" and "Feature"-"Body Type"). Individual facets (matched / unmatched): F1 WordNet 93.8% / 98.9%; F2 Value Characteristics 84% / 97.9%; F3 Expected Values 92% / 98.4%.

Indirect Schema Matching

Schema Matching [Diagram: Target schema (Car: Year, Make, Model, Mileage, Cost, Phone, Feature) alongside Source schema (Car: Year, Make & Model, Color, Body Type, Miles, Cost, Style)]

Mapping Generation Direct matches, as described earlier: attribute names based on WordNet; value characteristics based on value lengths, averages, …; expected values based on regular-expression recognizers. Indirect matches: direct matches plus structure evaluation, Union, Selection, Decomposition, and Composition.
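
A minimal sketch of one indirect-match operation, the union illustrated on the following slides: the target's Feature is populated from the union of the source's Color and Body Type columns. The relation layout (one dictionary per source row, row index as the Car OID) is an assumption.

# Indirect match via union: several source attributes map to one target attribute.
def union_match(source_rows, source_attrs):
    pairs = []
    for oid, row in enumerate(source_rows):
        for attr in source_attrs:
            if row.get(attr):
                pairs.append((oid, row[attr]))   # (Car OID, Feature value)
    return pairs

rows = [{"Color": "White", "Body Type": "Sedan"},
        {"Color": "Jade Green", "Body Type": "Coupe"}]
print(union_match(rows, ["Color", "Body Type"]))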

Union and Selection [Diagram: the same Target and Source schemas, highlighting indirect matches formed by union and selection (e.g. the source's Color and Body Type values union into the target's Feature)]

Decomposition and Composition [Diagram: the same Target and Source schemas, highlighting indirect matches formed by decomposition and composition (e.g. the source's Make & Model decomposes into the target's Make and Model)]

Structure Example Taken From [MBR, VLDB'01] [Diagram: Target schema PO (POShipTo, POBillTo, POLines, City, Street, Count, Item, Line, Qty, UoM) and Source schema PurchaseOrder (InvoiceTo, DeliverTo, Items, Address, City, Street, ItemCount, Item, ItemNumber, Quantity, UnitOfMeasure)]

Structure (Nonlexical Matches) [Diagram: nonlexical object-set matches between the Target and Source schemas, e.g. PO matched with PurchaseOrder]

Structure (Join over FD Relationship Sets, …) [Diagram: the schemas after joining over functional-dependency relationship sets]

Structure (Lexical Matches) [Diagram: lexical object-set matches between the Target and Source schemas, e.g. Qty with Quantity and UoM with UnitOfMeasure]

Experiments Methodology; measures: Precision, Recall, F Measure
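
A small sketch of the measures, checked against the Course Schedule row of the results on the next slide (119 correct, 2 false positives, 9 false negatives).

# Precision, recall, and F measure from correct / false-positive / false-negative counts.
def prf(correct, false_pos, false_neg):
    precision = correct / (correct + false_pos)
    recall    = correct / (correct + false_neg)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

print([round(100 * m) for m in prf(119, 2, 9)])   # -> [98, 93, 96]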

Results Indirect matches: 94% (precision, recall, F-measure). Data borrowed from Univ. of Washington [DDH, SIGMOD01].

Application (Number of Schemas)  Precision (%)  Recall (%)  F (%)  Correct  False Positive  False Negative
Course Schedule (5)              98             93          96     119      2               9
Faculty Member (5)               100                               140
Real Estate (5)                  92             96          94     235      20              10

Rough comparison with U of W results (direct matches only): Course Schedule accuracy ~71%; Faculty Members accuracy ~92%; Real Estate (2 tests) accuracy ~75%.

Conclusions and Future Work

Conclusions Table Mappings: tables 94.7% (recall), 100% (precision); linked text ~97% (recall), ~86% (precision). Direct Attribute Matching: matched 32 of 32 (100% recall); 2 false positives (94% precision). Direct and Indirect Attribute Matching: matched 494 of 513 (96% recall); 22 false positives (96% precision). www.deg.byu.edu

Current & Future Work: Improve and Extend Indirect Matching Improve object-set matching (e.g. lexical/nonlexical); add relationship-set matching computations.

Current & Future Work: Tables Behind Forms Crawling the Hidden Web Filling in Forms from Global Queries

Current & Future Work: Developing Extraction Ontologies Creation from knowledge sources and sample application pages (ontology + data frames, lexicons, …; RDF ontologies); user creation by example.

Current & Future Work: and Much More … Table Understanding Microfilm Census Records Generate Ontologies by Reading Tables … www.deg.byu.edu