Multi-Hierarchical Decision System

Presentation transcript:

Slide 1: Multi-Hierarchical Decision System

S = (X, A ∪ {d[1], d[2], ..., d[k]}, V), where:
- X is a finite set of objects,
- A is a finite set of classification attributes,
- {d[1], d[2], ..., d[k]} is a set of hierarchical decision attributes,
- V = ∪{V_a : a ∈ A ∪ {d[1], d[2], ..., d[k]}} is the set of their values.

We assume that:
- V_a and V_b are disjoint for any a, b ∈ A ∪ {d[1], d[2], ..., d[k]} such that a ≠ b,
- a : X → V_a is a partial function for every a ∈ A ∪ {d[1], d[2], ..., d[k]}, so the database may be incomplete.

Decision queries (d-queries) for S form the least set T_D such that:
- 0, 1 ∈ T_D,
- if w ∈ ∪{V_a : a ∈ {d[1], d[2], ..., d[k]}}, then w, ~w ∈ T_D (the atomic level),
- if t1, t2 ∈ T_D, then (t1 + t2), (t1 ∗ t2) ∈ T_D.
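To make the definition concrete, here is a minimal Python sketch (my own illustration, not from the slides) of one way to represent such terms; the class names Val, Neg, Or, And and the tuple encoding of hierarchical values like d[3,1] are assumptions.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical encoding: a hierarchical value such as d[3,1] becomes the tuple
# ('d', 3, 1), so ancestor/descendant tests reduce to tuple-prefix tests.

@dataclass(frozen=True)
class Val:                 # an attribute value w
    attr: str              # e.g. 'd'
    path: tuple            # e.g. (3, 1) for d[3,1]

@dataclass(frozen=True)
class Neg:                 # ~w
    w: Val

@dataclass(frozen=True)
class Or:                  # (t1 + t2)
    t1: 'Term'
    t2: 'Term'

@dataclass(frozen=True)
class And:                 # (t1 * t2)
    t1: 'Term'
    t2: 'Term'

Term = Union[Val, Neg, Or, And]  # plus the constants 0 and 1

# A d-query built from decision-attribute values only:
q = Or(Val('d', (1,)), And(Val('d', (3,)), Neg(Val('d', (3, 1)))))
print(q)
```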

Slide 2: Example

X    a     b     c       d
x1   a[1]  b[2]  c[1]    d[3]
x2   a[1]  b[1]  c[1]    d[3,1]
x3   a[1]  b[2]  c[2,2]  d[1]
x4   a[2]  b[2]  c[2]    d[1]

Here a, b, c are classification attributes and d is a decision attribute. Two of the attributes are hierarchical:
- c: Level I values c[1], c[2]; Level II values c[2,1], c[2,2] (children of c[2]),
- d: Level I values d[1], d[2], d[3]; Level II values d[3,1], d[3,2] (children of d[3]).
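Under the tuple encoding sketched after Slide 1, this example table can be written down directly; the dictionary layout is an assumption for illustration.

```python
# The example table; ('c', 2, 2) stands for c[2,2], ('d', 3, 1) for d[3,1], etc.
table = {
    'x1': {'a': ('a', 1), 'b': ('b', 2), 'c': ('c', 1),    'd': ('d', 3)},
    'x2': {'a': ('a', 1), 'b': ('b', 1), 'c': ('c', 1),    'd': ('d', 3, 1)},
    'x3': {'a': ('a', 1), 'b': ('b', 2), 'c': ('c', 2, 2), 'd': ('d', 1)},
    'x4': {'a': ('a', 2), 'b': ('b', 2), 'c': ('c', 2),    'd': ('d', 1)},
}

def is_ancestor_or_self(u, v):
    """True iff value u equals v or lies above v in its hierarchy (tuple prefix)."""
    return v[:len(u)] == u

print(is_ancestor_or_self(('d', 3), ('d', 3, 1)))  # True: d[3] is the parent of d[3,1]
print(is_ancestor_or_self(('c', 2, 1), ('c', 2)))  # False: c[2,1] lies below c[2]
```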

Slide 3: Simple Term to Simple Query

Classification terms (c-terms) for S are defined as the least set T_C such that:
- 0, 1 ∈ T_C,
- if w ∈ ∪{V_a : a ∈ A}, then w, ~w ∈ T_C (the atomic level),
- if t1, t2 ∈ T_C, then (t1 + t2), (t1 ∗ t2) ∈ T_C.

A c-term t is called simple if t = t1 ∗ t2 ∗ ... ∗ tn and (∀j ∈ {1, 2, ..., n}) [(tj ∈ ∪{V_a : a ∈ A}) ∨ (tj = ~w ∧ w ∈ ∪{V_a : a ∈ A})].

A d-query t is called simple if t = t1 ∗ t2 ∗ ... ∗ tn and (∀j ∈ {1, 2, ..., n}) [(tj ∈ ∪{V_a : a ∈ {d[1], d[2], ..., d[k]}}) ∨ (tj = ~w ∧ w ∈ ∪{V_a : a ∈ {d[1], d[2], ..., d[k]}})].

By a classification rule we mean an expression [t1 → t2], where both t1 and t2 are simple.
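Continuing the sketch from Slide 1 (it reuses the Val, Neg, Or, And classes defined there), a hedged illustration of the "simple" test: a term is simple iff it is a conjunction of literals, i.e. no + occurs anywhere in it.

```python
def literals_of(t):
    """Flatten nested And's into a list of literals; return None if t is not simple."""
    if isinstance(t, (Val, Neg)):
        return [t]
    if isinstance(t, And):
        left, right = literals_of(t.t1), literals_of(t.t2)
        if left is not None and right is not None:
            return left + right
    return None  # an Or (or a constant 0/1) anywhere makes t non-simple here

def is_simple_d_query(t, decision_attrs={'d'}):
    """t is a simple d-query iff it is a conjunction of (possibly negated)
    decision-attribute values."""
    lits = literals_of(t)
    return lits is not None and all(
        (l.attr if isinstance(l, Val) else l.w.attr) in decision_attrs for l in lits)

print(is_simple_d_query(And(Val('d', (3,)), Neg(Val('d', (3, 1))))))  # True
print(is_simple_d_query(Or(Val('d', (1,)), Val('d', (2,)))))          # False: contains +
```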

Slide 4: Classifier-based Semantics

The semantics M_S of c-terms in S is defined in the standard way:
- M_S(0) = ∅, M_S(1) = X,
- M_S(w) = {x ∈ X : a(x) = w} for any w ∈ V_a, a ∈ A,
- M_S(~w) = {x ∈ X : (∃v ∈ V_a)[a(x) = v ∧ v ≠ w]} for any w ∈ V_a, a ∈ A,
- if t1, t2 are c-terms, then M_S(t1 + t2) = M_S(t1) ∪ M_S(t2) and M_S(t1 ∗ t2) = M_S(t1) ∩ M_S(t2).

Classifier-based semantics M_S of d-queries in S = (X, A ∪ {d[1], d[2], ..., d[k]}, V): if t is a simple d-query in S and {r_j = [t_j → t] : j ∈ J_t} is the set of all rules defining t which are extracted from S by the classifier, then

M_S(t) = {(x, p_x) : (∃j ∈ J_t)(x ∈ M_S(t_j)) ∧ p_x = ∑{conf(j) · sup(j) : x ∈ M_S(t_j), j ∈ J_t} / ∑{sup(j) : x ∈ M_S(t_j), j ∈ J_t}},

where conf(j) and sup(j) denote the confidence and the support of [t_j → t], respectively.
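A runnable sketch of the weighting step: every object covered by at least one rule t_j → t receives the support-weighted average confidence of its covering rules as its weight p_x. The rule list below is invented for illustration; conf and sup would come from whatever classifier extracted the rules.

```python
def classifier_semantics(rules):
    """rules: list of (covered_objects, conf, sup) for the rules t_j -> t.
    Returns {x: p_x} for every x covered by at least one rule."""
    num, den = {}, {}
    for covered, conf, sup in rules:
        for x in covered:
            num[x] = num.get(x, 0.0) + conf * sup
            den[x] = den.get(x, 0.0) + sup
    return {x: num[x] / den[x] for x in num}

rules = [({'x1', 'x2'}, 0.9, 2),   # [t_1 -> t], conf = 0.9, sup = 2
         ({'x2', 'x3'}, 0.6, 1)]   # [t_2 -> t], conf = 0.6, sup = 1
print(classifier_semantics(rules))
# x1 -> 0.9, x2 -> (0.9*2 + 0.6*1)/(2+1) = 0.8, x3 -> 0.6
```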

Slide 5: Standard Semantics of D-queries

An attribute value d[j1, j2, ..., jn] in S is dependent on any attribute value that is either its ancestor or its descendant in the hierarchy d[j1]; d[j1, j2, ..., jn] is independent from every other attribute value in S.

Let S = (X, A ∪ {d[1], d[2], ..., d[k]}, V), w ∈ V_d[i], and let IV_d[i] be the set of all attribute values in V_d[i] which are independent from w.

The standard semantics N_S of d-queries in S is defined as follows:
- N_S(0) = ∅, N_S(1) = X,
- if w ∈ V_d[i], then N_S(w) = {x ∈ X : d[i](x) = w}, for any 1 ≤ i ≤ k,
- if w ∈ V_d[i], then N_S(~w) = {x ∈ X : (∃v ∈ IV_d[i])[d[i](x) = v]}, for any 1 ≤ i ≤ k,
- if t1, t2 are d-queries, then N_S(t1 + t2) = N_S(t1) ∪ N_S(t2) and N_S(t1 ∗ t2) = N_S(t1) ∩ N_S(t2).
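A sketch of the atomic cases of N_S under the tuple encoding: v is independent from w exactly when neither is a prefix of the other, which makes IV_d[i] implicit. The function names and the small table below are illustrative.

```python
def is_related(u, v):
    """True iff u and v lie on one branch: each is the other, an ancestor, or a descendant."""
    n = min(len(u), len(v))
    return u[:n] == v[:n]

def N_pos(table, w):
    """N_S(w) = {x : d[i](x) = w}."""
    return {x for x, row in table.items() if row.get(w[0]) == w}

def N_neg(table, w):
    """N_S(~w) = {x : d[i](x) = v for some v independent from w}."""
    return {x for x, row in table.items()
            if w[0] in row and not is_related(row[w[0]], w)}

table = {'x1': {'d': ('d', 3)}, 'x2': {'d': ('d', 3, 1)},
         'x3': {'d': ('d', 1)}, 'x4': {'d': ('d', 1)}}
print(N_pos(table, ('d', 3)))  # {'x1'}
print(N_neg(table, ('d', 3)))  # {'x3', 'x4'}: x2 is excluded, since d[3,1] depends on d[3]
```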

Slide 6: The Overlap of Semantics

Let S = (X, A ∪ {d[1], d[2], ..., d[k]}, V), let t be a d-query in S, let N_S(t) be its meaning under the standard semantics, and let M_S(t) be its meaning under the classifier-based semantics.

Assume that N_S(t) = {x_i : i ∈ I1} ∪ {y_i : i ∈ I2}. Assume also that M_S(t) = {(x_i, p_i) : i ∈ I1} ∪ {(z_i, q_i) : i ∈ I3} and {y_i : i ∈ I2} ∩ {z_i : i ∈ I3} = ∅.

Thus I1 indexes the objects returned by both semantics, I2 those returned only by the standard semantics, and I3 those returned only by the classifier-based semantics.

[Venn diagram: the region N_S covers I1 and I2, the region M_S covers I1 and I3; I1 is their intersection.]
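A small sketch of the split into the three index regions, with the set/dict layout assumed in the earlier sketches (N_S(t) as a plain set, M_S(t) as a weight map):

```python
def overlap(N_t, M_t):
    """N_t: set of objects from the standard semantics; M_t: {object: weight}
    from the classifier-based semantics. Returns the three Venn regions."""
    I1 = {x: M_t[x] for x in N_t if x in M_t}      # the x_i with weights p_i
    I2 = {x for x in N_t if x not in M_t}          # the y_i
    I3 = {x: M_t[x] for x in M_t if x not in N_t}  # the z_i with weights q_i
    return I1, I2, I3

print(overlap({'x1', 'x2', 'y1'}, {'x1': 0.9, 'x2': 0.8, 'z1': 0.6}))
# ({'x1': 0.9, 'x2': 0.8}, {'y1'}, {'z1': 0.6})  -- element order may vary
```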

Slide 7: Precision & Recall

Precision Prec(M_S, t) of the classifier-based semantics M_S on a d-query t:

Prec(M_S, t) = [∑{p_i : i ∈ I1} + ∑{(1 − q_i) : i ∈ I3}] / [card(I1) + card(I3)].

Recall Rec(M_S, t) of the classifier-based semantics M_S on a d-query t:

Rec(M_S, t) = ∑{p_i : i ∈ I1} / [card(I1) + card(I2)].

[Venn diagram as on Slide 6: N_S covers I1 and I2, M_S covers I1 and I3.]
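Both measures follow directly from the region split above; a sketch with illustrative numbers (I1 and I3 carry the weights p_i and q_i, e.g. as produced by overlap()):

```python
def precision(I1, I3):
    """Prec(M_S, t): weighted correctness over everything the classifier returned."""
    return (sum(I1.values()) + sum(1 - q for q in I3.values())) / (len(I1) + len(I3))

def recall(I1, I2):
    """Rec(M_S, t): weighted coverage of everything the standard semantics returned."""
    return sum(I1.values()) / (len(I1) + len(I2))

I1 = {'x1': 0.9, 'x2': 0.8}   # in both N_S(t) and M_S(t), weights p_i
I2 = {'y1'}                   # in N_S(t) only
I3 = {'z1': 0.6}              # in M_S(t) only, weight q_i
print(precision(I1, I3))      # (0.9 + 0.8 + (1 - 0.6)) / 3 = 0.7
print(recall(I1, I2))         # (0.9 + 0.8) / 3 ≈ 0.567
```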