Multi-Hierarchical Decision System

S = (X, A ∪ {d[1], d[2], …, d[k]}, V), where:
- X is a finite set of objects,
- A is a finite set of classification attributes,
- {d[1], d[2], …, d[k]} is a set of hierarchical decision attributes, and
- V = ∪{V_a : a ∈ A ∪ {d[1], d[2], …, d[k]}} is the set of their values.

We assume that:
- V_a, V_b are disjoint for any a, b ∈ A ∪ {d[1], d[2], …, d[k]} such that a ≠ b,
- a : X → V_a is a partial function for every a ∈ A ∪ {d[1], d[2], …, d[k]}, so S may be an incomplete database.

Queries (d-terms) for S form the least set T_D such that:
- 0, 1 ∈ T_D,
- if w ∈ ∪{V_a : a ∈ {d[1], d[2], …, d[k]}}, then w, ~w ∈ T_D (the atomic level),
- if t1, t2 ∈ T_D, then (t1 + t2), (t1 * t2) ∈ T_D.
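
To make the grammar concrete, here is a minimal Python sketch that encodes d-terms as tagged tuples. The encoding and every helper name (atom, neg, disj, conj) are our own illustration, not part of the model:

# Hypothetical encoding of queries (d-terms): the constants 0 and 1,
# atomic terms w and ~w over decision-attribute values, and closure
# under + (disjunction) and * (conjunction).
ZERO, ONE = ("const", 0), ("const", 1)

def atom(w):      return ("atom", w)     # w, for some w in V_d[i]
def neg(w):       return ("neg", w)      # ~w
def disj(t1, t2): return ("+", t1, t2)   # (t1 + t2)
def conj(t1, t2): return ("*", t1, t2)   # (t1 * t2)

# Example: the d-term (d[1] + d[3]) * ~d[2]
q = conj(disj(atom("d[1]"), atom("d[3]")), neg("d[2]"))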

Example

Value hierarchies of the attributes c and d (Level I values, with Level II values below them):
- c: Level I: c[1], c[2]; Level II, under c[2]: c[2,1], c[2,2]
- d: Level I: d[1], d[2], d[3]; Level II, under d[3]: d[3,1]

        a       b       c        d
 x1     a[1]    b[2]    c[1]     d[3]
 x2             b[1]             d[3,1]
 x3                     c[2,2]   d[1]
 x4     a[2]            c[2]

Here a, b, c are the classification attributes and d is the hierarchical decision attribute; empty cells mark objects on which the (partial) attribute is undefined.
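
The same example written as Python data, under the (hypothetical) convention that each attribute is a dict from objects to values, so a missing key encodes the partial function being undefined:

X = ["x1", "x2", "x3", "x4"]              # objects
A = {                                      # classification attributes
    "a": {"x1": "a[1]", "x4": "a[2]"},
    "b": {"x1": "b[2]", "x2": "b[1]"},
    "c": {"x1": "c[1]", "x3": "c[2,2]", "x4": "c[2]"},
}
d = {"x1": "d[3]", "x2": "d[3,1]", "x3": "d[1]"}   # hierarchical decision attribute

# Value hierarchies as a child -> parent map (Level II under Level I).
parent = {"c[2,1]": "c[2]", "c[2,2]": "c[2]", "d[3,1]": "d[3]"}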

Simple Term to Simple Query

Classification terms (c-terms) for S are defined as the least set T_C such that:
- 0, 1 ∈ T_C,
- if w ∈ ∪{V_a : a ∈ A}, then w, ~w ∈ T_C,
- if t1, t2 ∈ T_C, then (t1 + t2), (t1 * t2) ∈ T_C.

A c-term t is called simple if t = t1*t2*…*tn and (∀j ∈ {1, 2, …, n})[(tj ∈ ∪{V_a : a ∈ A}) ∨ (tj = ~w ∧ w ∈ ∪{V_a : a ∈ A})].

A query (d-term) t is called simple if t = t1*t2*…*tn and (∀j ∈ {1, 2, …, n})[(tj ∈ ∪{V_a : a ∈ {d[1], d[2], …, d[k]}}) ∨ (tj = ~w ∧ w ∈ ∪{V_a : a ∈ {d[1], d[2], …, d[k]}})].

By a classification rule we mean an expression [t1 → t2], where t1 is a simple c-term and t2 is a simple d-term.
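
A sketch of the simplicity test over the tagged-tuple encoding introduced earlier: a term is simple exactly when it is a product whose factors are attribute values or negated attribute values, with no constants or disjunctions inside.

def is_simple(term):
    """True iff term = t1*t2*...*tn with every factor of the form w or ~w."""
    tag = term[0]
    if tag in ("atom", "neg"):
        return True
    if tag == "*":
        return is_simple(term[1]) and is_simple(term[2])
    return False   # the constants 0, 1 and any (t1 + t2) disqualify a term

# A classification rule [t1 -> t2] pairs a simple c-term with a simple d-term:
rule = (("*", ("atom", "a[1]"), ("atom", "b[2]")), ("atom", "d[3]"))
assert is_simple(rule[0]) and is_simple(rule[1])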

Classifier-based Semantics

The semantics M_S of c-terms in S is defined in the standard way:
- M_S(0) = ∅, M_S(1) = X,
- M_S(w) = {x ∈ X : a(x) = w} for any w ∈ V_a, a ∈ A,
- M_S(~w) = {x ∈ X : (∃v ∈ V_a)[a(x) = v ∧ v ≠ w]} for any w ∈ V_a, a ∈ A,
- if t1, t2 are c-terms, then M_S(t1 + t2) = M_S(t1) ∪ M_S(t2) and M_S(t1 * t2) = M_S(t1) ∩ M_S(t2).

Classifier-based semantics M_S of d-terms in S = (X, A ∪ {d[1], d[2], …, d[k]}, V): if t is a simple d-term in S and {r_j = [t_j → t] : j ∈ J_t} is the set of all rules defining t that a classifier extracts from S, then

M_S(t) = {(x, p_x) : (∃j ∈ J_t)(x ∈ M_S(t_j)) ∧ p_x = Σ{conf(j)·sup(j) : x ∈ M_S(t_j), j ∈ J_t} / Σ{sup(j) : x ∈ M_S(t_j), j ∈ J_t}},

where conf(j) and sup(j) denote the confidence and the support of [t_j → t], respectively.
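
The weighting reads off directly: an object x covered by several rules for t gets the confidence-weighted average of those rules, sup-weighted. A runnable sketch; the coverage sets, confidences, and supports below are invented for illustration only:

X = ["x1", "x2", "x3", "x4"]

def classifier_semantics(rules):
    """rules: list of (covered, conf, sup) for rules [t_j -> t], where
    covered = M_S(t_j). Returns M_S(t) as {x: p_x} with
    p_x = sum(conf*sup) / sum(sup) over the rules covering x."""
    MS_t = {}
    for x in X:
        fired = [(conf, sup) for covered, conf, sup in rules if x in covered]
        if fired:
            MS_t[x] = (sum(c * s for c, s in fired)
                       / sum(s for _, s in fired))
    return MS_t

rules_for_t = [({"x1", "x2"}, 0.9, 10), ({"x2", "x4"}, 0.6, 5)]
print(classifier_semantics(rules_for_t))
# {'x1': 0.9, 'x2': 0.8, 'x4': 0.6}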

Standard Semantics of D-queries

An attribute value d[j1, j2, …, jn] in S is dependent on every attribute value that is either its ancestor or its descendant in the hierarchy of d[j1], and independent from any other attribute value in S. In the running example, d[3,1] is dependent on its ancestor d[3] but independent from d[1] and d[2].

Let S = (X, A ∪ {d[1], d[2], …, d[k]}, V), w ∈ V_d[i], and let IV_d[i] be the set of all attribute values in V_d[i] that are independent from w. The standard semantics N_S of d-terms in S is defined as follows:
- N_S(0) = ∅, N_S(1) = X,
- if w ∈ V_d[i], then N_S(w) = {x ∈ X : d[i](x) = w}, for any 1 ≤ i ≤ k,
- if w ∈ V_d[i], then N_S(~w) = {x ∈ X : (∃v ∈ IV_d[i])[d[i](x) = v]}, for any 1 ≤ i ≤ k,
- if t1, t2 are d-terms, then N_S(t1 + t2) = N_S(t1) ∪ N_S(t2) and N_S(t1 * t2) = N_S(t1) ∩ N_S(t2).
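
A sketch of N_S on atomic d-terms for the running example; dependent and ancestors implement the ancestor-or-descendant notion above, and all helper names are ours:

d = {"x1": "d[3]", "x2": "d[3,1]", "x3": "d[1]"}   # d(x), a partial function
parent = {"d[3,1]": "d[3]"}                        # value hierarchy of d

def ancestors(v):
    out = set()
    while v in parent:
        v = parent[v]
        out.add(v)
    return out

def dependent(v, w):
    return v == w or w in ancestors(v) or v in ancestors(w)

def N(w):
    """N_S(w) = {x : d[i](x) = w}."""
    return {x for x, v in d.items() if v == w}

def N_neg(w):
    """N_S(~w) = {x : d[i](x) is some value independent from w}."""
    return {x for x, v in d.items() if not dependent(v, w)}

print(N("d[3]"))      # {'x1'}
print(N_neg("d[3]"))  # {'x3'}; x2 is excluded since d[3,1] depends on d[3]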

The Overlap of Semantics

Let S = (X, A ∪ {d[1], d[2], …, d[k]}, V), let t be a query (d-term) in S, let N_S(t) be its meaning under the standard semantics, and let M_S(t) be its meaning under the classifier-based semantics.

Assume that N_S(t) = {x_i : i ∈ I1} ∪ {y_i : i ∈ I2} and M_S(t) = {(x_i, p_i) : i ∈ I1} ∪ {(z_i, q_i) : i ∈ I3}, where {y_i : i ∈ I2} ∩ {z_i : i ∈ I3} = ∅.

[Venn diagram: N_S(t) covers the objects indexed by I1 ∪ I2, M_S(t) those indexed by I1 ∪ I3; the two semantics overlap exactly on I1.]
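
With N_S(t) as a plain set of objects and M_S(t) as a weighted dict, the three index regions of the diagram are just set differences; the toy values below are illustrative only:

NS_t = {"x1", "x2", "x3"}                 # standard semantics of t
MS_t = {"x1": 0.9, "x2": 0.8, "x4": 0.6}  # classifier-based semantics of t

I1 = NS_t & MS_t.keys()    # returned by both semantics
I2 = NS_t - MS_t.keys()    # standard semantics only
I3 = MS_t.keys() - NS_t    # classifier only
print(I1, I2, I3)          # e.g. {'x1', 'x2'} {'x3'} {'x4'}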

Precision & Recall

Precision Prec(M_S, t) of the classifier-based semantics M_S on a query t:

Prec(M_S, t) = [Σ{p_i : i ∈ I1} + Σ{(1 − q_i) : i ∈ I3}] / [card(I1) + card(I3)].

Recall Rec(M_S, t) of the classifier-based semantics M_S on a query t:

Rec(M_S, t) = Σ{p_i : i ∈ I1} / [card(I1) + card(I2)].
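
Both formulas in code, continuing the toy values from the overlap sketch: objects in I1 contribute their weights p_i, objects found only by the classifier (I3) contribute 1 − q_i to precision, and objects missed by the classifier (I2) lower recall.

NS_t = {"x1", "x2", "x3"}
MS_t = {"x1": 0.9, "x2": 0.8, "x4": 0.6}
I1, I2, I3 = NS_t & MS_t.keys(), NS_t - MS_t.keys(), MS_t.keys() - NS_t

prec = (sum(MS_t[x] for x in I1) + sum(1 - MS_t[x] for x in I3)) \
       / (len(I1) + len(I3))
rec = sum(MS_t[x] for x in I1) / (len(I1) + len(I2))
print(round(prec, 3), round(rec, 3))   # 0.7 0.567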