Rough Sets.


Rough Sets

Basic Concepts of Rough Sets

- Information/Decision Systems (Tables)
- Indiscernibility
- Set Approximation
- Reducts and Core
- Rough Membership
- Dependency of Attributes

Information Systems/Tables

An information system (IS) is a pair (U, A), where U is a non-empty finite set of objects, and A is a non-empty finite set of attributes such that a : U → V_a for every a ∈ A. V_a is called the value set of a.

Example (LEMS = Lower Extremity Motor Score):

     Age    LEMS
x1   16-30  50
x2   16-30  0
x3   31-45  1-25
x4   31-45  1-25
x5   46-60  26-49
x6   16-30  26-49
x7   46-60  26-49
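To make the definition concrete, here is a minimal Python sketch of this information system as plain data (the names `table` and `value_set` are ours, not rough set terminology):

```python
# The information system (U, A) above as plain Python data.
table = {
    "x1": {"Age": "16-30", "LEMS": "50"},
    "x2": {"Age": "16-30", "LEMS": "0"},
    "x3": {"Age": "31-45", "LEMS": "1-25"},
    "x4": {"Age": "31-45", "LEMS": "1-25"},
    "x5": {"Age": "46-60", "LEMS": "26-49"},
    "x6": {"Age": "16-30", "LEMS": "26-49"},
    "x7": {"Age": "46-60", "LEMS": "26-49"},
}
U = set(table)          # the objects
A = {"Age", "LEMS"}     # the attributes

def value_set(a):
    """V_a: the set of values attribute a takes on U."""
    return {table[x][a] for x in U}

print(value_set("Age"))  # {'16-30', '31-45', '46-60'}
```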

Decision Systems/Tables

A decision system (DS) is a pair (U, A ∪ {d}), where d ∉ A is the decision attribute (instead of one we can consider more decision attributes). The elements of A are called the condition attributes.

     Age    LEMS   Walk
x1   16-30  50     yes
x2   16-30  0      no
x3   31-45  1-25   no
x4   31-45  1-25   yes
x5   46-60  26-49  no
x6   16-30  26-49  yes
x7   46-60  26-49  no

Indiscernibility

An equivalence relation is a binary relation R ⊆ X × X which is reflexive (xRx for any object x), symmetric (if xRy then yRx), and transitive (if xRy and yRz then xRz). The equivalence class [x]_R of an element x ∈ X consists of all objects y ∈ X such that xRy.

Indiscernibility (2)

Let IS = (U, A) be an information system. With any B ⊆ A there is an associated equivalence relation

    IND_IS(B) = {(x, x') ∈ U × U | a(x) = a(x') for every a ∈ B},

called the B-indiscernibility relation. If (x, x') ∈ IND_IS(B), then objects x and x' are indiscernible from each other by attributes from B. The equivalence classes of the B-indiscernibility relation are denoted [x]_B.
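A small sketch of IND(B) in Python, assuming the Age/LEMS table above: grouping objects by their tuple of B-values yields exactly the equivalence classes (the helper name `ind_partition` is ours).

```python
from collections import defaultdict

table = {
    "x1": {"Age": "16-30", "LEMS": "50"},
    "x2": {"Age": "16-30", "LEMS": "0"},
    "x3": {"Age": "31-45", "LEMS": "1-25"},
    "x4": {"Age": "31-45", "LEMS": "1-25"},
    "x5": {"Age": "46-60", "LEMS": "26-49"},
    "x6": {"Age": "16-30", "LEMS": "26-49"},
    "x7": {"Age": "46-60", "LEMS": "26-49"},
}

def ind_partition(B):
    """Equivalence classes of IND(B): objects with equal values on all of B."""
    groups = defaultdict(set)
    for x, row in table.items():
        groups[tuple(row[a] for a in sorted(B))].add(x)
    return list(groups.values())

# Reproduces the partitions on the next slide (element order may vary):
print(ind_partition({"Age"}))          # [{x1,x2,x6}, {x3,x4}, {x5,x7}]
print(ind_partition({"LEMS"}))         # [{x1}, {x2}, {x3,x4}, {x5,x6,x7}]
print(ind_partition({"Age", "LEMS"}))  # [{x1}, {x2}, {x3,x4}, {x5,x7}, {x6}]
```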

An Example of Indiscernibility

The non-empty subsets of the condition attributes are {Age}, {LEMS}, and {Age, LEMS}. Using the decision table above:

IND({Age}) = {{x1, x2, x6}, {x3, x4}, {x5, x7}}
IND({LEMS}) = {{x1}, {x2}, {x3, x4}, {x5, x6, x7}}
IND({Age, LEMS}) = {{x1}, {x2}, {x3, x4}, {x5, x7}, {x6}}

Set Approximation

Let T = (U, A), B ⊆ A, and X ⊆ U. We can approximate X using only the information contained in B by constructing the B-lower and B-upper approximations of X, denoted B̲X and B̄X respectively, where

    B̲X = {x | [x]_B ⊆ X},
    B̄X = {x | [x]_B ∩ X ≠ ∅}.
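A Python sketch of both approximations over the decision table (the function names `lower` and `upper` are ours); it anticipates the Walk example on the next slide:

```python
from collections import defaultdict

# Decision table from the slides: condition attributes Age, LEMS
# plus the decision attribute Walk.
table = {
    "x1": {"Age": "16-30", "LEMS": "50",    "Walk": "yes"},
    "x2": {"Age": "16-30", "LEMS": "0",     "Walk": "no"},
    "x3": {"Age": "31-45", "LEMS": "1-25",  "Walk": "no"},
    "x4": {"Age": "31-45", "LEMS": "1-25",  "Walk": "yes"},
    "x5": {"Age": "46-60", "LEMS": "26-49", "Walk": "no"},
    "x6": {"Age": "16-30", "LEMS": "26-49", "Walk": "yes"},
    "x7": {"Age": "46-60", "LEMS": "26-49", "Walk": "no"},
}

def ind_classes(B):
    """Equivalence classes of the B-indiscernibility relation."""
    groups = defaultdict(set)
    for x, row in table.items():
        groups[tuple(row[a] for a in sorted(B))].add(x)
    return list(groups.values())

def lower(B, X):
    """B-lower approximation: union of B-classes fully contained in X."""
    return {x for c in ind_classes(B) if c <= X for x in c}

def upper(B, X):
    """B-upper approximation: union of B-classes that intersect X."""
    return {x for c in ind_classes(B) if c & X for x in c}

B = {"Age", "LEMS"}
W = {x for x, row in table.items() if row["Walk"] == "yes"}  # {x1, x4, x6}
print(lower(B, W))                # {'x1', 'x6'}
print(upper(B, W))                # {'x1', 'x3', 'x4', 'x6'}
print(upper(B, W) - lower(B, W))  # boundary region: {'x3', 'x4'}
```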

An Example of Set Approximation

Let W = {x | Walk(x) = yes} = {x1, x4, x6} (using the decision table above). With A = {Age, LEMS}:

    A̲W = {x1, x6},  ĀW = {x1, x3, x4, x6}.

The decision class Walk is rough, since the boundary region ĀW − A̲W = {x3, x4} is not empty.

An Example of Set Approximation (2)

[Figure: the three regions of U induced by the approximations of W — the lower approximation A̲W = {{x1}, {x6}} ("yes"), the boundary {{x3, x4}} ("yes/no"), and the outside region {{x2}, {x5, x7}} ("no").]

Lower & Upper Approximations

[Figure: the partition U/R induced by a subset of attributes R, with a set X drawn over it; the classes lying entirely inside X form the lower approximation, and the classes touching X form the upper approximation.]

Lower & Upper Approximations (2)

Lower approximation: R̲X = ∪ {Y ∈ U/R | Y ⊆ X}
Upper approximation: R̄X = ∪ {Y ∈ U/R | Y ∩ X ≠ ∅}

Lower & Upper Approximations (3)

The indiscernibility classes defined by R = {Headache, Temp.} are {u1}, {u2}, {u3}, {u4}, {u5, u7}, {u6, u8}.

X1 = {u | Flu(u) = yes} = {u2, u3, u6, u7}
R̲X1 = {u2, u3}
R̄X1 = {u2, u3, u5, u6, u7, u8}

X2 = {u | Flu(u) = no} = {u1, u4, u5, u8}
R̲X2 = {u1, u4}
R̄X2 = {u1, u4, u5, u6, u7, u8}

Lower & Upper Approximations (4)

R = {Headache, Temp.}, U/R = {{u1}, {u2}, {u3}, {u4}, {u5, u7}, {u6, u8}}
X1 = {u | Flu(u) = yes} = {u2, u3, u6, u7}, X2 = {u | Flu(u) = no} = {u1, u4, u5, u8}
R̲X1 = {u2, u3}, R̄X1 = {u2, u3, u5, u6, u7, u8}
R̲X2 = {u1, u4}, R̄X2 = {u1, u4, u5, u6, u7, u8}

[Figure: the six classes of U/R drawn with X1 and X2 overlaid; the objects u1–u8 scattered on the original slide mark where each class falls relative to the two sets.]

Issues in the Decision Table

- The same or indiscernible objects may be represented several times.
- Some of the attributes may be superfluous (redundant); that is, their removal cannot worsen the classification.

Reducts

Keep only those attributes that preserve the indiscernibility relation and, consequently, the set approximation. There are usually several such subsets of attributes; those which are minimal are called reducts.

Independent

T = (U, C, D) is independent if all attributes c ∈ C are indispensable in T.

Reduct & Core

The set of attributes R ⊆ C is called a reduct of C if T' = (U, R, D) is independent and POS_R(D) = POS_C(D).

The set of all the condition attributes indispensable in T is denoted by CORE(C):

    CORE(C) = ∩ RED(C),

where RED(C) is the set of all reducts of C.
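A brute-force sketch of these definitions in Python, with names of our choosing: POS_B(D) is the union of B-classes on which the decision is constant, and a reduct is a minimal R ⊆ C with POS_R(D) = POS_C(D). Since the raw flu table behind the next slide is not reproduced in this transcript, the demo uses the four-object table from the discernibility-matrix example further below.

```python
from collections import defaultdict
from itertools import combinations

def ind_classes(rows, B):
    """Equivalence classes of the B-indiscernibility relation."""
    groups = defaultdict(set)
    for x, row in rows.items():
        groups[tuple(row[a] for a in sorted(B))].add(x)
    return list(groups.values())

def positive_region(rows, dec, B):
    """POS_B(D): union of B-classes on which the decision is constant."""
    return {x for c in ind_classes(rows, B)
            if len({dec[y] for y in c}) == 1 for x in c}

def all_reducts(rows, dec, C):
    """Minimal subsets R of C preserving the positive region (brute force)."""
    target, found = positive_region(rows, dec, C), []
    for k in range(1, len(C) + 1):
        for R in combinations(sorted(C), k):
            if positive_region(rows, dec, set(R)) == target and \
               not any(set(r) <= set(R) for r in found):
                found.append(R)
    return found

rows = {"u1": {"a": 0, "b": 1, "c": 1}, "u2": {"a": 1, "b": 1, "c": 0},
        "u3": {"a": 0, "b": 2, "c": 1}, "u4": {"a": 1, "b": 1, "c": 1}}
dec = {"u1": "y", "u2": "n", "u3": "n", "u4": "y"}

reds = all_reducts(rows, dec, {"a", "b", "c"})
print(reds)                               # [('b', 'c')]
print(set.intersection(*map(set, reds)))  # CORE = intersection of reducts
```

For this particular table the unique reduct {b, c} coincides with the core; when several reducts exist (as in the flu example), the core is their intersection.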

An Example of Reducts & Core

Reduct1 = {Muscle-pain, Temp.}
Reduct2 = {Headache, Temp.}
CORE = {Headache, Temp.} ∩ {Muscle-pain, Temp.} = {Temp.}

Discernibility Matrix (relative to positive region)

Let T = (U, C, D) be a decision table, with U = {u1, u2, …, un}. By the discernibility matrix of T, denoted M(T), we mean the n × n matrix with entries

    c_ij = {a ∈ C | a(u_i) ≠ a(u_j)} whenever d(u_i) ≠ d(u_j),

for i, j = 1, 2, …, n such that u_i or u_j belongs to the C-positive region of D. c_ij is the set of all the condition attributes that classify objects u_i and u_j into different classes.

Discernibility Matrix (relative to positive region) (2)

The discernibility function of T is

    f(T) = ∧ { ∨ c_ij | c_ij ≠ ∅ },

where ∨ c_ij is the disjunction of the attributes in c_ij and the conjunction is taken over all non-empty entries of M(T) corresponding to the indices i, j such that u_i or u_j belongs to the C-positive region of D. λ denotes that a case does not need to be considered; hence it is interpreted as logical truth. All disjuncts of the minimal disjunctive form of this function define the reducts of T (relative to the positive region).

Discernibility Function (relative to objects)

For any u_i ∈ U,

    f(u_i, T) = ∧ { ∨ c_ij | j = 1, …, n },

where ∨ c_ij
(1) is the disjunction of all variables a such that a ∈ c_ij, if c_ij ≠ ∅ and c_ij ≠ λ;
(2) is falsity, if c_ij = ∅;
(3) is truth, if c_ij = λ.

Each logical product in the minimal disjunctive normal form (DNF) of f(u_i, T) defines a reduct of the instance u_i.

Examples of Discernibility Matrix

In order to discern the equivalence classes of the decision attribute d, we preserve the conditions described by the discernibility matrix of the following table (C = {a, b, c}, D = {d}):

    No   a    b    c    d
    u1   a0   b1   c1   y
    u2   a1   b1   c0   n
    u3   a0   b2   c1   n
    u4   a1   b1   c1   y

         u1     u2    u3
    u2   a,c
    u3   b      λ
    u4   λ      c     a,b

f(T) = (a ∨ c) ∧ b ∧ c ∧ (a ∨ b) = b ∧ c, hence Reduct = {b, c}.
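The same example in Python (variable names are ours): build M(T) over pairs with different decisions, then find the minimal attribute sets hitting every non-empty entry — which is exactly what the minimal DNF of the discernibility function computes.

```python
from itertools import combinations

rows = {"u1": {"a": "a0", "b": "b1", "c": "c1"},
        "u2": {"a": "a1", "b": "b1", "c": "c0"},
        "u3": {"a": "a0", "b": "b2", "c": "c1"},
        "u4": {"a": "a1", "b": "b1", "c": "c1"}}
dec = {"u1": "y", "u2": "n", "u3": "n", "u4": "y"}
C = ["a", "b", "c"]

# Discernibility matrix: for pairs with different decisions, the set of
# attributes on which the two objects differ (same-decision pairs are
# the λ entries and are skipped).
M = {(ui, uj): {a for a in C if rows[ui][a] != rows[uj][a]}
     for ui, uj in combinations(sorted(rows), 2) if dec[ui] != dec[uj]}
print(M)  # {('u1','u2'): {'a','c'}, ('u1','u3'): {'b'},
          #  ('u2','u4'): {'c'}, ('u3','u4'): {'a','b'}}

# f(T) = (a∨c) ∧ b ∧ c ∧ (a∨b); a subset R satisfies f(T) iff it
# intersects every entry, so the minimal hitting sets are the reducts.
reducts = []
for k in range(1, len(C) + 1):
    for R in combinations(C, k):
        if all(set(R) & e for e in M.values()) and \
           not any(set(r) <= set(R) for r in reducts):
            reducts.append(R)
print(reducts)  # [('b', 'c')] — matching Reduct = {b, c} above
```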

Examples of Discernibility Matrix (2)

The non-empty entries of the discernibility matrix for a second, seven-object table (the table itself is not reproduced on the slide) are:

    b,c,d   b,c   b   b,d   c,d   a,b,c,d   a,b,c   a,b,c,d   a,b,c,d   a,b   c,d   c,d

Since {b} is the only singleton entry, Core = {b}. Every entry either contains b or equals {c, d}, so the discernibility function simplifies to b ∧ (c ∨ d) = (b ∧ c) ∨ (b ∧ d):

Reduct1 = {b, c}
Reduct2 = {b, d}

The Goal of Attribute Selection

Find an optimal subset of attributes in a database according to some criterion, so that a classifier with the highest possible accuracy can be induced by a learning algorithm using information about the data available only from that subset of attributes.

Attribute Selection

Why Heuristics?

The number of possible reducts can be as large as C(N, ⌈N/2⌉), where N is the number of attributes. Selecting an optimal reduct among all possible reducts is computationally hard (finding a minimal reduct is NP-hard), so heuristics must be used.
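As a quick sense of scale (our own illustration, assuming Python 3.8+ for `math.comb`): there are 2^N − 1 non-empty attribute subsets, and the bound C(N, ⌈N/2⌉) on the number of reducts grows almost as fast.

```python
import math

for N in (10, 20, 40):
    subsets = 2 ** N - 1                    # non-empty attribute subsets
    bound = math.comb(N, math.ceil(N / 2))  # bound on the number of reducts
    print(N, subsets, bound)
# 10          1023          252
# 20       1048575       184756
# 40 1099511627775 137846528820
```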

Attribute Evaluation Criteria

- Select the attributes that make the number of consistent instances increase fastest, so as to obtain a subset of attributes that is as small as possible.
- Select an attribute that has a smaller number of different values, to guarantee that the number of instances covered by a rule is as large as possible.
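A greedy sketch combining the two criteria, in the spirit of the slides' heuristic but simplified by us (not the exact published RSH algorithm): grow R from the CORE, each time adding the attribute that makes the most instances consistent, breaking ties in favor of the attribute with fewer distinct values. The demo reuses the four-object table from the discernibility example.

```python
from collections import defaultdict

def ind_classes(rows, B):
    groups = defaultdict(set)
    for x, row in rows.items():
        groups[tuple(row[a] for a in sorted(B))].add(x)
    return list(groups.values())

def consistent_count(rows, dec, B):
    """Number of instances lying in B-classes with a constant decision."""
    if not B:
        return 0
    return sum(len(c) for c in ind_classes(rows, B)
               if len({dec[x] for x in c}) == 1)

def greedy_select(rows, dec, C, core):
    R, rest = set(core), set(C) - set(core)
    target = consistent_count(rows, dec, C)
    while rest and consistent_count(rows, dec, R) < target:
        # Criterion 1: fastest growth in consistent instances.
        # Criterion 2 (tie-break): fewer distinct attribute values.
        best = max(rest, key=lambda a: (consistent_count(rows, dec, R | {a}),
                                        -len({rows[x][a] for x in rows})))
        R.add(best)
        rest.remove(best)
    return R

rows = {"u1": {"a": 0, "b": 1, "c": 1}, "u2": {"a": 1, "b": 1, "c": 0},
        "u3": {"a": 0, "b": 2, "c": 1}, "u4": {"a": 1, "b": 1, "c": 1}}
dec = {"u1": "y", "u2": "n", "u3": "n", "u4": "y"}
print(greedy_select(rows, dec, {"a", "b", "c"}, core=set()))  # {'b', 'c'}
```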

Main Features of RSH

- It can select a better subset of attributes quickly and effectively from a large database.
- The selected attributes do not degrade the performance of induction much.

An Example of Attribute Selection

Condition attributes:
  a: V_a = {1, 2}
  b: V_b = {0, 1, 2}
  c: V_c = {0, 1, 2}
  d: V_d = {0, 1}

Decision attribute:
  e: V_e = {0, 1, 2}

Searching for CORE

Removing attribute a does not cause inconsistency. Hence a does not belong to the CORE.

Searching for CORE (2)

Removing attribute b causes inconsistency. Hence b belongs to the CORE.

Searching for CORE (3)

Removing attribute c does not cause inconsistency. Hence c does not belong to the CORE.

Searching for CORE (4)

Removing attribute d does not cause inconsistency. Hence d does not belong to the CORE.

Searching for CORE (5)

Attribute b is the unique indispensable attribute, so CORE(C) = {b} and the initial subset is R = {b}.

R = {b}

The table T is reduced to T': the instances containing b0 will not be considered further.


Selecting an Attribute from {a, c, d}

1. Selecting a: R = {a, b}.

[Figure: the partition U/{a,b} over the remaining instances u3–u7, compared against U/{e} = {{u3, u5, u6}, {u4}, {u7}}.]

Selecting an Attribute from {a, c, d} (2)

2. Selecting c: R = {b, c}.

[Figure: the partition U/{b,c} compared against U/{e} = {{u3, u5, u6}, {u4}, {u7}}.]

Selecting an Attribute from {a, c, d} (3)

3. Selecting d: R = {b, d}.

[Figure: the partition U/{b,d} compared against U/{e} = {{u3, u5, u6}, {u4}, {u7}}.]

Selecting an Attribute from {a, c, d} (4)

3. Selecting d: R = {b, d} (continued).

[Figure: U/{b,d} — with classes including {u3} and {u5, u6} — matched against U/{e} = {{u3, u5, u6}, {u4}, {u7}}.]

Result: the selected subset of attributes is {b, d}.