Action Rules Discovery Systems: DEAR1, DEAR2, ARED, … by Zbigniew W. Raś



LERS

Decision System S:

  X  | a  | b  | c  | d
  x1 | a1 | b1 | c1 | d1
  x2 | a2 | b1 | c2 | d1
  x3 | a2 | b2 | c2 | d1
  x4 | a2 | b1 | c1 | d1
  x5 | a2 | b3 | c2 | d1
  x6 | a1 | b1 | c2 | d2
  x7 | a1 | b2 | c2 | d1
  x8 | a1 | b2 | c1 | d3

Atomic terms: (a, a1), (a, a2), (b, b1), (b, b2), …, (d, d1), (d, d2)

Rule r = [(a, a2) * (b, b1)] → (d, d1), with supporting sets
  Y = {x2, x4} (objects matching the premise)
  Z = {x1, x2, x3, x4, x5, x7} (objects in decision class d1)

Support: sup(r) = 2
Confidence: conf(r) = 2/2 = 1
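As a quick illustration, the support and confidence computation for such a rule can be sketched in Python. This is a minimal sketch: the helper name rule_support_confidence is mine, not part of LERS.

```python
# Decision system S from the slide, encoded as object -> attribute values.
S = {
    "x1": {"a": "a1", "b": "b1", "c": "c1", "d": "d1"},
    "x2": {"a": "a2", "b": "b1", "c": "c2", "d": "d1"},
    "x3": {"a": "a2", "b": "b2", "c": "c2", "d": "d1"},
    "x4": {"a": "a2", "b": "b1", "c": "c1", "d": "d1"},
    "x5": {"a": "a2", "b": "b3", "c": "c2", "d": "d1"},
    "x6": {"a": "a1", "b": "b1", "c": "c2", "d": "d2"},
    "x7": {"a": "a1", "b": "b2", "c": "c2", "d": "d1"},
    "x8": {"a": "a1", "b": "b2", "c": "c1", "d": "d3"},
}

def rule_support_confidence(S, premise, decision):
    """Support and confidence of a rule [premise] -> decision."""
    # Y: objects matching every atomic term of the premise.
    Y = {x for x, row in S.items() if all(row[a] == v for a, v in premise)}
    # Z: objects in the rule's decision class.
    Z = {x for x, row in S.items() if row[decision[0]] == decision[1]}
    sup = len(Y & Z)
    conf = sup / len(Y) if Y else 0.0
    return sup, conf

sup, conf = rule_support_confidence(S, [("a", "a2"), ("b", "b1")], ("d", "d1"))
# Y = {x2, x4}, both in Z, so sup = 2 and conf = 1.0
```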

Action Rules Discovery (Preprocessing)

Partition decision table S (attributes a, b, c, e, f, d), where Stable = {a, b}, Flexible = {c, e, f}, Dom(a) = {1, 2, 3}, Dom(b) = {1, 2, 3, 4, 5}, and the reclassification direction is 2 → 1 or 3 → 1.

The root table T1 is split on the stable attribute a into sub-tables for a = 1, a = 2, and a = 3; one of these is split further on b (b = 1, b = 5), giving sub-tables T2 to T5. During splitting:
- If all objects in a sub-table have the same decision value, that sub-table is not analyzed any further.
- If none of the objects contain the desired class "1", that sub-table stops splitting any further.
- If all objects have the same value for an attribute (here, value 8 for attribute f), that attribute is crossed out from the sub-table (this condition is used for stable attributes as well).
- If all the flexible values are the same for both objects in a sub-table, that sub-table is not analyzed any further.
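The splitting and pruning step can be sketched as follows. This is an illustrative sketch with a made-up table; the function name and the row values are mine, not the actual T1 to T5 sub-tables from the figure.

```python
# Sketch of DEAR-style preprocessing: partition a decision table on a
# stable attribute, then prune sub-tables that cannot yield action rules.
from collections import defaultdict

def split_on_stable(table, stable_attr, decision_attr, desired_class):
    """Group rows by the stable attribute's value and keep only the
    sub-tables that still need further analysis."""
    groups = defaultdict(list)
    for row in table:
        groups[row[stable_attr]].append(row)
    kept = {}
    for value, rows in groups.items():
        decisions = {r[decision_attr] for r in rows}
        if len(decisions) == 1:
            continue  # all objects share one decision value: stop here
        if desired_class not in decisions:
            continue  # desired class absent: stop splitting this branch
        kept[value] = rows
    return kept

table = [
    {"a": 1, "c": 5, "d": 2},
    {"a": 1, "c": 6, "d": 1},
    {"a": 2, "c": 5, "d": 2},  # the a = 2 sub-table never reaches class 1
    {"a": 2, "c": 7, "d": 2},
    {"a": 3, "c": 5, "d": 3},
]
kept = split_on_stable(table, "a", "d", desired_class=1)
# Only the a = 1 sub-table survives: it mixes class 1 with another class.
```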

System DEAR1

Stable attributes: {a, c}; Flexible attribute: b; Decision attribute: d.

Set of rules R with supporting objects:

  r1: (a = 0) → (d = L), supported by {x1, x2, x3, x4}
  r2: (c = 0) → (d = L), supported by {x1, x3}
  r3: (b = 2) → (d = L), supported by {x2, x4}
  r4: (c = 1) → (d = L), supported by {x2, x4}
  r5: (b = 3) → (d = L), supported by {x5, x6}
  r6: (a = 2) ∧ (b = 1) → (d = H), supported by {x7, x8}
  r7: (b = 1) ∧ (c = 2) → (d = H), supported by {x7, x8}

[Figures: (d, H)-tree T1 and (d, L)-tree T2, with sub-tables T3 to T6 obtained by splitting the rule sets on the stable attribute values a = 0, a = 2, a = ? and c = 0, c = 1, c = 2, c = ?, where "?" collects rules that do not mention the attribute.]

Pairing the compatible leaves (T3, T1) yields the action rules:
  (a = 2) ∧ (b, 2 → 1) ⇒ (d, L → H)
  (a = 2) ∧ (b, 3 → 1) ⇒ (d, L → H)

System DEAR2

Stable attribute: b; Flexible attributes: {a, c}; Decision attribute: d.

Set of rules R with supporting objects:

  r1: (a = 0) → (d = L), supported by {x1, x2, x3, x4}
  r2: (c = 0) → (d = L), supported by {x1, x3}
  r3: (b = 2) → (d = L), supported by {x2, x4}
  r4: (c = 1) → (d = L), supported by {x2, x4}
  r5: (b = 3) → (d = L), supported by {x5, x6}
  r6: (a = 2) ∧ (b = 1) → (d = H), supported by {x7, x8}
  r7: (b = 1) ∧ (c = 2) → (d = H), supported by {x7, x8}

DEAR2 splits R on the values of the stable attribute b (b = 1, b = 2, b = 3); rules that do not mention b fall into every branch. Only the b = 1 sub-table contains rules for both decision values, so it is split into its d = L part (r1, r2, r4) and its d = H part (r6, r7). Comparing the two parts yields the action rules:

  (b = 1) ∧ (a, 0 → 2) ⇒ (d, L → H)
  (b = 1) ∧ (c, 0 → 2) ⇒ (d, L → H)
  (b = 1) ∧ (c, 1 → 2) ⇒ (d, L → H)
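The rule-pairing idea behind DEAR can be sketched in code: combine an H-rule with an L-rule that agrees on the stable attributes, turning disagreements on flexible attributes into actions. The function name and encoding are my own simplification, not the DEAR implementation.

```python
# Sketch: pair an L-rule with an H-rule to form an action rule.
# Rules are encoded as dicts attribute -> value.
def make_action_rule(l_rule, h_rule, stable):
    """Return the action rule's terms, or None when the stable parts
    of the two rules conflict (stable attributes cannot be changed)."""
    terms = []
    for attr, w in h_rule.items():
        v = l_rule.get(attr)
        if attr in stable:
            if v is not None and v != w:
                return None            # conflicting stable values: no rule
            terms.append((attr, w))    # stable condition, e.g. a = 2
        elif v is not None and v != w:
            terms.append((attr, v, w))  # flexible action, e.g. b: 2 -> 1
    return terms

# r3: (b = 2) -> L  paired with  r6: (a = 2) & (b = 1) -> H  (a stable):
rule = make_action_rule({"b": 2}, {"a": 2, "b": 1}, stable={"a"})
# [("a", 2), ("b", 2, 1)]  i.e.  (a = 2) & (b, 2 -> 1) => (d, L -> H)
```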

Cost of Action Rule

Action rule r: [(b1, v1 → w1) ∧ (b2, v2 → w2) ∧ … ∧ (bp, vp → wp)](x) ⇒ (d, k1 → k2)(x)

The cost of r in S: cost_S(r) = Σ { cost_S(vi, wi) : 1 ≤ i ≤ p }

Action rule r is feasible in S if cost_S(r) < cost_S(k1, k2). In other words, for any feasible action rule r, the cost of the conditional part of r is lower than the cost of its decision part.
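The feasibility check can be sketched as follows; the cost values below are illustrative, not taken from the slides.

```python
# Hedged sketch of the feasibility condition cost_S(r) < cost_S(k1, k2).
cost_S = {
    ("b1", "v1", "w1"): 2,   # cost of changing attribute b1 from v1 to w1
    ("b2", "v2", "w2"): 3,
    ("d", "k1", "k2"): 10,   # cost of the desired decision change
}

def rule_cost(premise_actions):
    """cost_S(r): sum of the costs of the atomic actions in the premise."""
    return sum(cost_S[a] for a in premise_actions)

premise = [("b1", "v1", "w1"), ("b2", "v2", "w2")]
r_cost = rule_cost(premise)                    # 2 + 3 = 5
feasible = r_cost < cost_S[("d", "k1", "k2")]  # 5 < 10: r is feasible
```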

Cost of Action Rule (continued)

Example:
r = [(b1, v1 → w1) ∧ … ∧ (bj, vj → wj) ∧ … ∧ (bp, vp → wp)](x) ⇒ (d, k1 → k2)(x)

Suppose that in R_S[(bj, vj → wj)] we find
r1 = [(bj1, vj1 → wj1) ∧ (bj2, vj2 → wj2) ∧ … ∧ (bjq, vjq → wjq)](x) ⇒ (bj, vj → wj)(x).

Then we can compose r with r1, replacing the term (bj, vj → wj) by the left-hand side of r1:

[(b1, v1 → w1) ∧ … ∧ [(bj1, vj1 → wj1) ∧ (bj2, vj2 → wj2) ∧ … ∧ (bjq, vjq → wjq)] ∧ … ∧ (bp, vp → wp)](x) ⇒ (d, k1 → k2)(x)
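The composition step above amounts to splicing one rule's premise into another's. A minimal sketch, with illustrative term names:

```python
# Sketch of composing action rules: replace an atomic action in the premise
# of r with the premise of a rule r1 whose conclusion is exactly that action.
def compose(r_premise, target_action, r1_premise):
    """Return r's premise with target_action replaced by r1's premise."""
    out = []
    for action in r_premise:
        if action == target_action:
            out.extend(r1_premise)  # splice in the sub-rule's premise
        else:
            out.append(action)
    return out

r_premise = [("b1", "v1", "w1"), ("bj", "vj", "wj"), ("bp", "vp", "wp")]
r1_premise = [("bj1", "vj1", "wj1"), ("bj2", "vj2", "wj2")]
new_premise = compose(r_premise, ("bj", "vj", "wj"), r1_premise)
# (bj, vj -> wj) is replaced by the two terms of r1's premise.
```

Composition is useful when r1 is cheaper to execute than the atomic action it replaces, lowering the overall cost of r.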

ARED

Decision System S (the same table as above).

Atomic action terms: (a, a1 → a1), (a, a2 → a2), (b, b1 → b1), (b, b2 → b2), …, (d, d1 → d1), (d, d2 → d2)

Action rule r = [(a, a2 → a2) * (b, b1 → b1)] → (d, d1 → d1): built from identity terms only, it corresponds to the classification rule above, with
  Y = {x2, x4}, Z = {x1, x2, x3, x4, x5, x7}

Support: sup(r) = 2
Confidence: conf(r) = 2/2 = 1

ARED

Decision System S (as above). Now consider an action rule with a genuine value change:

r = [(a, a2 → a1) * (b, b1 → b1)] → (d, d1 → d2)

Here each term w = (w1, w2) has a support pair Y = (Y1, Y2), and the decision term has a support pair Z = (Z1, Z2); for the premise, Y1 = {x2, x4} and, for the decision, Z1 = {x1, x2, x3, x4, x5, x7}.

sup(r) = ? conf(r) = ?

ARED

Decision System S (as above). Atomic action terms now also include value changes, e.g. (a, a1 → a2), (b, b1 → b2).

Action rule r = [(a, a2 → a1) * (b, b1 → b1)] → (d, d1 → d2), with support pair (Y1, Y2) for the premise and (Z1, Z2) for the decision (Y1 → Z1, Y2 → Z2):
  Y1 = {x2, x4}, Y2 = {x1, x6}
  Z1 = {x1, x2, x3, x4, x5, x7}, Z2 = {x6}

sup(r) = |Y1 ∩ Z1| = 2
conf(r) = (|Y1 ∩ Z1| / |Y1|) · (|Y2 ∩ Z2| / |Y2|) = (2/2) · (1/2) = 1/2
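The support-pair semantics can be sketched in code. The helper names are mine, and the confidence uses the product form that the ARED marking slides below apply.

```python
# Decision system S from the slides.
S = {
    "x1": {"a": "a1", "b": "b1", "c": "c1", "d": "d1"},
    "x2": {"a": "a2", "b": "b1", "c": "c2", "d": "d1"},
    "x3": {"a": "a2", "b": "b2", "c": "c2", "d": "d1"},
    "x4": {"a": "a2", "b": "b1", "c": "c1", "d": "d1"},
    "x5": {"a": "a2", "b": "b3", "c": "c2", "d": "d1"},
    "x6": {"a": "a1", "b": "b1", "c": "c2", "d": "d2"},
    "x7": {"a": "a1", "b": "b2", "c": "c2", "d": "d1"},
    "x8": {"a": "a1", "b": "b2", "c": "c1", "d": "d3"},
}

def N(S, attr, v, w):
    """Support pair of the atomic action term (attr, v -> w)."""
    return ({x for x, r in S.items() if r[attr] == v},
            {x for x, r in S.items() if r[attr] == w})

def action_rule_sup_conf(S, premise, decision):
    """Support and confidence of [premise] -> decision (product form)."""
    Y1, Y2 = set(S), set(S)
    for attr, v, w in premise:      # intersect the terms' support pairs
        P1, P2 = N(S, attr, v, w)
        Y1 &= P1
        Y2 &= P2
    Z1, Z2 = N(S, *decision)
    sup = len(Y1 & Z1)
    conf = 0.0
    if Y1 and Y2:
        conf = (len(Y1 & Z1) / len(Y1)) * (len(Y2 & Z2) / len(Y2))
    return sup, conf

sup, conf = action_rule_sup_conf(
    S, [("a", "a2", "a1"), ("b", "b1", "b1")], ("d", "d1", "d2"))
# Y1 = {x2, x4}, Y2 = {x1, x6}, so sup = 2 and conf = 1 * 1/2 = 0.5
```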

ARED

Decision System S (as above): b is the stable attribute; a and c are flexible attributes. Goal: object reclassification from class d1 to d2, with minimum support λ1 = 2 and minimum confidence λ2 = 1/4.

Meaning of (d, d1 → d2) in S: N_S(d, d1 → d2) = [{x1, x2, x3, x4, x5, x7}, {x6}]

Atomic classification terms:
  (b, b1 → b1), (b, b2 → b2), (b, b3 → b3)
  (a, a1 → a2), (a, a1 → a1), (a, a2 → a2), (a, a2 → a1)
  (c, c1 → c2), (c, c2 → c1), (c, c1 → c1), (c, c2 → c2)

ARED

Notation (S as above; reclassification d1 → d2; λ1 = 2 is the minimum support, λ2 = 1/4 the minimum confidence):

  t1 = (b, b1 → b1), t2 = (b, b2 → b2), t3 = (b, b3 → b3),
  t4 = (a, a1 → a2), t5 = (a, a1 → a1), t6 = (a, a2 → a2), t7 = (a, a2 → a1),
  t8 = (c, c1 → c2), t9 = (c, c2 → c1), t10 = (c, c1 → c1), t11 = (c, c2 → c2),
  t12 = (d, d1 → d2)

For the decision attribute in S: N_S(d, d1 → d2) = [{x1, x2, x3, x4, x5, x7}, {x6}]. (Reclassification d1 → d2; λ1 = 2, λ2 = 1/4.)

For the classification attributes in S:
  N_S(t1) = N_S(b, b1 → b1) = [{x1, x2, x4, x6}, {x1, x2, x4, x6}]: not marked (sup = 3 ≥ λ1)
  N_S(t2) = N_S(b, b2 → b2) = [{x3, x7, x8}, {x3, x7, x8}]: marked "-" (conf = 0 < λ2)
  N_S(t3) = N_S(b, b3 → b3) = [{x5}, {x5}]: marked "-" (sup = 1 < λ1)
  N_S(t4) = N_S(a, a1 → a2) = [{x1, x6, x7, x8}, {x2, x3, x4, x5}]: marked "-" (conf = 0 < λ2)

Continuing for the classification attributes in S (N_S(d, d1 → d2) = [{x1, x2, x3, x4, x5, x7}, {x6}]; λ1 = 2, λ2 = 1/4):
  N_S(t5) = N_S(a, a1 → a1) = [{x1, x6, x7, x8}, {x1, x6, x7, x8}]: not marked (sup = 2 ≥ λ1)
  N_S(t6) = N_S(a, a2 → a2) = [{x2, x3, x4, x5}, {x2, x3, x4, x5}]: marked "-" (conf = 0 < λ2)
  N_S(t7) = N_S(a, a2 → a1) = [{x2, x3, x4, x5}, {x1, x6, x7, x8}]: marked "+" (sup = 4 ≥ λ1, conf = 1/4 ≥ λ2)

Summary for the terms seen so far (N_S(t12) = [{x1, x2, x3, x4, x5, x7}, {x6}]; reclassification d1 → d2; λ1 = 2, λ2 = 1/4):
  N_S(t1) = [{x1, x2, x4, x6}, {x1, x2, x4, x6}]: not marked (sup = 3)
  N_S(t2) = [{x3, x7, x8}, {x3, x7, x8}]: marked "-" (conf = 0)
  N_S(t3) = [{x5}, {x5}]: marked "-" (sup = 1)
  N_S(t4) = [{x1, x6, x7, x8}, {x2, x3, x4, x5}]: marked "-" (conf = 0)
  N_S(t5) = [{x1, x6, x7, x8}, {x1, x6, x7, x8}]: not marked (sup = 2)
  N_S(t6) = [{x2, x3, x4, x5}, {x2, x3, x4, x5}]: marked "-" (conf = 0)
  N_S(t7) = [{x2, x3, x4, x5}, {x1, x6, x7, x8}]: marked "+" (sup = 4, conf = 1/4), giving the action rule r = [t7 → t12]

Continuing for the classification attributes in S (N_S(t12) = [{x1, x2, x3, x4, x5, x7}, {x6}]; reclassification d1 → d2; λ1 = 2, λ2 = 1/4):
  N_S(t8) = N_S(c, c1 → c2) = [{x1, x4, x8}, {x2, x3, x5, x6, x7}]: not marked (conf = 2/3 · 1/5 < λ2, but the support condition holds, so t8 may still be extended)
  N_S(t9) = N_S(c, c2 → c1) = [{x2, x3, x5, x6, x7}, {x1, x4, x8}]: marked "-" (conf = 0)
  N_S(t10) = N_S(c, c1 → c1) = [{x1, x4, x8}, {x1, x4, x8}]: marked "-" (conf = 0)
  N_S(t11) = N_S(c, c2 → c2) = [{x2, x3, x5, x6, x7}, {x2, x3, x5, x6, x7}]: not marked

Now action terms of length 2 are built from the unmarked action terms of length 1 (N_S(t12) = [{x1, x2, x3, x4, x5, x7}, {x6}]; λ1 = 2, λ2 = 1/4):
  N_S(t1) = [{x1, x2, x4, x6}, {x1, x2, x4, x6}], N_S(t5) = [{x1, x6, x7, x8}, {x1, x6, x7, x8}],
  N_S(t8) = [{x1, x4, x8}, {x2, x3, x5, x6, x7}], N_S(t11) = [{x2, x3, x5, x6, x7}, {x2, x3, x5, x6, x7}].

  N_S(t1*t5) = [{x1, x6}, {x1, x6}]: marked "-" (sup = 1 < λ1)
  N_S(t1*t8) = [{x1, x4}, {x2, x6}]: marked "+", giving the rule r = [t1*t8 → t12] with sup = 2 ≥ λ1 and conf = 1/2 ≥ λ2
  N_S(t1*t11) = [{x2, x6}, {x2, x6}]: marked "-" (sup = 1 < λ1)
  N_S(t5*t8) = [{x1, x8}, {x6, x7}]: marked "-" (sup = 1 < λ1)
  N_S(t5*t11) = [{x6, x7}, {x6, x7}]: marked "-" (sup = 1 < λ1)
  N_S(t8*t11) = [Ø, {x2, x3, x5, x6, x7}]: marked "-" (empty left support)
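The marking procedure follows an Apriori-like pattern. Here is a hedged sketch of the per-term decision as I read it from these slides: mark "-" when the support is too low or the confidence is zero and can never grow, mark "+" when both thresholds are met, and leave the term unmarked for later extension otherwise.

```python
# Sketch of the ARED marking step for one candidate action term.
# (Y1, Y2) is the term's support pair, (Z1, Z2) the decision term's.
def mark(Y1, Y2, Z1, Z2, lam1, lam2):
    sup = len(Y1 & Z1)
    if sup < lam1:
        return "-"           # support can only shrink on extension: prune
    if not (Y2 & Z2):
        return "-"           # confidence is 0 and stays 0: prune
    conf = (sup / len(Y1)) * (len(Y2 & Z2) / len(Y2))
    return "+" if conf >= lam2 else None  # None = unmarked, extend later

Z1 = {"x1", "x2", "x3", "x4", "x5", "x7"}
Z2 = {"x6"}
# t7 = (a, a2 -> a1): sup = 4, conf = 1/4, marked "+"
m7 = mark({"x2", "x3", "x4", "x5"}, {"x1", "x6", "x7", "x8"}, Z1, Z2, 2, 0.25)
# t8 = (c, c1 -> c2): sup ok but conf = 2/15 < 1/4, left unmarked (None)
m8 = mark({"x1", "x4", "x8"}, {"x2", "x3", "x5", "x6", "x7"}, Z1, Z2, 2, 0.25)
```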

ARED Algorithm: output

For the decision attribute in S: N_S(t12) = [{x1, x2, x3, x4, x5, x7}, {x6}]; object reclassification from class d1 to d2; λ1 = 2, λ2 = 1/4.

Action rules:
  [(b, b1 → b1) * (c, c1 → c2)] → (d, d1 → d2)
  [(a, a2 → a1)] → (d, d1 → d2)

Thank You. Questions?