Data Mining-Knowledge Presentation 2 Prof. Sin-Min Lee.


1 Data Mining-Knowledge Presentation 2 Prof. Sin-Min Lee

2 Overview
- Association rules are useful in that they suggest hypotheses for future research.
- Association rules integrated into the generic/actual argument model can assist in identifying the most plausible claim from given data items (forward inference) or the likelihood of missing data values (backward inference).

3 What is data mining? What is knowledge discovery from databases (KDD)?
- Knowledge discovery in databases (KDD) is the non-trivial extraction of implicit, previously unknown, and potentially useful information from data.

4
- KDD encompasses a number of different technical approaches, such as clustering, data summarization, learning classification rules, finding dependency networks, analyzing changes, and detecting anomalies.
- KDD has emerged only recently because only recently have we been gathering vast quantities of data.

5 Examples of KDD studies
- Mangasarian et al. (1997): breast cancer diagnosis. A sample from a breast lump mass is assessed by:
  - mammography (not sensitive: 68%-79%)
  - data mining from FNA test results and visual inspection (65%-98%)
  - surgery (100%, but invasive and expensive)
- Basket analysis: people who buy nappies also buy beer.
- NBA (National Basketball Association): player pattern profiles, Bhandary et al. (1997)
- Credit card fraud detection
- Stranieri/Zeleznikow (1997): predict family law property outcomes
- Rissland and Friedman (1997): discover a change in the concept of 'good faith' in US Bankruptcy cases
- Pannu (1995): discover a prototypical case from a library of cases
- Wilkins and Pillaipakkamnatt (1997): predict the time a case takes to be heard
- Veliev et al. (1999): association rules for economic analysis

6 Overview of the process of knowledge discovery in databases:
Raw data → (select) → Target data → (pre-process) → Pre-processed data → (transform) → Transformed data → (data mining) → Patterns → (interpret) → Knowledge
from Fayyad, Piatetsky-Shapiro, Smyth (1996)

7 Phase 4. Data mining
- Finding patterns in data or fitting models to data
- Categories of techniques:
  - Predictive (classification: neural networks, rule induction, linear and multiple regression)
  - Segmentation (clustering: k-means, k-median)
  - Summarisation (associations, visualisation)
  - Change detection/modelling

8 What Is Association Mining?
Association rule mining:
- Finding frequent patterns, associations, correlations, or causal structures among sets of items or objects in transaction databases, relational databases, and other information repositories.
Applications:
- Basket data analysis, cross-marketing, catalog design, loss-leader analysis, clustering, classification, etc.
Examples:
- Rule form: "Body → Head [support, confidence]"
- buys(x, "diapers") → buys(x, "beers") [0.5%, 60%]
- major(x, "CS") ^ takes(x, "DB") → grade(x, "A") [1%, 75%]

9 More examples
- age(X, "20..29") ^ income(X, "20..29K") → buys(X, "PC") [support = 2%, confidence = 60%]
- contains(T, "computer") → contains(T, "software") [1%, 75%]

10 Association rules are a data mining technique. An association rule tells us something about the association between two attributes. Agrawal et al. (1993) developed the first association rule algorithm, Apriori. A famous (but unsubstantiated) association rule from a hypothetical supermarket transaction database is: if nappies then beer (80%). Read this as: nappies being bought implies beer is bought 80% of the time. Association rules have only recently been applied to law, with promising results. Association rules can automatically discover rules that may prompt an analyst to think of hypotheses they would not otherwise have considered.

11 Rule Measures: Support and Confidence
Find all the rules X & Y → Z with minimum confidence and support:
- support, s: probability that a transaction contains {X ∪ Y ∪ Z}
- confidence, c: conditional probability that a transaction containing {X ∪ Y} also contains Z
With minimum support 50% and minimum confidence 50%, we have:
- A → C (50%, 66.6%)
- C → A (50%, 100%)
[Venn diagram: customers buying diapers, beer, or both]
Support and confidence are two independent notions.

12 Mining Association Rules — An Example
Min. support 50%, min. confidence 50%.
For rule A → C:
- support = support({A ∪ C}) = 50%
- confidence = support({A ∪ C}) / support({A}) = 66.6%
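The 50% support and 66.6% confidence above can be reproduced with a short sketch. The four-transaction database below is an assumption chosen so that A appears in 3 of 4 transactions and {A, C} in 2 of 4, matching the slide's figures; the function names `support` and `confidence` are illustrative.

```python
# Hypothetical transaction database matching the slide's numbers:
# A appears in 3 of 4 transactions, {A, C} in 2 of 4.
transactions = [
    {"A", "B", "C"},
    {"A", "C"},
    {"A", "D"},
    {"B", "E", "F"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item in `itemset`."""
    itemset = set(itemset)
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(lhs, rhs, transactions):
    """conf(lhs => rhs) = support(lhs ∪ rhs) / support(lhs)."""
    return support(set(lhs) | set(rhs), transactions) / support(lhs, transactions)

print(support({"A", "C"}, transactions))       # 0.5    -> 50% support for A => C
print(confidence({"A"}, {"C"}, transactions))  # 0.666… -> 66.6% confidence
```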

13 Two Step Association Rule Mining Step 1: Frequent itemset generation – use Support Step 2: Rule generation – use Confidence

14 {milk, bread} is a frequent itemset. Folks buying milk also buy bread. Is the converse also true: "Folks buying bread also buy milk"?

15 Confidence and support of an association rule
80% is the confidence of the rule if nappies then beer (80%). It is calculated as n2/n1, where:
- n1 = number of records where nappies were bought
- n2 = number of records where nappies were bought and beer was also bought
If there were 1000 transactions with nappies, and of those 800 also had beer, then confidence is 80%.
A rule may have high confidence but still not be interesting, because it applies to few records in the database. Support measures this: the number of records where nappies were bought with beer, divided by the total number of records.
Rules that may be interesting have confidence and support levels above user-set thresholds.
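The arithmetic above can be checked directly. The total database size of 20,000 transactions is a hypothetical figure added here to show why 80% confidence can coexist with low support:

```python
n1 = 1000      # transactions where nappies were bought
n2 = 800       # of those, transactions that also contained beer
total = 20000  # assumed total number of transactions in the database

confidence = n2 / n1   # 0.80 -> the 80% in "if nappies then beer (80%)"
support = n2 / total   # 0.04 -> only 4% of all records: high confidence, low support
print(confidence, support)
```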

16 Interesting rules: confidence and support of an association rule
In the nappies/beer example, confidence is 800/1000 = 80%, but the rule's support relative to the whole database may still be low. Rules that may be interesting have both confidence and support above user-set thresholds.

17 Association rule screen shot with A-Miner from the Split Up data set
In 73.4% of cases where the wife's needs are some to high, the husband's future needs are few to some. This prompts an analyst to posit plausible hypotheses: e.g. the rule may reflect the fact that more women than men remain custodial parents of the children following divorce. The women that have some to high needs may do so because of their obligation to children.

18 Mining Frequent Itemsets: the Key Step
Find the frequent itemsets: the sets of items that have minimum support.
- Every subset of a frequent itemset must also be a frequent itemset (the Apriori principle); i.e., if {A B} is a frequent itemset, both {A} and {B} must be frequent itemsets.
- Iteratively find frequent itemsets with cardinality from 1 to k (k-itemsets).
Use the frequent itemsets to generate association rules.

19 The Apriori Algorithm
Join Step: Ck is generated by joining Lk-1 with itself.
Prune Step: Any (k-1)-itemset that is not frequent cannot be a subset of a frequent k-itemset.
Pseudo-code (Ck: candidate itemsets of size k; Lk: frequent itemsets of size k):
  L1 = {frequent items};
  for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
      increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with min_support
  end
  return ∪k Lk;

20 Association rules in law
Association rule generators are typically packaged with very expensive data mining suites. We developed A-Miner (available from the authors) for a PC platform. Typically, too many association rules are generated for feasible analysis, so our current research involves exploring metrics of interestingness to restrict the number of rules to those that might be interesting. In general, structured data is not collected in law as it is in other domains, so very large databases are rare. Our current research involves 380,000 records from a Legal Aid organization database that contains data on client features, and ArgumentDeveloper, a shell that can be used by judges to structure their reasoning in a way that will facilitate data collection and reasoning.

21 The Apriori Algorithm — Example
[Figure: database D is scanned to produce candidate sets C1, C2 and frequent sets L1, L2, with minimum support = 2.]

22 Join Operation — Example
Joining L2 = {{1 3}, {2 3}, {2 5}, {3 5}} with itself yields the candidates {1 2 3}, {1 3 5}, and {2 3 5}; the other pairs produce no valid join (null).
Pruning: {1 2 3} is dropped because its subset {1 2} is infrequent, and {1 3 5} because {1 5} is infrequent, leaving C3 = {{2 3 5}}. Scanning D confirms L3 = {{2 3 5}}.

23 Anti-Monotone Property
If a set cannot pass a test, all of its supersets will fail the same test as well. If {2 3} does not have minimum support, neither will {1 2 3}, {2 3 5}, {1 2 3 5}, etc. If {2 3} occurs in only 5 transactions, can {2 3 5} occur in 8? No: every transaction containing {2 3 5} also contains {2 3}.

24 How to Generate Candidates?
Suppose the items in Lk-1 are listed in an order.
Step 1: self-joining Lk-1
  insert into Ck
  select p.item1, p.item2, …, p.itemk-1, q.itemk-1
  from Lk-1 p, Lk-1 q
  where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1
Step 2: pruning
  forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
      if (s is not in Lk-1) then delete c from Ck

25 Example of Generating Candidates
L3 = {abc, abd, acd, ace, bcd}
Self-joining L3 * L3:
- abcd from abc and abd
- acde from acd and ace
Pruning:
- acde is removed because ade is not in L3
C4 = {abcd}
This illustrates the problem of the generate-and-test heuristic: candidates may be generated only to be pruned.
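The join-and-prune example above can be sketched in code. One simplification to flag: instead of the ordered prefix join of the previous slide, this version unions every pair of L3 itemsets and lets the prune step discard candidates (such as abce) that the ordered join would never generate; the end result, C4 = {abcd}, is the same.

```python
from itertools import combinations

def gen_candidates(Lk_minus_1):
    """Self-join the frequent (k-1)-itemsets, then prune any candidate
    with an infrequent (k-1)-subset (the Apriori prune step)."""
    sets = [frozenset(s) for s in Lk_minus_1]
    k = len(sets[0]) + 1
    frequent = set(sets)
    joined = set()
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            u = sets[i] | sets[j]
            if len(u) == k:            # unordered join: keep any k-sized union
                joined.add(u)
    # Prune: every (k-1)-subset of a surviving candidate must be frequent
    return {c for c in joined
            if all(frozenset(s) in frequent for s in combinations(c, k - 1))}

L3 = ["abc", "abd", "acd", "ace", "bcd"]
C4 = gen_candidates(L3)
print(C4)  # only {a, b, c, d} survives; acde is pruned because ade is not in L3
```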

26 Association rules can be used for forward and backward inferences in the generic/actual argument model for sentencing armed robbery

27 Generic/actual argument model for sentencing armed robbery

28 Forward inference: confidence
In the sentencing actual argument database, the following outcome frequencies were noted for the inputs suggested: 57%, 0.1%, 0%, 12%, 2%, 10%, 16%, 0%.

29 Backward inference: constructing the strongest argument
If all the items you suggest, AND:
- If extremely serious pattern of priors then imprisonment (90%, 2%)
- If very serious pattern of priors then imprisonment (75%, 7%)
- If serious pattern of priors then imprisonment (68%, 17%)
- If not so serious pattern of priors then imprisonment (78%, 17%)
- If no prior convictions then imprisonment (2%, 3%)

30 Conclusion
- Data mining, or knowledge discovery from databases, has not been appropriately exploited in law to date.
- Association rules are useful in that they suggest hypotheses for future research.
- Association rules integrated into the generic/actual argument model can assist in identifying the most plausible claim from given data items (forward inference) or the likelihood of missing data values (backward inference).

31 Generating Association Rules
For each nonempty proper subset s of l, output the rule s => (l - s) if support_count(l) / support_count(s) >= min_conf, where min_conf is the minimum confidence threshold.
For l = {2 3 5}, the subsets s are {2 3}, {2 5}, {3 5}, {2}, {3}, and {5}.
Candidate rules:
{2 3} => {5}   {2} => {3 5}
{3 5} => {2}   {3} => {2 5}
{2 5} => {3}   {5} => {2 3}

32 Generating Association Rules
If support_count(l) / support_count(s) >= min_conf (e.g., 75%), then introduce the rule s => (l - s). With l = {2 3 5} and s ranging over {2 3}, {2}, {3 5}, {3}, {2 5}, {5}:
{2 3} => {5} : 2/2
{3 5} => {2} : 2/2
{2 5} => {3} : 2/3
{2} => {3 5} : 2/3
{3} => {2 5} : 2/3
{5} => {2 3} : 2/3
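The confidence ratios above can be reproduced in code. The transaction database is the assumed four-transaction example ({1 3 4}, {2 3 5}, {1 2 3 5}, {2 5}), under which count({2 3 5}) = 2, count({2 3}) = count({3 5}) = 2, and count({2}) = count({3}) = count({5}) = count({2 5}) = 3:

```python
from itertools import combinations

# The assumed four-transaction example database (min support = 2)
D = [frozenset(t) for t in ({1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5})]

def count(itemset):
    """Number of transactions in D containing the itemset."""
    return sum(1 for t in D if itemset <= t)

def rules_from(l, min_conf):
    """Emit s => (l - s) whenever count(l) / count(s) clears min_conf."""
    l = frozenset(l)
    out = []
    for r in range(1, len(l)):            # nonempty proper subsets only
        for s in combinations(l, r):
            s = frozenset(s)
            conf = count(l) / count(s)
            if conf >= min_conf:
                out.append((set(s), set(l - s), conf))
    return out

for lhs, rhs, conf in rules_from({2, 3, 5}, min_conf=0.75):
    print(lhs, "=>", rhs, round(conf, 2))
# Only {2, 3} => {5} and {3, 5} => {2} survive at 2/2 = 1.0; the 2/3 rules do not.
```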

33 Presentation of Association Rules (Table Form )

34 Visualization of Association Rule Using Plane Graph

35 Visualization of Association Rule Using Rule Graph

36

37

38 A decision tree is a classifier in the form of a tree structure where each node is either:
- a leaf node, indicating a class of instances, or
- a decision node, which specifies some test to be carried out on a single attribute value, with one branch and sub-tree for each possible outcome of the test.
A decision tree can be used to classify an instance by starting at the root of the tree and moving through it until a leaf node is reached, which provides the classification of the instance.

39 Example: Decision making in the London stock market
Suppose that the major factors affecting the London stock market are:
- what it did yesterday;
- what the New York market is doing today;
- bank interest rate;
- unemployment rate;
- England's prospects at cricket.

40

41

42 The process of predicting an instance by this decision tree can also be expressed by answering the questions in the following order:
Is unemployment high?
- YES: the London market will rise today.
- NO: Is the New York market rising today?
  - YES: the London market will rise today.
  - NO: the London market will not rise today.
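The question order above is just a path through the tree, so it can be written directly as nested conditionals. A minimal sketch; the function name and boolean parameters are illustrative, not from the original example:

```python
def london_market_rises(unemployment_high, new_york_rising):
    """The slide's two-question decision tree as nested conditionals."""
    if unemployment_high:
        return True       # leaf: the London market will rise today
    if new_york_rising:
        return True       # leaf: rise
    return False          # leaf: will not rise

print(london_market_rises(True, False))   # True
print(london_market_rises(False, True))   # True
print(london_market_rises(False, False))  # False
```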

43 Decision tree induction is a typical inductive approach to learning classification knowledge. The key requirements for mining with decision trees are:
- Attribute-value description: the object or case must be expressible in terms of a fixed collection of properties or attributes.
- Predefined classes: the categories to which cases are to be assigned must have been established beforehand (supervised data).
- Discrete classes: a case does or does not belong to a particular class, and there must be far more cases than classes.
- Sufficient data: usually hundreds or even thousands of training cases.
- "Logical" classification model: a classifier that can be expressed as a decision tree or a set of production rules.

44

45

46 An appeal of market analysis comes from the clarity and utility of its results, which are in the form of association rules. There is an intuitive appeal to market analysis because it expresses how tangible products and services relate to each other and how they tend to group together. A rule like "if a customer purchases three-way calling, then that customer will also purchase call waiting" is clear. Even better, it suggests a specific course of action, like bundling three-way calling with call waiting into a single service package. While association rules are easy to understand, they are not always useful.

47 The following three rules are examples of real rules generated from real data:
- On Thursdays, grocery store consumers often purchase diapers and beer together.
- Customers who purchase maintenance agreements are very likely to purchase large appliances.
- When a new hardware store opens, one of the most commonly sold items is toilet rings.
These three examples illustrate the three common types of rules produced by association rule analysis: the useful, the trivial, and the inexplicable.

48

49 OLAP (Summarization) Display Using MS/Excel 2000

50 Market-Basket-Analysis (Association)—Ball graph

51 Display of Association Rules in Rule Plane Form

52 Display of Decision Tree (Classification Results)

53 Display of Clustering (Segmentation) Results

54 3D Cube Browser

