1 Review

2 Distributions

3 Distribution Definitions
Discrete Probability Distribution
Continuous Probability Distribution
Cumulative Distribution Function

4 Discrete Distribution
A r.v. X is discrete if it takes countably many values {x_1, x_2, …}. The probability function (or probability mass function) for X is given by f_X(x) = P(X = x), as in the previous example.

5 Continuous Distributions
A r.v. X is continuous if there exists a function f_X such that f_X(x) ≥ 0 for all x, ∫ f_X(x) dx = 1, and for every a ≤ b, P(a < X < b) = ∫_a^b f_X(x) dx.

6 Example: Continuous Distribution
Suppose X has the pdf f_X(x) = 1 for 0 ≤ x ≤ 1 and f_X(x) = 0 otherwise. This is the Uniform(0,1) distribution.

7 Binomial Distribution
A coin flips Heads with probability p. Flip it n times and let X be the number of Heads. Assume the flips are independent. Let f(x) = P(X = x); then f(x) = C(n, x) p^x (1 - p)^(n - x) for x = 0, 1, …, n, and 0 otherwise.

8 Binomial Example
Let p = 0.5 and n = 5. Then f(4) = C(5, 4) (0.5)^4 (0.5)^1 = 0.15625. In Matlab: >> binopdf(4, 5, 0.5)
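The same value can be checked directly from the binomial formula in base Matlab, without the Statistics Toolbox call (a minimal sketch):

% evaluate the binomial pmf f(4) for n = 5, p = 0.5 by hand
p = 0.5;  n = 5;  x = 4;
f = nchoosek(n, x) * p^x * (1 - p)^(n - x)   % = 0.15625, matching binopdf(4, 5, 0.5)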

9 Normal Distribution
X has a Normal (Gaussian) distribution with parameters μ and σ if its pdf is f(x) = (1 / (σ sqrt(2π))) exp(-(x - μ)^2 / (2σ^2)). X is standard Normal if μ = 0 and σ = 1; it is then denoted Z. If X ~ N(μ, σ^2) then Z = (X - μ) / σ ~ N(0, 1).

10 Normal Example
The number of spam emails received by an email server in a day follows a Normal distribution N(1000, 500). What is the probability of receiving 2000 spam emails in a day? Let X be the number of spam emails received in a day. Do we want P(X = 2000)? For a continuous distribution P(X = 2000) = 0, so it is more meaningful to ask for P(X ≥ 2000).

11 Normal Example
This is P(X ≥ 2000) = 1 - F(2000), where F is the cumulative distribution function. In Matlab: >> 1 - normcdf(2000, 1000, 500). The answer is 1 - 0.9772 = 0.0228, or 2.28%. This type of computation is so common that F has a special name: the cumulative distribution function.
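The same tail probability can be computed without the Statistics Toolbox, via the standard Normal tail and erfc (a minimal sketch):

% P(X >= 2000) for X ~ N(1000, 500^2), using base Matlab only
z = (2000 - 1000) / 500;          % standardize: z = (x - mu) / sigma
p = 0.5 * erfc(z / sqrt(2))       % = 1 - normcdf(2000, 1000, 500), about 0.0228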

12 Conditional Independence
If A and B are independent then P(A|B) = P(A). In general, P(AB) = P(A|B) P(B). Law of Total Probability: P(A) = Σ_i P(A|B_i) P(B_i) for a partition B_1, B_2, … of the sample space.

13 Bayes Theorem
P(B|A) = P(A|B) P(B) / P(A)

14 Question 1
Question: Suppose you randomly select a credit card holder and the person has defaulted on their credit card. What is the probability that the person selected is a 'Female'?

Gender  | % of credit card holders | % of gender who default
Male    | 60                       | 55
Female  | 40                       | 35

15 Answer to Question 1
Apply Bayes' theorem with G for gender and D for default status: P(G=F | D=Y) = P(D=Y | G=F) P(G=F) / [P(D=Y | G=F) P(G=F) + P(D=Y | G=M) P(G=M)] = (0.35)(0.40) / [(0.35)(0.40) + (0.55)(0.60)] = 0.14 / 0.47 ≈ 0.298. But what do G=F and D=Y mean? We have not even formally defined them.
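A short Matlab check of this calculation, using the figures from the table:

% Bayes computation for Question 1: G = gender, D = default indicator
P_F = 0.40;   P_M = 0.60;        % P(G=F), P(G=M) from the table
P_D_F = 0.35; P_D_M = 0.55;      % P(D=Y | G=F), P(D=Y | G=M)
P_F_given_D = P_D_F * P_F / (P_D_F * P_F + P_D_M * P_M)   % = 0.14 / 0.47, about 0.298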

16 Clustering

17 Types of Clusterings
A clustering is a set of clusters. There is an important distinction between hierarchical and partitional sets of clusters.
Partitional Clustering: a division of the data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset.
Hierarchical Clustering: a set of nested clusters organized as a hierarchical tree.

18 Partitional Clustering
(Figure: the original points and a partitional clustering of them.)

19 Hierarchical Clustering
(Figure: a traditional hierarchical clustering with its traditional dendrogram, and a non-traditional hierarchical clustering with its non-traditional dendrogram.)

20 K-means Clustering
A partitional clustering approach. Each cluster is associated with a centroid (center point), and each point is assigned to the cluster with the closest centroid. The number of clusters, K, must be specified. The basic algorithm is very simple.

21 K-means Clustering – Details
Initial centroids are often chosen randomly, so the clusters produced vary from one run to another. The centroid is (typically) the mean of the points in the cluster. 'Closeness' is measured by Euclidean distance, cosine similarity, correlation, etc. K-means will converge for the common similarity measures mentioned above, and most of the convergence happens in the first few iterations; often the stopping condition is therefore changed to 'until relatively few points change clusters'. Complexity is O(n * K * I * d), where n = number of points, K = number of clusters, I = number of iterations, and d = number of attributes.
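A minimal Matlab sketch of the basic algorithm (random initial centroids, squared Euclidean distance; the function name simple_kmeans is ours, not from the slides, and the code assumes implicit array expansion, R2016b or later):

function [labels, centroids] = simple_kmeans(X, K, maxIter)
  % X: n-by-d data matrix, K: number of clusters, maxIter: iteration cap
  n = size(X, 1);
  centroids = X(randperm(n, K), :);          % pick K distinct points as initial centroids
  for it = 1:maxIter
    D = zeros(n, K);                         % squared distance of every point to every centroid
    for k = 1:K
      D(:, k) = sum((X - centroids(k, :)).^2, 2);
    end
    [~, labels] = min(D, [], 2);             % assign each point to its closest centroid
    for k = 1:K
      members = X(labels == k, :);
      if ~isempty(members)
        centroids(k, :) = mean(members, 1);  % recompute the centroid as the cluster mean
      end
    end
  end
end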

22 Evaluating K-means Clusters
The most common measure is the Sum of Squared Error (SSE). For each point, the error is its distance to the nearest cluster centroid; to get the SSE, we square these errors and sum them: SSE = Σ_i Σ_{x ∈ C_i} dist(m_i, x)^2, where x is a data point in cluster C_i and m_i is the representative point for cluster C_i. It can be shown that m_i corresponds to the center (mean) of the cluster. Given two clusterings, we can choose the one with the smallest error. One easy way to reduce the SSE is to increase K, the number of clusters, yet a good clustering with smaller K can have a lower SSE than a poor clustering with higher K.
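The SSE can be computed directly from the points, labels, and centroids; a short Matlab sketch on toy data (kmeans here is the Statistics Toolbox function; the simple_kmeans sketch above could be substituted):

X = [randn(20, 2); randn(20, 2) + 5];     % toy data with two well-separated groups
[labels, centroids] = kmeans(X, 2);       % cluster into K = 2 groups
sse = 0;
for k = 1:size(centroids, 1)
  d = X(labels == k, :) - centroids(k, :);   % deviations from the cluster mean
  sse = sse + sum(d(:).^2);                  % add this cluster's squared errors
end
sse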

23 Hierarchical Clustering
Produces a set of nested clusters organized as a hierarchical tree. It can be visualized as a dendrogram, a tree-like diagram that records the sequence of merges or splits.

24 Strengths of Hierarchical Clustering
We do not have to assume any particular number of clusters: any desired number of clusters can be obtained by 'cutting' the dendrogram at the proper level. The nested clusters may correspond to meaningful taxonomies, for example in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …).
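If the Statistics Toolbox is available (as the earlier binopdf/normcdf calls already assume), a dendrogram and a 'cut' at a chosen number of clusters can be produced like this (a sketch on toy data):

X = [randn(10, 2); randn(10, 2) + 4];   % toy data
Z = linkage(X, 'single');               % sequence of single-link merges
dendrogram(Z);                          % visualize the merge tree
labels = cluster(Z, 'maxclust', 2);     % 'cut' the dendrogram to obtain 2 clusters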

25 Hierarchical Clustering
There are two main types of hierarchical clustering.
Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left.
Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters).
Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.

26 Agglomerative Clustering Algorithm
The more popular hierarchical clustering technique. The basic algorithm is straightforward (see the sketch below):
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
The key operation is the computation of the proximity of two clusters. Different approaches to defining the distance between clusters distinguish the different algorithms.
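A naive O(n^3) single-link version of this algorithm in Matlab (our own illustrative sketch; the function name and the use of pdist2 from the Statistics Toolbox are assumptions, and real implementations update the proximity matrix incrementally rather than recomputing it):

function labels = single_link_agglomerative(X, k)
  % merge the two closest clusters (single link) until k clusters remain
  n = size(X, 1);
  labels = (1:n)';                              % every point starts as its own cluster
  while numel(unique(labels)) > k
    ids = unique(labels);
    best = inf;  bi = 0;  bj = 0;
    for a = 1:numel(ids)
      for b = a+1:numel(ids)
        Pa = X(labels == ids(a), :);
        Pb = X(labels == ids(b), :);
        d = min(min(pdist2(Pa, Pb)));           % single link: closest pair of points
        if d < best, best = d; bi = ids(a); bj = ids(b); end
      end
    end
    labels(labels == bj) = bi;                  % merge the two closest clusters
  end
end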

27 EM Algorithm

28 Missing Data
We think of clustering as a problem of estimating missing data. The missing data are the cluster labels. Clustering is only one example of a missing data problem; several other problems can be formulated as missing data problems.

29 Missing Data Problem
Let D = {x(1), x(2), …, x(n)} be a set of n observations. Let H = {z(1), z(2), …, z(n)} be a set of n values of a hidden variable Z, where z(i) corresponds to x(i). Assume Z is discrete.

30 EM Algorithm
The log-likelihood of the observed data is l(θ) = log P(D | θ) = Σ_i log Σ_{z(i)} P(x(i), z(i) | θ). Not only do we have to estimate θ but also H. Let Q(H) be a probability distribution on the missing data.

31 EM Algorithm
Define F(Q, θ) = Σ_H Q(H) log P(D, H | θ) - Σ_H Q(H) log Q(H); by Jensen's inequality l(θ) ≥ F(Q, θ), so F(Q, θ) is a lower bound on l(θ). Notice that the log of a sum has become a sum of logs.

32 EM Algorithm
The EM algorithm alternates between maximizing F with respect to Q (with θ fixed) and maximizing F with respect to θ (with Q fixed).

33 EM Algorithm
It turns out that the E-step is just Q(H) = P(H | D, θ), and furthermore, with this choice the bound is tight: F(Q, θ) = l(θ). Just plug in to verify.

34 EM Algorithm
The M-step reduces to maximizing the first term, Σ_H Q(H) log P(D, H | θ), with respect to θ, as there is no θ in the second term.

35 EM Algorithm for Mixture of Normals
For a mixture of Normals, the E-step computes the responsibilities Q(z(i) = k) = P(z(i) = k | x(i), θ), and the M-step re-estimates the mixture weights, means, and variances from the responsibility-weighted data.
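A compact EM sketch for a 1-D mixture of K Normals in Matlab (our own illustrative code, not the slide's exact notation; assumes implicit array expansion, R2016b or later):

function [mu, sigma2, w] = em_gmm_1d(x, K, nIter)
  % x: data vector, K: number of mixture components, nIter: EM iterations
  x = x(:);  n = numel(x);
  mu = reshape(x(randperm(n, K)), 1, K);    % initial means: K random data points
  sigma2 = var(x) * ones(1, K);             % initial variances
  w = ones(1, K) / K;                       % initial mixing weights
  for it = 1:nIter
    % E-step: responsibilities r(i,k) = Q(z(i) = k)
    r = zeros(n, K);
    for k = 1:K
      r(:, k) = w(k) * exp(-(x - mu(k)).^2 / (2 * sigma2(k))) / sqrt(2 * pi * sigma2(k));
    end
    r = r ./ sum(r, 2);
    % M-step: re-estimate the parameters from the responsibility-weighted data
    Nk = sum(r, 1);
    mu = (r' * x)' ./ Nk;
    sigma2 = sum(r .* (x - mu).^2, 1) ./ Nk;
    w = Nk / n;
  end
end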

36 What is Association Rule Mining?
Association rule mining finds combinations of items that typically occur together in a database (market-basket analysis), and sequences of items that occur frequently in a database (sequential analysis). It was originally introduced for market-basket analysis -- useful for analysing the purchasing behaviour of customers.

37 Market-Basket Analysis – Examples
Where should strawberries be placed to maximize their sale?
Services purchased together by telecommunication customers (e.g. broadband Internet, call forwarding, etc.) help determine how to bundle these services to maximize revenue.
Unusual combinations of insurance claims can be a sign of fraud.
Medical histories can give indications of complications based on combinations of treatments.
Sport: analyzing game statistics (shots blocked, assists, and fouls) to gain a competitive advantage, e.g. "When player X is on the floor, player Y's shot accuracy decreases from 75% to 30%" -- Bhandari et al. (1997), Advanced Scout: data mining and knowledge discovery in NBA data, Data Mining and Knowledge Discovery, 1(1), pp. 121-125.

38 Support and Confidence - Example
What are the support and confidence of the following rules?
{Beer} → {Bread}
{Bread, PeanutButter} → {Jelly}
support(X → Y) = support(X ∪ Y)
confidence(X → Y) = support(X ∪ Y) / support(X)
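A small Matlab sketch of these definitions. The five transactions below are an assumed toy dataset (the slide's own transaction table is not reproduced in this transcript), chosen to be consistent with the frequent itemsets listed later:

% toy transactions (assumed) and support/confidence of the two rules
T = { {'Bread','Jelly','PeanutButter'}, {'Bread','PeanutButter'}, ...
      {'Bread','Milk','PeanutButter'}, {'Beer','Bread'}, {'Beer','Milk'} };
supp = @(items) mean(cellfun(@(t) all(ismember(items, t)), T));   % fraction of transactions containing all items
s1 = supp({'Beer','Bread'});                       % support of {Beer} -> {Bread}
c1 = s1 / supp({'Beer'});                          % confidence of {Beer} -> {Bread}
s2 = supp({'Bread','PeanutButter','Jelly'});       % support of {Bread,PeanutButter} -> {Jelly}
c2 = s2 / supp({'Bread','PeanutButter'});          % confidence of that rule
[s1 c1 s2 c2]                                      % = 0.20 0.50 0.20 0.33 for this toy data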

39 Association Rule Mining Problem Definition
Given a set of transactions T = {t_1, t_2, …, t_n} and two thresholds, minsup and minconf, find all association rules X → Y with support ≥ minsup and confidence ≥ minconf. I.e., we want rules with high confidence and support; we call these rules interesting. We would like to design an efficient algorithm for mining association rules in large data sets, and to develop an effective approach for distinguishing interesting rules from spurious ones.

40 Generating Association Rules – Approach 1 (Naïve)
Enumerate all possible rules and select those that satisfy the minimum support and confidence thresholds. This is not practical for large databases: for a dataset with m items, the total number of possible rules is 3^m - 2^(m+1) + 1 (why?*), and most of these will be discarded! We need a strategy for rule generation -- generate only the promising rules, i.e. rules that are likely to be interesting, or, more accurately, don't generate rules that can't be interesting. (*hint: use the inclusion-exclusion principle)
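The rule count can be checked by brute force for small m (a sketch; each rule is an ordered pair of disjoint, non-empty itemsets encoded as bitmasks):

% count all possible rules over m items and compare with 3^m - 2^(m+1) + 1
m = 3;
count = 0;
for a = 1:(2^m - 1)                 % non-empty antecedent
  for c = 1:(2^m - 1)               % non-empty consequent
    if bitand(a, c) == 0            % antecedent and consequent must be disjoint
      count = count + 1;
    end
  end
end
[count, 3^m - 2^(m+1) + 1]          % both are 12 for m = 3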

41 Generating Association Rules – Approach 2
What do these rules have in common? A,B → C; A,C → B; B,C → A. The support of a rule X → Y depends only on the support of its itemset X ∪ Y, so the answer is: they all have the same support, support({A,B,C}). Hence, a better approach: find the frequent itemsets first, then generate the rules. A frequent itemset is an itemset that occurs at least minsup times (i.e. support ≥ minsup). If an itemset is infrequent, all the rules generated from it will have support < minsup and there is no need to generate them.

42 Generating Association Rules – Approach 2: the 2-step approach
Step 1: Generate the frequent itemsets -- frequent itemset mining (i.e. support ≥ minsup). E.g. {A,B,C} is frequent, so A,B → C, A,C → B and B,C → A all satisfy the minsup threshold.
Step 2: From the frequent itemsets, extract the rules that satisfy the confidence threshold (i.e. confidence ≥ minconf). E.g. maybe only A,B → C and C,B → A are confident.
Step 1 is the computationally difficult part (the next slides explain why, and a way to reduce the complexity…).

43 Frequent Itemset Generation (Step 1) – Brute-Force Approach
Enumerate all possible itemsets and scan the dataset to calculate the support of each of them. Example: I = {a,b,c,d,e}. Given d items, there are 2^d - 1 possible (non-empty) candidate itemsets => not practical for large d. (Figure: the search space, showing superset/subset relationships.)
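A brute-force pass in Matlab over the assumed toy transactions from the earlier sketch, enumerating every non-empty itemset as a bitmask and printing those that meet a 30% minsup (illustrative only):

items = {'Beer','Bread','Jelly','Milk','PeanutButter'};
T = { {'Bread','Jelly','PeanutButter'}, {'Bread','PeanutButter'}, ...
      {'Bread','Milk','PeanutButter'}, {'Beer','Bread'}, {'Beer','Milk'} };
minsup = 0.3;
d = numel(items);
for mask = 1:(2^d - 1)                                   % all 2^d - 1 non-empty itemsets
  S = items(logical(bitget(mask, 1:d)));                 % decode the bitmask into an itemset
  s = mean(cellfun(@(t) all(ismember(S, t)), T));        % fraction of transactions containing S
  if s >= minsup
    fprintf('{%s}  support = %.2f\n', strjoin(S, ','), s);
  end
end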

44 Frequent Itemset Generation (Step 1) -- Apriori Principle (1)
Any subset of a frequent itemset is also frequent. Example: if {c,d,e} is frequent, then {c,d}, {c,e}, {d,e}, {c}, {d} and {e} are also frequent.

45 Frequent Itemset Generation (Step 1) -- Apriori Principle (2)
If an itemset is not frequent, any superset of it is also not frequent. Example: if we know that {a,b} is infrequent, the entire sub-graph of its supersets can be pruned, i.e. {a,b,c}, {a,b,d}, {a,b,e}, {a,b,c,d}, {a,b,c,e}, {a,b,d,e} and {a,b,c,d,e} are infrequent.

46 Recall the 2-Step Process for Association Rule Mining
Step 1: Find all frequent itemsets. So far: main ideas and concepts (the Apriori principle); later: algorithms.
Step 2: Generate the association rules from the frequent itemsets.

47 ARGen Algorithm (Step 2)
Generates the interesting rules from the frequent itemsets. We already know the rules satisfy the support threshold (why?), so we just need to check confidence (see the sketch below).
ARGen algorithm:
for each frequent itemset F
    generate all non-empty proper subsets S of F
    for each s in S do
        if confidence(s → F-s) ≥ minConf then output rule s → F-s
    end
end
Example: F = {a,b,c}, S = {{a,b}, {a,c}, {b,c}, {a}, {b}, {c}}; rules output: {a,b} → {c}, etc.
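A self-contained Matlab sketch of this rule-generation step for one frequent itemset, reusing the assumed toy transactions and encoding subsets as bitmasks:

% generate rules s -> F-s from one frequent itemset F (illustrative)
T = { {'Bread','Jelly','PeanutButter'}, {'Bread','PeanutButter'}, ...
      {'Bread','Milk','PeanutButter'}, {'Beer','Bread'}, {'Beer','Milk'} };
supp = @(items) mean(cellfun(@(t) all(ismember(items, t)), T));
F = {'Bread','PeanutButter'};
minConf = 0.5;
nF = numel(F);
for mask = 1:(2^nF - 2)                          % all non-empty proper subsets of F
  inS  = logical(bitget(mask, 1:nF));
  s    = F(inS);                                 % antecedent s
  rest = F(~inS);                                % consequent F - s
  c = supp(F) / supp(s);                         % confidence(s -> F-s)
  if c >= minConf
    fprintf('{%s} -> {%s}  conf = %.2f\n', strjoin(s, ','), strjoin(rest, ','), c);
  end
end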

48 ARGen - Example
minsup = 30%, minconf = 50%. The set of frequent itemsets is L = {{Beer}, {Bread}, {Milk}, {PeanutButter}, {Bread, PeanutButter}}. Only the last itemset in L has two non-empty proper subsets of frequent items -- {Bread} and {PeanutButter} -- so two candidate rules will be generated: {Bread} → {PeanutButter} and {PeanutButter} → {Bread}.

49 Bayes Classifier
A probabilistic framework for solving classification problems.
Conditional probability: P(C | A) = P(A, C) / P(A).
Bayes theorem: P(C | A) = P(A | C) P(C) / P(A).

50 Example of Bayes Theorem
Given: a doctor knows that meningitis causes a stiff neck 50% of the time; the prior probability of any patient having meningitis is 1/50,000; the prior probability of any patient having a stiff neck is 1/20. If a patient has a stiff neck, what is the probability he/she has meningitis?
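The answer follows directly from Bayes theorem; a quick Matlab check using the slide's figures:

% P(meningitis | stiff neck)
P_S_given_M = 0.5;        % P(stiff neck | meningitis)
P_M = 1/50000;            % prior P(meningitis)
P_S = 1/20;               % prior P(stiff neck)
P_M_given_S = P_S_given_M * P_M / P_S     % = 0.0002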

51 Bayesian Classifiers
Consider each attribute and the class label as random variables. Given a record with attributes (A_1, A_2, …, A_n), the goal is to predict the class C; specifically, we want to find the value of C that maximizes P(C | A_1, A_2, …, A_n). Can we estimate P(C | A_1, A_2, …, A_n) directly from data?

52 Bayesian Classifiers
Approach: compute the posterior probability P(C | A_1, A_2, …, A_n) for all values of C using Bayes theorem, and choose the value of C that maximizes P(C | A_1, A_2, …, A_n). This is equivalent to choosing the value of C that maximizes P(A_1, A_2, …, A_n | C) P(C). How do we estimate P(A_1, A_2, …, A_n | C)?

53 Naïve Bayes Classifier
Assume independence among the attributes A_i when the class is given: P(A_1, A_2, …, A_n | C_j) = P(A_1 | C_j) P(A_2 | C_j) … P(A_n | C_j). We can estimate P(A_i | C_j) for all A_i and C_j. A new point is classified to C_j if P(C_j) ∏_i P(A_i | C_j) is maximal.

54 How to Estimate Probabilities from Data?
Class prior: P(C) = N_c / N, e.g. P(No) = 7/10, P(Yes) = 3/10.
For discrete attributes: P(A_i | C_k) = |A_ik| / N_c, where |A_ik| is the number of instances having attribute value A_i and belonging to class C_k. Examples: P(Status=Married | No) = 4/7, P(Refund=Yes | Yes) = 0.

55 How to Estimate Probabilities from Data?
For continuous attributes there are several options.
Discretize the range into bins: one ordinal attribute per bin; this violates the independence assumption.
Two-way split, (A < v) or (A ≥ v): choose only one of the two splits as the new attribute.
Probability density estimation: assume the attribute follows a normal distribution and use the data to estimate the parameters of the distribution (e.g. mean and standard deviation); once the probability distribution is known, it can be used to estimate the conditional probability P(A_i | c).

56 How to Estimate Probabilities from Data?
Normal distribution: P(A_i = x | c_j) = (1 / sqrt(2π σ_ij^2)) exp(-(x - μ_ij)^2 / (2σ_ij^2)), one for each (A_i, c_j) pair. For (Income, Class=No): the sample mean is 110 and the sample variance is 2975.

57 Example of Naïve Bayes Classifier
Given a test record X = (Refund=No, Status=Married, Income=120K):
P(X | Class=No) = P(Refund=No | Class=No) × P(Married | Class=No) × P(Income=120K | Class=No) = 4/7 × 4/7 × 0.0072 = 0.0024
P(X | Class=Yes) = P(Refund=No | Class=Yes) × P(Married | Class=Yes) × P(Income=120K | Class=Yes) = 1 × 0 × 1.2 × 10^-9 = 0
Since P(X | No) P(No) > P(X | Yes) P(Yes), we have P(No | X) > P(Yes | X) => Class = No.
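A Matlab check of these numbers (a sketch; the class-No income parameters come from the previous slide, while the class-Yes parameters, sample mean 90 and sample variance 25, are an assumption chosen to reproduce the quoted 1.2 × 10^-9):

% Naive Bayes evaluation of X = (Refund=No, Married, Income=120K)
pNo = 7/10;   pYes = 3/10;                               % class priors
pX_No  = (4/7) * (4/7) * normpdf(120, 110, sqrt(2975));  % ~ 4/7 * 4/7 * 0.0072
pX_Yes = 1 * 0 * normpdf(120, 90, sqrt(25));             % assumed Yes-class income parameters; zero because P(Married|Yes) = 0
if pX_No * pNo > pX_Yes * pYes
  disp('Predicted class: No')
end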

58 Naïve Bayes Classifier
If one of the conditional probabilities is zero, then the entire expression becomes zero. Smoothed probability estimates avoid this:
Original: P(A_i | C) = N_ic / N_c
Laplace: P(A_i | C) = (N_ic + 1) / (N_c + c)
m-estimate: P(A_i | C) = (N_ic + m p) / (N_c + m)
where c is the number of classes, p is a prior probability, and m is a parameter.

59 Example of Naïve Bayes Classifier
A: attributes, M: mammals, N: non-mammals. Since P(A|M) P(M) > P(A|N) P(N), the record is classified as a mammal.

60 Naïve Bayes (Summary)
Robust to isolated noise points. Handles missing values by ignoring the instance during probability estimate calculations. Robust to irrelevant attributes. The independence assumption may not hold for some attributes; in that case use other techniques such as Bayesian Belief Networks (BBN).

61 Principal Component Analysis

62 Motivation
The bulk of data has a time component, for example retail transactions and stock prices. Such a data set can be organized as an N x M table: N customers and the price of the calls they made on each of M = 365 days, with M << N.

63 Objective
Compress the data matrix X into Xc such that the compression ratio is high and the average error between the original and the compressed matrix is low. N could be on the order of millions and M on the order of hundreds.

64 Example database
Customer | Wed 7/10 | Thu 7/11 | Fri 7/12 | Sat 7/13 | Sun 7/14
ABC      |    1     |    1     |    1     |    0     |    0
DEF      |    2     |    2     |    2     |    0     |    0
GHI      |    1     |    1     |    1     |    0     |    0
KLM      |    5     |    5     |    5     |    0     |    0
smith    |    0     |    0     |    0     |    2     |    2
john     |    0     |    0     |    0     |    3     |    3
tom      |    0     |    0     |    0     |    1     |    1

65 Decision Support Queries
What was the amount of sales to GHI on July 11? What were the total sales to business customers for the week ending July 12th?

66 Intuition behind SVD
(Figure: customers plotted as 2-D points in the original axes (x, y), with new axes (x', y') found by SVD.)

67 SVD Definition
An N x M matrix X can be expressed as X = U Λ V^T, where Λ (Lambda) is a diagonal r x r matrix and U and V have orthonormal columns.

68 SVD Definition
More importantly, X can be written as X = λ_1 u_1 v_1^T + λ_2 u_2 v_2^T + … + λ_r u_r v_r^T, where the values λ_1 ≥ λ_2 ≥ … ≥ λ_r are in decreasing order.

69 Example

70 Compression
Keep only the first k terms of the expansion: Xc = λ_1 u_1 v_1^T + … + λ_k u_k v_k^T, where k <= r <= M.
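In Matlab the rank-k compression and its error can be computed with the built-in svd (a sketch on a small stand-in for the N x M customer-by-day table):

% rank-k SVD compression of a data matrix
X = [ones(4,3) zeros(4,2); zeros(3,3) ones(3,2)];   % small stand-in for the customer-by-day table
[U, S, V] = svd(X, 'econ');
k = 2;                                              % number of singular values to keep
Xc = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';          % compressed (rank-k) matrix
relErr = norm(X - Xc, 'fro') / norm(X, 'fro')       % relative reconstruction error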

71 Density-based: LOF approach
For each point, compute the density of its local neighborhood. Compute the local outlier factor (LOF) of a sample p as the average, over its nearest neighbors, of the ratio of the neighbor's density to the density of p. Outliers are the points with the largest LOF values. (Figure: in the NN approach, p_2 is not considered an outlier, while the LOF approach finds both p_1 and p_2 as outliers.)
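A simplified LOF-style score in Matlab (a sketch, not the full Breunig et al. definition; it uses knnsearch from the Statistics Toolbox and approximates local density by the inverse mean k-NN distance):

function score = simple_lof(X, k)
  % higher score = lower density than the neighbourhood = more outlying
  n = size(X, 1);
  [idx, d] = knnsearch(X, X, 'K', k + 1);   % first neighbour of each point is the point itself
  idx = idx(:, 2:end);   d = d(:, 2:end);
  dens = 1 ./ mean(d, 2);                   % crude local density estimate
  score = zeros(n, 1);
  for i = 1:n
    score(i) = mean(dens(idx(i, :))) / dens(i);   % neighbours' density relative to own density
  end
end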

72 Clustering-Based
Basic idea: cluster the data into groups of different density; choose points in small clusters as candidate outliers; compute the distance between the candidate points and the non-candidate clusters. If the candidate points are far from all other non-candidate points, they are outliers.

73 Base Rate Fallacy
Bayes theorem: P(A | B) = P(B | A) P(A) / P(B). More generally, P(A | B) = P(B | A) P(A) / Σ_i P(B | A_i) P(A_i), where the A_i partition the sample space.

74 Base Rate Fallacy (Axelsson, 1999)

75 Base Rate Fallacy in Intrusion Detection
I: intrusive behavior, ¬I: non-intrusive behavior; A: alarm, ¬A: no alarm. Detection rate (true positive rate): P(A|I). False alarm rate: P(A|¬I). The goal is to maximize both the Bayesian detection rate P(I|A) and P(¬I|¬A).

76 Detection Rate vs False Alarm Rate
If we plug values for P(I), P(A|I), and P(A|¬I) into Bayes theorem, the false alarm rate becomes the dominant factor in P(I|A) when P(I) is very low.
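A numeric illustration in Matlab with assumed values (these are not Axelsson's exact figures):

% base-rate fallacy: even a good detector yields a low Bayesian detection rate
P_I = 1e-5;            % assumed prior probability of an intrusion
P_A_given_I = 0.99;    % assumed detection rate P(A|I)
P_A_given_notI = 1e-3; % assumed false alarm rate P(A|~I)
P_I_given_A = P_A_given_I * P_I / (P_A_given_I * P_I + P_A_given_notI * (1 - P_I))
% ~ 0.0098: fewer than 1 in 100 alarms corresponds to a real intrusion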

77 Detection Rate vs False Alarm Rate
Axelsson's conclusion: we need a very low false alarm rate to achieve a reasonable Bayesian detection rate.

