1
1 CSE 5331/7331 F'09 CSE 5331/7331 Fall 2009 DATA MINING: Introductory and Related Topics. Margaret H. Dunham, Department of Computer Science and Engineering, Southern Methodist University. Slides extracted from Data Mining: Introductory and Advanced Topics, Prentice Hall, 2002.
2
2 CSE 5331/7331 F'09 Data Mining Outline PART I –Introduction –Techniques PART II – Core Topics PART III – Related Topics
3
3 CSE 5331/7331 F'09 Introduction Outline Define data mining Data mining vs. databases Basic data mining tasks Data mining development Data mining issues Goal: Provide an overview of data mining.
4
4 CSE 5331/7331 F'09 Introduction Data is growing at a phenomenal rate Users expect more sophisticated information How? UNCOVER HIDDEN INFORMATION DATA MINING
5
5 CSE 5331/7331 F'09 Data Mining Definition Finding hidden information in a database Fit data to a model Similar terms –Exploratory data analysis –Data driven discovery –Deductive learning
6
6 CSE 5331/7331 F'09 Data Mining Algorithm Objective: Fit Data to a Model –Descriptive –Predictive Preference – Technique to choose the best model Search – Technique to search the data –“Query”
7
7 CSE 5331/7331 F'09 Database Processing vs. Data Mining Processing Database processing: Query – Well defined, SQL query; Data – Operational data; Output – Precise, a subset of the database. Data mining processing: Query – Poorly defined, no precise query language; Data – Not operational data; Output – Fuzzy, not a subset of the database.
8
8 CSE 5331/7331 F'09 Query Examples Database: – Find all customers who have purchased milk. – Find all credit applicants with last name of Smith. – Identify customers who have purchased more than $10,000 in the last month. Data Mining: – Find all items which are frequently purchased with milk. (association rules) – Find all credit applicants who are poor credit risks. (classification) – Identify customers with similar buying habits. (clustering)
9
9 CSE 5331/7331 F'09 Data Mining Models and Tasks
10
10 CSE 5331/7331 F'09 Basic Data Mining Tasks Classification maps data into predefined groups or classes –Supervised learning –Pattern recognition –Prediction Regression is used to map a data item to a real valued prediction variable. Clustering groups similar data together into clusters. –Unsupervised learning –Segmentation –Partitioning
11
11 CSE 5331/7331 F'09 Basic Data Mining Tasks (cont’d) Summarization maps data into subsets with associated simple descriptions. –Characterization –Generalization Link Analysis uncovers relationships among data. –Affinity Analysis –Association Rules –Sequential Analysis determines sequential patterns.
12
12 CSE 5331/7331 F'09 Ex: Time Series Analysis Example: Stock Market Predict future values Determine similar patterns over time Classify behavior
13
13 CSE 5331/7331 F'09 Data Mining vs. KDD Knowledge Discovery in Databases (KDD): process of finding useful information and patterns in data. Data Mining: Use of algorithms to extract the information and patterns derived by the KDD process.
14
14 CSE 5331/7331 F'09 KDD Process Selection: Obtain data from various sources. Preprocessing: Cleanse data. Transformation: Convert to common format. Transform to new format. Data Mining: Obtain desired results. Interpretation/Evaluation: Present results to user in meaningful manner. Modified from [FPSS96C]
15
15 CSE 5331/7331 F'09 KDD Process Ex: Web Log Selection: –Select log data (dates and locations) to use Preprocessing: –Remove identifying URLs –Remove error logs Transformation: –Sessionize (sort and group) Data Mining: –Identify and count patterns –Construct data structure Interpretation/Evaluation: –Identify and display frequently accessed sequences. Potential User Applications: –Cache prediction –Personalization
16
16 CSE 5331/7331 F'09 Data Mining Development Similarity Measures Hierarchical Clustering IR Systems Imprecise Queries Textual Data Web Search Engines Bayes Theorem Regression Analysis EM Algorithm K-Means Clustering Time Series Analysis Neural Networks Decision Tree Algorithms Algorithm Design Techniques Algorithm Analysis Data Structures Relational Data Model SQL Association Rule Algorithms Data Warehousing Scalability Techniques
17
17 CSE 5331/7331 F'09 KDD Issues Human Interaction Overfitting Outliers Interpretation Visualization Large Datasets High Dimensionality
18
18 CSE 5331/7331 F'09 KDD Issues (cont’d) Multimedia Data Missing Data Irrelevant Data Noisy Data Changing Data Integration Application
19
19 CSE 5331/7331 F'09 Social Implications of DM Privacy Profiling Unauthorized use
20
20 CSE 5331/7331 F'09 Data Mining Metrics Usefulness Return on Investment (ROI) Accuracy Space/Time
21
21 CSE 5331/7331 F'09 Visualization Techniques Graphical Geometric Icon-based Pixel-based Hierarchical Hybrid
22
22 CSE 5331/7331 F'09 Models Based on Summarization Visualization: Frequency distribution, mean, variance, median, mode, etc. Box Plot
23
23 CSE 5331/7331 F'09 Scatter Diagram
24
24 CSE 5331/7331 F'09 Data Mining Techniques Outline Statistical –Point Estimation –Models Based on Summarization –Bayes Theorem –Hypothesis Testing –Regression and Correlation Similarity Measures Decision Trees Neural Networks –Activation Functions Genetic Algorithms Goal: Provide an overview of basic data mining techniques
25
25 CSE 5331/7331 F'09 Point Estimation Point Estimate: estimate a population parameter. May be made by calculating the parameter for a sample. May be used to predict value for missing data. Ex: –R contains 100 employees –99 have salary information –Mean salary of these is $50,000 –Use $50,000 as value of remaining employee’s salary. Is this a good idea?
26
26 CSE 5331/7331 F'09 Estimation Error Bias: Difference between expected value and actual value. Mean Squared Error (MSE): expected value of the squared difference between the estimate and the actual value: MSE = E[(estimate – actual value)^2]. Why square? Root Mean Square Error (RMSE) is the square root of the MSE.
27
27 CSE 5331/7331 F'09 Jackknife Estimate Jackknife Estimate: estimate of parameter is obtained by omitting one value from the set of observed values. Ex: estimate of mean for X={x 1, …, x n }
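A minimal Python sketch of the leave-one-out idea (my own illustration, not from the slides; the function name jackknife_means is hypothetical):

def jackknife_means(x):
    """Return the n leave-one-out estimates of the mean of x."""
    n = len(x)
    total = sum(x)
    # theta_(i) = mean of all observations except x[i]
    return [(total - x[i]) / (n - 1) for i in range(n)]

sample = [10, 12, 9, 11, 13]
print(jackknife_means(sample))           # the five leave-one-out means
print(sum(jackknife_means(sample)) / 5)  # average of the leave-one-out estimates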
28
28 CSE 5331/7331 F'09 Maximum Likelihood Estimate (MLE) Obtain parameter estimates that maximize the probability that the sample data occurs for the specific model. Likelihood function: joint probability of observing the sample data, obtained by multiplying the individual probabilities: L(Θ | x 1, …, x n ) = f(x 1 | Θ) · … · f(x n | Θ). Maximize L.
29
29 CSE 5331/7331 F'09 MLE Example Coin toss five times: {H,H,H,H,T} Assuming a perfect coin with H and T equally likely, the likelihood of this sequence is: (1/2)^5 = 0.03125. However if the probability of a H is 0.8 then: (0.8)^4 (0.2) = 0.08192.
30
30 CSE 5331/7331 F'09 MLE Example (cont’d) General likelihood formula: L(p) = p^(number of heads) (1 – p)^(number of tails). Estimate for p is then 4/5 = 0.8
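A small Python sketch (my own, not part of the slides) that evaluates this likelihood over a grid of p values and confirms that p = 0.8 maximizes it for four heads and one tail:

def likelihood(p, heads, tails):
    # L(p) = p^heads * (1 - p)^tails
    return (p ** heads) * ((1 - p) ** tails)

candidates = [i / 100 for i in range(1, 100)]
best = max(candidates, key=lambda p: likelihood(p, heads=4, tails=1))
print(likelihood(0.5, 4, 1))  # 0.03125
print(likelihood(0.8, 4, 1))  # 0.08192
print(best)                   # 0.8, the maximum likelihood estimate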
31
31 CSE 5331/7331 F'09 Expectation-Maximization (EM) Solves estimation with incomplete data. Obtain initial estimates for parameters. Iteratively use estimates for missing data and continue until convergence.
32
32 CSE 5331/7331 F'09 EM Example
33
33 CSE 5331/7331 F'09 EM Algorithm
34
34 CSE 5331/7331 F'09 Bayes Theorem Posterior Probability: P(h 1 |x i ) Prior Probability: P(h 1 ) Bayes Theorem: P(h j |x i ) = P(x i |h j ) P(h j ) / P(x i ). Assign probabilities of hypotheses given a data value.
35
35 CSE 5331/7331 F'09 Bayes Theorem Example Credit authorizations (hypotheses): h 1 = authorize purchase, h 2 = authorize after further identification, h 3 = do not authorize, h 4 = do not authorize but contact police. Assign twelve data values for all combinations of credit and income. From training data: P(h 1 ) = 60%; P(h 2 ) = 20%; P(h 3 ) = 10%; P(h 4 ) = 10%.
36
36 CSE 5331/7331 F'09 Bayes Example (cont’d) Training Data:
37
37 CSE 5331/7331 F'09 Bayes Example (cont’d) Calculate P(x i |h j ) and P(x i ) Ex: P(x 7 |h 1 )=2/6; P(x 4 |h 1 )=1/6; P(x 2 |h 1 )=2/6; P(x 8 |h 1 )=1/6; P(x i |h 1 )=0 for all other x i. Predict the class for x 4 : –Calculate P(h j |x 4 ) for all h j. –Place x 4 in class with largest value. –Ex: P(h 1 |x 4 ) = (P(x 4 |h 1 ) P(h 1 )) / P(x 4 ) = (1/6)(0.6)/0.1 = 1. So x 4 is placed in class h 1.
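A short Python sketch of this calculation (the conditional, prior, and evidence values are those quoted on the slide; the helper name posterior is my own):

def posterior(p_x_given_h, p_h, p_x):
    # Bayes theorem: P(h|x) = P(x|h) * P(h) / P(x)
    return p_x_given_h * p_h / p_x

# Values from the slide for data value x4 and hypothesis h1
print(posterior(p_x_given_h=1/6, p_h=0.6, p_x=0.1))  # approximately 1.0, so x4 goes in class h1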
38
38 CSE 5331/7331 F'09 Hypothesis Testing Find model to explain behavior by creating and then testing a hypothesis about the data. Exact opposite of usual DM approach. H 0 – Null hypothesis; Hypothesis to be tested. H 1 – Alternative hypothesis
39
39 CSE 5331/7331 F'09 Chi Squared Statistic O – observed value E – Expected value based on hypothesis. Chi squared statistic: χ² = Σ (O – E)² / E Ex: –O={50,93,67,78,87} –E=75 –χ² = 15.55 and therefore significant
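A quick Python check of the example above (my own sketch; it only evaluates the formula and does not compare against a critical value):

observed = [50, 93, 67, 78, 87]
expected = 75
# chi-squared = sum over cells of (O - E)^2 / E
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(round(chi2, 2))  # 15.55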
40
40 CSE 5331/7331 F'09 Regression Predict future values based on past values Linear Regression assumes linear relationship exists: y = c 0 + c 1 x 1 + … + c n x n Find values to best fit the data
41
41 CSE 5331/7331 F'09 Linear Regression
42
42 CSE 5331/7331 F'09 Correlation Examine the degree to which the values for two variables behave similarly. Correlation coefficient r: covariance of the two variables divided by the product of their standard deviations. 1 = perfect correlation -1 = perfect but opposite correlation 0 = no correlation
43
43 CSE 5331/7331 F'09 Similarity Measures Determine similarity between two objects. Similarity characteristics: Alternatively, distance measures measure how unlike or dissimilar objects are.
44
44 CSE 5331/7331 F'09 Similarity Measures
45
45 CSE 5331/7331 F'09 Distance Measures Measure dissimilarity between objects
46
46 CSE 5331/7331 F'09 Twenty Questions Game
47
47 CSE 5331/7331 F'09 Decision Trees Decision Tree (DT): –Tree where the root and each internal node is labeled with a question. –The arcs represent each possible answer to the associated question. –Each leaf node represents a prediction of a solution to the problem. Popular technique for classification; leaf node indicates class to which the corresponding tuple belongs.
48
48 CSE 5331/7331 F'09 Decision Tree Example
49
49 CSE 5331/7331 F'09 Decision Trees A Decision Tree Model is a computational model consisting of three parts: –Decision Tree –Algorithm to create the tree –Algorithm that applies the tree to data Creation of the tree is the most difficult part. Processing is basically a search similar to that in a binary search tree (although DT may not be binary).
50
50 CSE 5331/7331 F'09 Decision Tree Algorithm
51
51 CSE 5331/7331 F'09 DT Advantages/Disadvantages Advantages: –Easy to understand. –Easy to generate rules Disadvantages: –May suffer from overfitting. –Classifies by rectangular partitioning. –Does not easily handle nonnumeric data. –Can be quite large – pruning is necessary.
52
52 CSE 5331/7331 F'09 Neural Networks Based on observed functioning of human brain. Also called Artificial Neural Networks (ANN). Our view of neural networks is very simplistic. We view a neural network (NN) from a graphical viewpoint. Alternatively, a NN may be viewed from the perspective of matrices. Used in pattern recognition, speech recognition, computer vision, and classification.
53
53 CSE 5331/7331 F'09 Neural Networks Neural Network (NN) is a directed graph F = <V, A> with vertices V = {1, 2, …, n} and arcs A = {<i, j> | 1 <= i, j <= n}, with the following restrictions: –V is partitioned into a set of input nodes, V I, hidden nodes, V H, and output nodes, V O. –The vertices are also partitioned into layers –Any arc <i, j> must have node i in layer h-1 and node j in layer h. –Arc <i, j> is labeled with a numeric value w ij. –Node i is labeled with a function f i.
54
54 CSE 5331/7331 F'09 Neural Network Example
55
55 CSE 5331/7331 F'09 NN Node
56
56 CSE 5331/7331 F'09 NN Activation Functions Functions associated with nodes in graph. Output may be in range [-1,1] or [0,1]
57
57 CSE 5331/7331 F'09 NN Activation Functions
58
58 CSE 5331/7331 F'09 NN Learning Propagate input values through graph. Compare output to desired output. Adjust weights in graph accordingly.
59
59 CSE 5331/7331 F'09 Neural Networks A Neural Network Model is a computational model consisting of three parts: –Neural Network graph –Learning algorithm that indicates how learning takes place. –Recall techniques that determine how information is obtained from the network. We will look at propagation as the recall technique.
60
60 CSE 5331/7331 F'09 NN Advantages Learning Can continue learning even after training set has been applied. Easy parallelization Solves many problems
61
61 CSE 5331/7331 F'09 NN Disadvantages Difficult to understand May suffer from overfitting Structure of graph must be determined a priori. Input values must be numeric. Verification difficult.
62
62 CSE 5331/7331 F'09 Genetic Algorithms Optimization search type algorithms. Creates an initial feasible solution and iteratively creates new “better” solutions. Based on human evolution and survival of the fittest. Must represent a solution as an individual. Individual: string I=I 1,I 2,…,I n where I j is in given alphabet A. Each character I j is called a gene. Population: set of individuals.
63
63 CSE 5331/7331 F'09 Genetic Algorithms A Genetic Algorithm (GA) is a computational model consisting of five parts: –A starting set of individuals, P. –Crossover: technique to combine two parents to create offspring. –Mutation: randomly change an individual. –Fitness: determine the best individuals. –Algorithm which applies the crossover and mutation techniques to P iteratively using the fitness function to determine the best individuals in P to keep.
64
64 CSE 5331/7331 F'09 Crossover Examples
65
65 CSE 5331/7331 F'09 Genetic Algorithm
66
66 CSE 5331/7331 F'09 GA Advantages/Disadvantages Advantages –Easily parallelized Disadvantages –Difficult to understand and explain to end users. –Abstraction of the problem and method to represent individuals is quite difficult. –Determining fitness function is difficult. –Determining how to perform crossover and mutation is difficult.
67
67 CSE 5331/7331 F'09 Data Mining Outline PART I – Introduction PART II – Core Topics –Classification –Clustering –Association Rules PART III – Related Topics
68
68 CSE 5331/7331 F'09 Classification Outline Classification Problem Overview Classification Techniques –Regression –Distance –Decision Trees –Rules –Neural Networks Goal: Provide an overview of the classification problem and introduce some of the basic algorithms
69
69 CSE 5331/7331 F'09 Classification Problem Given a database D={t 1,t 2,…,t n } and a set of classes C={C 1,…,C m }, the Classification Problem is to define a mapping f: D → C where each t i is assigned to one class. Actually divides D into equivalence classes. Prediction is similar, but may be viewed as having an infinite number of classes.
70
70 CSE 5331/7331 F'09 Classification Examples Teachers classify students’ grades as A, B, C, D, or F. Identify mushrooms as poisonous or edible. Predict when a river will flood. Identify individuals with credit risks. Speech recognition Pattern recognition
71
71 CSE 5331/7331 F'09 Classification Ex: Grading If x >= 90 then grade = A. If 80<=x<90 then grade = B. If 70<=x<80 then grade = C. If 60<=x<70 then grade = D. If x<60 then grade = F. [Decision tree figure: successive splits on x at 90, 80, 70, and 60 leading to leaves A, B, C, D, and F]
72
72 CSE 5331/7331 F'09 Classification Ex: Letter Recognition View letters as constructed from 5 components. [Figures showing letters A through F built from these components]
73
73 CSE 5331/7331 F'09 Classification Techniques Approach: 1.Create specific model by evaluating training data (or using domain experts’ knowledge). 2.Apply model developed to new data. Classes must be predefined Most common techniques use DTs, NNs, or are based on distances or statistical methods.
74
74 CSE 5331/7331 F'09 Defining Classes Partitioning Based Distance Based
75
75 CSE 5331/7331 F'09 Issues in Classification Missing Data –Ignore –Replace with assumed value Measuring Performance –Classification accuracy on test data –Confusion matrix –OC Curve
76
76 CSE 5331/7331 F'09 Height Example Data
77
77 CSE 5331/7331 F'09 Classification Performance True Positive, True Negative, False Positive, False Negative
78
78 CSE 5331/7331 F'09 Confusion Matrix Example Using height data example with Output1 correct and Output2 actual assignment
79
79 CSE 5331/7331 F'09 Operating Characteristic Curve
80
80 CSE 5331/7331 F'09 Regression Topics Linear Regression Nonlinear Regression Logistic Regression Metrics
81
81 CSE 5331/7331 F'09 Remember High School? Y = mx + b You need two points to determine a straight line. You need two points to find values for m and b. THIS IS REGRESSION
82
82 CSE 5331/7331 F'09 Regression Assume data fits a predefined function Determine best values for regression coefficients c 0,c 1,…,c n. Assume an error: y = c 0 + c 1 x 1 + … + c n x n + ε Estimate error using mean squared error for training set:
83
83 CSE 5331/7331 F'09 Linear Regression Assume data fits a predefined function Determine best values for regression coefficients c 0,c 1,…,c n. Assume an error: y = c 0 + c 1 x 1 + … + c n x n + ε Estimate error using mean squared error for training set:
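A minimal least-squares sketch in Python for the one-variable case y = c0 + c1·x (my own illustration; the closed-form slope and intercept formulas are the standard ones, and the sample data is made up):

def fit_line(xs, ys):
    """Return (c0, c1) minimizing the sum of squared errors for y = c0 + c1*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Standard least-squares estimates
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    c1 = num / den
    c0 = mean_y - c1 * mean_x
    return c0, c1

xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]   # roughly y = 2x
print(fit_line(xs, ys))          # intercept near 0, slope near 2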
84
84 CSE 5331/7331 F'09 Classification Using Linear Regression Division: Use regression function to divide area into regions. Prediction: Use regression function to predict a class membership function. Input includes desired class.
85
85 CSE 5331/7331 F'09 Division
86
86 CSE 5331/7331 F'09 Prediction
87
87 CSE 5331/7331 F'09 Linear Regression Poor Fit Why use sum of least squares? http://curvefit.com/sum_of_squares.htm Linear doesn’t always work well
88
88 CSE 5331/7331 F'09 Nonlinear Regression Data does not nicely fit a straight line Fit data to a curve Many possible functions Not as easy and straightforward as linear regression How nonlinear regression works: http://curvefit.com/how_nonlin_works.htm
89
89 CSE 5331/7331 F'09 Logistic Regression Generalized linear model Predict discrete outcome –Binomial (binary) logistic regression –Multinomial logistic regression One dependent variable Logistic Regression by Gerard E. Dallal http://www.jerrydallal.com/LHSP/logistic.htm
90
90 CSE 5331/7331 F'09 Logistic Regression (cont’d) Log Odds Function: log(p / (1 – p)) p is the probability that the outcome is 1 Odds – The probability the event occurs divided by the probability that it does not occur Log Odds function is strictly increasing as p increases
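A tiny Python sketch (my own) showing that the log odds function log(p/(1-p)) increases with p and is zero at p = 0.5:

import math

def log_odds(p):
    # log of the odds p / (1 - p)
    return math.log(p / (1 - p))

for p in [0.1, 0.25, 0.5, 0.75, 0.9]:
    print(p, round(log_odds(p), 3))
# p = 0.5 maps to 0; values below 0.5 give negative log odds, above 0.5 positive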
91
91 CSE 5331/7331 F'09 Why Log Odds? Shape of curve is desirable Relationship to probability Range −∞ to +∞
92
92 CSE 5331/7331 F'09 P-value The probability, under the null hypothesis, of obtaining a value at least as extreme as the one actually observed http://en.wikipedia.org/wiki/P-value http://sportsci.org/resource/stats/pvalues.html
93
93 CSE 5331/7331 F'09 Covariance Degree to which two variables vary in the same manner Correlation is normalized and covariance is not http://www.ds.unifi.it/VL/VL_EN/expect/expect3.html
94
94 CSE 5331/7331 F'09 Residual Error: Difference between desired output and predicted output May actually use sum of squares
95
95 CSE 5331/7331 F'09 Classification Using Distance Place items in class to which they are “closest”. Must determine distance between an item and a class. Classes represented by –Centroid: Central value. –Medoid: Representative point. –Individual points Algorithm: KNN
96
96 CSE 5331/7331 F'09 K Nearest Neighbor (KNN): Training set includes classes. Examine K items near item to be classified. New item placed in class with the most number of close items. O(q) for each tuple to be classified. (Here q is the size of the training set.)
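A compact Python sketch of the KNN idea just described (my own illustration; the one-dimensional training values and labels are made up, not the height table from the slides):

from collections import Counter

def knn_classify(training, new_value, k=3):
    """training: list of (value, class_label); classify new_value by majority vote of the k nearest."""
    # Sort training items by distance to the new item and keep the k closest
    nearest = sorted(training, key=lambda item: abs(item[0] - new_value))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [(1.6, "Short"), (1.7, "Short"), (1.8, "Medium"), (1.9, "Medium"), (2.1, "Tall")]
print(knn_classify(train, 1.85, k=3))  # "Medium"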
97
97 CSE 5331/7331 F'09 KNN
98
98 CSE 5331/7331 F'09 KNN Algorithm
99
99 CSE 5331/7331 F'09 Classification Using Decision Trees Partitioning based: Divide search space into rectangular regions. Tuple placed into class based on the region within which it falls. DT approaches differ in how the tree is built: DT Induction Internal nodes associated with attribute and arcs with values for that attribute. Algorithms: ID3, C4.5, CART
100
100 CSE 5331/7331 F'09 Decision Tree Given: –D = {t 1, …, t n } where t i = <t i1, …, t ih > –Database schema contains {A 1, A 2, …, A h } –Classes C={C 1, …, C m } Decision or Classification Tree is a tree associated with D such that –Each internal node is labeled with attribute, A i –Each arc is labeled with predicate which can be applied to attribute at parent –Each leaf node is labeled with a class, C j
101
101 CSE 5331/7331 F'09 DT Induction
102
102 CSE 5331/7331 F'09 DT Splits Area [Figure: search space split into rectangular regions on Gender (M/F) and Height]
103
103 CSE 5331/7331 F'09 Comparing DTs Balanced Deep
104
104 CSE 5331/7331 F'09 DT Issues Choosing Splitting Attributes Ordering of Splitting Attributes Splits Tree Structure Stopping Criteria Training Data Pruning
105
105 CSE 5331/7331 F'09 Decision Tree Induction is often based on Information Theory So
106
106 CSE 5331/7331 F'09 Information
107
107 CSE 5331/7331 F'09 DT Induction When all the marbles in the bowl are mixed up, little information is given. When the marbles in the bowl are all from one class and those in the other two classes are on either side, more information is given. Use this approach with DT Induction !
108
108 CSE 5331/7331 F'09 Information/Entropy Given probabilities p 1, p 2, .., p s whose sum is 1, Entropy is defined as: H(p 1,…,p s ) = Σ p i log(1/p i ). Entropy measures the amount of randomness or surprise or uncertainty. Goal in classification – no surprise – entropy = 0
109
109 CSE 5331/7331 F'09 Entropy [Plots of log(1/p) and of H(p, 1-p) as functions of p]
110
110 CSE 5331/7331 F'09 ID3 Creates tree using information theory concepts and tries to reduce expected number of comparisons. ID3 chooses split attribute with the highest information gain: Gain(D,S) = H(D) – Σ P(D i ) H(D i )
111
111 CSE 5331/7331 F'09 ID3 Example (Output1) Starting state entropy: 4/15 log(15/4) + 8/15 log(15/8) + 3/15 log(15/3) = 0.4384 Gain using gender: –Female: 3/9 log(9/3) + 6/9 log(9/6) = 0.2764 –Male: 1/6 log(6/1) + 2/6 log(6/2) + 3/6 log(6/3) = 0.4392 –Weighted sum: (9/15)(0.2764) + (6/15)(0.4392) = 0.34152 –Gain: 0.4384 – 0.34152 = 0.09688 Gain using height: 0.4384 – (2/15)(0.301) = 0.3983 Choose height as first splitting attribute
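A quick Python check of the entropy arithmetic above (my own sketch; note these slide figures only work out with base-10 logarithms):

import math

def entropy(probs, base=10):
    # H = sum of p * log(1/p); zero-probability terms contribute nothing
    return sum(p * math.log(1 / p, base) for p in probs if p > 0)

start = entropy([4/15, 8/15, 3/15])
female = entropy([3/9, 6/9])
male = entropy([1/6, 2/6, 3/6])
weighted = (9/15) * female + (6/15) * male
print(start)             # about 0.4385, the starting state entropy (0.4384 above)
print(start - weighted)  # about 0.0969, the gain from splitting on gender (0.09688 above)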
112
112 CSE 5331/7331 F'09 C4.5 ID3 favors attributes with large number of divisions Improved version of ID3: –Missing Data –Continuous Data –Pruning –Rules –GainRatio: Gain(D,S) divided by the entropy of the split proportions, H(P(D 1 ),…,P(D s ))
113
113 CSE 5331/7331 F'09 CART Create Binary Tree Uses entropy Formula to choose split point, s, for node t: Φ(s/t) = 2 P L P R Σ |P(C j |t L ) – P(C j |t R )| P L, P R : probability that a tuple in the training set will be on the left or right side of the tree.
114
114 CSE 5331/7331 F'09 CART Example At the start, there are six choices for split point (right branch on equality): –P(Gender) = 2(6/15)(9/15)(2/15 + 4/15 + 3/15) = 0.224 –P(1.6) = 0 –P(1.7) = 2(2/15)(13/15)(0 + 8/15 + 3/15) = 0.169 –P(1.8) = 2(5/15)(10/15)(4/15 + 6/15 + 3/15) = 0.385 –P(1.9) = 2(9/15)(6/15)(4/15 + 2/15 + 3/15) = 0.256 –P(2.0) = 2(12/15)(3/15)(4/15 + 8/15 + 3/15) = 0.32 Split at 1.8
115
115 CSE 5331/7331 F'09 Classification Using Neural Networks Typical NN structure for classification: –One output node per class –Output value is class membership function value Supervised learning For each tuple in training set, propagate it through NN. Adjust weights on edges to improve future classification. Algorithms: Propagation, Backpropagation, Gradient Descent
116
116 CSE 5331/7331 F'09 NN Issues Number of source nodes Number of hidden layers Training data Number of sinks Interconnections Weights Activation Functions Learning Technique When to stop learning
117
117 CSE 5331/7331 F'09 Decision Tree vs. Neural Network
118
118 CSE 5331/7331 F'09 Propagation Tuple Input Output
119
119 CSE 5331/7331 F'09 NN Propagation Algorithm
120
120 CSE 5331/7331 F'09 Example Propagation © Prentice Hall
121
121 CSE 5331/7331 F'09 NN Learning Adjust weights to perform better with the associated test data. Supervised: Use feedback from knowledge of correct classification. Unsupervised: No knowledge of correct classification needed.
122
122 CSE 5331/7331 F'09 NN Supervised Learning
123
123 CSE 5331/7331 F'09 Supervised Learning Possible error values assuming output from node i is y i but should be d i : Change weights on arcs based on estimated error
124
124 CSE 5331/7331 F'09 NN Backpropagation Propagate changes to weights backward from output layer to input layer. Delta Rule: Δw ij = c x ij (d j – y j ) Gradient Descent: technique to modify the weights in the graph.
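A hedged Python sketch of one delta-rule update for a single output node (my own illustration with made-up numbers; c is the learning rate, x the inputs on the incoming arcs, d the desired output, y the actual output):

def delta_rule_update(weights, x, d, y, c=0.1):
    # Delta rule: w_ij <- w_ij + c * x_ij * (d_j - y_j)
    return [w + c * xi * (d - y) for w, xi in zip(weights, x)]

weights = [0.2, -0.5, 0.3]
x = [1.0, 0.4, 0.7]
print(delta_rule_update(weights, x, d=1.0, y=0.6))  # each weight nudged in the direction that reduces the error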
125
125 CSE 5331/7331 F'09 Backpropagation Error
126
126 CSE 5331/7331 F'09 Backpropagation Algorithm
127
127 CSE 5331/7331 F'09 Gradient Descent
128
128 CSE 5331/7331 F'09 Gradient Descent Algorithm
129
129 CSE 5331/7331 F'09 Output Layer Learning
130
130 CSE 5331/7331 F'09 Hidden Layer Learning
131
131 CSE 5331/7331 F'09 Types of NNs Different NN structures used for different problems. Perceptron Self Organizing Feature Map Radial Basis Function Network
132
132 CSE 5331/7331 F'09 Perceptron Perceptron is one of the simplest NNs. No hidden layers.
133
133 CSE 5331/7331 F'09 Perceptron Example Suppose: –Summation: S = 3x 1 + 2x 2 – 6 –Activation: if S>0 then 1 else 0
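The same perceptron written out as a short Python sketch (the weights 3 and 2 and the bias -6 come from the slide; the sample inputs are my own):

def perceptron(x1, x2):
    # Summation S = 3*x1 + 2*x2 - 6, followed by a step activation
    s = 3 * x1 + 2 * x2 - 6
    return 1 if s > 0 else 0

print(perceptron(1, 1))  # S = -1, output 0
print(perceptron(2, 1))  # S = 2, output 1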
134
134 CSE 5331/7331 F'09 Self Organizing Feature Map (SOFM) Competitive Unsupervised Learning Observe how neurons work in brain: –Firing impacts firing of those near –Neurons far apart inhibit each other –Neurons have specific nonoverlapping tasks Ex: Kohonen Network
135
135 CSE 5331/7331 F'09 Kohonen Network
136
136 CSE 5331/7331 F'09 Kohonen Network Competitive Layer – viewed as 2D grid Similarity between competitive nodes and input nodes: –Input: X = <x 1, …, x h > –Weights: <w 1i, …, w hi > –Similarity defined based on dot product Competitive node most similar to input “wins” Winning node weights (as well as surrounding node weights) increased.
137
137 CSE 5331/7331 F'09 Radial Basis Function Network RBF function has Gaussian shape RBF Networks –Three Layers –Hidden layer – Gaussian activation function –Output layer – Linear activation function
138
138 CSE 5331/7331 F'09 Radial Basis Function Network
139
139 CSE 5331/7331 F'09 Classification Using Rules Perform classification using If-Then rules Classification Rule: r = <antecedent, consequent> May generate rules from other techniques (DT, NN) or generate directly. Algorithms: Gen, RX, 1R, PRISM
140
140 CSE 5331/7331 F'09 Generating Rules from DTs
141
141 CSE 5331/7331 F'09 Generating Rules Example
142
142 CSE 5331/7331 F'09 Generating Rules from NNs
143
143 CSE 5331/7331 F'09 1R Algorithm
144
144 CSE 5331/7331 F'09 1R Example
145
145 CSE 5331/7331 F'09 PRISM Algorithm
146
146 CSE 5331/7331 F'09 PRISM Example
147
147 CSE 5331/7331 F'09 Decision Tree vs. Rules Tree has implied order in which splitting is performed. Tree created based on looking at all classes. Rules have no ordering of predicates. Only need to look at one class to generate its rules.
148
148 CSE 5331/7331 F'09 Clustering Outline Clustering Problem Overview Clustering Techniques –Hierarchical Algorithms –Partitional Algorithms –Genetic Algorithm –Clustering Large Databases Goal: Provide an overview of the clustering problem and introduce some of the basic algorithms
149
149 CSE 5331/7331 F'09 Clustering Examples Segment customer database based on similar buying patterns. Group houses in a town into neighborhoods based on similar features. Identify new plant species Identify similar Web usage patterns
150
150 CSE 5331/7331 F'09 Clustering Example
151
151 CSE 5331/7331 F'09 Clustering Houses Size Based Geographic Distance Based
152
152 CSE 5331/7331 F'09 Clustering vs. Classification No prior knowledge –Number of clusters –Meaning of clusters Unsupervised learning
153
153 CSE 5331/7331 F'09 Clustering Issues Outlier handling Dynamic data Interpreting results Evaluating results Number of clusters Data to be used Scalability
154
154 CSE 5331/7331 F'09 Impact of Outliers on Clustering
155
155 CSE 5331/7331 F'09 Clustering Problem Given a database D={t 1,t 2,…,t n } of tuples and an integer value k, the Clustering Problem is to define a mapping f: D → {1,..,k} where each t i is assigned to one cluster K j, 1<=j<=k. A Cluster, K j, contains precisely those tuples mapped to it. Unlike the classification problem, clusters are not known a priori.
156
156 CSE 5331/7331 F'09 Types of Clustering Hierarchical – Nested set of clusters created. Partitional – One set of clusters created. Incremental – Each element handled one at a time. Simultaneous – All elements handled together. Overlapping/Non-overlapping
157
157 CSE 5331/7331 F'09 Clustering Approaches Hierarchical (Agglomerative, Divisive), Partitional, Categorical, Large DB (Sampling, Compression)
158
158 CSE 5331/7331 F'09 Cluster Parameters
159
159 CSE 5331/7331 F'09 Distance Between Clusters Single Link: smallest distance between points Complete Link: largest distance between points Average Link: average distance between points Centroid: distance between centroids
160
160 CSE 5331/7331 F'09 Hierarchical Clustering Clusters are created in levels, actually creating sets of clusters at each level. Agglomerative –Initially each item in its own cluster –Iteratively clusters are merged together –Bottom Up Divisive –Initially all items in one cluster –Large clusters are successively divided –Top Down
161
161 CSE 5331/7331 F'09 Hierarchical Algorithms Single Link MST Single Link Complete Link Average Link
162
162 CSE 5331/7331 F'09 Dendrogram Dendrogram: a tree data structure which illustrates hierarchical clustering techniques. Each level shows clusters for that level. –Leaf – individual clusters –Root – one cluster A cluster at level i is the union of its children clusters at level i+1.
163
163 CSE 5331/7331 F'09 Levels of Clustering
164
164 CSE 5331/7331 F'09 Agglomerative Example Distance matrix for items A–E (rows/columns in order A, B, C, D, E): A: 0 1 2 2 3; B: 1 0 2 4 3; C: 2 2 0 1 5; D: 2 4 1 0 3; E: 3 3 5 3 0. [Dendrogram over A, B, C, D, E showing the clusters formed at thresholds 1 through 5]
165
165 CSE 5331/7331 F'09 MST Example Same distance matrix as above for items A–E. [Minimum spanning tree connecting A, B, C, D, E based on those distances]
166
166 CSE 5331/7331 F'09 Agglomerative Algorithm
167
167 CSE 5331/7331 F'09 Single Link View all items with links (distances) between them. Finds maximal connected components in this graph. Two clusters are merged if there is at least one edge which connects them. Uses threshold distances at each level. Could be agglomerative or divisive.
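A small Python sketch of single-link agglomerative clustering on the A–E distance matrix from the earlier example (my own code, not the book's algorithm listing; it repeatedly merges the two clusters whose closest pair of points is nearest until one cluster remains):

# Distance matrix from the agglomerative example (items A-E)
items = ["A", "B", "C", "D", "E"]
dist = {
    ("A","B"):1, ("A","C"):2, ("A","D"):2, ("A","E"):3,
    ("B","C"):2, ("B","D"):4, ("B","E"):3,
    ("C","D"):1, ("C","E"):5, ("D","E"):3,
}

def d(x, y):
    return dist.get((x, y)) or dist.get((y, x))

def single_link_distance(c1, c2):
    # Single link: smallest distance between any point in c1 and any point in c2
    return min(d(x, y) for x in c1 for y in c2)

clusters = [{i} for i in items]
while len(clusters) > 1:
    # Find and merge the closest pair of clusters
    i, j = min(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
               key=lambda p: single_link_distance(clusters[p[0]], clusters[p[1]]))
    clusters[i] |= clusters[j]
    del clusters[j]
    print(clusters)  # prints the set of clusters after each merge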
168
168 CSE 5331/7331 F'09 MST Single Link Algorithm
169
169 CSE 5331/7331 F'09 Single Link Clustering
170
170 CSE 5331/7331 F'09 Partitional Clustering Nonhierarchical Creates clusters in one step as opposed to several steps. Since only one set of clusters is output, the user normally has to input the desired number of clusters, k. Usually deals with static sets.
171
171 CSE 5331/7331 F'09 Partitional Algorithms MST Squared Error K-Means Nearest Neighbor PAM BEA GA
172
172 CSE 5331/7331 F'09 MST Algorithm
173
173 CSE 5331/7331 F'09 Squared Error Minimized squared error
174
174 CSE 5331/7331 F'09 Squared Error Algorithm
175
175 CSE 5331/7331 F'09 K-Means Initial set of clusters randomly chosen. Iteratively, items are moved among sets of clusters until the desired set is reached. High degree of similarity among elements in a cluster is obtained. Given a cluster K i ={t i1,t i2,…,t im }, the cluster mean is m i = (1/m)(t i1 + … + t im )
176
176 CSE 5331/7331 F'09 K-Means Example Given: {2,4,10,12,3,20,30,11,25}, k=2 Randomly assign means: m 1 =3, m 2 =4 K 1 ={2,3}, K 2 ={4,10,12,20,30,11,25}, m 1 =2.5, m 2 =16 K 1 ={2,3,4}, K 2 ={10,12,20,30,11,25}, m 1 =3, m 2 =18 K 1 ={2,3,4,10}, K 2 ={12,20,30,11,25}, m 1 =4.75, m 2 =19.6 K 1 ={2,3,4,10,11,12}, K 2 ={20,30,25}, m 1 =7, m 2 =25 Stop as the clusters with these means are the same.
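A short Python sketch (my own) that reproduces the one-dimensional k-means iterations above, starting from the same initial means 3 and 4:

def kmeans_1d(points, means, iterations=10):
    clusters = [[] for _ in means]
    for _ in range(iterations):
        # Assignment step: each point goes to the cluster with the closest mean
        clusters = [[] for _ in means]
        for p in points:
            closest = min(range(len(means)), key=lambda i: abs(p - means[i]))
            clusters[closest].append(p)
        # Update step: recompute each cluster mean
        new_means = [sum(c) / len(c) for c in clusters]
        if new_means == means:
            break
        means = new_means
    return clusters, means

print(kmeans_1d([2, 4, 10, 12, 3, 20, 30, 11, 25], [3, 4]))
# Ends with clusters {2,3,4,10,11,12} and {20,30,25} and means 7 and 25, as on the slide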
177
177 CSE 5331/7331 F'09 K-Means Algorithm
178
178 CSE 5331/7331 F'09 Nearest Neighbor Items are iteratively merged into the existing clusters that are closest. Incremental Threshold, t, used to determine if items are added to existing clusters or a new cluster is created.
179
179 CSE 5331/7331 F'09 Nearest Neighbor Algorithm
180
180 CSE 5331/7331 F'09 PAM Partitioning Around Medoids (PAM) (K-Medoids) Handles outliers well. Ordering of input does not impact results. Does not scale well. Each cluster represented by one item, called the medoid. Initial set of k medoids randomly chosen.
181
181 CSE 5331/7331 F'09 PAM
182
182 CSE 5331/7331 F'09 PAM Cost Calculation At each step in algorithm, medoids are changed if the overall cost is improved. C jih – cost change for an item t j associated with swapping medoid t i with non-medoid t h.
183
183 CSE 5331/7331 F'09 PAM Algorithm
184
184 CSE 5331/7331 F'09 BEA Bond Energy Algorithm Database design (physical and logical) Vertical fragmentation Determine affinity (bond) between attributes based on common usage. Algorithm outline: 1.Create affinity matrix 2.Convert to BOND matrix 3.Create regions of close bonding
185
185 CSE 5331/7331 F'09 BEA Modified from [OV99]
186
186 CSE 5331/7331 F'09 Genetic Algorithm Example {A,B,C,D,E,F,G,H} Randomly choose initial solution: {A,C,E} {B,F} {D,G,H} or 10101000, 01000100, 00010011 Suppose crossover at point four and choose 1st and 3rd individuals: 10100011, 01000100, 00011000 What should termination criteria be?
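A minimal Python sketch of the single-point crossover used above (my own code; it swaps the tails of the 1st and 3rd bit strings at point four and reproduces the offspring shown on the slide):

def crossover(parent1, parent2, point):
    # Single-point crossover: swap everything after the crossover point
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

individuals = ["10101000", "01000100", "00010011"]
print(crossover(individuals[0], individuals[2], point=4))
# ('10100011', '00011000'), matching the offspring on the slide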
187
187 CSE 5331/7331 F'09 GA Algorithm
188
188 CSE 5331/7331 F'09 Clustering Large Databases Most clustering algorithms assume a large data structure which is memory resident. Clustering may be performed first on a sample of the database then applied to the entire database. Algorithms –BIRCH –DBSCAN –CURE
189
189 CSE 5331/7331 F'09 Desired Features for Large Databases One scan (or less) of DB Online Suspendable, stoppable, resumable Incremental Work with limited main memory Different techniques to scan (e.g. sampling) Process each tuple once
190
190 CSE 5331/7331 F'09 BIRCH Balanced Iterative Reducing and Clustering using Hierarchies Incremental, hierarchical, one scan Save clustering information in a tree Each entry in the tree contains information about one cluster New nodes inserted in closest entry in tree
191
191 CSE 5331/7331 F'09 Clustering Feature CF Triple: (N,LS,SS) –N: Number of points in cluster –LS: Sum of points in the cluster –SS: Sum of squares of points in the cluster CF Tree –Balanced search tree –Node has CF triple for each child –Leaf node represents cluster and has CF value for each subcluster in it. –Subcluster has maximum diameter
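A tiny Python sketch (my own) of a CF triple for one-dimensional points and the additivity property BIRCH relies on when merging subclusters:

def cf(points):
    # Clustering feature: (N, linear sum, sum of squares)
    return (len(points), sum(points), sum(p * p for p in points))

def merge(cf1, cf2):
    # CF triples are additive, so two subclusters can be merged without the raw points
    return tuple(a + b for a, b in zip(cf1, cf2))

print(cf([1, 2, 3]))                     # (3, 6, 14)
print(merge(cf([1, 2, 3]), cf([4, 5])))  # (5, 15, 55), the same as cf([1, 2, 3, 4, 5])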
192
192 CSE 5331/7331 F'09 BIRCH Algorithm
193
193 CSE 5331/7331 F'09 Improve Clusters
194
194 CSE 5331/7331 F'09 DBSCAN Density Based Spatial Clustering of Applications with Noise Outliers will not affect creation of cluster. Input –MinPts – minimum number of points in cluster –Eps – for each point in cluster there must be another point in it less than this distance away.
195
195 CSE 5331/7331 F'09 DBSCAN Density Concepts Eps-neighborhood: Points within Eps distance of a point. Core point: Eps-neighborhood dense enough (MinPts) Directly density-reachable: A point p is directly density-reachable from a point q if the distance is small (Eps) and q is a core point. Density-reachable: A point is density-reachable from another point if there is a path from one to the other consisting of only core points.
196
196 CSE 5331/7331 F'09 Density Concepts
197
197 CSE 5331/7331 F'09 DBSCAN Algorithm
198
198 CSE 5331/7331 F'09 CURE Clustering Using Representatives Use many points to represent a cluster instead of only one Points will be well scattered
199
199 CSE 5331/7331 F'09 CURE Approach
200
200 CSE 5331/7331 F'09 CURE Algorithm
201
201 CSE 5331/7331 F'09 CURE for Large Databases
202
202 CSE 5331/7331 F'09 Comparison of Clustering Techniques
203
203 CSE 5331/7331 F'09 Association Rules Outline Goal: Provide an overview of basic Association Rule mining techniques Association Rules Problem Overview –Large itemsets Association Rules Algorithms –Apriori –Sampling –Partitioning –Parallel Algorithms Comparing Techniques Incremental Algorithms Advanced AR Techniques
204
204 CSE 5331/7331 F'09 Example: Market Basket Data Items frequently purchased together: Bread ⇒ PeanutButter Uses: –Placement –Advertising –Sales –Coupons Objective: increase sales and reduce costs
205
205 CSE 5331/7331 F'09 Association Rule Definitions Set of items: I={I 1,I 2,…,I m } Transactions: D={t 1,t 2, …, t n }, t j ⊆ I Itemset: {I i1,I i2, …, I ik } ⊆ I Support of an itemset: Percentage of transactions which contain that itemset. Large (Frequent) itemset: Itemset whose number of occurrences is above a threshold.
206
206 CSE 5331/7331 F'09 Association Rules Example I = { Beer, Bread, Jelly, Milk, PeanutButter} Support of {Bread,PeanutButter} is 60%
207
207 CSE 5331/7331 F'09 Association Rule Definitions Association Rule (AR): implication X ⇒ Y where X, Y ⊆ I and X ∩ Y = ∅ Support of AR (s) X ⇒ Y: Percentage of transactions that contain X ∪ Y Confidence of AR (α) X ⇒ Y: Ratio of the number of transactions that contain X ∪ Y to the number that contain X
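A short Python sketch (my own, using a small made-up transaction set consistent with the 60% support figure on the earlier example slide) that computes support and confidence exactly as defined above:

transactions = [
    {"Bread", "Jelly", "PeanutButter"},
    {"Bread", "PeanutButter"},
    {"Bread", "Milk", "PeanutButter"},
    {"Beer", "Bread"},
    {"Beer", "Milk"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(x, y):
    # Support of X union Y divided by support of X
    return support(x | y) / support(x)

print(support({"Bread", "PeanutButter"}))       # 0.6
print(confidence({"Bread"}, {"PeanutButter"}))  # 0.75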
208
208 CSE 5331/7331 F'09 Association Rules Ex (cont’d)
209
209 CSE 5331/7331 F'09 Association Rule Problem Given a set of items I={I 1,I 2,…,I m } and a database of transactions D={t 1,t 2, …, t n } where t i ={I i1,I i2, …, I ik } and I ij ∈ I, the Association Rule Problem is to identify all association rules X ⇒ Y with a minimum support and confidence. Link Analysis NOTE: Support of X ⇒ Y is same as support of X ∪ Y.
210
210 CSE 5331/7331 F'09 Association Rule Techniques 1. Find Large Itemsets. 2. Generate rules from frequent itemsets.
211
211 CSE 5331/7331 F'09 Algorithm to Generate ARs
212
212 CSE 5331/7331 F'09 Apriori Large Itemset Property: Any subset of a large itemset is large. Contrapositive: If an itemset is not large, none of its supersets are large.
213
213 CSE 5331/7331 F'09 Large Itemset Property
214
214 CSE 5331/7331 F'09 Apriori Ex (cont’d) s=30%, α=50%
215
215 CSE 5331/7331 F'09 Apriori Algorithm 1. C 1 = Itemsets of size one in I; 2. Determine all large itemsets of size 1, L 1 ; 3. i = 1; 4. Repeat 5. i = i + 1; 6. C i = Apriori-Gen(L i-1 ); 7. Count C i to determine L i ; 8. until no more large itemsets found;
216
216 CSE 5331/7331 F'09 Apriori-Gen Generate candidates of size i+1 from large itemsets of size i. Approach used: join large itemsets of size i if they agree on the first i-1 items. May also prune candidates who have subsets that are not large.
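A hedged Python sketch of the join-and-prune step (my own illustration of the standard Apriori-Gen idea, not code from the book; itemsets are represented as alphabetically sorted tuples):

from itertools import combinations

def apriori_gen(large_itemsets, i):
    """Generate candidate itemsets of size i+1 from large itemsets of size i."""
    large = set(large_itemsets)
    candidates = set()
    for a in large:
        for b in large:
            # Join step: combine itemsets that agree on the first i-1 items
            if a[:i - 1] == b[:i - 1] and a[i - 1] < b[i - 1]:
                candidate = a + (b[i - 1],)
                # Prune step: every i-subset of the candidate must itself be large
                if all(sub in large for sub in combinations(candidate, i)):
                    candidates.add(candidate)
    return candidates

L2 = [("Bread", "Jelly"), ("Bread", "PeanutButter"), ("Jelly", "PeanutButter")]
print(apriori_gen(L2, 2))  # {('Bread', 'Jelly', 'PeanutButter')}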
217
217 CSE 5331/7331 F'09 Apriori-Gen Example
218
218 CSE 5331/7331 F'09 Apriori-Gen Example (cont’d)
219
219 CSE 5331/7331 F'09 Apriori Adv/Disadv Advantages: –Uses large itemset property. –Easily parallelized –Easy to implement. Disadvantages: –Assumes transaction database is memory resident. –Requires up to m database scans.
220
220 CSE 5331/7331 F'09 Sampling Large databases Sample the database and apply Apriori to the sample. Potentially Large Itemsets (PL): Large itemsets from sample Negative Border (BD - ): –Generalization of Apriori-Gen applied to itemsets of varying sizes. –Minimal set of itemsets which are not in PL, but whose subsets are all in PL.
221
221 CSE 5331/7331 F'09 Negative Border Example (figure: PL and BD⁻(PL))
222
222 CSE 5331/7331 F'09 Sampling Algorithm 1. Ds = sample of database D; 2. PL = large itemsets in Ds using smalls (a lowered support threshold); 3. C = PL ∪ BD⁻(PL); 4. Count C in the database using s; 5. ML = large itemsets in BD⁻(PL); 6. If ML = ∅ then done 7. else C = repeated application of BD⁻; 8. Count C in the database;
223
223 CSE 5331/7331 F'09 Sampling Example Find ARs assuming s = 20%. Ds = {t1, t2}, smalls = 10%. PL = {{Bread}, {Jelly}, {PeanutButter}, {Bread, Jelly}, {Bread, PeanutButter}, {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}. BD⁻(PL) = {{Beer}, {Milk}}. ML = {{Beer}, {Milk}}. Repeated application of BD⁻ generates all remaining itemsets.
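One way to read the BD⁻ definition above: an itemset is in the negative border if it is not in PL but every proper subset of it is. The brute-force Python sketch below (names are mine) follows that definition directly and assumes PL is downward closed, which holds when PL is a set of large itemsets; run on the PL from slide 223 it returns {{Beer}, {Milk}}.

```python
from itertools import combinations

def negative_border(pl, items):
    """BD-(PL): minimal itemsets not in PL whose proper subsets are all in PL.
    Assumes PL is downward closed (every subset of a PL itemset is also in PL)."""
    pl = {frozenset(s) for s in pl}
    border = set()
    # Size-1 candidates: any single item that is not itself potentially large.
    for i in items:
        if frozenset([i]) not in pl:
            border.add(frozenset([i]))
    # Larger candidates: extend a PL itemset by one item, then test the definition.
    for s in pl:
        for i in items:
            cand = s | {i}
            if len(cand) == len(s) + 1 and cand not in pl:
                if all(frozenset(sub) in pl
                       for sub in combinations(cand, len(cand) - 1)):
                    border.add(cand)
    return border

pl = [{"Bread"}, {"Jelly"}, {"PeanutButter"}, {"Bread", "Jelly"},
      {"Bread", "PeanutButter"}, {"Jelly", "PeanutButter"},
      {"Bread", "Jelly", "PeanutButter"}]
items = {"Beer", "Bread", "Jelly", "Milk", "PeanutButter"}
print(negative_border(pl, items))   # {frozenset({'Beer'}), frozenset({'Milk'})}
```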
224
224 CSE 5331/7331 F'09 Sampling Adv/Disadv Advantages: –Reduces the number of database scans to one in the best case and two in the worst. –Scales better. Disadvantages: –Potentially large number of candidates in the second pass.
225
225 CSE 5331/7331 F'09 Partitioning Divide the database into partitions D1, D2, …, Dp. Apply Apriori to each partition. Any large itemset must be large in at least one partition.
226
226 CSE 5331/7331 F'09 Partitioning Algorithm 1. Divide D into partitions D1, D2, …, Dp; 2. For i = 1 to p do 3. Li = Apriori(Di); 4. C = L1 ∪ … ∪ Lp; 5. Count C on D to generate L;
227
227 CSE 5331/7331 F'09 Partitioning Example D1, D2, s = 10%. L1 = {{Bread}, {Jelly}, {PeanutButter}, {Bread, Jelly}, {Bread, PeanutButter}, {Jelly, PeanutButter}, {Bread, Jelly, PeanutButter}}. L2 = {{Bread}, {Milk}, {PeanutButter}, {Bread, Milk}, {Bread, PeanutButter}, {Milk, PeanutButter}, {Bread, Milk, PeanutButter}, {Beer}, {Beer, Bread}, {Beer, Milk}}.
228
228 CSE 5331/7331 F'09 Partitioning Adv/Disadv Advantages: –Adapts to available main memory. –Easily parallelized. –Maximum number of database scans is two. Disadvantages: –May have many candidates during the second scan.
229
229 CSE 5331/7331 F'09 Parallelizing AR Algorithms Based on Apriori. Techniques differ in: –What is counted at each site. –How data (transactions) are distributed. Data Parallelism: –Data partitioned. –Count Distribution Algorithm. Task Parallelism: –Data and candidates partitioned. –Data Distribution Algorithm.
230
230 CSE 5331/7331 F'09 Count Distribution Algorithm (CDA) 1. Place a data partition at each site. 2. In parallel at each site do 3. C1 = itemsets of size one in I; 4. Count C1; 5. Broadcast counts to all sites; 6. Determine global large itemsets of size 1, L1; 7. i = 1; 8. Repeat 9. i = i + 1; 10. Ci = Apriori-Gen(Li-1); 11. Count Ci; 12. Broadcast counts to all sites; 13. Determine global large itemsets of size i, Li; 14. until no more large itemsets found;
231
231 CSE 5331/7331 F'09 CDA Example
232
232 CSE 5331/7331 F'09 Data Distribution Algorithm (DDA) 1. Place a data partition at each site. 2. In parallel at each site do 3. Determine local candidates of size 1 to count; 4. Broadcast local transactions to other sites; 5. Count local candidates of size 1 on all data; 6. Determine large itemsets of size 1 for local candidates; 7. Broadcast large itemsets to all sites; 8. Determine L1; 9. i = 1; 10. Repeat 11. i = i + 1; 12. Ci = Apriori-Gen(Li-1); 13. Determine local candidates of size i to count; 14. Count, broadcast, and find Li; 15. until no more large itemsets found;
233
233 CSE 5331/7331 F'09 DDA Example
234
234 CSE 5331/7331 F'09 Comparing AR Techniques Target, type, data type, data source, technique, itemset strategy and data structure, transaction strategy and data structure, optimization, architecture, parallelism strategy.
235
235 CSE 5331/7331 F'09 Comparison of AR Techniques
236
236 CSE 5331/7331 F'09 Hash Tree
237
237 CSE 5331/7331 F'09 Incremental Association Rules Generate ARs in a dynamic database. Problem: the algorithms assume a static database. Objective: –Know the large itemsets for D. –Find the large itemsets for D ∪ {ΔD}. An itemset that is large in the combined database must be large in either D or ΔD. Save the Li and their counts.
238
238 CSE 5331/7331 F'09 Note on ARs Many applications outside market basket data analysis: –Prediction (telecom switch failure). –Web usage mining. Many different types of association rules: –Temporal. –Spatial. –Causal.
239
239 CSE 5331/7331 F'09 Advanced AR Techniques Generalized Association Rules, Multiple-Level Association Rules, Quantitative Association Rules, Using multiple minimum supports, Correlation Rules.
240
240 CSE 5331/7331 F'09 Measuring Quality of Rules Support, Confidence, Interest, Conviction, Chi Squared Test.
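For the two measures above that are simple ratios of supports, a short sketch using the standard definitions (the slides do not spell the formulas out, so these are the textbook forms: interest, often called lift, is sup(X ∪ Y)/(sup(X)·sup(Y)), and conviction is (1 − sup(Y))/(1 − conf(X ⇒ Y))).

```python
def interest(sup_xy, sup_x, sup_y):
    """Lift: how much more often X and Y co-occur than if they were independent (1 = independent)."""
    return sup_xy / (sup_x * sup_y)

def conviction(sup_xy, sup_x, sup_y):
    """Conviction of X => Y; infinite when the rule is never violated."""
    conf = sup_xy / sup_x
    return float("inf") if conf == 1 else (1 - sup_y) / (1 - conf)

# Using the supports from the earlier toy data: sup(Bread)=0.8, sup(PB)=0.6, sup(Bread,PB)=0.6.
print(interest(0.6, 0.8, 0.6))    # 1.25
print(conviction(0.6, 0.8, 0.6))  # 1.6
```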
241
241 CSE 5331/7331 F'09 Data Mining Outline PART I – Introduction. PART II – Core Topics: –Classification –Clustering –Association Rules. PART III – Related Topics.
242
242 CSE 5331/7331 F'09 Related Topics Outline Database/OLTP Systems, Fuzzy Sets and Logic, Information Retrieval (Web Search Engines), Dimensional Modeling, Data Warehousing, OLAP/DSS, Statistics, Machine Learning, Pattern Matching. Goal: Examine some areas which are related to data mining.
243
243 CSE 5331/7331 F'09 DB & OLTP Systems Schema: –(ID, Name, Address, Salary, JobNo). Data Model: –ER –Relational. Transaction. Query: SELECT Name FROM T WHERE Salary > 100000. DM: Only imprecise queries.
244
244 CSE 5331/7331 F'09 Fuzzy Sets Outline Introduction/Overview. Material for these slides obtained from: Data Mining Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book; Introduction to “Type-2 Fuzzy Logic” by Jenny Carter, http://www.cse.dmu.ac.uk/~jennyc/
245
245 CSE 5331/7331 F'09 Fuzzy Sets and Logic Fuzzy Set: the set membership function is a real-valued function with output in the range [0,1]. f(x): degree to which x is in F. 1 − f(x): degree to which x is not in F. EX: –T = {x | x is a person and x is tall} –Let f(x) be the degree to which x is tall –Here f is the membership function. DM: Prediction and classification are fuzzy.
247
247 CSE 5331/7331 F'09 Fuzzy Sets
248
248 CSE 5331/7331 F'09 IR is Fuzzy (figure: a crisp accept/reject boundary vs. a fuzzy one)
249
249 CSE 5331/7331 F'09 Fuzzy Set Theory A fuzzy subset A of U is characterized by a membership function μ(A,u): U → [0,1] which associates with each element u of U a number μ(u) in the interval [0,1]. Definition: –Let A and B be two fuzzy subsets of U. Also, let ¬A be the complement of A. Then, »μ(¬A,u) = 1 − μ(A,u) »μ(A ∪ B,u) = max(μ(A,u), μ(B,u)) »μ(A ∩ B,u) = min(μ(A,u), μ(B,u))
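These three operations translate directly into code. The sketch below represents a discrete fuzzy subset as a dict from element to membership grade; the heights and membership values are illustrative only, not taken from the slides.

```python
def complement(a):
    return {u: 1 - m for u, m in a.items()}

def fuzzy_union(a, b):
    return {u: max(a.get(u, 0), b.get(u, 0)) for u in set(a) | set(b)}

def fuzzy_intersection(a, b):
    return {u: min(a.get(u, 0), b.get(u, 0)) for u in set(a) | set(b)}

# Illustrative 'tall' and 'short' fuzzy subsets over a few heights (cm).
tall  = {160: 0.1, 175: 0.5, 190: 0.9}
short = {160: 0.9, 175: 0.5, 190: 0.1}

print(complement(tall))                 # 1 - membership, pointwise
print(fuzzy_union(tall, short))         # max, pointwise
print(fuzzy_intersection(tall, short))  # min, pointwise
```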
250
250 CSE 5331/7331 F'09 The world is imprecise. Mathematical and statistical techniques are often unsatisfactory: –Experts make decisions with imprecise data in an uncertain world. –They work with knowledge that is rarely defined mathematically or algorithmically but uses vague terminology with words. Fuzzy logic is able to use vagueness to achieve a precise answer. By considering shades of grey and all factors simultaneously, you get a better answer, one that is more suited to the situation. © Jenny Carter
251
251 CSE 5331/7331 F'09 Fuzzy Logic then... is particularly good at handling uncertainty, vagueness and imprecision. It is especially useful where a problem can be described linguistically (using words). Applications include: –robotics –washing machine control –nuclear reactors –focusing a camcorder –information retrieval –train scheduling © Jenny Carter
252
252 CSE 5331/7331 F'09 Crisp Sets Different heights have the same ‘tallness’. © Jenny Carter
253
253 CSE 5331/7331 F'09 Fuzzy Sets The shape you see is known as the membership function. © Jenny Carter
254
254 CSE 5331/7331 F'09 Fuzzy Sets Shows two membership functions: ‘tall’ and ‘short’ © Jenny Carter
255
255 CSE 5331/7331 F'09 Notation For a member x of a discrete set with membership µ we use the notation µ/x. In other words, x is a member of the set to degree µ. Discrete sets are written as A = µ1/x1 + µ2/x2 + … + µn/xn, or equivalently A = Σi µi/xi, where x1, x2, …, xn are members of the set A and µ1, µ2, …, µn are their degrees of membership. A continuous fuzzy set A is written as A = ∫X µA(x)/x. © Jenny Carter
256
256 CSE 5331/7331 F'09 Fuzzy Sets The members of a fuzzy set are members to some degree, known as a membership grade or degree of membership. The membership grade is the degree of belonging to the fuzzy set; the larger the number (in [0,1]) the greater the degree of belonging. (N.B. This is not a probability.) The translation from x to µA(x) is known as fuzzification. A fuzzy set is either continuous or discrete. Graphical representation of membership functions is very useful. © Jenny Carter
257
257 CSE 5331/7331 F'09 Fuzzy Sets - Example Again, notice the overlapping of the sets reflecting the real world more accurately than if we were using a traditional approach. © Jenny Carter
258
258 CSE 5331/7331 F'09 Rules Rules are often of the form: IF x is A THEN y is B, where A and B are fuzzy sets defined on the universes of discourse X and Y respectively. –If pressure is high then volume is small. –If a tomato is red then a tomato is ripe. Here high, small, red and ripe are fuzzy sets. © Jenny Carter
259
259 CSE 5331/7331 F'09 Example - Dinner for two (p. 2-21 of the FL toolbox user guide) Dinner for two is a 2-input, 1-output, 3-rule system. Input 1: Service (0-10). Input 2: Food (0-10). Output: Tip (5-25%). Rule 1: If service is poor or food is rancid, then tip is cheap. Rule 2: If service is good, then tip is average. Rule 3: If service is excellent or food is delicious, then tip is generous. The inputs are crisp (non-fuzzy) numbers limited to a specific range. All rules are evaluated in parallel using fuzzy reasoning. The results of the rules are combined and distilled (de-fuzzified). The result is a crisp (non-fuzzy) number. © Jenny Carter
260
260 CSE 5331/7331 F'09 Dinner for two 1. Fuzzify the input: 2. Apply Fuzzy operator © Jenny Carter
261
261 CSE 5331/7331 F'09 Dinner for two 3. Apply implication method © Jenny Carter
262
262 CSE 5331/7331 F'09 Dinner for two 4.Aggregate all outputs © Jenny Carter
263
263 CSE 5331/7331 F'09 Dinner for two 5. Defuzzify. Various approaches, e.g. centre of area, mean of max. © Jenny Carter
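The five steps above can be strung together in a compact Mamdani-style sketch. Only the overall pipeline (fuzzify, combine with max/min, clip, aggregate, take the centre of area) follows the slides; the triangular membership functions, their break-points, and the 0-30% tip universe below are assumptions for illustration and do not reproduce the toolbox example's exact shapes or numbers.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tip(service, food):
    # 1. Fuzzify the inputs (assumed break-points on the 0-10 scales).
    poor, good, excellent = tri(service, -1, 0, 5), tri(service, 0, 5, 10), tri(service, 5, 10, 11)
    rancid, delicious = tri(food, -1, 0, 5), tri(food, 5, 10, 11)

    # 2. Apply the fuzzy operator (OR = max) to get each rule's firing strength.
    r1 = max(poor, rancid)          # -> tip is cheap
    r2 = good                       # -> tip is average
    r3 = max(excellent, delicious)  # -> tip is generous

    # 3-4. Implication (clip each output set by its rule strength) and
    #      aggregation (max), over a discretized 0-30% tip universe.
    ys = [y / 10.0 for y in range(0, 301)]
    agg = [max(min(r1, tri(y, 0, 5, 10)),
               min(r2, tri(y, 10, 15, 20)),
               min(r3, tri(y, 20, 25, 30))) for y in ys]

    # 5. Defuzzify with the centre-of-area (centroid) method.
    total = sum(agg)
    return sum(y * m for y, m in zip(ys, agg)) / total if total else 15.0

print(round(tip(service=3, food=8), 1))   # a crisp tip percentage
```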
264
264 CSE 5331/7331 F'09 Information Retrieval Outline Introduction/Overview. Material for these slides obtained from: Modern Information Retrieval by Ricardo Baeza-Yates and Berthier Ribeiro-Neto, http://www.sims.berkeley.edu/~hearst/irbook/; Data Mining Introductory and Advanced Topics by Margaret H. Dunham, http://www.engr.smu.edu/~mhd/book
265
265 CSE 5331/7331 F'09 Information Retrieval Information Retrieval (IR): retrieving desired information from textual data. Library Science. Digital Libraries. Web Search Engines. Traditionally keyword based. Sample query: Find all documents about “data mining”. DM: Similarity measures; mine text/Web data.
267
267 CSE 5331/7331 F'09 DB vs IR Records (tuples) vs. documents. Well-defined results vs. fuzzy results. DB grew out of files and traditional business systems. IR grew out of library science and the need to categorize/group/access books/articles.
268
268 CSE 5331/7331 F'09 DB vs IR (cont’d) Data retrieval: –which docs contain a set of keywords? –well-defined semantics –a single erroneous object implies failure! Information retrieval: –information about a subject or topic –semantics is frequently loose –small errors are tolerated. IR system: –interpret contents of information items –generate a ranking which reflects relevance –the notion of relevance is most important.
269
269 CSE 5331/7331 F'09 Motivation IR in the last 20 years: –classification and categorization –systems and languages –user interfaces and visualization. Still, the area was seen as of narrow interest. The advent of the Web changed this perception once and for all: –universal repository of knowledge –free (low cost) universal access –no central editorial board –many problems though: IR seen as key to finding the solutions!
270
270 CSE 5331/7331 F'09 Basic Concepts Logical view of the documents: document representation is viewed as a continuum, from the full text with its structure, through text operations (accents, spacing, stopwords, noun groups, stemming) and manual indexing, down to index terms; the logical view of the docs might shift along this continuum. © Baeza-Yates and Ribeiro-Neto
271
271 CSE 5331/7331 F'09 The Retrieval Process (figure: the user need passes through the user interface, text operations and query operations to form a query; indexing builds an inverted file over the text database; searching and ranking return ranked docs, refined by user feedback) © Baeza-Yates and Ribeiro-Neto
272
272 CSE 5331/7331 F'09 Information Retrieval Similarity: a measure of how close a query is to a document. Documents which are “close enough” are retrieved. Metrics: –Precision = |Relevant ∩ Retrieved| / |Retrieved| –Recall = |Relevant ∩ Retrieved| / |Relevant|
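Both metrics are one-liners once the relevant and retrieved sets are known; the document IDs in this sketch are hypothetical.

```python
def precision_recall(relevant, retrieved):
    """Assumes both sets are non-empty."""
    hits = len(relevant & retrieved)
    return hits / len(retrieved), hits / len(relevant)

relevant  = {"d1", "d3", "d5", "d7"}
retrieved = {"d1", "d2", "d3"}
p, r = precision_recall(relevant, retrieved)
print(p, r)   # precision = 2/3, recall = 2/4
```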
273
273 CSE 5331/7331 F'09 Indexing IR systems usually adopt index terms to process queries. Index term: –a keyword or group of selected words –any word (more general). Stemming might be used: –connect: connecting, connection, connections. An inverted file is built for the chosen index terms. © Baeza-Yates and Ribeiro-Neto
274
274 CSE 5331/7331 F'09 Indexing (figure: docs are indexed into index terms; the information need becomes a query; ranking matches the query against the doc representations) © Baeza-Yates and Ribeiro-Neto
275
275 CSE 5331/7331 F'09 Inverted Files There are two main elements: –vocabulary – the set of unique terms –occurrences – where those terms appear. The occurrences can be recorded as term positions or byte offsets. Using term positions is good for retrieving concepts such as proximity, whereas byte offsets allow direct access. © Baeza-Yates and Ribeiro-Neto
276
276 CSE 5331/7331 F'09 Inverted Files The number of indexed terms is often several orders of magnitude smaller than the documents' size (MBs vs GBs). The space consumed by the occurrence list is not trivial: each time a term appears it must be added to a list in the inverted file. That may lead to a quite considerable index overhead. © Baeza-Yates and Ribeiro-Neto
277
277 CSE 5331/7331 F'09 Example Text: “That house has a garden. The garden has many flowers. The flowers are beautiful” (word start offsets 1, 6, 12, 16, 18, 25, 29, 36, 40, 45, 54, 58, 66, 70). Inverted file (vocabulary → occurrences, byte offsets): beautiful → 70; flowers → 45, 58; garden → 18, 29; house → 6. © Baeza-Yates and Ribeiro-Neto
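An inverted file like the one above takes only a few lines to build. This sketch records word positions rather than the slide's byte offsets, so the numbers differ, and the small stopword list is an assumption made so that only the content words end up in the vocabulary.

```python
from collections import defaultdict

def build_inverted_index(text,
                         stopwords=frozenset({"that", "has", "a", "the", "are", "many"})):
    """Map each index term to the word positions where it occurs."""
    index = defaultdict(list)
    for pos, raw in enumerate(text.split(), start=1):
        term = raw.strip(".,").lower()
        if term and term not in stopwords:
            index[term].append(pos)
    return dict(index)

text = "That house has a garden. The garden has many flowers. The flowers are beautiful"
print(build_inverted_index(text))
# {'house': [2], 'garden': [5, 7], 'flowers': [10, 12], 'beautiful': [14]}
```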
278
278 CSE 5331/7331 F'09 Ranking A ranking is an ordering of the documents retrieved that (hopefully) reflects the relevance of the documents to the query. A ranking is based on fundamental premises regarding the notion of relevance, such as: –common sets of index terms –sharing of weighted terms –likelihood of relevance. Each set of premises leads to a distinct IR model. © Baeza-Yates and Ribeiro-Neto
279
279 CSE 5331/7331 F'09 Classic IR Models - Basic Concepts Each document is represented by a set of representative keywords or index terms. An index term is a document word useful for remembering the document's main themes. Usually, index terms are nouns because nouns have meaning by themselves. However, search engines assume that all words are index terms (full text representation). © Baeza-Yates and Ribeiro-Neto
280
280 CSE 5331/7331 F'09 Classic IR Models - Basic Concepts The importance of the index terms is represented by weights associated with them: ki – an index term; dj – a document; wij – a weight associated with (ki, dj). The weight wij quantifies the importance of the index term for describing the document's contents. © Baeza-Yates and Ribeiro-Neto
281
281 CSE 5331/7331 F'09 Classic IR Models - Basic Concepts –t is the total number of index terms –K = {k 1, k 2, …, k t } is the set of all index terms –w ij >= 0 is a weight associated with (k i,d j ) –w ij = 0 indicates that term does not belong to doc –d j = (w 1j, w 2j, …, w tj ) is a weighted vector associated with the document d j –g i (d j ) = w ij is a function which returns the weight associated with pair (k i,d j ) Baeza-Yates and Ribeiro-Neto © Baeza-Yates and Ribeiro-Neto
282
282 CSE 5331/7331 F'09 The Boolean Model Simple model based on set theory. Queries specified as boolean expressions: –precise semantics and neat formalism. Terms are either present or absent; thus wij ∈ {0,1}. Consider: –q = ka ∧ (kb ∨ ¬kc) –qdnf = (1,1,1) ∨ (1,1,0) ∨ (1,0,0) –qcc = (1,1,0) is a conjunctive component. © Baeza-Yates and Ribeiro-Neto
283
283 CSE 5331/7331 F'09 The Vector Model Use of binary weights is too limiting. Non-binary weights provide consideration for partial matches. These term weights are used to compute a degree of similarity between a query and each document. A ranked set of documents provides for better matching. © Baeza-Yates and Ribeiro-Neto
284
284 CSE 5331/7331 F'09 The Vector Model wij > 0 whenever ki appears in dj. wiq >= 0 is associated with the pair (ki, q). dj = (w1j, w2j, ..., wtj); q = (w1q, w2q, ..., wtq). To each term ki is associated a unit vector i. The unit vectors i and j are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents). The t unit vectors form an orthonormal basis for a t-dimensional space in which queries and documents are represented as weighted vectors. © Baeza-Yates and Ribeiro-Neto
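A sketch of the vector model's similarity computation using TF-IDF weights and the cosine measure, the usual choices for this model. The function names and the tiny corpus are mine, and weighting the query with the corpus IDF is a simplification.

```python
import math
from collections import Counter

def idf_weights(docs):
    """docs: list of token lists -> {term: idf}."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    return {t: math.log(n / df[t]) for t in df}

def tfidf(tokens, idf):
    """Sparse TF-IDF vector as {term: weight}."""
    return {t: tf * idf.get(t, 0.0) for t, tf in Counter(tokens).items()}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [["data", "mining", "rules"], ["data", "warehouse", "design"],
        ["association", "rules", "mining"]]
idf = idf_weights(docs)
doc_vecs = [tfidf(d, idf) for d in docs]
query = tfidf(["mining", "rules"], idf)
ranking = sorted(((cosine(query, v), i) for i, v in enumerate(doc_vecs)), reverse=True)
print(ranking)   # documents ordered by decreasing similarity to the query
```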
285
285 CSE 5331/7331 F'09 Query Languages Keyword Based, Boolean, Weighted Boolean, Context Based (Phrasal & Proximity), Pattern Matching, Structural Queries. © Baeza-Yates and Ribeiro-Neto
286
286 CSE 5331/7331 F'09 Keyword Based Queries Basic Queries: –Single word –Multiple words. Context Queries: –Phrase –Proximity. © Baeza-Yates and Ribeiro-Neto
287
287 CSE 5331/7331 F'09 Boolean Queries Keywords combined with Boolean operators: –OR: (e1 OR e2) –AND: (e1 AND e2) –BUT: (e1 BUT e2), satisfy e1 but not e2. Negation is only allowed using BUT, to allow efficient use of the inverted index by filtering another efficiently retrievable set. Naïve users have trouble with Boolean logic. © Baeza-Yates and Ribeiro-Neto
288
288 CSE 5331/7331 F'09 Boolean Retrieval with Inverted Indices Primitive keyword: retrieve containing documents using the inverted index. OR: recursively retrieve e1 and e2 and take the union of results. AND: recursively retrieve e1 and e2 and take the intersection of results. BUT: recursively retrieve e1 and e2 and take the set difference of results. © Baeza-Yates and Ribeiro-Neto
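These operators map directly onto set operations over posting lists. The toy index below is an assumption for illustration.

```python
# Toy inverted index: term -> set of document ids (illustrative only).
index = {
    "data":      {1, 2, 4},
    "mining":    {1, 4},
    "warehouse": {2, 3},
}

def docs(term):
    return index.get(term, set())

def OR(e1, e2):  return e1 | e2   # union of results
def AND(e1, e2): return e1 & e2   # intersection of results
def BUT(e1, e2): return e1 - e2   # set difference: satisfy e1 but not e2

print(AND(docs("data"), docs("mining")))     # {1, 4}
print(BUT(docs("data"), docs("warehouse")))  # {1, 4}
```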
289
289 CSE 5331/7331 F'09 Phrasal Queries Retrieve documents with a specific phrase (ordered list of contiguous words): –“information theory”. May allow intervening stop words and/or stemming: –“buy camera” matches “buy a camera”, “buying the cameras”, etc. © Baeza-Yates and Ribeiro-Neto
290
290 CSE 5331/7331 F'09 Phrasal Retrieval with Inverted Indices Must have an inverted index that also stores positions of each keyword in a document. Retrieve documents and positions for each individual word, intersect documents, and then finally check for ordered contiguity of keyword positions. Best to start the contiguity check with the least common word in the phrase. © Baeza-Yates and Ribeiro-Neto
291
291 CSE 5331/7331 F'09 Proximity Queries List of words with specific maximal distance constraints between terms. Example: “dogs” and “race” within 4 words match “…dogs will begin the race…”. May also perform stemming and/or not count stop words. © Baeza-Yates and Ribeiro-Neto
292
292 CSE 5331/7331 F'09 Pattern Matching Allow queries that match strings rather than word tokens. Requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently. © Baeza-Yates and Ribeiro-Neto
293
293 CSE 5331/7331 F'09 Simple Patterns Prefixes: pattern that matches the start of a word: –“anti” matches “antiquity”, “antibody”, etc. Suffixes: pattern that matches the end of a word: –“ix” matches “fix”, “matrix”, etc. Substrings: pattern that matches an arbitrary subsequence of characters: –“rapt” matches “enrapture”, “velociraptor”, etc. Ranges: pair of strings that matches any word lexicographically (alphabetically) between them: –“tin” to “tix” matches “tip”, “tire”, “title”, etc. © Baeza-Yates and Ribeiro-Neto
294
294 CSE 5331/7331 F'09 IR Query Result Measures and Classification (figure comparing IR and classification result measures)
295
295 CSE 5331/7331 F'09 Dimensional Modeling View data in a hierarchical manner, more as business executives might. Useful in decision support systems and mining. Dimension: a collection of logically related attributes; an axis for modeling data. Facts: the data stored. Ex: Dimensions – products, locations, date; Facts – quantity, unit price. DM: May view data as dimensional.
297
297 CSE 5331/7331 F'09 Aggregation Hierarchies
298
298 CSE 5331/7331 F'09 Multidimensional Schemas Star Schema shows facts and dimensions: –The center of the star has facts, shown in fact tables. –Outside of the facts, each dimension is shown separately in dimension tables. –Access to the fact table from a dimension table is via join: SELECT Quantity, Price FROM Facts, Location WHERE (Facts.LocationID = Location.LocationID) AND (Location.City = ‘Dallas’) –Viewed as relations; the problems are the volume of data and indexing.
299
299 CSE 5331/7331 F'09 Star Schema
300
300 CSE 5331/7331 F'09 Flattened Star
301
301 CSE 5331/7331 F'09 Normalized Star
302
302 CSE 5331/7331 F'09 Snowflake Schema
303
303 CSE 5331/7331 F'09 OLAP Online Analytic Processing (OLAP): provides more complex queries than OLTP. OnLine Transaction Processing (OLTP): traditional database/transaction processing. Dimensional data; cube view. Visualization of operations: –Slice: examine a sub-cube. –Dice: rotate the cube to look at another dimension. –Roll Up/Drill Down. DM: May use OLAP queries.
304
304 CSE 5331/7331 F'09 OLAP Introduction OLAP by Example: http://perso.orange.fr/bernard.lupin/english/index.htm What is OLAP? http://www.olapreport.com/fasmi.htm
305
305 CSE 5331/7331 F'09 OLAP Online Analytic Processing (OLAP): provides more complex queries than OLTP. OnLine Transaction Processing (OLTP): traditional database/transaction processing. Dimensional data; cube view. Supports ad hoc querying. Requires analysis of data. Can be thought of as an extension of some of the basic aggregation functions available in SQL. OLAP tools may be used in DSS systems. The multidimensional view is fundamental.
306
306 CSE 5331/7331 F'09 OLAP Implementations MOLAP (Multidimensional OLAP): –Multidimensional Database (MDD). –Specialized DBMS and software system capable of supporting the multidimensional data directly. –Data stored as an n-dimensional array (cube). –Indexes used to speed up processing. ROLAP (Relational OLAP): –Data stored in a relational database. –A ROLAP server (middleware) creates the multidimensional view for the user. –Less complex; less efficient. HOLAP (Hybrid OLAP): –Data not updated frequently – MDD. –Data updated frequently – RDB.
307
307 CSE 5331/7331 F'09 OLAP Operations Single Cell, Multiple Cells, Slice, Dice, Roll Up, Drill Down
308
308 CSE 5331/7331 F'09 OLAP Operations Simple query – a single cell in the cube. Slice – look at a subcube to get more specific information. Dice – rotate the cube to look at another dimension. Roll Up – dimension reduction; aggregation. Drill Down. Visualization: these operations allow OLAP users to actually “see” the results of an operation.
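With the cube stored as a fact table, most of these operations reduce to filtering and aggregation. A pandas sketch with made-up dimensions and facts (the column names and values are assumptions for illustration):

```python
import pandas as pd

# Tiny fact table: dimensions (product, city, year) and one fact (quantity).
facts = pd.DataFrame({
    "product":  ["milk", "milk", "bread", "bread"],
    "city":     ["Dallas", "Austin", "Dallas", "Austin"],
    "year":     [2008, 2008, 2009, 2009],
    "quantity": [10, 7, 4, 9],
})

# Simple query: a single cell of the cube.
cell = facts.query("product == 'milk' and city == 'Dallas' and year == 2008")["quantity"].sum()

# Slice: fix one dimension to get a sub-cube.
dallas_slice = facts[facts["city"] == "Dallas"]

# Roll up: aggregate away the city dimension.
rollup = facts.groupby(["product", "year"], as_index=False)["quantity"].sum()

# Drill down is the inverse: return to the finer-grained grouping.
drilldown = facts.groupby(["product", "year", "city"], as_index=False)["quantity"].sum()

print(cell, len(dallas_slice), rollup.shape, drilldown.shape)
```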
309
309 CSE 5331/7331 F'09 Relationship Between Topics
310
310 CSE 5331/7331 F'09 Decision Support Systems Tools and computer systems that assist management in decision making. “What if” types of questions. High-level decisions. Data warehouse – data which supports DSS.
311
311 CSE 5331/7331 F'09 Unified Dimensional Model Microsoft Cube View; SQL Server 2005. http://msdn2.microsoft.com/en-us/library/ms345143.aspx http://cwebbbi.spaces.live.com/Blog/cns!1pi7ETChsJ1un_2s41jm9Iyg!325.entry MDX AS2005: http://msdn2.microsoft.com/en-us/library/aa216767(SQL.80).aspx
312
312 CSE 5331/7331 F'09 Data Warehousing “Subject-oriented, integrated, time-variant, nonvolatile” – William Inmon. Operational Data: data used in the day-to-day needs of the company. Informational Data: supports other functions such as planning and forecasting. Data mining tools often access data warehouses rather than operational data. DM: May access data in a warehouse.
313
313 CSE 5331/7331 F'09 Operational vs. Informational
  Feature        Operational Data     Data Warehouse
  Application    OLTP                 OLAP
  Use            Precise queries      Ad hoc
  Temporal       Snapshot             Historical
  Modification   Dynamic              Static
  Orientation    Application          Business
  Data           Operational values   Integrated
  Size           Gigabits             Terabits
  Level          Detailed             Summarized
  Access         Often                Less often
  Response       Few seconds          Minutes
  Data Schema    Relational           Star/Snowflake
314
314 CSE 5331/7331 F'09 Statistics Simple descriptive models. Statistical inference: generalizing a model created from a sample of the data to the entire dataset. Exploratory Data Analysis: –Data can actually drive the creation of the model. –Opposite of the traditional statistical view. Data mining is targeted to the business user. DM: Many data mining methods come from statistical techniques.
315
315 CSE 5331/7331 F'09 Machine Learning Outline Introduction (Chuck Anderson). CS545: Machine Learning, by Chuck Anderson, Department of Computer Science, Colorado State University, Fall 2006
316
316 CSE 5331/7331 F'09 Machine Learning Machine Learning: the area of AI that examines how to write programs that can learn. Often used in classification and prediction. Supervised Learning: learns by example. Unsupervised Learning: learns without knowledge of correct answers. Machine learning often deals with small static datasets. DM: Uses many machine learning techniques.
317
317 CSE 5331/7331 F'09 What is Machine Learning? Statistics ≈ the science of inference from data Machine learning ≈ multivariate statistics + computational statistics Multivariate statistics ≈ prediction of values of a function assumed to underlie a multivariate dataset Computational statistics ≈ computational methods for statistical problems (aka statistical computation) + statistical methods which happen to be computationally intensive Data Mining ≈ exploratory data analysis, particularly with massive/complex datasets Chuck Anderson © Chuck Anderson
318
318 CSE 5331/7331 F'09 Kinds of Learning Learning algorithms are often categorized according to the amount of information provided: Least Information: –Unsupervised learning is more exploratory. –Requires samples of inputs; must find regularities. More Information: –Reinforcement learning is most recent. –Requires samples of inputs, actions, and rewards or punishments. Most Information: –Supervised learning is most common. –Requires samples of inputs and desired outputs. © Chuck Anderson
319
319 CSE 5331/7331 F'09 Examples of Algorithms Supervised learning: –Regression »multivariate regression »neural networks and kernel methods –Classification »linear and quadratic discriminant analysis »k-nearest neighbors »neural networks and kernel methods. Reinforcement learning: –multivariate regression –neural networks. Unsupervised learning: –principal components analysis –k-means clustering –self-organizing networks. © Chuck Anderson
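Of the unsupervised methods listed, k-means is the easiest to show end-to-end. A small NumPy sketch of plain Lloyd-style k-means (the random data, fixed seeds, and helper name are mine, not taken from the course materials):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean-update steps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
labels, centers = kmeans(X, k=2)
print(centers.round(2))   # should land near (0, 0) and (3, 3)
```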
320
320 CSE 5331/7331 F'09 Chuck Anderson © Chuck Anderson
321
321 CSE 5331/7331 F'09 Chuck Anderson © Chuck Anderson
322
322 CSE 5331/7331 F'09 Chuck Anderson © Chuck Anderson
323
323 CSE 5331/7331 F'09 Chuck Anderson © Chuck Anderson
324
324 CSE 5331/7331 F'09 Pattern Matching (Recognition) Pattern Matching: finds occurrences of a predefined pattern in the data. Applications include speech recognition, information retrieval, time series analysis. DM: A type of classification.
325
325 CSE 5331/7331 F'09 Image Mining Outline Image Mining – What is it?, Feature Extraction, Shape Detection, Color Techniques, Video Mining, Facial Recognition, Bioinformatics
326
326 CSE 5331/7331 F'09 The 2000 ozone hole over the antarctic seen by EPTOMS http://jwocky.gsfc.nasa.gov/multi/multi.html#hole
327
327 CSE 5331/7331 F'09 Image Mining – What is it? Image Retrieval, Image Classification, Image Clustering, Video Mining. Applications: –Bioinformatics –Geology/Earth Science –Security –…
328
328 CSE 5331/7331 F'09 Feature Extraction Identify major components of an image: color, texture, shape, spatial relationships. Feature Extraction & Image Processing: http://users.ecs.soton.ac.uk/msn/book/ Feature Extraction Tutorial: http://facweb.cs.depaul.edu/research/vc/VC_Workshop/presentations/pdf/daniela_tutorial2.pdf
329
329 CSE 5331/7331 F'09 Shape Detection Boundary/Edge Detection. Time Series – Eamonn Keogh: http://www.engr.smu.edu/~mhd/8337sp07/shapes.ppt
330
330 CSE 5331/7331 F'09 Color Techniques Color Representations – RGB: http://en.wikipedia.org/wiki/Rgb HSV: http://en.wikipedia.org/wiki/HSV_color_space Color Histogram. Color Anglogram: http://www.cs.sunysb.edu/~rzhao/publications/VideoDB.pdf
331
331 CSE 5331/7331 F'09 What is Similarity ? (c) Eamonn Keogh, eamonn@cs.ucr.edu
332
332 CSE 5331/7331 F'09 Video Mining Boundaries between shots. Movement between frames. ANSES: http://mmir.doc.ic.ac.uk/demos/anses.html
333
333 CSE 5331/7331 F'09 Facial Recognition Based upon features in the face. Convert the face to a feature vector. Less invasive than other biometric techniques. http://www.face-rec.org http://computer.howstuffworks.com/facial-recognition.htm SIMS: http://www.casinoincidentreporting.com/Products.aspx
334
334 CSE 5331/7331 F'09 Microarray Data Analysis Each probe location is associated with a gene. Measure the amount of mRNA. Color indicates the degree of gene expression. Compare different samples (normal/disease). Track the same sample over time. Questions: –Which genes are related to this disease? –Which genes behave in a similar manner? –What is the function of a gene? Clustering: –Hierarchical –K-means
335
335 CSE 5331/7331 F'09 Affymetrix GeneChip ® Array http://www.affymetrix.com/corporate/outreach/lesson_plan/educator_resources.affx
336
336 CSE 5331/7331 F'09 Microarray Data - Clustering "Gene expression profiling identifies clinically relevant subtypes of prostate cancer" Proc. Natl. Acad. Sci. USA, Vol. 101, Issue 3, 811-816, January 20, 2004
337
337 CSE 5331/7331 F'09