1
Chapter 3 – Data Visualization
Data Mining for Business Intelligence, by Shmueli, Patel & Bruce. © Galit Shmueli and Peter Bruce 2010
2
Graphs for Data Exploration
Basic plots:
- Line graphs
- Bar charts
- Scatterplots
Distribution plots:
- Boxplots
- Histograms
3
Line Graph for Time Series
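The slide shows the chart itself; as a rough Python sketch of the same idea (the file name Amtrak.csv and the Month/Ridership column names are assumptions, echoing the Amtrak example later in the deck):

import pandas as pd
import matplotlib.pyplot as plt

# Load a monthly time series; file and column names are assumed, not given in the slides
ridership = pd.read_csv("Amtrak.csv", parse_dates=["Month"], index_col="Month")
ridership["Ridership"].plot()    # a line graph: values connected over the time index
plt.ylabel("Ridership")
plt.show()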
4
Bar Chart for Categorical Variable
- 95% of tracts do not border the Charles River
- Excel can confuse: the y-axis is actually “% of records that have a value for CAT.MEDV” (i.e., “% of all records”)
5
Scatterplot
Displays the relationship between two numerical variables
6
Distribution Plots
Show how many times each value occurs in a data set
For continuous data, or data with many possible values, show how many values fall in each of a series of ranges, or “bins”
7
Histograms
Boston Housing example: the histogram shows the distribution of the outcome variable, MEDV (median house value)
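A minimal Python sketch of such a histogram, assuming a BostonHousing.csv file with a MEDV column (both names are assumptions):

import pandas as pd
import matplotlib.pyplot as plt

housing = pd.read_csv("BostonHousing.csv")    # assumed file/column names
plt.hist(housing["MEDV"], bins=20)            # counts of values falling in each bin
plt.xlabel("MEDV (median house value, $000s)")
plt.ylabel("Count")
plt.show()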
8
Boxplots
Side-by-side boxplots are useful for comparing subgroups
Boston Housing example: display the distribution of the outcome variable (MEDV) for neighborhoods on the Charles River (1) and not on the river (0)
9
Box Plot
Top outliers are defined as those above Q3 + 1.5(Q3 - Q1)
“max” = maximum of the non-outliers
Analogous definitions apply for bottom outliers and for “min”
Details may differ across software
[Diagram labels, top to bottom: outliers, “max”, Quartile 3, mean, median, Quartile 1, “min”]
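A hedged Python sketch of the side-by-side boxplots and the outlier rule above, assuming a BostonHousing.csv file with MEDV and CHAS columns:

import pandas as pd
import matplotlib.pyplot as plt

housing = pd.read_csv("BostonHousing.csv")    # assumed file/column names
housing.boxplot(column="MEDV", by="CHAS")     # side-by-side boxplots, one per CHAS value

# The fence rule from the slide, computed directly:
q1, q3 = housing["MEDV"].quantile([0.25, 0.75])
iqr = q3 - q1
top_outliers = housing.loc[housing["MEDV"] > q3 + 1.5 * iqr, "MEDV"]  # above Q3 + 1.5(Q3-Q1)
print(top_outliers)
plt.show()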
10
Heat Maps
Color conveys information
In data mining, heat maps are used to visualize:
- Correlations
- Missing data
11
Heatmap to highlight correlations (Boston Housing)
Shown in Excel (using conditional formatting) and in Spotfire
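Outside Excel/Spotfire, a rough Python equivalent colors the cells of the correlation matrix (file name assumed; columns assumed numeric):

import pandas as pd
import matplotlib.pyplot as plt

housing = pd.read_csv("BostonHousing.csv")            # assumed file name
corr = housing.corr()
plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)    # color encodes correlation sign/strength
plt.colorbar(label="correlation")
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.show()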
12
Multidimensional Visualization
13
Scatterplot with color added
Boston Housing: NOX vs. LSTAT
Red = low median value; blue = high median value
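A sketch of the same idea in Python, assuming BostonHousing.csv with LSTAT, NOX, and a 0/1 CAT.MEDV column (all names are assumptions):

import pandas as pd
import matplotlib.pyplot as plt

housing = pd.read_csv("BostonHousing.csv")                 # assumed file/column names
colors = housing["CAT.MEDV"].map({0: "red", 1: "blue"})    # red = low value, blue = high value
plt.scatter(housing["LSTAT"], housing["NOX"], c=colors, s=12)
plt.xlabel("LSTAT")
plt.ylabel("NOX")
plt.show()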
14
Matrix Plot
Shows scatterplots for variable pairs
Example: scatterplots for 3 Boston Housing variables
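A minimal Python sketch using pandas' scatter_matrix (file name and the three chosen variables are assumptions):

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

housing = pd.read_csv("BostonHousing.csv")            # assumed file/column names
scatter_matrix(housing[["CRIM", "INDUS", "MEDV"]])    # one scatterplot per variable pair
plt.show()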
15
Rescaling to log scale (on right) “uncrowds” the data
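A sketch of the raw-vs-log comparison in Python, assuming BostonHousing.csv; CRIM is used here only because it is strictly positive and heavily skewed:

import pandas as pd
import matplotlib.pyplot as plt

housing = pd.read_csv("BostonHousing.csv")             # assumed file/column names
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(housing["CRIM"], housing["MEDV"], s=10)    # raw scale: points pile up near zero
ax2.scatter(housing["CRIM"], housing["MEDV"], s=10)
ax2.set_xscale("log")                                  # log scale spreads the crowded values
ax1.set_xlabel("CRIM")
ax2.set_xlabel("CRIM (log scale)")
plt.show()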
16
Aggregation
17
Amtrak Ridership – Monthly Data
18
Aggregation – Monthly Average
19
Aggregation – Yearly Average
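A hedged pandas sketch of both aggregations (file and column names are assumptions; “monthly average” is read here as the average for each calendar month across years):

import pandas as pd

ridership = pd.read_csv("Amtrak.csv", parse_dates=["Month"], index_col="Month")  # assumed names

# Seasonality: average ridership for each calendar month, across all years
monthly_avg = ridership["Ridership"].groupby(ridership.index.month).mean()

# Trend: average ridership within each year
yearly_avg = ridership["Ridership"].resample("Y").mean()

print(monthly_avg)
print(yearly_avg)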
20
Scatter Plot with Labels (Utilities)
21
Scaling: Smaller markers, jittering, color contrast (Universal Bank; red = accept loan)
22
Jittering
Moving markers by a small random amount
“Uncrowds” the data by allowing more markers to be seen
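A small self-contained Python sketch of jittering on synthetic integer-valued data (the data and the jitter width are illustrative choices, not from the slides):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical integer-valued data, so many markers land on exactly the same spot
x = rng.integers(1, 6, size=500)
y = rng.integers(1, 6, size=500)

def jitter(a, width=0.15):
    # Move each marker by a small random amount
    return a + rng.uniform(-width, width, size=a.size)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(x, y, s=8)
ax1.set_title("Without jittering")        # overlapping markers hide how many records there are
ax2.scatter(jitter(x), jitter(y), s=8)
ax2.set_title("With jittering")
plt.show()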
23
Without jittering (for comparison)
24
Parallel Coordinate Plot (Boston Housing)
[Plot legend: CAT.MEDV = 1 vs. CAT.MEDV = 0]
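A rough Python equivalent using pandas' parallel_coordinates (file name, column choice, and the [0, 1] rescaling are assumptions):

import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

housing = pd.read_csv("BostonHousing.csv")         # assumed file/column names
cols = ["CRIM", "NOX", "RM", "LSTAT"]              # illustrative variable choice
subset = housing[cols + ["CAT.MEDV"]].copy()

# Rescale each variable to [0, 1] so no single large-scale variable flattens the others
for c in cols:
    subset[c] = (subset[c] - subset[c].min()) / (subset[c].max() - subset[c].min())

parallel_coordinates(subset, class_column="CAT.MEDV", color=["steelblue", "tomato"])
plt.show()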
25
Linked plots (same record is highlighted in each plot)
26
Network Graph – eBay Auctions (sellers on left, buyers on right)
Circle size = # of transactions for the node
Line width = # of auctions for the buyer-seller pair
Arrows point from buyer to seller
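A minimal sketch of such a network graph with the networkx package; the buyer-seller pairs below are hypothetical stand-ins for the eBay data:

import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical buyer-seller pairs with auction counts
edges = [("buyer1", "sellerA", 3), ("buyer2", "sellerA", 1),
         ("buyer2", "sellerB", 5), ("buyer3", "sellerB", 2)]

G = nx.DiGraph()
for buyer, seller, n_auctions in edges:
    G.add_edge(buyer, seller, weight=n_auctions)      # arrow points from buyer to seller

pos = nx.bipartite_layout(G, ["sellerA", "sellerB"])  # sellers on one side, buyers on the other
widths = [G[u][v]["weight"] for u, v in G.edges()]    # line width = # auctions for the pair
sizes = [300 * G.degree(node) for node in G.nodes()]  # circle size grows with # transactions
nx.draw(G, pos, with_labels=True, width=widths, node_size=sizes)
plt.show()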
27
Treemap – eBay Auctions (hierarchical eBay data: category > sub-category > brand)
Rectangle size = average closing price (i.e., item value)
Color = % of sellers with negative feedback (darker = more)
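A hedged sketch using the third-party squarify package (an assumption; the book itself uses other tools), with hypothetical category-level aggregates:

import matplotlib.pyplot as plt
import squarify  # third-party package: pip install squarify

# Hypothetical category/brand aggregates standing in for the eBay data
labels = ["Cat A / Brand 1", "Cat A / Brand 2", "Cat B / Brand 3", "Cat B / Brand 4"]
avg_price = [120, 80, 45, 30]               # rectangle size = average closing price
pct_negative = [0.02, 0.10, 0.05, 0.08]     # color: darker = more sellers with negative feedback

colors = plt.cm.Reds([p / max(pct_negative) for p in pct_negative])
squarify.plot(sizes=avg_price, label=labels, color=colors)
plt.axis("off")
plt.show()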
28
Map Chart (comparing countries’ well-being with GDP)
Darker = higher value
29
Summary of Major Visualizations & Operations, According to Data Mining Goal
30
Summary of Major Visualizations & Operations, According to Data Mining Goal (cont.)
31
Summary of Major Visualizations & Operations, According to Data Mining Goal (cont.)
32
Chapter 4 – Dimension Reduction
Data Mining for Business Intelligence, by Shmueli, Patel & Bruce. © Galit Shmueli and Peter Bruce 2010
33
Exploring the Data
Statistical summary of data: common metrics
- Average
- Median
- Minimum
- Maximum
- Standard deviation
- Counts & percentages
34
Summary Statistics – Boston Housing
35
Correlations Between Pairs of Variables: Correlation Matrix from Excel
36
Summarize Using Pivot Tables
Counts & percentages are useful for summarizing categorical data
Boston Housing example: 471 neighborhoods do not border the Charles River (0); 35 neighborhoods do (1)
37
Pivot Tables, cont.
Averages are useful for summarizing grouped numerical data
Boston Housing example: compare average home values in neighborhoods that border the Charles River (1) and those that do not (0)
38
Pivot Tables, cont.
Group by multiple criteria: by # of rooms and location
E.g., neighborhoods on the Charles with 6-7 rooms have an average house value of … ($000)
39
Pivot Table Hint
To get counts, drag any variable (e.g., “ID”) to the data area
Select “settings”, then change “sum” to “count”
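The pandas equivalent of these pivot-table operations, as a sketch (file and column names are assumptions; the room bins are an illustrative choice):

import pandas as pd

housing = pd.read_csv("BostonHousing.csv")    # assumed file/column names

# Counts by category (the "change sum to count" trick)
print(housing.pivot_table(index="CHAS", values="MEDV", aggfunc="count"))

# Average home value by river location
print(housing.pivot_table(index="CHAS", values="MEDV", aggfunc="mean"))

# Group by multiple criteria: river location and binned number of rooms
rooms = pd.cut(housing["RM"], bins=range(3, 10))
print(housing.pivot_table(index="CHAS", columns=rooms, values="MEDV", aggfunc="mean"))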
40
Correlation Analysis
Below: correlation matrix for a portion of the Boston Housing data
Shows the correlation between variable pairs
41
Reducing Categories
A single categorical variable with m categories is typically transformed into m-1 dummy variables
Each dummy variable takes the values 0 or 1 (0 = “no” for the category, 1 = “yes”)
Problem: this can end up creating too many variables
Solution: reduce the number by combining categories that are close to each other (see the sketch below)
Use pivot tables to assess the outcome variable’s sensitivity to the dummies
Exception: Naïve Bayes can handle categorical variables without transforming them into dummies
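A minimal pandas sketch of combining categories and then creating m-1 dummies; the zone variable and the merged categories are hypothetical:

import pandas as pd

# Hypothetical categorical variable with m = 4 categories
df = pd.DataFrame({"zone": ["A", "B", "C", "D", "A", "C"]})

# Combine categories that behave similarly (here, hypothetically, C and D)
df["zone"] = df["zone"].replace({"D": "C"})

# m-1 dummies: drop_first avoids the redundant m-th dummy
dummies = pd.get_dummies(df["zone"], prefix="zone", drop_first=True)
print(dummies)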
42
Combining Categories
Many zoning categories are the same or similar with respect to CAT.MEDV
43
Principal Components Analysis
Goal: reduce a set of numerical variables
The idea: remove the overlap of information between these variables (“information” is measured by the sum of the variances of the variables)
Final product: a smaller number of numerical variables that contain most of the information
44
Principal Components Analysis
How does PCA do this?
Create new variables that are linear combinations of the original variables (i.e., weighted averages of the original variables)
These linear combinations are uncorrelated (no information overlap), and only a few of them contain most of the original information
The new variables are called principal components
45
Example – Breakfast Cereals
46
Description of Variables
- name: name of cereal
- mfr: manufacturer
- type: cold or hot
- calories: calories per serving
- protein: grams
- fat: grams
- sodium: mg
- fiber: grams
- carbo: grams of complex carbohydrates
- sugars: grams
- potass: mg
- vitamins: % of FDA recommendation
- shelf: display shelf
- weight: oz. in one serving
- cups: cups in one serving
- rating: Consumer Reports rating
47
Consider calories & ratings
Total variance (= “information”) is the sum of the individual variances; calories alone accounts for about 66% of the total
48
First & Second Principal Components
Z1 and Z2 are two linear combinations
Z1 has the highest variation (spread of values); Z2 has the lowest variation
49
PCA output for these 2 variables
Top: weights used to project the original data onto Z1 & Z2; e.g., (-0.847, 0.532) are the weights for Z1
Bottom: reallocated variance for the new variables; Z1: 86% of total variance, Z2: 14%
50
Principal Component Scores
The weights are used to compute the scores above; e.g., column 1 contains the Z1 scores, computed using the weights (-0.847, 0.532)
51
Properties of the resulting variables
New distribution of information:
- New variances = 498 (for Z1) and 79 (for Z2)
- The sum of the new variances equals the sum of the variances of the original variables, calories and ratings
- The new variable Z1 has most of the total variance, and might be used as a proxy for both calories and ratings
- Z1 and Z2 have a correlation of zero (no information overlap)
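A sketch reproducing this two-variable example with scikit-learn, assuming a Cereals.csv file with calories and rating columns (both names are assumptions; the signs of the weights may come out flipped, which does not change the information content):

import pandas as pd
from sklearn.decomposition import PCA

cereals = pd.read_csv("Cereals.csv")          # assumed file/column names
X = cereals[["calories", "rating"]].dropna()

pca = PCA(n_components=2)
scores = pca.fit_transform(X)           # the principal component scores (Z1, Z2)
print(scores[:3])
print(pca.components_)                  # weight vectors; the slide reports (-0.847, 0.532) for Z1
print(pca.explained_variance_)          # variances of Z1 and Z2; the slide reports 498 and 79
print(pca.explained_variance_ratio_)    # roughly the 86% / 14% split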
52
Generalization
- X1, X2, X3, …, Xp: the original p variables
- Z1, Z2, Z3, …, Zp: weighted averages of the original variables
- All pairs of Z variables have 0 correlation
- Order the Z’s by variance (Z1 largest, Zp smallest)
- Usually the first few Z variables contain most of the information, and so the rest can be dropped
53
PCA on the Full Data Set
First 6 components shown
The first 2 capture 93% of the total variation
Note: data differ slightly from the text
54
Normalizing Data
In these results, sodium dominates the first PC
Just because of the way it is measured (mg), its scale is greater than that of almost all the other variables
Hence its variance is a dominant component of the total variance
Normalize each variable to remove the scale effect: divide by the standard deviation (the mean may be subtracted first)
Normalization (= standardization) is usually performed in PCA; otherwise, measurement units affect the results
Note: in XLMiner, use the correlation matrix option to work with normalized variables
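A sketch of PCA on standardized variables with scikit-learn (file name assumed); standardizing first is the code-level counterpart of XLMiner's correlation matrix option:

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

cereals = pd.read_csv("Cereals.csv")            # assumed file name
X = cereals.select_dtypes("number").dropna()    # numeric columns only

Xs = StandardScaler().fit_transform(X)    # subtract the mean, divide by the std. deviation
pca = PCA().fit(Xs)                       # PCA on standardized data
print(pca.explained_variance_ratio_.cumsum())   # first PC now carries a smaller share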
55
PCA using standardized variables
The first component now accounts for a smaller share of the variance
More components are needed to capture the same amount of information
56
PCA in Classification/Prediction
- Apply PCA to the training data
- Decide how many PCs to use
- Apply the variable weights from those PCs to the validation/new data
- This creates a new, reduced set of predictors for the validation/new data
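A hedged scikit-learn sketch of this workflow (file and column names are assumptions; 4 components is an arbitrary illustrative choice):

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

housing = pd.read_csv("BostonHousing.csv")     # assumed file/column names
# Drop the outcome; any variable derived from it (e.g., CAT.MEDV) should be dropped too
X = housing.drop(columns="MEDV").select_dtypes("number")
X_train, X_valid = train_test_split(X, random_state=1)

scaler = StandardScaler().fit(X_train)                      # fit on training data only
pca = PCA(n_components=4).fit(scaler.transform(X_train))    # weights come from training data
Z_valid = pca.transform(scaler.transform(X_valid))          # reuse those weights on new data
print(Z_valid.shape)                                        # the reduced predictor set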
57
Regression-Based Dimension Reduction
- Multiple linear regression or logistic regression
- Use subset selection: the algorithm chooses a subset of the variables
- This procedure is integrated directly into the predictive task
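As one concrete instance of subset selection, a scikit-learn sketch using forward selection (the method and tooling are assumptions, not the book's XLMiner procedure; file and column names are also assumed):

import pandas as pd
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

housing = pd.read_csv("BostonHousing.csv")    # assumed file/column names
X = housing.drop(columns="MEDV").select_dtypes("number")
y = housing["MEDV"]

# Forward selection: the algorithm greedily picks a subset of predictors
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=5, direction="forward")
selector.fit(X, y)
print(X.columns[selector.get_support()])      # the chosen variables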
58
Summary
- Data summarization is an important tool for data exploration
- Data summaries include numerical metrics (average, median, etc.) and graphical summaries
- Data reduction is useful for compressing the information in the data into a smaller set of variables
- Categorical variables can be reduced by combining similar categories
- Principal components analysis transforms an original set of numerical variables into a smaller set of weighted averages of the original variables that contain most of the original information in fewer variables