
1 1 Chapter 2 Getting to Know Your Data

2 2 Chapter 2. Getting to Know Your Data  Data Objects and Attribute Types  Basic Statistical Descriptions of Data  Data Visualization  Measuring Data Similarity and Dissimilarity  Summary

3 3 Types of Data Sets: (1) Record Data  Relational records  Relational tables, highly structured  Data matrix, e.g., numerical matrix, crosstabs  Transaction data  Document data: Term-frequency vector (matrix) of text documents

4 4 Types of Data Sets: (2) Graphs and Networks  Transportation network  World Wide Web  Molecular Structures  Social or information networks

5 5 Types of Data Sets: (3) Ordered Data  Video data: sequence of images  Temporal data: time-series  Sequential Data: transaction sequences  Genetic sequence data

6 6 Types of Data Sets: (4) Spatial, Image and Multimedia Data  Spatial data: maps  Image data  Video data

7 7 Important Characteristics of Structured Data  Dimensionality  Curse of dimensionality  Sparsity  Only presence counts  Resolution  Patterns depend on the scale  Distribution  Centrality and dispersion

8 8 Dimensionality  Curse of dimensionality: as the number of dimensions grows, the volume of the data space increases exponentially  High dimensionality causes high sparsity in the data set and unnecessarily increases storage space and processing time  E.g., in an image-recognition problem, a high-resolution 1280 × 720 image has 921,600 pixels, i.e., 921,600 dimensions  The value added by an additional dimension is much smaller than the overhead it adds to the algorithm  Bottom line: data that can be represented using 10 space units in its one true dimension may need 1,000 space units after 2 more dimensions are added, simply because those dimensions were also observed during the experiment
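The effect above can be seen directly in code. The short Python sketch below (not part of the original slides; the point counts and dimensions are arbitrary choices) draws random points in the unit hypercube and shows how, as the dimensionality grows, the nearest and farthest neighbors of a query point end up at almost the same distance:

```python
# Minimal sketch of the curse of dimensionality: as dimensions grow, the gap
# between the nearest and the farthest neighbor shrinks relative to the distances.
import random
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

random.seed(0)
for dim in (2, 10, 100, 1000):
    points = [[random.random() for _ in range(dim)] for _ in range(200)]
    query = [random.random() for _ in range(dim)]
    dists = sorted(distance(query, p) for p in points)
    nearest, farthest = dists[0], dists[-1]
    print(f"dim={dim:5d}  nearest={nearest:.2f}  farthest={farthest:.2f}  "
          f"relative gap={(farthest - nearest) / farthest:.2f}")
```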

9 9 Sparsity  Only presence counts: sparse data may work very well in data mining because the 1's are what matter  Sparse data: a relatively high percentage of the variable's cells do not contain actual data; such "empty" (NA) values still take up storage space in the file  Two types of sparsity:  Controlled sparsity occurs when a range of values of one or more dimensions has no data; for example, a new variable dimensioned by MONTH for which you do not have data for past months: the cells exist because past months are in the MONTH dimension, but the data is NA  Random sparsity occurs when NA values are scattered throughout the data variable, usually because some combinations of dimension values never have any data; for example, a district might only sell certain products and never have data for other products, while other districts might sell some of those products and others, too
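As a concrete illustration of why storing only the "presence" cells pays off, here is a minimal Python sketch (not from the slides; the example row is made up) that keeps a sparse row as a position-to-value dictionary instead of a dense list:

```python
# Minimal sketch of a sparse representation: store only the cells that actually
# contain data (the "presence" entries), instead of a dense row full of empties.
dense_row = [0, 0, 3, 0, 0, 0, 1, 0, 0, 0, 0, 5]   # mostly empty

sparse_row = {i: v for i, v in enumerate(dense_row) if v != 0}
print(sparse_row)                                   # {2: 3, 6: 1, 11: 5}
print("stored cells:", len(sparse_row), "of", len(dense_row))
```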

10 10 Resolution  Patterns depend on the scale - Properties are different at different resolutions, - For example, surface of the Earth at different resolutions

11 11 Data Objects  Data sets are made up of data objects  A data object represents an entity  Examples:  sales database: customers, store items, sales  medical database: patients, treatments  university database: students, professors, courses  Also called samples, examples, instances, data points, objects, tuples  Data objects are described by attributes  Database rows → data objects; columns → attributes

12 12 Attributes  Attribute (or dimension, feature, variable)  A data field, representing a characteristic or feature of a data object  E.g., customer_ID, name, address  Types:  Nominal (e.g., red, blue)  Binary (e.g., {true, false})  Ordinal (e.g., {freshman, sophomore, junior, senior})  Numeric: quantitative  Interval-scaled: e.g., 100°C is on an interval scale  Ratio-scaled: e.g., 100 K is on a ratio scale since it is twice as high as 50 K  Q1: Is student ID a nominal, ordinal, or interval-scaled attribute?  Q2: What about eye color? Or color in the color spectrum of physics?

13 13 Attribute Types  Nominal: categories, states, or “names of things”  Hair_color = {auburn, black, blond, brown, grey, red, white}  marital status, occupation, ID numbers, zip codes  Binary  Nominal attribute with only TWO states (0 and 1)  Symmetric binary: both outcomes equally important  e.g., gender(F,M)  Asymmetric binary: outcomes not equally important.  e.g., medical test (positive vs. negative)  Convention: assign 1 to most important outcome (e.g., HIV positive)  Ordinal  Values have a meaningful order (ranking) but magnitude between successive values is not known  Size = {small, medium, large}, grades {A+,A,A-,B+..}, army rankings

14 14 Numeric Attribute Types  Quantity (integer or real-valued)  Interval  Measurement where the difference between two values is meaningful  Measured on a scale of equal-sized units  Values have order and can be positive, 0, or negative  E.g., temperature in C˚ or F˚, calendar dates  No true zero-point  Ratio  Inherent zero-point  Has all the properties of an interval variable, and also has a clear definition of 0.0  E.g., length, counts, number of words, monetary quantities, weight, height

15 15 Numeric Attribute Types  Quantity (integer or real-valued)

16 16 Discrete vs. Continuous Attributes  Discrete Attribute  Has only a finite or countably infinite set of values  E.g., zip codes, profession, or the set of words in a collection of documents  Sometimes, represented as integer variables  Note: Binary attributes are a special case of discrete attributes  Continuous Attribute  Has real numbers as attribute values  E.g., temperature, height, or weight  Practically, real values can only be measured and represented using a finite number of digits  Continuous attributes are typically represented as floating-point variables

17 17 Chapter 2. Getting to Know Your Data  Data Objects and Attribute Types  Basic Statistical Descriptions of Data  Data Visualization  Measuring Data Similarity and Dissimilarity  Summary

18 18 Basic Statistical Descriptions of Data  Motivation : To identify the properties of data and highlight which data values should be treated as noise or outliers.  Data dispersion characteristics  Median, max, min, quantiles, outliers, variance,...  Numerical dimensions correspond to sorted intervals  Data dispersion:  Analyzed with multiple granularities of precision  Boxplot or quantile analysis on sorted intervals  Dispersion analysis on computed measures  Folding measures into numerical dimensions  Boxplot or quantile analysis on the transformed cube

19 19 Measure of Central Tendency  Measures the location of the middle or center of a data distribution  Given an attribute, where do most of its values fall?  Includes the mean, median, mode, and midrange

20 20 Measuring the Central Tendency: (1) Mean  Mean: numeric measure of the center of a set of data  The single most useful quantity for describing a data set
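For reference, the arithmetic mean of n values x_1, x_2, …, x_n is mean = (x_1 + x_2 + … + x_n) / n; the weighted and trimmed means on the next slide are small variations of this basic formula.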

21 21 Weighted arithmetic Mean  The weights reflect the significance, importance, or occurrence frequency attached to their respective values.  Weighted arithmetic mean:  Trimmed mean:  Chopping extreme values (e.g., Olympics gymnastics score computation)
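The weighted arithmetic mean replaces the plain sum with a weighted one: x̄ = (Σ w_i x_i) / (Σ w_i). The Python sketch below (not from the slides; the values and weights are made-up illustrations) computes a weighted mean and a simple trimmed mean that chops one value from each end:

```python
# Sketch of the two variants named on this slide, using only the standard library
# (the data values and weights below are made up for illustration).
values  = [70, 72, 74, 76, 80, 114]
weights = [1, 1, 2, 2, 1, 1]

weighted_mean = sum(w * x for w, x in zip(weights, values)) / sum(weights)

def trimmed_mean(xs, k=1):
    """Drop the k smallest and k largest values, then average the rest."""
    xs = sorted(xs)[k:len(xs) - k]
    return sum(xs) / len(xs)

print(weighted_mean, trimmed_mean(values))
```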

22 22 Measuring the Central Tendency: (2) Median  Median: the middle value in a set of ordered data values  The middle value if there is an odd number of values, or the average of the middle two values otherwise  Estimated by interpolation for grouped data: approximate median ≈ L1 + ((n/2 − freq_below) / freq_median) × width, where L1 is the low boundary of the median interval, width = (L2 − L1) is the interval width, freq_below is the sum of the frequencies of the intervals before the median interval, and freq_median is the frequency of the median interval

23 23 Measuring the Central Tendency: (3) Mode  Mode: value that occurs most frequently in the data  Unimodal  Empirical formula for moderately skewed unimodal data: mean − mode ≈ 3 × (mean − median)  Multi-modal (multiple modes)  Bimodal  Trimodal

24 24 Measuring the Central Tendency: (3) Mode

25 25 Example Problem: In a moderately skewed distribution, if the median = 20 and the mean = 22.5, compute the approximate value of the mode. Solution: mean − mode = 3 × (mean − median) 22.5 − mode = 3 × (22.5 − 20) 22.5 − mode = 7.5 mode = 22.5 − 7.5 mode = 15

26 26 Measuring the Central Tendency  "The statistical measure that identifies a single value as representative of an entire distribution."  Mean: the average. (1) Add all the numbers, (2) divide the sum by the count of numbers  155+155+158+164+168 = 800  Mean = 800/5 = 160  Median: order the numbers, then find the middle value; if 2 numbers are in the middle, add them and divide by 2  Mode: the number that occurs most frequently (repeats the most)
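These three measures can be checked with Python's standard library; the snippet below (not part of the slides) reuses the height data from the example above:

```python
# Checking the slide's worked example with Python's standard statistics module.
import statistics

heights = [155, 155, 158, 164, 168]
print(statistics.mean(heights))    # 160
print(statistics.median(heights))  # 158
print(statistics.mode(heights))    # 155
```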

27 27 Measuring the Central Tendency  Range: subtract the smallest value from the largest (Max − Min)  Range = 168 − 155 = 13  Outlier: a value far from most others in a set of data (see figure)

28 28 Symmetric vs. Skewed Data  Median, mean, and mode of symmetric, positively skewed, and negatively skewed data (figure panels: positively skewed, negatively skewed, symmetric)

29 29 Exercises 1. Suppose that the data for analysis includes the attribute age. The age values for the data tuples are (in increasing order) 13, 15, 16, 16, 19, 20, 20, 21, 22, 22, 25, 25, 25, 25, 30, 33, 33, 35, 35, 35, 35, 36, 40, 45, 46, 52, 70. (a) What is the mean of the data? What is the median? (b) What is the mode of the data? Comment on the data’s modality (i.e., bimodal, trimodal, etc.). (c) What is the midrange of the data? 2. For a moderately skewed distribution mode = 50.04, mean = 45, Find median. 29

30 30 Practice of Mean  Mean and median both try to measure the "central tendency" in a data set.  The goal of each is to get an idea of a "typical" value in the data set.  The mean is commonly used, but sometimes the median is preferred.  A golf team's members had the scores below in their most recent tournament: 70, 72, 74, 76, 80, 114 Calculate the mean score. What is a correct interpretation of the mean score? A. It represents the "average" score. If each golfer had scored the same, they each would have scored the mean. B. It is the middle point in the set of scores. That is, half the team scored higher than the median, and half the team scored lower than the median. 30

31 31 Practice of Median Score - 70, 72, 74, 76, 80, 114  Calculate the Median score.  What is a correct interpretation of the Median score? A. It represents the "average" score. If each golfer had scored the same, they each would have scored the mean. B. It is the middle point in the set of scores. That is, half the team scored higher than the median, and half the team scored lower than the median. 31

32 32 Choosing the "best" measure of center  Which measure best describes the scores of the team? Here are the scores: 70, 72, 74, 76, 80, 114 The ----------- best describes the team's scores, because the ----------- is higher than almost all of the scores in the data set. A. Median B. Mean

33 33 Properties of Normal Distribution Curve  (Figure: the width/spread of the curve represents data dispersion; the center of the curve represents central tendency)

34 34 Measuring Dispersion of Data  Measures the spread of numeric data  Includes range, quantiles, quartiles, percentiles, the interquartile range, and the five-number summary  Range: the difference between the largest and smallest values  Quantiles: points taken at regular intervals of a data distribution, dividing it into essentially equal-size consecutive sets  Quartiles: the 4-quantiles, i.e., the three data points that split the data distribution into four equal parts; each part represents one-fourth of the data distribution  Percentiles: the 100-quantiles; they divide the data distribution into 100 equal-sized consecutive sets  The median, quartiles, and percentiles are the most widely used forms of quantiles

35 35  The distance between the first and third quartiles is a simple measure of spread that gives the range covered by the middle half of the data.  This distance is called the interquartile range (IQR)  IQR = Q3 – Q1 35 Measuring Dispersion of Data : Interquartile Range (IQR)

36 36 How to calculate Range & IQR  Suppose we have the following values for salary (in thousands of dollars), shown in increasing order: 30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110 Largest value – 110, Smallest Value – 30 Range = 110 – 30 = 80 The quartiles for this data are the third, sixth, and ninth values, respectively, in the sorted list. Therefore, Q 1 = $47,000 and Q3 = $63,000. Thus, the interquartile range is IQR = 63 - 47 = $16,000. 36
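The same computation in Python (not part of the slides), following the slide's convention of taking the 3rd and 9th of the 12 sorted values as Q1 and Q3; note that library functions such as numpy.percentile use interpolation conventions that can give slightly different quartiles:

```python
# Recomputing the slide's range and IQR example.
salaries = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]   # already sorted

data_range = max(salaries) - min(salaries)      # 110 - 30 = 80
q1 = salaries[2]                                # 3rd value  -> 47
q3 = salaries[8]                                # 9th value  -> 63
iqr = q3 - q1                                   # 16
print(data_range, q1, q3, iqr)
```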

37 37 Comparing range and interquartile range (IQR)  Range and interquartile range (IQR) both measure the "spread" in a data set.  Looking at spread lets us see how much data varies.  Range is a quick way to get an idea of spread.  It takes longer to find the IQR, but it sometimes gives us more useful information about spread. 37

38 38 Practice of Range  Jonny recorded the daily high temperatures for two different cities in a recent week in degree Celsius. The temperatures for each city are shown below. Mandalay : 23, 25, 28, 28, 32, 33, 35 Yangon : 16, 24, 26, 26, 26, 27, 28 Calculate the range of the temperatures in two cities? What is the best interpretation of the range ? A. The range represents the typical temperature that week. B. The range represents how far apart the lowest and the highest measurements were that week. C. The range represents the amount of spread in the middle half of the data that week. 38

39 39 Practice of IQR  Calculate the IQR of the temperatures in Mandalay: 23, 25, 28, 28, 32, 33, 35  Calculate the IQR of the temperatures in Yangon: 16, 24, 26, 26, 26, 27, 28 What is the best interpretation of the IQR? A. The IQR represents the typical temperature that week. B. The IQR represents how far apart the lowest and the highest measurements were that week. C. The IQR approximates the amount of spread in the middle half of the data that week. According to the IQRs, which city's temperatures were more varied in that week? A. According to the IQRs, the temperatures varied more in Mandalay. B. According to the IQRs, the temperatures varied more in Yangon. C. According to the IQRs, the temperatures in each city had the same amount of variability.

40 40 Measures Data Distribution: Variance and Standard Deviation
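For reference, the standard formulas behind this slide: the (population) variance of N values x_1, …, x_N with mean μ is σ² = (1/N) Σ (x_i − μ)², and the standard deviation σ is the square root of the variance.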

41 41 Standard Deviation  Standard deviation measures the spread of a data distribution  The more spread out a data distribution is, the greater its standard deviation  It measures how far the data points are from the mean  For example, the blue distribution on the bottom has a greater standard deviation (SD) than the green distribution on the top

42 42 Practice of Standard Deviation Distribution A 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10 Distribution B 4, 4, 4, 5, 5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9 42
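One way to work this practice problem is with the standard library's population standard deviation; the snippet below (not part of the slides) uses the two distributions listed above:

```python
# Compute the population standard deviation of each distribution.
import statistics

dist_a = [0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 8, 9, 9, 9, 10, 10]
dist_b = [4, 4, 4, 5, 5, 5, 5, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9]

print("SD of A:", statistics.pstdev(dist_a))
print("SD of B:", statistics.pstdev(dist_b))
# Distribution A is more spread out, so its standard deviation is larger.
```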

43 43 Measuring the Dispersion of Data: Quartiles & Boxplots  Quartiles: Q1 (25th percentile), Q3 (75th percentile)  Inter-quartile range: IQR = Q3 − Q1  Five-number summary: min, Q1, median, Q3, max  Boxplot: data is represented with a box  Q1, Q3, IQR: the ends of the box are at the first and third quartiles, i.e., the height of the box is the IQR  Median (Q2) is marked by a line within the box  Whiskers: two lines outside the box, extended to the minimum and maximum  Outliers: points beyond a specified outlier threshold, plotted individually  Outlier: usually, a value more than 1.5 × IQR above Q3 or below Q1
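A minimal Python sketch of the five-number summary and the 1.5 × IQR outlier rule described above (not from the slides; it uses the median-of-halves quartile convention, so its Q1/Q3 can differ slightly from other conventions):

```python
# Five-number summary plus the 1.5 x IQR outlier rule.
def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def five_number_summary(xs):
    xs = sorted(xs)
    n = len(xs)
    lower, upper = xs[: n // 2], xs[(n + 1) // 2 :]
    return min(xs), median(lower), median(xs), median(upper), max(xs)

data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]
mn, q1, med, q3, mx = five_number_summary(data)
iqr = q3 - q1
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print((mn, q1, med, q3, mx), "IQR =", iqr, "outliers:", outliers)
```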

44 44 Graphic Displays of Basic Statistical Descriptions  Boxplot: graphic display of the five-number summary  Histogram: x-axis shows values, y-axis shows frequencies  Quantile plot: each value x_i is paired with f_i, indicating that approximately 100·f_i% of the data are ≤ x_i  Quantile-quantile (q-q) plot: graphs the quantiles of one univariate distribution against the corresponding quantiles of another  Scatter plot: each pair of values is a pair of coordinates and plotted as a point in the plane

45 45 Quantile Plot  Displays all of the data (allowing the user to assess both the overall behavior and unusual occurrences)  Plots quantile information  For data x_i sorted in increasing order, f_i indicates that approximately 100·f_i% of the data are below or equal to the value x_i

46 46 Quantile-Quantile (Q-Q) Plot  Graphs the quantiles of one univariate distribution against the corresponding quantiles of another  Example shows unit price of items sold at Branch 1 vs. Branch 2 for each quantile. Unit prices of items sold at Branch 1 tend to be lower than those at Branch 2

47 47 Histogram Analysis  Histogram: graph display of tabulated frequencies, shown as bars  Differences between histograms and bar charts  Histograms are used to show distributions of variables, while bar charts are used to compare variables  Histograms plot binned quantitative data, while bar charts plot categorical data  Bars can be reordered in bar charts but not in histograms  A histogram differs from a bar chart in that it is the area of the bar, not the height, that denotes the value; this is a crucial distinction when the categories are not of uniform width (Figure: histogram vs. bar chart)

48 48 Histograms Often Tell More than Boxplots  The two histograms shown on the left may have the same boxplot representation  The same values for min, Q1, median, Q3, max  But they have rather different data distributions

49 49 Scatter Plots and Data Correlation  one of the most effective graphical methods for determining if there appears to be a relationship, pattern, or trend between two numeric attributes.  The scatter plot is a useful method for providing a first look at bivariate data to see clusters of points and outliers, or to explore the possibility of correlation relationships.  Two attributes, X, and Y, are correlated if one attribute implies the other.  Correlations can be positive, negative, or null (uncorrelated).

50 50 Scatter plot  Provides a first look at bivariate data to see clusters of points, outliers, etc.  Each pair of values is treated as a pair of coordinates and plotted as points in the plane
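A minimal scatter-plot sketch in Python (not from the slides; it assumes matplotlib is installed, and the x/y values are made up for illustration):

```python
# Plot each (x, y) pair as a point in the plane.
import matplotlib.pyplot as plt

x = [2, 3, 4, 5, 6, 7, 8, 9]              # e.g., one numeric attribute
y = [24, 30, 33, 40, 41, 48, 52, 58]      # e.g., a second numeric attribute

plt.scatter(x, y)
plt.xlabel("X attribute")
plt.ylabel("Y attribute")
plt.title("Scatter plot of two numeric attributes")
plt.show()
# An upward trend of the points suggests a positive correlation between X and Y.
```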

51 51 Positively and Negatively Correlated Data  The left half of the figure is positively correlated  The right half is negatively correlated

52 52 Uncorrelated Data

53 53 Chapter 2. Getting to Know Your Data  Data Objects and Attribute Types  Basic Statistical Descriptions of Data  Data Visualization  Measuring Data Similarity and Dissimilarity  Summary

54 54 Data Visualization  Visualization is a process of transforming information into a visual form enabling the viewer to observe, browse, make sense, and understand the information.  Why data visualization?  Gain insight into an information space by mapping data onto graphical primitives  Provide qualitative overview of large data sets  Search for patterns, trends, structure, irregularities, relationships among data  Help find interesting regions and suitable parameters for further quantitative analysis  Provide a visual proof of computer representations derived

55 55 Categorization of visualization methods  Pixel-oriented visualization techniques  Geometric projection visualization techniques  Icon-based visualization techniques  Hierarchical visualization techniques  Visualizing complex data and relations

56 56 Pixel-Oriented Visualization Techniques  For a data set of m dimensions, create m windows on the screen, one for each dimension  The m dimension values of a record are mapped to m pixels at the corresponding positions in the windows  The colors of the pixels reflect the corresponding values (a) Income (b) Credit limit (c) Transaction volume (d) Age The correlation between customer income and credit limit, transaction volume, and age

57 57 Laying Out Pixels in Circle Segments  To save space and show the connections among multiple dimensions, space filling is often done in a circle segment (a) Representing a data record in a circle segment (b) Laying out pixels in a circle segment Representing about 265,000 50-dimensional data items with the 'Circle Segments' technique

58 58 Geometric Projection Visualization Techniques  Visualization of geometric transformations and projections of the data  Methods  Scatterplot and scatterplot matrices  Landscapes  Projection pursuit technique: Help users find meaningful projections of multidimensional data  Prosection views  Hyperslice  Parallel coordinates

59 59 Direct Data Visualization  (Figure: ribbons with twists based on vorticity)

60 60 Scatterplot Matrices  Matrix of scatterplots (x-y diagrams) of the k-dimensional data  A total of k(k−1)/2 distinct scatterplots Used by permission of M. Ward, Worcester Polytechnic Institute

61 61 Landscapes  Visualization of the data as a perspective landscape  The data needs to be transformed into a (possibly artificial) 2D spatial representation which preserves the characteristics of the data (Figure: news articles visualized as a landscape; used by permission of B. Wright, Visible Decisions Inc.)

62 62 Parallel Coordinates  n equidistant axes which are parallel to one of the screen axes and correspond to the attributes  The axes are scaled to the [minimum, maximum]: range of the corresponding attribute  Every data item corresponds to a polygonal line which intersects each of the axes at the point which corresponds to the value for the attribute
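A small sketch of a parallel-coordinates plot (not from the slides; it assumes pandas and matplotlib are installed, and the tiny data frame is made up). Each row of the frame becomes one polygonal line across the attribute axes:

```python
# Parallel-coordinates plot: one polygonal line per data item.
import pandas as pd
from pandas.plotting import parallel_coordinates
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "attr_1": [5.1, 7.0, 6.3, 4.9],
    "attr_2": [3.5, 3.2, 3.3, 3.0],
    "attr_3": [1.4, 4.7, 6.0, 1.4],
    "class":  ["A", "B", "C", "A"],   # used only to color the lines
})

parallel_coordinates(df, class_column="class")
plt.show()
```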

63 63 Parallel Coordinates of a Data Set

64 64 Icon-Based Visualization Techniques  Visualization of the data values as features of icons  Typical visualization methods  Chernoff Faces  Stick Figures  General techniques  Shape coding: Use shape to represent certain information encoding  Color icons: Use color icons to encode more information  Tile bars: Use small icons to represent the relevant feature vectors in document retrieval

65 65 Chernoff Faces  A way to display variables on a two-dimensional surface, e.g., let x be eyebrow slant, y be eye size, z be nose length, etc.  The figure shows faces produced using 10 characteristics (head eccentricity, eye size, eye spacing, eye eccentricity, pupil size, eyebrow slant, nose size, mouth shape, mouth size, and mouth opening): each assigned one of 10 possible values, generated using Mathematica (S. Dickson)  References: Gonick, L. and Smith, W. The Cartoon Guide to Statistics. New York: Harper Perennial, p. 212, 1993; Weisstein, Eric W. "Chernoff Face." From MathWorld, A Wolfram Web Resource. mathworld.wolfram.com/ChernoffFace.html

66 66 Stick Figure  A census data figure showing age, income, gender, education, etc.  A 5-piece stick figure (1 body and 4 limbs with different angles/lengths) Used by permission of G. Grinstein, University of Massachusetts at Lowell

67 67 Hierarchical Visualization Techniques  Visualization of the data using a hierarchical partitioning into subspaces  Methods  Worlds-within-Worlds  Tree-Map

68 68 Worlds-within-Worlds  Assign the function and the two most important parameters to the innermost world  Fix all other parameters at constant values; draw other 1-, 2-, or 3-dimensional worlds, choosing these as the axes  Software that uses this paradigm  N-vision: dynamic interaction through data glove and stereo displays, including rotation, scaling (inner) and translation (inner/outer)  Auto Visual: static interaction by means of queries

69 69 Tree-Map  Screen-filling method which uses a hierarchical partitioning of the screen into regions depending on the attribute values  The x- and y-dimensions of the screen are partitioned alternately according to the attribute values (classes) (Figures: Shneiderman@UMD, Tree-Map of a file system; Tree-Map supporting large data sets of a million items)

70 70 InfoCube  A 3-D visualization technique where hierarchical information is displayed as nested semi-transparent cubes  The outermost cubes correspond to the top level data, while the subnodes or the lower level data are represented as smaller cubes inside the outermost cubes, etc.

71 71 Three-D Cone Trees  3D cone tree visualization technique works well for up to a thousand nodes or so  First build a 2D circle tree that arranges its nodes in concentric circles centered on the root node  Cannot avoid overlaps when projected to 2D  G. Robertson, J. Mackinlay, S. Card. “Cone Trees: Animated 3D Visualizations of Hierarchical Information”, ACM SIGCHI'91  Graph from Nadeau Software Consulting website: Visualize a social network data set that models the way an infection spreads from one person to the next

72 72 Visualizing Complex Data and Relations: Tag Cloud  Tag cloud: visualizing user-generated tags  The importance of a tag is represented by font size/color  Popularly used to visualize word/phrase distributions (Figures: Newsmap, Google News stories in 2005; KDD 2013 research paper title tag cloud)

73 73 Visualizing Complex Data and Relations: Social Networks  Visualizing non-numerical data: social and information networks (Figures: a typical network structure; a social network; organizing information networks)

74 74 Chapter 2. Getting to Know Your Data  Data Objects and Attribute Types  Basic Statistical Descriptions of Data  Data Visualization  Measuring Data Similarity and Dissimilarity  Summary

75 75 Similarity, Dissimilarity, and Proximity  Similarity measure or similarity function  A real-valued function that quantifies the similarity between two objects  Measure how two data objects are alike: The higher value, the more alike  Often falls in the range [0,1]: 0: no similarity; 1: completely similar  Dissimilarity (or distance) measure  Numerical measure of how different two data objects are  In some sense, the inverse of similarity: The lower, the more alike  Minimum dissimilarity is often 0 (i.e., completely similar)  Range [0, 1] or [0, ∞), depending on the definition  Proximity usually refers to either similarity or dissimilarity

76 76 Data Matrix and Dissimilarity Matrix  Data matrix  A data matrix of n data points with l dimensions  Dissimilarity (distance) matrix  n data points, but registers only the distance d(i, j) (typically metric)  d(i, j) is a non-negative number that is close to 0 when objects i and j are highly similar or “near” each other, and becomes larger the more they differ  Distance functions are usually different for real, boolean, categorical, ordinal, ratio, and vector variables  Weights can be associated with different variables based on applications and data semantics

77 77 Standardizing Numeric Data  Z-score: z = (x − μ) / σ  x: raw score to be standardized, μ: mean of the population, σ: standard deviation  The distance between the raw score and the population mean, in units of the standard deviation  Negative when the raw score is below the mean, positive when above  An alternative way: calculate the mean absolute deviation s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + … + |x_nf − m_f|), where m_f is the mean of attribute f  Standardized measure (z-score): z_if = (x_if − m_f) / s_f  Using the mean absolute deviation is more robust to outliers than using the standard deviation
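A small Python sketch of both standardizations described above (not from the slides; it reuses the salary values from the earlier IQR example as sample data):

```python
# Z-score standardization with the standard deviation and, alternatively,
# with the mean absolute deviation (more robust to outliers).
data = [30, 36, 47, 50, 52, 52, 56, 60, 63, 70, 70, 110]

n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5     # population sigma
mad = sum(abs(x - mean) for x in data) / n                # mean absolute deviation

z_scores_std = [(x - mean) / std for x in data]           # classic z-score
z_scores_mad = [(x - mean) / mad for x in data]           # robust variant
print(z_scores_std[:3], z_scores_mad[:3])
```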

78 78 Proximity Measure for Categorical Attributes  Categorical data, also called nominal attributes  Example: color (red, yellow, blue, green), profession, etc.  Simple matching: d(i, j) = (p − m) / p  where m is the number of matches (i.e., the number of attributes for which i and j are in the same state) and p is the total number of attributes describing the objects

79 79 Dissimilarity between nominal attributes

80 80 Proximity Measure for Binary Attributes  A binary attribute has only one of two states: 0 and 1  For objects i and j, let q be the number of attributes that equal 1 for both, r the number that equal 1 for i but 0 for j, s the number that equal 0 for i but 1 for j, and t the number that equal 0 for both  Distance measure for symmetric binary variables: d(i, j) = (r + s) / (q + r + s + t)  Distance measure for asymmetric binary variables: d(i, j) = (r + s) / (q + r + s)  Jaccard coefficient (similarity measure for asymmetric binary variables): sim(i, j) = q / (q + r + s) = 1 − d(i, j)

81 81 Example: Dissimilarity between Asymmetric Binary Variables  Gender is a symmetric attribute (not counted in)  The remaining attributes are asymmetric binary  Let the values Y and P be 1, and the value N be 0  Jack vs. Mary: q = 2, r = 0, s = 1, t = 3, so d(Jack, Mary) = (0 + 1) / (2 + 0 + 1) = 0.33  Jack vs. Jim: q = 1, r = 1, s = 1, t = 3, so d(Jack, Jim) = (1 + 1) / (1 + 1 + 1) = 0.67  Jim vs. Mary: q = 1, r = 1, s = 2, t = 2, so d(Jim, Mary) = (1 + 2) / (1 + 1 + 2) = 0.75
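The same dissimilarities can be recomputed in Python. The original patient table is not in this transcript, so the records below are a reconstruction consistent with the contingency counts above (Y/P mapped to 1, N to 0, gender ignored):

```python
# Asymmetric binary dissimilarity, checked against the slide's example.
records = {                      # fever, cough, test-1, test-2, test-3, test-4
    "Jack": [1, 0, 1, 0, 0, 0],
    "Mary": [1, 0, 1, 0, 1, 0],
    "Jim":  [1, 1, 0, 0, 0, 0],
}

def asym_binary_dissim(a, b):
    q = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    r = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    s = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)
    return (r + s) / (q + r + s)            # 0/0 matches are ignored

print(asym_binary_dissim(records["Jack"], records["Mary"]))  # 0.33...
print(asym_binary_dissim(records["Jack"], records["Jim"]))   # 0.67...
print(asym_binary_dissim(records["Jim"],  records["Mary"]))  # 0.75
```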

82 82 Example: Data Matrix and Dissimilarity Matrix Dissimilarity Matrix (by Euclidean Distance) Data Matrix

83 83 Distance on Numeric Data: Euclidean Distance & Manhattan Distance Let i = (x_i1, x_i2, …, x_ip) and j = (x_j1, x_j2, …, x_jp) be two p-dimensional data objects  Euclidean distance: d(i, j) = sqrt((x_i1 − x_j1)² + (x_i2 − x_j2)² + … + (x_ip − x_jp)²)  Manhattan distance: d(i, j) = |x_i1 − x_j1| + |x_i2 − x_j2| + … + |x_ip − x_jp|

84 84 Distance on Numeric Data: Euclidean Distance & Manhattan Distance Let i = (1, 2) and j = (3, 5) Manhattan distance: d(i, j) = |1 − 3| + |2 − 5| = 2 + 3 = 5 Euclidean distance: d(i, j) = sqrt((1 − 3)² + (2 − 5)²) = sqrt(4 + 9) = sqrt(13) ≈ 3.61
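The two distance functions as short Python helpers (not from the slides), checked against the example above:

```python
# Euclidean and Manhattan distances between two numeric data objects.
import math

def euclidean(i, j):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(i, j)))

def manhattan(i, j):
    return sum(abs(a - b) for a, b in zip(i, j))

i, j = (1, 2), (3, 5)
print(manhattan(i, j))   # |1-3| + |2-5| = 5
print(euclidean(i, j))   # sqrt(4 + 9) = sqrt(13) ≈ 3.606
```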

85 85 Distance on Numeric Data: Minkowski Distance  Minkowski distance: a generalization of the Euclidean and Manhattan distances d(i, j) = (|x_i1 − x_j1|^p + |x_i2 − x_j2|^p + … + |x_il − x_jl|^p)^(1/p) where i = (x_i1, x_i2, …, x_il) and j = (x_j1, x_j2, …, x_jl) are two l-dimensional data objects, and p is the order (the distance so defined is also called the L-p norm)

86 86 Special Cases of Minkowski Distance  p = 1: (L1 norm) Manhattan (or city block) distance  E.g., the Hamming distance: the number of bits that are different between two binary vectors  p = 2: (L2 norm) Euclidean distance  p → ∞: (L_max norm, L_∞ norm) "supremum" distance  The maximum difference between any component (attribute) of the vectors

87 87 Example: Minkowski Distance at Special Cases  Manhattan (L1)  Euclidean (L2)  Supremum (L∞) (Table of example distances shown on slide)
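A short Python sketch (not from the slides) of the general Minkowski distance and its limiting supremum case, evaluated on the same pair of points used earlier:

```python
# Minkowski distance of order p, plus the supremum (L_max) distance.
def minkowski(i, j, p):
    return sum(abs(a - b) ** p for a, b in zip(i, j)) ** (1 / p)

def supremum(i, j):
    return max(abs(a - b) for a, b in zip(i, j))

i, j = (1, 2), (3, 5)
print(minkowski(i, j, 1))   # Manhattan: 5
print(minkowski(i, j, 2))   # Euclidean: ~3.606
print(supremum(i, j))       # L_max: 3
```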

88 88 Properties of a Distance Function  Non-negativity: d(i, j) ≥ 0  Identity of indiscernibles: d(i, i) = 0  Symmetry: d(i, j) = d(j, i)  Triangle inequality: d(i, j) ≤ d(i, k) + d(k, j) Weighted Euclidean distance: d(i, j) = sqrt(w_1(x_i1 − x_j1)² + w_2(x_i2 − x_j2)² + … + w_p(x_ip − x_jp)²)

89 89 Ordinal Variables  An ordinal variable can be discrete or continuous  Order is important, e.g., rank (e.g., freshman, sophomore, junior, senior)  Can be treated like interval-scaled  Replace an ordinal variable value by its rank r_if ∈ {1, …, M_f}  Map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by z_if = (r_if − 1) / (M_f − 1)  Example: freshman: 0; sophomore: 1/3; junior: 2/3; senior: 1  Then distance: d(freshman, senior) = 1, d(junior, senior) = 1/3  Compute the dissimilarity using methods for interval-scaled variables

90 90 Attributes of Mixed Type  A dataset may contain all attribute types  Nominal, symmetric binary, asymmetric binary, numeric, and ordinal  Combine the per-attribute dissimilarities d_ij^(f) into d(i, j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f), where the indicator δ_ij^(f) = 0 if (1) x_if or x_jf is missing (i.e., there is no measurement of attribute f for object i or object j), or (2) x_if = x_jf = 0 and attribute f is asymmetric binary; otherwise δ_ij^(f) = 1

91 91 Attributes of Mixed Type (cont.)  If f is numeric: use the normalized distance d_ij^(f) = |x_if − x_jf| / (max_f − min_f)  If f is binary or nominal: d_ij^(f) = 0 if x_if = x_jf; d_ij^(f) = 1 otherwise  If f is ordinal  Compute ranks r_if and z_if = (r_if − 1) / (M_f − 1)  Treat z_if as interval-scaled

92 92 Cosine Similarity of Two Vectors  A document can be represented by a bag of terms or a long vector, with each attribute recording the frequency of a particular term (such as a word, keyword, or phrase) in the document  Applications: information retrieval, biologic taxonomy, gene feature mapping, etc.  Cosine measure: if d1 and d2 are two vectors (e.g., term-frequency vectors), then cos(d1, d2) = (d1 • d2) / (||d1|| × ||d2||), where • indicates the vector dot product and ||d|| is the length (Euclidean norm) of vector d

93 93 Example: Calculating Cosine Similarity  cos(d1, d2) = (d1 • d2) / (||d1|| × ||d2||), where • indicates the vector dot product and ||d|| is the length of vector d  Ex: Find the similarity between documents 1 and 2. d1 = (5, 0, 3, 0, 2, 0, 0, 2, 0, 0) d2 = (3, 0, 2, 0, 1, 1, 0, 1, 0, 1)  First, calculate the vector dot product: d1 • d2 = 5×3 + 0×0 + 3×2 + 0×0 + 2×1 + 0×1 + 0×0 + 2×1 + 0×0 + 0×1 = 25  Then, calculate ||d1|| = sqrt(5² + 3² + 2² + 2²) = sqrt(42) ≈ 6.481 and ||d2|| = sqrt(3² + 2² + 1² + 1² + 1² + 1²) = sqrt(17) ≈ 4.12  Calculate cosine similarity: cos(d1, d2) = 25 / (6.481 × 4.12) ≈ 0.94
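The same calculation as a short Python snippet (not from the slides), reproducing the 0.94 result:

```python
# Recomputing the slide's cosine-similarity example.
import math

d1 = [5, 0, 3, 0, 2, 0, 0, 2, 0, 0]
d2 = [3, 0, 2, 0, 1, 1, 0, 1, 0, 1]

dot = sum(a * b for a, b in zip(d1, d2))            # 25
len1 = math.sqrt(sum(a * a for a in d1))            # sqrt(42) ≈ 6.481
len2 = math.sqrt(sum(b * b for b in d2))            # sqrt(17) ≈ 4.123
print(dot / (len1 * len2))                          # ≈ 0.94
```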

94 94 Chapter 2. Getting to Know Your Data  Data Objects and Attribute Types  Basic Statistical Descriptions of Data  Data Visualization  Measuring Data Similarity and Dissimilarity  Summary

95 95 Summary  Data attribute types: nominal, binary, ordinal, interval-scaled, ratio-scaled  Many types of data sets, e.g., numerical, text, graph, Web, image.  Gain insight into the data by:  Basic statistical data description: central tendency, dispersion, graphical displays  Data visualization: map data onto graphical primitives  Measure data similarity  Above steps are the beginning of data preprocessing  Many methods have been developed but still an active area of research

