Preprocessing of Lifelog, September 18, 2008, Sung-Bae Cho

1 Preprocessing of Lifelog, September 18, 2008, Sung-Bae Cho

2 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

3 Data Types & Forms
Attribute-value data: a table of attributes A1, A2, …, An plus a class attribute C
Data types
–Numeric, categorical (see the hierarchy for their relationship)
–Static, dynamic (temporal)
Other kinds of data
–Distributed data
–Text, web, metadata
–Images, audio/video

4 Why Data Preprocessing?
Data in the real world is dirty
–Incomplete: missing attribute values, lack of certain attributes of interest, or containing only aggregate data
 e.g., occupation=""
–Noisy: containing errors or outliers
 e.g., salary="-10"
–Inconsistent: containing discrepancies in codes or names
 e.g., age="42" but birthday="03/07/1997"
 e.g., was rating "1, 2, 3", now rating "A, B, C"
 e.g., discrepancy between duplicate records

5 Why Is Data Preprocessing Important?
No quality data, no quality results!
–Quality decisions must be based on quality data
 e.g., duplicate or missing data may cause incorrect or even misleading statistics
Data preparation, cleaning, and transformation comprise the majority of the work in a lifelog application (around 90%)

6 Multi-Dimensional Measure of Data Quality
A well-accepted multi-dimensional view:
–Accuracy
–Completeness
–Consistency
–Timeliness
–Believability
–Value added
–Interpretability
–Accessibility

7 Major Tasks in Data Preprocessing
Data cleaning
–Fill in missing values, smooth noisy data, identify or remove outliers and noisy data, and resolve inconsistencies
Data integration
–Integration of multiple sources of information, or sensors
Data transformation
–Normalization and aggregation
Data reduction
–Obtains a representation reduced in volume that produces the same or similar analytical results
Data discretization
–Part of data reduction, but of particular importance for numerical data

8 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

9 Data Cleaning
Importance
–"Data cleaning is the number one problem in data processing & management"
Data cleaning tasks
–Fill in missing values
–Identify outliers and smooth out noisy data
–Correct inconsistent data
–Resolve redundancy caused by data integration

10 Missing Data
Data is not always available
–e.g., many tuples have no recorded values for several attributes, such as a GPS log inside a building
Missing data may be due to
–Equipment malfunction
–Data that was inconsistent with other recorded data and thus deleted
–Data not entered due to misunderstanding
–Certain data not being considered important at the time of entry
–History or changes of the data not being registered

11 How to Handle Missing Data?
Ignore the tuple
Fill in missing values manually: tedious + infeasible?
Fill them in automatically with
–a global constant: e.g., "unknown" (effectively a new class?!)
–the attribute mean
–the most probable value: inference-based, e.g., a Bayesian formula, decision tree, or the EM algorithm
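As an illustration of the automatic strategies above, a minimal Python sketch (the function name and signature are my own) that fills missing entries with either the attribute mean or a global constant:

```python
def fill_missing(values, strategy="mean", constant="unknown"):
    """Return a copy of `values` with None entries filled in,
    using the attribute mean or a global constant."""
    present = [v for v in values if v is not None]
    if strategy == "mean":
        fill = sum(present) / len(present)
    else:  # global constant, e.g. "unknown" (effectively a new class)
        fill = constant
    return [fill if v is None else v for v in values]
```

The mean strategy only applies to numeric attributes; for categorical attributes the global constant (or the mode) is the usual fallback.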

12 Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
–Faulty data collection instruments
–Data entry problems
–Data transmission problems
–Technology limitations
–Inconsistency in naming conventions
Other data problems that require data cleaning
–Duplicate records
–Incomplete data
–Inconsistent data

13 How to Handle Noisy Data?
Binning method
–First sort the data and partition it into (equi-depth) bins
–Then smooth by bin means, bin medians, bin boundaries, etc.
Clustering
–Detect and remove outliers
Combined computer and human inspection
–Detect suspicious values and check them by human, e.g., to deal with possible outliers
Regression
–Smooth by fitting the data to regression functions

14 Binning
Attribute values (for one attribute, e.g., age):
–0, 4, 12, 16, 16, 18, 24, 26, 28
Equi-width binning, for a bin width of e.g. 10:
–Bin 1: 0, 4 (the [−∞, 10) bin)
–Bin 2: 12, 16, 16, 18 (the [10, 20) bin)
–Bin 3: 24, 26, 28 (the [20, +∞) bin)
–−∞ denotes negative infinity, +∞ positive infinity
Equi-frequency binning, for a bin depth of e.g. 3:
–Bin 1: 0, 4, 12 (the [−∞, 14) bin)
–Bin 2: 16, 16, 18 (the [14, 21) bin)
–Bin 3: 24, 26, 28 (the [21, +∞) bin)
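The example above can be reproduced with a short sketch (assuming the input is already sorted; the helper names are my own):

```python
def equi_width_bins(sorted_vals, width):
    """Partition values into bins of fixed width, e.g. [0,10), [10,20), ..."""
    bins = {}
    for v in sorted_vals:
        lo = (v // width) * width  # lower boundary of this value's bin
        bins.setdefault(lo, []).append(v)
    return [bins[k] for k in sorted(bins)]

def equi_frequency_bins(sorted_vals, depth):
    """Partition sorted values into bins holding `depth` values each."""
    return [sorted_vals[i:i + depth] for i in range(0, len(sorted_vals), depth)]
```

Smoothing by bin means would then replace every value in a bin with that bin's mean.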

15 Clustering
Partition the data set into clusters; one can then store the cluster representation only
Can be very effective if the data is clustered, but not if the data is "scattered"
There are many choices of clustering definitions and clustering algorithms

16 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

17 Data Integration
Data integration
–Combines data from multiple sources
Schema integration
–Integrate metadata from different sources
–Entity identification problem: identify real-world entities across multiple data sources, e.g., A.cust-id vs. B.cust-#
Detecting and resolving data value conflicts
–For the same real-world entity, attribute values from different sources differ
–Possible reasons: different representations, different scales, e.g., metric vs. British units
Removing duplicates and redundant data

18 Data Transformation
Smoothing
–Remove noise from the data
Normalization
–Scale values to fall within a small, specified range
Attribute/feature construction
–New attributes constructed from the given ones
Aggregation
–Summarization
Generalization
–Concept hierarchy climbing

19 Data Transformation: Normalization
Min-max normalization
–v' = (v − min) / (max − min) × (new_max − new_min) + new_min
Z-score normalization
–v' = (v − mean) / std_dev
Normalization by decimal scaling
–v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
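A minimal Python sketch of the three normalizations above (function names are my own):

```python
def min_max(vals, new_min=0.0, new_max=1.0):
    """Min-max normalization into [new_min, new_max]."""
    lo, hi = min(vals), max(vals)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in vals]

def z_score(vals):
    """Z-score normalization: zero mean, unit standard deviation."""
    n = len(vals)
    mean = sum(vals) / n
    std = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
    return [(v - mean) / std for v in vals]

def decimal_scaling(vals):
    """Divide by the smallest power of 10 that brings all |v'| below 1."""
    j = 0
    while max(abs(v) for v in vals) / 10 ** j >= 1:
        j += 1
    return [v / 10 ** j for v in vals]
```

Min-max is sensitive to outliers (they define min and max); z-score is the usual choice when the range is unknown in advance.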

20 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

21 Data Reduction Strategies
Data is often too big to work with
Data reduction
–Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Data reduction strategies
–Dimensionality reduction: remove unimportant attributes
–Aggregation and clustering
–Sampling
–Numerosity reduction
–Discretization and concept hierarchy generation

22 Dimensionality Reduction
Feature selection (i.e., attribute subset selection)
–Select a minimum set of attributes (features) that is sufficient for the lifelog management task
Heuristic methods (due to the exponential number of choices)
–Step-wise forward selection
–Step-wise backward elimination
–Combining forward selection and backward elimination
–Decision-tree induction

23 Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
(Tree: the root tests A4; its branches test A1 and A6; the leaves are Class 1 / Class 2)
Reduced attribute set: {A1, A4, A6}

24 Histograms
A popular data reduction technique
Divide the data into buckets and store the average (or sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems

25 Heuristic Feature Selection Methods
There are 2^d possible feature subsets of d features
Several heuristic feature selection methods
–Best single features under the feature independence assumption: choose by significance tests
–Best step-wise feature selection
 The best single feature is picked first
 Then the next best feature conditioned on the first, ...
–Step-wise feature elimination
 Repeatedly eliminate the worst feature
–Best combined feature selection and elimination
–Optimal branch and bound
 Use feature elimination and backtracking
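A sketch of best step-wise (greedy forward) selection, assuming a caller-supplied subset-scoring function; all names here are hypothetical:

```python
def forward_selection(features, score, k):
    """Greedily add the single feature that most improves score(subset),
    stopping at k features or when no candidate helps."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # no remaining feature improves the score
        selected.append(best)
        remaining.remove(best)
    return selected
```

Step-wise backward elimination is the mirror image: start from the full set and repeatedly drop the feature whose removal hurts the score least.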

26 Data Compression
String compression
–There are extensive theories and well-tuned algorithms
–Typically lossless
–But only limited manipulation is possible without expansion
Audio/video compression
–Typically lossy compression, with progressive refinement
–Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
Time sequences are not audio
–Typically short, and they vary slowly with time

27 Data Compression
(Diagram: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original data)

28 Numerosity Reduction
Parametric methods
–Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
–Log-linear models: obtain the value at a point in m-D space as a product over appropriate marginal subspaces
Non-parametric methods
–Do not assume models
–Major families: histograms, clustering, sampling

29 Regression & Log-Linear Models
Linear regression: data are modeled to fit a straight line
–Often uses the least-squares method to fit the line
Multiple regression
–Allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model
–Approximates discrete multidimensional probability distributions

30 Regression Analysis & Log-Linear Models
Linear regression: Y = α + βX
–The two parameters α and β specify the line and are estimated from the data at hand
–using the least-squares criterion on the known values Y1, Y2, … and X1, X2, …
Multiple regression: Y = b0 + b1 X1 + b2 X2
–Many nonlinear functions can be transformed into the above
Log-linear models
–The multi-way table of joint probabilities is approximated by a product of lower-order tables
–Probability: p(a, b, c, d) = αab βac γad δbcd
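The least-squares estimates for Y = α + βX have the closed form β = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and α = ȳ − βx̄, which a short sketch can verify:

```python
def linear_regression(xs, ys):
    """Least-squares fit of Y = alpha + beta * X."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    beta = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            / sum((x - xbar) ** 2 for x in xs))
    alpha = ybar - beta * xbar
    return alpha, beta
```

For numerosity reduction, only the two parameters (α, β) need to be stored in place of the raw points.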

31 Sampling
Choose a representative subset of the data
–Simple random sampling may perform poorly in the presence of skew
Develop adaptive sampling methods
–Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database
 Used in conjunction with skewed data
(Figure: raw data vs. a cluster/stratified sample)
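A minimal sketch of stratified sampling, assuming a caller-supplied key function that assigns each record to its stratum (names and the per-stratum minimum of one record are my own choices):

```python
import random

def stratified_sample(records, key, fraction, seed=0):
    """Sample every stratum (records sharing key(r)) at the same rate,
    approximately preserving class proportions under skew."""
    rng = random.Random(seed)
    strata = {}
    for r in records:
        strata.setdefault(key(r), []).append(r)
    out = []
    for group in strata.values():
        k = max(1, round(len(group) * fraction))  # keep rare strata represented
        out.extend(rng.sample(group, k))
    return out
```

With a skewed 10/90 class split and a 10% rate, simple random sampling can easily miss the rare class entirely; stratified sampling keeps it represented.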

32 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

33 Discretization
Three types of attributes
–Nominal: values from an unordered set
–Ordinal: values from an ordered set
–Continuous: real numbers
Discretization
–Divide the range of a continuous attribute into intervals, because some data analysis algorithms only accept categorical attributes
Some techniques
–Binning methods: equal-width, equal-frequency
–Entropy-based methods

34 Hierarchical Reduction
Use a multi-resolution structure with different degrees of reduction
Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters"
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation
–An index tree hierarchically divides a data set into partitions by the value range of some attributes
–Each partition can be considered a bucket
–Thus an index tree with aggregates stored at each node is a hierarchical histogram

35 Discretization & Concept Hierarchy
Discretization
–Reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values
Concept hierarchies
–Reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)

36 Discretization & Concept Hierarchy Generation
Binning
Histogram analysis
Clustering analysis
Entropy-based discretization
Segmentation by natural partitioning

37 Entropy-based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
–E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2)
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g., the information gain Ent(S) − E(S, T) falls below a threshold
Experiments show that it may reduce data size and improve classification accuracy
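The boundary search above can be sketched as follows (candidate cuts taken midway between adjacent distinct values; function names are my own):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def best_boundary(values, labels):
    """Return the cut T minimizing |S1|/|S| * Ent(S1) + |S2|/|S| * Ent(S2)."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_t, best_e = None, float("inf")
    for i in range(1, n):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # no boundary between equal values
        t = (pairs[i][0] + pairs[i - 1][0]) / 2
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        e = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e
```

Recursive discretization would re-apply `best_boundary` to each resulting interval until the stopping criterion is met.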

38 Agenda
Motivation
Preprocessing techniques
–Data cleaning
–Data integration and transformation
–Data reduction
–Discretization
Preprocessing examples
–Accelerometer data
–GPS data

39 Accelerometer Sensor Data
Geometrical calculation
–Acceleration
–Energy
–Frequency-domain entropy
–Correlation
–Vibration
Parameter tuning of the accelerometer
–Adjusting parameters for the surroundings

40 Acceleration Calculation
3D acceleration (ax, ay, az) and gravity (g)
Acceleration (a)
–Vector combination of (ax, ay, az)
–Considering gravity: a = ax + ay + az − g (vector sum), so the measured total is γ = a + g = ax + ay + az

41 Energy
Kinetic energy: E = (1/2) m v^2
Energy by 3D acceleration
–Kinetic energy for each direction (x, y, z)
–Whole kinetic energy as their sum

42 Why Energy?
For classification of sedentary activity
–Moderate-intensity activities (walking, typing) vs. vigorous activities (running)
To account for bias caused by body weight
–Same activity performed by a heavy man (1) vs. a light man (2): F1 > F2 but E1 = E2, since force is proportional to weight

43 Frequency-domain Entropy
Value converted from the time scale x(t) to the frequency scale X(f)
–Fourier transform: Continuous Fourier Transform, Discrete Fourier Transform, Fast Fourier Transform algorithms
Normalized by entropy calculation
–I(X) is the information content or self-information of X, which is itself a random variable
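A sketch of the feature, assuming the entropy is taken over the normalized magnitude spectrum with the DC component excluded (a common choice in activity recognition; a naive O(n²) DFT stands in for the FFT here):

```python
from math import cos, sin, pi, sqrt, log2

def frequency_domain_entropy(signal):
    """Entropy of the normalized DFT magnitude spectrum (DC excluded)."""
    n = len(signal)
    mags = []
    for k in range(1, n):  # skip k = 0, the DC component
        re = sum(x * cos(2 * pi * k * t / n) for t, x in enumerate(signal))
        im = -sum(x * sin(2 * pi * k * t / n) for t, x in enumerate(signal))
        mags.append(sqrt(re * re + im * im))
    total = sum(mags)
    probs = [m / total for m in mags if m > 0]  # normalize to a distribution
    return -sum(p * log2(p) for p in probs)
```

A pure tone concentrates its spectrum in one frequency pair and yields low entropy; a complex activity spreads energy across frequencies and yields high entropy.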

44 Vibration
The variation of the distance of acceleration (ax, ay, az) from the origin
–Static condition: ax² + ay² + az² = g²
–In action: ax² + ay² + az² ≠ g²
–Vibration is calculated as the deviation (Δ)
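A minimal sketch of the static-condition check above (the tolerance value is a hypothetical choice, not from the slides):

```python
def vibration(ax, ay, az, g=9.8, tol=0.5):
    """Deviation of the acceleration magnitude from gravity g; near zero
    when static (ax^2 + ay^2 + az^2 ~= g^2), nonzero when in action."""
    delta = (ax * ax + ay * ay + az * az) ** 0.5 - g
    return delta, abs(delta) > tol  # (deviation, in-action flag)
```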

45 Sensor Parameter Tuning
Acceleration variables (ax, ay, az) can be tuned by a linear function, e.g., ax = kx vx + bx
Setting the values kx, bx from test sample data (vx1, vx2)
–Measured in a static condition
–Gravity g is generally 9.8 m/s²

46 GPS Data Preprocessing
Mapping from GPS coordinates to places
Place
–A human-readable labeling of positions [Hightower '03]
(Diagram: place mapping process linking the LPS DB, labeling interface, location DB, place selection, and label inference; scalability is an open question)

47 Outlier Elimination
To get rid of peculiar GPS data outside its normal boundary
–ti is the time of the i-th GPS datum
–pi is the i-th GPS coordinate
–da,b denotes the distance between the a-th and b-th coordinates
–kv denotes the threshold for outlier clearing
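The rule implied by the variables above (drop a fix whose implied speed from the last kept fix exceeds kv) can be sketched as follows, assuming planar coordinates in metres and strictly increasing timestamps:

```python
def remove_gps_outliers(points, k_v):
    """points: list of (t, x, y); drop any fix whose implied speed
    dist / (t - t_prev) from the last kept fix exceeds the threshold k_v."""
    kept = [points[0]]
    for t, x, y in points[1:]:
        t0, x0, y0 = kept[-1]
        dist = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        if dist / (t - t0) <= k_v:
            kept.append((t, x, y))
    return kept
```

Comparing against the last *kept* fix (rather than the immediately preceding raw fix) prevents a single outlier from also invalidating the valid fix that follows it.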

48 Missing Data Correction
Regression for filling in missing data
–ti is the time of the i-th GPS datum
–pi is the i-th GPS coordinate
(Figure: gray dots mark the interpolated points)

49 Mapping Example of Yonsei Univ.
The domain area is divided into a lattice, and each region is then labeled

50 Location Analysis from GPS Data
GPS data analysis yields location information
–Staying place
–Passing place
–Movement speed
Example trace:
Time   Status   Place
-      Staying  Home
09:00  Moving   Home
09:30  Passing  Front gate
10:00  Staying  Student building
11:00  Moving   Student building
11:10  Passing  Center road
11:30  Staying  Library
(Diagram: GPS sensor data → coordinate conversion → location positioning → data integration → context generation, using map information and place areas)

51 Location Positioning
Place detection methods
–Polygon-area based: accurate, but the place DB is difficult to build
–Center-point based: easy to manage, but inaccurate
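The center-point method can be sketched as follows (the place names, planar coordinates, and the max_dist cutoff are hypothetical illustrations):

```python
def nearest_place(x, y, centers, max_dist=100.0):
    """Center-point place detection: label a fix with the closest place
    center, or None if every center is farther than max_dist."""
    best, best_d = None, max_dist
    for name, (cx, cy) in centers.items():
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        if d <= best_d:
            best, best_d = name, d
    return best
```

This illustrates the trade-off on the slide: the DB is just one (name, center) pair per place, but any fix near a place's edge may be labeled with a neighboring place, which the polygon-area method avoids.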

52 Summary
Data preprocessing is a big issue for data management
Data preprocessing includes
–Data cleaning and data integration
–Data reduction and feature selection
–Discretization
Many methods have been proposed, but this is still an active area of research

