Preprocessing of Lifelog
Sung-Bae Cho
September 18, 2008


Agenda
Motivation
Preprocessing techniques
– Data cleaning
– Data integration and transformation
– Data reduction
– Discretization
Preprocessing examples
– Accelerometer data
– GPS data

Data Types & Forms
Attribute-value data: a table of attributes A1, A2, ..., An with a class label C
Data types
– Numeric, categorical (see the hierarchy for their relationship)
– Static, dynamic (temporal)
Other kinds of data
– Distributed data
– Text, web, metadata
– Images, audio/video

Why Data Preprocessing?
Data in the real world is dirty
– Incomplete: missing attribute values, lack of certain attributes of interest, or containing only aggregate data
  e.g., occupation=""
– Noisy: containing errors or outliers
  e.g., Salary="-10"
– Inconsistent: containing discrepancies in codes or names
  e.g., Age="42" but Birthday="03/07/1997"
  e.g., was rating "1, 2, 3", now rating "A, B, C"
  e.g., discrepancy between duplicate records

Why Is Data Preprocessing Important?
No quality data, no quality results!
– Quality decisions must be based on quality data
  e.g., duplicate or missing data may cause incorrect or even misleading statistics
Data preparation, cleaning, and transformation comprise the majority of the work (roughly 90%) in a lifelog application

Multi-Dimensional Measure of Data Quality
A well-accepted multi-dimensional view:
– Accuracy
– Completeness
– Consistency
– Timeliness
– Believability
– Value added
– Interpretability
– Accessibility

Major Tasks in Data Preprocessing
Data cleaning
– Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
– Integration of multiple sources of information, or sensors
Data transformation
– Normalization and aggregation
Data reduction
– Obtains a representation reduced in volume that produces the same or similar analytical results
Data discretization
– Part of data reduction, of particular importance for numerical data


Data Cleaning
Importance
– "Data cleaning is the number one problem in data processing & management"
Data cleaning tasks
– Fill in missing values
– Identify outliers and smooth out noisy data
– Correct inconsistent data
– Resolve redundancy caused by data integration

Missing Data
Data is not always available
– e.g., many tuples have no recorded values for several attributes, such as a GPS log inside a building
Missing data may be due to
– Equipment malfunction
– Deletion because of inconsistency with other recorded data
– Data not entered due to misunderstanding
– Certain data not considered important at the time of entry
– History or changes of the data not being registered

How to Handle Missing Data?
Ignore the tuple
Fill in missing values manually: tedious, and often infeasible
Fill them in automatically with
– a global constant, e.g., "unknown" (which may effectively create a new class)
– the attribute mean
– the most probable value: inference-based, such as a Bayesian formula, a decision tree, or the EM algorithm
The sketch below illustrates the simpler strategies.
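A minimal sketch of the automatic strategies using pandas; the table and its column names are illustrative, not from the original slides:

```python
import numpy as np
import pandas as pd

# Hypothetical lifelog records with missing sensor and label values
df = pd.DataFrame({
    "speed": [1.2, np.nan, 1.5, 1.4, np.nan],
    "place": ["home", None, "gate", "library", "library"],
})

complete_only = df.dropna()                            # ignore the tuple
df["speed"] = df["speed"].fillna(df["speed"].mean())   # attribute mean
df["place"] = df["place"].fillna("unknown")            # global constant
```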

Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
– Faulty data collection instruments
– Data entry problems
– Data transmission problems
– Technology limitations
– Inconsistency in naming conventions
Other data problems that require data cleaning
– Duplicate records
– Incomplete data
– Inconsistent data

How to Handle Noisy Data?
Binning method
– First sort the data and partition it into (equi-depth) bins
– Then smooth by bin means, bin medians, bin boundaries, etc.
Clustering
– Detect and remove outliers
Combined computer and human inspection
– Detect suspicious values and check them by hand, e.g., to deal with possible outliers
Regression
– Smooth by fitting the data to regression functions

Binning
Attribute values (for one attribute, e.g., age): 0, 4, 12, 16, 16, 18, 24, 26, 28
Equi-width binning, for a bin width of e.g. 10:
– Bin 1: 0, 4             [−∞, 10) bin
– Bin 2: 12, 16, 16, 18   [10, 20) bin
– Bin 3: 24, 26, 28       [20, +∞) bin
(−∞ denotes negative infinity, +∞ positive infinity)
Equi-frequency binning, for a bin depth of e.g. 3:
– Bin 1: 0, 4, 12         [−∞, 14) bin
– Bin 2: 16, 16, 18       [14, 21) bin
– Bin 3: 24, 26, 28       [21, +∞) bin
A sketch of both schemes follows.
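A minimal sketch of both binning schemes on the values above, including smoothing by bin means:

```python
import numpy as np

ages = np.array([0, 4, 12, 16, 16, 18, 24, 26, 28])

# Equi-width: bins of width 10, so the bin index is just value // 10
width_bin = ages // 10

# Equi-frequency (equi-depth): three sorted values per bin
depth_bin = np.repeat(np.arange(len(ages) // 3), 3)

# Smoothing by bin means: replace each value with the mean of its bin
smoothed = np.array([ages[depth_bin == b].mean() for b in depth_bin])
print(smoothed)  # approx. [5.33 5.33 5.33 16.67 16.67 16.67 26. 26. 26.]
```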

Clustering
Partition the data set into clusters, then store only the cluster representations
Can be very effective if the data is clustered, but not if it is "scattered"
There are many choices of clustering definitions and clustering algorithms


Data Integration
Data integration
– Combines data from multiple sources
Schema integration
– Integrate metadata from different sources
– Entity identification problem: identify real-world entities across multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
– For the same real-world entity, attribute values from different sources differ
– Possible reasons: different representations or different scales, e.g., metric vs. British units
Removing duplicates and redundant data

Data Transformation
Smoothing
– Remove noise from the data
Normalization
– Scale values to fall within a small, specified range
Attribute/feature construction
– Construct new attributes from the given ones
Aggregation
– Summarization
Generalization
– Concept hierarchy climbing

Data Transformation: Normalization
Min-max normalization:
  v' = (v − min_A) / (max_A − min_A) · (new_max_A − new_min_A) + new_min_A
Z-score normalization:
  v' = (v − mean_A) / stddev_A
Normalization by decimal scaling:
  v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
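A minimal sketch of the three normalizations on a toy array:

```python
import numpy as np

def min_max(v, new_min=0.0, new_max=1.0):
    # Rescale linearly so values fall within [new_min, new_max]
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

def z_score(v):
    # Center on the mean and scale by the standard deviation
    return (v - v.mean()) / v.std()

def decimal_scaling(v):
    # Divide by 10^j, with j the smallest integer making all |v'| < 1
    j = int(np.floor(np.log10(np.abs(v).max()))) + 1
    return v / 10**j

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])
print(min_max(v))          # [0.    0.125 0.25  0.5   1.   ]
print(decimal_scaling(v))  # j = 4, so [0.02 0.03 0.04 0.06 0.1 ]
```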


Data Reduction Strategies
The data is too big to work with
Data reduction
– Obtain a reduced representation of the data set that is much smaller in volume yet produces the same (or almost the same) analytical results
Data reduction strategies
– Dimensionality reduction: remove unimportant attributes
– Aggregation and clustering
– Sampling
– Numerosity reduction
– Discretization and concept hierarchy generation

Dimensionality Reduction
Feature selection (i.e., attribute subset selection)
– Select a minimum set of attributes (features) that is sufficient for the lifelog management task
Heuristic methods (needed because the number of choices is exponential)
– Step-wise forward selection
– Step-wise backward elimination
– Combined forward selection and backward elimination
– Decision-tree induction

Example of Decision Tree Induction
Initial attribute set: {A1, A2, A3, A4, A5, A6}
[Tree: splits on A4, then A1 and A6, with Class 1 / Class 2 leaves]
Reduced attribute set: {A1, A4, A6}

Histograms
A popular data reduction technique
Divide the data into buckets and store the average (or sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems

Heuristic Feature Selection Methods
There are 2^d possible sub-features of d features
Several heuristic feature selection methods exist
– Best single features under the feature independence assumption: choose by significance tests
– Best step-wise feature selection: the best single feature is picked first, then the next best feature conditioned on the first, and so on (see the sketch below)
– Step-wise feature elimination: repeatedly eliminate the worst feature
– Best combined feature selection and elimination
– Optimal branch and bound: use feature elimination and backtracking
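A minimal sketch of step-wise forward selection; `score` stands for any quality measure you supply (e.g., cross-validated accuracy), and `k` is the desired subset size:

```python
def forward_selection(features, score, k):
    # Greedily grow the subset: at each step add the feature that,
    # conditioned on those already chosen, improves the score most.
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected
```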

Data Compression
String compression
– There are extensive theories and well-tuned algorithms
– Typically lossless
– But only limited manipulation is possible without expansion
Audio/video compression
– Typically lossy compression, with progressive refinement
– Sometimes small fragments of the signal can be reconstructed without reconstructing the whole
Time sequences are not audio
– Typically short, and they vary slowly with time

[Diagram: lossless compression maps the original data to compressed data and back exactly; lossy compression recovers only an approximation of the original]

Numerosity Reduction
Parametric methods
– Assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers)
– Log-linear models: obtain the value at a point in m-dimensional space as a product over appropriate marginal subspaces
Non-parametric methods
– Do not assume models
– Major families: histograms, clustering, sampling

Regression & Log-Linear Models
Linear regression: data are modeled to fit a straight line
– Often uses the least-squares method to fit the line
Multiple regression
– Allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model
– Approximates discrete multidimensional probability distributions

Regression Analysis & Log-Linear Models
Linear regression: Y = α + βX
– Two parameters, α and β, specify the line and are estimated from the data at hand
– Apply the least-squares criterion to the known values of Y1, Y2, ..., X1, X2, ... (see the sketch below)
Multiple regression: Y = b0 + b1·X1 + b2·X2
– Many nonlinear functions can be transformed into this form
Log-linear models
– The multi-way table of joint probabilities is approximated by a product of lower-order tables
– Probability: p(a, b, c, d) = α_ab β_ac γ_ad δ_bcd
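A minimal sketch of the least-squares fit for Y = α + βX on toy data; only the two parameters need to be stored, not the points:

```python
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

# Closed-form least-squares estimates of the slope and intercept
beta = ((X - X.mean()) * (Y - Y.mean())).sum() / ((X - X.mean()) ** 2).sum()
alpha = Y.mean() - beta * X.mean()
print(alpha, beta)  # 0.15 and 1.95 for this toy data
```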

Sampling
Choose a representative subset of the data
– Simple random sampling may perform poorly in the presence of skew
Develop adaptive sampling methods
– Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data (see the sketch below)
[Figure: raw data vs. a cluster/stratified sample]
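A minimal sketch of stratified sampling with pandas; the `label` column and the 90/10 class skew are illustrative:

```python
import pandas as pd

# A skewed two-class data set: 90 "walk" rows, 10 "run" rows
df = pd.DataFrame({
    "label": ["walk"] * 90 + ["run"] * 10,
    "speed": range(100),
})

# Sample 10% within each class so the skew is preserved
sample = df.groupby("label").sample(frac=0.1, random_state=0)
print(sample["label"].value_counts())  # walk: 9, run: 1
```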


Discretization
Three types of attributes
– Nominal: values from an unordered set
– Ordinal: values from an ordered set
– Continuous: real numbers
Discretization
– Divide the range of a continuous attribute into intervals, because some data analysis algorithms only accept categorical attributes
Some techniques
– Binning methods: equal-width, equal-frequency
– Entropy-based methods

Hierarchical Reduction
Use a multi-resolution structure with different degrees of reduction
Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters"
Parametric methods are usually not amenable to hierarchical representation
Hierarchical aggregation
– An index tree hierarchically divides a data set into partitions by the value range of some attributes
– Each partition can be considered a bucket
– Thus an index tree with aggregates stored at each node is a hierarchical histogram

Discretization & Concept Hierarchy
Discretization
– Reduce the number of values for a given continuous attribute by dividing its range into intervals; interval labels can then replace actual data values
Concept hierarchies
– Reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior)

Discretization & Concept Hierarchy Generation
– Binning
– Histogram analysis
– Clustering analysis
– Entropy-based discretization
– Segmentation by natural partitioning

Entropy-based Discretization
Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is
  E(S, T) = (|S1| / |S|) · Ent(S1) + (|S2| / |S|) · Ent(S2)
The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization
The process is applied recursively to the partitions obtained until some stopping criterion is met, e.g., when the information gain Ent(S) − E(S, T) falls below a threshold δ
Experiments show that it may reduce data size and improve classification accuracy
A sketch of one split follows.
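A minimal sketch of choosing one binary boundary by minimum weighted entropy:

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of an array of class labels
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_boundary(values, labels):
    # Evaluate every candidate boundary T and keep the one that
    # minimizes the weighted entropy E(S, T) of the two intervals.
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    best_t, best_e = None, np.inf
    for i in range(1, len(values)):
        t = (values[i - 1] + values[i]) / 2
        e = (i * entropy(labels[:i]) +
             (len(labels) - i) * entropy(labels[i:])) / len(labels)
        if e < best_e:
            best_t, best_e = t, e
    return best_t, best_e
```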


Accelerometer Sensor Data
Geometrical calculation
– Acceleration
– Energy
– Frequency-domain entropy
– Correlation
– Vibration
Parameter tuning of the accelerometer
– Adjusting parameters for the surroundings

Acceleration Calculation
3D acceleration (a_x, a_y, a_z) and gravity (g)
Acceleration (a)
– Vector combination of the axis readings: (a_x, a_y, a_z)
– Considering gravity: the measured vector γ = a_x + a_y + a_z (as a vector sum) contains gravity, so γ = a + g and the motion acceleration is a = γ − g

Energy
Kinetic energy: E = ½mv²
Energy by 3D acceleration
– Kinetic energy for each direction
– Whole kinetic energy (the sum over the three directions), as sketched below
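A sketch under stated assumptions, not necessarily the authors' exact computation: velocity is obtained by integrating each acceleration axis over time, and the mass and 50 Hz sampling rate are placeholder values:

```python
import numpy as np

def kinetic_energy(ax, ay, az, mass=70.0, fs=50.0):
    # Integrate acceleration to velocity (cumulative sum times the
    # sample interval), then E = 1/2 m v^2 per axis and summed.
    dt = 1.0 / fs
    vx, vy, vz = (np.cumsum(a) * dt for a in (ax, ay, az))
    ex, ey, ez = (0.5 * mass * v ** 2 for v in (vx, vy, vz))
    return ex + ey + ez
```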

Why Energy?
For classification of sedentary activity
– Moderate-intensity activities (walking, typing) vs. vigorous activities (running)
For compensating the bias introduced by body weight
– The same activity performed by a heavy person (1) and a light person (2): F1 > F2 but E1 = E2, since force ∝ weight

Frequency-domain Entropy
Convert from the time domain x(t) to the frequency domain X(f)
Normalize via an entropy calculation (see the sketch below)
Fourier transform
– Continuous Fourier Transform
– Discrete Fourier Transform, computed with Fast Fourier Transform algorithms
I(X) is the information content or self-information of X, which is itself a random variable
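A minimal sketch of frequency-domain entropy: take the FFT of a signal window, normalize the magnitude spectrum into a distribution, and compute its Shannon entropy:

```python
import numpy as np

def frequency_entropy(x):
    # Magnitude spectrum without the DC component
    spectrum = np.abs(np.fft.rfft(x - np.mean(x)))[1:]
    # Normalize magnitudes into a probability distribution
    p = spectrum / spectrum.sum()
    p = p[p > 0]
    # Shannon entropy of the normalized spectrum
    return -(p * np.log2(p)).sum()
```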

Vibration
The variation of the distance of the acceleration (a_x, a_y, a_z) from the origin
– Static condition: a_x² + a_y² + a_z² = g²
– In action: a_x² + a_y² + a_z² ≠ g²
– Vibration calculation (Δ), sketched below
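The slide's Δ formula was an image and is not recoverable verbatim; a natural reading, sketched here, is the deviation of the acceleration magnitude from gravity, which is zero when the device is static:

```python
import numpy as np

def vibration(ax, ay, az, g=9.8):
    # Deviation of the acceleration magnitude from gravity g
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2) - g
```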

Sensor Parameter Tuning
Acceleration variables (a_x, a_y, a_z) can be tuned by a linear function, e.g., a_x = k_x · v_x + b_x
Setting the values k_x, b_x from test sample data (v_x1, v_x2)
– Measured under a static condition
– Gravity g is generally taken as 9.8 m/s²
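A sketch of one common two-point calibration consistent with the slide's setup (not necessarily the authors' exact procedure): hold the axis vertically up and then down, so the true readings are +g and −g, and solve for the scale k and offset b:

```python
def calibrate(v_up, v_down, g=9.8):
    # Solve a = k*v + b given k*v_up + b = +g and k*v_down + b = -g
    k = 2 * g / (v_up - v_down)
    b = g - k * v_up
    return k, b
```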

GPS Data Preprocessing
Mapping from GPS coordinates to places
Place
– A human-readable labeling of positions [Hightower '03]
[Diagram of the place-mapping process: LPS DB, labeling interface, location DB, place selection, and label inference]
Scalability?

Outlier Elimination
To get rid of GPS data that falls outside its normal boundary (see the sketch below)
– t_i is the time of the i-th GPS fix
– d_{a,b} is the distance between the a-th and b-th coordinates
– p_i is the i-th GPS coordinate
– k_v denotes the threshold for outlier clearing
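The slide's test was an image and is not recoverable verbatim; the sketch below implements the natural reading of the variables: drop a fix when the speed implied by the jump from the previous kept fix, d / (t_i − t_{i−1}), exceeds the threshold k_v:

```python
import numpy as np

def eliminate_outliers(times, points, k_v):
    # Keep a fix only if the implied speed from the last kept fix
    # stays at or below the threshold k_v.
    kept_t, kept_p = [times[0]], [points[0]]
    for t, p in zip(times[1:], points[1:]):
        dist = np.linalg.norm(np.subtract(p, kept_p[-1]))
        if dist / (t - kept_t[-1]) <= k_v:
            kept_t.append(t)
            kept_p.append(p)
    return kept_t, kept_p
```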

Missing Data Correction
Regression for filling in missing data (see the sketch below)
– t_i is the time of the i-th GPS fix
– p_i is the i-th GPS coordinate
[Plot: gray dots mark the regression-filled coordinates]
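A minimal sketch of the idea: regress each coordinate against time over the surrounding fixes and predict the missing timestamps; piecewise-linear interpolation is the simplest such regression:

```python
import numpy as np

def fill_missing(t_known, p_known, t_missing):
    # p_known is a list of (lat, lon) pairs observed at times t_known;
    # predict coordinates at the missing timestamps t_missing.
    lat = np.interp(t_missing, t_known, [p[0] for p in p_known])
    lon = np.interp(t_missing, t_known, [p[1] for p in p_known])
    return list(zip(lat, lon))
```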

Mapping Example of Yonsei Univ.
The campus area was divided into a lattice, and each region was then labeled

Location Analysis from GPS Data
GPS data analysis yields location information
– Staying place
– Passing place
– Movement speed

Time    Status    Place
-       Staying   Home
09:00   Moving    Home
09:30   Passing   Front gate
10:00   Staying   Student building
11:00   Moving    Student building
11:10   Passing   Center road
11:30   Staying   Library

[Diagram: GPS sensor data passes through coordinate conversion, location positioning, data integration, and context generation; the analyzer combines GPS coordinates, velocity, and map information (place, area) over time]

Location Positioning
Place detection methods (contrasted in the sketch below)
– Polygon-area based: accurate, but the DB is difficult to build
– Center-point based: easy to manage, but inaccurate
[Figure: polygon-based vs. center-based place regions]
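A minimal sketch of the cheaper center-point method; the place names and radius are illustrative, and the polygon method would replace the distance test with a point-in-polygon check:

```python
from math import hypot

def nearest_place(point, centers, radius):
    # Match the closest place center, but only within `radius`
    label, c = min(centers.items(),
                   key=lambda kv: hypot(point[0] - kv[1][0],
                                        point[1] - kv[1][1]))
    dist = hypot(point[0] - c[0], point[1] - c[1])
    return label if dist <= radius else None

places = {"Front gate": (0.0, 0.0), "Library": (120.0, 80.0)}
print(nearest_place((118.0, 79.0), places, radius=30.0))  # Library
```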

Summary
Data preprocessing is a big issue for data management
Data preprocessing includes
– Data cleaning and data integration
– Data reduction and feature selection
– Discretization
Many methods have been proposed, but this is still an active area of research