Slide 1: Some Key Questions about your Data
Damian Gordon, Brendan Tierney, Brian Mac Namee
Slide 2: The Data
Some students will be using a dataset as part of their research. This is typically thousands of rows of data. We are not talking about the data you might be collecting from surveys and interviews, but rather a pre-existing set of data. If the data is the key consideration in your research (although not all projects will necessarily be concerned with large datasets), it is important to consider several questions.
Slide 3: Overview
- How suitable is the data?
- What is the type of the data?
- Where will you get it from?
- What size is the dataset?
- What format is it in?
- How much cleaning is required?
- What is the quality of the data?
- How do you deal with missing data?
- How will you evaluate your analysis?
- etc.
Slide 4: Suitability: Dataset
Determining the suitability of the data is a vital consideration. It is not sufficient to simply locate a dataset that is thematically linked to your research question; it must be appropriate for exploring the questions that you want to ask. For example, just because you want to do credit card fraud detection and you have a dataset that contains credit card transactions, or one that was used in another credit card fraud project, does not mean that it will be suitable for your project.
Slide 5: Suitability: Labelling
Is the data already labelled? This is very important for supervised learning problems. To take the credit card fraud example again, you can probably get as many credit card transactions as you like, but you probably won't be able to get them marked up as fraudulent and non-fraudulent.
Slide 6: Suitability: Labelling
The same goes for a lot of text analytics problems: can you get people to label thousands of documents as interesting or non-interesting to them so that you can train a predictive model? The availability of labelled data is a key consideration for any supervised learning problem. The areas of semi-supervised learning and active learning try to address this problem and have some very interesting open research questions.
Slide 7: Suitability: Labelling
Two important considerations:
- The Curse of Dimensionality: as the dimensionality increases, the volume of the space increases so fast that the available data becomes sparse. In order to obtain a statistically sound result, the amount of data you need often grows exponentially with the dimensionality.
- The No Free Lunch Theorem: classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems.
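The sparsity effect behind the Curse of Dimensionality can be seen with a small Monte Carlo sketch (the function name and sampling setup here are illustrative, not from the slides): the fraction of uniformly sampled points that fall "near" the centre of the unit hypercube collapses as dimensions are added.

```python
import random

def neighbour_fraction(dims, n_points=20_000, radius=0.5, seed=42):
    """Fraction of random points in the unit hypercube that lie within
    `radius` of the cube's centre. As `dims` grows, this fraction
    collapses towards zero: the same amount of data becomes sparse."""
    rng = random.Random(seed)
    centre = [0.5] * dims
    inside = 0
    for _ in range(n_points):
        point = [rng.random() for _ in range(dims)]
        dist_sq = sum((a - b) ** 2 for a, b in zip(point, centre))
        if dist_sq <= radius ** 2:
            inside += 1
    return inside / n_points

for d in (2, 5, 10):
    print(d, neighbour_fraction(d))  # the fraction shrinks rapidly with d
```

In 2 dimensions roughly 79% of points land within distance 0.5 of the centre; by 10 dimensions it is a fraction of a percent, which is why the data you need often grows exponentially with dimensionality.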
Slide 8: Suitability: Labelling
Also remember that, for labelling, you might be aiming for one of three goals:
- Binary classification: classifying each data item into one of two categories.
- Multiclass classification: classifying each data item into one of more than two categories.
- Multi-label classification: assigning each data item multiple target labels.
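The difference between the three goals is easiest to see in the shape of the label column itself. A minimal sketch (the example labels and the helper function are hypothetical, chosen only to illustrate the distinction):

```python
# Toy label columns illustrating the three labelling goals.
binary_labels = ["fraud", "ok", "ok", "fraud"]            # one of two classes
multiclass_labels = ["cash", "card", "online", "card"]    # one of >2 classes
multilabel_labels = [{"urgent", "fraud"}, {"ok"}, set()]  # any subset of labels

def label_kind(labels):
    """Classify a label column as 'binary', 'multiclass', or 'multilabel'."""
    # Multi-label data attaches a *collection* of labels to each item.
    if all(isinstance(y, (set, frozenset, list, tuple)) for y in labels):
        return "multilabel"
    n_classes = len(set(labels))
    return "binary" if n_classes <= 2 else "multiclass"
```

Knowing which of the three you need constrains both the datasets that are suitable and the models you can train on them.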
Slide 9: Types of Data
- Federated data
- High-dimensional data
- Descriptive data
- Longitudinal data
- Streaming data
- Web (scraped) data
- Numeric vs. categorical vs. text data
- etc.
Slide 10: Locating Datasets
http://researchmethodsdataanalysis.blogspot.com/2011/11/dataset-sites.html
e.g.
- http://www.kdnuggets.com/datasets/
- http://www.google.com/publicdata/directory
- http://opendata.ie/
- http://lib.stat.cmu.edu/datasets/
Slide 11: Size of the Dataset
What is a reasonable size for a dataset? Obviously it varies a lot from problem to problem, but in general we would recommend at least 10 features (columns) in the dataset, and we'd like to see thousands of instances.
Slide 12: Format of the Data
- TXT (Text file)
- MIME (Multipurpose Internet Mail Extensions)
- XML (Extensible Markup Language)
- CSV (Comma-Separated Values)
- ASCII (American Standard Code for Information Interchange)
- etc.
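CSV is the format you are most likely to meet in practice, and it is worth remembering that a CSV parser hands back strings, so numeric columns must be converted explicitly. A minimal sketch using Python's standard library (the column names and sample rows are invented for illustration):

```python
import csv
import io

# A hypothetical CSV snippet standing in for a real dataset file.
raw = "id,amount,category\n1,12.50,food\n2,99.99,travel\n"

# DictReader maps each row to {column_name: string_value}.
rows = list(csv.DictReader(io.StringIO(raw)))

# csv yields strings only; convert numeric columns yourself.
for row in rows:
    row["amount"] = float(row["amount"])
```

For a real file you would pass `open("data.csv", newline="")` instead of the `io.StringIO` wrapper.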
Slide 13: Cleaning of Data
- Parsing
- Correcting
- Standardizing
- Matching
- Consolidating
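The standardizing, matching, and consolidating steps can be sketched together: once messy values are standardized to a common form, records that collapse to the same key can be matched as duplicates and consolidated. A minimal illustration (the helper and sample records are hypothetical):

```python
def standardise_name(name):
    """Standardizing: trim whitespace, collapse case, drop punctuation,
    and squeeze repeated internal spaces to one."""
    return " ".join(name.strip().lower().replace(".", "").split())

# Three raw records that all refer to the same person.
records = ["  John  Smith ", "john smith.", "JOHN SMITH"]
cleaned = [standardise_name(r) for r in records]

# Matching/consolidating: records with the same standardized key are
# treated as duplicates of one entity and collapsed to a single row.
consolidated = sorted(set(cleaned))
```

Real deduplication also needs fuzzy matching (misspellings, abbreviations), but exact matching on a standardized key is the usual first pass.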
Slide 14: Quality of the Data
- Frequency counts
- Descriptive statistics (mean, standard deviation, median)
- Normality (skewness, kurtosis, frequency histograms, normal probability plots)
- Associations (correlations, scatter plots)
Slide 15: Missing Data?
- Imputation
- Partial imputation
- Partial deletion
- Full analysis
- Also consider database nullology
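Mean imputation is the simplest of these strategies: replace each missing value with the mean of the observed values in the same column. A minimal sketch (the column and function name are illustrative):

```python
def mean_impute(column):
    """Imputation: replace each missing value (None) with the mean of
    the observed values in the same column. Simple, but it shrinks the
    column's variance; deletion or model-based imputation may be better."""
    observed = [v for v in column if v is not None]
    fill = sum(observed) / len(observed)
    return [fill if v is None else v for v in column]

ages = [23, None, 31, 28, None]  # a hypothetical column with gaps
imputed = mean_impute(ages)      # gaps filled with (23 + 31 + 28) / 3
```

Partial deletion (dropping rows with missing values) is the main alternative; it is safe only when values are missing at random and the dataset is large enough to absorb the loss.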
Slide 16: Dataset Types
- Training dataset (build dataset)
- Test dataset
- Apply dataset (scoring dataset)
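The split between training and test data is usually a shuffle followed by a cut. A minimal sketch (the function and fractions are illustrative, not prescribed by the slides):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=0):
    """Shuffle and split: the model is built on the training set and
    evaluated on the held-out test set. The 'apply' (scoring) set is
    different again: new, unlabelled rows that the final model scores."""
    rng = random.Random(seed)        # fixed seed => reproducible split
    shuffled = rows[:]               # copy so the caller's list survives
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))              # stand-in for 100 dataset rows
train, test = train_test_split(data)
```

The essential rule is that no row appears in both sets: a model evaluated on rows it was trained on will report optimistically biased performance.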
Slide 17: Evaluation
How do we evaluate the research project?
Slide 18: Evaluation
What about measures such as:
- Area Under the Curve
- Misclassification error
- Confusion matrix
- N-fold cross-validation
- ROC graph
- Log-loss and hinge-loss
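Two of these measures, the confusion matrix and the misclassification error, can be computed directly from paired actual/predicted labels. A minimal sketch (the example labels are invented):

```python
from collections import Counter

def confusion_matrix(actual, predicted):
    """Counts of (actual, predicted) pairs: the diagonal entries
    (a == p) are correct predictions, the rest are errors."""
    return Counter(zip(actual, predicted))

def misclassification_error(actual, predicted):
    """Fraction of items the classifier got wrong."""
    wrong = sum(a != p for a, p in zip(actual, predicted))
    return wrong / len(actual)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical ground-truth labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

cm = confusion_matrix(y_true, y_pred)
err = misclassification_error(y_true, y_pred)  # 2 wrong out of 8 = 0.25
```

AUC, ROC graphs, and log-loss additionally need the model's predicted probabilities rather than hard labels, and N-fold cross-validation repeats the whole train/evaluate cycle across N different splits of the data.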
Slide 19: Evaluation
These measures evaluate the analysis: they check how good the model is on the dataset, and they are definitely part of the evaluation. But if you want to discuss your findings with respect to the real world (and to the research question), you must also test the model's predictions against the real world.
Slide 20: The Data
Other questions?