
1 Data Science Input: Concepts, Instances and Attributes WFH: Data Mining, Chapter 2 Rodney Nielsen Many/most of these slides were adapted from: I. H. Witten, E. Frank and M. A. Hall

2 Rodney Nielsen, Human Intelligence & Language Technologies Lab Input: Concepts, Instances, Attributes Terminology ● What’s a concept? – Classification, association, clustering, numeric prediction ● What’s in an example? – Relations, flat files, recursion ● What’s in an attribute? – Nominal, ordinal, interval, ratio ● Preparing the input – ARFF, attributes, missing values, getting to know data

3 Rodney Nielsen, Human Intelligence & Language Technologies Lab Terminology What is a concept as it relates to data science and machine learning?

4 Rodney Nielsen, Human Intelligence & Language Technologies Lab Terminology Components of the input: Concept: the thing to be learned; Concept description: the output of the learning algorithm; Aim: learn an intelligible and operational concept description

5 Rodney Nielsen, Human Intelligence & Language Technologies Lab Styles of Learning Classification learning: predicting a discrete class; Association learning: detecting associations between features; Clustering: grouping similar instances into clusters; Numeric prediction: predicting a numeric quantity

6 Rodney Nielsen, Human Intelligence & Language Technologies Lab Classification Learning Example problems: weather data, contact lenses, irises, labor negotiations Classification learning is supervised Learning algorithm is provided with actual outcome Outcome is called the class of the example Measure/evaluate success on fresh data for which class labels are known (test data) In practice success is often measured subjectively
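As a concrete illustration of the supervised setup (learn from labeled training examples, then evaluate on held-out test data), here is a minimal sketch. It assumes scikit-learn is installed and uses the iris data as a stand-in for the slide's example problems; the split size and classifier choice are illustrative, not prescribed by the slides.

    # Minimal sketch: train a classifier and measure accuracy on fresh (test) data.
    # Assumes scikit-learn is available; iris stands in for the slide's example datasets.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    # Hold out labeled test data that the learner never sees during training.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = DecisionTreeClassifier().fit(X_train, y_train)
    print("accuracy on fresh data:", accuracy_score(y_test, clf.predict(X_test)))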

7 Rodney Nielsen, Human Intelligence & Language Technologies Lab Association Rules? What is the key difference between learning classification rules versus learning association rules?

8 Rodney Nielsen, Human Intelligence & Language Technologies Lab Association Learning Can be applied if no class is specified and many patterns might be considered “interesting” Difference from classification learning: Can predict any attribute’s value, not just the class Hence: far more association rules than classification rules Normally only extracted for subregions of the concept space that appear to have statistically strong patterns Thus: constraints are necessary Minimum coverage and minimum accuracy
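To make the coverage and accuracy constraints concrete, here is a small self-contained sketch that scores one candidate association rule against a toy weather table. The data values and the candidate rule are illustrative assumptions.

    # Sketch: score a candidate association rule "humidity=high => play=no"
    # by its coverage (support) and accuracy (confidence). Toy data, illustrative only.
    weather = [
        {"outlook": "sunny",    "humidity": "high",   "windy": False, "play": "no"},
        {"outlook": "sunny",    "humidity": "high",   "windy": True,  "play": "no"},
        {"outlook": "overcast", "humidity": "high",   "windy": False, "play": "yes"},
        {"outlook": "rainy",    "humidity": "normal", "windy": False, "play": "yes"},
    ]

    antecedent = {"humidity": "high"}
    consequent = {"play": "no"}

    applies = [r for r in weather if all(r[a] == v for a, v in antecedent.items())]
    correct = [r for r in applies if all(r[c] == v for c, v in consequent.items())]

    coverage = len(correct)                    # support: instances the rule predicts correctly
    accuracy = len(correct) / len(applies)     # confidence: correct / instances the rule applies to
    print(coverage, accuracy)

    # A mining system would enumerate many candidate rules and keep only those with
    # coverage >= MIN_COVERAGE and accuracy >= MIN_ACCURACY.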

9 Rodney Nielsen, Human Intelligence & Language Technologies Lab Clustering? Compare and contrast classification and clustering.

10 Rodney Nielsen, Human Intelligence & Language Technologies Lab Clustering Finding groups of items that are similar Clustering is unsupervised The class of an example is not known Success often measured subjectively

         Sepal length  Sepal width  Petal length  Petal width  Type
    1    5.1           3.5          1.4           0.2          Iris setosa
    2    4.9           3.0          1.4           0.2          Iris setosa
    …
    51   7.0           3.2          4.7           1.4          Iris versicolor
    52   6.4           3.2          4.5           1.5          Iris versicolor
    …
    101  6.3           3.3          6.0           2.5          Iris virginica
    102  5.8           2.7          5.1           1.9          Iris virginica
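A minimal clustering sketch over these measurements, assuming scikit-learn; the class labels are deliberately ignored, and k = 3 mirrors the three species but would normally be unknown in advance.

    # Sketch: unsupervised clustering of the iris measurements (no class labels used).
    from sklearn.datasets import load_iris
    from sklearn.cluster import KMeans

    X, _ = load_iris(return_X_y=True)          # labels are deliberately ignored
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print(labels[:10])                         # cluster ids, not class names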

11 Rodney Nielsen, Human Intelligence & Language Technologies Lab Numeric Prediction Numeric Prediction is a variant of classification learning where “class” is numeric (usually called “regression”) Learning is supervised Training instances are provided with their target value Measure success on test data

    Outlook   Temperature  Humidity  Windy  Play-time
    Sunny     Hot          High      False  5
    Sunny     Hot          High      True   0
    Overcast  Hot          High      False  55
    Rainy     Mild         Normal    False  40
    …         …            …         …      …
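A minimal numeric-prediction sketch on a toy version of this table, assuming pandas and scikit-learn; the one-hot encoding of the nominal attributes and the linear model are illustrative choices, not the slide's prescribed method.

    # Sketch: numeric prediction (regression) of play-time from nominal weather attributes.
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.DataFrame({
        "outlook":     ["sunny", "sunny", "overcast", "rainy"],
        "temperature": ["hot",   "hot",   "hot",      "mild"],
        "humidity":    ["high",  "high",  "high",     "normal"],
        "windy":       [False,   True,    False,      False],
        "play_time":   [5,       0,       55,         40],
    })

    X = pd.get_dummies(df.drop(columns="play_time"))   # nominal -> indicator columns
    y = df["play_time"]                                # numeric target, not a class
    model = LinearRegression().fit(X, y)
    print(model.predict(X))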

12 Rodney Nielsen, Human Intelligence & Language Technologies Lab Concepts versus Instances What is the relation between a concept, an instance and an attribute?

13 Rodney Nielsen, Human Intelligence & Language Technologies Lab What’s in an Example Instance: thing to be classified, associated, or clustered; individual, independent example of the target concept; characterized by a predetermined set of attributes; represented by a corresponding set of attribute values Input to learning scheme: set of instances/dataset, represented as a single relation/flat file Rather restricted form of input: no relationships between objects Most common form of input in data mining

14 Rodney Nielsen, Human Intelligence & Language Technologies Lab A Family Tree [Family-tree diagram: Peter (M) = Peggy (F), with children Steven (M), Graham (M), and Pam (F); Grace (F) = Ray (M), with children Ian (M), Pippa (F), and Brian (M); Pam = Ian, with children Anna (F) and Nikki (F)]

15 Rodney Nielsen, Human Intelligence & Language Technologies Lab Family Tree Represented as a Table

    Name    Gender  Parent1  Parent2
    Peter   Male    ?        ?
    Peggy   Female  ?        ?
    Steven  Male    Peter    Peggy
    Graham  Male    Peter    Peggy
    Pam     Female  Peter    Peggy
    Ian     Male    Grace    Ray
    Pippa   Female  Grace    Ray
    Brian   Male    Grace    Ray
    Anna    Female  Pam      Ian
    Nikki   Female  Pam      Ian

16 Rodney Nielsen, Human Intelligence & Language Technologies Lab Generating a Flat File ● Process of flattening called “denormalization” – Several relations are joined together to make one ● Possible with any finite set of finite relations ● Problematic: relationships without pre-specified number of objects – Example: concept of nuclear-family ● Denormalization may produce spurious regularities that reflect structure of database – Example: “supplier” predicts “supplier address”
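A sketch of denormalization as a join, assuming pandas; the relations and column names are hypothetical, and the comment points out the spurious "supplier predicts supplier address" regularity mentioned above.

    # Sketch: denormalization = joining relations into one flat file.
    import pandas as pd

    orders = pd.DataFrame({"order_id": [1, 2], "supplier_id": ["S1", "S2"], "amount": [100, 250]})
    suppliers = pd.DataFrame({"supplier_id": ["S1", "S2"],
                              "supplier_address": ["10 Elm St", "42 Oak Ave"]})

    flat = orders.merge(suppliers, on="supplier_id")   # one row per order, supplier fields repeated
    print(flat)
    # Note the spurious regularity: supplier_id now "predicts" supplier_address perfectly,
    # which reflects database structure rather than anything worth learning.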

17 Rodney Nielsen, Human Intelligence & Language Technologies Lab What’s in an Attribute? ● Each instance is described by a set of values corresponding to fixed predefined set of features or attributes ● But: number of attributes may vary in practice – Possible solution: “irrelevant value” flag ● Related problem: existence of an attribute may depend on value of another attribute ● Possible attribute types: – Nominal, ordinal, interval and ratio

18 Rodney Nielsen, Human Intelligence & Language Technologies Lab Nominal Quantities ● Values are distinct symbols – Values themselves serve only as labels or names – Nominal comes from the Latin word for name ● Example: attribute “outlook” from weather data – Values: sunny, overcast, and rainy ● No relation is implied among nominal values (no ordering or distance measure) ● Only equality tests can be performed

19 Rodney Nielsen, Human Intelligence & Language Technologies Lab Ordinal Quantities ● Impose order on values ● But: no distance between values defined ● Example: ● Attribute “temperature” in weather data ● Values: “hot” > “mild” > “cool” ● Note: addition and subtraction don’t make sense ● Example rule: ● Temperature < hot ⇒ play = yes ● Distinction between nominal and ordinal not always clear (e.g. attribute “outlook”)

20 Rodney Nielsen, Human Intelligence & Language Technologies Lab Interval Quantities ● Interval quantities are not only ordered but measured in fixed and equal units ● Example 1: attribute “temperature” expressed in degrees ● Example 2: attribute “year” ● Difference of two values makes sense ● Sum or product doesn’t make sense – Zero point is not defined!

21 Rodney Nielsen, Human Intelligence & Language Technologies Lab Ratio Quantities ● Ratio quantities are ones for which the measurement scheme defines a zero point ● Example: attribute “distance” – Distance between an object and itself is zero ● Ratio quantities are treated as real numbers – All mathematical operations are allowed ● But: is there an “inherently” defined zero point? – Answer depends on scientific knowledge (e.g. Fahrenheit knew no lower limit to temperature)

22 Rodney Nielsen, Human Intelligence & Language Technologies Lab Attribute Types Used in Practice Many schemes accommodate just two levels of measurement: nominal and ordinal Nominal attributes are also called “categorical”, “enumerated”, or “discrete” But: “enumerated” and “discrete” imply order Special case: dichotomy (“Boolean” attribute) Ordinal attributes are often called “numeric” or “continuous” But: “continuous” implies mathematical continuity

23 Rodney Nielsen, Human Intelligence & Language Technologies Lab Metadata ● Information about the data that encodes background knowledge ● Can be used to restrict search space ● Examples: – Dimensional considerations (i.e. expressions must be dimensionally correct) – Circular orderings (e.g. degrees in compass) – Partial orderings (e.g. generalization/specialization relations)

24 Rodney Nielsen, Human Intelligence & Language Technologies Lab Preparing the Input ● Denormalization is not the only issue ● Problem: different data sources (e.g. sales department, customer billing department, …) – Differences: styles of record keeping, conventions, time periods, data aggregation, primary keys, errors – Data must be assembled, integrated, cleaned up – “Data warehouse”: consistent point of access ● External data may be required (“overlay data”) ● Critical: type and level of data aggregation

25 Rodney Nielsen, Human Intelligence & Language Technologies Lab The ARFF Format

    %
    % ARFF file for weather data with some numeric features
    %
    @relation weather

    @attribute outlook {sunny, overcast, rainy}
    @attribute temperature numeric
    @attribute humidity numeric
    @attribute windy {true, false}
    @attribute play? {yes, no}

    @data
    sunny, 85, 85, false, no
    sunny, 80, 90, true, no
    overcast, 83, 86, false, yes
    ...
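One way to read such a file from Python is SciPy's ARFF loader; a sketch, assuming the listing above is saved (complete, without the trailing ellipsis) as weather.arff. SciPy's reader handles numeric and nominal attributes but not every ARFF extension.

    # Sketch: loading the ARFF file above with SciPy.
    from scipy.io import arff
    import pandas as pd

    data, meta = arff.loadarff("weather.arff")   # numeric + nominal attributes supported
    df = pd.DataFrame(data)
    print(meta)          # attribute names and types declared in the header
    print(df.head())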

26 Rodney Nielsen, Human Intelligence & Language Technologies Lab Sparse Data In some applications most attribute values in a dataset are zero E.g.: word counts in a text categorization problem ARFF supports sparse data This also works for nominal attributes (where the first value corresponds to “zero”)

    Dense:
    0, 26, 0, 0, 0, 0, 63, 0, 0, 0, "class A"
    0, 0, 0, 42, 0, 0, 0, 0, 0, 0, "class B"

    Sparse:
    {1 26, 6 63, 10 "class A"}
    {3 42, 10 "class B"}
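A small sketch of the dense-to-sparse conversion shown above, in plain Python; the row values are the ones from the example.

    # Sketch: convert a dense row of mostly-zero values into ARFF-style {index value} pairs.
    def to_sparse(row):
        # Keep only non-zero entries; index positions start at 0, as in sparse ARFF.
        return "{" + ", ".join(f"{i} {v}" for i, v in enumerate(row) if v != 0) + "}"

    dense = [0, 26, 0, 0, 0, 0, 63, 0, 0, 0, '"class A"']
    print(to_sparse(dense))     # {1 26, 6 63, 10 "class A"}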

27 Rodney Nielsen, Human Intelligence & Language Technologies Lab Missing Values ● Frequently indicated by out-of-range entries – Types: unknown, unrecorded, irrelevant – Reasons: ● Malfunctioning equipment ● Changes in experimental design ● Collation of different datasets ● Measurement not possible ● Missing value may have significance in itself (e.g. missing test in a medical examination) – Most schemes assume there are no missing values – Might need to be coded as additional value
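A sketch of two common ways to handle an out-of-range missing-value code, assuming pandas; the "?" code, column names, and values are illustrative assumptions rather than anything specified by the slide.

    # Sketch: treating an out-of-range code ("?") as missing, and optionally keeping
    # the fact that it was missing as its own feature.
    import pandas as pd
    import numpy as np

    df = pd.DataFrame({"age": ["young", "?", "presbyopic"],
                       "tear_rate": ["normal", "reduced", "?"]})

    df = df.replace("?", np.nan)                       # out-of-range entry -> explicit missing value
    df["tear_rate_missing"] = df["tear_rate"].isna()   # missingness may be informative in itself
    print(df)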

28 Rodney Nielsen, Human Intelligence & Language Technologies Lab Inaccurate Values ● Reason: data has not been collected for mining ● Result: errors and omissions that don’t affect original purpose of data (e.g. age of customer) ● Typographical errors in nominal attributes ⇒ values need to be checked for consistency ● Typographical and measurement errors in numeric attributes ⇒ outliers need to be identified ● Errors may be deliberate (e.g. wrong zip codes) ● Other problems: duplicates, stale data

29 Rodney Nielsen, Human Intelligence & Language Technologies Lab Getting to Know the Data ● Simple visualization tools are very useful – Nominal attributes: histograms (Distribution consistent with background knowledge?) – Numeric attributes: graphs (Any obvious outliers?) ● 2-D and 3-D plots show dependencies ● Need to consult domain experts ● Too much data to inspect? Take a sample!
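A quick "getting to know the data" sketch, assuming matplotlib and scikit-learn's copy of the iris measurements; the attribute choices are arbitrary.

    # Sketch: simple exploratory plots for one dataset.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris

    iris = load_iris()
    X = iris.data

    plt.hist(X[:, 2], bins=20)                 # distribution of one numeric attribute
    plt.xlabel(iris.feature_names[2])
    plt.show()

    plt.scatter(X[:, 2], X[:, 3], s=10)        # 2-D plot to eyeball dependencies and outliers
    plt.xlabel(iris.feature_names[2])
    plt.ylabel(iris.feature_names[3])
    plt.show()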

30 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions

31 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions Why are nominal and ordinal attribute types most commonly used for data mining systems? Surely numeric attributes are also essential? What is a practical way of collecting data, so that you spend less of your time on it? In other words, if I were to begin a brand new data collection process, what method would work best so that I would not have to spend the bulk of my time on it (as Chapter 2 suggests you usually will)? Is there a way that data can be formatted so that it more easily integrates into some kind of data warehouse?

32 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions When given a set of data, is it common practice to first determine the attributes of the data manually, or is that something the machine learning algorithm should be used for? If it is common to determine attributes manually, wouldn't some attributes that weren't thought of be missed, making the algorithm less useful?

33 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions Has machine learning ever presented rules which contradicted experts' knowledge or pioneered scientific breakthroughs? The chapter mainly discusses small improvements made by machine learning over conventional methods. Decision lists are intended to be interpreted in sequence. Can association rules form a decision list or is that reserved only for classification rules? Also, would it be useful to form a decision tree from association rules?

34 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions I understand the idea behind clustering but what I am not sure about is how you evaluate the quality of clustering? How does one pick out new and interesting associations from an overwhelming majority of irrelevant or previously known associations if one is processing enormous volumes of data?

35 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions The text states that “In practical data mining applications, success is measured more subjectively in terms of how acceptable the learned description-such as the rules or decision tree- is to a human user”. Why should we measure success this way? It seems like checking for percent accuracy against test data would be a better method.

36 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions What are the benefits/limitations of using a set of instances derived from all users preferences in developing a machine learning algorithm for a streaming media service like Pandora or Netflix to suggest new content for users?

37 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions What is the importance of ordinal vs nominal if the difference is so obscure? How can we better express multinomial features for use in regression oriented problems?

38 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions Why does each set of attributes need to be a predefined set or a fixed value? Why does duplicating data using machine tools at times produce different types of results for the same value if it uses the same calculation method?

39 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions Why is file mining preferred over database mining? Why would someone falsify data purposely?

40 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions What are some good things to keep in mind when developing a system so that the backing database will be useful for data mining in the future? I'm not 100% clear on the concept of the relation-valued attributes (mentioned right before the Sparse Data section). Is this kind of analogous to Objects in OOP?

41 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions What is another practical example of a relational representation, aside from the family tree example? Can these be considered hierarchical attributes?

42 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions In what situations are association rules involved with numeric attributes? For the sister-of example in Section 2.2, doesn't the first person's parent 1 have to equal the second person's parent 1, and the first person's parent 2 have to equal the second person's parent 2? Otherwise they could possibly be step-sisters instead of full-blood sisters.

43 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions This chapter also makes a mention of inductive logic programming. Are there any important applications for this type of machine learning in data science at the moment? The idea of learning recursive rules seems fascinating.

44 Rodney Nielsen, Human Intelligence & Language Technologies Lab Questions

45 Rodney Nielsen, Human Intelligence & Language Technologies Lab The “sister of” Relation

    Full enumeration of pairs:
    First person  Second person  Sister of?
    Peter         Peggy          No
    Peter         Steven         No
    …             …              …
    Steven        Peter          No
    Steven        Graham         No
    Steven        Pam            Yes
    …             …              …
    Ian           Pippa          Yes
    …             …              …
    Anna          Nikki          Yes
    …             …              …
    Nikki         Anna           Yes

    Closed-world assumption (only positive examples listed):
    First person  Second person  Sister of?
    Steven        Pam            Yes
    Graham        Pam            Yes
    Ian           Pippa          Yes
    Brian         Pippa          Yes
    Anna          Nikki          Yes
    Nikki         Anna           Yes
    All the rest                 No

46 Rodney Nielsen, Human Intelligence & Language Technologies Lab A Full Representation in One Table

    First person                       Second person                     Sister of?
    Name    Gender  Parent1  Parent2   Name   Gender  Parent1  Parent2
    Steven  Male    Peter    Peggy     Pam    Female  Peter    Peggy     Yes
    Graham  Male    Peter    Peggy     Pam    Female  Peter    Peggy     Yes
    Ian     Male    Grace    Ray       Pippa  Female  Grace    Ray       Yes
    Brian   Male    Grace    Ray       Pippa  Female  Grace    Ray       Yes
    Anna    Female  Pam      Ian       Nikki  Female  Pam      Ian       Yes
    Nikki   Female  Pam      Ian       Anna   Female  Pam      Ian       Yes
    All the rest                                                         No

If second person’s gender = female and first person’s parent = second person’s parent then sister-of = yes

47 Rodney Nielsen, Human Intelligence & Language Technologies Lab The “ancestor-of” Relation

    First person                       Second person                     Ancestor of?
    Name    Gender  Parent1  Parent2   Name    Gender  Parent1  Parent2
    Peter   Male    ?        ?         Steven  Male    Peter    Peggy    Yes
    Peter   Male    ?        ?         Pam     Female  Peter    Peggy    Yes
    Peter   Male    ?        ?         Anna    Female  Pam      Ian      Yes
    Peter   Male    ?        ?         Nikki   Female  Pam      Ian      Yes
    Pam     Female  Peter    Peggy     Nikki   Female  Pam      Ian      Yes
    Grace   Female  ?        ?         Ian     Male    Grace    Ray      Yes
    Grace   Female  ?        ?         Nikki   Female  Pam      Ian      Yes
    Other positive examples here                                         Yes
    All the rest                                                         No

48 Rodney Nielsen, Human Intelligence & Language Technologies Lab Recursion Infinite relations require recursion Appropriate techniques are known as “inductive logic programming” Problems: (a) noise and (b) computational complexity If person1 is a parent of person2 then person1 is an ancestor of person2 If person1 is a parent of person2 and person2 is an ancestor of person3 then person1 is an ancestor of person3
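The two rules translate directly into a recursive definition; a plain-Python sketch over parent facts taken from the family-tree table:

    # Sketch: the two recursive ancestor rules, written over parent facts (child -> parents).
    parents = {
        "Steven": ["Peter", "Peggy"], "Graham": ["Peter", "Peggy"], "Pam": ["Peter", "Peggy"],
        "Ian": ["Grace", "Ray"], "Pippa": ["Grace", "Ray"], "Brian": ["Grace", "Ray"],
        "Anna": ["Pam", "Ian"], "Nikki": ["Pam", "Ian"],
    }

    def is_ancestor(p1, p2):
        # Base case: p1 is a parent of p2.
        if p1 in parents.get(p2, []):
            return True
        # Recursive case: p1 is an ancestor of one of p2's parents.
        return any(is_ancestor(p1, mid) for mid in parents.get(p2, []))

    print(is_ancestor("Peter", "Nikki"))   # True: Peter -> Pam -> Nikki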

49 Rodney Nielsen, Human Intelligence & Language Technologies Lab Multi-instance Concepts Each individual example comprises a set of instances All instances are described by the same attributes One or more instances within an example may be responsible for its classification Goal of learning is still to produce a concept description Important real world applications e.g. drug interactions prediction

50 Rodney Nielsen, Human Intelligence & Language Technologies Lab Additional Attribute Types ARFF supports string attributes: Similar to nominal attributes but list of values is not pre-specified It also supports date attributes: Uses the ISO-8601 combined date and time format yyyy-MM-ddTHH:mm:ss

    @attribute description string
    @attribute today date

51 Rodney Nielsen, Human Intelligence & Language Technologies Lab Relational Attributes Allow multi-instance problems to be represented in ARFF format The value of a relational attribute is a separate set of instances Nested attribute block gives the structure of the referenced instances

    @attribute bag relational
      @attribute outlook { sunny, overcast, rainy }
      @attribute temperature numeric
      @attribute humidity numeric
      @attribute windy { true, false }
    @end bag

52 Rodney Nielsen, Human Intelligence & Language Technologies Lab Multi-instance ARFF

    %
    % Multiple instance ARFF file for the weather data
    %
    @relation weather

    @attribute bag_ID { 1, 2, 3, 4, 5, 6, 7 }
    @attribute bag relational
      @attribute outlook {sunny, overcast, rainy}
      @attribute temperature numeric
      @attribute humidity numeric
      @attribute windy {true, false}
    @end bag
    @attribute play? {yes, no}

    @data
    1, "sunny, 85, 85, false\nsunny, 80, 90, true", no
    2, "overcast, 83, 86, false\nrainy, 70, 96, false", yes
    ...

53 Rodney Nielsen, Human Intelligence & Language Technologies Lab Attribute Types ● Interpretation of attribute types in ARFF depends on learning scheme – Numeric attributes are interpreted as ● Ordinal scales if less-than and greater-than are used ● Ratio scales if distance calculations are performed (normalization/standardization may be required) – Instance-based schemes define distance between nominal values as 0 if values are equal, 1 otherwise ● Integers in some given data file: nominal, ordinal, or ratio scale?
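A sketch of how an instance-based scheme can combine the 0/1 nominal distance with numeric differences; the assumption that numeric values are already normalized to [0, 1] is an illustrative choice, not something the slide specifies.

    # Sketch: instance-based distance mixing numeric and nominal attributes.
    # Nominal values contribute 0 if equal, 1 otherwise; numeric values are assumed
    # to be pre-normalized to [0, 1] so the two kinds are comparable.
    def distance(a, b, is_nominal):
        total = 0.0
        for x, y, nominal in zip(a, b, is_nominal):
            if nominal:
                total += 0.0 if x == y else 1.0
            else:
                total += abs(x - y)      # could also square and take a root (Euclidean)
        return total

    # outlook (nominal), normalized temperature, normalized humidity, windy (nominal)
    print(distance(["sunny", 0.85, 0.85, False],
                   ["rainy", 0.70, 0.96, False],
                   [True, False, False, True]))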

54 Rodney Nielsen, Human Intelligence & Language Technologies Lab Nominal vs. Ordinal ● Attribute “age” nominal:
    If age = young and astigmatic = no and tear production rate = normal then recommendation = soft
    If age = pre-presbyopic and astigmatic = no and tear production rate = normal then recommendation = soft
● Attribute “age” ordinal (e.g. “young” < “pre-presbyopic” < “presbyopic”):
    If age ≤ pre-presbyopic and astigmatic = no and tear production rate = normal then recommendation = soft

