
1 Peter Fox, Data Science – ITEC/CSCI/ERTH 4350/6350, Week 2, September 2, 2014: Example of Data Science; Data and information acquisition (curation) and metadata/provenance management

2 Admin info (keep/print this slide) Class: ITEC/CSCI/ERTH-4350/6350 Hours: 9am-11:50am Tuesday Location: Lally 102 Instructor: Peter Fox Instructor contact: pfox@cs.rpi.edu, 518.276.4862 (do not leave a msg) Contact hours: Monday** 3:00-4:00pm (or by appt) Contact location: **Winslow 2120 (or Lally 207A) TA: Sneha Das; dass4@rpi.edu Web site: http://tw.rpi.edu/web/courses/DataScience/2014 –Schedule, lectures, syllabus, reading, assignments, etc. 2

3 Review from last week Data Information Knowledge Metadata/ documentation Data life-cycle 3

4 Reading Assignments Changing Science: Chris Anderson Rise of the Data Scientist Where to draw the line What is Data Science? An example of Data Science If you have never heard of Data Science BRDI activities Data policy Self-directed study (answers to the quiz) Fourth Paradigm, Digital Humanities 4

5 And now a more detailed e.g. Or … part of my path to data science 5

6 Why do we (I) care about the Sun? The Sun’s radiation is the single largest external input to the Earth’s atmosphere and thus the Earth system. Moreover, it varies – in time and in wavelength. Also, for a long time – Solar Energetic Particles and the near-Earth environment (and more recently the effect on clouds?) Observations commenced ~1940s, with a resurgence in the late 1970s Two quantities of scientific interest –Total Solar Irradiance – TSI in W m⁻² (adjusted to 1 AU) –Solar Spectral Irradiance – SSI in W m⁻² m⁻¹ or W m⁻² nm⁻¹ Measure, model, understand -> construct, predict
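For reference, a minimal worked form of the two quantities just defined (my notation, not from the slide): TSI measured at the actual Earth-Sun distance d is conventionally rescaled to 1 AU by the inverse-square law, and SSI is simply the same irradiance resolved per unit wavelength.

    \[
      \mathrm{TSI}_{1\,\mathrm{AU}} = \mathrm{TSI}(d)\left(\frac{d}{1\,\mathrm{AU}}\right)^{2}
      \quad [\mathrm{W\,m^{-2}}],
      \qquad
      \mathrm{TSI} = \int_{0}^{\infty}\mathrm{SSI}(\lambda)\,d\lambda
      \quad \text{with SSI in } \mathrm{W\,m^{-2}\,nm^{-1}}.
    \]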

7 Solar radiation as a function of altitude

8

9

10 Spectral synthesis components and flow

11 Summary of Results First comprehensive ‘database’ of: –Empirical models of the thermodynamic structure of the solar atmosphere suitable for different solar magnetic activity levels First comprehensive (70 component) synthetic spectral irradiance database in absolute units –10 disk angles, 7 models, far ultraviolet to far infrared, multi-resolution –~724 GB Strong validation in ultraviolet, visible, lines, infrared –Correct center-to-limb prediction for red-band irradiances –Found 30-45% network contribution to Ly-α irradiance Several comparisons led to improvements in the atomic parameters Led to choice of PICARD (new satellite) filter wavelengths

12 Which brings us to DATA SCIENCE Drum roll….. Some dirty secrets And some … universal truths…

13 Needs (this is our mantra) Scientists should be able to access a global, distributed knowledge base of scientific data that: appears to be integrated appears to be locally available But… data is obtained by multiple means (models and instruments), using various protocols, in differing vocabularies, using (sometimes unstated) assumptions, with inconsistent (or non-existent) meta-data. It may be inconsistent, incomplete, evolving, and distributed. And created in a manner to facilitate its generation NOT its use. And… there exist(ed) significant levels of semantic heterogeneity, large-scale data, complex data types, legacy systems, inflexible and unsustainable implementation technology

14 Back to the TSI time series…

15

16 One composite, one assumption

17 Another composite, different assumption

18 Data pipelines: we have problems Data is coming in faster, in greater volumes and forms, and is outstripping our ability to perform adequate quality control Data is being used in new ways and we frequently do not have sufficient information on what happened to the data along the processing stages to determine if it is suitable for a use we did not envision We often fail to capture, represent and propagate manually generated information that needs to go with the data flows Each time we develop a new instrument, we develop a new data ingest procedure and collect different metadata and organize it differently. It is then hard to use with previous projects The task of event determination and feature classification is onerous and we don't do it until after we get the data And now much of the data is on the Internet/Web (good or bad?)

19 (figure: VSTO, 2008)

20 Yes, it all was/ is about Provenance Origin or source from which something comes, intention for use, who/what generated for, manner of manufacture, history of subsequent owners, sense of place and time of manufacture, production or discovery, documented in detail sufficient to allow reproducibility More on that later in the class…

21 Contents Preparing for data collection Managing data Data and metadata formats Data life-cycle : acquisition –Modes of collecting –Examples –Information as data –Bias, provenance Curation Assignment 1 21

22 Data-Information-Knowledge Ecosystem (diagram labels: Data, Information, Knowledge; Producers, Consumers; Context, Presentation, Organization, Integration, Conversation, Creation, Gathering, Experience) 22

23 MIT DDI Alliance Life Cycle 23

24 (figure: VSTO, 2008)

25 Modes of collecting data, information Observation Measurement Generation Driven by –Questions –Research idea –Exploration 25

26 OBSERVATIONS: Information has content, context and structure. The notion of “unstructured” really means information that is “unmanaged” with conventional technologies (i.e., metadata, markup and databases). Databases, metadata and markup provide control for managing digital information, but they are not convenient. This is why less than 20% of the available digital records are managed with these technologies (i.e., conventional technologies are not scalable). Search engines are extremely convenient, but provide limited control for managing digital information (i.e., long lists of ranked results conceal relationships within and between digital records). The search engine problem is that accessing more information does not equal more knowledge. We already have effectively infinite and instantaneous access to digital information. The challenge is no longer access, but being able to objectively integrate information based on user-defined criteria independent of scale to discover knowledge.

27 Data Management reading http://libraries.mit.edu/guides/subjects/data-management/cycle.html http://esipfed.org/DataManagement http://wiki.esipfed.org/index.php/Data_Management_Workshop http://lisa.dachev.com/ESDC/ Moore et al., Data Management Systems for Scientific Applications, IFIP Conference Proceedings, Vol. 188, pp. 273–284 (2000) Data Management and Workflows: http://www.isi.edu/~annc/papers/wses2008.pdf Metadata and Provenance Management: http://arxiv.org/abs/1005.2643 Provenance Management in Astronomy: http://arxiv.org/abs/1005.3358 Web Data Provenance for QA: http://www.slideshare.net/olafhartig/using-web-data-provenance-for-quality-assessment W3C PROV 27

28 Management Creation of logical collections –The primary goal of a Data Management system is to abstract the physical data into logical collections. The resulting view of the data is a uniform homogeneous library collection. Physical data handling –This layer maps between the physical to the logical data views. Here you find items like data replication, backup, caching, etc. 28
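As an aside (not from the slides), a minimal sketch of what "abstracting physical data into logical collections" can look like in practice: consumers ask for a logical name, and the management layer resolves it to one of possibly several physical replicas. All names, paths and hosts below are made up.

    # Illustrative sketch only: a logical collection that hides physical locations.
    from dataclasses import dataclass, field

    @dataclass
    class LogicalCollection:
        name: str
        # logical item id -> list of physical locations (replicas, caches, backups)
        items: dict = field(default_factory=dict)

        def register(self, item_id: str, physical_uri: str) -> None:
            """Record one more physical copy of a logical item."""
            self.items.setdefault(item_id, []).append(physical_uri)

        def resolve(self, item_id: str) -> str:
            """Return one physical location for a logical item (first replica here)."""
            replicas = self.items.get(item_id)
            if not replicas:
                raise KeyError(f"{item_id} is not in collection {self.name}")
            return replicas[0]

    # Usage: the same logical id can live on local disk and on a mirror.
    tsi = LogicalCollection("tsi-composites")
    tsi.register("acrim-composite", "file:///archive/tsi/acrim_composite.nc")
    tsi.register("acrim-composite", "https://mirror.example.org/tsi/acrim_composite.nc")
    print(tsi.resolve("acrim-composite"))

The physical data handling layer described in the same slide is exactly what would sit behind register/resolve (replication, backup, caching).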

29 Management Interoperability support –Normally the data does not all reside in the same place, or various data collections (like catalogues) need to be put together in the same logical collection. Security support –Data access authorization and change verification. This is the basis of trusting your data. Data ownership –Define who is responsible for data quality and meaning 29

30 Management Metadata collection, management and access. –Metadata are data about data. Persistence –Definition of data lifetime. Deployment of mechanisms to counteract technology obsolescence. Knowledge and information discovery –Ability to identify useful relations and information inside the data collection. 30

31 Management Data dissemination and publication –Mechanisms to make interested parties aware of changes and additions to the collections. 31

32 Logical Collections Identifying naming conventions and organization Aligning cataloguing and naming to facilitate search, access, use Provision of contextual information Related to metadata – why? 32

33 Physical Data Handling Where and who does the data come from? How is it transferred into a physical form? Backup, archiving, and caching... Data formats Naming conventions 33

34 Interoperability Support Bit/byte and platform/ wire neutral encodings Programming or application interface access Data structure and vocabulary (metadata) conventions and standards Definitions of interoperability? –Smallest number of things to agree on so that you do not need to agree on anything else 34

35 Security What mechanisms exist for securing data? Who performs this task? Change and versioning (yes, the data may change), who does this, how? Who has access? How are access methods controlled, audited? Who and what – authentication and authorization? Encryption and data integrity 35

36 Data Ownership Rights and policies – definition and enforcement Limitations on access and use Requirements for acknowledgement and use Who defines and ensures quality, and how? To whom may ownership migrate? How to address replication? How to address revised/ derivative products? 36

37 Metadata Know what conventions, standards, best practices exist Use them – can be hard, use tools Understand costs of incomplete and inconsistent metadata Understand the line between metadata and data and when it is blurred Know where and how to manage metadata and where to store it (and where not to) Metadata CAN be added later in many cases 37

38 Persistence Where will you put your data so that someone else (e.g. one of your class members) can access it? What happens after the class, the semester, after you graduate? What other factors are there to consider? 38

39 Discovery If you choose (see ownership and security), how does someone find your data? How would you provide discovery of collections, versus files, versus ‘bits’? How to enable the narrowest/ broadest discovery? 39

40 Dissemination 40 Who should do this? How and what needs to be put in place? How to advertise? How to inform about updates? How to track use, significance?

41 Data Formats - preview ASCII, UTF-8, ISO 8859-1 Self-describing formats Table-driven Markup languages and other web-based Database Graphs Unstructured Discussion… because this is part of your assignment 41
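To make the format discussion concrete for Assignment 1, here is a small hedged sketch (file names and values are invented) of the same observations written two ways: as a plain UTF-8 CSV, where the header row is the only built-in metadata, and as a self-describing JSON record, where units and context travel with the values.

    # Illustrative only: one tiny dataset, two formats (values are hypothetical).
    import csv, json

    rows = [("2014-09-01T19:22:30-04:00", 1361.2),
            ("2014-09-01T19:23:30-04:00", 1361.1),
            ("2014-09-01T19:24:30-04:00", 1361.3)]

    # Plain UTF-8 CSV: structure is implicit, metadata lives elsewhere.
    with open("tsi_sample.csv", "w", newline="", encoding="utf-8") as f:
        w = csv.writer(f)
        w.writerow(["time", "tsi_w_m2"])   # header is the only "metadata"
        w.writerows(rows)

    # Self-describing JSON: units, observer and context travel with the data.
    record = {
        "title": "Sample TSI observations (hypothetical values)",
        "units": {"time": "ISO 8601", "tsi": "W m-2, adjusted to 1 AU"},
        "observer": "Peter Fox",
        "data": [{"time": t, "tsi": v} for t, v in rows],
    }
    with open("tsi_sample.json", "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)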

42 Metadata formats ASCII, UTF-8, ISO 8859-1 Table-driven Markup languages and other web-based Database, graphs, … Unstructured Look familiar? Yes, same as data Next week we’ll look at things like –Dublin Core (dc.x) –Encoding/ wrapper standards - METS –ISO in general, e.g. ISO/IEC 11179 –Geospatial, ISO 19115-2, FGDC –Time, ISO 8601, xsd:dateTime 42
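As a preview of next week, a minimal Dublin Core-flavoured record; the element names follow the DC element set, but the values are placeholders of my own, not part of the course material.

    # Sketch of a Dublin Core-style record (values are placeholders).
    dc_record = {
        "dc:title":       "Bus 87 arrival times, 15th Street stop",
        "dc:creator":     "Student observer",
        "dc:subject":     "public transit; arrival time; CDTA",
        "dc:description": "Observed arrival times vs. the published schedule",
        "dc:date":        "2014-09-02",            # ISO 8601 date (cf. xsd:dateTime)
        "dc:format":      "text/csv; charset=utf-8",
        "dc:coverage":    "Troy, NY",
        "dc:rights":      "Course use only",
    }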

43 (figure: VSTO, 2008)

44 Acquisition Learn / read what you can about the developer of the means of acquisition –Even if it is you (the observer) –Beware of bias!!! Document things –See notes from Class 1 Have a checklist (see Management) and review it often Be mindful of who or what comes after your step in the data pipeline 44

45 Modes of collecting data, information Observation Measurement Generation Driven by –Questions –Research idea –Exploration 45

46 Example 1 “the record of the time when the CDTA bus 87 arrives at the bus stop on 15th Street under the RPI walk over bridge. The data collection need is being driven by the desire to have a more precise idea of the time when the bus will arrive at that bus stop in the hopes that it will be closer to reality than the official CDTA schedule for bus 87.” Lessons: –Other buses, hard to see the bus, calibrated time source, unanticipated metadata, better to have prepared tables for recording, … 46

47 Example 2 ‘The goal of the data collection was to explore the relative intensity of the wavelengths in a white-light source through a colored plastic film. By measuring this we can find properties of this colored plastic film.’ ‘We used a special tool called a spectrometer to measure the relative intensity of this light. It’s connected to a computer and records all values by using a software program that interacts with the spectrometer.’ Lessons –Noise from external light, inexperience with the software, needed to get help from experienced users, more metadata than expected, software used different logical organization,... 47

48 Example 3 ‘The goal of my data collection exercise was to observe and generate historical stock price data of large financial firms within a specified time frame of the years 2007 to 2009. This objective was primarily driven by general questions and exploration purposes – in particular, a question I wanted to have answered was how severe the ramifications of the economic crisis were on major financial firms.’ Lessons –Irregularities in data due to company changes (buy-out, bankrupt), no metadata – had to create it all, quality was very high, choice of sampling turned out to be crucial, … 48

49 Example 4 I performed a survey among a sample set of people to determine how many prefer carbonated drinks (like Coke) to fruit juice. The goal of this data collection exercise was to determine which option is more popular and whether any health-related issues occur due to the consumption of these drinks. The data collection need was primarily driven by the question of whether consumption of the caffeine, soda and excessive sugar present in these drinks actually causes health problems like obesity, high cholesterol, dental decay, etc. The mode of data collection was by observation. Lessons –The measurement unit for the amount of drink consumed daily was not fixed before starting the data collection exercise. During the data collection process, some gave me the amount in ml whereas some in ounces and some others in number of glasses. Later, I had to convert those units to the standard unit that I was using – ml. –Some people were reluctant to disclose health-related issues and I had to guarantee them anonymity. This solved the problem to a great extent 49
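The unit-normalization lesson in this example is easy to express in code; a sketch follows, where the fluid-ounce conversion factor is standard but the nominal "glass" size is an assumption that would itself need to be recorded as metadata.

    # Sketch of the unit-normalization step from Example 4.
    US_FL_OUNCE_ML = 29.5735
    GLASS_ML = 250.0   # assumed nominal glass size -- record this choice!

    def to_ml(amount: float, unit: str) -> float:
        """Convert a reported daily intake to millilitres."""
        unit = unit.strip().lower()
        if unit in ("ml", "milliliter", "millilitre"):
            return amount
        if unit in ("oz", "ounce", "ounces"):
            return amount * US_FL_OUNCE_ML
        if unit in ("glass", "glasses"):
            return amount * GLASS_ML
        raise ValueError(f"Unknown unit: {unit!r}")

    print(to_ml(12, "oz"))       # 354.882
    print(to_ml(3, "glasses"))   # 750.0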

50 Example 5 This data collection exercise was driven by questions as to what the ratio of male to female friends would be for a set of people on their Facebook profiles. Many questions can be asked and analyzed from this data, like do females predominantly have females as their Facebook friends, or males? (Similar questions can be asked about the males.) Is the person more of an outgoing person who likes to meet and make new friends, or are they selective about their friends? Why does the ratio for a particular person have a marked departure from that of the others? Does this data vary or is it affected by age? … The mode of collection was observation, which was carried out by observing the number of male and female friends on the Facebook profile. Lessons: –One major problem faced was obtaining the data. As many people had a very large number of friends, and Facebook does not have any built-in mechanism for filtering based on gender, the data collection had to be carried out manually by the people, which was not an easy task and also has a higher probability of miscounting. Another problem arose when trying to use a Facebook application to identify the stats from a person’s Facebook profile. It was discovered after a period of time that the application was not providing accurate results, and hence the data had to be collected again. The lesson learnt was that, next time, it would be more accurate and faster to obtain the data by writing code to query Facebook for it automatically 50
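The "write code instead of counting by hand" lesson might look like the sketch below; it assumes the friend list has already been exported to a CSV with a gender column (the export step, file name and column name are all hypothetical).

    # Sketch: count friends by gender from an exported CSV (hypothetical file).
    import csv
    from collections import Counter

    def gender_ratio(path):
        """Return the male:female ratio, or None if no females are listed."""
        counts = Counter()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row.get("gender", "unknown").strip().lower()] += 1
        males, females = counts["male"], counts["female"]
        return males / females if females else None

    # print(gender_ratio("friends_export.csv"))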

51 Example 6 The idea is to analyse the weekly trends which are prevailing on the popular social networking site Twitter.com and link the trends to the latest news, happenings or events which took place in the same date/week, thereby finding some common conduit which connects the two. It is driven by the rationale to observe how recent or important events affect the psyche of people, or what the vox populi is at a given time. Lessons: Psyche is much harder to characterize and collect data about than first thought. The amount and type of news and happenings is very large… needed to think about categories. 51

52 Example 7 The goal of my data collection exercise is to generate the average rainfall levels during the hurricane seasons from the years 2005-2010. Along with this data, we would generate data on the average height of the Hudson River during the hurricane season months. The data collection need is being driven by questions such as how likely it will be for the Hudson River to cause major flood damage during the hurricane season to the cities surrounding it. Lesson(s): some historical data is very hard to find. Places where average height is measured do not provide optimal comparison with where rainfall is measured. 52
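One way to handle the mismatched measurement locations and times in this example is to resample both series to a common (e.g. monthly) interval before comparing them; a hedged pandas sketch, with invented file, column and station names:

    # Sketch: align rainfall and river-height series measured at different
    # places/times by resampling both to monthly means (file and column
    # names are assumptions).
    import pandas as pd

    rain = pd.read_csv("albany_rainfall.csv", parse_dates=["date"], index_col="date")
    river = pd.read_csv("hudson_height_troy.csv", parse_dates=["date"], index_col="date")

    monthly = pd.DataFrame({
        "rain_mm":  rain["rain_mm"].resample("MS").mean(),
        "height_m": river["height_m"].resample("MS").mean(),
    }).dropna()

    print(monthly.corr())   # crude first look; says nothing about causation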

53 Example 8 Politicians and economists constantly talk about what is best for the country or the economy by proposing their own beliefs as various laws or actions. However, what is the true impact of these ideologies in the real world? Do these ideas really work? Can everything be solved by a tax cut? Spending increases? Changing the tax code? Which ideologies serve people best? Which ideologies work best under what conditions? A lot of theories are proposed, but how do we know which will work and which won’t? That is the goal of this proposal. The mode of collection will be a complex one. It would require assistance from government agencies, students, and a series of IT experts to manage the data. Within the last decade, government agencies and other organizations have attempted to make all information publicly available. States, countries, and public corporations have all published balance sheets for each year they have been in business, and corporations also have to file every quarter. Various investing institutes have been collecting information on the markets, both the indices and various stocks, and how they have been performing. However, most of this information is not publicly available. Out of what is available, information can only be retrieved for the last 10 years. In addition, most information is not in a format that can be digested automatically. Someone would have to do manual data entry. 53

54 Example 8 ctd The information that would have to be gathered would have to be no less than 100 years worth of information. It must contain the following information: –All indices’ prices, in intervals no greater than once per quarter, with the time and date of the prices listed. This must be for all indices for all nations part of the global economy. –All stock prices, in intervals no greater than once per quarter, with the time and date of all prices listed. This must be for all publicly traded stocks all over the globe. –All laws, passed or not, in all countries, states, and territories throughout the world. A time and date would have to be recorded with who voted for and against the law. –All world events would have to be recorded, with times and dates. Also, any response taken by anyone would have to be recorded as well. –All balance sheets from all countries, states, and territories would have to be recorded. 54

55 Data-Information-Knowledge Ecosystem (diagram labels: Data, Information, Knowledge; Producers, Consumers; Context, Presentation, Organization, Integration, Conversation, Creation, Gathering, Experience) 55

56 Information as a basis for data Don’t overthink this… data extracted from an information source, e.g. a web page, an image, a table If information is data in context (for human use) then there is data behind the information, e.g. name and address for a web page form, measure of intensity of light for an image, numerical values for a table But data can also be acquired from information with a different context, e.g. the number of people in an image that are wearing green 56
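A tiny illustration of the "data behind the information" point (the URL is a placeholder): pulling the numeric tables out of an HTML page, where the page is the information and the extracted table is the data.

    # Sketch: extract the data behind a web page's information
    # (requires pandas plus an HTML parser such as lxml; URL is hypothetical).
    import pandas as pd

    tables = pd.read_html("https://example.org/some_report.html")  # list of DataFrames
    df = tables[0]    # first <table> on the page
    print(df.head())  # the "data" that was embedded in the "information"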

57 Bias: To incline to one side; to give a particular direction to; to influence; to prejudice; to prepossess. [1913 Webster] A partiality that prevents objective consideration of an issue or situation [syn: prejudice, preconception] For acquisition – sampling bias is your enemy So let’s talk about it… 57

58 Provenance* Origin or source from which something comes, intention for use, who/what generated for, manner of manufacture, history of subsequent owners, sense of place and time of manufacture, production or discovery, documented in detail sufficient to allow reproducibility Internal? External? Mode?

59 When thinking of provenance Who? What? Where? Why? When? How? 59

60 Provenance in this data pipeline (VSTO figure, 2008) Provenance is metadata in context What context? –Who you are? –What you are asking? –What you will use the answer for?

61 At the least Keyword-value pair (contextual form of metadata – more on this next week) –Obs_start_time=“Mon 1 Sep 2014 19:22:30 EDT” is this okay? –Observer=“Peter Fox” – is this okay? –You get the idea? 61
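Why “Mon 1 Sep 2014 19:22:30 EDT” is only partly okay: a person can read it, but a machine needs a declared format and timezone before it can use it. A sketch of normalizing it to ISO 8601; the EDT-to-UTC-offset mapping is an assumption made explicit in the code.

    # Sketch: normalize a free-text keyword-value timestamp to ISO 8601.
    from datetime import datetime, timedelta, timezone

    raw = "Mon 1 Sep 2014 19:22:30 EDT"
    EDT = timezone(timedelta(hours=-4))   # assume EDT means UTC-4

    dt = datetime.strptime(raw.replace(" EDT", ""), "%a %d %b %Y %H:%M:%S")
    dt = dt.replace(tzinfo=EDT)
    print(dt.isoformat())   # 2014-09-01T19:22:30-04:00 -- unambiguous, sortable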

62 It is an entire ecosystem The elements that make up provenance are often scattered But these are what enable data scientists to explore/ confirm/ deny their data science investigations Accountability, Proof, Explanation, Justification, Verifiability, “Transparency”, Trust, Provenance, Identity

63 Provenance standards International standards: –ISO lineage (see reading) –PROV (reading) Old ones… –Open Provenance Model –Proof Markup Language –… 63
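For orientation, a minimal provenance record in the spirit of W3C PROV, using its core terms (Entity, Activity, Agent, used, wasGeneratedBy, wasAttributedTo) but written as plain Python dicts so no particular library or serialization is implied; all identifiers are invented.

    # Sketch of a PROV-style record (identifiers and values are made up).
    provenance = {
        "entity":   {"ex:tsi_composite_v2": {"prov:type": "dataset"}},
        "agent":    {"ex:pfox": {"prov:type": "prov:Person"}},
        "activity": {"ex:merge_run_42": {"prov:startTime": "2014-09-01T10:00:00-04:00"}},
        "wasGeneratedBy":  [("ex:tsi_composite_v2", "ex:merge_run_42")],
        "wasAttributedTo": [("ex:tsi_composite_v2", "ex:pfox")],
        "used":            [("ex:merge_run_42", "ex:acrim_level2"),
                            ("ex:merge_run_42", "ex:virgo_level2")],
    }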

64 Curation (partial) Consider the organization and presentation of your data Document what has been (and has not been) done Consider and address the provenance of the data to date, put yourself in the place of the next person Be as technology-neutral as possible Look to add information and metainformation 64

65 What comes to mind? Assignment 1 – propose two data collection exercises and perform a survey of data formats, metadata and application support for data management suitable for the data you intend to collect in two weeks (10% of grade) – see web page Note this is due NEXT week – why? 65

66 What is next Reading – see web page (Data Management, Provenance) Next week (Data formats, metadata standards, conventions, reading and writing data and information) 66

