1 Peter Fox Data Science – ITEC/CSCI/ERTH-6961 Week 10, November 6, 2012 Data Workflow Management, Data Preservation and Stewardship.


1 Peter Fox Data Science – ITEC/CSCI/ERTH-6961 Week 10, November 6, 2012 Data Workflow Management, Data Preservation and Stewardship

2 Contents Scientific Data Workflows Data Stewardship Summary Next class(es)

3 Scientific Data Workflow What it is Why you would use it Some more detail in the context of Kepler – www.kepler-project.org Some pointers to other workflow systems

4 What is a workflow? General definition: a series of tasks performed to produce a final outcome Scientific workflow – “data analysis pipeline” –Automate tedious jobs that scientists traditionally performed by hand for each dataset –Process large volumes of data faster than scientists could do by hand

5 Background: Business Workflows Example: planning a trip Need to perform a series of tasks: book a flight, reserve a hotel room, arrange for a rental car, etc. Each task may depend on the outcome of a previous task –Days you reserve the hotel depend on days of the flight –If the hotel has shuttle service, you may not need to rent a car E.g. tripit.com

6 What about scientific workflows? Perform a set of transformations/operations on a scientific dataset Examples –Generating images from raw data –Identifying areas of interest in a large dataset –Classifying sets of objects –Querying a web service for more information on a set of objects –Many others…

7 More on Scientific Workflows Formal models of the flow of data among processing components May be simple and linear or more complex Can process many data types: –Archived data –Streaming sensor data –Images (e.g., medical or satellite) –Simulation output –Observational data
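The “formal model of the flow of data among processing components” can be sketched as a small dependency graph executed in topological order. The step names and functions below are illustrative only, not from any particular workflow system:

```python
from graphlib import TopologicalSorter

# Hypothetical mini-workflow: each step maps (function, list of predecessor
# steps). A step's function receives a list of its predecessors' outputs.
steps = {
    "read":      (lambda inputs: [1, 5, 2, 5],          []),
    "dedupe":    (lambda inputs: sorted(set(inputs[0])), ["read"]),
    "summarize": (lambda inputs: sum(inputs[0]),         ["dedupe"]),
}

def run_workflow(steps):
    """Execute steps in dependency order, passing outputs downstream."""
    deps = {name: preds for name, (_, preds) in steps.items()}
    results = {}
    for name in TopologicalSorter(deps).static_order():
        func, preds = steps[name]
        results[name] = func([results[p] for p in preds])
    return results

print(run_workflow(steps)["summarize"])  # prints 8 (= 1 + 2 + 5)
```

Each “component” only sees its inputs and produces one output, which is what lets a workflow engine reorder, cache, or distribute steps without changing the science code.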

8 Challenges Questions: –What are some challenges for scientists implementing scientific workflows? –What are some challenges to executing these workflows? –What are the limitations of writing a program?

9 Challenges Mastering a programming language Visualizing the workflow Sharing/exchanging workflows Formatting issues Locating datasets, services, or functions

10 Kepler Scientific Workflow Management System Graphical interface for developing and executing scientific workflows Scientists can create workflows by dragging and dropping Automates low-level data processing tasks Provides access to data repositories, compute resources, workflow libraries

11 Benefits of Scientific Workflows Documentation of aspects of analysis Visual communication of analytical steps Ease of testing/debugging Reproducibility Reuse of part or all of a workflow in a different project

12 Additional Benefits Integration of multiple computing environments Automated access to distributed resources via web services and Grid technologies System functionality to assist with integration of heterogeneous components

13 Why not just use a script? A script does not specify low-level task scheduling and communication May be platform-dependent Can’t be easily reused May not have sufficient documentation to be adapted for another purpose

14 Why is a GUI useful? No need to learn a programming language Visual representation of what the workflow does Allows you to monitor workflow execution Enables user interaction Facilitates sharing of workflows

15 The Kepler Project Goals –Produce an open-source scientific workflow system that enables scientists to design and execute scientific workflows –Support scientists in a variety of disciplines e.g., biology, ecology, astronomy –Important features access to scientific data flexible means for executing complex analyses enable use of Grid-based approaches to distributed computation semantic models of scientific tasks effective UI for workflow design

16 Usage statistics Source code access –154 people accessed source code –30 members have write permission Projects using Kepler: SEEK (ecology), SciDAC (molecular biology, …), CPES (plasma simulation), GEON (geosciences), CiPRes (phylogenetics), CalIT2, ROADnet (real-time data), LOOKING (oceanography), CAMERA (metagenomics), Resurgence (computational chemistry), NORIA (ocean observing CI), NEON (ecology observing CI), ChIP-chip (genomics), COMET (environmental science), Cheshire Digital Library (archival), digital preservation (DIGARCH), cell biology (Scripps), DART (X-ray crystallography), Ocean Life, Assembling the Tree of Life, Processing Phylodata (pPOD), FermiLab (particle physics) Kepler downloads: total = 9204, beta = 6675 (chart: Windows vs. Macintosh downloads)

17 Distributed execution Opportunities for parallel execution –Fine-grained parallelism –Coarse-grained parallelism ‘Trivially parallel’ problems have few or no cycles and limited dependencies among components; many science problems fit this mold –parameter sweeps, iteration of stochastic models Current ‘plumbing’ approaches to distributed execution –the workflow acts as a controller: stages data resources, writes job description files, controls execution of jobs on nodes –requires expert understanding of the Grid system Scientists need to focus on just the computations –try to avoid plumbing as much as possible
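The ‘trivially parallel’ parameter-sweep pattern described above can be sketched in a few lines: each point in the sweep is independent, so runs can simply be mapped over a worker pool. The stochastic model here is an invented stand-in, not one of the science codes named on the slide:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def run_model(seed, n=1000):
    """One run of a hypothetical stochastic model: estimate the mean of
    uniform(0, 1) draws. Purely illustrative."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# A parameter sweep is 'trivially parallel': there are no dependencies
# between runs, so each seed can execute on its own worker.
seeds = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    estimates = list(pool.map(run_model, seeds))

print(all(0.4 < e < 0.6 for e in estimates))  # each run lands near 0.5
```

This is exactly the shape of computation where a workflow engine can hide the Grid “plumbing” (staging, job files, node control) behind a single map-like component.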

18 Distributed Kepler –Higher-order component for executing a model on one or more remote nodes –Master and slave controllers handle setup and communication among nodes, and establish data channels –Extremely easy for a scientist to utilize requires no knowledge of grid computing systems (Diagram: master–slave controller with IN/OUT data channels)

19 Data Management Need for integrated management of external data –EarthGrid access is partial, needs refactoring –Include other data sources, such as JDBC, OPeNDAP, etc. –Data needs to be a first-class object in Kepler, not just represented as an actor –Need support for data versioning to support provenance Need to pass data by reference –workflows contain large data tokens (100s of megabytes) –intelligent handling of unique identifiers (e.g., LSID) (Diagram: a by-value token {1,5,2} vs. a by-reference token ref-276)
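The slide’s by-value token ({1,5,2}) versus by-reference token (ref-276) distinction can be sketched as follows. The store and token classes are invented for illustration and are not Kepler’s API:

```python
# Sketch of pass-by-reference tokens for large data, per the slide's
# {1,5,2} vs. ref-276 example. Names are illustrative only.
class DataStore:
    """Stands in for an external repository holding large datasets."""
    def __init__(self):
        self._objects = {}
        self._next_id = 0

    def put(self, data):
        self._next_id += 1
        ref = f"ref-{self._next_id}"
        self._objects[ref] = data
        return ref

    def get(self, ref):
        return self._objects[ref]

class RefToken:
    """Flows through the workflow carrying only an identifier, so
    100s-of-MB payloads are not copied between actors."""
    def __init__(self, store, data):
        self.store = store
        self.ref = store.put(data)

    def resolve(self):
        return self.store.get(self.ref)

store = DataStore()
token = RefToken(store, [1, 5, 2])  # the payload stays in the store
print(token.ref)                    # prints ref-1
print(token.resolve())              # prints [1, 5, 2]
```

With globally unique identifiers (e.g., LSIDs) in place of the local `ref-N` strings, the same token can also serve as a provenance handle across workflow runs.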

20 Science Environment for Ecological Knowledge SEEK is an NSF-funded, multidisciplinary research project to facilitate … Access to distributed ecological, environmental, and biodiversity data –Enable data sharing & reuse –Enhance data discovery at global scales Scalable analysis and synthesis –Taxonomic, spatial, temporal, conceptual integration of data, addressing data heterogeneity issues –Enable communication and collaboration for analysis –Enable reuse of analytical components –Support scientific workflow design and modeling

21 SEEK data access, analysis, mediation Data Access (EcoGrid) –Distributed data network for environmental, ecological, and systematics data –Interoperate diverse environmental data systems Workflow Tools (Kepler) –Problem-solving environment for scientific data analysis and visualization – “scientific workflows” Semantic Mediation (SMS) –Leverage ontologies for “smart” data/component discovery and integration

22 Managing Data Heterogeneity Data comes from heterogeneous sources –Real-world observations –Spatial-temporal contexts –Collection/measurement protocols and procedures –Many representations for the same information (count, area, density) –Data, Syntax, Schema, Semantic heterogeneity Discovery and “synthesis” (integration) performed manually –Discovery often based on intuitive notion of “what is out there” –Synthesis of data is very time consuming, and limits use

23 Scientific workflow systems support data analysis (Kepler screenshot)

24 A simple Kepler workflow (T. McPhillips) Composite component (sub-workflow) Loops are often used in scientific workflows, e.g. in genomics and bioinformatics (collections of data, nested data, statistical regressions, …)

25 A simple Kepler workflow (T. McPhillips) Lists Nexus files to process (project), reads text files, parses Nexus format, and draws phylogenetic trees PhylipPars infers trees from discrete, multi-state characters; the workflow runs PhylipPars iteratively to discover all of the most parsimonious trees UniqueTrees discards redundant trees in each collection

26 A simple Kepler workflow An example workflow run, executed as a Dataflow Process Network
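A Dataflow Process Network can be approximated with Python generators: each “actor” pulls tokens from its upstream neighbor as they become available, so execution is driven by data availability rather than a central script. The actors below are invented for illustration:

```python
# Minimal dataflow-process-network flavor using generators. Each "actor"
# consumes upstream tokens one at a time; nothing runs until the sink
# demands output. Illustrative only.
def source(values):
    for v in values:
        yield v

def scale(upstream, factor):
    for token in upstream:
        yield token * factor

def clip(upstream, limit):
    for token in upstream:
        yield min(token, limit)

# Wire actors into a network; only the wiring, not execution, happens here.
network = clip(scale(source([1, 5, 2]), factor=10), limit=25)
result = list(network)   # draining the sink drives the whole network
print(result)            # prints [10, 25, 20]
```

Real engines add buffering, firing rules, and parallel scheduling, but the token-streaming shape is the same.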

27 Semantic Mediation (SMS) motivation Scientific workflow life-cycle –Resource discovery discover relevant datasets discover relevant actors or workflow templates –Workflow design and configuration data → actor (data binding) data → data (data integration / merging / interlinking) actor → actor (actor / workflow composition) Challenge: do all this in the presence of … –100s of workflows and templates –1000s of actors (e.g. actors for web services, data analytics, …) –10,000s of datasets –1,000,000s of data items –… highly complex, heterogeneous data –price to pay for these resources: $$$ (lots) –scientist’s time wasted: priceless!

28 Approach & SMS capabilities: Semantic Annotation Annotations “connect” resources to ontologies –Conceptually describe a resource and/or its “data schema” –Annotations provide the means for ontology-based discovery, integration, … (Capability cycle: Ontologies – Semantic Annotation – Resource Discovery – Workflow Validation – Resource Integration – Workflow Elaboration – Iterative Development)

29 “Hybrid” types: semantic + structural typing Structural types: given a structural type language S, datasets, inputs, and outputs can be assigned structural types in S e.g. S: SpeciesData(site, day, spp, occ) Semantic types: given an ontology language O (e.g., OWL-DL), datasets, inputs, and outputs can be assigned ontology types in O e.g. O: Observation ⊓ ∃obsProperty.SpeciesOccurrence Two connected ports may be semantically compatible but structurally incompatible Semantic & structural types can be combined using logic constraints, e.g. Φ := (∀ site, day, sp, occ) SpeciesData(site, day, sp, occ) → (∃ y) Observation(y) ∧ obsProp(y, occ) ∧ SpeciesOccurrence(occ)
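The “semantically compatible but structurally incompatible” situation on this slide can be sketched by checking the two type systems separately: semantic compatibility via ontology subsumption, structural compatibility via signature match. The tiny ontology and port signatures below are invented for illustration:

```python
# Sketch of hybrid (semantic + structural) port-compatibility checking.
# The ontology and signatures are invented, not from SEEK's ontologies.
ONTOLOGY = {  # child -> parent (subclass edges)
    "SpeciesOccurrence": "Observation",
    "Observation": "Thing",
}

def is_subsumed(cls, ancestor):
    """True if cls is ancestor or a descendant of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ONTOLOGY.get(cls)
    return False

def ports_compatible(out_port, in_port):
    """Check the two dimensions independently."""
    semantic_ok = is_subsumed(out_port["semantic"], in_port["semantic"])
    structural_ok = out_port["structural"] == in_port["structural"]
    return semantic_ok, structural_ok

a1_out = {"semantic": "SpeciesOccurrence",
          "structural": ("site", "day", "spp", "occ")}
a2_in  = {"semantic": "Observation",
          "structural": ("site", "occ")}

print(ports_compatible(a1_out, a2_in))  # prints (True, False)
```

The (True, False) outcome is exactly the slide’s case: the data means the right thing, but an adapter is needed to reshape it.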

30 Semantic Type Annotation in Kepler Component input and output port annotation –Each port can be annotated with multiple classes from multiple ontologies –Annotations are stored within the component metadata

31 Component Annotation and Indexing Component Annotations –New components can be annotated and indexed into the component library (e.g., specializing generic actors) –Existing components can also be revised, annotated, and indexed (hiding previous versions)

32 Approach & SMS capabilities: Resource Discovery Ontology-based “smart” search –Find components by semantic types –Find components by input/output semantic types –Ontology-based query rewriting for discovery/integration Joint work with the GEON project (see SSDBM-04, SWDB-04)

33 Smart Search Find a component (here: an actor) in different locations (“categories”) … based on the semantic annotation of the component (or its ports) (Screenshots: browse for components; search by component name; search by category / keyword)

34 Searching in context Search for components with compatible input/output semantic types –… searches over actor library –… applies subsumption checking on port annotations
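Subsumption checking over port annotations, as described above, can be sketched as a query over a small actor library: return every actor whose input port accepts the semantic type we want to feed it. The hierarchy and actor names are invented:

```python
# Illustrative "search in context": find actors whose input semantic type
# subsumes (accepts) the data type at hand. All names are invented.
HIERARCHY = {  # child -> parent
    "SpeciesOccurrence": "Observation",
    "TemperatureReading": "Observation",
    "Observation": "Data",
}

def accepts(port_type, data_type):
    """True if data_type equals port_type or is one of its descendants."""
    while data_type is not None:
        if data_type == port_type:
            return True
        data_type = HIERARCHY.get(data_type)
    return False

library = {  # actor name -> semantic type of its input port
    "PlotMap": "Observation",
    "InferTree": "SpeciesOccurrence",
    "Summarize": "Data",
}

matches = sorted(a for a, t in library.items()
                 if accepts(t, "SpeciesOccurrence"))
print(matches)  # prints ['InferTree', 'PlotMap', 'Summarize']
```

All three actors match because `SpeciesOccurrence` data satisfies ports typed at the same class or any ancestor; an actor typed at a sibling class (say, `TemperatureReading`) would not.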

35 Approach & SMS capabilities: Workflow Validation Workflow validation and analysis –Check that workflows are semantically & structurally well-typed –Infer semantic type annotations of derived data (i.e., type inference) An initial approach and prototype based on mapping composition (see QLQP-05) –User-oriented provenance Collect & query the data lineage of workflow runs (see IPAW-06)

36 Workflow validation in Kepler Statically perform semantic and structural type checking Navigate errors and warnings within the workflow –Search for and insert “adapters” to fix (structural and semantic) errors …

37 Approach & SMS capabilities: Resource Integration Integrating and transforming data –Merge (“smart union”) datasets –Find mappings between data schemas for transformation data binding, component connections (see DILS-04)

38 Smart (Data) Integration: Merge Discover data of interest … connect to the merge actor … “compute merge” –align attributes via annotations –open a dialog for user refinement –store the merge mapping in MoML … enjoy! –… your merged dataset –almost; it can be much more complicated

39 Under the hood of “Smart Merge” … Exploits semantic type annotations and ontology definitions to find mappings between sources (in the slide’s example, attributes from the two source tables align to shared Site and Biomass concepts) Executing the merge actor results in an integrated data product (via “outer union”): rows from both sources appear over the union of the aligned attributes
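The “outer union” at the heart of the merge can be sketched as follows. The row values echo the slide’s example tables, but the attribute mapping is hand-written here rather than derived from semantic annotations:

```python
# Sketch of the "smart merge" outer union. In the real system, `mapping`
# would be inferred from semantic annotations; here it is written by hand.
source1 = [{"a1": "a", "a3": 5.0, "a4": 10},
           {"a1": "b", "a3": 6.0, "a4": 11}]
source2 = [{"a5": 0.1, "a6": "a"},
           {"a5": 0.2, "a6": "c"},
           {"a5": 0.3, "a6": "d"}]
mapping = {"a6": "a1", "a5": "a3"}  # both pairs mean Site / Biomass

def outer_union(rows1, rows2, mapping):
    """Combine rows over the union of attributes, renaming rows2's
    columns onto rows1's via the annotation-derived mapping."""
    merged = [dict(r) for r in rows1]
    for row in rows2:
        merged.append({mapping.get(k, k): v for k, v in row.items()})
    return merged

result = outer_union(source1, source2, mapping)
print(len(result))        # prints 5: all rows from both sources survive
print(result[2]["a1"])    # prints a: source2's a6 landed in column a1
```

Attributes with no counterpart (like `a4`) simply stay absent in the other source’s rows, which is what distinguishes an outer union from a join.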

40 Approach & SMS capabilities: Workflow Elaboration Workflow design support –(Semi-)automatically combine resource discovery, integration, and validation –Abstract → executable workflow (automated scientific-workflow refinement) –… ongoing work!

41 Initial Work on a Provenance Framework Provenance –Track origin and derivation information about scientific workflows, their runs, and derived information (datasets, metadata, …) Need for provenance –Association of process and results –reproduce results –“explain & debug” results (via lineage tracing, parameter settings, …) –optimize: “smart re-runs” Types of provenance information: –Data provenance intermediate and end results, including files and database references –Process (= workflow instance) provenance keep the workflow definition with the data and parameters used in the run –Error and execution logs –Workflow design provenance (quite different) workflow design is a (little supported) process (art, magic, …); edit history comes for free via CVS, but individual & collaborative workflow design needs more “structure” (e.g. templates)

42 Kepler Provenance Recording Utility Parametric and customizable –Different report formats –Variable levels of detail Verbose-all, verbose-some, medium, on error –Multiple cache destinations Saves information on –User name, Date, Run, etc…
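Run-level provenance capture of the kind this utility records (user name, date, step, parameters, output) can be sketched with a decorator. This is a generic illustration, not the Kepler utility’s actual format:

```python
import datetime
import functools
import os

# Hedged sketch of run-provenance capture; record layout is invented.
PROVENANCE_LOG = []

def record_provenance(func):
    """Wrap a workflow step so every call is logged with its context."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        PROVENANCE_LOG.append({
            "user": os.environ.get("USER", "unknown"),
            "date": datetime.datetime.now().isoformat(),
            "step": func.__name__,
            "inputs": {"args": args, "kwargs": kwargs},
            "output": result,
        })
        return result
    return wrapper

@record_provenance
def normalize(values):
    total = sum(values)
    return [v / total for v in values]

normalize([1, 1, 2])
print(PROVENANCE_LOG[0]["step"])    # prints normalize
print(PROVENANCE_LOG[0]["output"])  # prints [0.25, 0.25, 0.5]
```

Because inputs and outputs are both captured, a record like this is enough to reproduce a result or trace its lineage back through earlier steps.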

43 Provenance: Possible Next Steps Provenance Meeting: Last week at SDSC –Deciding on terms and definitions –.kar file generation, registration and search for provenance information –Possible data/metadata formats –Automatic report generation from accumulated data –A GUI to keep track of the changes –Adding provenance repositories –A relational schema for the provenance info in addition to the existing XML

44 Some other workflow systems SCIRun SciFlo Triana Taverna Pegasus Some commercial tools: –Windows Workflow Foundation –Mac OS X Automator http://www.isi.edu/~gil/AAAI08TutorialSlides/5-Survey.pdf See reading for this week

45 Data Stewardship Putting a number of data life-cycle and management aspects together Keep the ideas in mind as you complete your assignments Why it is important Some examples

46 Why it is important 1976 NASA Viking mission to Mars (A. Hesseldahl, “Saving Dying Data,” Forbes, Sep. 12, 2002. [Online]. Available: http://www.forbes.com/2002/09/12/0912data_print.html) 1986 BBC Digital Domesday (A. Jesdanun, “Digital memory threatened as file formats evolve,” Houston Chronicle, Jan. 16, 2003. [Online]. Available: http://www.chron.com/cs/CDA/story.hts/tech/1739675) R. Duerr, M. A. Parsons, R. Weaver, and J. Beitler, “The International Polar Year: Making data available for the long term,” in Proc. Fall AGU Conf., San Francisco, CA, Dec. 2004. [Online]. Available: ftp://sidads.colorado.edu/pub/ppp/conf_ppp/Duerr/The_International_Polar_Year:_Making_Data_and_Information_Available_for_the_Long_Term.ppt

47 At the heart of it Inability to read the underlying sources, e.g. the data formats, metadata formats, knowledge formats, etc. Inability to know the inter-relations, assumptions, and missing information We’ll look at a (data) use case for this shortly But first we will look at the what, how, and who in terms of the full life cycle

48 What to collect? Documentation –Metadata –Provenance Ancillary information Knowledge

49 Who does this? Roles: –Data creator –Data analyst –Data manager –Data curator

50 How it is done Opening and examining Archive Information Packages Reviewing data management plans and documentation Talking (!) to the people: –Data creator –Data analyst –Data manager –Data curator Sometimes, reading the data and code

51 Data–Information–Knowledge Ecosystem (Diagram: Data → Information → Knowledge, flowing between Producers and Consumers, via processes such as Gathering, Creation, Organization, Integration, Presentation, Context, Conversation, and Experience)

52 Acquisition Learn / read what you can about the developer of the means of acquisition –Documents may not be easy to find –Remember bias!!! Document things as you go Have a checklist (see the Data Management list) and review it often

53 (Figure-only slide: VSTO et al.; Fox, 2008-06-02)

54 Curation (partial) Consider the organization and presentation of the data Document what has been (and has not been) done Consider and address the provenance of the data to date – you are now THE next person Be as technology-neutral as possible Look to add information and meta-information

55 Preservation Usually refers to the full life cycle Archiving is a component Stewardship is one act of preservation The intent is that ‘you can open it any time in the future’ and that ‘it will be there’ This involves steps that may not be conventionally thought of Think 10, 20, 50, 200 years… looking historically gives some guide to future considerations

56 Some examples and experience NASA, NOAA: http://wiki.esipfed.org/index.php/Preservation_and_Stewardship Library community Note: –Mostly in relation to publications, books, etc., but some for data –Knowledge is in publications, but the structured form is meant for humans, not computers, despite advances in text analysis –Very little for the type of knowledge we are considering: in machine-accessible form

57 Back in the day... NASA SEEDS Working Group on Data Lifecycle Second Workshop Report o http://esdswg.gsfc.nasa.gov/pdf/W2_lcbo_bothwell.pdf o Many LTA recommendations Earth Sciences Data Lifecycle Report o http://esdswg.gsfc.nasa.gov/pdf/lta_prelim_rprt2.pdf o Many lessons learned from USGS experience, plus some recommendations SEEDS Final Report (2003) - Section 4 o http://esdswg.gsfc.nasa.gov/pdf/FinRec.pdf o Final recommendations vis a vis data lifecycle MODIS Pilot Project GES DISC, MODAPS, NOAA/CLASS, ESDIS effort Transferred some MODIS Level 0 data to CLASS

58 Mostly Technical Issues Data Preservation o Bit-level integrity o Data readability Documentation Metadata Semantics Persistent Identifiers Virtual Data Products Lineage Persistence Required ancillary data Applicable standards
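Bit-level integrity, the first technical issue listed, is typically verified with stored checksums (“fixity” checks): record a digest at ingest, recompute on every access or migration, and compare. A minimal sketch, with the algorithm choice (SHA-256) being ours rather than any agency’s mandate:

```python
import hashlib

# Sketch of a fixity check for bit-level integrity. Payloads are invented.
def fixity(data: bytes) -> str:
    """Digest recorded at ingest and recomputed on each later access."""
    return hashlib.sha256(data).hexdigest()

archived = b"granule payload bytes"
stored_checksum = fixity(archived)          # recorded at archive time

retrieved = b"granule payload bytes"        # same bits came back
intact = fixity(retrieved) == stored_checksum

corrupted = b"granule payload byteZ"        # one flipped byte
damaged_detected = fixity(corrupted) != stored_checksum

print(intact, damaged_detected)  # prints True True
```

Note that fixity only guarantees the bits; data *readability* (the next item on the slide) additionally requires format documentation, which is where the HDF map work below comes in.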

59 Mostly Non-Technical Issues Policy (constrained by money…) Front end of the lifecycle o Long-term planning, data formats, documentation... Governance and policy Legal requirements Archive to archive transitions Money (intertwined with policy) Cost-benefit trades Long-term needs of NASA Science Programs User input o Identifying likely users Levels of service Funding source and mechanism

60 HDF4 Format "Maps" for Long Term Readability C. Lynnes, GES DISC R. Duerr and J. Crider, NSIDC M. Yang and P. Cao, The HDF Group Use case: a real live one; deals mostly with structure and (some) content HDF=Hierarchical Data Format NSIDC=National Snow and Ice Data Center GES=Goddard Earth Science DISC=Data and Information Service Center

61 In the year 2025... A user of HDF-4 data will run into the following likely hurdles: The HDF-4 API and utilities are no longer supported... o...now that we are at HDF-7 The archived API binary does not work on today's OS's o...like Android 9.1 The source does not compile on the current OS o...or is it the compiler version, gcc v. 12.x? The HDF spec is too complex to write a simple read program... o...without re-creating much of the API What to do?

62 HDF Mapping Files Concept: create text-based “maps” of the HDF-4 file layouts while we still have a viable HDF-4 API (i.e., now) XML Stored separately from, but close to, the data files Includes –internal metadata –variable info –chunk-level info (byte offsets and lengths, linked blocks, compression information) Task funded by the ESDIS project The HDF Group, NSIDC and GES DISC
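The payoff of recording byte offsets and lengths in an external map is that a variable can be read back years later with no format-specific API at all. A sketch with an invented file layout and an assumed dtype field (real map files describe chunking and compression too):

```python
import os
import struct
import tempfile

# Sketch of reading a variable using only offset/length/dtype from an
# external "map", no HDF-4 API required. The layout below is invented.
payload = struct.pack("<4i", 0, 0, 180, 360)            # the "variable"
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"HEADERxx" + payload)                       # 8-byte header
    path = f.name

# In practice these three facts would come from the XML map file.
var_map = {"offset": 8, "length": 16, "dtype": "<4i"}

with open(path, "rb") as f:
    f.seek(var_map["offset"])
    raw = f.read(var_map["length"])
values = struct.unpack(var_map["dtype"], raw)
os.unlink(path)
print(values)  # prints (0, 0, 180, 360)
```

This is exactly the “simple read program” the year-2025 scenario asks for: a few dozen lines instead of re-creating much of the API.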

63 Map sample (extract) (XML listing not preserved in the transcript; the visible values were 0, 0, 180, 360)

64 Status and Future Status Map creation utility (part of HDF) Prototype read programs o C o Perl Paper in TGRS special issue Inventory of HDF-4 data products within EOSDIS Possible Future Steps Revise XML schema Revise map utility and add to HDF baseline Implement map creation and storage operationally o e.g., add to ECS or S4PA metadata files

65 7th Joint ESDSWG meeting, October 22, Philadelphia, PA Data Lifecycle Workshop sponsored by the Technology Infusion Working Group NASA/MODIS Contextual Info

66 Presented by R. Duerr at the Summer Institute on Data Curation, June 2-5, 2008, Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign Instrument/sensor characteristics

67 Processing Algorithms & Scientific Basis

68 Ancillary Data

69 Processing History including Source Code

70 Quality Assessment Information

71 Validation Information

72 Other Factors that can Influence the Record

73 Bibliography

74 Contextual Information: Instrument/sensor characteristics, including pre-flight or pre-operational performance measurements (e.g., spectral response, noise characteristics, etc.) Instrument/sensor calibration data and method Processing algorithms and their scientific basis, including a complete description of any sampling or mapping algorithm used in creation of the product (e.g., contained in peer-reviewed papers, in some cases supplemented by thematic information introducing the data set or derived product) Complete information on any ancillary data or other data sets used in generation or calibration of the data set or derived product

75 Contextual Information (continued): Processing history, including versions of processing source code corresponding to versions of the data set or derived product held in the archive Quality assessment information Validation record, including identification of validation data sets Data structure and format, with definitions of all parameters and fields In the case of earth-based data, station location and any changes in location, instrumentation, controlling agency, surrounding land use, and other factors which could influence the long-term record A bibliography of pertinent technical notes and articles, including refereed publications reporting on research using the data set Information received back from users of the data set or product

76 However… Even groups like NASA do not have a governance model for this work Governance is the activity of governing. It relates to decisions that define expectations, grant power, or verify performance. It consists either of a separate process or of a specific part of management or leadership processes. Sometimes people set up a government to administer these processes and systems. (Wikipedia)

77 Who cares… Stakeholders: –NASA, for the integrity of their data holdings (is it their responsibility?) –The public, for value for and return on investment –Scientists, for future use (intended and unintended) –Historians

78 Library community OAIS – Open Archival Information System, http://en.wikipedia.org/wiki/Open_Archival_Information_System OAI (PMH and ORE) – Open Archives Initiative (Protocol for Metadata Harvesting, and Object Reuse and Exchange), http://www.openarchives.org/ Do some reading on your own for this

79 Metadata Standards – PREMIS (Preservation Metadata Interchange Standard) Provides a core preservation metadata set with broad applicability across the digital preservation community Developed by an OCLC- and RLG-sponsored international working group –Representatives from libraries, museums, archives, government, and the private sector Based on the OAIS reference model

80 Metadata Standards – PREMIS (continued) Maintained by the Library of Congress Editorial board with international membership User community consulted on changes through the PREMIS Implementers Group Version 1 was released in June 2005 Version 2 was just released

81 PREMIS Entity-Relationship Diagram Intellectual Entities – “a coherent set of content that is reasonably described as a unit”, for example a web site, data set, or collection of data sets Objects – “a discrete unit of information in digital form”, for example a data file Rights – “assertions of one or more rights or permissions pertaining to an object or an agent”, e.g. copyright notice, legal statute, deposit agreement Events – “an action that involves at least one object or agent known to the preservation repository”, e.g. created, archived, migrated Agents – “a person, organization, or software program associated with preservation events in the life of an object”, e.g. Dr. Spock donated it

82 PREMIS – Types of Objects Representation – “the set of files needed for a complete and reasonable rendition of an Intellectual Entity” File Bitstream – “contiguous or non-contiguous data within a file that has meaningful common properties for preservation purposes”

83 Information from users Data errors found Quality updates Things that need further explanation Metadata updates/additions? Community-contributed metadata????

84 Back to why you need to… E-science uses data, and it needs to be around when what you create goes into service and you go on to something else That’s why someone on the team must address the life cycle (data, information and knowledge) and work with other team members to implement organizational, social and technical solutions to the requirements

85 (Digital) Object Identifiers Object is used here so as not to pre-empt an implementation, e.g. resource, sample, data, catalog –DOI – http://www.doi.org/, e.g. 10.1007/s12145-008-0001-8 – visit crossref.org and see where this leads you –URI – http://en.wikipedia.org/wiki/Uniform_Resource_Identifier, e.g. http://www.springerlink.com/content/0322621781338n85/fulltext.pdf –XRI (from OASIS) – http://www.oasis-open.org/committees/xri
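Identifier schemes like DOI are designed to be resolvable through a proxy: a bare DOI such as the one above becomes a working URL once prefixed with the resolver. A sketch of normalizing identifier strings to resolver URLs — the doi.org proxy is real, but the normalization rules here are our own simplification:

```python
# Sketch: normalize identifier strings to resolvable URLs. Only two DOI
# spellings are handled; real identifier handling has many more cases.
def to_resolver_url(identifier):
    identifier = identifier.strip()
    if identifier.startswith("10."):      # bare DOI, e.g. 10.1007/...
        return "https://doi.org/" + identifier
    if identifier.startswith("doi:"):     # prefixed DOI form
        return "https://doi.org/" + identifier[4:]
    return identifier                     # assume it is already a URI

print(to_resolver_url("10.1007/s12145-008-0001-8"))
# prints https://doi.org/10.1007/s12145-008-0001-8
```

The indirection is the preservation point: the landing URL can change over decades while the identifier, and hence every citation of it, stays fixed.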

86 Versioning Is a key enabler of good preservation Is a tricky trap for those who do not conform to written guidelines for versioning http://en.wikipedia.org/wiki/Revision_control

87 Summary The progression toward more formal encoding of science workflows – in our context, data-science workflows (dataflows) – is substantially improving data management Awareness of preservation and stewardship for valuable data and information resources is receiving renewed attention in the digital age Workflows are a potential solution to the data stewardship challenge Which brings us to the final assignment

88 Final assignment See web (10% of grade).

89 What is next Next week – Data quality, uncertainty, bias… Week after – Webs of Data and Data on the Web, the Deep Web, Data Discovery, Data Integration Reading for this week – see web site Last class is week 13, Nov. 27 – project presentations (and final assignment due)

