Assessment of Metadata Remediation Efforts

Jenn Riley
Metadata Librarian, Indiana University Digital Library Program
General approach to assessment

Traditional library approach:
- Measure inter-indexer consistency (see the sketch below)
- Count errors
- Record focus

For this project, I'll argue for a:
- User & task focus
- Definition of accuracy in terms of functionality achieved
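As a concrete illustration of the traditional, record-focused measures above, here is a minimal Python sketch of inter-indexer consistency, computed as raw percent agreement and Cohen's kappa. The two term lists are hypothetical stand-ins for subject terms assigned independently by two catalogers to the same records.

# Minimal sketch: inter-indexer consistency as percent agreement
# and Cohen's kappa. The term lists are hypothetical examples.
from collections import Counter

indexer_a = ["music", "dance", "music", "theater", "music", "dance"]
indexer_b = ["music", "music", "music", "theater", "dance", "dance"]

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement: probability both indexers pick the same term at random.
    expected = sum(counts_a[t] * counts_b[t] for t in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"agreement: {percent_agreement(indexer_a, indexer_b):.2f}")
print(f"kappa:     {cohens_kappa(indexer_a, indexer_b):.2f}")

Kappa discounts agreement expected by chance, which matters when one term dominates the vocabulary; raw agreement alone overstates consistency in that case.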
Why that approach?

- Functionalities we're thinking about aren't supported well by traditional library structures and vocabularies:
  - High-level browsing
  - Genre access
- "Ground truth" doesn't really exist for things like "subject" and "genre"
- We need to figure out where human effort is most useful
One size does not fit all for metadata enhancement activities

A rough spectrum, from factual (easy?) to interpretive (hard?):

- Dates (syntax normalization; see the sketch below)
- Resource Type (a la DCMI Type)
- Names (syntax normalization, plus ?)
- Subject
- Genre (form? style?)

What about geography?
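To make the factual end of the spectrum concrete, here is a minimal sketch of date syntax normalization to W3CDTF (YYYY or YYYY-MM-DD). The patterns and sample strings are illustrative assumptions, not a complete remediation rule set; anything that doesn't match is left for human review.

# Minimal sketch of the "factual / easy?" end of the spectrum:
# normalizing free-text date strings to W3CDTF. Patterns and
# sample values are hypothetical, not a full rule set.
import re

MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june", "july",
     "august", "september", "october", "november", "december"])}

def normalize_date(value):
    v = value.strip().lower()
    m = re.fullmatch(r"(\d{4})", v)                              # "1923"
    if m:
        return m.group(1)
    m = re.fullmatch(r"([a-z]+)\.?\s+(\d{1,2}),?\s+(\d{4})", v)  # "June 4, 1923"
    if m and m.group(1) in MONTHS:
        return f"{m.group(3)}-{MONTHS[m.group(1)]:02d}-{int(m.group(2)):02d}"
    m = re.fullmatch(r"ca\.?\s*(\d{4})", v)                      # "ca. 1923": keep year
    if m:
        return m.group(1)
    return None  # leave unparseable values for human review

for raw in ["1923", "June 4, 1923", "ca. 1923", "early twenties"]:
    print(f"{raw!r} -> {normalize_date(raw)}")

Returning None for the unmatched cases (rather than guessing) is one way to route records toward the human effort the previous slide asks about.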
Very preliminary possible metrics

- Syntax normalization: what functionality can we provide on the records after syntax normalization?
- More interpretive tasks: do the resources retrieved in response to a query match user expectations?
- All: how confident are we in the results of the automatic enhancement?

We need actual numbers at some point:
- To figure out how well services will work
- Think about pre-human involvement vs. post-human involvement (see the sketch below)
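One simple way to get "actual numbers" on pre- vs. post-human involvement is to track how often a human reviewer accepts the machine's proposed value, broken down by the enhancer's own confidence score. Everything below (record IDs, scores, decisions) is hypothetical.

# Minimal sketch: acceptance rate of automatic enhancements,
# split by machine confidence. All data below is hypothetical.
proposals = [
    # (record id, machine confidence, accepted by human reviewer?)
    ("rec1", 0.95, True),
    ("rec2", 0.91, True),
    ("rec3", 0.72, False),
    ("rec4", 0.68, True),
    ("rec5", 0.40, False),
]

def acceptance_rate(items):
    return sum(ok for _, _, ok in items) / len(items) if items else 0.0

high = [p for p in proposals if p[1] >= 0.8]  # threshold is an assumption
low  = [p for p in proposals if p[1] < 0.8]
print(f"overall:         {acceptance_rate(proposals):.2f}")
print(f"high confidence: {acceptance_rate(high):.2f}")
print(f"low confidence:  {acceptance_rate(low):.2f}")

If high-confidence proposals are accepted at a much higher rate, that suggests where automatic enhancement can run without review and where human effort is still needed.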