Design of metadata surrogates in search result interfaces of learning object repositories: Linear versus clustered metadata design Panos Balatsoukas, Anne Morris, Ann O’Brien

Contents  Definition of user-centred metadata  Evolution of metadata surrogate design  Aim and Objectives of the usability test  The META-LOR 1 prototype  Methodology  Results  Conclusions – Recommendations  Future Research

Metadata definitions  ‘Data about data’  “Structured data about an object that supports functions associated with the designated object” (Greenberg, 2005)  Learning object metadata: metadata used to describe learning objects efficiently and to support the educational and learning functions associated with them.
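As an illustration of what a learning object metadata record can look like in practice, here is a minimal sketch in Python. The element names loosely follow the general/technical/educational groupings referred to later in the slides (and in IEEE LOM); the record itself and its values are invented for illustration and are not taken from the META-LOR 1 prototype.

# Illustrative only: a made-up learning object metadata record, with elements
# grouped loosely along IEEE LOM lines (general / technical / educational).
learning_object_record = {
    "general": {
        "title": "Introduction to Relational Databases",
        "subject": "databases; SQL",
        "description": "A self-paced tutorial covering tables, keys and joins.",
        "identifier": "lor:example-0001",
    },
    "technical": {
        "format": "text/html",
    },
    "educational": {
        "audience": "postgraduate students",
        "interactivity_level": "medium",
        "difficulty": "easy",
    },
}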

User-centred metadata [diagram]: Content, Presentation, Learner, Relevance, Usability, Technologies

Evolution of metadata presentation and content

Figure copied from Marchionini et al. (1993)

Design of metadata surrogates  Metadata elements providing or arranging access to the resource should follow content-related elements such as title, abstract, subject heading or keywords.  Users prefer content-related metadata for finding and identifying resources, and technical and physical metadata for selecting and obtaining access to them.  There is a debate among researchers as to whether metadata surrogates should be displayed in list, tabular, dynamic or category-based format in search result interfaces.  It is suggested that abstracts should contain contextualised information relevant to the user’s search query.  Metadata surrogates should not include only topical or subject-related information.

Design of learning object metadata in search result interfaces  The need to include a description/abstract of the contents of the learning object in the metadata surrogate;  The use of user-centred metadata terminology and vocabularies; and  The use of clustered rather than linear, information-cluttered learning object metadata surrogates.
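To make the linear versus clustered contrast concrete, the sketch below renders the same invented record in both forms: a flat, undifferentiated element list (linear) and the same elements grouped under category headings (clustered). This is an illustrative assumption about what the two designs look like, not the META-LOR 1 implementation.

# Illustrative sketch, not the META-LOR 1 code: the same (invented) metadata
# record rendered as a flat "linear" surrogate and as a "clustered" surrogate.
record = {
    "general": {"title": "Introduction to Relational Databases",
                "description": "Self-paced tutorial on tables, keys and joins."},
    "technical": {"format": "text/html"},
    "educational": {"audience": "postgraduate students", "difficulty": "easy"},
}

def render_linear(rec):
    # Every element in one undifferentiated list, regardless of category.
    return "\n".join(f"{name}: {value}"
                     for elements in rec.values()
                     for name, value in elements.items())

def render_clustered(rec):
    # Elements grouped under labelled category headings.
    lines = []
    for category, elements in rec.items():
        lines.append(f"[{category.upper()}]")
        lines.extend(f"  {name}: {value}" for name, value in elements.items())
    return "\n".join(lines)

print(render_linear(record))
print()
print(render_clustered(record))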

Aims and objectives  To examine users’ interaction with two different learning object metadata surrogates: 1. a linear metadata surrogate interface, and 2. a clustered metadata surrogate interface.  The objectives of this study were:  To investigate the time needed by learners to identify a relevant learning object, using both interfaces;  To study the impact of task complexity on users’ interaction with both interfaces; and  To examine learners’ subjective satisfaction with both interfaces.

Linear metadata surrogate [screenshot; callout: pop-up box]

Clustered metadata surrogate [screenshot; callout: metadata categories]

Methodology 1  Usability participants’ profile:  12 postgraduate students in Information and Computer Science  Task list analysis and scenarios:  3 tasks with varying degrees of complexity (low, medium and high)  Error rate and time  Observation (think-aloud protocol)  Background and post-test questionnaires  Post-test interviews

Methodology 2 The three tasks (one each at low, medium and high complexity): [table not reproduced]

Results of the usability test

Differences in Time  Participants performed the three tasks slightly faster using the clustered metadata surrogate interface.  Mean time of 314 seconds with the linear interface.  Mean time of 301 seconds with the clustered interface.

Task complexity and Interface  No significant interaction was observed between task complexity and metadata interface design.

Subjective satisfaction  Subjects were significantly more satisfied with the clustered metadata surrogate interface.  Mean overall satisfaction for the clustered metadata surrogate interface = 7.8.  Mean overall satisfaction for the linear metadata surrogate interface = 6.3.
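The slides report the means and that the difference was significant, but not which statistical test was used. Purely as an illustration of how such a within-subjects comparison over 12 participants could be run, the sketch below applies a Wilcoxon signed-rank test (SciPy) to placeholder ratings; the individual scores are invented and are not the study’s data.

# Hypothetical illustration only: the slides do not state which test was used,
# and these per-participant ratings are invented placeholders (only the group
# means, 7.8 vs 6.3, come from the slides).
from scipy.stats import wilcoxon

clustered = [8, 9, 7, 8, 8, 7, 9, 8, 7, 8, 8, 9]   # placeholder ratings (n=12)
linear    = [6, 7, 6, 5, 7, 6, 6, 7, 6, 7, 6, 7]   # placeholder ratings (n=12)

statistic, p_value = wilcoxon(clustered, linear)
print(f"Wilcoxon signed-rank: W = {statistic}, p = {p_value:.3f}")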

Qualitative results (1)  Most participants (n=10) liked the way metadata was presented in the clustered metadata surrogate interface:  Plausibility and engagement  Structure and organisation of information  Two participants preferred the linear interface (prior familiarity with it; they did not find the metadata clustering meaningful).

Qualitative results (2)  Subjects liked the use of most of the general- and technical-category metadata elements (e.g. title, subject, description, format, identifier)  Only a few of the education-related metadata elements were perceived as useful (e.g. audience, interactivity level, difficulty)

Qualitative results (3)  Subjects did not like the inclusion of many metadata elements or lengthy metadata surrogates.  Some participants (n=4) would have liked to select which metadata elements are displayed in the surrogate.  Other metadata elements requested:  Relation metadata  People’s comments  The time it takes for a learning object to be downloaded/accessed  Accessibility needs  Information about the quality of learning objects

Conclusions – Recommendations  The provision of alternative displays of metadata surrogates, for example in both linear and clustered forms.  The design of adaptive interfaces that tailor the content and format of metadata surrogates to learners’ needs.  The use of pop-up boxes to document and present the meaning of learning object metadata elements to users.  The need to extend the LOM standard with new metadata elements, such as ‘time needed to download/access a learning object’, ‘accessibility needs’, and ‘information about the quality of a learning object’.
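One of the recommendations above is to use pop-up boxes that explain the meaning of metadata elements. A minimal sketch of how such help text might be wired up follows; the explanatory wording is invented for illustration and is not taken from the META-LOR 1 prototype.

# Minimal sketch of the pop-up help idea: a mapping from metadata element
# names to plain-language explanations shown on demand. Wording is invented.
ELEMENT_HELP = {
    "interactivity_level": "How actively the learner works with the object "
                           "(e.g. reading only versus completing exercises).",
    "difficulty": "How hard the object is likely to be for its target audience.",
    "audience": "The group of learners the object was designed for.",
}

def help_text(element_name):
    # Return the pop-up explanation for an element, with a safe fallback.
    return ELEMENT_HELP.get(element_name, "No explanation available.")

print(help_text("difficulty"))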

Research in progress…  Usability assessment of three learning object repositories (MERLOT, ARIADNE Knowledge Pool and JORUM/UK).  Survey of students’ perceptions of the importance of learning object metadata elements.  User study on the criteria students employ to judge the relevance of learning objects.  Development of a heuristic evaluation checklist for the evaluation of metadata surrogates in search and search result interfaces.  Development of guidelines and recommendations for the design of learning object metadata schemas and learning object repositories.