1
Evaluating Adaptive Authoring of AH
Maurice Hendrix IAS seminar, The University of Warwick, 09/02/2007
2
Outline
Why automatic authoring
System overview
Semantic Desktop
Adding resources
Evaluation
3
How to do it, in general, for AH?
We want personalization, thus multiple paths.
Simple idea: say we have a basic course on a topic. Adding papers (articles) on the same topic would make it an advanced course. So we would have 2 versions: one for beginners, one for advanced learners => thus AH material.
How to do this automatically?
4
Screenshot beginner version
Screenshot of course delivered by AHA! without any link to papers.
5
Screenshot advanced version
Screenshot of course delivered by AHA! with link to papers.
6
Why automatic authoring
Make the authoring task easier: manual annotation is the bottleneck. We address this by integrating the authoring environment into the semantic desktop.
7
System overview The advantage of an intermediate CAF file is that a format shared by multiple systems makes the pipeline extensible to other systems. RDF is also used as an export/import format, because information on the Semantic Desktop is often stored in RDF.
8
MOT hierarchy structure
Concept maps and lessons are hierarchies: in MOT, the domain model as well as the goal & constraints model are hierarchies. They can be represented as a tree structure, as we see here. A located resource has to be added somewhere in the tree, like the yellow article.
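The tree structure described above can be sketched in a few lines of Python (a minimal illustration; the class and attribute names are assumptions, not MOT's actual data model):

```python
# Minimal sketch of a MOT-style hierarchy node.
class Concept:
    def __init__(self, title):
        self.title = title
        self.children = []  # sub-concepts, in order

    def add(self, child):
        """Attach a sub-concept and return it, so trees can be built fluently."""
        self.children.append(child)
        return child

# Build a small domain-map fragment and attach a located resource to it.
course = Concept("Adaptive Hypermedia")
intro = course.add(Concept("Introduction"))
intro.add(Concept("Located article"))  # the resource slots in under a concept
```

Adding a located resource then amounts to choosing a parent node in this tree, which is exactly the authoring decision the following slides discuss.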
9
Semantic Desktop A desktop where everything is stored with extra metadata. We use RDF as the storage format. Example RDF (which also has an XML representation): note that resources such as the one in the example may have attributes which themselves have attributes; for example, the conference can have a date, venue, etc.
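The RDF example itself did not survive the export, so here is an illustrative triple set in plain Python (all URIs and property names are hypothetical), showing how a resource's attribute, a conference, can itself carry attributes such as a date and venue:

```python
# Illustrative RDF-style (subject, predicate, object) triples;
# the identifiers ex:paper1, ex:conf1, etc. are made-up examples.
triples = [
    ("ex:paper1", "dc:title", "Some article on Adaptive Hypermedia"),
    ("ex:paper1", "ex:publishedAt", "ex:conf1"),
    # the conference is itself a resource with its own attributes:
    ("ex:conf1", "ex:date", "2007-02-09"),
    ("ex:conf1", "ex:venue", "University of Warwick"),
]

def attributes(subject):
    """All (predicate, object) pairs stored for a given subject."""
    return [(p, o) for s, p, o in triples if s == subject]
```

Following `ex:publishedAt` from the paper to the conference and then calling `attributes("ex:conf1")` surfaces the nested metadata the slide refers to.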
10
Adding Resources MOT goal/domain maps are hierarchies with a tree structure; siblings are concepts at the same level. The Semantic Desktop can be searched for resources, which are ranked by 2 formulae.
11
Ranking Concept oriented and Article oriented, where:
rank(a,c) is the rank of article a with respect to the current domain concept c;
k(c) is the set of keywords belonging to the current domain concept c;
k(a) is the set of keywords belonging to the current article a;
|S| is the cardinality of the set S, for a given set S.
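The formulae themselves were rendered as images and are missing here. A plausible instantiation, given the definitions above, is a keyword-overlap measure normalised either by the concept's keywords or by the article's keywords; the exact normalisation is an assumption on my part, not taken from the slide:

```python
def rank_concept_oriented(k_c, k_a):
    """Overlap of concept and article keywords, normalised by |k(c)| (assumed form)."""
    if not k_c:
        return 0.0
    return len(k_c & k_a) / len(k_c)

def rank_article_oriented(k_c, k_a):
    """Overlap of concept and article keywords, normalised by |k(a)| (assumed form)."""
    if not k_a:
        return 0.0
    return len(k_c & k_a) / len(k_a)

# Hypothetical keyword sets for one concept c and one article a:
k_c = {"adaptive", "hypermedia", "authoring"}
k_a = {"adaptive", "hypermedia", "semantic", "desktop"}
```

With these sets the concept-oriented rank is 2/3 and the article-oriented rank is 2/4, illustrating how the two formulae can order the same article differently.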
12
Selection of ranking method - snapshot
13
Equal ranks
14
Allow duplicates among siblings
We call concepts in MOT at the same depth in the hierarchy siblings. The author has to make a choice: adding to all siblings can mean students get the link multiple times; choosing one of the siblings can mean students don't always get the link when it is relevant.
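The two strategies can be sketched over a list of (sibling, rank) pairs; this is an illustrative sketch, and the threshold value is a made-up parameter:

```python
def add_to_all(siblings_ranks, threshold=0.5):
    """Add the link to every sibling at or above the threshold:
    students may then encounter the same link multiple times."""
    return [s for s, r in siblings_ranks if r >= threshold]

def add_to_best(siblings_ranks):
    """Add the link only to the single highest-ranked sibling:
    students may miss the link at other relevant concepts."""
    best = max(siblings_ranks, key=lambda sr: sr[1])
    return [best[0]]

# Hypothetical ranks of one article against three sibling concepts:
ranks = [("Concept A", 0.8), ("Concept B", 0.6), ("Concept C", 0.2)]
```

Here `add_to_all` attaches the article to Concepts A and B (duplication), while `add_to_best` attaches it to Concept A only (possible under-coverage), which is exactly the trade-off the author must decide.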
15
Selection of duplicates/none snapshot
16
Add meta-data as separate concepts
The retrieved resources might have attributes themselves. If so, these can be added as domain attributes in MOT, or the resource can be made into a domain concept with its own separate domain attributes.
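The two representations can be contrasted with a small sketch (the dictionary structure and field names are illustrative assumptions, not MOT's storage format):

```python
# 1) Metadata added as extra attributes of the existing domain concept:
concept_with_attributes = {
    "title": "Adaptive Hypermedia",
    "attributes": {
        "article": "Some retrieved paper",
        "conference": "Some conference",
    },
}

# 2) The resource promoted to a separate sub-concept,
#    carrying its own domain attributes:
concept_with_subconcept = {
    "title": "Adaptive Hypermedia",
    "children": [
        {
            "title": "Some retrieved paper",
            "attributes": {"conference": "Some conference"},
        },
    ],
}
```

The first form keeps the hierarchy flat; the second lets the resource's metadata be addressed (and adapted over) independently of its parent concept.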
17
Add metadata as attributes
18
Add metadata as Separate concepts
19
Separate concepts/ attributes snapshot
20
Compute resource keywords as set
The number of times a keyword occurs might indicate its relevance. The ranking formulae can therefore be computed on sets of keywords or on multisets.
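The set/multiset distinction can be sketched as follows (a minimal illustration; I assume a concept-oriented normalisation here, and `collections.Counter` stands in for a multiset):

```python
from collections import Counter

def rank_set(k_c, k_a):
    """Set-based overlap: each keyword counts once, however often it occurs."""
    return len(set(k_c) & set(k_a)) / len(set(k_c))

def rank_multiset(k_c, k_a):
    """Multiset-based overlap: repeated keywords add weight."""
    overlap = Counter(k_c) & Counter(k_a)  # element-wise minimum of counts
    return sum(overlap.values()) / sum(Counter(k_c).values())

# Hypothetical keyword lists in which "adaptive" occurs twice:
k_c = ["adaptive", "adaptive", "hypermedia"]
k_a = ["adaptive", "adaptive", "semantic"]
```

On these lists the set rank is 1/2 while the multiset rank is 2/3: the repeated keyword pulls the multiset score up, which is the effect the slide motivates.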
21
Set/ multiset snapshot
22
Before MOT hierarchy snapshot
23
After MOT hierarchy snapshot
24
Evaluation Intensive two-week course on AH & the Semantic Web.
33 out of 61 students selected: 4th-year Engineering & 2nd-year MSc in CS. After a week: theoretical exam (for selection); at the end: practical exam & 5 questionnaires. 3 systems: OLD MOT, NEW MOT & Sesame2MOT; 3 SUS questionnaires, 2 more specific ones.
25
Hypotheses
The respondents enjoyed working as authors in the system.
The respondents understood the system.
The respondents considered that theory and practice match.
The respondents considered the general idea of Adaptive Authoring useful.
26
Questionnaires Constructed direct questions (enjoy, understand, in line with theory, preference, etc.), based upon the division of the main hypotheses. Mostly multiple choice, with some open questions for later analysis.
27
Hypotheses results
The respondents enjoyed working as authors in the system.
The respondents understood the system.
The respondents considered that theory and practice match.
The respondents considered the general idea of Adaptive Authoring useful.
28
SUS System Usability Scale, a measure for comparing systems.
10 questions, 5 positive and 5 negative (to make respondents think), each scored 1-5. Normalised score: for positive questions, score - 1; for negative questions, 5 - score. Total score: sum of normalised scores * 2.5.
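The scoring rule above can be written out directly; this sketch assumes the standard SUS layout in which the odd-numbered items are the positive ones:

```python
def sus_score(answers):
    """Compute the SUS score from 10 responses on a 1-5 scale.
    Odd-numbered items (index 0, 2, ...) are positive, even-numbered negative."""
    assert len(answers) == 10
    total = 0
    for i, score in enumerate(answers):
        if i % 2 == 0:              # positive item: score - 1
            total += score - 1
        else:                       # negative item: 5 - score
            total += 5 - score
    return total * 2.5

# A maximally favourable respondent (5 on positives, 1 on negatives):
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

The 2.5 multiplier scales the 0-40 raw range onto 0-100, which is what makes scores comparable across the three systems.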
29
Extended hypotheses Respondents enjoyed the 3 systems.
NEW MOT is preferable to OLD MOT in terms of usability; Sesame2MOT is preferable to OLD/NEW MOT.
NEW MOT is easier to work with than OLD MOT; Sesame2MOT is easier to work with than OLD/NEW MOT.
NEW MOT is more enjoyable than OLD MOT; Sesame2MOT is more enjoyable than OLD/NEW MOT.
NEW MOT is easier to learn than OLD MOT; Sesame2MOT is easier to learn than OLD/NEW MOT.
30
SUS Results
32
SUS results correlation
The scores of the SUS questionnaires are all significantly correlated. From the reactions, we suspect respondents did not notice the difference between the SUS questionnaires.