Slide 1: Ontologies and Bayesian Networks in Assessment
Greg Chung, Bill Bewley, UCLA/CRESST
ONR Advanced Distributed Learning
CRESST ONR/NETC Meetings, 17-18 July 2003
© 2003 Regents of the University of California
Slide 2: Problem Statement
How do you link information from assessments to individualized instructional recommendations in a distributed learning (DL) context?
– Content is going online
– Assessments are going online
– Couple content and assessment
Slide 3: Ontologies
An ontology is a conceptual representation of a domain, expressed in terms of concepts and the relationships among those concepts.
– Supports knowledge capture, representation, and sharing
– Fielded technology: medical, engineering, e-commerce, military…
– Development tools and APIs available today (e.g., Protégé)
– Supports rapid development and testing of prototypes
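A minimal sketch of the concepts-plus-relations idea, in plain Python rather than an ontology tool such as Protégé. The concept names, relation labels, and content file name below are illustrative assumptions, not entries from the CRESST ontology.

```python
# Sketch: a domain ontology as concepts (nodes) with labeled relationships,
# and optional instructional content bound to each concept.
# All names and labels here are hypothetical examples.

class Concept:
    def __init__(self, name, content=None):
        self.name = name          # concept label, e.g., "Breath Control"
        self.content = content    # instructional content bound to the node
        self.relations = []       # (relation_label, target_concept) pairs

    def relate(self, label, target):
        self.relations.append((label, target))

# Build a tiny fragment of a marksmanship-like hierarchy
marksmanship = Concept("Rifle Marksmanship")
fundamentals = Concept("Shooting Fundamentals")
breath = Concept("Breath Control", content="breath_control_lesson.html")

marksmanship.relate("has-part", fundamentals)
fundamentals.relate("has-part", breath)

# Traverse: print every concept reachable from the root, indented by depth
def walk(concept, depth=0):
    print("  " * depth + concept.name)
    for _, child in concept.relations:
        walk(child, depth + 1)

walk(marksmanship)
```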
Slide 4: Bayesian Networks
Graphical modeling of the causal structure of a phenomenon in terms of nodes and relations.
– Nodes represent states; links represent influence relations
– Supports fusion of observable data (e.g., "correct on item 1") into high-level hypotheses (e.g., "understands breath control")
– Fielded technology: development tools available (HUGIN, MSBNx)
– ONR-funded research
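A minimal sketch of the evidence-fusion step, in plain Python rather than a tool like HUGIN or MSBNx: a two-node network where one observed item result updates belief in a hypothesis via Bayes' rule. The node names and every probability value are made-up illustrations, not parameters from the actual network.

```python
# Two-node Bayesian network sketch:
#   hypothesis H = "understands breath control"
#   evidence   E = "correct on item 1"
# All probability values below are illustrative assumptions.

p_h = 0.50              # prior: P(understands breath control)
p_e_given_h = 0.85      # P(correct on item 1 | understands)
p_e_given_not_h = 0.30  # P(correct on item 1 | does not understand)

def posterior(prior, like_h, like_not_h):
    """Bayes' rule: fuse one observed item result into the hypothesis."""
    numerator = like_h * prior
    return numerator / (numerator + like_not_h * (1.0 - prior))

p_h_given_correct = posterior(p_h, p_e_given_h, p_e_given_not_h)
print(f"P(understands | correct on item 1) = {p_h_given_correct:.2f}")  # ~0.74
```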
Slide 5: Assessment Application Example
Content recommendation: deliver individualized instructional content based on assessment results.
Approach:
– Use a domain ontology to represent content
– Use assessments to measure students' knowledge of the domain
– Use a Bayesian network to model knowledge dependencies
Slide 6: Rifle Marksmanship Ontology
Capture the hierarchical structure of the domain:
– Field manuals, doctrine, training videos
– Bind content to structure (text, video, graphics)
Capture the conceptual representation:
– Experts (coaches, snipers, rifle team)
– Upper-level ontology captured using knowledge maps
Slide 7: Hierarchical Representation
– Based on a corpus of marksmanship literature and doctrine
– Currently 168 concepts (classes)
– Content directly bound to each node: important if you want to make use of the information
Slide 8: Binding Content to Structure
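The slide itself illustrated this binding visually; below is a hedged code sketch of the same idea, allowing several media types per ontology node. The concept label and file names are hypothetical, not taken from the actual content base.

```python
# Sketch: bind multiple content items (text, video, graphics) to a single
# ontology node, then retrieve everything bound to a selected concept.
# Concept label and resource names are hypothetical.

bindings = {}  # concept name -> list of (media_type, resource) pairs

def bind(concept, media_type, resource):
    bindings.setdefault(concept, []).append((media_type, resource))

bind("Breath Control", "text", "fm_breath_control.html")
bind("Breath Control", "video", "coach_demo.mp4")
bind("Breath Control", "graphic", "breathing_cycle.png")

# Everything bound to a node is available once that node is selected
for media_type, resource in bindings["Breath Control"]:
    print(f"{media_type}: {resource}")
```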
Slide 9: Application of Ontology
The marksmanship ontology serves as a testbed to evaluate the feasibility of the approach.
– Pilot test of the approach: 2nd Lts. undergoing entry-level marksmanship training
– Design: individualized content recommendation vs. control (no recommendation)
– Examine shooting outcome, learning outcomes, changes in the BN due to instruction, and Marines' perceptions of learning
Slide 10: Linking Assessment and Instruction
Approach:
– Depict knowledge dependencies among marksmanship concepts using a Bayesian network
– Administer an assessment to gather information on Marines' understanding of rifle marksmanship
– Take the assessment results (item-level data) and update the BN
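A minimal sketch of the update step, extending the single-evidence example above by chaining Bayes' rule over several item results for one concept (items treated as conditionally independent given the concept). All likelihood values and the item sequence are illustrative assumptions.

```python
# Sketch: update belief in one concept from item-level results by applying
# Bayes' rule once per item. All numbers are illustrative.

def update(prior, correct, p_correct_if_known=0.85, p_correct_if_not=0.30):
    like_h = p_correct_if_known if correct else 1.0 - p_correct_if_known
    like_not = p_correct_if_not if correct else 1.0 - p_correct_if_not
    return like_h * prior / (like_h * prior + like_not * (1.0 - prior))

belief = 0.50  # prior for "understands breath control"
for item_correct in [True, False, True]:  # hypothetical item-level data
    belief = update(belief, item_correct)

print(f"posterior after 3 items: {belief:.2f}")  # ~0.63
```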
Slide 11: Linking Assessment and Instruction
Approach (continued):
– Identify concepts that have low probabilities in the BN, interpreted as poor understanding
– Use the cognitive demands of tasks and items to infer the depth of a Marine's understanding
– Deliver different content based on depth of understanding
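A minimal sketch of this selection step in plain Python: threshold the updated BN probabilities and serve the content bound to each low-probability concept. The concept names, posterior values, threshold, and content table are all illustrative assumptions, not values from the fielded system.

```python
# Sketch: after the BN is updated with item-level results, recommend content
# for concepts whose probability of being understood is low.
# Concept names, posteriors, threshold, and content table are assumptions.

posteriors = {            # P(concept understood | assessment results)
    "Breath Control": 0.35,
    "Trigger Control": 0.82,
    "Sight Alignment": 0.48,
}

content_by_concept = {    # content bound to each ontology node
    "Breath Control": ["breath_basics.html", "breath_video.mp4"],
    "Trigger Control": ["trigger_squeeze.html"],
    "Sight Alignment": ["sight_picture.html"],
}

THRESHOLD = 0.50  # below this, treat the concept as poorly understood

def recommend(posteriors, content, threshold=THRESHOLD):
    """Return content for every concept whose posterior falls below threshold."""
    return {
        concept: content[concept]
        for concept, p in posteriors.items()
        if p < threshold
    }

for concept, items in recommend(posteriors, content_by_concept).items():
    print(f"{concept}: serve {items}")
```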
Slide 12: Content Recommendation
Slide 13: Example of Feedback
Slide 14: Preliminary Results
Within-group analyses:
– BN probabilities increased for concepts that had instructional content served up
– BN probabilities did not change for concepts that did not have instructional content
– High-level BN topic scores correlated with the measure they were derived from, as well as with a reasoning measure
– BN "scores" corresponded with Marines' self-ratings of their level of knowledge (80% agreement)
Slide 15: Preliminary Results
Between-group analyses were inconclusive:
– Small sample size (n = 16)
– Experimental-condition Marines qualified in a thunderstorm, and learned more from classroom training than expected (i.e., > 70% of topics "correct")
– Knowledge map scores appear to be increasing at a faster rate than the control group's, but the differences are not statistically significant
Slide 16: Summary
– An important opportunity of online assessments is the potential to measure many aspects of human behavior under a variety of conditions.
– An important challenge is extracting meaningful information from (potentially) voluminous amounts of data.
– Bayesian networks and ontologies may be one approach to addressing that challenge.