
1. The Massive User Modelling System (MUMS)
Christopher Brooks, Mike Winter, Jim Greer, and Gordon McCalla
Advanced Research in Intelligent Educational Systems (ARIES) Laboratory
Computer Science Department, University of Saskatchewan
Saskatoon, SK, Canada

2. Overview of Presentation
1. Motivations
   – The ITS and the LMS: where are we going, and what do we need to get there?
2. The Massive User Modelling System (MUMS)
   – An architectural solution to the need
   – An implementation that supports this solution
3. Conclusions
   – Directions to look to next

3. Motivations – State of the Art
Two kinds of electronic learning environments:
– Intelligent Tutoring Systems (ITS)
  • A centralized application that does all of the data collection, data analysis, domain modelling, and pedagogical feedback (sandboxing users)
  • Techniques tend to be tightly coupled with the domain and the specific tools that the domain uses
– Learning Management Systems (LMS)
  • Focus on supporting the learning process, generally through learning portals (a mix of LMS/LCMS)
  • Centralized and web based (e.g. WebCT, Blackboard, uPortal)
  • Technologies focus on enabling human–human contact (e.g. CSCL, evaluation/feedback, calendaring) and human–content delivery (e.g. online course material)

4. Motivations – The Next Step
While development of these approaches happened in parallel, it seems a merger of the two could be beneficial.
But is an iLMS all we need? No:
– Learning is more decentralized than this
– Diversity is important: specialization of tools to particular domains/purposes
Questions to think about:
– How do we share information between many learning applications?
– What information should we share?
– How can we enact institutional policies over this information?
– Can we include applications on the periphery of the learning process (e.g. email analysis, instant messenger tools)?
– Can we do it all in a domain-neutral way?

5. Motivations – Example
Consider an example: introductory computer science in Java.
– Actors: instructors, tutors, classmates, learners
– Core tools:
  • Online course content (e.g. learning objects)
  • Peer discussion board
  • Quiz/testing applications
  • Pedagogical tutor
– Domain-specific tools:
  • Integrated development environment
  • Run-time debugger
– Periphery tools:
  • Collaboration tools (instant messenger, email, other groupware)
  • Web browsing habits
  • Scheduling applications
This is fanciful right now: neither the ITS nor the LMS has access to all of these tools and actors to build more comprehensive learner models.

6. Motivations – Our Needs from Experience
I-Help:
– A peer discussion board with expertise location
– A multi-year project involving a number of faculty, researchers, and graduate students
– Everyone had their own research components to integrate, leading to constant issues over what information should be collected, how, and how it should be used
– The result was lots of data duplication and schema inconsistency
– And this was a single web-based system – things only get more complex when you want to include more systems

7. Motivations – The Need
The need, then, is a framework for supporting the sharing of information between various e-learning tools.
A framework to support user modelling must be:
– Heterogeneous and distributed (operating system/toolkit neutral)
– Domain neutral (able to distribute arbitrary learner models)
– Real-time and reflective (changes to user models should be transmitted instantly, as well as archived for later reflection)
– Lightweight (ready for adoption by researchers and tool developers!)

8. MUMS – The Here and Now
To address these needs we have created a framework (MUMS) that facilitates the collection and distribution of learner modelling information.
The central artifact of the framework is the opinion:
– objective data about a user
– relevant from the perspective of whoever created it
– time-dependent in nature (when was it valid?)
Opinions are not constrained to any particular ontology or vocabulary:
– different producers of modelling information can use whatever taxonomies and vocabularies they feel are expressive
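A minimal sketch of an opinion as a plain value object is shown below. The field names (creator, subject, createdAt, payload) are illustrative assumptions, not the actual MUMS schema; they simply capture the three properties listed above, with the payload standing in for the RDF content described later.

```java
// A hypothetical value object capturing the three properties of an opinion:
// who formed it, who it is about, and when it was valid.
import java.time.Instant;

public final class Opinion {
    private final String creator;     // URI of the evidence producer that formed the opinion
    private final String subject;     // URI identifying the user the opinion is about
    private final Instant createdAt;  // when the opinion was formed (time-dependent validity)
    private final String payload;     // serialized statements describing the observation

    public Opinion(String creator, String subject, Instant createdAt, String payload) {
        this.creator = creator;
        this.subject = subject;
        this.createdAt = createdAt;
        this.payload = payload;
    }

    public String getCreator()    { return creator; }
    public String getSubject()    { return subject; }
    public Instant getCreatedAt() { return createdAt; }
    public String getPayload()    { return payload; }
}
```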

9. MUMS – Three Entities
Opinions are used by three computational entities:
– Evidence Producers: observe user interaction with an application, and produce and publish opinions about the user
– Modellers: are interested in acting on opinions about the user, usually by reasoning over them to create a user model (e.g. the tutor!)
– Broker: acts as an intermediary between producers and modellers, providing routing and quality-of-service functions for opinions
From this, we can derive a fourth entity of interest (an adaptor pattern):
– Filters: act as broker, modeller, and producer of opinions. By registering for and reasoning over opinions from producers, a filter can create higher-level opinions.
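The sketch below renders these roles as Java interfaces, with a filter that plays both the modeller role (consuming opinions) and the producer role (publishing derived ones). All names and signatures are illustrative assumptions, not the MUMS API.

```java
// Hypothetical role interfaces; in a real project each would live in its own file.
interface Broker {
    void publish(Opinion opinion);                            // evidence producers push opinions here
    void subscribe(ModellerCallback callback, String query);  // modellers register interest in opinion streams
}

interface ModellerCallback {
    void onOpinion(Opinion opinion);                          // called when a matching opinion is routed
}

interface EvidenceProducer {
    void observe(Object interactionEvent);                    // watch user interaction, form and publish opinions
}

// A filter registers for opinions like a modeller, reasons over them,
// and republishes higher-level opinions like a producer (the adaptor pattern).
abstract class Filter implements ModellerCallback {
    protected final Broker broker;

    protected Filter(Broker broker) {
        this.broker = broker;
    }

    @Override
    public void onOpinion(Opinion opinion) {
        Opinion higherLevel = summarize(opinion);  // modeller role: reason over incoming evidence
        if (higherLevel != null) {
            broker.publish(higherLevel);           // producer role: publish the derived opinion
        }
    }

    // Turn a low-level observation into a higher-level statement, or null if none.
    protected abstract Opinion summarize(Opinion opinion);
}
```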

10. MUMS – Architectural Overview (Real-Time)
Evidence producers, the broker, filters, and modellers interact as follows:
1. Evidence producer observes user interaction
2. Producer forms an opinion about the user
3. Producer publishes the opinion to the broker
4. Broker stores the opinion
5. Broker routes the opinion to interested modellers and filters
6. Filters reason over opinions, forming higher-level statements
7. Broker routes the higher-level opinions to interested modellers and filters
8. Modellers reason over opinions
9. Modellers act!
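The toy broker below sketches steps 3-5 of this path: accept a published opinion, store it, and route it to interested subscribers. It builds on the Opinion and Broker sketches above; the substring match is a deliberately crude stand-in for the semantic, query-based routing the real broker performs, and all names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class InMemoryBroker implements Broker {
    private final List<Opinion> archive = new ArrayList<>();                  // opinions kept for later reflection
    private final Map<ModellerCallback, String> subscriptions = new LinkedHashMap<>();

    @Override
    public synchronized void subscribe(ModellerCallback callback, String query) {
        subscriptions.put(callback, query);                                   // modellers/filters register interest
    }

    @Override
    public synchronized void publish(Opinion opinion) {                       // step 3: producer publishes
        archive.add(opinion);                                                 // step 4: store the opinion
        for (Map.Entry<ModellerCallback, String> sub : subscriptions.entrySet()) {
            if (opinion.getPayload().contains(sub.getValue())) {              // toy content-based match
                sub.getKey().onOpinion(opinion);                              // step 5: route to interested parties
            }
        }
    }

    // Archival access used by the reflective path on the next slide.
    public synchronized List<Opinion> query(String query) {
        List<Opinion> results = new ArrayList<>();
        for (Opinion o : archive) {
            if (o.getPayload().contains(query)) {
                results.add(o);
            }
        }
        return results;
    }
}
```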

11. MUMS – Architectural Overview (Archival)
In the archival (reflective) case:
1. Evidence producer observes user interaction
2. Producer forms an opinion about the user
3. Producer publishes the opinion to the broker
4. Broker stores the opinion
5. Modeller queries the broker for interesting opinions
6. Modeller reasons over the resultant opinions
7. Modeller acts!
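A small usage sketch of this reflective path, using the toy broker above: instead of subscribing to the live stream, a modeller queries the stored opinions and reasons over them after the fact. The URIs and the "posted-question" payload are hypothetical.

```java
import java.time.Instant;
import java.util.List;

class ReflectiveModeller {
    public static void main(String[] args) {
        InMemoryBroker broker = new InMemoryBroker();

        // Steps 1-4: a producer observed something earlier and published an opinion.
        broker.publish(new Opinion(
                "http://example.org/producer/forum",   // who formed the opinion
                "http://example.org/learner/42",       // who it is about
                Instant.now(),                         // when it was formed
                "http://example.org/learner/42 posted-question"));

        // Step 5: query the broker for interesting archived opinions.
        List<Opinion> evidence = broker.query("http://example.org/learner/42");

        // Steps 6-7: reason over the resultant opinions, then act on the derived model.
        System.out.println("Reasoning over " + evidence.size() + " archived opinions.");
    }
}
```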

12. MUMS – Benefits of the Architecture
1. Routing of opinions is semantic (content-based)
   – Lessens dependencies between producers and consumers of information
2. Loose coupling between producers and modellers allows new entities to be added to the system in a dynamic manner
   – New grad students == new data collection/production needs
   – Maintains system coherence
   – New ideas get real usage data immediately!
3. Evidence producers can be lightweight, as minimal reasoning is required on their part
   – Minimal development time to add simple functionality to producers encourages adoption
4. Logical centralization of the broker allows institutional policies such as privacy, data archival, and security to be enforced
   – Filtering of sensitive information, and user consent

13. MUMS – An Implementation Prototype
This architecture is being realized through an implementation prototype:
– Opinions are expressed as Resource Description Framework (RDF) statements, the lingua franca of the semantic web
– Entities within MUMS use web service and semantic web technologies for transmitting opinions:
  • WSDL documents describe the MUMS services
  • SOAP bindings are used for service interaction
  • RDF opinions are wrapped in Web Service Events (WS-Events) notifications
  • RDQL is the query language used for subscribing to opinion streams
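The sketch below shows what building an opinion as RDF and subscribing with an RDQL-style query might look like. The vocabulary (http://example.org/mums#...), the learner and producer URIs, and the query itself are hypothetical; the code also uses the modern Apache Jena API for illustration, whereas the 2004 prototype would have used the older Jena packages of the time.

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class RdfOpinionExample {
    static final String MUMS = "http://example.org/mums#";  // hypothetical vocabulary namespace

    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();

        Property about     = model.createProperty(MUMS, "about");
        Property observed  = model.createProperty(MUMS, "observedBehaviour");
        Property createdBy = model.createProperty(MUMS, "createdBy");

        // One opinion: the discussion-forum producer observed learner 42 posting a question.
        Resource opinion = model.createResource(MUMS + "opinion/1")
                .addProperty(about, model.createResource("http://example.org/learner/42"))
                .addProperty(observed, "posted-question")
                .addProperty(createdBy, model.createResource("http://example.org/producer/forum"));

        model.write(System.out, "RDF/XML-ABBREV");  // serialize the opinion for transmission

        // An RDQL-style subscription: route me every opinion about learner 42.
        String subscription =
            "SELECT ?opinion, ?behaviour " +
            "WHERE (?opinion, <http://example.org/mums#about>, <http://example.org/learner/42>), " +
            "      (?opinion, <http://example.org/mums#observedBehaviour>, ?behaviour)";
        System.out.println(subscription);
    }
}
```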

14. MUMS – Deployed Prototype
[Deployment diagram]
– Evidence producers: public discussion forum, content management system, QTILite testing tool, IRC chat applet, web browser proxy
– Clustered broker (Broker 1, Broker 2, Broker 3): real-time routing, archive, querying, privacy
– Modellers: learning object repository, student diagnosis engine, open user model

15. MUMS – Initial Reactions
Initial developer reactions are favourable:
– Each evidence producer was created by a different person, sometimes by a small team
– Integrating MUMS into the evidence producers took a minimal amount of time:
  • I-Help public discussions: 3-4 days
  • Content management system: 1 day
  • QTILite quiz: 3 days
– Both producers and consumers have been built using Java and C#, with no interoperability problems
– The initial implementation provides a reasonable quality of service with the current deployment:
  • Pentium III 733 MHz with 512 MB of RAM, running Windows 2003
  • 10 opinions per second, with an average of 10 RDF statements per opinion, produced minimal lag
  • It is trivial to distribute the broker implementation over several machines

16. MUMS – Issues to Explore
We have only guesses at the size, speed, and number of user modelling events that will be produced:
– We expect most evidence producers to be in the range of 10-25 statements per opinion
– Dealing with "bursting" is likely to become an issue
– Long-term archival of student information may lead to very large data stores (millions of RDF statements)
Discovery and understandability of evidence producer ontologies is a must:
– Otherwise a semantic gap exists between producer and modeller authors
Centrally maintaining privacy in an ontology-neutral manner: is this possible?
– The end use of information is sometimes more important than where it came from and where it is going
– Strengthening the bonds between a privacy filter and the modellers
– Perhaps users (or their agents!) can support filtering of private data

17. For More Information
Christopher Brooks
Research Officer
University of Saskatchewan
Saskatoon, SK, Canada
cab938@mail.usask.ca
1-306-966-1442
http://www.cs.usask.ca/research/research_groups/aries/

