Initializing Student Models in Web-based ITSs: A Generic Approach
Victoria Tsiriga & Maria Virvou, Department of Informatics, University of Piraeus

Adaptivity in Web-based Tutoring Systems Adaptivity is crucial in Web-based tutoring systems: to be adaptive, a Web-based educational system must be able to draw inferences about individual students. The student modelling component is therefore central to adaptation.

Student Modeler The student modeling component performs two main functions: it creates the model of a new student, and it updates the student model based on the student's interaction with the system.

Initializing Student Models It seems unreasonable to assume that every student starts out with the same knowledge and misconceptions about the domain being taught. An ITS may be considered worthless if it fails to make plausible hypotheses about a student before the student loses his or her patience with the system.

Initializing Student Models - Approaches The ITS may assume that a student knows nothing or has some standard prior knowledge. The student's prior knowledge may be evaluated using a pre-test: exhaustive pre-tests or adaptive pre-tests. The system may exploit patterns among students in order to group similar students into categories (e.g. stereotypes).

Initializing Student Models (ISM) Framework It makes initial estimations concerning the knowledge level and the error proneness of a new student in each domain concept. It uses an innovative combination of stereotypes and the distance-weighted k-nearest neighbor algorithm. It has been applied in two different Web-based ITSs.

The ISM Framework - Architecture [Architecture diagram] Inputs from an interview and a preliminary test (personal characteristics, prior knowledge), together with the Stereotypes Knowledge Base, drive the generation of the first student model vector; the second student model vector is then generated with the distance-weighted k-NN algorithm over the Student Models Knowledge Base, restricted to students of the same knowledge-level stereotype category.

Representation of the Student Model in ISM The student model is represented as a pair of feature vectors. The first student model vector is constructed based on an interview and a preliminary test. The second student model vector is constructed taking into account other, similar students.
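As a rough sketch of this representation (all names and the dict structure are illustrative, not taken from the paper), the pair of vectors might hold a knowledge-level and error-proneness estimate per domain concept:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the ISM student model: two feature vectors,
# each mapping a domain concept to estimated knowledge level and
# error proneness.
@dataclass
class StudentModel:
    # first vector: built from the interview and the preliminary test
    first_vector: dict = field(default_factory=dict)
    # second vector: refined later via distance-weighted k-NN over similar students
    second_vector: dict = field(default_factory=dict)

model = StudentModel()
model.first_vector["passive_voice"] = {"knowledge": 0.4, "error_proneness": 0.7}
```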

Distance Weighted k-NN - Main Decisions The features that formulate the input space of the distance function have to be selected. A distance function must be identified to estimate the similarity between two instances. The number of neighbors (k) that participate in the classification task must be defined. A function has to be designed in order to classify new instances.

Distance/Similarity Attributes They should influence the students' process of learning. They differ for different tutoring domains. They can be selected by human teachers, or by empirical studies that involve human teachers and students.

Calculating Distance between Students The distance between two values x and y of a given attribute a is computed by an attribute-level difference function. The overall difference measure of two students sx and sy aggregates these per-attribute differences, where n is the number of attributes used to measure the similarity between students.

Defining k in the k-NN Algorithm In ISM, k is defined as the number of students that belong to the same stereotype category as the new student. Students that belong to different stereotypes are not expected to have similar knowledge of the domain, irrespective of their other personal characteristics.

Classification Function
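The classification function appears only as an image on the slide; a standard distance-weighted k-NN estimate, with inverse-squared-distance weights over the neighbors drawn from the new student's stereotype, might look like the following (the 1/d² weighting is an assumption, not the paper's stated formula):

```python
def predict(new_student, neighbors, distance, eps=1e-6):
    """Distance-weighted k-NN estimate of a target value for a new student.

    `neighbors` are (features, target_value) pairs taken from students in the
    same knowledge-level stereotype; weights fall off as 1/d^2, a standard
    choice for distance-weighted k-NN.
    """
    num = 0.0
    den = 0.0
    for features, value in neighbors:
        d = distance(new_student, features)
        w = 1.0 / (d * d + eps)   # eps guards against a zero distance
        num += w * value
        den += w
    return num / den
```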

Case Study I Application of ISM to Web-Passive Voice Tutor (Web-PVT). ISM is instantiated by assuming that students of a similar knowledge level of English, who have the same mother tongue and know the same foreign languages, have similar strengths and weaknesses when they learn the passive voice.

Representation of the Student Model in Web-PVT The student model is represented as a set of feature vectors.

Evaluation of the Initialization Module (1) Participants: 3 teachers of English and their students. The teachers were asked to evaluate 5 randomly chosen initial student models from each supported stereotype (novice, beginner, intermediate and advanced) at two phases: before any student of that stereotype had registered with the system, and after Web-PVT had constructed individualized models of 15 students of each stereotype.

Evaluation of the Initialization Module (2) The experimental hypothesis was that the initial student models of the second phase would be superior to those of the first phase. The hypothesis was evaluated using a one-tailed paired t-test. The results showed that in all cases the student modeler performed better at initializing the model of a new student when it took into account other students of the same knowledge-level stereotype.
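For illustration only (the study's actual scores are not given in the slides), the t statistic for such a one-tailed paired t-test can be computed from the phase-1 and phase-2 ratings as:

```python
import math

def paired_t_statistic(before, after):
    """t statistic for a paired t-test on two matched samples (made-up data only)."""
    diffs = [b - a for a, b in zip(before, after)]  # per-pair improvement
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

The resulting statistic is then compared against the one-tailed critical value of the t distribution with n - 1 degrees of freedom.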

Case Study II Application of ISM to Web-EasyMath. ISM is instantiated by assuming that students of a similar knowledge level, who attend the same class (with the same instructors) and have similar skills in simple arithmetic operations, have similar strengths and weaknesses when learning the new topic of algebraic powers.

Similarities and Differences with Web-PVT Difference: the student attributes of the first student model vector. Similarities: the second student model vector, and the way the second student model vector is produced.

Main Points Generation of a framework for the initialization of student models in Web-based ITSs - ISM. ISM uses a novel combination of stereotypes and the distance-weighted k-Nearest Neighbor algorithm. ISM has been applied to two totally different tutoring domains: language learning (Web-PVT) and mathematics (Web-EasyMath). The evaluation of the student modeler of Web-PVT showed that ISM produced more individualized initial student models than stereotypes alone.