Inga ZILINSKIENE and Saulius PREIDYS, Institute of Mathematics and Informatics, Vilnius University
This paper deals with one of the problems of Technology Enhanced Learning (TEL): the personalized selection of a learning scenario. Personalization is treated here as the appropriateness of a learning scenario to the preferences of a particular student, mainly his/her learning style. The paper proposes an extended approach to modeling learning scenario selection based on the preferences of a student's learning style. An ant colony optimization algorithm is modified and applied. In order to give a theoretical background, the main concepts of personalization, learning scenario, and learning style are briefly presented.
First, a data mining technique to determine a student's learning style is presented; second, a model for the personalized selection of a learning scenario is proposed.
A general definition of personalized learning looks upon the learning process from the perspective of a student and tries to adapt the whole process to the student's needs. In general, a learning scenario (LS) can be represented as a graph which contains all possible navigation tours, or LSs, that a student can follow to reach his/her learning goal. There is thus a need to optimize such navigation paths as well as to select the LS that is most suitable for a student. Learning styles can be defined as "a description of the attitudes and behaviors which determine an individual's preferred way of learning".
The objective of curriculum sequencing is to replace the rigid, general, one-size-fits-all course structure set by the instructor or the pedagogical team with a more flexible and personalized learning path. A literature review shows that there are two main approaches to the curriculum sequencing problem: social sequencing and individual sequencing. According to Al-Muhaideb et al., swarm intelligence methods, such as ant colony optimization (ACO) and particle swarm optimization, are the most promising social sequencing methods, while genetic algorithms and memetic algorithms are the most widely used individual sequencing methods.
The social sequencing approach abstracts away the individual properties of learners, drawing efficient learning paths from the emergent and collective behavior of a "swarm" of learners.
Although the parameters and functions used in the paper are the same as those defined in ACO, there are two extensions: an extended model of a student's learning style (LSt) and additional information to improve the cold start of ACO. According to these extensions, the heuristic information, pheromone update, and local search functions are modified. The main idea of this model is that the pheromones on the trails are updated separately for students of different LSt in order to create a learning style-based dynamic learning object recommendation.
The model rests on the following assumptions:
1) the information about a student's LSt is known;
2) the course should be attended by many students;
3) the course structure is given by a tutor, keeping in mind the time allotted;
4) the course is composed of learning objects (LOs) and their alternatives;
5) for each LSt type, its preferences regarding LOs are known;
6) the efficiency of the algorithm is treated here as an improvement of students' performance, i.e., the algorithm is considered efficient if students show better results in their learning.
The authors propose a schema optimized by pedagogical knowledge to select and recommend an LS for a student who is at the initial position.
In this stage the initial parameters are set: the heuristic information η_rs, the weights α and β, the evaporation rate ρ, and the initial pheromone value τ_0 > 0. Each learning object is assigned a time slot value t. This provides an initial logical grouping and sequencing of the learning material and corresponds to real-world terms such as lesson, theme, module, learning goal, etc. Each time slot can have several learning objects assigned to it.
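To make the initialization stage concrete, the following minimal Python sketch illustrates it under stated assumptions: the parameter values, the example course structure, and all names (time_slots, tau, TAU_0, etc.) are invented for illustration and are not taken from the paper.

# Illustrative initialization of the ACO parameters and the course graph.
# Values and the course content below are assumptions, not the paper's data.
ALPHA, BETA = 1.0, 2.0   # relative influence of pheromone vs. heuristic
RHO = 0.1                # evaporation rate
TAU_0 = 0.1              # initial pheromone value (> 0)

# Each learning object (LO) is assigned a time slot; a slot may hold
# several alternative LOs covering the same lesson/theme/module.
time_slots = {
    1: ["LO_text_1", "LO_video_1"],
    2: ["LO_text_2", "LO_simulation_2"],
    3: ["LO_quiz_3"],
}

# Initial pheromone trail tau[(r, s)] on every arc between LOs of
# consecutive time slots.
tau = {}
for t in sorted(time_slots)[:-1]:
    for r in time_slots[t]:
        for s in time_slots[t + 1]:
            tau[(r, s)] = TAU_0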
The heuristic information η_rs expresses the conscious intensity of learning when moving from the rth node to the next, sth, node and is defined in terms of Δt, the time unit difference between the rth node and the sth node. The heuristic information provided by the time slots defines the selection probability of another learning object arranged on arc(r, s) by the value defined above.
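The exact formula is not reproduced here; one plausible instantiation, assuming the heuristic simply favors learning objects in nearby time slots (an assumption made for illustration, not the paper's confirmed definition), would be

η_rs = 1 / (1 + Δt),

where Δt is the time slot difference between node r and node s, so that LOs in the immediately following slot receive the highest heuristic value.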
The proposed model is based on the idea that the more exact the information about the student, the better the recommendations for choosing LOs. In applying the ACO algorithm to the recommendation of an LS, the authors assume that each student is represented as an ant whose LSt has been identified with the help of tests or other techniques.
So a student represented as an ant k has an LSt represented as {w_k1, w_k2, w_k3, w_k4}, where w_kx ∈ [0, 1]. By visiting learning objects, the ant leaves a trail on the path as a set of pheromones corresponding to the ant's LSt and the achieved results. Another ant z reacts to the pheromone traces according to its own pheromone sensitivity. This means that each ant may leave up to four different pheromone traces and, correspondingly, may react to four pheromones.
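As an illustration of this representation, the Python sketch below keeps one pheromone trace per LSt dimension on each arc and combines them with the sensing ant's own weights. The class and function names, and the simple weighted sum, are assumptions made for illustration; the paper only states that the four weights lie in [0, 1].

from dataclasses import dataclass

@dataclass
class StudentAnt:
    """A student modeled as an ant with four learning-style weights."""
    student_id: str
    lst_weights: tuple  # (w_k1, w_k2, w_k3, w_k4), each in [0, 1]

def perceived_pheromone(tau_per_style, arc, ant):
    """Combine the four style-specific traces on an arc, weighted by the
    sensing ant's own LSt weights.

    tau_per_style: list of four dicts, one per LSt dimension, each mapping
    an arc (r, s) to its pheromone value.
    """
    return sum(w * tau_per_style[l].get(arc, 0.0)
               for l, w in enumerate(ant.lst_weights))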
An LS is presented in the paper as a fully connected graph G = (V, L), whose nodes (LOs) are the components V, and whose set of arcs L fully connects the components V. At each construction step, an ant k applies a probabilistic action choice rule to decide which node to visit next. The figure presents an example of pheromone traces after ideal students have finished their tours.
Pheromone trail intensity τ_rs expresses the relational strength between the rth node and the sth node. Therefore, if ant k has completed its tour in the period (t−1, t), the authors can accumulate the ranked nodes on arc(r, s) by the pheromone trail intensity τ_rs(t). The incremental intensity Δτ_rs(t) is the pheromone trail value deposited at time t, and is updated according to the following formula.
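The formula itself is not reproduced here; in standard ACO the increment contributed by ant k typically takes the form below, where Q is a constant and L_k a measure of the cost or quality of ant k's tour. This is a sketch of the conventional rule, not necessarily the paper's exact definition.

Δτ^k_rs(t) = Q / L_k, if ant k traversed arc(r, s) in its tour during (t−1, t); 0 otherwise.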
In the proposed model there are two extra assumptions. First, an ant will leave a certain amount of pheromone only if, after finishing its learning tour, the ant achieved a very good grade (> S_goodgrade). This is reasonable in order to obtain qualitative pheromones from the ant, and it helps to prevent the accumulation of pheromones on tours that generate bad results. Second, each ant leaves pheromone according to its LSt preferences and performance.
As a result, the modified pheromone updating rule, which includes pheromone evaporation and the amount of pheromone ant k deposits on the arcs it has visited, multiplied by the ant's proportion value for LSt type l, is defined as below.
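The rule is given here only in a sketched form consistent with the description above, assuming τ_rs,l denotes the trace on arc(r, s) for LSt type l and w_kl the proportion of type l in ant k's style profile; the paper's exact formulation may differ in detail:

τ_rs,l(t) = (1 − ρ) · τ_rs,l(t−1) + Σ_k w_kl · Δτ^k_rs(t),

where the sum is taken only over ants k whose final grade exceeded S_goodgrade.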
Consequently, at each decision step, ant k applies a probabilistic action choice rule to decide which node to visit next. The rule modified according to the proposed model is as follows: when q ≤ q_0, ant k at node r deterministically selects the next node s with the best combined pheromone and heuristic value (exploitation); otherwise, the next node is chosen probabilistically in proportion to that value (biased exploration), where q is a random number uniformly distributed in [0, 1] and q_0 is a parameter.
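A minimal Python sketch of this pseudo-random proportional rule is given below. It reuses the perceived_pheromone() function from the earlier sketch and follows the standard ACS form of the rule; the parameter values and the way the LSt-weighted pheromone enters the value function are assumptions for illustration, not the paper's exact modification.

import random

def choose_next_node(ant, current, candidates, tau_per_style, eta,
                     alpha=1.0, beta=2.0, q0=0.9):
    """Pseudo-random proportional choice of the next learning object."""
    def value(s):
        arc = (current, s)
        return (perceived_pheromone(tau_per_style, arc, ant) ** alpha
                * eta.get(arc, 1.0) ** beta)

    if random.random() <= q0:
        # Exploitation: deterministically pick the best-valued candidate.
        return max(candidates, key=value)
    # Biased exploration: sample in proportion to the combined value.
    weights = [value(s) for s in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]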
This paper deals with the selection of an LS depending on the LSt. For the identification of a concrete learner's LSt, data mining methods are applied. This approach is preferable to a questionnaire because students are not aware of the assessment and cannot manipulate their responses. The authors propose a modified ant colony optimization algorithm for the solution of the LS selection problem.
Two aspects for the improvement of LS selection are proposed: 1) the student model is expanded with more detailed information about the student's LSt; 2) a model for the solution of the cold-start problem is described as well. These aspects are based on taking additional and heuristic information about LSt preferences into account.
Consequently, the proposed model not only offers an LS selection that may efficiently use collective experience but also produces emergent information that can help the tutor or course designer identify the strengths and weaknesses of the given learning material. On the basis of the proposed model, an experiment will be performed in order to validate and refine the proposed theoretical background: the heuristic information and the pheromone update rule.