Development of a diagnostic system using a testing-based approach for strengthening student prior knowledge
Computers & Education (September 2011)
Yi-Chun Lin, Yen-Ting Lin, Yueh-Min Huang*
Department of Engineering Science, National Cheng Kung University
Introduction
- Goal: assist instructors in diagnosing and strengthening students' prior knowledge before new instruction, so that students attain greater learning motivation and improved learning performance
- A testing-based diagnosis system is proposed in this study to cope with these problems
Methodology
- To measure the strength of understanding of prior knowledge, a prior knowledge diagnosis (PKD) model is proposed
- Two data sources:
  - Testing information assigned by teachers: represents the relationship between each concept and each test item in a test, and the relationships among the concepts
  - Testing information derived from students: represents the relationship between students' answers and the test items
Methodology
- A course specifies n concepts: C_1, C_2, C_3, …, C_i, …, C_m, …, C_n
- Prior knowledge of the subject is measured for r participating students: S_1, S_2, S_3, …, S_l, …, S_r
- The teacher selects k test items from the test item bank to form the pre-test: T_1, T_2, T_3, …, T_j, …, T_k
Methodology
- X_mj indicates the degree of relevance between the m-th concept and the j-th test item; the X_mj values represent the degree of relevance between each concept and each test item
- Z_im indicates the relationship between the i-th and the m-th concepts (ranging from 0 to 1); the Z_im values represent the relationships among the concepts
Methodology - Strength of concept
- The strength of concept C_i in the pre-test
- Z_im represents the relationship between the i-th and the m-th concepts, 0 ≤ Z_im ≤ 1
- X_mj indicates the degree of relevance between the m-th concept and the j-th test item, 0 ≤ X_mj ≤ 1
- 0 ≤ S(C_i) ≤ nk
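The strength formula itself did not survive extraction (it was presumably an image on the slide). A plausible reconstruction from the definitions above and the stated bounds is:
S(C_i) = \sum_{m=1}^{n} \sum_{j=1}^{k} Z_{im} \cdot X_{mj}
Since each of the n·k summed terms lies in [0, 1], this gives the stated range 0 ≤ S(C_i) ≤ nk.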
Methodology - Importance ratio of concept
- The importance ratio of concept C_i in the pre-test
- Z_im represents the relationship between the i-th and m-th concepts, 0 ≤ Z_im ≤ 1
- X_mj indicates the relevance degree between the m-th concept and the j-th test item, 0 ≤ X_mj ≤ 1
- 0 ≤ IRP(C_i) ≤ 1
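The importance-ratio formula is likewise not shown. A plausible reconstruction, consistent with the worked example IRP(C_2) = 1.88 / 7.52 = 0.25 on a later slide, is:
IRP(C_i) = \frac{S(C_i)}{\sum_{x=1}^{n} S(C_x)}
i.e., each concept's strength normalised by the total strength over all concepts, which yields the stated range 0 ≤ IRP(C_i) ≤ 1.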
Methodology - Understanding strength of the l-th student
- The understanding strength of the l-th student on the i-th concept
- R_lj indicates the answer of the l-th student on the j-th test item: if the student answers the test item correctly, R_lj is 1; otherwise R_lj is 0
- Z_im represents the relationship between the i-th and the m-th concepts
- X_mj indicates the degree of relevance between the m-th concept and the j-th test item
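This formula is also missing. Judging from the product terms R_4j · Z_5m · X_mj expanded in the later illustrative example, a plausible reconstruction of the (unnormalised) understanding strength is:
US(S_l, C_i) = \sum_{j=1}^{k} \sum_{m=1}^{n} R_{lj} \cdot Z_{im} \cdot X_{mj}
The published model may normalise this value before it is compared with the concept threshold; that step is not recoverable from these slides.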
Methodology - Understanding strength of the concept
- Translate the importance ratio of the concept into the threshold for the understanding strength of the concept
- t(C_i) represents the threshold value of the i-th concept, 0 ≤ t(C_i) ≤ 1
- m indicates the gradient of the function, m = 1
- IRP(C_i) represents the importance ratio of concept C_i in the pre-test, 0 ≤ IRP(C_i) ≤ 1
- b is the point at which the line crosses the y-axis, b = 0
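Given that m and b are described as the gradient and the y-intercept of a line, the threshold function is presumably the linear mapping
t(C_i) = m \cdot IRP(C_i) + b
so with m = 1 and b = 0 (the values used in this study) each concept's threshold simply equals its importance ratio, t(C_i) = IRP(C_i).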
Illustrative example
- Illustrative example of the relationship between test items and concepts (X): a table with test items T_1–T_5 as rows and concepts C_1–C_5 as columns (the numeric entries were lost in extraction)
Illustrative example
- Illustrative example of the relationship between concepts (Z): a table with concepts C_1–C_5 as both rows and columns (the numeric entries were lost in extraction)
Strength of Concept
- The strength of the first concept (C_1): S(C_1) = Z_11·X_11 + Z_11·X_12 + Z_13·X_32 = 1.44 (the intermediate numeric substitutions were lost in extraction)
- S(C_2) = Z_21·X_23 + Z_21·X_24 + Z_23·X_32 + Z_25·X_52 + Z_25·X_54 = 1.88
- The slide repeats the X (test item × concept) and Z (concept × concept) tables from the previous slides; their numeric entries were likewise lost in extraction
Importance Ratio of Concept
- Importance ratio of each concept: a table listing the IRP values for concepts C_1–C_5 (the numeric entries were lost in extraction)
- The importance ratio of the second concept (C_2): IRP(C_2) = 1.88 / 7.52 = 0.25
Illustrative example
- Illustrative example of the relationship between students' answers and test items (R): a table with test items T_1–T_5 as rows and students S_1–S_5 as columns (the numeric entries were lost in extraction)
Illustrative example
- This slide shows how the relationship between students' answers and test items (R), the relationship between concepts (Z), and the relationship between test items and concepts (X) are combined into the relationship between students' understanding strength and concepts (USS)
- The worked example expands the understanding strength of student S_4 on concept C_5 as the sum of the products R_4j · Z_5m · X_mj over all test items j and concepts m (the numeric values were lost in extraction)
Illustrative example
- Threshold value of each concept: a table listing the threshold values for concepts C_1–C_5 (the numeric entries were lost in extraction)
- *m = 1, b = 0 for the threshold function in this case
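To make the whole PKD pipeline concrete, the following is a minimal runnable sketch of the computations summarised above, using the reconstructed formulas from the earlier slides. The matrices Z, X, and R are hypothetical placeholders (the actual example values were lost), and the per-student normalisation before the threshold comparison is an assumption rather than the published formulation.

import numpy as np

# Hypothetical data: 3 concepts, 3 test items, 2 students.
Z = np.array([[1.0, 0.4, 0.0],   # Z[i][m]: relationship between concepts i and m
              [0.4, 1.0, 0.6],
              [0.0, 0.6, 1.0]])
X = np.array([[0.8, 0.2, 0.0],   # X[m][j]: relevance of concept m to test item j
              [0.2, 0.6, 0.4],
              [0.0, 0.4, 0.8]])
R = np.array([[1, 0, 1],         # R[l][j]: 1 if student l answered item j correctly
              [0, 1, 1]])

S = (Z @ X).sum(axis=1)            # S(C_i)   = sum_m sum_j Z_im * X_mj   (reconstructed)
IRP = S / S.sum()                  # IRP(C_i) = S(C_i) / sum_x S(C_x)     (reconstructed)
t = 1.0 * IRP + 0.0                # t(C_i)   = m * IRP(C_i) + b, with m = 1, b = 0
US = R @ (Z @ X).T                 # US[l][i] = sum_j sum_m R_lj * Z_im * X_mj

# Assumed normalisation so each student's values are comparable with t(C_i).
US_norm = US / US.sum(axis=1, keepdims=True)
weak = US_norm < t                 # True where a concept should be strengthened
print(np.round(IRP, 2))
print(np.round(US_norm, 2))
print(weak)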
PKT&D System Architecture
Experiment
- Participants: a course instructor and 80 university students
- Course: bioinformatics
- Groups:
  - Experimental group: 40 students (used the PKT&D system)
  - Control group: 40 students (did not use the PKT&D system)
Experiment
- Subject: sequence analysis approaches and tools
- Concepts in prior knowledge: sequence characteristics and structures, statistical hypothesis testing, and formula expression format
- Instruction activities by unit (30 min each):
  - Understanding the importance of similarity: a series of guided questions (5), slide presentation (15), discussions (10)
  - Introduction to the most popular data-mining tool, BLAST: a series of guided questions (5), slide presentation (15), practice (10)
  - BLASTing protein sequences: a series of guided questions (5), slide presentation (10), practice (15)
  - Understanding BLAST output: slide presentation (15), discussions (15)
  - BLASTing DNA sequences: a series of guided questions (5), slide presentation (10), practice (15)
  - The BLAST way of doing things: slide presentation (15), practice (15)
Experiment Process
The learning motivation post-test scores of the two groups
- ANCOVA comparison of the experimental and control groups (40 students each, 80 in total); the table reports the mean, S.D., adjusted mean, F(1, 77), and p-value, with * marking a statistically significant difference (the numeric values were lost in extraction)
- To measure the students' learning motivation, the Motivated Strategies for Learning Questionnaire (MSLQ) was adopted in this study, using nine questionnaire items and a seven-point Likert scale
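As a reference for how such a comparison could be reproduced, the sketch below runs a one-way ANCOVA on post-test motivation scores with the pre-test score as covariate, using pandas and statsmodels. The data frame and its values are hypothetical; the slides do not include the raw data.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student (40 per group in the study).
df = pd.DataFrame({
    "group": ["experimental"] * 3 + ["control"] * 3,
    "pre":   [4.1, 4.5, 3.9, 4.2, 4.4, 4.0],   # pre-test motivation (MSLQ, 7-point scale)
    "post":  [5.6, 5.9, 5.4, 4.6, 4.9, 4.5],   # post-test motivation
})

# ANCOVA: post-test motivation explained by group, controlling for pre-test motivation.
model = smf.ols("post ~ pre + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))          # F statistic and p-value for the group effect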
The paired t-test results of learning motivation for the two groups of students
- For each group (experimental and control), the table reports N, mean, and S.D. of the pre-test and post-test motivation scores and the paired t(39) statistic; * marks a significant pre-test to post-test difference (the numeric values were lost in extraction)
Students' attitude towards bioinformatics learning
(Experimental-group means shown; the S.D.s, control-group values, and t-values were lost in extraction; * marks a significant between-group difference)
1. I like learning bioinformatics (experimental mean 5.45)
2. The bioinformatics learning activities are helpful (experimental mean 5.68)
3. I like to practice using the software in the bioinformatics learning (experimental mean 5.75)
4. I had enough ability to learn the bioinformatics material (experimental mean 5.28) *
5. I can meet the instructor's requirements during the bioinformatics learning process (experimental mean 5.38) *
6. I can understand the bioinformatics material taught by the instructor (experimental mean 5.45) *
Experiment group students' perceptions of using the PKT&D system
- Response scale: EU: Extremely Unlikely; QU: Quite Unlikely; SU: Slightly Unlikely; Neither; SL: Slightly Likely; QL: Quite Likely; EL: Extremely Likely (the percentages and means were lost in extraction)
1. Using the PKT&D system in learning bioinformatics would enable me to diagnose and strengthen prior knowledge more effectively
2. Using the PKT&D system would improve my bioinformatics learning performance
3. Using the PKT&D system in learning bioinformatics would increase my learning comprehension productivity
4. Using the PKT&D system would make it easier to learn bioinformatics
5. I would find the PKT&D system useful in the bioinformatics class
Independent pre-test on knowledge of bioinformatics for the two groups
- The table reports N, mean, and S.D. of the pre-test scores for the experimental and control groups, together with the independent t-value (the numeric values were lost in extraction)
The paired t-test results of the learning improvement for the two groups
- For each group, the table reports N, mean, and S.D. of the pre-test and post-test knowledge scores and the paired t-value; * marks a significant pre-test to post-test improvement (the numeric values were lost in extraction)
- * p < 0.05
Independent post-test on knowledge of bioinformatics for the two groups
- The table reports the number of students, mean, and S.D. of the post-test scores for the experimental and control groups, together with the independent t-value; * marks a significant difference (the numeric values were lost in extraction)
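For reference, the sketch below shows how the paired (pre vs. post within a group) and independent (experimental vs. control post-test) comparisons reported on these slides could be computed with SciPy. The score arrays are hypothetical, as the raw data are not given on the slides.

import numpy as np
from scipy import stats

# Hypothetical knowledge-test scores (0-100); the study had 40 students per group.
exp_pre  = np.array([55, 60, 52, 58, 61])
exp_post = np.array([78, 84, 75, 80, 86])
ctl_post = np.array([66, 70, 63, 68, 71])

# Paired t-test: did the experimental group improve from pre-test to post-test?
t_paired, p_paired = stats.ttest_rel(exp_pre, exp_post)

# Independent t-test: do the two groups differ on the post-test?
t_ind, p_ind = stats.ttest_ind(exp_post, ctl_post)

print(f"paired: t = {t_paired:.2f}, p = {p_paired:.3f}")
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")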
Example interview comments about the three inductive topics
Instruction
- Realization of students (IN): I felt that the students in the experiment group better met my requirements during the course.
- Achievement of students (IN): I felt that the students in the experiment group demonstrated high performance in each learning activity.
- Progress of instruction (IN): I had much more time to teach the concepts in more detail and interact with the students of the experiment group.
- Learning situation (IN): Only one third of the students from the control group could fully follow the activities and understand my instructions.
- Learning situation (SC & SE): Learning about bioinformatics software through practicing using it was interesting. I would have preferred more direct instruction from the course instructor.
- Learning situation (SC): I felt some of the concepts were difficult to grasp, which led to obstacles when I used the software during the course.
Interaction
- Discussion willingness (IN): The students in the experiment group had better discussions than the control group.
- Responsiveness of students (IN): The students in the experiment group often gave feedback and asked questions.
Technology
- System usefulness (SE): I felt the user interface of the PKT&D system was clear, straightforward, and convenient to use. I clearly saw the diagnostic results and learning suggestions.
- Auxiliary components (IN & SE): I felt that the PKT&D system served as a guide that helps the students to diagnose the weaknesses of their concepts. I can learn more prior knowledge using the PKT&D system.
Evaluation of correctness rate results for the three concepts diagnosis
- The tables compare, for each student, the correctness rate of the system's three-concept diagnoses as judged by Expert 1 and Expert 2; the reported rates are 100%, 66.6%, or 33.3% per student (the student IDs and the alignment of individual values were lost in extraction)
Evaluation of correctness rate results for the five concepts diagnosis
- The tables compare, for each student, the correctness rate of the system's five-concept diagnoses as judged by Expert 1 and Expert 2; the reported rates range from 40% to 100% per student (the student IDs and the alignment of individual values were lost in extraction)
- Note: the correctness rate was obtained by comparing the diagnoses of the experts with those obtained using the proposed approach in an artificial intelligence course. The average correctness rates for the students are [value lost]% and 90%.
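As an illustration of how such a correctness rate could be computed, the snippet below compares an expert's weak/strong judgement per concept with the system's diagnosis for one student. The judgement vectors are hypothetical examples, not values from the study.

# Hypothetical per-concept judgements for one student (True = concept diagnosed as weak).
expert_diagnosis = [True, False, True, False, True]
system_diagnosis = [True, False, False, False, True]

matches = sum(e == s for e, s in zip(expert_diagnosis, system_diagnosis))
correctness_rate = matches / len(expert_diagnosis)
print(f"correctness rate: {correctness_rate:.0%}")   # 80% for this example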
Conclusion & Discussion
- A testing-based approach is proposed to diagnose the strength of individual students' prior knowledge of concepts and then provide them with appropriate materials to strengthen it
- The system helps instructors undertake their teaching more smoothly
- Educators can use the proposed system in different educational contexts
Limitation
- The two parameters of the linear threshold function (the gradient m and the intercept b) have to be adjusted based on instructors' expertise in different educational contexts
Future Work
- The number of test items in the item bank should be continually increased to address various subject objectives and instructors' needs
- Students' learning portfolios can be integrated into the proposed system to develop more appropriate diagnosis mechanisms