Comparing Decision Rules

Presentation transcript:

Comparing Decision Rules
Decision accuracy of different decision rules combining multiple tests in a higher educational context
Iris Yocarini, Samantha Bouwmeester, Guus Smeets, & Lidia Arends
18/03/2016

Decision accuracy of the BSA (binding study advice) decision
Comparison of the decision based on the true score vs. the decision based on the observed score
Compensatory vs. conjunctive decision rule
Evaluating the argument for compensation, 'the average is more reliable', which should lead to more correct decisions

                                        Decision based on true score
                                        Negative BSA             Positive BSA
Decision based on    Negative BSA      Correct classification   False negative
observed score       Positive BSA      False positive           Correct classification

A misclassification is either a false negative or a false positive.
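A minimal sketch of the two rule families being compared, assuming the compensatory rule checks the grade average against the GPA minimum while the conjunctive rule requires every single grade to pass a per-test minimum (the function names and the 5.5 threshold are illustrative, the threshold taken from the GPA minimum used in the results tables):

```python
def compensatory_rule(grades, gpa_minimum=5.5):
    """Positive BSA if the average grade reaches the GPA minimum:
    a high grade can compensate for a low one."""
    return sum(grades) / len(grades) >= gpa_minimum

def conjunctive_rule(grades, per_test_minimum=5.5):
    """Positive BSA only if every individual grade reaches the minimum:
    no compensation between tests is allowed."""
    return all(g >= per_test_minimum for g in grades)

grades = [4.0, 6.0, 7.5]          # one insufficient grade, two sufficient ones
print(compensatory_rule(grades))  # True: the mean (about 5.83) compensates the 4.0
print(conjunctive_rule(grades))   # False: the 4.0 fails the per-test minimum
```

The same student can thus receive a positive BSA under one rule and a negative BSA under the other, which is exactly why the two rules produce different misclassification patterns.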

Results: compensatory vs. conjunctive rule
The compensatory rule yields a lower proportion of misclassifications.

Decision Rule   GPA Minimum   Mean Proportion Error
 1              5.5           0.06
 2
 3                            0.10
 4                            0.17
 5                            0.26
                              0.24
 6                            0.14
 7                            0.15
 8                            0.18
 9                            0.25
10
11              6.5           0.23
12                            0.22
13
14
15

Results: compensatory vs. conjunctive rule
The compensatory rule yields a lower proportion of misclassifications.

Decision Rule   GPA Minimum   Mean Proportion Error   True Positive BSA (% of sample)
 1              5.5           0.06                    92%
 2
 3                            0.10
 4                            0.17                    88%
 5                            0.26                    61%
                              0.24                    40%
 6                            0.14                    78%
 7                            0.15
 8                            0.18                    77%
 9                            0.25                    60%
10                                                    21%
11              6.5           0.23                    53%
12                            0.22
13
14                                                    48%
15                                                    9%

Results: compensatory vs. conjunctive rule
The compensatory rule has a higher sensitivity, and therefore a lower false negative rate, but a lower specificity, and therefore a higher false positive rate.

Decision Rule   GPA Minimum   Mean Proportion Error   Sensitivity   Specificity
 1              5.5           0.06                    0.98          0.38
 2
 3                            0.10                    0.93          0.51
 4                            0.17                    0.86          0.59
 5                            0.26                    0.74          0.71
                              0.24                    0.69          0.79
 6                            0.14                    0.97          0.45
 7                            0.15                    0.94          0.52
 8                            0.18                    0.88
 9                            0.25                    0.75
10                                                    0.63
11              6.5           0.23                    0.95          0.56
12                            0.22                                  0.58
13                                                    0.90          0.62
14                                                    0.80          0.72
15
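As a reminder of how the reported rates relate to the confusion table, the standard definitions are sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The counts below are invented purely for illustration, chosen so the result matches the 0.98 / 0.38 pattern of decision rule 1; they are not the study's data:

```python
def rates(tp, fn, tn, fp):
    """Compute sensitivity and specificity from confusion-table counts.
    tp: true positives, fn: false negatives, tn: true negatives, fp: false positives."""
    sensitivity = tp / (tp + fn)  # share of truly positive BSAs correctly detected
    specificity = tn / (tn + fp)  # share of truly negative BSAs correctly detected
    return sensitivity, specificity

# Hypothetical counts per 200 students (illustrative only):
sens, spec = rates(tp=98, fn=2, tn=38, fp=62)
print(round(sens, 2), round(spec, 2))  # 0.98 0.38
```

With these counts, very few deserving students are wrongly dismissed (high sensitivity), but many students who should receive a negative BSA are allowed to continue (low specificity), which is the trade-off the slide attributes to the compensatory rule.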

Conclusion & Discussion
Decision accuracy is an important consideration when choosing a decision rule.
The compensatory decision rule produces relatively fewer false negatives but more false positives; whether that trade-off is acceptable depends on the specific setting and the tests used.
Attention should therefore be paid both to the specific decision rule and to the selected tests.

Thank you for your attention! Questions? Contact: yocarini@fsw.eur.nl