Performance Evaluation of Machine Learning Algorithms by K-Fold Cross-Validation and Hold-Out Validation for Classification of Survey Write-in Responses

Andrea Roberson
Any views expressed are those of the author and not necessarily those of the U.S. Census Bureau.

Background

The Annual Capital Expenditures Survey (ACES) provides national estimates of capital investments in U.S. businesses. U.S. Census Bureau studies identified areas of improvement in our editing processes, with the aim of improving the timeliness and quality of our estimates while reducing cost. A U.S. Census Bureau Economic Edit Reduction team identified edits and processes that can be automated; its suggestions included automating the manual examination of ACES survey write-ins.

There are many strategies available for cross-validation, a method for estimating how well a model generalizes to unseen data. It is unclear whether Hold-Out validation (HV) is a better validation scheme than K-Fold Cross-Validation (KFCV) for large write-in survey datasets.

Goal

Research and deploy a Machine Learning (ML) classifier to accurately predict the correct class of capital expenditures.

Challenges

Data was acquired from ACES survey staff for the years 2015 and 2016.

[Figure: Classification Breakdown. Pie chart of the class distribution, with slices of 63%, 13%, 13%, and 10%; the slice labels were not recovered.]

50% HV vs 2 KFCV Example

[Figure: ten numbered data blocks (1-10) shown three times, with blocks marked as training or evaluation. A single 50% hold-out split evaluates on one half only, while the two folds of 2 KFCV swap the halves so every block is used for both training and evaluation. A sketch of this contrast appears at the end of the page.]

Methodology

We compared the predictive performance of Support Vector Machines (SVMs) and Logistic Regression (LR) for predicting write-in responses, using various fine-tuning parameters. After selecting LR for our production model, we calculated the percentage of cases in which KFCV prevails over HV.

The experiments were performed using Python on an Intel Core i7 processor with 16 GB of RAM. We consider a dataset large when it has over 1,000 instances; our dataset has 18,789 instances. LR was applied with 5 KFCV and 20% HV, and then again for several values of k between 5 and 100; each HV fraction matches the per-fold evaluation share of the corresponding k (for example, 20% HV pairs with 5 KFCV and 1% HV with 100 KFCV). The second sketch at the end of the page illustrates this sweep. A summary of the results is shown below.

Results

Technique (HV): 20% HV, 10% HV, 6.7% HV, 5% HV, 4% HV, 2% HV, 1% HV
LR accuracy (five of the seven values recovered): 0.913, 0.916, 0.907, 0.917, 0.899

Technique (KFCV): 5 KFCV, 10 KFCV, 15 KFCV, 20 KFCV, 25 KFCV, 50 KFCV, 100 KFCV
LR accuracy (five of the seven values recovered): 0.927, 0.938, 0.930, 0.931, 0.932

A is the number of cases in which KFCV gives better accuracy than HV, B is the number of cases in which HV gives better accuracy than KFCV, and C is the percentage of A over the 7 cases.

Dataset   A   B   C
ACES      7   0   100%

Conclusions

Our analysis of KFCV versus HV on a large write-in dataset shows that we can use KFCV instead of HV, with varying values of k, for higher-quality and more efficient classification. The computational time of fitting 100 folds was under 8.5 minutes. Accuracy increased with KFCV by up to 3%. This finding has been absent from most of the literature on cross-validation.

Current Work

- Develop U.S. Census Bureau-wide best practices for applying cross-validation to automated text analysis.
- Analyze other U.S. Census Bureau training sets of various sizes that have been developed for text classification.
- Consider other ML methods such as Random Forests and Ensembles.
- Predict the upper limit of k for computational efficiency of KFCV over HV.

References

Wong, T. (2015). Performance evaluation of classification algorithms by k-fold and leave-one-out cross validation. Pattern Recognition, vol. 48, no. 9.
Zhang, Y. and Yang, Y. (2015). Cross-validation for selecting a model selection procedure. Journal of Econometrics, vol. 187, no. 1.
U.S. Census Bureau. (2018). Annual Capital Expenditures Survey.
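To make the 50% HV vs 2 KFCV example concrete, here is a minimal sketch of the two schemes on ten toy records. The poster shows no code, so scikit-learn's splitters and the stand-in data here are assumptions, not the author's pipeline.

```python
# Minimal sketch of 50% HV vs 2 KFCV on ten toy records (blocks 1-10).
# scikit-learn is an assumption; the poster does not show code.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

records = np.arange(1, 11)  # stand-in for the ten blocks in the figure

# 50% HV: a single split; the evaluation half never contributes to training.
train, test = train_test_split(records, test_size=0.5, shuffle=False)
print("50% HV       train:", train, " evaluate:", test)

# 2 KFCV: two complementary splits; every record is used for both
# training and evaluation exactly once.
for fold, (tr_idx, te_idx) in enumerate(KFold(n_splits=2).split(records), start=1):
    print(f"2 KFCV fold {fold}  train:", records[tr_idx], " evaluate:", records[te_idx])
```

Under 50% HV, half of the data never informs the model and the accuracy estimate rests on a single split; under 2 KFCV the halves swap roles, so every record receives an out-of-fold evaluation.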
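And here is a sketch of the fuller HV-versus-KFCV sweep described in the Methodology section. scikit-learn, TF-IDF features, and the public 20 Newsgroups corpus are stand-ins chosen for illustration; the ACES data and the tuning parameters actually used are not given on the poster.

```python
# Sketch of the HV-vs-KFCV comparison with logistic regression.
# Assumptions (not from the poster): scikit-learn, TF-IDF features,
# and the 20 Newsgroups corpus standing in for the ACES write-ins.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold, cross_val_score, train_test_split

corpus = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
X = TfidfVectorizer(max_features=5000).fit_transform(corpus.data)
y = corpus.target
clf = LogisticRegression(max_iter=1000)

# Each HV fraction matches the per-fold evaluation share of one k,
# e.g. 20% HV pairs with 5 KFCV and 1% HV with 100 KFCV.
for k in (5, 10, 15, 20, 25, 50, 100):
    # Hold-out validation: a single split with a 1/k evaluation share.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1 / k, random_state=0)
    hv_acc = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))

    # K-fold cross-validation: k fits, every record evaluated exactly once.
    cv = KFold(n_splits=k, shuffle=True, random_state=0)
    kfcv_acc = cross_val_score(clf, X, y, cv=cv).mean()

    print(f"k={k:3d}  {100 / k:4.1f}% HV: {hv_acc:.3f}  {k} KFCV: {kfcv_acc:.3f}")
```

The extra cost of KFCV is the k model fits per setting; on the 18,789-instance ACES dataset the poster reports the 100-fold fit completing in under 8.5 minutes on a Core i7 with 16 GB of RAM, so the sweep is affordable at this scale.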