Why is the accuracy reported in the Classification Learner app different from the accuracy of the exported model on the training data set?
When I train a model using the Classification Learner app in MATLAB, the accuracy reported is very low (near 20%). However, after exporting the model, I see nearly 100% accuracy on the training data.
ANSWER
The Classification Learner app reports the validation accuracy of the model, computed with the validation scheme chosen when starting a new session in the app. The default in MATLAB R2018a is 5-fold cross-validation, so the reported accuracy is the accuracy on the held-out fold, averaged over the 5 folds, with the model trained on the other 4 folds each time.
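As a sketch of what the app computes behind the scenes, you can reproduce a 5-fold cross-validated accuracy at the command line (here using the built-in fisheriris data and a classification tree as an example; the app may use a different model type):

```matlab
% Reproduce the app's 5-fold cross-validated accuracy at the command line.
load fisheriris                      % example data shipped with MATLAB
mdl   = fitctree(meas, species);     % train a classification tree
cvmdl = crossval(mdl, 'KFold', 5);   % partition into 5 folds
cvAccuracy = 1 - kfoldLoss(cvmdl)    % average accuracy on the held-out folds
```

This value is typically noticeably lower than the accuracy of the same model evaluated on its own training data.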
When the model is exported to the workspace, it is retrained on the full data set. Predicting on that same data therefore gives the resubstitution accuracy, which is very high. On genuinely unseen data the accuracy would be much lower, and closer to the cross-validated estimate reported in the app.
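The resubstitution accuracy can be computed directly, again sketched with the fisheriris example data:

```matlab
% Resubstitution accuracy: evaluate the model on the same data it was
% trained on. This is an optimistic estimate of real-world performance.
load fisheriris
mdl = fitctree(meas, species);
resubAccuracy = 1 - resubLoss(mdl)   % usually near 1 on the training data
```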
To verify this, when loading the data into the Classification Learner app, you may set the ‘Validation’ option in the right-hand pane to ‘No Validation’. After training, you should see that the accuracy reported in the app is near 100%.
You can also verify this by splitting your data into training and test sets. Train the model in the app using only the training set, export it, and then evaluate it on the test set; the test-set accuracy should be comparable to the accuracy reported in the app. I tried this myself by randomly splitting the data table before importing it into the app.
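A minimal sketch of such a random split, assuming your data is in a table named `dataTable` (the variable names here are illustrative):

```matlab
% Randomly hold out 30% of the rows of a table as a test set.
rng('default')                                        % for reproducibility
cv = cvpartition(height(dataTable), 'HoldOut', 0.3);  % 70/30 split
trainData = dataTable(training(cv), :);               % import this into the app
testData  = dataTable(test(cv), :);                   % keep for evaluation
```

After training on `trainData` in the app and exporting the model (by default it is exported as a struct named `trainedModel`), you can predict on the held-out rows with `trainedModel.predictFcn(testData)` and compare the resulting accuracy against the value the app reported.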