LOOCV for Evaluating Machine Learning Algorithms


The Leave-One-Out Cross-Validation, or LOOCV, procedure is used to estimate the performance of machine learning algorithms when they are used to make predictions on data not used to train the model.
It is a computationally expensive procedure to perform, although it results in a reliable and unbiased estimate of model performance. Although the procedure is simple to use and requires no configuration, there are times when it should not be used, such as when you have a very large dataset or a computationally expensive model to evaluate.
In this tutorial, you will discover how to evaluate machine learning models using leave-one-out cross-validation.
After completing this tutorial, you will know:

The leave-one-out cross-validation procedure is appropriate when you have a small dataset or when an accurate estimate of model performance is more important than the computational cost of the method.
How to use the scikit-learn machine learning library to perform the leave-one-out cross-validation procedure.
How to evaluate machine learning algorithms for classification and regression using leave-one-out cross-validation.

Let’s get started.

LOOCV for Evaluating Machine Learning Algorithms. Photo by Heather Harvey, some rights reserved.

Tutorial Overview
This tutorial is divided into three parts; they are:

LOOCV Model Evaluation
LOOCV Procedure in Scikit-Learn
LOOCV to Evaluate Machine Learning Models
    LOOCV for Classification
    LOOCV for Regression

LOOCV Model Evaluation
Cross-validation, or k-fold cross-validation, is a procedure used to estimate the performance of a machine learning algorithm when making predictions on data not used during the training of the model.
The procedure has a single hyperparameter “k” that controls the number of subsets into which the dataset is split. Once split, each subset is given the opportunity to be used as the test set, while all remaining subsets together are used as the training dataset.
This means that k-fold cross-validation involves fitting and evaluating k models. This, in turn, provides k estimates of a model’s performance on the dataset, which can be reported using summary statistics such as the mean and standard deviation. This score can then be used to compare and ultimately select a model and configuration to use as the “final model” for a dataset.
Typical values for k are k=3, k=5, and k=10, with k=10 being the most common. Extensive empirical testing has shown that 10-fold cross-validation provides a good balance of low computational cost and low bias in the estimate of model performance compared to other values of k and to a single train-test split.
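To make this concrete, below is a minimal sketch of the procedure using scikit-learn, assuming a synthetic binary classification dataset and a logistic regression model (both chosen purely for illustration):

# evaluate a model with 10-fold cross-validation on a synthetic dataset
from numpy import mean, std
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# create a synthetic binary classification dataset (an assumption for demonstration)
X, y = make_classification(n_samples=100, n_features=10, random_state=1)
# configure the cross-validation procedure with k=10
cv = KFold(n_splits=10, shuffle=True, random_state=1)
# fit and evaluate one model per fold, collecting k accuracy scores
model = LogisticRegression()
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv)
# summarize the k estimates with the mean and standard deviation
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))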
For more on k-fold cross-validation, see the tutorial:

A Gentle Introduction to k-fold Cross-Validation

Leave-one-out cross-validation, or LOOCV, is a configuration of k-fold cross-validation where k is set to the number of examples in the dataset. That is, given a dataset with N examples, N models are fit and evaluated, each trained on N-1 examples and tested on the single held-out example.
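As a small illustration, the following sketch enumerates the splits produced by scikit-learn’s LeaveOneOut class, again on a synthetic dataset chosen only for demonstration; with 10 examples, it yields 10 train/test splits, each holding out a single example:

# enumerate the train/test splits produced by LOOCV
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut

# a small synthetic dataset with 10 examples (an assumption for demonstration)
X, y = make_classification(n_samples=10, n_features=5, random_state=1)
cv = LeaveOneOut()
# one split per example: 9 examples for training, 1 held out for testing
for train_ix, test_ix in cv.split(X):
    print('train size=%d, test index=%d' % (len(train_ix), test_ix[0]))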
