Hypothesis Test for Comparing Machine Learning Algorithms


Machine learning models are chosen based on their mean performance, often calculated using k-fold cross-validation.
The algorithm with the best mean performance is expected to outperform those with worse mean performance. But what if the difference in mean performance is caused by a statistical fluke?
The solution is to use a statistical hypothesis test to evaluate whether the difference in the mean performance between any two algorithms is real or not.
In this tutorial, you will discover how to use statistical hypothesis tests for comparing machine learning algorithms.
After completing this tutorial, you will know:

Performing model selection based on the mean model performance can be misleading.
Five repeats of two-fold cross-validation with a modified Student's t-Test is a good practice for comparing machine learning algorithms.
How to use the MLxtend machine learning library to compare algorithms using a statistical hypothesis test.

Kick-start your project with my new book Statistics for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.

Hypothesis Test for Comparing Machine Learning Algorithms. Photo by Frank Shepherd, some rights reserved.

Tutorial Overview
This tutorial is divided into three parts; they are:

Hypothesis Test for Comparing Algorithms
5×2 Procedure With MLxtend
Comparing Classifier Algorithms

Hypothesis Test for Comparing Algorithms
Model selection involves evaluating a suite of different machine learning algorithms or modeling pipelines and comparing them based on their performance.
The model or modeling pipeline that achieves the best performance according to your performance metric is then selected as your final model that you can then use to start making predictions on new data.
This applies to regression and classification predictive modeling tasks with classical machine learning algorithms and deep learning. It’s always the same process.
The problem is, how do you know the difference between two models is real and not just a statistical fluke?
This problem can be addressed using a statistical hypothesis test .
One approach is to evaluate each model on the same k-fold cross-validation split of the data (e.g. using the same random number seed to split the data in each case) and calculate a score for each split. This would give a sample of 10 scores for 10-fold cross-validation. The scores can then be compared using a paired statistical hypothesis test because the same treatment (rows of data) was used for each algorithm to come up with each score. The Paired Student’s t-Test could be used.
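As a concrete illustration of this naive approach, the sketch below scores two models on the same 10-fold splits (fixed via a shared random seed) and compares the paired per-fold scores with a Paired Student's t-Test. The synthetic dataset and the two classifiers are illustrative assumptions, not part of the tutorial's worked example.

```python
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic classification dataset (illustrative assumption)
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

# the same 10-fold splits are used for both models via a fixed random seed
cv = KFold(n_splits=10, shuffle=True, random_state=1)

# one accuracy score per fold for each model
scores1 = cross_val_score(LogisticRegression(), X, y, scoring='accuracy', cv=cv)
scores2 = cross_val_score(DecisionTreeClassifier(random_state=1), X, y, scoring='accuracy', cv=cv)

# paired test: each pair of scores was computed on the same fold of data
t_stat, p_value = ttest_rel(scores1, scores2)
print('t=%.3f, p=%.3f' % (t_stat, p_value))
```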
A problem with using the Paired Student's t-Test in this case is that each evaluation of the model is not independent: the same rows of data are used to train the models multiple times across the folds, which violates the independence assumption of the test. This is why a modified version of the test is needed.
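The practice recommended above, five repeats of two-fold cross-validation (5×2cv) with an adjusted t-statistic, is implemented in the MLxtend library as paired_ttest_5x2cv and is covered in the next section. A minimal sketch of calling it, again assuming a synthetic dataset and two illustrative classifiers rather than the tutorial's own example:

```python
from mlxtend.evaluate import paired_ttest_5x2cv
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# synthetic classification dataset (illustrative assumption)
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)

model1 = LogisticRegression()
model2 = DecisionTreeClassifier(random_state=1)

# five repeats of two-fold cross-validation with the modified t-statistic
t_stat, p_value = paired_ttest_5x2cv(estimator1=model1, estimator2=model2,
                                     X=X, y=y, scoring='accuracy', random_seed=1)
print('t=%.3f, p=%.3f' % (t_stat, p_value))
```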
