How to Use Polynomial Feature Transforms for Machine Learning
The input features for a predictive modeling task often interact in unexpected and nonlinear ways.
These interactions can be identified and modeled by a learning algorithm. Another approach is to engineer new features that expose these interactions and see if they improve model performance. Additionally, transforms like raising input variables to a power can help to better expose the important relationships between input variables and the target variable.
These features are called interaction and polynomial features and allow the use of simpler modeling algorithms as some of the complexity of interpreting the input variables and their relationships is pushed back to the data preparation stage. Sometimes these features can result in improved modeling performance, although at the cost of adding thousands or even millions of additional input variables.
In this tutorial, you will discover how to use polynomial feature transforms for feature engineering with numerical input variables.
After completing this tutorial, you will know:
Some machine learning algorithms prefer or perform better with polynomial input features.
How to use the polynomial features transform to create new versions of input variables for predictive modeling.
How the degree of the polynomial impacts the number of input features created by the transform.
Let’s get started.
This tutorial is divided into three parts; they are:
Polynomial Feature Transform
Polynomial Feature Transform Example
Effect of Polynomial Degree
Polynomial features are those features created by raising existing features to an exponent.
For example, if a dataset had one input feature X, then a polynomial feature would be the addition of a new feature (column) where values were calculated by squaring the values in X, e.g. X^2. This process can be repeated for each input variable in the dataset, creating a transformed version of each.
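This squaring step can be sketched directly with NumPy; the data values here are made up purely for illustration:

```python
import numpy as np

# Hypothetical single-feature dataset: one input column X.
X = np.array([[2.0],
              [3.0],
              [4.0]])

# Add a new column holding the squared values, i.e. X^2.
X_poly = np.hstack([X, X ** 2])

print(X_poly)
# [[ 2.  4.]
#  [ 3.  9.]
#  [ 4. 16.]]
```

The same idea repeats for each input variable in a multi-column dataset.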
As such, polynomial features are a type of feature engineering, e.g. the creation of new input features based on the existing features.
The “degree” of the polynomial is used to control the number of features added, e.g. a degree of 3 will add two new variables for each input variable. Typically a small degree is used, such as 2 or 3.
Generally speaking, it is unusual to use d greater than 3 or 4 because for large values of d, the polynomial curve can become overly flexible and can take on some very strange shapes.
— Page 266, An Introduction to Statistical Learning with Applications in R, 2014.
It is also common to add new variables that represent the interaction between features,...
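If only these cross-terms are wanted, scikit-learn's PolynomialFeatures supports an `interaction_only` flag; a minimal sketch, again on made-up data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical dataset with two input variables.
X = np.array([[2.0, 3.0]])

# interaction_only=True keeps cross-terms such as x0*x1 but drops
# pure powers like x0^2; include_bias=False omits the constant column.
trans = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
X_inter = trans.fit_transform(X)

print(X_inter)
# [[2. 3. 6.]]
```

The output columns are the two originals plus their product.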