
Regression Metrics for Machine Learning

Regression refers to predictive modelling problems that involve predicting a numeric value.

It differs from classification, which involves predicting a class label. Unlike classification, you cannot use classification accuracy to evaluate the predictions made by a regression model.

Instead, you must use error metrics specifically designed for evaluating predictions made on regression problems.

After going through this guide, you will know:

  • Regression predictive modelling involves problems where a numeric value must be predicted.
  • Metrics for regression involve calculating an error score to summarize the predictive skill of a model.
  • How to calculate and report mean squared error, root mean squared error, and mean absolute error.

Tutorial Summarization

This tutorial is divided into three parts:

1] Regression Predictive Modelling

2] Evaluating Regression Models

3] Metrics for Regression

  • Mean Squared Error
  • Root Mean Squared Error
  • Mean Absolute Error

Regression Predictive Modelling

Predictive modelling is the problem of developing a model using historical data to make a prediction on new data where we do not have the answer.

Predictive modelling can be described as the mathematical problem of approximating a mapping function (f) from input variables (X) to output variables (y). This is called the problem of function approximation.

The job of the modelling algorithm is to find the best mapping function we can, given the time and resources available.

Regression predictive modelling is the task of approximating a mapping function (f) from input variables (X) to a continuous output variable (y).

Regression differs from classification, which involves predicting a category or class label.

A continuous output variable is a real value, such as an integer or floating-point value. These are often quantities, such as amounts and sizes.

For example, a house may be predicted to sell for a specific dollar value, perhaps in the range of $100,000 to $200,000.

  • A regression problem requires the prediction of a quantity.
  • A regression problem can have real-valued or discrete input variables.
  • A problem with multiple input variables is often called a multivariate regression problem.
  • A regression problem where the input variables are ordered by time is called a time series forecasting problem.

Now that we are familiar with regression predictive modelling, let’s look at how we might evaluate a regression model.

Evaluating Regression Models

A common question posed by beginners to regression predictive modelling projects is:

“How do I calculate accuracy for my regression model?”

Accuracy (e.g. classification accuracy) is a measure for classification, not regression.

We cannot calculate accuracy for a regression model.

The skill or performance of a regression model must instead be reported as an error in those predictions.

This is logical if you think about it. If you are predicting a numeric value such as a height or a dollar amount, you don’t want to know whether the model predicted the value exactly (this may be intractably difficult in practice); instead, you want to know how close the predictions were to the expected values.

Error addresses exactly this and summarizes, on average, how close predictions were to their expected values.

There are three error metrics that are commonly used for evaluating and reporting the performance of a regression model:

  • Mean Squared Error (MSE)
  • Root Mean Squared Error (RMSE)
  • Mean Absolute Error (MAE)

There are many other metrics for regression, although these are the most widely used. In the next part of the blog post, we’ll take a closer look at each in turn.

Metrics for Regression

In this part of the blog post, we will take a closer look at the most common metrics for regression models and how to calculate them for your predictive modelling project.

Mean Squared Error

Mean Squared Error, or MSE for short, is a popular error metric for regression problems.

It is also an important loss function for algorithms fit or optimized using the least squares framing of a regression problem. Here, “least squares” refers to minimizing the mean squared error between predictions and expected values.

The MSE is calculated as the mean or average of the squared differences between predicted and expected target values in a dataset.

MSE = 1/N * sum for i to N (y_i - yhat_i)^2

Where y_i is the i’th expected value in the dataset and yhat_i is the i’th predicted value. The difference between these two values is squared, which has the effect of removing the sign, resulting in a positive error value.

The squaring also has the effect of inflating or magnifying large errors. That is, the larger the difference between the predicted and expected values, the larger the resulting squared positive error. This has the effect of “punishing” models more for larger errors when MSE is used as a loss function. It also has the effect of “punishing” models by inflating the average error score when MSE is used as a metric.
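To make this “punishing” effect concrete, here is a small contrived sketch (the data is made up purely for illustration) comparing MSE with MAE (the mean absolute error, discussed later in this post) on two sets of predictions that have the same total absolute error: one spread evenly across examples, and one concentrated in a single large miss.

# contrived demo: MSE punishes one large error more than many small ones
from sklearn.metrics import mean_squared_error, mean_absolute_error
expected = [0.0, 0.0, 0.0, 0.0]
# both prediction sets have a total absolute error of 2.0
spread_evenly = [0.5, 0.5, 0.5, 0.5]
one_large_miss = [2.0, 0.0, 0.0, 0.0]
print(mean_absolute_error(expected, spread_evenly))   # 0.5
print(mean_absolute_error(expected, one_large_miss))  # 0.5 (same MAE)
print(mean_squared_error(expected, spread_evenly))    # 0.25
print(mean_squared_error(expected, one_large_miss))   # 1.0 (four times larger)

The single large miss leaves the MAE unchanged but quadruples the MSE relative to the evenly spread errors.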

We can create a plot to get a feeling for how a change in prediction error impacts the squared error.

The example below gives a small contrived dataset of all 1.0 values and predictions that range from perfect (1.0) to wrong (0.0) by 0.1 increments. The squared error between each prediction and expected value is calculated and plotted to show the quadratic increase in squared error.

 

# calculate error
err = (expected[i] - predicted[i])**2

 

The complete example is listed below.

# example of increase in mean squared error
from matplotlib import pyplot
# real value
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted value
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = list()
for i in range(len(expected)):
    # calculate error
    err = (expected[i] - predicted[i])**2
    # store error
    errors.append(err)
    # report error
    print('>%.1f, %.1f = %.3f' % (expected[i], predicted[i], err))
# plot errors
pyplot.plot(errors)
pyplot.xticks(ticks=[i for i in range(len(errors))], labels=predicted)
pyplot.xlabel('Predicted Value')
pyplot.ylabel('Mean Squared Error')
pyplot.show()

 

Running the example first reports the expected value, predicted value, and squared error for each case.

We can see that the error rises quickly, faster than linear (a straight line).


>1.0, 1.0 = 0.000

>1.0, 0.9 = 0.010

>1.0, 0.8 = 0.040

>1.0, 0.7 = 0.090

>1.0, 0.6 = 0.160

>1.0, 0.5 = 0.250

>1.0, 0.4 = 0.360

>1.0, 0.3 = 0.490

>1.0, 0.2 = 0.640

>1.0, 0.1 = 0.810

>1.0, 0.0 = 1.000

 

A line plot is created showing the curved or super-linear increase in the squared error value as the difference between the expected and predicted value increases.

The curve is not a straight line as we would naively assume for an error metric.


The individual error terms are averaged so that we can report the performance of a model with regard to how much error the model makes generally when making predictions, rather than specifically for a given example.

The units of the MSE are squared units of the target variable.

For example, if your target value has the units “dollars,” then the MSE will be in “squared dollars.” This can be confusing to stakeholders; therefore, when reporting results, the root mean squared error is often used instead (discussed in the next section).

The mean squared error between your expected and predicted values can be calculated using the mean_squared_error() function from the scikit-learn library.

The function takes a one-dimensional array or list of expected values and predicted values and returns the mean squared error value.

 

# calculate errors
errors = mean_squared_error(expected, predicted)

 

The example below demonstrates calculating the mean squared error between a list of contrived expected and predicted values.

# example of calculating the mean squared error
from sklearn.metrics import mean_squared_error
# real value
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted value
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = mean_squared_error(expected, predicted)
# report error
print(errors)

 

Running the example calculates and prints the mean squared error.

0.35000000000000003

A perfect mean squared error value is 0.0, which means that all predictions matched the expected values exactly.

This is almost never the case, and if it happens, it suggests your predictive modelling problem is trivial.

A good MSE is relative to your particular dataset.

It is a good idea to first establish a baseline MSE for your dataset using a naive predictive model, such as one that predicts the mean target value from the training dataset. A model that achieves a better MSE than the MSE of the naive model has skill.
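As a rough sketch of how such a baseline might be established, scikit-learn’s DummyRegressor can predict the training mean. The synthetic dataset from make_regression below is an assumption purely for illustration; you would substitute your own data.

# sketch: baseline MSE from a naive model that always predicts the training mean
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
# contrived dataset standing in for real data
X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=1)
# naive model: ignores the inputs and predicts the mean of the training target
naive = DummyRegressor(strategy='mean')
naive.fit(X_train, y_train)
baseline_mse = mean_squared_error(y_test, naive.predict(X_test))
print('Baseline MSE: %.3f' % baseline_mse)

Any model worth using should achieve an MSE below this baseline on the same test set.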

Root Mean Squared Error

The Root Mean Squared Error, or RMSE, is an extension of the mean squared error.

Importantly, the square root of the error is calculated, which means that the units of the RMSE are the same as the original units of the target value that is being predicted.

For instance, if your target variable has the units “dollars”, then the RMSE error scores will also have the unit “dollars” and not “squared dollars” like the MSE.

As such, it is common to use MSE loss to train a regression predictive model and RMSE to evaluate and report its performance.

The RMSE can be calculated as follows:

  • RMSE = sqrt(1 / N * sum for i to N (y_i - yhat_i)^2)

Where y_i is the i’th expected value in the dataset, yhat_i is the i’th predicted value, and sqrt() is the square root function.

We can restate the RMSE in terms of the MSE as:

  • RMSE = sqrt(MSE)

Note that the RMSE cannot be calculated as the average of the square roots of the individual squared error values. This is a common mistake made by beginners and is an instance of Jensen’s inequality.

You may recall that the square root is the inverse of the square operation. MSE uses the square operation to remove the sign of each error value and to punish large errors. The square root reverses this operation, although it ensures that the result remains positive.
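As a small contrived check of this distinction (reusing the expected and predicted lists from the examples above), the square root of the mean squared error differs from the mean of the per-example square roots; in fact, the latter equals the mean absolute error.

# demonstrate sqrt(mean(squared errors)) != mean(sqrt(squared errors))
from math import sqrt
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
squared_errors = [(e - p)**2 for e, p in zip(expected, predicted)]
# correct: square root of the mean of the squared errors (the RMSE)
rmse = sqrt(sum(squared_errors) / len(squared_errors))
# incorrect: mean of the square roots of the squared errors (this is the MAE)
mean_of_roots = sum(sqrt(err) for err in squared_errors) / len(squared_errors)
print('RMSE: %.3f' % rmse)                    # 0.592
print('Mean of roots: %.3f' % mean_of_roots)  # 0.500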

The root mean squared error between your expected and predicted values can be calculated using the mean_squared_error() function from the scikit-learn library.

By default, the function calculates the MSE, but we can configure it to calculate the square root of the MSE by setting the “squared” argument to False.

The function takes a one-dimensional array or list of expected values and predicted values and returns the root mean squared error value.

 

# calculate errors
errors = mean_squared_error(expected, predicted, squared=False)

 

The example below demonstrates calculating the root mean squared error between a list of contrived expected and predicted values.

# example of calculating the root mean squared error

from sklearn.metrics import mean_squared_error

# real value

expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

# predicted value

predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]

# calculate errors

errors = mean_squared_error(expected, predicted, squared=False)

# report error

print(errors)

 

Running the example calculates and prints the root mean squared error.

0.5916079783099616
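Note that the “squared” argument has since been deprecated and then removed in recent releases of scikit-learn. If you are on a newer version (this sketch assumes scikit-learn 1.4 or later), the dedicated root_mean_squared_error() function can be used instead:

# sketch: RMSE with newer scikit-learn (assumes version 1.4+)
from sklearn.metrics import root_mean_squared_error
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# equivalent to mean_squared_error(expected, predicted, squared=False)
errors = root_mean_squared_error(expected, predicted)
print(errors)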

A perfect RMSE value is 0.0, which means that all predictions matched the expected values exactly.

This is almost never the case, and if it happens, it suggests your predictive modelling problem is trivial.

A good RMSE is relative to your particular dataset.

It is a good idea to first establish a baseline RMSE for your dataset using a naive predictive model, such as one that predicts the mean target value from the training dataset. A model that achieves a better RMSE than the RMSE of the naive model has skill.

Mean Absolute Error

Mean Absolute Error, or MAE, is a popular metric because, like RMSE, the units of the error score match the units of the target value that is being predicted.

Unlike the RMSE, the changes in MAE are linear and therefore intuitive.

That is, MSE and RMSE punish larger errors more than smaller errors, inflating or magnifying the mean error score. This is due to the squaring of the error value. The MAE does not give more or less weight to different types of errors; instead, the scores increase linearly with increases in error.

As its name suggests, the MAE score is calculated as the average of the absolute error values. Absolute, or abs(), is a mathematical function that simply makes a number positive. Therefore, the difference between an expected and predicted value may be positive or negative but is forced to be positive when calculating the MAE.

The MAE can be calculated as follows:

  • MAE = 1 / N * sum for i to N abs(y_i - yhat_i)

 

Where y_i is the i’th expected value in the dataset, yhat_i is the i’th predicted value, and abs() is the absolute function.

We can create a plot to get a feeling for how a change in prediction error impacts the MAE.

The example below gives a small contrived dataset of all 1.0 values and predictions that range from perfect (1.0) to wrong (0.0) by 0.1 increments. The absolute error between each prediction and expected value is calculated and plotted to show the linear increase in error.

 

# calculate error
err = abs(expected[i] - predicted[i])

 

The complete example is listed below.

 

# plot of the increase of mean absolute error with prediction error
from matplotlib import pyplot
# real value
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted value
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = list()
for i in range(len(expected)):
    # calculate error
    err = abs(expected[i] - predicted[i])
    # store error
    errors.append(err)
    # report error
    print('>%.1f, %.1f = %.3f' % (expected[i], predicted[i], err))
# plot errors
pyplot.plot(errors)
pyplot.xticks(ticks=[i for i in range(len(errors))], labels=predicted)
pyplot.xlabel('Predicted Value')
pyplot.ylabel('Mean Absolute Error')
pyplot.show()

 

Running the example first reports the expected value, predicted value, and absolute error for each case.

We can see that the error rises linearly, which is intuitive and easy to understand.

 


>1.0, 1.0 = 0.000

>1.0, 0.9 = 0.100

>1.0, 0.8 = 0.200

>1.0, 0.7 = 0.300

>1.0, 0.6 = 0.400

>1.0, 0.5 = 0.500

>1.0, 0.4 = 0.600

>1.0, 0.3 = 0.700

>1.0, 0.2 = 0.800

>1.0, 0.1 = 0.900

>1.0, 0.0 = 1.000

 

A line plot is created showing the straight-line or linear increase in the absolute error value as the difference between the expected and predicted value increases.

The mean absolute error between your expected and predicted values can be calculated using the mean_absolute_error() function from the scikit-learn library.

The function takes a one-dimensional array or list of expected values and predicted values and returns the mean absolute error value.

 

# calculate errors
errors = mean_absolute_error(expected, predicted)

 

The example below demonstrates calculating the mean absolute error between a list of contrived expected and predicted values.

 

# example of calculating the mean absolute error
from sklearn.metrics import mean_absolute_error
# real value
expected = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# predicted value
predicted = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.0]
# calculate errors
errors = mean_absolute_error(expected, predicted)
# report error
print(errors)

 

Running the example calculates and prints the mean absolute error.

0.5

A perfect mean absolute error value is 0.0, which means that all predictions matched the expected values exactly.

This is almost never the case, and if it happens, it suggests that your predictive modelling problem is trivial.

A good MAE is relative to your particular dataset.

It is a good idea to first establish a baseline MAE for your dataset using a naive predictive model, such as one that predicts the mean target value from the training dataset. A model that achieves a better MAE than the MAE of the naive model has skill.

Further Reading

This section provides additional resources on the topic if you are looking to go deeper.

APIs

Scikit-learn API: Regression Metrics

Scikit-Learn User Guide Section 3.3.4 Regression metrics

sklearn.metrics.mean_squared_error API

sklearn.metrics.mean_absolute_error API

Articles

Mean squared error, Wikipedia

Root-mean-square deviation, Wikipedia

Mean absolute error, Wikipedia

Coefficient of determination, Wikipedia

Conclusion

In this guide, you discovered how to calculate error for regression predictive modelling projects.

Specifically, you learned:

  • Regression predictive modelling involves problems where a numeric value must be predicted.
  • Metrics for regression involve calculating an error score to summarize the predictive skill of a model.
  • How to calculate and report mean squared error, root mean squared error, and mean absolute error.