
Comparison of F-test and mutual information

This example illustrates the differences between univariate F-test statistics and mutual information.

We consider 3 features x_1, x_2, x_3 distributed uniformly over [0, 1]; the target depends on them as follows:

y = x_1 + sin(6 * pi * x_2) + 0.1 * N(0, 1),

that is, the third feature is completely irrelevant.

The code below plots y against each individual x_i and shows the normalized values of the univariate F-test statistics and of the mutual information.

As the F-test captures only linear dependency, it rates x_1 as the most discriminative feature. Mutual information, on the other hand, can capture any kind of dependency between variables, and it rates x_2 as the most discriminative feature, which probably agrees better with our intuitive perception for this example. Both methods correctly mark x_3 as irrelevant.

Imports for Comparing F-test and Mutual Information Feature Scoring

f_regression measures only linear statistical dependence between each feature and the target: the F-test computes the ratio of explained to unexplained variance under a linear model fitted to each feature independently, producing a statistic that is high when the feature has a strong linear correlation with y. For x_1 (which enters the target linearly), the F-test correctly assigns a high score. However, for x_2 (which enters as sin(6 * pi * x_2), a highly nonlinear oscillating relationship), the linear correlation is nearly zero even though x_2 is the dominant source of variation in y, so the F-test incorrectly rates it as uninformative.
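To make this linear-only behavior concrete, here is a minimal sketch, reusing the same simulated data as the example code below, checking that the univariate F statistic is just a monotone transform of the squared Pearson correlation, F = r^2 / (1 - r^2) * (n - 2). The names r and f_manual are ours, introduced only for this check.

import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.RandomState(0)
X = rng.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * rng.randn(1000)

f_test, _ = f_regression(X, y)
n = X.shape[0]
# Pearson correlation of each feature with y
r = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(3)])
# Same monotone transform that f_regression applies (with centering, dof = n - 2)
f_manual = r**2 / (1 - r**2) * (n - 2)
print(np.allclose(f_test, f_manual))  # True: the F-test sees only linear correlation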

mutual_info_regression captures arbitrary functional dependencies via entropy estimation: mutual information I(X;Y) = H(Y) - H(Y|X) measures how much knowing X reduces uncertainty about Y, regardless of whether the relationship is linear, polynomial, periodic, or otherwise. It uses a k-nearest-neighbor entropy estimator that adapts to local density, making it effective at detecting nonlinear dependencies such as the sinusoidal relationship of x_2. Normalizing both score vectors by their maxima makes the two methods directly comparable on the same 0-to-1 scale. Note that both methods are univariate: they evaluate each feature independently, so neither can detect features that are informative only in combination with other features (interaction effects).
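As a small illustration of this difference (a sketch under the same data-generating assumptions as above, not part of the original example), the sinusoidal feature has near-zero Pearson correlation with y yet a clearly positive mutual information score; n_neighbors controls the k-nearest-neighbor estimator and defaults to 3.

import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.RandomState(0)
x2 = rng.rand(1000)
y = np.sin(6 * np.pi * x2) + 0.1 * rng.randn(1000)

print(np.corrcoef(x2, y)[0, 1])  # near 0: essentially no linear relationship
print(mutual_info_regression(x2.reshape(-1, 1), y, n_neighbors=3))  # clearly > 0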

# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause

import matplotlib.pyplot as plt
import numpy as np

from sklearn.feature_selection import f_regression, mutual_info_regression

# Simulate the data: three features uniform on [0, 1]; y depends on x_1
# linearly and on x_2 through a sine, while x_3 is pure noise.
np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)

# Compute both scores and normalize each to [0, 1] so they are comparable.
f_test, _ = f_regression(X, y)
f_test /= np.max(f_test)

mi = mutual_info_regression(X, y)
mi /= np.max(mi)

# Plot y against each feature, annotating each panel with both scores.
plt.figure(figsize=(15, 5))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.scatter(X[:, i], y, edgecolor="black", s=20)
    plt.xlabel("$x_{}$".format(i + 1), fontsize=14)
    if i == 0:
        plt.ylabel("$y$", fontsize=14)
    plt.title("F-test={:.2f}, MI={:.2f}".format(f_test[i], mi[i]), fontsize=16)
plt.show()
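Although the example stops at visual comparison, either scoring function can drive feature selection directly. Below is a minimal follow-on sketch using scikit-learn's SelectKBest with the X and y defined above; the names X_f and X_mi are ours.

from sklearn.feature_selection import SelectKBest

X_f = SelectKBest(score_func=f_regression, k=2).fit_transform(X, y)
X_mi = SelectKBest(score_func=mutual_info_regression, k=2).fit_transform(X, y)
# Both keep 2 of the 3 columns, but which columns survive depends on the scores.
print(X_f.shape, X_mi.shape)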