# Bessel’s Correction

Correcting the bias in estimators of population variance
Statistical Modeling
Python
Published

February 1, 2024

Bessel’s correction is the use of $$n-1$$ instead of $$n$$ in the formula for the sample variance, where $$n$$ is the number of observations in a sample. This correction removes the bias in the estimation of the population variance.

Recall that bias is defined as:

$\mathrm{Bias}(\hat{\theta}) = E[\hat{\theta}] - \theta,$

where $$\theta$$ represents the actual parameter value, $$\hat{\theta}$$ is an estimator of $$\theta$$, and $$E[\hat{\theta}]$$ is the estimator's expected value. A desirable property of an estimator is that its expected value equals the parameter being estimated, or $$E[\hat{\theta}] = \theta$$. When this occurs, the estimator is said to be unbiased. Let $$\sigma^{2}$$ represent the population variance, and consider the estimator $$\hat{\sigma}^{2}$$ given by

$\hat{\sigma}^{2} = \frac{1}{n}\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}.$

To show that $$\hat{\sigma}^{2}$$ is a biased estimator for $$\sigma^{2}$$, let $$Y_{1}, Y_{2}, \cdots, Y_{n}$$ be a random sample with $$E[Y_{i}] = \mu$$ and $$Var[Y_{i}] = \sigma^{2}$$. First, note that

$\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2} = \sum_{i=1}^{n}Y_{i}^{2} - n\bar{Y}^{2},$

and as a result

$E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = E\Big(\sum_{i=1}^{n}Y_{i}^{2}\Big) - nE(\bar{Y}^{2}) = \sum_{i=1}^{n}E(Y_{i}^{2}) - nE(\bar{Y}^{2}).$

Rearranging the familiar expression for variance yields

$E[Y^{2}] = Var[Y] + E[Y]^{2} = \sigma^{2} + \mu^{2},$

and similarly,

$E[\bar{Y}^{2}] = Var[\bar{Y}] + E[\bar{Y}]^{2} = \sigma^{2}/n + \mu^{2}.$

Therefore

\begin{align*} E\Big[\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}\Big] &= \sum_{i=1}^{n}(\sigma^{2}+\mu^{2})-n\Big(\frac{\sigma^{2}}{n} + \mu^{2}\Big) \\ &=n(\sigma^{2} + \mu^{2}) - n\Big(\frac{\sigma^{2}}{n} + \mu^{2}\Big) \\ &=n\sigma^{2} - \sigma^{2} = (n-1)\sigma^{2}. \end{align*}

Thus,

$E[\hat{\sigma}^{2}] = \frac{1}{n}E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = \frac{1}{n}(n-1)\sigma^{2} = \Big(\frac{n-1}{n}\Big)\sigma^{2},$

and we conclude that $$\hat{\sigma}^{2}$$ is biased since $$E[\hat{\sigma}^{2}] \ne \sigma^{2}$$. We now consider the sample variance $$S^{2} = \frac{1}{n-1}\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}$$:

$E[S^{2}] = \frac{1}{n-1}E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = \frac{1}{n-1}(n-1)\sigma^{2} = \sigma^{2},$

and since $$E[S^{2}] = \sigma^{2}$$, we conclude that $$S^{2}$$ is an unbiased estimator for $$\sigma^{2}$$.
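The factor $$\big(\tfrac{n-1}{n}\big)$$ derived above can also be seen empirically. The following sketch (my addition, not part of the original derivation) draws many samples of size $$n$$ from a normal population with known variance and compares the average of the biased estimator (division by $$n$$) against the unbiased one (division by $$n-1$$):

```python
# Simulation: average the biased (ddof=0) and unbiased (ddof=1) variance
# estimators over many samples of size n from a population with known
# variance sigma2. The biased average should approach ((n - 1) / n) * sigma2,
# the unbiased average should approach sigma2.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 5, 4.0, 200_000
samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(trials, n))

biased = samples.var(axis=1, ddof=0).mean()    # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

print(f"mean of biased estimator:   {biased:.3f} (theory: {(n - 1) / n * sigma2:.3f})")
print(f"mean of unbiased estimator: {unbiased:.3f} (theory: {sigma2:.3f})")
```

With $$n = 5$$ and $$\sigma^{2} = 4$$, the biased average settles near $$\tfrac{4}{5}\cdot 4 = 3.2$$ while the unbiased average settles near $$4$$.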

### Demonstration

An important property of an unbiased estimator of a population parameter is that if the sample statistic is evaluated for every possible sample and the results averaged, the mean over all samples will exactly equal the population parameter. For a given population with mean $$\mu$$ and variance $$\sigma^{2}$$, if the sample variance (division by $$n-1$$) is computed for every possible sample drawn with replacement from the dataset, the average of the sample variances will exactly equal $$\sigma^{2}$$. This also demonstrates (indirectly) that division by $$n$$ would consistently underestimate the population variance.

We now attempt to verify this property on the following dataset:

$7, 9, 10, 12, 15$

The Python `itertools` module exposes a collection of efficient iterators that stream values on demand based on various starting and/or stopping conditions. For example, `permutations` takes as arguments an iterable and the length of the permutation `r`, and returns all `r`-length permutations of elements from the iterable (`itertools` also exposes a `combinations` function that does the same for all `r`-length combinations). The `product` function generates the Cartesian product of the specified iterables, and takes an optional `repeat` argument. From the documentation:

> To compute the product of an iterable with itself, specify the number of repetitions with the optional repeat keyword argument. For example, product(A, repeat=4) means the same as product(A, A, A, A).
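The distinction matters here: `permutations` selects without replacement, while `product` with `repeat` selects with replacement. A quick illustration (my addition) of how the counts differ for our five-element dataset:

```python
import itertools

v = [7, 9, 10, 12, 15]

# permutations: ordered selections WITHOUT replacement -> 5 * 4 = 20 pairs
perms = list(itertools.permutations(v, 2))

# product with repeat=2: ordered selections WITH replacement -> 5**2 = 25 pairs
pairs = list(itertools.product(v, repeat=2))

print(len(perms), len(pairs))  # 20 25
```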

`product` is used to compute the average sample variance over all 2-, 3-, 4- and 5-element samples drawn with replacement from $$[7, 9, 10, 12, 15]$$, and the result is compared to the population variance. Before we begin, let's calculate the population mean and variance:

\begin{align*} \mu &= 10.6 \\ \sigma^{2} &= \frac{1}{n}\sum_{i=1}^{5}Y_{i}^{2} - \mu^{2} \\ &= 119.8 - 10.6^{2} \\ &= 7.44 \end{align*}
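These two values can be verified in a couple of lines of NumPy, treating the five observations as the entire population:

```python
import numpy as np

v = np.array([7, 9, 10, 12, 15])
mu = v.mean()                       # population mean: 10.6
sigma2 = (v ** 2).mean() - mu ** 2  # E[Y^2] - mu^2 = 119.8 - 10.6^2 = 7.44
print(mu, sigma2)
```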

We now compute the average of the sample variance over all $$k$$-element samples drawn with replacement from $$[7, 9, 10, 12, 15]$$ for $$2 \le k \le 5$$:

"""
Demonstrating that the sample variance is an unbiased estimator
of the population variance.

Generate all possible 2, 3, 4 and 5-element permutations from
[7, 9, 10, 12, 15], and determine the sample variance of each
sample. The average of the sample variances will exactly equate
to the population variance if the sample variance is an unbiased
estimator of the population variance.
"""
import itertools
import numpy as np

v = [7, 9, 10, 12, 15]

# Verify that the average of the sample variance
# for all 2-element samples equates to 7.44.
s2 = list(itertools.product(v, repeat=2))
result2 = np.mean([np.var(ii, ddof=1) for ii in s2])

# Verify that the average of the sample variance
# for all 3-element samples equates to 7.44.
s3 = list(itertools.product(v, repeat=3))
result3 = np.mean([np.var(ii, ddof=1) for ii in s3])

# Verify that the average of the sample variance
# for all 4-element samples equates to 7.44.
s4 = list(itertools.product(v, repeat=4))
result4 = np.mean([np.var(ii, ddof=1) for ii in s4])

# Verify that the average of the sample variance
# for all 5-element samples equates to 7.44.
s5 = list(itertools.product(v, repeat=5))
result5 = np.mean([np.var(ii, ddof=1) for ii in s5])

print(f"result2: {result2}")
print(f"result3: {result3}")
print(f"result4: {result4}")
print(f"result5: {result5}")
```
result2: 7.44
result3: 7.4399999999999995
result4: 7.44
result5: 7.44
```

(The `result3` value differs from 7.44 only by floating-point rounding.)

Since the sample variance is an unbiased estimator of the population variance, these results should come as no surprise, but it is an interesting demonstration nonetheless.
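As a final check of the claim that division by $$n$$ consistently underestimates the population variance, the same enumeration can be repeated with the biased estimator (`ddof=0`). The derivation above says the average should come out to $$\big(\tfrac{k-1}{k}\big)\sigma^{2}$$; for $$k = 2$$ that is $$\tfrac{1}{2}\cdot 7.44 = 3.72$$:

```python
import itertools
import numpy as np

v = [7, 9, 10, 12, 15]

# Average of the biased variance (division by n, i.e. ddof=0) over all
# 2-element samples with replacement: ((k - 1) / k) * sigma^2 = 0.5 * 7.44.
biased_avg = np.mean([np.var(s, ddof=0) for s in itertools.product(v, repeat=2)])
print(biased_avg)  # 3.72
```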