Bessel’s correction is the use of \(n − 1\) instead of \(n\) in the formula for the sample variance, where \(n\) is the number of observations in a sample. This correction removes the bias in the estimation of the population variance.

Recall that bias is defined as:

$$ \operatorname{bias}(\hat{\theta}) = E[\hat{\theta}] - \theta, $$


where \(\theta\) represents the actual parameter value and \(\hat{\theta}\) is an estimator of \(\theta\). A desirable property of an estimator is that its expected value equals the parameter being estimated, or \(E[\hat{\theta}] = \theta\). When this holds, the estimator is said to be unbiased.
Let \(\hat{\sigma}^{2}\) represent an estimator of the population variance \(\sigma^{2}\), given by

$$ \hat{\sigma}^{2} = \frac{1}{n}\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}. $$


To show that \(\hat{\sigma}^{2}\) is a biased estimator for \(\sigma^{2}\), let \(Y_{1}, Y_{2}, \ldots, Y_{n}\) be a random sample with \(E[Y_{i}] = \mu\) and \(Var[Y_{i}] = \sigma^{2}\). First, note that

$$ \sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2} = \sum_{i=1}^{n}Y_{i}^{2} - n\bar{Y}^{2}, $$
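This identity is easy to spot-check numerically; the following is a quick sketch with NumPy on arbitrary values (not part of the derivation):

```python
import numpy as np

y = np.array([7.0, 9.0, 10.0, 12.0, 15.0])
n, ybar = y.size, y.mean()

lhs = np.sum((y - ybar) ** 2)          # sum of squared deviations
rhs = np.sum(y ** 2) - n * ybar ** 2   # equivalent expanded form

print(lhs, rhs)  # equal up to floating-point rounding
```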


and as a result,

$$ E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = E\Big(\sum_{i=1}^{n}Y_{i}^{2}\Big) - nE(\bar{Y}^{2}) = \sum_{i=1}^{n}E(Y_{i}^{2}) - nE(\bar{Y}^{2}). $$


Rearranging the familiar expression for variance yields

$$ E[Y^{2}] = Var[Y] + E[Y]^{2} = \sigma^{2} + \mu^{2}, $$


and similarly,

$$ E[\bar{Y}^{2}] = Var[\bar{Y}] + E[\bar{Y}]^{2} = \sigma^{2}/n + \mu^{2}. $$


Therefore

$$ \begin{align*} E\Big[\sum_{i=1}^{n}(Y_{i}-\bar{Y})^{2}\Big] &= \sum_{i=1}^{n}(\sigma^{2}+\mu^{2})-n\Big(\frac{\sigma^{2}}{n} + \mu^{2}\Big) \\ &=n(\sigma^{2} + \mu^{2}) - n\Big(\frac{\sigma^{2}}{n} + \mu^{2}\Big) \\ &=n\sigma^{2} - \sigma^{2} = (n-1)\sigma^{2}. \end{align*} $$


Thus,

$$ E[\hat{\sigma}^{2}] = \frac{1}{n}E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = \frac{1}{n}(n-1)\sigma^{2} = \Big(\frac{n-1}{n}\Big)\sigma^{2}, $$


and we conclude that \(\hat{\sigma}^{2}\) is biased since \(E[\hat{\sigma}^{2}] \ne \sigma^{2}\). We now consider the sample variance \(S^{2} = \frac{1}{n-1}\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\), which replaces the divisor \(n\) with \(n-1\):

$$ E[S^{2}] = \frac{1}{n-1}E\Big[\sum_{i=1}^{n}(Y_{i} - \bar{Y})^{2}\Big] = \frac{1}{n-1}(n-1)\sigma^{2} = \sigma^{2}, $$


and since \(E[S^{2}] = \sigma^{2}\), we conclude that \(S^{2}\) is an unbiased estimator for \(\sigma^{2}\).
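Both expectations can also be checked with a quick Monte Carlo simulation; the following is a minimal sketch (the normal distribution, seed, and constants are illustrative choices, not from the derivation above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 5, 4.0, 200_000

# Draw `trials` independent samples of size n from N(0, sigma2).
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

biased = samples.var(axis=1, ddof=0).mean()    # divides by n
unbiased = samples.var(axis=1, ddof=1).mean()  # divides by n - 1

print(biased)    # close to (n - 1)/n * sigma2 = 3.2
print(unbiased)  # close to sigma2 = 4.0
```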

Demonstration

An important property of an unbiased estimator of a population parameter is that if the sample statistic is evaluated for every possible sample and the results averaged, the mean over all samples will exactly equal the population parameter. So for a given dataset with mean \(\mu\) and variance \(\sigma^{2}\), if the sample variance (division by \(n-1\)) is computed for every possible same-size sample drawn with replacement from the dataset, the average of the sample variances will exactly equal \(\sigma^{2}\). This also demonstrates (indirectly) that division by \(n\) would consistently underestimate the population variance.

We now attempt to verify this property on the following dataset:

$$ 7, 9, 10, 12, 15 $$


The Python itertools module exposes a collection of highly efficient iterators that stream values on demand based on various starting and/or stopping conditions. For example, the permutations function takes as arguments an iterable and a length r, and returns all r-length permutations of elements from the iterable (there is also a combinations function that does the same for r-length combinations). The product function generates the Cartesian product of the provided iterables and accepts an optional repeat argument. According to the docs:

To compute the product of an iterable with itself, specify the number of repetitions with the optional repeat keyword argument. For example, product(A, repeat=4) means the same as product(A, A, A, A).
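As a quick illustration of the difference between the two functions (a minimal example, not from the original post):

```python
import itertools

# product with repeat=2: all 2-element samples drawn with replacement.
print(list(itertools.product([0, 1], repeat=2)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]

# permutations never reuses an element within a tuple.
print(list(itertools.permutations([0, 1], 2)))
# [(0, 1), (1, 0)]
```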

We will leverage product to compute the average sample variance over all 2, 3, 4 and 5-element samples drawn with replacement from [7, 9, 10, 12, 15], and compare the results to the population variance. Before we begin, let's calculate the population mean and variance:

$$ \begin{align*} \mu &= 10.6 \\ \sigma^{2} &= \frac{1}{n}\sum_{i=1}^{5}Y_{i}^{2} - \mu^{2} \\ &= 119.8 - 10.6^{2} \\ &= 7.44 \end{align*} $$
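These values can be confirmed with NumPy, whose var method computes the population variance (division by \(n\)) by default; a quick sketch:

```python
import numpy as np

v = np.array([7, 9, 10, 12, 15])

print(v.mean())       # 10.6
print(v.var(ddof=0))  # ≈ 7.44 (population variance, division by n)
```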


We now compute the average of the sample variance over all k-element samples drawn with replacement from [7, 9, 10, 12, 15] for \(2 \le k \le 5\):

"""
Demonstrating that the sample variance is an unbiased estimator 
of the population variance. 

Generate all possible 2, 3, 4 and 5-element permutations from 
[7, 9, 10, 12, 15], and determine the sample variance of each 
sample. The average of the sample variances will exactly equate 
to the population variance if the sample variance is an unbiased 
estimator of the population variance.
"""
import itertools
import numpy as np

v = [7, 9, 10, 12, 15]

def sample_var(vals):
    """
    Compute the sample variance (division by n - 1) of the elements
    in vals, e.g. sample_var([7, 9]) == (7 - 9)**2 / 2 == 2.0.
    """
    return np.array(vals).var(ddof=1)


# Verify that the average of the sample variance over all
# k-element samples (with replacement) equates to the population
# variance of 7.44 for k = 2, 3, 4 and 5 =>
for k in range(2, 6):
    samples = list(itertools.product(v, repeat=k))
    result = sum(sample_var(s) for s in samples) / len(samples)
    print(k, result)  # approximately 7.44 for each k


Since the sample variance is an unbiased estimator of the population variance, these results should come as no surprise, but it is an interesting demonstration nonetheless.