
Which Statistic Is The Best Unbiased Estimator For μ

Answer

Recall that if \(X_i\) is a normally distributed random variable with mean \(\mu\) and variance \(\sigma^2\), then \(E(X_i)=\mu\) and \(\text{Var}(X_i)=\sigma^2\). Therefore:

\(E(\bar{X})=E\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i\right)=\dfrac{1}{n}\sum\limits_{i=1}^nE(X_i)=\dfrac{1}{n}\sum\limits_{i=1}^n\mu=\dfrac{1}{n}(n\mu)=\mu\)

The first equality holds because we have merely replaced \(\bar{X}\) with its definition. The second equality holds by the rules of expectation for a linear combination. The third equality holds because \(E(X_i)=\mu\). The fourth equality holds because adding the value \(\mu\) up \(n\) times gives \(n\mu\). And, of course, the last equality is simple algebra.
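As a quick numerical sanity check, here is a minimal simulation sketch, assuming NumPy is available and using arbitrarily chosen values \(\mu = 5\), \(\sigma = 2\), and \(n = 10\) (these values are not part of the original derivation). Averaging the sample mean over many repeated samples should land very close to \(\mu\).

```python
import numpy as np

# Illustrative (assumed) parameters: mu = 5, sigma = 2, sample size n = 10.
rng = np.random.default_rng(0)
mu, sigma, n = 5.0, 2.0, 10
n_reps = 100_000  # number of simulated samples

# Draw n_reps samples of size n and compute the sample mean of each.
samples = rng.normal(loc=mu, scale=sigma, size=(n_reps, n))
sample_means = samples.mean(axis=1)

# Averaging the sample means over many repetitions approximates E(X-bar),
# which should be close to mu if the estimator is unbiased.
print("mu:                     ", mu)
print("average of sample means:", sample_means.mean())
```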

In summary, we have shown that:

\(E(\bar{X})=\mu\)

Therefore, the maximum likelihood estimator of \(\mu\) is unbiased. Now, let's check the maximum likelihood estimator of \(\sigma^2\). First, note that we can rewrite the formula for the MLE as:

\(\hat{\sigma}^2=\left(\dfrac{1}{n}\sum\limits_{i=1}^nX_i^2\right)-\bar{X}^2\)

because:

\(\begin{aligned} \hat{\sigma}^{2}=\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2} &=\frac{1}{n}\sum_{i=1}^{n}\left(x_{i}^{2}-2x_{i}\bar{x}+\bar{x}^{2}\right)\\ &=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}-2\bar{x}\cdot\underbrace{\frac{1}{n}\sum_{i=1}^{n}x_{i}}_{\bar{x}}+\frac{1}{n}\left(n\bar{x}^{2}\right)\\ &=\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}-\bar{x}^{2} \end{aligned}\)
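Because this rewrite is a purely algebraic identity, any sample of numbers verifies it. A short check, again assuming NumPy (the sample values are arbitrary), might look like:

```python
import numpy as np

# An arbitrary sample, purely for illustration; any numbers work here.
rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=10)
xbar = x.mean()

# Left-hand side: the MLE written with squared deviations from the mean.
lhs = np.mean((x - xbar) ** 2)

# Right-hand side: the rewritten form (1/n) * sum(x_i^2) - xbar^2.
rhs = np.mean(x ** 2) - xbar ** 2

print(lhs, rhs)              # the two values agree
print(np.isclose(lhs, rhs))  # True
```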

Then, taking the expectation of the MLE, we get:

\(E(\hat{\sigma}^2)=\dfrac{(n-1)\sigma^2}{n}\)

as illustrated here:

\(\begin{aligned} E(\hat{\sigma}^2) &= E\left[\dfrac{1}{n}\sum\limits_{i=1}^nX_i^2-\bar{X}^2\right]=\left[\dfrac{1}{n}\sum\limits_{i=1}^nE(X_i^2)\right]-E(\bar{X}^2)\\ &= \dfrac{1}{n}\sum\limits_{i=1}^n(\sigma^2+\mu^2)-\left(\dfrac{\sigma^2}{n}+\mu^2\right)\\ &= \dfrac{1}{n}(n\sigma^2+n\mu^2)-\dfrac{\sigma^2}{n}-\mu^2\\ &= \sigma^2-\dfrac{\sigma^2}{n}=\dfrac{n\sigma^2-\sigma^2}{n}=\dfrac{(n-1)\sigma^2}{n} \end{aligned}\)

The first equality holds from the rewritten form of the MLE. The second equality holds from the properties of expectation. The third equality holds from manipulating the alternative formulas for the variance, namely:


\(\text{Var}(X)=\sigma^2=E(X^2)-\mu^2\) and \(\text{Var}(\bar{X})=\dfrac{\sigma^2}{n}=E(\bar{X}^2)-\mu^2\)

The remaining equalities hold from simple algebraic manipulation. Now, because we have shown:

\(E(\hat{\sigma}^2) \neq \sigma^2\)

the maximum likelihood estimator of \(\sigma^2\) is a biased estimator.
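A rough simulation sketch, using the same assumed values \(\mu = 5\), \(\sigma = 2\), and \(n = 10\) as above, illustrates the bias: the average of the MLE over many samples sits near \(\dfrac{(n-1)\sigma^2}{n}\) rather than \(\sigma^2\), and rescaling by \(\dfrac{n}{n-1}\), which gives the usual sample variance, removes the bias.

```python
import numpy as np

# Illustrative (assumed) parameters, matching the earlier sketch.
rng = np.random.default_rng(2)
mu, sigma, n = 5.0, 2.0, 10
n_reps = 200_000  # number of simulated samples

samples = rng.normal(loc=mu, scale=sigma, size=(n_reps, n))
xbar = samples.mean(axis=1, keepdims=True)

# MLE of sigma^2 for each sample: divide the sum of squared deviations by n.
sigma2_mle = ((samples - xbar) ** 2).mean(axis=1)

print("true sigma^2:              ", sigma ** 2)
print("average MLE:               ", sigma2_mle.mean())
print("theoretical (n-1)sigma^2/n:", (n - 1) * sigma ** 2 / n)
print("rescaled by n/(n-1):       ", (n / (n - 1)) * sigma2_mle.mean())
```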
