Point estimation gives only a single reference value for the parameter, but offers no error range for that value.
Interval estimation, by contrast, gives an estimation range together with the degree of reliability that the parameter lies in that interval.
Confidence interval
The distribution function of population $X$ is $F(x;\theta)$, $(X_1, …, X_n)$ are samples; for a given $\alpha$ $(0<\alpha<1)$,
if $\exists$ estimators $\hat \theta_1 = \theta_1(X_1, …, X_n), \hat \theta_2 = \theta_2(X_1,…,X_n)$, s.t. $P\{\hat\theta_1<\theta<\hat\theta_2\}\ge 1-\alpha, \forall \theta\in \Theta,$
then the interval $(\hat\theta_1, \hat\theta_2)$ is called the confidence interval of $\theta$ with confidence level $1-\alpha$; $\hat \theta_1$ is called the lower confidence limit and $\hat \theta_2$ the upper confidence limit.
Implication: sample 100 times (each with the same sample size $n$); each sample determines an interval $(\hat \theta_1, \hat\theta_2)$. By Bernoulli's law of large numbers, about $100(1-\alpha)\%$ of these intervals contain $\theta$; this is not the probability that $\theta$ falls in any one realized interval!
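This frequentist reading can be checked by simulation; a minimal sketch, assuming a normal population with hypothetical $\mu = 5$, $\sigma = 2$, $n = 30$, and the $\alpha = 0.05$ interval $\bar X \pm z_{\alpha/2}\,\sigma/\sqrt{n}$ derived later in these notes:

```python
import random
import statistics
from math import sqrt

# Hypothetical setup: the simulator knows mu, but each interval treats it
# as the unknown parameter; z_{alpha/2} = 1.96 for alpha = 0.05.
random.seed(0)
mu, sigma, n, z = 5.0, 2.0, 30, 1.96

trials = 10_000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    half = z * sigma / sqrt(n)
    # Does this realized random interval contain the true mu?
    if xbar - half < mu < xbar + half:
        covered += 1

print(covered / trials)  # close to 0.95
```

The coverage fraction hovers near $1-\alpha$, while any single realized interval either contains $\mu$ or does not.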
Call the mean length $E(\hat\theta_2-\hat\theta_1)$ the accuracy of the interval, and $\frac{E(\hat\theta_2-\hat\theta_1)}{2}$ the error limit.
Length $\nearrow$, confidence $\nearrow$, accuracy $\searrow$.
Neyman-Pearson Lemma
Among intervals with the same confidence level, choose the one with the highest accuracy (shortest mean length).
Equal-tailed confidence interval: assign probability $\frac{\alpha}{2}$ to each tail.
One-sided confidence interval
The distribution function of population $X$ is $F(x;\theta)$, $(X_1, …, X_n)$ are samples; for a given $\alpha$ $(0<\alpha<1)$,
if $\exists$ estimator $\hat \theta_1 = \theta_1(X_1, …, X_n)$, s.t. $P\{\theta>\hat\theta_1\}\ge 1-\alpha, \forall \theta\in \Theta,$
then call $\hat \theta_1$ the one-sided lower confidence limit of $\theta$ with confidence level $1-\alpha$, and $(\hat\theta_1, +\infty)$ a one-sided confidence interval.
if $\exists$ estimator $\hat \theta_2 = \theta_2(X_1, …, X_n)$, s.t. $P\{\theta<\hat\theta_2\}\ge 1-\alpha, \forall \theta\in \Theta,$
then call $\hat \theta_2$ the one-sided upper confidence limit of $\theta$ with confidence level $1-\alpha$, and $(-\infty, \hat\theta_2)$ a one-sided confidence interval.
if $\hat \theta_1$ is a one-sided lower confidence limit of $\theta$ with confidence level $1-\alpha_1,$ $\hat\theta_2$ a one-sided upper confidence limit of $\theta$ with confidence level $1-\alpha_2$,
then $(\hat\theta_1, \hat\theta_2)$ is a confidence interval of $\theta$ with confidence level at least $1-\alpha_1-\alpha_2$.
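This combination rule is just the union bound applied to the two one-sided statements:

$$
P\{\hat\theta_1<\theta<\hat\theta_2\} = 1 - P\big(\{\theta\le\hat\theta_1\}\cup\{\theta\ge\hat\theta_2\}\big) \ge 1 - P\{\theta\le\hat\theta_1\} - P\{\theta\ge\hat\theta_2\} \ge 1-\alpha_1-\alpha_2,
$$

since the two one-sided limits give $P\{\theta\le\hat\theta_1\}\le\alpha_1$ and $P\{\theta\ge\hat\theta_2\}\le\alpha_2$.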
<aside> <img src="/icons/light-bulb_lightgray.svg" alt="/icons/light-bulb_lightgray.svg" width="40px" /> e.g.14 $X\sim N(\mu, \sigma^2)$, $X_1, …, X_n$ are samples, $\sigma^2$ known; find the confidence interval of $\mu$ with confidence level $1-\alpha$.

Because $\hat \mu = \hat \mu_L=\bar X$, it is reasonable to take $\bar X$ as the estimator of $\mu$. For $d_1,d_2>0$,

$P(\bar X-d_1<\mu<\bar X+d_2)=1-\alpha \iff P(-d_2<\bar X-\mu < d_1) = 1-\alpha \iff P\left(\frac{-d_2}{\sigma/\sqrt{n}}<\frac{\bar X-\mu}{\sigma/\sqrt{n}}<\frac{d_1}{\sigma/\sqrt{n}}\right) = 1-\alpha\iff P\left(k_1<\frac{\bar X-\mu}{\sigma/\sqrt{n}}<k_2\right) =1-\alpha.$

From the Neyman-Pearson Lemma, take $k_2=z_{\frac{\alpha}{2}}=-k_1$. Then

$P\left(-z_{\frac{\alpha}{2}}<\frac{\bar X-\mu}{\sigma/\sqrt{n}}<z_{\frac{\alpha}{2}}\right) =1-\alpha \Rightarrow P\left(\bar X-z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}<\mu<\bar X+z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}\right)=1-\alpha.$

Therefore the confidence interval is $\left(\bar X-z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}},\ \bar X+z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt{n}}\right)$.
</aside>
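As a numeric sketch of the formula in e.g.14, with hypothetical numbers (known $\sigma=2$, observed $\bar x = 5.2$, $n=25$, confidence level $1-\alpha=0.95$):

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical data summary: sigma known, xbar observed from a sample of size n.
sigma, xbar, n, alpha = 2.0, 5.2, 25, 0.05

z = NormalDist().inv_cdf(1 - alpha / 2)  # z_{alpha/2}, about 1.96
half = z * sigma / sqrt(n)               # half-width z_{alpha/2} * sigma / sqrt(n)
lo, hi = xbar - half, xbar + half

print((round(lo, 3), round(hi, 3)))  # → (4.416, 5.984)
```

Note that `half` depends only on $\sigma$, $n$, and $\alpha$, not on the data, so here the accuracy $E(\hat\theta_2-\hat\theta_1) = 2z_{\alpha/2}\sigma/\sqrt{n}$ is a constant.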
In solving the example above, we have in fact already applied the pivot method: $\frac{\bar X-\mu}{\sigma/\sqrt{n}} \sim N(0,1)$ is a pivotal quantity, a function of the sample and $\mu$ whose distribution does not depend on any unknown parameter.