Method of Moments Estimation (MOME)
Suppose we observe a sample $X_1,\cdots,X_n\stackrel{\text{iid}}{\sim} X$ with $X\sim f(x;\theta)$, $\theta\in\Omega\subseteq\mathbb{R}^r$.
We define the $k$-th moments:
Population: $E_\theta(X^k)=m_k(\theta)$
Sample: $\frac{1}{n}\sum_{i=1}^n X_i^k=m_k(\utilde{X})$
By the Law of Large Numbers, the sample moments converge to the population moments as the sample size grows, i.e. $m_k(\utilde{X})\xrightarrow{P}m_k(\theta)$ as $n\to\infty$.
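This convergence can be sketched with a small simulation (an illustrative example, not part of the original notes): for $X\sim\text{Exp}(1)$ the population moment is $E(X^2)=2$, and the second sample moment drifts toward it as $n$ grows.

```python
import random

random.seed(0)

def sample_moment(xs, k):
    """k-th sample moment: (1/n) * sum of X_i^k."""
    return sum(x ** k for x in xs) / len(xs)

# For Exp(1), E(X^2) = 2; the sample moment approaches 2 as n grows.
for n in (100, 10_000, 200_000):
    xs = [random.expovariate(1.0) for _ in range(n)]
    print(n, sample_moment(xs, 2))
```

For the largest sample the second sample moment should sit close to the population value 2.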
If a distribution has several parameters and its moments can be expressed in terms of those parameters, we can set $m_k(\theta)=m_k(\utilde{X})$, $k=1,2,\cdots$, and solve the resulting system of equations to obtain estimators of the parameters.
If $\hat{\theta}$ is the MOME of $\theta$, then for any function $\eta(\theta)$ we can simply plug in $\hat{\theta}$, i.e. $\widehat{\eta(\theta)}=\eta(\hat{\theta})$.
EX : $X_1,\cdots,X_n$ iid with $E(X)=\mu$, $Var(X)=\sigma^2<\infty$
Set
$$\begin{cases}
\mu=E(X)=m_1(\theta)=m_1(\utilde{X})=\frac{1}{n}\sum_{i=1}^nX_i=\bar{X}\\
\mu^2+\sigma^2=E(X^2)=m_2(\theta)=m_2(\utilde{X})=\frac{1}{n}\sum_{i=1}^nX_i^2
\end{cases}$$
$$\implies
\begin{cases}
\mu=\bar{X}\\
\sigma^2=\frac{1}{n}\sum_{i=1}^nX_i^2-\bar{X}^2=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2
\end{cases}$$
$$\implies
\begin{cases}
\hat{\mu}_{MOME}=\bar{X}\\
\hat{\sigma}^2_{MOME}=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2=S^2_*
\end{cases}$$
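The solved system above translates directly into code. The following sketch (my own illustration, with an arbitrary $N(5, 2^2)$ sample) computes $\hat{\mu}_{MOME}$ and $\hat{\sigma}^2_{MOME}$ from the first two sample moments; it also shows the plug-in rule $\widehat{\eta(\theta)}=\eta(\hat{\theta})$ with $\eta=\sqrt{\cdot}$.

```python
import random
import math

def mome_mean_var(xs):
    """MOME from the first two moment equations:
    mu_hat = m1, sigma2_hat = m2 - m1^2 = S_*^2."""
    n = len(xs)
    m1 = sum(xs) / n                 # first sample moment
    m2 = sum(x * x for x in xs) / n  # second sample moment
    return m1, m2 - m1 ** 2

random.seed(1)
xs = [random.gauss(5.0, 2.0) for _ in range(100_000)]
mu_hat, sigma2_hat = mome_mean_var(xs)   # close to (5, 4)
sigma_hat = math.sqrt(sigma2_hat)        # plug-in estimate of sigma
```

Note that $\hat{\sigma}^2_{MOME}=m_2(\utilde{X})-m_1(\utilde{X})^2$ is exactly $S^2_*$, matching the derivation.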
For a two-parameter distribution, the MOME of the parameters can essentially always be obtained in this way.
Note that $\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2=S^2_*\neq S^2=\frac{1}{n-1}\sum_{i=1}^n(X_i-\bar{X})^2$. $S^2$ is an unbiased estimator of the variance, while $S^2_*$ is a biased estimator.
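The bias is easy to see by simulation (an illustrative sketch, not from the original notes): averaging over many small samples from $N(0,1)$, the $1/n$ estimator centers on $(n-1)/n\cdot\sigma^2$ while the $1/(n-1)$ estimator centers on $\sigma^2$.

```python
import random

random.seed(2)
n, reps = 5, 100_000
avg_biased = 0.0     # running mean of S_*^2;  E[S_*^2] = (n-1)/n = 0.8
avg_unbiased = 0.0   # running mean of S^2;    E[S^2]   = 1.0
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    avg_biased += ss / n / reps
    avg_unbiased += ss / (n - 1) / reps
```

With $n=5$ the biased estimator underestimates the variance by a factor of $4/5$ on average.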
EX : $X_1,\cdots,X_n\stackrel{\text{iid}}{\sim} U(\alpha, \beta)$, $\theta=(\alpha, \beta)$
$$\begin{cases}
\frac{\alpha+\beta}{2}=\bar{X}\\
\frac{(\beta-\alpha)^2}{12}=S^2_*
\end{cases}$$
$$\implies
\begin{cases}
\hat{\alpha}_{MOME}=\bar{X}-\sqrt{3}S_*\\
\hat{\beta}_{MOME}=\bar{X}+\sqrt{3}S_*
\end{cases}$$
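The closed-form solution above can be checked numerically. This sketch (my own example, sampling from $U(2,7)$) implements $\hat{\alpha}_{MOME}=\bar{X}-\sqrt{3}S_*$ and $\hat{\beta}_{MOME}=\bar{X}+\sqrt{3}S_*$:

```python
import random
import math

def mome_uniform(xs):
    """MOME for U(alpha, beta) from mean and variance equations."""
    n = len(xs)
    xbar = sum(xs) / n
    s_star = math.sqrt(sum((x - xbar) ** 2 for x in xs) / n)  # S_*
    return xbar - math.sqrt(3) * s_star, xbar + math.sqrt(3) * s_star

random.seed(3)
xs = [random.uniform(2.0, 7.0) for _ in range(100_000)]
alpha_hat, beta_hat = mome_uniform(xs)   # close to (2, 7)
```

Unlike $X_{(1)}$ and $X_{(n)}$, these estimators need not bracket all observations: $\hat{\alpha}_{MOME}$ can exceed $X_{(1)}$, a known drawback of the MOME here.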
Recall : $T=(X_{(1)},X_{(n)})$ is minimal sufficient for $(\alpha, \beta)$, and $\hat{\alpha}_{MOME},\hat{\beta}_{MOME}$ are not functions of $T$. Hence, they can be improved by the Rao-Blackwell Theorem.