Restricted Likelihood Maximum Sample Clauses
Restricted Likelihood Maximum. Maximum likelihood estimation (MLE) is a well-known and widely used approach to estimating unknown parameters in statistical models, and it has various extensions. When used for variance components inference, however, MLE produces biased estimates, because it fails to take into account the loss in degrees of freedom resulting from estimating the nuisance fixed-effect parameters (▇▇▇▇▇▇▇▇, 1977), although its estimator is asymptotically unbiased. In contrast, restricted maximum likelihood (ReML) is an alternative form of MLE that accounts for this loss of degrees of freedom and, in general, produces less biased variance component estimates than MLE. The ReML approach was first proposed and introduced by ▇▇▇▇▇▇▇▇▇ and ▇▇▇▇▇▇▇▇ (1971), and later reviewed and summarised by ▇▇▇▇▇▇▇▇ (1977). ReML has since become the most commonly used method of variance component analysis.

The ReML log-likelihood, after removing the constant term, is expressed as

A(ρ|Y) = −(1/2){ log|V| + log|X^T V^{-1} X| + (Y − Xβ_GLS)^T V^{-1} (Y − Xβ_GLS) },   (3.9)

where

β_GLS = arg min_β (Y − Xβ)^T V^{-1} (Y − Xβ) = (X^T V^{-1} X)^{-1} X^T V^{-1} Y

is the generalized least squares (GLS) estimator for β (▇▇▇▇▇▇▇▇, 1977). Simplifying Equation (3.9) and denoting R = I − X(X^T V^{-1} X)^{-1} X^T V^{-1} yield a simplified expression of the ReML log-likelihood:

A(ρ|Y) = −(1/2){ log|V| + log|X^T V^{-1} X| + Y^T V^{-1} R Y }.

The ReML estimate is obtained by maximizing this ReML log-likelihood function. The solution of this optimization problem is implicit, however, and no analytical expression is available, so an iterative method, the ▇▇▇▇▇▇ scoring algorithm (FSA), is employed to approximate the optimal value numerically and iteratively.
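The log-likelihood of Equation (3.9) is straightforward to evaluate numerically. A minimal sketch (the function name and the use of NumPy's slogdet/solve are illustrative choices, not taken from the source) computes β_GLS and then the ReML log-likelihood:

```python
import numpy as np

def reml_loglik(V, X, Y):
    """ReML log-likelihood with the constant term removed (Eq. 3.9):
    -1/2 * ( log|V| + log|X^T V^-1 X| + (Y - Xb)^T V^-1 (Y - Xb) ),
    where b is the GLS estimator of beta."""
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    # GLS estimator: beta = (X^T V^-1 X)^-1 X^T V^-1 Y
    beta = np.linalg.solve(XtVinvX, X.T @ Vinv @ Y)
    resid = Y - X @ beta
    _, logdet_V = np.linalg.slogdet(V)        # log|V|
    _, logdet_X = np.linalg.slogdet(XtVinvX)  # log|X^T V^-1 X|
    quad = resid @ Vinv @ resid               # (Y - Xb)^T V^-1 (Y - Xb)
    return -0.5 * (logdet_V + logdet_X + quad)
```

Since V is symmetric positive definite in this setting, a Cholesky factorization would be the numerically preferable route in practice; plain inv/slogdet simply keeps the sketch close to the notation of Equation (3.9). The quadratic form here equals Y^T V^{-1} R Y from the simplified expression.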

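The scoring iteration (FSA) mentioned above can be sketched for a concrete special case. The following illustration assumes a two-component model V = t0·I + t1·Z Z^T (this structure, the starting values, and all names are assumptions for the example, not taken from the source); it uses the standard REML score −(1/2){tr(P ∂V_i) − Y^T P ∂V_i P Y} and expected information (1/2) tr(P ∂V_i P ∂V_j), with P = V^{-1} R:

```python
import numpy as np

def reml_scoring(Y, X, Z, n_iter=200, tol=1e-10):
    """Scoring iteration for the illustrative two-component model
    V = t0*I + t1*Z Z^T (structure and starting values are assumptions)."""
    n = len(Y)
    dV = [np.eye(n), Z @ Z.T]        # dV/dtheta_i for each component
    theta = np.array([1.0, 1.0])     # starting values
    for _ in range(n_iter):
        V = theta[0] * dV[0] + theta[1] * dV[1]
        Vinv = np.linalg.inv(V)
        # P = V^-1 R = V^-1 - V^-1 X (X^T V^-1 X)^-1 X^T V^-1
        P = Vinv - Vinv @ X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
        PY = P @ Y
        # score_i = -1/2 * ( tr(P dV_i) - Y^T P dV_i P Y )
        score = np.array([-0.5 * (np.trace(P @ D) - PY @ D @ PY) for D in dV])
        # expected information: I_ij = 1/2 * tr(P dV_i P dV_j)
        info = 0.5 * np.array([[np.trace(P @ Di @ P @ Dj) for Dj in dV]
                               for Di in dV])
        step = np.linalg.solve(info, score)
        theta = np.maximum(theta + step, 1e-8)  # keep variances nonnegative
        if np.max(np.abs(step)) < tol:
            break
    return theta
```

As a sanity check on such an iteration: for a balanced one-way layout, the interior REML estimates coincide with the classical ANOVA estimators (within-group mean square for the residual variance, (MSB − MSW)/m for the group variance), so the converged values can be compared against those closed forms.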