Convergence and Reliability. The iterative solution of optimization problem (A-8) is terminated by convergence criteria. The commonly used stopping criteria comprise two types of tests. The first is based on the magnitude of the root mean squared sum of residuals,

\[
\mathrm{RMSSR} = \sqrt{\frac{O(b)}{M+N+L}},
\]

and requires

\[
\Delta\mathrm{RMSSR} \le \tau_1 \quad \text{or} \quad \frac{\Delta\mathrm{RMSSR}}{\mathrm{RMSSR}+\varepsilon_a} \le \tau_1' \qquad \text{(A-10)}
\]

whereas the second criterion is the relative change in the parameter values:

\[
|\Delta b_i| \le \tau_2 \quad \text{or} \quad \frac{|\Delta b_i|}{|b_i|+\varepsilon_b} \le \tau_2 \qquad \text{(A-11)}
\]

where εa and εb are small values ensuring that the denominators are nonzero, and τ1, τ1', and τ2 are convergence accuracy tolerances. Usually, the second criterion (A-11) is tested only after the first criterion (A-10) is satisfied. Using criterion (A-11) alone, or merely observing a small step Δb, does not guarantee that the solution is at a minimum, since a large value of λ will also produce a very small step Δb. The accuracy tolerance τ1' is set to 0 so that the RMSSR is required to change in the decreasing direction. The accuracy tolerance τ2 is problem dependent and is a compromise between estimation accuracy and computational expense. For example, when the level of uncertainty in the input data increases, the objective function has a flat minimum and the parameters tend to wander near it; in that case, a smaller convergence tolerance has little effect on estimation accuracy but leads to additional iterations, thereby significantly increasing computational expense. An accuracy tolerance τ2 = 0.01 is chosen in our study.

The uncertainty of the estimated parameters is expressed by their confidence region, which is derived from linear regression analysis under the assumptions of normality and linearity. The normality assumption rests on the fact that the distribution of a sum of random variables tends towards normal as the number of terms becomes sufficiently large; the implication is that the measurement errors arise as a linear combination of a large number of small random factors.
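The two-stage stopping test described above can be sketched as follows. This is a minimal illustration, not the source's implementation: the function name `converged`, the default tolerance values other than τ2 = 0.01 and τ1' = 0, and the use of a signed ΔRMSSR in the relative test (so that τ1' = 0 enforces a non-increasing RMSSR) are assumptions.

```python
import numpy as np

def converged(rmssr_prev, rmssr_new, b_prev, b_new,
              tau1=1e-4, tau1_prime=0.0, tau2=0.01,
              eps_a=1e-12, eps_b=1e-12):
    """Two-stage stopping test: the RMSSR criterion (A-10) must hold
    before the parameter-change criterion (A-11) is even checked."""
    # Signed change in RMSSR; with tau1_prime = 0 the relative test
    # only passes when the RMSSR does not increase.
    d_rmssr = rmssr_new - rmssr_prev
    # Criterion (A-10): small absolute change, or small relative change
    crit1 = (abs(d_rmssr) <= tau1) or (d_rmssr / (rmssr_new + eps_a) <= tau1_prime)
    if not crit1:
        return False
    # Criterion (A-11): small absolute or relative change in every parameter
    db = np.abs(np.asarray(b_new, dtype=float) - np.asarray(b_prev, dtype=float))
    crit2 = np.all((db <= tau2) | (db / (np.abs(np.asarray(b_new, dtype=float)) + eps_b) <= tau2))
    return bool(crit2)
```

Note that a large Marquardt parameter λ shrinks Δb regardless of proximity to a minimum, which is exactly why criterion (A-11) is gated behind criterion (A-10) here rather than used on its own.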
The linearity assumption is that nonlinear functions of the parameters b can be approximated by a linearization within the confidence region. For a maximum likelihood estimator, the parameter covariance matrix is asymptotically given by:

\[
\hat{C} = s_o^2 \, H(\hat{b})^{-1} \qquad \text{(A-12)}
\]

or, under the Hessian approximation \(H = J^T W J\):

\[
\hat{C} = s_o^2 \, (J^T W J)^{-1} \qquad \text{(A-13)}
\]

with

\[
s_o^2 = \frac{e^T W e}{n - m} \qquad \text{(A-14)}
\]

where the circumflex indicates a posterior value, n is the total number of observations, m is the number of parameters to be estimated, and \(s_o^2\) is the estimated residual variance at the optimum. Consequently,...

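Equations (A-13) and (A-14) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the source's code: the function name `parameter_covariance` and the argument layout (Jacobian J at the optimum, weight matrix W, residual vector e) are assumptions.

```python
import numpy as np

def parameter_covariance(J, W, e):
    """Posterior parameter covariance under the Hessian
    approximation H = J^T W J (equations A-13 and A-14).

    J : (n, m) Jacobian at the optimum
    W : (n, n) weight matrix
    e : (n,)   residual vector at the optimum
    """
    n, m = J.shape
    # Estimated residual variance at the optimum, eq. (A-14)
    s2 = (e @ W @ e) / (n - m)
    # Posterior covariance matrix, eq. (A-13)
    C = s2 * np.linalg.inv(J.T @ W @ J)
    return C, s2
```

The square roots of the diagonal of C give the parameter standard errors from which the confidence region is constructed under the normality and linearity assumptions above.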