Iterative Deconvolution

With iterative methods, a sequence of approximations of f is constructed, where, ideally, each subsequent approximation provides a better reconstruction. Mathematically this is equivalent to solving a particular optimization problem involving K and g, which could be formulated as something simple, such as a linear least squares problem, or as something more complicated that incorporates (possibly nonlinear) constraints. As with spectral filtering methods, regularization must be incorporated, for example through a priori constraints, through appropriate convergence criteria, or through a combination of such techniques. All the algorithms considered here have the general form shown in Algorithm 1. The most computationally expensive operations are performed in line 3 of this algorithm and include a matrix-vector product with K and a linear system solve involving the preconditioner P. In most cases, both of these operations can be implemented efficiently using trigonometric transforms or sparse matrix computations. The goal of preconditioning is to speed up convergence without significantly increasing the computational cost per iteration.

Algorithm 1 (general form):

1. f_0 = initial estimate of f
2. for j = 0, 1, 2, . . .
3. update f_{j+1} using a matrix-vector product with K and a linear solve with the preconditioner P
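As a concrete illustration of this general form, the following is a minimal sketch instantiating the update step as a preconditioned Landweber iteration for K f = g. This particular update rule, and the names `K`, `g`, `P`, `tau`, and `preconditioned_landweber`, are illustrative assumptions; the text above does not fix a specific method, only the overall loop structure and the cost of line 3.

```python
import numpy as np

def preconditioned_landweber(K, g, P, tau=1.0, n_iter=50):
    """Sketch of the general iterative scheme (Algorithm 1 above),
    instantiated as a preconditioned Landweber iteration (an assumption).

    K : (m, n) blurring matrix
    g : (m,) observed (blurred, noisy) data
    P : (n, n) preconditioner; a linear solve with P should be cheap
    tau : step size; for convergence it should satisfy
          0 < tau < 2 / ||P^{-1} K^T K||
    """
    f = np.zeros(K.shape[1])              # line 1: initial estimate of f
    for j in range(n_iter):               # line 2: for j = 0, 1, 2, ...
        r = g - K @ f                     # line 3: matrix-vector product with K
        f = f + tau * np.linalg.solve(P, K.T @ r)  # line 3: solve with P
    return f
```

In practice K would rarely be stored as a dense matrix; as the text notes, the products with K and the solves with P are typically carried out via trigonometric transforms (for structured blurs) or sparse matrix computations, and regularization enters through the stopping index `n_iter` or additional constraints.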