I summarize the differences between model-space and data-space regularization in Table 1. Which of the two approaches is preferable in practical applications? In the case of ``trivial'' regularization (i.e., constraining the problem by minimizing the model power), the answer depends on the relative size of the model and data vectors: data-space regularization may be preferable when the data size is noticeably smaller than the model size. In the case of non-trivial regularization, the answer may additionally depend on the following questions:

- Which of the operators *D* or *P* (*C* or *C*^{-1}) is easier to construct and implement?
- Is an initial estimate for *x* available? In data-space regularization, it is difficult to start from a non-zero value of the model *m*.
- Is it possible to approximate or to compute analytically one of the inverted matrices in formula (27)?
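The equivalence underlying this choice can be checked numerically. The sketch below (with small random matrices standing in for the forward operator *L*, the regularization operator *D*, and the data *d*, and taking the model covariance *C* = (*D*^{T}*D*)^{-1}; all sizes and values are illustrative assumptions) shows that the model-space and data-space formulas yield the same model, while inverting matrices of different sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 8, 5  # model larger than data, the case favoring data space
L = rng.standard_normal((n_data, n_model))  # forward operator (assumption)
D = np.eye(n_model) + 0.1 * rng.standard_normal((n_model, n_model))  # regularization operator
d = rng.standard_normal(n_data)             # observed data (assumption)
eps = 0.5                                   # scaling parameter

# Model-space regularization: minimize |L m - d|^2 + eps^2 |D m|^2,
# which requires inverting an n_model-by-n_model matrix.
m_model = np.linalg.solve(L.T @ L + eps**2 * (D.T @ D), L.T @ d)

# Data-space regularization: the same minimizer written through the
# covariance C = (D^T D)^{-1}, inverting an n_data-by-n_data matrix instead.
C = np.linalg.inv(D.T @ D)
m_data = C @ L.T @ np.linalg.solve(L @ C @ L.T + eps**2 * np.eye(n_data), d)

assert np.allclose(m_model, m_data)  # both routes give the same model
```

The data-space route inverts a 5-by-5 matrix where the model-space route inverts an 8-by-8 one; the saving grows with the model-to-data size ratio.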

Ryzhikov and Troyan (1991) present a curious interpretation of the operator *L C L*^{T} in
ray tomography applications. In these applications, each data point
corresponds to a ray connecting a source-receiver pair. If
the model-space operator *C*^{-1} =

The scaling parameter ε controls the relative amount of *a
priori* information added to the problem. In a sense, it allows
us to reduce the search for an adequate model null space to the
one-dimensional problem of choosing the value of ε. Solving
the optimization problem with different values of ε leads to
the continuation approach, proposed by Bube and Langan (1994). As
Nichols (1994) points out, preconditioning can reduce
the sensitivity of the problem to the parameter ε if the initial
system (1) is essentially under-determined.
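The sensitivity to ε is easy to exhibit. The sketch below re-solves a small under-determined least-squares problem for a decreasing sequence of ε values (random matrices for illustration; this scans ε directly and is not Bube and Langan's actual continuation scheme): as ε shrinks, the data residual falls while the model norm grows, which is precisely the trade-off the parameter controls.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((5, 8))  # under-determined: more unknowns than data
d = rng.standard_normal(5)

# Re-solve min |L m - d|^2 + eps^2 |m|^2 for decreasing eps and
# track the trade-off between data fit and model size.
model_norms, residuals = [], []
for eps in [10.0, 1.0, 0.1, 0.01]:
    m = np.linalg.solve(L.T @ L + eps**2 * np.eye(8), L.T @ d)
    model_norms.append(np.linalg.norm(m))
    residuals.append(np.linalg.norm(L @ m - d))
    print(f"eps={eps:5.2f}  |m| = {model_norms[-1]:.3f}  |Lm-d| = {residuals[-1]:.3f}")

# As eps decreases, the residual drops while the model norm grows:
# the solution depends strongly on the choice of eps.
```

Continuation exploits this monotone trade-off by tracking the solution from large ε (heavily damped) toward small ε, rather than guessing a single value.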

11/11/1997