
Iterative data regularization

  According to the theoretical conclusions of Chapter [*], data regularization can be formulated as an optimization problem. In fact, there are two theoretically equivalent formulations: model-space regularization and data-space regularization. The former is closely related to Tikhonov's regularization for ill-posed inverse problems (Tikhonov and Arsenin, 1977). Mathematically, it extends the data space and constructs a composite column operator. Data-space regularization extends the model space and constructs a composite row operator. It leads to the concept of model preconditioning (Nichols, 1994).
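For reference, the formal least-squares estimates produced by the two formulations can be sketched as follows (the notation here is my own shorthand rather than that of Chapter [*]: $\epsilon$ scales the regularization term, and $\bold{C} = (\bold{D}^T \bold{D})^{-1}$ plays the role of a model covariance):
$$
\hat{\bold{m}}_{\mathrm{model}} = \left(\bold{L}^T \bold{L} + \epsilon^2\, \bold{D}^T \bold{D}\right)^{-1} \bold{L}^T \bold{d}\;, \qquad
\hat{\bold{m}}_{\mathrm{data}} = \bold{C}\, \bold{L}^T \left(\bold{L}\, \bold{C}\, \bold{L}^T + \epsilon^2\, \bold{I}\right)^{-1} \bold{d}\;.
$$
The two expressions are algebraically identical whenever $\bold{D}^T \bold{D}$ is invertible, which is the sense in which the formulations are theoretically equivalent.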

Although the final results of model-space and data-space regularization are theoretically identical, the behavior of iterative gradient-based methods, such as the method of conjugate gradients, differs between the two cases. The most obvious difference arises when the number of model parameters is significantly larger than the number of data measurements. In that case, the matrix inverted in data-space regularization is smaller than the model-space matrix, and the convergence of the conjugate-gradient iteration is correspondingly faster. But even when the numbers of model and data parameters are comparable, preconditioning changes the iteration behavior, because the gradients of the objective function with respect to the model parameters are different. The first iteration of model-space regularization yields $\bold{L}^T \bold{d}$ as the model estimate regardless of the regularization operator $\bold{D}$, while the first iteration of data-space regularization yields $\bold{C L}^T \bold{d}$, which is an already ``simplified'' version of the model. Since the exact solution is never reached by iteration in large-scale problems, the results of iterative optimization can turn out to be quite different. Harlan (1995) points out that the two components of the model-space regularization [Equations ([*]) and ([*])] conflict with each other: the first emphasizes ``details'' in the model, while the second tries to smooth them out. He describes the advantage of preconditioning as follows:
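The first-iteration difference is easy to verify. As a sketch, assume that both iterations start from a zero model and that $\bold{C} = \bold{P} \bold{P}^T$ for some preconditioning operator $\bold{P}$. For the model-space objective $\left\Vert\bold{d} - \bold{L m}\right\Vert^2 + \epsilon^2 \left\Vert\bold{D m}\right\Vert^2$, the gradient at $\bold{m}_0 = \bold{0}$ is $-2\, \bold{L}^T \bold{d}$, because the regularization term contributes $2 \epsilon^2\, \bold{D}^T \bold{D}\, \bold{m}_0 = \bold{0}$; the first update is therefore proportional to $\bold{L}^T \bold{d}$ for any choice of $\bold{D}$. For the preconditioned objective $\left\Vert\bold{d} - \bold{L P p}\right\Vert^2 + \epsilon^2 \left\Vert\bold{p}\right\Vert^2$, the gradient at $\bold{p}_0 = \bold{0}$ is $-2\, \bold{P}^T \bold{L}^T \bold{d}$, and the corresponding model update $\bold{m}_1 = \bold{P} \bold{p}_1$ is proportional to $\bold{P} \bold{P}^T \bold{L}^T \bold{d} = \bold{C}\, \bold{L}^T \bold{d}$.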

The two objective functions produce different results when optimization is incomplete. A descent optimization of the original (model-space) objective function will begin with complex perturbations of the model and slowly converge toward an increasingly simple model at the global minimum. A descent optimization of the revised (data-space) objective function will begin with simple perturbations of the model and slowly converge toward an increasingly complex model at the global minimum. ... A more economical implementation can use fewer iterations. Insufficient iterations result in an insufficiently complex model, not in an insufficiently simplified model.

In this chapter, I illustrate the two approaches with synthetic examples and with real data from simple environmental data sets. All examples show that when we solve the optimization problem iteratively and take the output after only a limited number of iterations, it is preferable to use the preconditioning approach. A particularly convenient method is preconditioning by recursive filtering, which is extended to the multidimensional case with the help of Claerbout's helix transform (Claerbout, 1998a). Invertible multidimensional filters can be created by helical spectral factorization.
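To make the helix idea concrete, here is a small Python sketch (my own illustration, not code from this paper; the function names, the grid size, and the two-coefficient filter are hypothetical). A 2-D filter is unwrapped onto a 1-D helix by converting each coefficient's grid offset into a 1-D lag, after which inverse filtering reduces to 1-D polynomial division; the recursion is stable when the 1-D filter is minimum-phase, which is what helical spectral factorization is meant to guarantee.

import numpy as np

def helix_lags(offsets, n1):
    # Map 2-D filter-coefficient offsets (i1, i2) to 1-D lags on an n1-wide helix.
    return [i1 + i2 * n1 for (i1, i2) in offsets]

def polydiv(data, lags, coeffs):
    # Inverse (recursive) filtering: solve a * m = data for m by forward substitution,
    # assuming the filter a has coefficient 1 at zero lag and is minimum-phase.
    m = np.asarray(data, dtype=float).ravel().copy()
    for i in range(m.size):
        for lag, c in zip(lags, coeffs):
            if i >= lag:
                m[i] -= c * m[i - lag]
    return m.reshape(np.shape(data))

# Example: inverse-filter a data-space spike on a 5-by-6 grid.
n1, n2 = 6, 5
d = np.zeros((n2, n1))
d[2, 3] = 1.0
lags = helix_lags([(1, 0), (0, 1)], n1)   # lags 1 and n1 on the helix
coeffs = [-0.5, -0.4]                     # a(Z) = 1 - 0.5 Z - 0.4 Z**n1 (hypothetical filter)
m = polydiv(d, lags, coeffs)              # a smooth, "simple" model produced from the spike

Used as a preconditioner, an operator of this kind plays the role of $\bold{P}$ in the data-space formulation sketched above, so that every model update is already smoothed by the recursive filter.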



 