Regularization is a method of imposing additional conditions for solving inverse problems with optimization methods. When the model parameters are not fully constrained by the problem (the inverse problem is mathematically ill-posed), regularization limits the variability of the model and guides the iterative optimization toward the desired solution by adding assumptions about the model's power, smoothness, predictability, etc. In other words, it constrains the model null space to an a priori chosen pattern. A thorough mathematical theory of regularization was developed by Tikhonov's school (Tikhonov and Arsenin, 1977).
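As a minimal numerical sketch of this idea (my own illustration, not taken from the paper), consider an underdetermined linear system: the data alone do not determine the model, but adding a Tikhonov penalty on the model power yields a unique solution. All names and sizes below are arbitrary choices for the example.

```python
import numpy as np

# Underdetermined problem: more model parameters than data points,
# so the unregularized least-squares solution is non-unique (ill-posed).
rng = np.random.default_rng(0)
L = rng.normal(size=(5, 20))   # forward operator: 5 data, 20 model parameters
d = rng.normal(size=5)         # observed data

# Tikhonov regularization: minimize |L m - d|^2 + eps^2 |m|^2.
# The normal equations become (L^T L + eps^2 I) m = L^T d,
# which has a unique solution for any eps > 0.
eps = 0.1
m = np.linalg.solve(L.T @ L + eps**2 * np.eye(20), L.T @ d)

# The regularized model fits the data closely while keeping
# the model power |m|^2 small (the null-space component is suppressed).
print(np.linalg.norm(L @ m - d), np.linalg.norm(m))
```

The penalty term selects, among the infinitely many data-fitting models, the one with the smallest norm; replacing the identity with a roughening operator would instead favor smooth models.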
In this paper, I discuss two alternative formulations of regularized least-squares inversion problems. The first formulation, which I call model-space, extends the data space and constructs a composite column operator. The second, data-space, formulation extends the model space and constructs a composite row operator. The second formulation is intrinsically related to the concept of model preconditioning. I illustrate the general theory with examples from Three-Dimensional Filtering (Claerbout, 1994).
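The contrast between the two formulations can be sketched in equations. The notation below is my own shorthand (not reproduced from the paper): $\mathbf{L}$ is the forward operator, $\mathbf{d}$ the data, $\mathbf{m}$ the model, $\mathbf{D}$ a regularization operator, $\epsilon$ a scalar weight, and $\mathbf{P}$ a model preconditioner.

```latex
% Model-space formulation: the data space is extended with zero
% "observations" of the regularized model, and a composite column
% operator acts on the model:
\[
  \begin{bmatrix} \mathbf{L} \\ \epsilon\,\mathbf{D} \end{bmatrix}
  \mathbf{m} \approx
  \begin{bmatrix} \mathbf{d} \\ \mathbf{0} \end{bmatrix}
  \quad\Longleftrightarrow\quad
  \min_{\mathbf{m}}\;
  \|\mathbf{L}\mathbf{m}-\mathbf{d}\|^2
  + \epsilon^2 \|\mathbf{D}\mathbf{m}\|^2 .
\]

% Data-space formulation: the change of variables m = P p
% (model preconditioning) extends the model space with a scaled
% residual r, and a composite row operator acts on [p; r]:
\[
  \begin{bmatrix} \mathbf{L}\mathbf{P} & \epsilon\,\mathbf{I} \end{bmatrix}
  \begin{bmatrix} \mathbf{p} \\ \mathbf{r} \end{bmatrix} = \mathbf{d},
  \qquad
  \min_{\mathbf{p},\,\mathbf{r}}\;
  \|\mathbf{p}\|^2 + \|\mathbf{r}\|^2 .
\]
```

Eliminating $\mathbf{r} = (\mathbf{d}-\mathbf{L}\mathbf{P}\mathbf{p})/\epsilon$ shows that the second problem minimizes $\|\mathbf{L}\mathbf{P}\mathbf{p}-\mathbf{d}\|^2/\epsilon^2 + \|\mathbf{p}\|^2$, which is why the row formulation is tied to preconditioning.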
Two excellent references cover almost all of the theoretical material in this note. One is the paper by Ryzhikov and Troyan (1991). The other is a short note by Harlan, available by courtesy of the author on the World Wide Web (Harlan, 1995). I have attempted to translate some of the ideas in these two references into the linear-operator language familiar to readers of Claerbout (1992, 1994).