The simple test examples in this section are borrowed from Claerbout (1999).

Figure 1 (data): The input data (right) are irregularly spaced samples of a sinusoid (left).

In the first example, the input data were randomly subsampled (with decreasing density) from a sinusoid (Figure 1). The forward operator in this case is linear interpolation. In other words, we seek a regularly sampled model on 200 grid points that could predict the data with forward linear interpolation. The sparse, irregular distribution of the input data makes enforcing regularization a necessity. Following Claerbout (1999), I applied convolution with the simple (1,-1) difference filter as the operator that enforces model continuity (the first-order spline). An appropriate preconditioner in this case is recursive causal integration. Figures 2 and 3 show the results of inverse interpolation after an exhaustive 300 iterations of the conjugate-direction method. The results of the model-space and data-space regularization look similar except for the boundary conditions outside the data range. As a result of using causal integration for preconditioning, the rightmost part of the model in the data-space case stays at a constant level instead of decreasing to zero. If we specifically wanted a zero-value boundary condition, we could easily implement it by adding a zero-value data point at the boundary.
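This setup can be sketched numerically. In the following illustration (the grid size of 200 comes from the text; the data size, random seed, solver, and scaling `eps` are my own assumptions), scipy's LSQR stands in for the conjugate-direction solver:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n = 200                                   # regular model grid (as in the text)
x = np.sort(rng.uniform(0.0, 1.0, 40))    # irregular data locations (assumed)
d = np.sin(4 * np.pi * x)                 # data: samples of a sinusoid

# Forward operator L: linear interpolation from the regular grid to x.
L = sp.lil_matrix((len(x), n))
for i, xi in enumerate(x):
    j = min(int(xi * (n - 1)), n - 2)     # left grid node
    w = xi * (n - 1) - j                  # interpolation weight
    L[i, j], L[i, j + 1] = 1.0 - w, w
L = L.tocsr()

# Model-space regularization: min |L m - d|^2 + eps^2 |D m|^2,
# where D is convolution with the (1,-1) difference filter.
eps = 0.1
D = sp.diags([1.0, -1.0], [0, 1], shape=(n - 1, n))
m_reg = lsqr(sp.vstack([L, eps * D]),
             np.concatenate([d, np.zeros(n - 1)]))[0]

# Data-space (preconditioned) version: substitute m = C p, where C is
# causal integration (a rough inverse of D): min |L C p - d|^2 + eps^2 |p|^2.
C = sp.csr_matrix(np.tril(np.ones((n, n))))
m_pre = C @ lsqr(L @ C, d, damp=eps)[0]
```

Both estimates fit the same data; they differ mainly in how the solution behaves between and beyond the data points, which is the point of the figures that follow.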

Figure 2 (im1): Estimation of a continuous function by the model-space regularization. The difference operator is the derivative operator (convolution with (1,-1)).

Figure 3 (fm1): Estimation of a continuous function by the data-space regularization. The preconditioning operator is causal integration.

As expected from the general theory, model preconditioning provides a much faster rate of convergence. I measured the rate of convergence with the model residual, which is the distance from the current model to the final solution. Figure 5 shows that the preconditioning (data-space regularization) method converged to the final solution in about six times fewer iterations than the model-space regularization. Since the cost of each iteration is roughly equal for the two methods, the computational economy is evident. Figure 4 shows the final solution together with the estimates from model- and data-space regularization after only 5 iterations of conjugate directions. The data-space estimate looks much closer to the final solution than its competitor.
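The model residual used here as a convergence measure can be computed by rerunning the solver with a capped iteration count and comparing each capped estimate against a fully converged one. A minimal sketch on a toy least-squares problem (the problem itself is made up for illustration):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.normal(size=(120, 60))            # toy overdetermined system
b = rng.normal(size=120)

# "Final" solution: let the solver run to convergence.
m_final = lsqr(A, b, iter_lim=500)[0]

# Model residual after k iterations: |m_k - m_final|.
ks = (1, 5, 20, 100)
model_residual = [np.linalg.norm(lsqr(A, b, iter_lim=k)[0] - m_final)
                  for k in ks]
```

Plotting `model_residual` against `ks` for the regularized and the preconditioned formulations gives a convergence comparison of the kind shown in Figure 5.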

Figure 4: The final solution and the estimates from model- and data-space regularization after 5 iterations of conjugate directions.

Figure 5 (schwab1): Convergence of the iterative optimization, measured in terms of the model residual. The ``d'' points stand for data-space regularization; the ``m'' points, for model-space regularization.

Changing the preconditioning operator changes the regularization result. Figure 6 shows the result of data-space regularization when a triangle smoother is applied as the model preconditioner. A triangle smoother is a filter with the *Z*-transform *T*(*Z*) = (1 - *Z*^{*N*})(1 - *Z*^{-*N*}) / (*N*^{2} (1 - *Z*)(1 - *Z*^{-1})) (Claerbout, 1992). I chose the filter length *N*=6.
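A triangle smoother can be built as the convolution of two boxcar filters, since the boxcar has the *Z*-transform (1 + *Z* + ... + *Z*^{*N*-1})/*N*. A short sketch (the helper name is mine):

```python
import numpy as np

def triangle_smoother(n):
    """Triangle filter: convolution of two length-n boxcars.

    Up to a centering shift, its Z-transform is
    (1 - Z**n)(1 - Z**-n) / (n**2 (1 - Z)(1 - Z**-1)).
    """
    box = np.ones(n) / n                  # boxcar: (1 + Z + ... + Z**(n-1)) / n
    return np.convolve(box, box)          # triangle of length 2*n - 1

tri = triangle_smoother(6)                # the filter length used in the text

# Applied to a spike, the filter returns a symmetric triangle.
spike = np.zeros(21)
spike[10] = 1.0
response = np.convolve(spike, tri, mode="same")
```

The filter coefficients sum to one, so smoothing preserves the mean level of the model.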

Figure 6 (fm6): Estimation of a smooth function by the data-space regularization. The preconditioning operator is a triangle smoother.

If, instead of looking for a smooth interpolation, we want to limit
the number of frequency components, then the best choice for the
model-space regularization operator is a prediction-error
filter (PEF). To obtain a mono-frequency output, we can use a
three-point PEF, which has the *Z*-transform representation
*D*(*Z*) = 1 + *a _{1}* *Z* + *a _{2}* *Z*^{2}.
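The annihilation property of such a filter is easy to verify: for a sinusoid of frequency *w* (radians per sample), the standard choice *a _{1}* = -2 cos *w*, *a _{2}* = 1 places the filter's roots on the unit circle at e^{±i*w*}, so the prediction error vanishes. A quick check (the frequency and phase values are arbitrary):

```python
import numpy as np

w = 0.3                                   # frequency in radians per sample
t = np.arange(200)
s = np.cos(w * t + 0.7)                   # mono-frequency input, arbitrary phase

# Three-point PEF D(Z) = 1 + a1*Z + a2*Z**2 with roots at exp(+-i*w):
pef = np.array([1.0, -2.0 * np.cos(w), 1.0])

err = np.convolve(s, pef, mode="valid")   # prediction error: ~0 everywhere
```

Used as a preconditioner, it is the inverse (recursive) filter 1/*D*(*Z*) that is applied, which amplifies the frequency *w* rather than annihilating it.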

Figure 7 (pm1): Estimation of a mono-frequency function by the data-space regularization. The preconditioning operator is a recursive filter (the inverse of the PEF).

12/28/2000