We solved the same problem using Gockenbach's `HCL_UMin_lbfgs`.
The functional we minimized was the squared norm of the internal
convolution of the data-set with the prediction-error filter. To use
the BFGS solver, we had to construct the functional to take a single
vector argument. Thus, we combined the data-set and filter into a
`HCL_ProductVector`. Specifically, we minimized the functional

$$ g(u_f, u_d) = \left\| (k_f + M_f u_f) \ast (k_d + M_d u_d) \right\|^2 . $$

Here, $\ast$ represents internal convolution, and the
subscripts *f* and *d* represent the filter and the
data, respectively. For both the filter and the data, there are known
components, $k_f$ and $k_d$, which are given, and unknown components,
$u_f$ and $u_d$, for which we solve. Finally, the $M$'s
represent masking operators which zero the known components of the
data or filter. Thus, we find a filter, $f = k_f + M_f u_f$, and data,
$d = k_d + M_d u_d$, which are constrained to contain known values. A
prediction-error filter is most useful in interpolating missing data
when it nearly zeroes the known data under convolution. That explains
why we minimize the norm of $f \ast d$. To keep the resulting filter
from simply being zero, we constrain it to contain a 1 by setting
$k_f$ as appropriate for a prediction-error filter.
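As a loose sketch of this formulation (in Python rather than HCL: the toy sinusoidal data, gap location, and filter length are invented for illustration; scipy's L-BFGS-B stands in for `HCL_UMin_lbfgs`, and concatenating the two unknown vectors into one stands in for `HCL_ProductVector`):

```python
import numpy as np
from scipy.optimize import minimize

# Toy sizes and data (hypothetical; the report's data set was different).
nd, nf = 20, 3
d_full = np.cos(0.6 * np.arange(nd))
miss = np.zeros(nd, dtype=bool)
miss[8:13] = True                    # these samples are unknown
k_d = np.where(miss, 0.0, d_full)    # known component of the data
M_d = miss.astype(float)             # mask passing only the unknown samples

k_f = np.zeros(nf); k_f[0] = 1.0     # filter constrained to contain a 1
M_f = np.ones(nf); M_f[0] = 0.0      # mask passing only the unknown taps

def g(u):
    """Squared norm of the convolution of filter with data."""
    u_f, u_d = u[:nf], u[nf:]
    f = k_f + M_f * u_f
    d = k_d + M_d * u_d
    r = np.convolve(d, f, mode='valid')  # 'valid' stands in for internal convolution
    return r @ r

# One long vector holds both sets of unknowns, much as a product
# vector packages the filter and data for a single-argument functional.
u0 = np.zeros(nf + nd)
res = minimize(g, u0, method='L-BFGS-B')
u_f, u_d = res.x[:nf], res.x[nf:]
d_interp = k_d + M_d * u_d           # data with the gap filled in
```

The masks keep the solver from disturbing the known samples and the pinned leading filter tap; only the gap samples and trailing taps move.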

Because the functional, *g*, is based on a bilinear operator (the
convolution of the filter with the data), using a general nonlinear
minimizer is probably
unwarranted. Claerbout has a nonlinear solver which takes advantage
of the bilinear property of convolution.
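The advantage such a solver exploits can be sketched as follows: with the filter held fixed the residual is linear in the unknown data, and with the data held fixed it is linear in the unknown filter taps, so one can alternate two linear least-squares solves. This is only a hedged illustration of the general idea on an invented toy problem (using `numpy.linalg.lstsq`), not Claerbout's implementation:

```python
import numpy as np

# Toy 1-D setup (hypothetical): a sinusoid with a gap of missing
# samples, and a 3-tap filter whose leading tap is pinned to 1.
nd, nf = 20, 3
d_full = np.cos(0.6 * np.arange(nd))
miss = np.zeros(nd, dtype=bool)
miss[8:13] = True                      # samples 8..12 are unknown
idx = np.flatnonzero(miss)
k_d = np.where(miss, 0.0, d_full)      # known component of the data

e0 = np.zeros(nf); e0[0] = 1.0         # the fixed leading 1 of the filter

def embed_d(u):
    """Place unknown data samples u into the gap."""
    x = np.zeros(nd); x[idx] = u; return x

def embed_f(w):
    """Place unknown filter taps w behind the leading 1."""
    y = np.zeros(nf); y[1:] = w; return y

def dense(apply, n):
    """Dense matrix of a linear map on R^n."""
    return np.column_stack([apply(e) for e in np.eye(n)])

u = np.zeros(idx.size)                 # unknown data samples
w = np.zeros(nf - 1)                   # unknown filter taps

r0 = np.linalg.norm(np.convolve(k_d + embed_d(u), e0 + embed_f(w), mode='valid'))

for _ in range(10):
    f = e0 + embed_f(w)
    # Data step: conv(k_d + embed_d(u), f) is linear in u.
    A = dense(lambda x: np.convolve(embed_d(x), f, mode='valid'), u.size)
    b = np.convolve(k_d, f, mode='valid')
    u = np.linalg.lstsq(A, -b, rcond=None)[0]
    d = k_d + embed_d(u)
    # Filter step: conv(d, e0 + embed_f(w)) is linear in w.
    B = dense(lambda x: np.convolve(d, embed_f(x), mode='valid'), w.size)
    c = np.convolve(d, e0, mode='valid')
    w = np.linalg.lstsq(B, -c, rcond=None)[0]

r_fin = np.linalg.norm(np.convolve(d, e0 + embed_f(w), mode='valid'))
```

Each half-step is an ordinary linear least-squares problem, so each could use a fast linear solver such as conjugate gradients, and alternating the two steps can never increase the objective.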
His results for this synthetic data problem are similar to
ours; both appear to approximate the original data very well. So,
although we probably applied a solver which was too general, we
learned that it is relatively easy to use `HCL_UMin_lbfgs`. We
envision applying it successfully to future problems involving more
complicated functionals. Our results are shown in
Figure 1.

Figure 1

It should be noted that although our nonlinear solver's result is more accurate than the infill method's, it took an order of magnitude longer to compute. In fact, had we let the infill solver run for more iterations, it might have converged to a better answer.

11/11/1997