A problem frequently encountered in the earth sciences, and in other physical and biomedical sciences as well, is that of deducing the physical parameters of a system of interest from measurements of some other (one hopes) closely related physical quantity. The obvious example in seismology (either surface reflection seismology or crosswell seismic tomography) is the use of measurements of sound wave traveltime to deduce the wavespeed distribution in the earth, and then subsequently to infer the values of other physical quantities of interest such as porosity, water or oil saturation, permeability, etc.
Many of the problems of interest can be formulated so that the measured quantities (often called the ``output'' for reasons that will become clear in a moment) may be compared to predicted values of those same quantities, and the observed discrepancies then used to make ``improvements'' in the system model parameters of real interest. The predicted values are obtained by forward modeling based on some assumed model of the physical quantities of interest; these predicted values are thus the ``output'' of the forward modeling code. Comparisons between predicted and measured outputs may be made in a variety of ways, but a common choice is the output least-squares method: the discrepancies are squared and summed, and some numerical procedure is established to reduce the overall least-squares error in the output quantity.
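The output least-squares idea can be made concrete with a small sketch. Here a simple linear map `G @ m` stands in for a real forward modeling code (the function names and the toy matrix are illustrative assumptions, not part of any particular inversion package):

```python
import numpy as np

def forward(G, m):
    # Stand-in forward model: in practice this would be, e.g., a
    # traveltime computation through a wavespeed model.
    return G @ m

def output_least_squares(G, m, d_measured):
    # Sum of squared discrepancies between predicted and measured outputs.
    residual = forward(G, m) - d_measured
    return float(residual @ residual)

# Toy example: three measurements, two model parameters.
G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
m_true = np.array([2.0, -1.0])
d = forward(G, m_true)  # noise-free synthetic "measurements"

# At the true model the misfit vanishes; any other model gives a
# positive misfit that a numerical procedure then tries to reduce.
print(output_least_squares(G, m_true, d))
```

With noisy data the attainable misfit floor is set by the measurement errors rather than zero, which is exactly the situation discussed next.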
The trick in all this is to find a method that actually guarantees convergence of the output least-squares functional to zero, or at least to a small number comparable to that expected from the sum of squares of the measurement errors. The purpose of this paper is to show that such a procedure can essentially always be constructed as long as one additional condition is present: if the nonlinear functional of the model parameters used to compute the outputs can be differentiated once with respect to each of the model parameters of interest (whether this derivative is taken analytically or numerically does not appear to be important), then an evolution equation can be found that essentially guarantees a sequence of models, systematically constructed, that gradually improves the agreement between the measured data and the predicted data. The only caveat is that the step size of the improvement from one iteration to the next will be problem dependent and may be rather small in some applications of interest.
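As a hedged illustration of such an evolution equation, consider discretizing a gradient flow for the misfit on the linear toy problem above, where the derivative of $\tfrac{1}{2}\lVert Gm - d\rVert^2$ with respect to the model is $G^{\mathsf T}(Gm - d)$; the step size and iteration count below are illustrative choices, and the point about problem-dependent (possibly small) steps is visible in the stability restriction on `step`:

```python
import numpy as np

def descend(G, d, m0, step, n_iters):
    # Explicit small steps along the negative misfit gradient: a discrete
    # version of the evolution equation dm/dtau = -G^T (G m - d).
    m = m0.copy()
    for _ in range(n_iters):
        residual = G @ m - d
        m -= step * (G.T @ residual)
    return m

G = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [0.7, 0.3]])
m_true = np.array([2.0, -1.0])
d = G @ m_true

# The step must be small relative to the largest eigenvalue of G^T G
# for the iteration to remain stable; this is the problem-dependent
# step-size caveat mentioned in the text.
m_est = descend(G, d, np.zeros(2), step=0.2, n_iters=500)
```

For a genuinely nonlinear forward functional the gradient `G.T @ residual` would be replaced by the Jacobian of the forward model (computed analytically or by finite differences) applied to the residual, but the structure of the update is the same.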