Without a notion of convergence, the Monte Carlo technique could wander
through model space indefinitely, wasting valuable CPU time. To prevent
this, I have defined a measure of ``convergence.''
I know the maximum semblance value at each time sample (which occurs
at some velocity) from a `max()` sift through the velocity semblance
scan, looping over velocity at each fixed time. Summing these temporal
maxima therefore gives the peak possible integrated semblance value.
Convergence occurs when three consecutive optimal random models differ
in integrated semblance by less than 0.1% of this peak value. I apply
this convergence test in both the footstep and the random-walk loops.
Finally, I impose a global convergence criterion on the maximum number of
random walks to be performed: if the best-fitting semblance integral has
not changed over the last 10 random *walks*, then I assume I am at the
global maximum attainable within the physical constraints listed above.
These convergence parameters are specified as input to the algorithm,
but the values listed above gave favorable results in less than 10 CPU
seconds per velocity scan on an IBM RS/6000 Model 530.
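The convergence test above can be sketched as follows. This is a minimal illustration, not the original implementation: the function names `peak_integrated_semblance` and `has_converged`, the array layout of the semblance scan, and the `history` list of optimal integrated-semblance values are all assumptions made for the example.

```python
import numpy as np

def peak_integrated_semblance(scan):
    """Sum the per-time maxima of a semblance panel.

    scan : 2-D array, shape (n_time, n_velocity) -- the velocity
    semblance scan.  The max over velocity at each fixed time,
    summed over time, is the peak attainable integrated semblance.
    """
    return float(np.max(scan, axis=1).sum())

def has_converged(history, peak, tol=1e-3):
    """True when the last three optimal integrated-semblance values
    differ by less than tol (0.1%) of the peak value."""
    if len(history) < 3:
        return False
    last = history[-3:]
    return (max(last) - min(last)) < tol * peak
```

The same test serves both the step loop and the walk loop; only the history being tracked differs.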

The following is a flowchart description of the Monte Carlo fitting procedure:

MONTE CARLO VELOCITY
====================

* Start with optimal parametric fit Vest* (interval)
* Loop over Nw random "walks"
    * Loop over Ns random "steps"
        * Perturb Vest(t) for ALL z simultaneously = Vmc (int)
        * Check Vint constraints
        * Calculate corresponding Vmc (rms)
        * If Fit(Vmc) > Fit(Vest*), then Vest* = Vmc
        * Check convergence logic (step loop)
    * Check convergence logic (walk loop)
* End loops
* RESULT:
    - a Vrms* NONLINEAR fit that is globally "optimal"
    - a Vint* model that is physically reasonable
    - semblance MISFIT error estimates (use for QC?)
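The walk/step loops above can be sketched as below. This is a hedged illustration under assumptions: the names `monte_carlo_fit`, `fit`, and `constraints_ok` are hypothetical; `fit` is taken to score an interval-velocity model by its integrated semblance (with the interval-to-RMS conversion folded in), and `constraints_ok` stands in for the physical Vint bounds. The perturbation scheme and stall-based stopping rule follow the text's description, not a known original code.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_fit(v_est, fit, constraints_ok, n_walks=50, n_steps=100,
                    step_frac=0.02, stall_limit=10):
    """Random-walk search for the interval-velocity model maximizing
    integrated semblance.  Stops early if the best fit has not
    improved in `stall_limit` consecutive walks (global criterion)."""
    best, best_fit = v_est.copy(), fit(v_est)
    stalled = 0  # walks since the best integrated semblance last improved
    for _ in range(n_walks):
        improved = False
        for _ in range(n_steps):
            # perturb the whole Vint(t) trace at once
            v_mc = best * (1.0 + step_frac * rng.standard_normal(best.size))
            if not constraints_ok(v_mc):
                continue  # reject physically unreasonable models
            f = fit(v_mc)  # semblance fit of the corresponding Vrms
            if f > best_fit:
                best, best_fit, improved = v_mc, f, True
        stalled = 0 if improved else stalled + 1
        if stalled >= stall_limit:  # global convergence criterion
            break
    return best, best_fit
```

A toy usage: with `fit` defined as a negative misfit to a known target model and `constraints_ok` requiring positive velocities, the returned `best_fit` is never worse than the starting fit, since the optimum is only ever replaced by a strictly better model.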

11/17/1997