Preconditioned least-squares reverse-time migration using random phase encoding


## Least-squares RTM

The two-way wave equation is linearized over the slowness squared as follows:

$$ s^2(\mathbf{x}) = s_0^2(\mathbf{x}) + m(\mathbf{x}), $$ (1)

where $s$ is the slowness, $s_0^2$ is the background model, which is a smooth version of the slowness squared, $m$ is the model (the slowness-squared perturbation), and $\mathbf{x}$ is the model coordinate. Then, the Green's functions that satisfy the acoustic wave equation using the background model are defined as follows:

$$ \left[\nabla^2 + \omega^2 s_0^2(\mathbf{x})\right] G(\mathbf{x}|\mathbf{x}_s;\omega) = -\delta(\mathbf{x}-\mathbf{x}_s), $$ (2)

$$ \left[\nabla^2 + \omega^2 s_0^2(\mathbf{x})\right] G(\mathbf{x}|\mathbf{x}_r;\omega) = -\delta(\mathbf{x}-\mathbf{x}_r), $$ (3)

where $G$ is the Green's function, $\mathbf{x}_s$ and $\mathbf{x}_r$ are the source and receiver coordinates, and $\omega$ is the angular frequency. The forward modeling operator is defined using the Green's functions as follows:

$$ d(\mathbf{x}_r|\mathbf{x}_s;\omega) = \omega^2\, W(\omega) \int G(\mathbf{x}_r|\mathbf{x};\omega)\, m(\mathbf{x})\, G(\mathbf{x}|\mathbf{x}_s;\omega)\, d\mathbf{x}, $$ (4)

where $d$ is the surface data and $W$ is the source function. The forward modeling operator can be written in matrix form as follows:

$$ \mathbf{d} = \mathbf{L}\,\mathbf{m}. $$ (5)
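To make the matrix form concrete, the following is a minimal single-frequency, single-shot sketch. All names, sizes, and the random stand-in Green's functions are illustrative assumptions, not values from the paper; the point is only that each matrix entry couples one receiver to one model point through the two Green's functions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_x, n_rec = 30, 12                       # model points and receivers (toy sizes)
w = 2.0 * np.pi * 10.0                    # angular frequency (assumed value)
W = 1.0 + 0.5j                            # source spectrum W(w) (assumed value)

# Stand-in Green's functions (random here; in practice computed in the
# background model): G_s[x] ~ G(x|x_s; w), G_r[r, x] ~ G(x_r|x; w).
G_s = rng.standard_normal(n_x) + 1j * rng.standard_normal(n_x)
G_r = rng.standard_normal((n_rec, n_x)) + 1j * rng.standard_normal((n_rec, n_x))

# Row r, column x: L[r, x] = w^2 W(w) G(x_r|x; w) G(x|x_s; w),
# a discretization of the Born integral.
L = (w ** 2) * W * G_r * G_s[np.newaxis, :]

m = rng.standard_normal(n_x)              # model perturbation
d = L @ m                                 # modeled surface data, d = L m
```

Applying `L` to a model vector then reproduces the sum over model points in the Born integral for each receiver.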

We now define the objective function as:

$$ f(\mathbf{m}) = \frac{1}{2} \left\| \mathbf{L}\,\mathbf{m} - \mathbf{d}_{obs} \right\|^2, $$ (6)

where $\mathbf{d}_{obs}$ is the observed surface data. The quadratic objective function can be minimized iteratively using the following scheme (Claerbout and Fomel, 2011):


$\mathbf{r} \leftarrow \mathbf{L}\,\mathbf{m} - \mathbf{d}_{obs}$

iterate {

  $\Delta\mathbf{m} \leftarrow \mathbf{L}^\dagger\,\mathbf{r}$

  $\Delta\mathbf{r} \leftarrow \mathbf{L}\,\Delta\mathbf{m}$

  $(\mathbf{m}, \mathbf{r}) \leftarrow \mathrm{stepper}(\mathbf{m}, \Delta\mathbf{m}, \mathbf{r}, \Delta\mathbf{r})$

}

The $\dagger$ indicates the adjoint operator. The cost of each iteration equals the cost of one forward and one adjoint operator application. The stepper function is either steepest descent or conjugate gradient.
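The conventional scheme can be sketched numerically with a steepest-descent stepper. This is a toy illustration, not the paper's implementation: `L` is a random matrix standing in for the Born modeling operator, and the data are made consistent so the residual can be driven to zero. Note that the residual is formed once, outside the loop, and thereafter updated without additional forward modeling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the modeling operator and observed data (consistent system).
L = rng.standard_normal((80, 40))
m_true = rng.standard_normal(40)
d_obs = L @ m_true

m = np.zeros(40)
r = L @ m - d_obs                      # residual computed once, outside the loop
for _ in range(500):
    dm = L.T @ r                       # gradient direction: adjoint of residual
    dr = L @ dm                        # corresponding change in the residual
    alpha = (dm @ dm) / (dr @ dr)      # exact step for the quadratic objective
    m -= alpha * dm                    # steepest-descent "stepper"
    r -= alpha * dr                    # residual updated, no extra modeling
```

The step length `alpha` follows from minimizing the quadratic objective along the gradient direction; a conjugate-gradient stepper would reuse `dm` from the previous iteration as well.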

There are several encoding functions that can be used in LSRTM (Godwin and Sava, 2011; Perrone and Sava, 2009). However, a single-sample random phase function gives the best convergence results (Romero et al., 2000; Krebs et al., 2009). Such encoding introduces crosstalk artifacts into the estimated models; these artifacts are reduced by averaging several realizations of the encoding function. The source-side encoding function is defined as follows (Tang, 2009):

$$ \gamma^{(i)}(\mathbf{x}_s) = \epsilon^{(i)}_s, \qquad i = 1, \ldots, N, $$ (7)

where $i$ is the realization index, $N$ is the number of realizations, and $\epsilon^{(i)}_s$ is a random sequence of signs (i.e., $+1$ and $-1$). The encoding function is used to blend the observed data as follows:

$$ \tilde{d}^{(i)}_{obs}(\mathbf{x}_r;\omega) = \sum_{s} \gamma^{(i)}(\mathbf{x}_s)\, d_{obs}(\mathbf{x}_r|\mathbf{x}_s;\omega). $$ (8)
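The encoding and blending steps amount to drawing one random sign per shot and summing the weighted shot gathers into a single supergather. A minimal sketch, with toy random gathers standing in for real observed data:

```python
import numpy as np

rng = np.random.default_rng(1)

n_shots, n_rec, n_t = 6, 20, 50
# Toy per-shot observed gathers d_obs(x_r | x_s): shots x receivers x time.
d_obs = rng.standard_normal((n_shots, n_rec, n_t))

# Single-sample random-sign encoding: one +1/-1 weight per shot.
eps = rng.choice([-1.0, 1.0], size=n_shots)

# Blended supergather: encoded sum over the shot axis.
d_blend = np.tensordot(eps, d_obs, axes=1)
```

One blended supergather has the size of a single shot gather, which is what makes the encoded iterations cheap.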

Similarly, the same encoding function is used to blend the modeled data:

$$ \tilde{d}^{(i)}(\mathbf{x}_r;\omega) = \sum_{s} \gamma^{(i)}(\mathbf{x}_s)\, d(\mathbf{x}_r|\mathbf{x}_s;\omega), $$ (9)

where the corresponding blended source wavefield is $\tilde{G}^{(i)}(\mathbf{x};\omega) = W(\omega) \sum_{s} \gamma^{(i)}(\mathbf{x}_s)\, G(\mathbf{x}|\mathbf{x}_s;\omega)$. Due to the linearity of the wave equation, this wavefield can be computed simply by injecting the source functions simultaneously at the different source locations after multiplying by the proper weights. Once $\tilde{G}^{(i)}$ is computed, the blended forward modeling operator can be defined as follows:

$$ \tilde{d}^{(i)}(\mathbf{x}_r;\omega) = \omega^2 \int G(\mathbf{x}_r|\mathbf{x};\omega)\, m(\mathbf{x})\, \tilde{G}^{(i)}(\mathbf{x};\omega)\, d\mathbf{x}, $$ (10)

where the tilde ($\tilde{\ }$) is used to indicate blending. The blended forward modeling operator can also be expressed in matrix form:

$$ \tilde{\mathbf{d}}^{(i)} = \tilde{\mathbf{L}}^{(i)}\,\mathbf{m}. $$ (11)

Finally, the objective function of the blended operator can be written as follows:

$$ \tilde{f}^{(i)}(\mathbf{m}) = \frac{1}{2} \left\| \tilde{\mathbf{L}}^{(i)}\,\mathbf{m} - \tilde{\mathbf{d}}^{(i)}_{obs} \right\|^2. $$ (12)

Notice that the objective function changes between realizations, since the encoding function changes. This change of the objective function requires a modification of the minimization scheme as follows:

iterate {

  draw a new realization of the encoding function

  $\tilde{\mathbf{r}} \leftarrow \tilde{\mathbf{L}}^{(i)}\,\mathbf{m} - \tilde{\mathbf{d}}^{(i)}_{obs}$

  $\Delta\mathbf{m} \leftarrow \tilde{\mathbf{L}}^{(i)\dagger}\,\tilde{\mathbf{r}}$

  $\mathbf{m} \leftarrow \mathrm{stepper}(\mathbf{m}, \Delta\mathbf{m})$

}

There are two changes in the minimization scheme of the blended objective function compared to the conventional one. First, the computation of the residual is moved inside the loop, because the encoding function changes in each iteration; this adds the cost of one forward modeling operator to each iteration. Second, the stepper algorithm can be steepest descent only if the step size is determined with linear optimization. Otherwise, a non-linear conjugate gradient can be used, requiring a line search in each iteration. In this paper, I present only the results of the steepest-descent stepper, because its iteration cost is consistent.
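The blended scheme with both changes (residual recomputed inside the loop, steepest-descent step from linear optimization) can be sketched as follows. This is a toy stand-in, not the paper's implementation: per-shot operators are random matrices, data are consistent by construction, and no wave propagation is performed.

```python
import numpy as np

rng = np.random.default_rng(2)

n_shots, n_rec, n_model = 8, 50, 30
# Per-shot toy modeling operators and consistent per-shot observed data.
L_shots = rng.standard_normal((n_shots, n_rec, n_model))
m_true = rng.standard_normal(n_model)
d_shots = np.einsum('srm,m->sr', L_shots, m_true)

m = np.zeros(n_model)
for _ in range(1000):
    # A fresh encoding realization every iteration changes the objective,
    # so the residual must be recomputed inside the loop.
    eps = rng.choice([-1.0, 1.0], size=n_shots)
    Lb = np.tensordot(eps, L_shots, axes=1)   # blended operator
    db = eps @ d_shots                        # blended observed data
    r = Lb @ m - db                           # residual (one extra forward modeling)
    dm = Lb.T @ r                             # gradient via the blended adjoint
    dr = Lb @ dm
    alpha = (dm @ dm) / (dr @ dr)             # step size from linear optimization
    m -= alpha * dm                           # steepest-descent update
```

Because the minimizer of every realization's objective is the same model, the crosstalk introduced by each encoding averages out over iterations and the estimate approaches the unblended least-squares solution.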


2011-09-13