
Least-squares RTM

The two-way wave equation is linearized over the slowness squared as follows:

$\displaystyle s^2(\mathbf x) = b(\mathbf x) + m(\mathbf x),$ (1)

where $ s$ is the slowness, $ b$ is the background model, which is a smooth version of the slowness squared, $ m$ is the model, and $ \mathbf x$ is the model coordinate. Then, the Green's functions that satisfy the acoustic wave equation using the background model are defined as follows:

$\displaystyle \left( \nabla^2 + \omega^2 b(\mathbf x) \right) G_0(\mathbf x,\mathbf x_s,\omega)=-\delta(\mathbf x-\mathbf x_s),$ (2)
$\displaystyle \left( \nabla^2 + \omega^2 b(\mathbf x) \right) G_0(\mathbf x,\mathbf x_r,\omega)=-\delta(\mathbf x-\mathbf x_r),$ (3)

where $ G_0$ is the Green's function, $ \mathbf x_s$ and $ \mathbf x_r$ are the source and receiver coordinates, and $ \omega$ is frequency. The forward modeling operator is defined using the Green's functions as follows:

$\displaystyle d(\mathbf x_r,\mathbf x_s,\omega)=- \omega^2 f(\omega) \sum_{\mathbf x} G_0(\mathbf x,\mathbf x_s,\omega) G_0(\mathbf x,\mathbf x_r,\omega) m(\mathbf x),$ (4)

where $ d$ is the surface data and $ f$ is the source function. The forward modeling operator $ \mathbf F$ can be written in matrix form as follows:

$\displaystyle \mathbf d = \mathbf F \mathbf m.$ (5)
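For concreteness, equation (4) and its adjoint can be sketched as a matrix-free operator pair. The following minimal NumPy sketch assumes the background Green's functions are precomputed and stored as dense arrays G_s (sources by model points) and G_r (receivers by model points) at a single frequency; these array names, shapes, and the dense storage are illustrative assumptions, not a description of the actual implementation.

import numpy as np

def born_forward(m, G_s, G_r, f_w, omega):
    # Equation (4): data for every receiver-source pair at one frequency.
    # G_s: (ns, nx) Green's functions, sources to model points
    # G_r: (nr, nx) Green's functions, receivers to model points
    virtual_src = -omega**2 * f_w * (G_s * m[np.newaxis, :])   # (ns, nx)
    return G_r @ virtual_src.T                                  # (nr, ns)

def born_adjoint(d, G_s, G_r, f_w, omega):
    # Adjoint of equation (4): migrate the data back to the model grid.
    # (For a real-valued model, the real part is taken after summing over frequencies.)
    back = G_r.conj().T @ d                                      # (nx, ns)
    return (-omega**2 * np.conj(f_w) * G_s.conj().T * back).sum(axis=1)

In practice the Green's functions would typically be applied implicitly by wavefield extrapolation rather than stored as dense matrices.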

We now define the objective function $ J$ as:

$\displaystyle J(\mathbf m) = \lVert \mathbf F \mathbf m - \mathbf d_{\rm obs} \rVert^2_2,$ (6)

where $ \mathbf d_{\rm obs}$ is the observed surface data. The quadratic objective function $ J$ can be minimized iteratively using the following scheme (Claerbout and Fomel, 2011):

		 
$ \mathbf r \longleftarrow \mathbf F \mathbf m - \mathbf d_{\rm obs}$
 

iterate {
$ \Delta \mathbf m \longleftarrow \mathbf F^* \mathbf r$
$ \Delta \mathbf r \longleftarrow \mathbf F \Delta \mathbf m$
$ (\mathbf m, \mathbf r) \longleftarrow {\rm stepper} (\mathbf m,\mathbf r, \Delta \mathbf m,\Delta \mathbf r )$
}
The $ ^*$ indicates the adjoint operator. The cost of each iteration equals the combined cost of one application of the forward operator and one application of the adjoint operator. The stepper function is either a steepest-descent or a conjugate-gradient update.
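As an illustrative sketch of the scheme above with a steepest-descent stepper (the operator callables and the exact line-search formula are assumptions of the sketch, not a verbatim transcription of the implementation used here):

import numpy as np

def lsrtm_steepest_descent(F, Fadj, d_obs, m0, niter):
    # F and Fadj are matrix-free forward and adjoint operator callables.
    m = m0.copy()
    r = F(m) - d_obs                     # residual computed once, outside the loop
    for _ in range(niter):
        dm = Fadj(r)                     # Delta m <- F* r (gradient direction)
        dr = F(dm)                       # Delta r <- F Delta m
        alpha = np.vdot(dr, r).real / np.vdot(dr, dr).real   # exact step length
        m = m - alpha * dm               # stepper updates the model ...
        r = r - alpha * dr               # ... and keeps the residual consistent
    return m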

Several encoding functions can be used in LSRTM (Godwin and Sava, 2011; Perrone and Sava, 2009); however, a single-sample random phase function gives the best convergence results (Romero et al., 2000; Krebs et al., 2009). This encoding function introduces crosstalk artifacts in the estimated model, which are reduced by averaging over several realizations of the encoding function. The source-side encoding function $ \alpha$ is defined as follows (Tang, 2009):

$\displaystyle \alpha(\mathbf x_s, p_s) = \frac{1}{\sqrt{N_{\rm realize}}} \gamma(\mathbf x_s, p_s),$ (7)

where $ p_s$ is the realization index, $ N_{\rm realize}$ is the number of realizations, and $ \gamma$ is a random sequence of signs (i.e., $+1$ and $-1$). The encoding function is used to blend the observed data as follows:

$\displaystyle \widetilde{d}_{\rm obs}(\mathbf x_r, p_s,\omega)= \sum_{\mathbf x_s} \alpha(\mathbf x_s, p_s) d_{\rm obs}(\mathbf x_r,\mathbf x_s,\omega).$ (8)
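A minimal sketch of equations (7) and (8), assuming the observed data are held in memory as an array over receivers, shots, and frequencies (the array layout and function name are illustrative only):

import numpy as np

def encode_and_blend(d_obs, n_realize, rng=None):
    # d_obs: (nr, ns, nw) observed data; returns the encoding alpha (ns, n_realize)
    # and the blended data (nr, n_realize, nw), following equations (7) and (8).
    rng = np.random.default_rng() if rng is None else rng
    gamma = rng.choice([-1.0, 1.0], size=(d_obs.shape[1], n_realize))  # random signs
    alpha = gamma / np.sqrt(n_realize)                                 # equation (7)
    d_blend = np.einsum('rsw,sp->rpw', d_obs, alpha)                   # equation (8)
    return alpha, d_blend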

Similarly, the same encoding function is used to blend the modeled data:

$\displaystyle S(\mathbf x, p_s,\omega)= \sum_{\mathbf x_s} \alpha(\mathbf x_s, p_s) f(\omega) G_0(\mathbf x,\mathbf x_s,\omega),$ (9)

where $ S$ is the blended source wavefield. Due to the linearity of the wave equation, this wavefield can be computed simply by injecting the source functions simultaneously at their different locations after multiplying each by the proper weight. Once $ S$ is computed, the blended forward modeling operator can be defined as follows:

$\displaystyle \widetilde{d}(\mathbf x_r, p_s,\omega)=- \omega^2 \sum_{\mathbf x} S(\mathbf x, p_s,\omega) G_0(\mathbf x,\mathbf x_r,\omega) m(\mathbf x),$ (10)

where the tilde $ \widetilde{\;}$ indicates a blended quantity. The blended forward modeling operator $ \widetilde{\mathbf F}$ can also be expressed in matrix form:

$\displaystyle \widetilde{\mathbf d} = \widetilde{\mathbf F} \mathbf m.$ (11)
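The computational point made above, that $ S$ is obtained by injecting all weighted sources at once, can be sketched as follows, under the same illustrative dense-Green's-function assumptions as before:

import numpy as np

def blended_forward(m, alpha, G_s, G_r, f_w, omega):
    # Equation (9): one blended source wavefield per realization, built by
    # injecting all weighted point sources simultaneously (linearity).
    S = f_w * (alpha.T @ G_s)                              # (n_realize, nx)
    # Equation (10): scatter off the model and propagate to the receivers.
    return G_r @ (-omega**2 * S * m[np.newaxis, :]).T      # (nr, n_realize)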

Finally, the objective function of the blended operator can be written as follows:

$\displaystyle \widetilde{J}(\mathbf m) = \lVert \widetilde{\mathbf F} \mathbf m - \widetilde{\mathbf d}_{\rm obs} \rVert^2_2.$ (12)

Notice that the objective function changes between realizations, since the encoding function changes. This change of the objective function requires the following modification of the minimization scheme:

iterate {

$ \widetilde{\mathbf r} \longleftarrow \widetilde{\mathbf F} \mathbf m - \widetilde{\mathbf d}_{\rm obs}$
$ \Delta \mathbf m \longleftarrow \widetilde{\mathbf F}^* \widetilde{\mathbf r}$
$ \Delta \widetilde{\mathbf r} \longleftarrow \widetilde{\mathbf F} \Delta \mathbf m$
$ \mathbf m \longleftarrow {\rm stepper} (\mathbf m, \Delta \mathbf m,\Delta \widetilde{\mathbf r} )$
}
There are two changes in the minimization scheme of the blended objective function compared to the conventional one. First, the computation of the residual is moved inside the loop, because the encoding function changes in each iteration. This change adds the cost of one forward modeling operator to each iteration. Second, the stepper algorithm can only be steepest-descent if the step size is to be determined by linear optimization; otherwise, a non-linear conjugate-gradient scheme can be used, which requires a line search in each iteration. In this paper I present only the results of the steepest-descent stepper, because its iteration cost is consistent.
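A sketch of the modified scheme, in the same illustrative style as before; drawing a new encoding inside the loop is what moves the residual computation, and its extra forward modeling, into each iteration (the helper callables encode and make_operators are assumptions of the sketch):

import numpy as np

def blended_lsrtm(make_operators, encode, m0, niter):
    # encode() draws a fresh encoding and returns (alpha, d_blend);
    # make_operators(alpha) returns the blended forward/adjoint callables.
    m = m0.copy()
    for _ in range(niter):
        alpha, d_blend = encode()                 # new realization each iteration
        F, Fadj = make_operators(alpha)
        r = F(m) - d_blend                        # residual now inside the loop
        dm = Fadj(r)
        dr = F(dm)
        step = np.vdot(dr, r).real / np.vdot(dr, dr).real   # exact step length
        m = m - step * dm
    return m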

