
Block cyclic solver

In the block cyclic solver (I hope this is the correct term), we have two half cycles. In the first half we take one of the variables as known and the other as unknown, and we solve for the unknown. In the next half we swap the roles of known and unknown. The beauty of this approach is that each half cycle is a linear problem, so its solution is independent of the starting location. Hooray! Even better, repeating the cycles enough times should converge to the correct solution. Hooray again! The convergence may be slow, however, so at some stage (maybe after just one or two cycles) you can safely switch over to the nonlinear method, which converges faster because it deals directly with the interaction of the two variables.
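As a concrete illustration, here is a minimal driver sketch in Python. The routines `solve_for_waveform` and `solve_for_reflectivity` are hypothetical names for the two half-cycle solves sketched later in this section, and the rescaling step anticipates the remark after equation (31).

```python
import numpy as np

def block_cyclic(d, n_s, n_cycles=2):
    """Alternate the two linear half cycles a few times, then hand
    the estimates (s, c) over to the faster nonlinear method."""
    s = np.zeros(n_s)    # starting guess: the shot waveform is an impulse
    s[0] = 1.0
    c = d.copy()         # ... and the reflectivity is the data
    for _ in range(n_cycles):
        s = solve_for_waveform(c, d, n_s)    # first half: c known, solve for s
        c = solve_for_reflectivity(s, d, c)  # second half: s known, solve for c
        scale = np.linalg.norm(s)            # fix the scale ambiguity between
        s, c = s / scale, c * scale          # s and c by normalizing s
    return s, c
```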

We could begin from the assumption that the shot waveform is an impulse and the reflectivity is the data. Then either half cycle can be the starting point. Suppose we know the reflectivity, say $\bold c$, and solve for the shot waveform $\bold s$. We use the reflectivity $\bold c$ to build a convolution matrix $\bold C$. The regression pair for finding $\bold s$ is

$\displaystyle \bold 0 \quad\approx\quad \bold C\bold s - \bold d$ (28)
$\displaystyle \bold 0 \quad\approx\quad \epsilon_s \bold I \; \bold s$ (29)

These would be solved for $\bold s$ by familiar least-squares methods. It's a very easy problem because $\bold s$ has many fewer components than $\bold c$. Now with our source estimate $\bold s$ we can define the operator $\bold S$ that convolves it with the reflectivity $\bold c$.
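Here is a sketch of this half cycle, under the simplifying assumption that the convolution is truncated to the length of the data; the function names are illustrative, not a fixed API.

```python
import numpy as np

def conv_matrix(c, n_s, n_d):
    """Convolution matrix C of equation (28): column j holds the
    reflectivity c delayed by j samples, truncated to n_d rows."""
    C = np.zeros((n_d, n_s))
    for j in range(n_s):
        m = min(len(c), n_d - j)
        C[j:j + m, j] = c[:m]
    return C

def solve_for_waveform(c, d, n_s, eps_s=1e-3):
    """First half cycle: stack the regressions (28) and (29) into one
    overdetermined system and solve it for the waveform s."""
    C = conv_matrix(c, n_s, len(d))
    A = np.vstack([C, eps_s * np.eye(n_s)])
    b = np.concatenate([d, np.zeros(n_s)])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s
```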

The second half of the cycle is to solve for the reflectivity $\bold c$. This is a little trickier. The data fitting may still be done by an $L2$-type method, but we need something like an $L1$ method for the regularization, to pull the small values closer to zero and yield a spikier $c(t)$.

$\displaystyle \bold 0 \quad\approx_{L2}\quad \bold r_d \quad=\quad \bold S\bold c - \bold d$ (30)
$\displaystyle \bold 0 \quad\approx_{L1}\quad \bold r_m \quad=\quad \bold I \; \bold c$ (31)

Normally we would expect an $\epsilon_c$ in equation (31), but here it enters later. (It might seem that the regularization (29) is unnecessary, but without it $\bold c$ could get smaller and smaller while $\bold s$ gets larger and larger. We should be able to drop regression (29) if we simply rescale the pair appropriately at each iteration.) Taking the usual L2 norm, we define a gradient vector for the model perturbation, $\Delta \bold c = \bold S'\bold r_d$. From it we get the residual perturbation $\Delta\bold r_d = \bold S\Delta \bold c$. We need to find an unknown distance $\alpha$ to move in these directions. We take the norm of the data-fitting residual, add to it a small multiple $\epsilon$ of the model-styling residual norm, and set the derivative with respect to $\alpha$ to zero.

$\displaystyle \bold 0 \quad=\quad \frac{\partial}{\partial\alpha} \left[ N_d(\bold r_d + \alpha \, \Delta\bold r_d) + \epsilon \, N_m(\bold c + \alpha \, \Delta \bold c) \right]$ (32)

We need the derivatives of each norm at each residual component. We base these on the convex function $C(r)$ of the hybrid norm. Let us call these $A_i$ for the data fitting, and $B_i$ for the model styling.
$\displaystyle A_i \quad=\quad C(R_d, r_i)$ (33)
$\displaystyle B_i \quad=\quad C(R_m, c_i)$ (34)
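For concreteness, here is a sketch of the derivatives we need, assuming one common choice of hybrid convex function, $C(\bar r, r) = \sqrt{1 + (r/\bar r)^2} - 1$; if your $C$ differs, substitute its derivatives here.

```python
import numpy as np

def hybrid_derivs(rbar, r):
    """First and second derivatives of an assumed hybrid convex
    function C(rbar, r) = sqrt(1 + (r/rbar)**2) - 1, which behaves
    like L2 for |r| << rbar and like L1 for |r| >> rbar."""
    u = 1.0 + (r / rbar) ** 2
    d1 = r / (rbar**2 * np.sqrt(u))   # C': ~ r/rbar**2 near zero, ~ sign(r)/rbar far out
    d2 = 1.0 / (rbar**2 * u**1.5)     # C'': ~ 1/rbar**2 near zero, -> 0 far out
    return d1, d2
```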

(Actually, we do not need $A_i$, because for least squares $A_i'=r_i$ and $A_i''=1$, but I include it here in case we wish to deal with noise bursts in the data.) As earlier, expanding the norms in Taylor series, equation (32) becomes

$\displaystyle 0 \quad=\quad \sum_i A_i' \, \Delta r_i + \alpha \sum_i A_i'' \, \Delta r_i^2 + \epsilon \left( \sum_i B_i' \, \Delta c_i + \alpha \sum_i B_i'' \, \Delta c_i^2 \right)$ (35)

which gives the $\alpha$ we need to update the model $\bold c$ and the residual $\bold r_d$.

$\displaystyle \alpha \quad=\quad - \; \frac{ \sum_i A_i' \, \Delta r_i + \epsilon \sum_i B_i' \, \Delta c_i }{ \sum_i A_i'' \, \Delta r_i^2 + \epsilon \sum_i B_i'' \, \Delta c_i^2 }$ (36)

This is the steepest descent method. For the conjugate-directions method there is a $2\times 2$ system like equation (24).
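Putting the pieces together, here is a sketch of this steepest-descent half cycle, with truncated convolution standing in for $\bold S$ and correlation for its adjoint $\bold S'$; `hybrid_derivs` is the assumed function sketched after equation (34).

```python
import numpy as np

def solve_for_reflectivity(s, d, c0, rbar_m=1.0, eps=0.1, n_iter=20):
    """Second half cycle: L2 data fit (30) plus a hybrid-norm model
    styling term (31), updated by steepest descent with the step
    length alpha of equation (36)."""
    c = c0.copy()
    for _ in range(n_iter):
        rd = np.convolve(s, c)[:len(d)] - d            # r_d = S c - d
        pad = np.pad(rd, (0, len(s) - 1))              # adjoint S' r_d is a
        dc = np.correlate(pad, s, mode='valid')        # correlation with s
        dr = np.convolve(s, dc)[:len(d)]               # Delta r_d = S Delta c
        B1, B2 = hybrid_derivs(rbar_m, c)              # B_i', B_i''
        num = np.dot(rd, dr) + eps * np.dot(B1, dc)    # L2 data fit: A_i' = r_i
        den = np.dot(dr, dr) + eps * np.dot(B2, dc**2) # and A_i'' = 1
        alpha = -num / den                             # equation (36)
        c = c + alpha * dc
    return c
```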

