Next: WEIGHTED MEDIANS Up: Claerbout: Medians in regression Previous: MEDIANS IN AUTOREGRESSION

INVERSE HYPERBOLA SUPERPOSITION

Our basic model is
\begin{displaymath}
\bold d = \bold H \bold m\end{displaymath} (9)
where $\bold m$ is the model-space vector, $\bold d$ is the data-space vector, and $\bold H$ is a curve-superposition operator, such as hyperbola superposition. We could be talking about velocity analysis or migration, where the number of components in model space is huge. These unknown components may be partly correlated, but since there are so many unknowns, we will suppose it is useful to seek an initial guess at a solution by assuming the unknowns are uncorrelated. As further motivation, we presume the data contain much bursty noise, so the usual ways of doing migration or velocity analysis are in trouble; thus we have a license to seek new ways that are fast and robust.
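As a concrete sketch of such an operator (hypothetical function and parameter names; a constant-velocity hyperbola $t(x) = \sqrt{t_0^2 + x^2/v^2}$ and nearest-sample interpolation are assumed), each model point is spread along its hyperbola across all offsets:

```python
import numpy as np

# Hypothetical sketch of d = H m for constant-velocity hyperbola superposition.
# Each model point (t0, amplitude) is spread along t(x) = sqrt(t0^2 + (x/v)^2).
def hyperbola_superpose(points, offsets, nt, dt, v):
    """points: list of (t0, amp) pairs; returns data of shape (nt, len(offsets))."""
    d = np.zeros((nt, len(offsets)))
    for t0, amp in points:
        for ix, x in enumerate(offsets):
            t = np.sqrt(t0**2 + (x / v)**2)
            it = int(round(t / dt))        # nearest time sample
            if it < nt:
                d[it, ix] += amp           # superpose onto the data plane
    return d
```

A realistic operator would also carry amplitude-versus-offset weights and proper interpolation; this sketch only illustrates that one model point touches one data sample per offset.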

We begin by loading up a residual vector $\bold r = -\bold d$ with the negative of the data. We could start an inverse problem with a gradient, say
\begin{displaymath}
\Delta \bold m \quad =\quad\bold H' \; \mbox{sgn}(\bold r)\end{displaymath} (10)
but here I propose something much simpler. We process only one component of model space. (Later, and independently, we process all other components of model space likewise, and then iterate some more.) Thus $\Delta \bold m$ has only one nonzero component. We may as well take that component to be unity, say $\Delta m_j=1$ for some particular $j$. This perturbation allows us to create a perturbed residual $\Delta \bold r$. Allow me an approximation of physics and numerical analysis in which a single point of model space does not go to every point in data space. In the most elementary modeling procedure, the number of points in data space affected by a single point of model space is simply the number of offsets, say nx. The residual perturbation is
\begin{displaymath}
\Delta\bold r \quad =\quad\bold H \; \Delta\bold m\end{displaymath} (11)
which gives us a $\Delta \bold r$ with nx affected components; the values $\Delta r_i$ of those components are the amplitude-versus-offset values of the hyperbola in our simple modeling. As before, the new model is $\bold m + \alpha\Delta\bold m$, where

\begin{displaymath}
\alpha \quad =\quad-\; \mbox{median}_i\left( { r_i\over \Delta r_i} \right)\end{displaymath} (12)
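A minimal numerical sketch of this update (hypothetical values; the spike $\Delta m_j=1$ is taken to touch nx = 4 residual samples, with unit $\Delta r_i$) shows the median's robustness to a single burst:

```python
import numpy as np

# Hypothetical residual samples touched by the spike; one bursty outlier.
r  = np.array([-2.0, -4.0, -3.0, 100.0])
dr = np.ones(4)                      # unit amplitude-versus-offset assumed

alpha = -np.median(r / dr)           # equation (12)
r_new = r + alpha * dr               # update only the touched residual samples
```

Here the median of $r_i/\Delta r_i$ is $-2.5$, so $\alpha = 2.5$; the single bursty sample barely shifts the step size, and most of the touched residuals are driven toward zero.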

It is fun to compare this iterative process to migration, since the two are similar, but this process leads to a kind of inversion. To begin with, if we had no bursty noise, we could simplify life just a little bit by choosing $\alpha$ to be
\begin{displaymath}
\alpha \quad =\quad-\; { \sum_i r_i \Delta r_i \over \sum_i \Delta r_i \Delta r_i}\end{displaymath} (13)
This $\alpha$ is the usual normalization that might be used in stacking. Since we work recursively, however, the job of reducing the residual is never done, and the values on the residual plane depend upon the order in which we do the work.
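On the same kind of hypothetical numbers (four touched residual samples, unit $\Delta r_i$, one bursty outlier), the least-squares step of equation (13) is dragged far off by the single burst, which is what the median rule avoids:

```python
import numpy as np

# Hypothetical touched residual samples; one bursty outlier.
r  = np.array([-2.0, -4.0, -3.0, 100.0])
dr = np.ones(4)                              # unit amplitude-versus-offset assumed

alpha_ls = -np.dot(r, dr) / np.dot(dr, dr)   # equation (13): -91/4 = -22.75
```

The outlier dominates the inner product, so the least-squares $\alpha$ lands near $-22.75$ instead of the median rule's $2.5$.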

We could choose to do the work in some order to make the code fast. We could also choose to do the work in some ``best'' order (yet to be defined) for the quality of the results.

There is no need for the data to lie on a regular mesh or for a ``complete'' data set. Aliasing issues have yet to be examined.


Stanford Exploration Project
11/12/1997