
 

Short Note
Semblance picking

Robert G. Clapp

bob@sep.stanford.edu

Tomography in the post-migrated domain is quickly becoming the standard approach for obtaining velocities in complex areas. The standard flow to update the velocity through tomography in the post-migrated domain begins by migrating the data using some initial velocity model. The migration can be Kirchhoff Etgen (1990), downward continuation Clapp (2001b); Mosher et al. (2001), or any other method that can produce an image as a function of offset or reflection angle. Some type of moveout analysis is then performed on the migrated volume. This analysis can take the form of a one-parameter Clapp (2001b); Etgen (1990) or two-parameter Biondi (1990) curvature analysis, or something more complex such as a residual migration scaling parameter Clapp (2002); Etgen (1990). These moveouts are then converted to time errors. We approximate the non-linear relationship between slowness and travel time by linearizing around an initial slowness model. We can then solve the resulting linear system and update our slowness field.

Until recently, computer power limitations forced some shortcuts in the ideal processing flow. One of these shortcuts was to use a sparse set of gathers rather than the entire volume. Another was to limit moveout analysis to a set of a priori chosen reflectors. Both of these limitations have drawbacks. Etgen et al. (2002) showed that an improved measure of residual moveout can be achieved by calculating moveout at all image locations and then smoothing along reflectors. In addition, Clapp (2001a); Woodward et al. (1998) showed that not being limited to a sparse set of picked reflectors can improve the inversion result and limit the amount of human processing time. These two results indicate that what we would ideally like is a field of moveout parameters. The problem is that our data has one more dimension (offset/angle) than our model. This is the same problem faced in NMO velocity analysis Toldi (1985). One of the best solutions to this type of problem is the Differential Semblance Optimization (DSO) approach Symes and Carazzone (1991). The problem with a DSO-type approach is that it is quite expensive and sensitive to noise.

In this paper I demonstrate an alternative method to estimate the semblance field. The general approach is to solve the non-linear problem through a sequence of simple linear problems. I pick a maximum semblance value at every location. I then form a simple inversion problem to estimate a smooth field of moveout parameters, weighted by image and semblance strength. I pick a new maximum in a range around the estimated semblance, estimate a new model, and repeat until a satisfactory model is obtained. As the non-linear iterations increase, I decrease the smoothing criteria and the range around the estimated semblance.

Review

The standard methodology used to update our migration velocity estimate starts from the fact that the position of a reflection should be independent of the angle at which it is illuminated. As a result, if we look at a common reflection point (CRP) gather, the reflection depth should be constant as a function of angle (downward continuation methods) or offset (Kirchhoff and Gaussian Beam methods). Figure [*] shows the result of migrating a 2-D marine dataset with a velocity that is significantly in error. When the depth of a reflector is not constant as a function of our independent variable (angle/offset), as in the left panel of Figure [*], it indicates some error in our migration velocity. We set up an inversion problem that relates the deviation from constant depth $\bf dz$ to a change in slowness $\bf \Delta s$ through a linear approximation $\bf T$,
\begin{displaymath}
{\bf dz} \approx {\bf T} \, {\bf \Delta s} .
\end{displaymath} (1)
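Equation (1) can be solved in a damped least-squares sense. The following is a minimal numerical sketch, not the paper's implementation: $\bf T$ is a random stand-in for the linearized tomography operator, and the damping weight is arbitrary.

```python
import numpy as np

# Damped least-squares solution of dz ~ T ds (equation 1).
# T, dz, and the damping eps are illustrative stand-ins, not the paper's
# operators or values.
rng = np.random.default_rng(0)
n_data, n_model = 40, 10
T = rng.standard_normal((n_data, n_model))   # stand-in linearized operator
ds_true = rng.standard_normal(n_model)       # "true" slowness perturbation
dz = T @ ds_true                             # observed depth errors

eps = 0.1                                    # damping weight
G = np.vstack([T, eps * np.eye(n_model)])    # stacked system [T; eps I]
b = np.concatenate([dz, np.zeros(n_model)])
ds, *_ = np.linalg.lstsq(G, b, rcond=None)   # estimated slowness update
```

With noise-free data and light damping, the recovered update `ds` is close to `ds_true`; in practice the damping trades data fit against stability.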
We have many choices of how to describe $\bf dz$. We could use an auto-picker that attempts to follow reflections. The problem with this approach is that auto-pickers, especially in complex data, have trouble following events and are susceptible to cycle-skip problems. A more common approach is to describe the moveout as a function of offset/angle by two parameters Biondi (1990) or, more commonly, one parameter Biondi and Symes (2003); Clapp (2001b); Etgen (1990). The normal procedure is to scan over the parameter space, forming semblance gathers (right panel of Figure [*]). Normally these semblance gathers are calculated on a sparse set of CRP gathers and the maximum extracted at pre-determined reflector locations.
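The parameter scan can be sketched numerically. In this toy example (synthetic gather, linear residual moveout, all names and values hypothetical), scanning a gamma scaling and measuring stack coherence recovers the flattening parameter at the semblance peak:

```python
import numpy as np

# Toy semblance scan over a moveout parameter gamma (a sketch, not the
# paper's implementation).  Each trace of a synthetic CRP gather sags by
# dz(h) = (gamma_true - 1) * h; the semblance peak sits at gamma_true.
nz, nh = 201, 16
z = np.linspace(0.0, 2.0, nz)                # depth axis
offsets = np.linspace(0.0, 1.0, nh)          # offset/angle axis
gamma_true, z0 = 1.1, 1.0

gather = np.empty((nh, nz))
for i, h in enumerate(offsets):
    z_refl = z0 + (gamma_true - 1.0) * h     # reflector depth varies with offset
    gather[i] = np.exp(-((z - z_refl) / 0.05) ** 2)

def semblance_scan(gammas):
    """Peak stack-power semblance after removing each trial moveout."""
    sem = np.empty(len(gammas))
    for k, g in enumerate(gammas):
        corrected = np.array([
            np.interp(z, z - (g - 1.0) * h, trace)   # shift trace by -dz(h)
            for h, trace in zip(offsets, gather)
        ])
        num = corrected.sum(axis=0) ** 2
        den = nh * (corrected ** 2).sum(axis=0) + 1e-12
        sem[k] = (num / den).max()
    return sem

gammas = np.linspace(0.9, 1.3, 81)
best = gammas[np.argmax(semblance_scan(gammas))]   # near gamma_true
```

The semblance measure here is the standard stack-power ratio; at the correct gamma all traces align and the ratio approaches one.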

 
mig
mig
Figure 1
The top panel shows the migrated image using the zero offset imagining condition. The bottom panel shows every 20th angle gather of the same dataset. Note the significant moveout in the angle gathers indicating velocity errors.
view burn build edit restore

 
semb-gather
Figure 2
The left panel shows a CRP gather from a 2-D dataset shown in Figure [*]. Note that the moveout varies as a function of angle indicating an error in the velocity model. The right panel is a semblance gather calculated from the CRP gather in the right panel. Gamma is a scaling parameter applied to the velocity field, a scaling of 1 indicates a flat gather.

semb-gather
[*] view burn build edit restore

Selecting a sparse set of CRP gathers and selecting the maximum can give misleading results. If a selected CRP gather has a low signal-to-noise ratio at the reflector location, we can get misleading information. Also, if our CRP analysis points are too sparse, we can miss important features in the dataset. Finally, the maximum, especially if automatically selected, can be a dangerous choice. If the semblance has multiple distinct peaks (Figure [*]), the maximum might indicate a spurious moveout. Clapp (2001b); Etgen et al. (2002) took the approach that a better solution is to do the semblance analysis at each CRP location and then smooth the semblance values along the reflectors. This approach is a definite improvement over the conventional approach, but it still does not address the core problem: our data space, moveout as a function of angle/offset, has one more dimension than our model, the maximum semblance, and writing a linear relationship between the two is problematic.

 
peaks
Figure 3
Semblance in complicated portion of the data. Note the 3 distinct peaks. In this case the peak with the maximum amplitude is the result of noise in the data rather than the moveout of the reflector.
peaks
view burn build edit restore

Global solution

This type of estimation problem is well known in the literature for NMO velocity analysis Symes and Carazzone (1991); Toldi (1985). One solution to the NMO velocity analysis problem, in which a smooth field is fit to weighted maximum-semblance picks, was presented in Clapp et al. (1998).

The advantage of this approach is that it is efficient and the $\bf W$ appropriately gives reflectors with higher coherence more weight in the inversion. However, this approach is still susceptible to erroneous maximum picks.

We can write a similar set of fitting goals for picking the best semblance for our tomography problem,
\begin{eqnarray}
{\bf 0} &\approx& {\bf W}({\bf d} - {\bf m}) \nonumber \\
{\bf 0} &\approx& \epsilon {\bf A} {\bf m} .
\end{eqnarray} (3)
In this case $\bf d$ is the gamma value of the maximum semblance at each location and $\bf m$ is some smoothed version of it. Our $\bf W$ operator can be semblance, stack power, or some other measure of data quality. I found that using a combination of the envelope of the stack and semblance gave the best result,  
\begin{displaymath}
{\bf W} = {\bf S} \, {\bf E} ,
\end{displaymath} (4)
where $\bf S$ is the maximum semblance at a given location, and $\bf E$ is the amplitude of the envelope of the migration. Figure [*] shows the maximum semblance for a 2-D marine dataset. Note the highly variable nature of the maxima. Figure [*] shows the envelope of the image, the maximum semblance value, and the weight operator (the $\bf E$, $\bf S$, and $\bf W$ arrays in equation (4)). Figure [*] shows the resulting smooth semblance values. The resulting model is smoother and more reasonable than the unsmoothed field (Figure [*]), but still has problems. Specifically, some of our original data picks are suspicious. If you look at the top-left portion of Figure [*], you will note large moveout values at shallow depths, which seem unreasonable. In addition, Figure [*] shows the CRP gather at x=7 km; note the events at about 3 km depth. They look flat, but both the data (Figure [*]) and our model (Figure [*]) show significant moveout there. Both areas suffer from the poor-data-pick problem mentioned above.
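Fitting goals (3) with the weight of equation (4) amount to a weighted, regularized least-squares smoothing of the picks. A 1-D sketch with synthetic picks and a synthetic weight (here $\bf A$ is a first-difference roughener, and all values are illustrative):

```python
import numpy as np

# 1-D sketch of fitting goals (3): 0 ~ W(d - m), 0 ~ eps A m, with A a
# first-difference roughener.  The picks d and weight W are synthetic;
# in the paper W = S E (semblance times image envelope).
n = 100
x = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(1)
d = 1.0 + 0.1 * np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(n)  # noisy gamma picks
W = np.clip(1.0 - 2.0 * np.abs(x - 0.5), 0.1, None)  # low confidence mid-section

A = np.diff(np.eye(n), axis=0)               # (n-1) x n roughening operator
eps = 2.0                                    # smoothing weight

# Stack both goals into one least-squares system and solve for the model m.
G = np.vstack([np.diag(W), eps * A])
b = np.concatenate([W * d, np.zeros(n - 1)])
m, *_ = np.linalg.lstsq(G, b, rcond=None)
```

The model honors the picks where the weight is high and leans on the smoothing constraint where it is low, which is exactly the behavior the weight operator is meant to provide.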

 
data0
Figure 4
Maximum semblance values at model locations. Note that gamma=1. indicates no moveout.
data0
view burn build edit restore

 
wt
wt
Figure 5
Left panel is the envelope of the migrated image. The center panel is the maximum semblance at each model location. The right panel is a combination of the two. These are the three quantities in equation (4) used in fitting goals (3).
view burn build edit restore

 
model0
Figure 6
Model estimated using fitting goals (3), the data shown in Figure [*] and the weight function shown in Figure [*].
model0
view burn build edit restore

 
bad
Figure 7
A CRP gather at X=7.3km. Note the reflection at 3km. The reflection is flat but the artifacts cause the maximum semblance (Figure [*]) and the estimated model (Figure [*]) to indicate that residual moveout exists.
bad
view burn build edit restore

We can address the problem of bad maxima by repicking our maximum semblance values to lie within some tolerance range around our estimated semblance values, then re-solving the inversion problem. Figure [*] shows the new data picks with the range limited around the estimated model. We can repeat this procedure several times, continually decreasing the range within which we accept new data values. This approach has similarities to simulated annealing: we begin by searching the entire range of our space, trying to get close to the correct solution, and later limit our search space to recover the finer features. Using the same analogy, it makes sense to also make $\epsilon$ a function of iteration. At early iterations we use a large $\epsilon$, and by reducing $\epsilon$ as a function of non-linear iteration we recover the finer features of the model. Figure [*] shows the results after repeating this procedure five times. Note the absence of the problem seen in Figure [*]. We have successfully honored the data where we have good semblance and strong reflectors, without introducing artifacts in regions where data quality is more questionable.
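The repicking loop can be sketched as follows. The semblance panels, window radii, and $\epsilon$ schedule are all synthetic stand-ins, and for brevity the inversion uses an unweighted version of fitting goals (3):

```python
import numpy as np

# Sketch of the nonlinear repicking loop: pick, smooth, repick within a
# shrinking window around the model while reducing eps.  Semblance panels
# are synthetic toys; some locations carry a spurious peak at gamma=0.82.
rng = np.random.default_rng(2)
n, ng = 80, 51
gammas = np.linspace(0.8, 1.2, ng)
gamma_true = 1.0 + 0.1 * np.linspace(0.0, 1.0, n)    # slowly varying moveout

semb = np.exp(-((gammas[None, :] - gamma_true[:, None]) / 0.02) ** 2)
bad = rng.random(n) < 0.15                           # noisy locations
semb[bad] += 2.0 * np.exp(-((gammas - 0.82) / 0.02) ** 2)

def pick(center, radius):
    """Max-semblance gamma within +/- radius of the current model."""
    mask = np.abs(gammas[None, :] - center[:, None]) <= radius
    return gammas[np.argmax(np.where(mask, semb, -np.inf), axis=1)]

def smooth_solve(d, eps):
    """Unweighted version of fitting goals (3) with a difference roughener."""
    A = np.diff(np.eye(n), axis=0)
    G = np.vstack([np.eye(n), eps * A])
    b = np.concatenate([d, np.zeros(n - 1)])
    return np.linalg.lstsq(G, b, rcond=None)[0]

m = smooth_solve(pick(np.full(n, 1.0), np.inf), eps=10.0)  # global picks first
for radius, eps in [(0.08, 5.0), (0.05, 3.0), (0.03, 2.0)]:
    m = smooth_solve(pick(m, radius), eps)                 # repick near the model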

 
data-new
Figure 8
The result of limiting the range of acceptable data values to around a reasonable range defined from solving fitting goals (3) once.
data-new
view burn build edit restore

 
model-final
Figure 9
Our final estimated solving fitting goals (3) several times reducing the range of acceptable data values, and decreasing or smoothing constraints.
model-final
view burn build edit restore



Stanford Exploration Project
7/8/2003