
Discussion

The real difference between the standard and the proposed methodology will only surface when the data are imaged before stack; only then do the larger offsets associated with the steeper dips and the improved illumination really come into play. In this paper I chose to image the data post-stack purely for simplicity. It should also be apparent that the differences between the standard and the proposed approach can be expected to be an order of magnitude larger for 3-D data.

I will now address some questions that are important and whose answers may not be immediately obvious, or may still be a matter of conjecture.

1.
Why not simply use all the available receivers for each shot and forget about optimum designs?

In 2-D, with today's recording equipment offering thousands of available channels, it may be reasonable to consider a ``brute force'' approach in which the receivers for the whole line are deployed on the ground at the beginning of the acquisition and offsets 2, 3 or 5 times larger than the maximum target depth are recorded for every shot. This is clearly overkill and relies on the assumption that the useful offsets will be ``sorted out'' during processing. The problem is that, at processing time, it may be too time consuming to determine which traces to actually keep for each shot or CMP. As a result, we may end up with many essentially useless traces getting in the way of efficient processing.

In 3-D the brute-force approach will not work at all for any reasonably large survey because of the sheer volume of equipment it would require. Even if the recording equipment itself can handle all the channels, there are still the problems of the cables, the recording boxes, the communication channels, the antennas, the batteries, and so on.

2.
What are the implications of the proposed methodology for the logistics of operation?

The logistics of operation need not be strongly affected, since the receivers will be deployed as usual and the recording equipment will electronically connect and disconnect the required receiver stations for each shot based on the information in the geometry (SPS) files. Thus, the fact that the template may change from shot to shot (or from salvo to salvo in 3-D) is not a significant logistical drawback.
3.
Why acquire the intermediate traces? If we have already computed the optimum positions for sources and receivers so as to obtain ``perfect'' illumination, what valuable information can there be in the intermediate traces? Why go to all the trouble of finding optimum receiver positions if we are going to use the intermediate receivers as well?

The intermediate traces, although possibly contributing redundant illumination, will be useful for random-noise suppression, for velocity computations, and for the offset sampling required by prestack migration. Besides, they will not require any significant extra acquisition effort.

4.
What are the implications of the non-uniform offset distribution for prestack migration of the data? Can we guarantee that there will not be spatial aliasing in the offset dimension?

This is an open question for which I do not yet have a definitive answer. The idea is to include the sampling requirements for prestack migration as constraints in the inversion process, so that additional receiver or shot positions are considered in order to satisfy them (see the sketch that follows this list). The details of how to do that, especially in 3-D, are an interesting research issue.

5.
What would be the situation for 3-D acquisition?

For 3-D the situation is more challenging, but also much more interesting and useful. We now have not only the degrees of freedom afforded by the choice of offsets, but also those afforded by the choice of azimuths. In addition, the basic geometry template can itself be considered a design parameter that, unlike common practice, can change spatially. I anticipate that the inversion process will be extremely difficult and strongly non-unique.
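To make the constraint idea in question 4 more concrete, the following minimal Python sketch checks and repairs the offset sampling of a single CMP gather. It assumes a made-up set of offsets produced by an illumination-only design, a hypothetical maximum allowable offset gap standing in for the prestack-migration sampling requirement, and a simple gap-filling rule; it only illustrates how a sampling constraint could trigger additional receiver positions, and is not the inversion proposed in this paper.

import numpy as np

# Hypothetical sketch: check the offset sampling of one CMP produced by an
# illumination-only design, and add offsets until the largest gap satisfies
# an assumed anti-aliasing limit.  All names and numbers are illustrative.

def max_offset_gap(offsets):
    """Largest gap between consecutive recorded offsets (meters)."""
    o = np.sort(np.asarray(offsets, dtype=float))
    return np.max(np.diff(o)) if o.size > 1 else np.inf

def enforce_offset_sampling(offsets, max_gap):
    """Insert extra offsets inside any gap that exceeds max_gap.

    This mimics adding receiver (or shot) positions to satisfy a sampling
    constraint on top of the illumination-driven design.
    """
    o = sorted(offsets)
    filled = [o[0]]
    for a, b in zip(o[:-1], o[1:]):
        gap = b - a
        if gap > max_gap:
            n_extra = int(np.ceil(gap / max_gap)) - 1
            filled.extend(np.linspace(a, b, n_extra + 2)[1:-1])
        filled.append(b)
    return filled

# Offsets (m) for one CMP from a hypothetical illumination-only design.
design_offsets = [150.0, 400.0, 425.0, 1100.0, 1900.0, 2000.0]
gap_limit = 250.0   # assumed sampling requirement for prestack migration

print("max gap before:", max_offset_gap(design_offsets))
constrained = enforce_offset_sampling(design_offsets, gap_limit)
print("max gap after: ", max_offset_gap(constrained))

In the real problem, of course, the extra positions would have to be fed back into the illumination inversion rather than inserted blindly at gap midpoints.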

A more philosophical question, but one with important practical implications, is: what kind of data would we regard as ideal from the point of view of imaging? That is, assuming that we have no logistical or economic restrictions whatsoever (except that the data can only be acquired at the surface), what would the ideal data be?

For the standard approach, some characteristics of this ideal data immediately come to mind: data recorded on a very fine, regular surface grid with a very large aperture. This is fine, and would provide us with a good image. I believe, however, that the truly ideal data would be data with very fine, regular subsurface illumination, with the aperture determined by the illumination requirements.

The idea that subsurface illumination is an important attribute of a good design is not new. In fact, most commercial software for seismic survey design offers an option to trace rays into the subsurface for a given design and produce illumination maps of the targets of interest. The maps obtained with different designs are compared, and this information is taken into account when deciding which design is best; alternatively, changes are introduced to the designs and the process is iterated. This is an example of the forward problem. What I propose is to base the survey design on the inverse problem: start with an initial model and choose the layout of sources and receivers that yields optimum illumination for that model.
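The following Python sketch contrasts the two viewpoints under strong simplifying assumptions: ray tracing is replaced by a crude straight-ray proxy over a flat target (each source-receiver pair illuminates the subsurface bin below its midpoint), and the proposed inversion is replaced by a greedy search that selects receiver stations so as to keep the illumination map as uniform as possible. All positions, grids, and the selection criterion are illustrative assumptions, not the algorithm of this paper.

import numpy as np

# Crude illumination proxy: flat reflector, straight rays, so each
# source-receiver pair illuminates the bin under its midpoint.

def illumination_map(sources, receivers, bins):
    """Forward problem: hit count per subsurface bin for a fixed layout."""
    hits = np.zeros(len(bins) - 1)
    for s in sources:
        mids = 0.5 * (s + np.asarray(receivers))
        hits += np.histogram(mids, bins=bins)[0]
    return hits

def design_receivers(sources, candidates, bins, n_select):
    """Inverse problem (greedy stand-in): pick receiver stations that keep
    the illumination map as uniform as possible."""
    chosen = []
    for _ in range(n_select):
        best, best_score = None, None
        for c in candidates:
            if c in chosen:
                continue
            trial = illumination_map(sources, chosen + [c], bins)
            score = trial.std()   # smaller std = more uniform illumination
            if best_score is None or score < best_score:
                best, best_score = c, score
        chosen.append(best)
    return sorted(chosen)

sources = np.linspace(0.0, 4000.0, 9)        # shot x-positions (m), made up
candidates = np.linspace(0.0, 4000.0, 81)    # possible receiver stations
bins = np.linspace(0.0, 4000.0, 21)          # subsurface bins at the target

layout = design_receivers(sources, list(candidates), bins, n_select=12)
print("chosen receivers:", layout)
print("illumination map:", illumination_map(sources, layout, bins))

The forward routine corresponds to the illumination maps produced by commercial design software; the greedy loop is only a stand-in for a proper inverse formulation, which would also have to honor the offset- and azimuth-sampling constraints discussed above.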

