I will now address some questions that are important but whose answers may not be immediately obvious, or may be a matter of conjecture.
In 2-D, with today's recording equipment offering thousands of available channels, it may seem reasonable to consider a ``brute force'' approach in which receivers for the whole line are deployed on the ground at the beginning of the acquisition and offsets 2, 3 or 5 times larger than the maximum target depth are recorded for every shot. This is clearly overkill, and it relies on the assumption that during processing the useful offsets will be ``sorted out.'' The problem is that, at the processing stage, it may be too time consuming to determine which traces to actually keep for each shot or CMP. As a result, we may end up with many essentially useless traces getting in the way of efficient processing.
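To make the scale of the brute-force approach concrete, a back-of-the-envelope channel count can be sketched as follows. The target depth, offset multiplier, and receiver spacing below are illustrative assumptions, not values from any particular survey:

```python
# Rough channel-count estimate for a brute-force 2-D layout.
# All numbers are illustrative assumptions.
target_depth_m = 5000        # maximum target depth
offset_multiplier = 3        # record offsets up to 3x the target depth
receiver_spacing_m = 25      # receiver group interval

max_offset_m = offset_multiplier * target_depth_m           # 15000 m
# Receivers live on both sides of each shot along the full line:
channels_per_shot = 2 * max_offset_m // receiver_spacing_m  # 1200

print(f"max offset: {max_offset_m} m")
print(f"live channels per shot: {channels_per_shot}")
```

Even this modest example requires over a thousand live channels per shot, most of which contribute offsets well beyond what imaging the target needs.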
In 3-D the brute-force approach will not work at all for any reasonably large survey because of the sheer volume of equipment it would require. Even if the recording equipment itself can handle all the channels, there are still the problems of the cables, the recording boxes, the communication channels, the antennas, the batteries, and so on.
The intermediate traces, although possibly contributing redundant illumination, will be useful for random noise suppression, for velocity computations, and for the offset sampling necessary for prestack migration. Besides, they will not require any significant extra acquisition effort.
This is an open question for which I do not have a definitive answer yet. The idea is to include the sampling requirements for prestack migration as constraints in the inversion process, so that additional receiver or shot positions are considered in order to satisfy those constraints. The question of how to do that, especially in 3-D, is an interesting research issue.
For 3-D the situation is more challenging but also much more interesting and useful. We now have not only the degrees of freedom afforded by the choice of offsets but also those afforded by the choice of azimuths. Besides, the basic geometry template can itself be considered a design parameter that, contrary to common practice, can change spatially. I anticipate that the inversion process will be extremely difficult and strongly non-unique.
A more philosophical question, but one with important implications, is this: what kind of data would we regard as ideal from the point of view of imaging? That is, assuming we have no logistical or economic restrictions whatsoever (except that the data can only be acquired at the surface), what would the ideal data be?
For the standard approach, some characteristics of this ideal data immediately come to mind: data in a very fine regular grid with a very large aperture. This is fine, and would provide us with a good image. I believe, however, that the ideal data would be data with very fine, regular subsurface illumination, with aperture being a function of the illumination requirements.
That subsurface illumination is an important attribute of a good design is not new. In fact, most commercial software packages for seismic survey design offer an option to trace rays into the subsurface for a given design and produce illumination maps of the targets of interest. The maps obtained with different designs are compared, and this information is taken into account when deciding which design is best; alternatively, changes may be introduced to the designs and the process iterated. This is an example of the forward problem. What I propose is to base the survey design on the inverse problem: start with an initial model and choose the layout of sources and receivers to obtain optimum illumination for that model.
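As a minimal sketch of this forward problem, consider a single flat reflector under straight-ray geometry, where the reflection point of each source-receiver pair is simply its surface midpoint; the illumination map then reduces to a fold count per subsurface bin. The layout and bin parameters below are illustrative assumptions, not values from any real design:

```python
from collections import Counter

def illumination_fold(sources, receivers, bin_size):
    """Count source-receiver midpoints falling in each subsurface bin.

    Toy forward model: for a single flat reflector and straight rays,
    the reflection point of a source-receiver pair is its midpoint,
    so illumination is just the fold (trace count) per midpoint bin.
    """
    fold = Counter()
    for s in sources:
        for r in receivers:
            midpoint = 0.5 * (s + r)
            fold[int(midpoint // bin_size)] += 1
    return fold

# A small 2-D layout: shots every 100 m, receivers every 50 m
# (illustrative positions in metres along the line).
sources = range(0, 1000, 100)
receivers = range(0, 1000, 50)
fold = illumination_fold(sources, receivers, bin_size=25)

# Competing designs can then be compared by the uniformity of their
# fold maps, e.g. the min/max fold over the bins of interest.
print("fold range:", min(fold.values()), "-", max(fold.values()))
```

In the inverse formulation I propose, one would instead fix a target illumination over the bins and search for the source and receiver positions that best achieve it.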