Quantified Quality Assurance (QQA) with Reverse DMO

ABSTRACT

Good experimental data should lead to a model from which the data can be predicted by forward modeling. One can employ Reverse DMO as a modeling operator to quantify the quality of given data; Reverse DMO applied to the estimated zero-offset cube should predict the given data. Any difference between such predicted data and given data is due to some combination of noise in the data, inaccurate positioning, and sub-optimal processing.

QUANTIFIED QUALITY ASSURANCE WITH FORWARD MODELING

If we drop a ball from various levels of the tower of Pisa and measure the time it takes to reach the ground, we expect that time to be proportional to the square root of the height. If it is not, then either we have a faulty watch, or a faulty detector for the start and end events, or we have failed to take another factor (such as air friction) into account. If the time is proportional to the square root of the height, chances are that our data are good, although there remains some small risk that different errors have cancelled each other. Similarly, when we send waves into the ground and record the echoes, we expect to be able to come up with an earth model from which we can predict the data. This earth model may be, for example, the acoustic impedance as a function of depth, together with the instruments' response, or wavelet. If we can predict the recorded data with a synthetic seismogram generator, it probably means that the acoustic impedance model is good. The smaller the difference between predicted and given data, the higher the quality of the model. The so-called `stack' is an estimate of the wavefield one would record if one had a well-sampled zero-offset survey. The stack is a less ambitious, but more robust, alternative to the acoustic impedance as a function of depth.
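The "smaller the difference" criterion above can be made quantitative. A minimal sketch of one such measure, normalized residual energy, is given below; the function name and the choice of an energy norm are illustrative assumptions, not part of the original text.

```python
import numpy as np

def relative_prediction_error(predicted, recorded):
    """Normalized residual energy ||predicted - recorded||^2 / ||recorded||^2.

    0 means a perfect prediction; values near 1 mean the model
    predicts almost none of the recorded energy.
    (Hypothetical QQA measure, for illustration only.)
    """
    predicted = np.asarray(predicted, dtype=float)
    recorded = np.asarray(recorded, dtype=float)
    residual = predicted - recorded
    return np.sum(residual ** 2) / np.sum(recorded ** 2)

# A perfect prediction gives zero error; a 10% amplitude error alone
# gives a small residual energy of about 0.01.
d = np.sin(np.linspace(0.0, 10.0, 1000))
print(relative_prediction_error(d, d))        # 0.0
print(relative_prediction_error(0.9 * d, d))  # ~0.01
```

The same quantity can of course be computed per shot, per receiver, or per time window, which is what makes the error attribution discussed below possible.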
If we apply reverse DMO and reverse NMO to the stack, extrapolating it outward in offset to the offsets (and azimuths) of the recorded data, we can quantify the quality of the stack and of the data. If there is a large error, it might be due to:

1. Noise in the data
2. Wrong positioning
3. Sub-optimal processing

If the error is associated with certain shots or certain receivers, it is a hint that those stations are mispositioned or noisy. If the error is associated with a certain type of event, such as high dips, it is a hint that our processing has been sub-optimal, for example through neglect of amplitude versus offset and/or of anisotropy. If the error is associated with events under a salt dome, it probably means that we failed to estimate the velocity, or failed to take it into account in the processing. There is little risk that our modeling errors would undo our processing errors, leading to an over-optimistic quality estimate, because the redundancy in the data implies a model which is much smaller in size yet equal in information content to the data. In other words, if, for example, we neglect anisotropy, we lose the dipping reflectors, and no isotropic modeling can bring them back when we attempt to predict the data from our model. On the other hand, passing a modeling test successfully does not guarantee good spatial coverage; if the data are completely blind to a certain part of the earth, then the forward modeling test will be blind to that part of the model. In other words, it will lie in the `null space'. The modeling test is still valuable, however, to test the equalization of wave propagation operators such as DMO and full prestack migration; if the coverage is irregular but still salvageable by optimal processing, only amplitude-preserving processing, taking the irregular geometry into account, can produce a good zero-offset model from which the data can be predicted. Consumers of seismic data find such data-prediction tests useful.
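The reverse-NMO part of the extrapolation above can be sketched as follows, assuming a constant velocity and the standard hyperbolic moveout t(x) = sqrt(t0^2 + (x/v)^2); the function name, linear interpolation, and blanking convention are illustrative assumptions, not the original implementation.

```python
import numpy as np

def reverse_nmo(zero_offset_trace, dt, offset, velocity):
    """Predict an offset trace from a zero-offset trace (hyperbolic moveout).

    A sample at zero-offset time t0 appears at offset time
    t = sqrt(t0^2 + (offset/velocity)^2); we build the offset trace by
    mapping each output time back to t0 and interpolating linearly.
    (Illustrative sketch; a production reverse NMO would also handle
    stretch and anti-alias filtering.)
    """
    n = len(zero_offset_trace)
    t = np.arange(n) * dt                          # output (offset) times
    t0 = np.sqrt(np.maximum(t**2 - (offset / velocity) ** 2, 0.0))
    out = np.interp(t0 / dt, np.arange(n), zero_offset_trace)
    out[t * velocity < offset] = 0.0               # no t0 exists before x/v
    return out

# A spike at t0 = 1.0 s (dt = 10 ms), moved out to 1000 m offset at
# 2000 m/s, should arrive near t = sqrt(1 + 0.5**2) ~ 1.118 s.
trace = np.zeros(200)
trace[100] = 1.0
pred = reverse_nmo(trace, 0.01, 1000.0, 2000.0)
print(np.argmax(pred) * 0.01)  # peak time near 1.118 s
```

Reverse DMO would then map each such offset trace from zero-dip to dip-dependent traveltimes; the QQA comparison is made between these predicted traces and the recorded ones.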
In the absence of such tests, consumers normally resort to riskier quality estimates, such as the reputation of the seismic data vendor (usually as measured by synthetics made by the consumer's research group) or visual inspection of the model, looking for expected geological features and for unexpected footprints and ghosts.

REVERSE DMO

Most attention so far has been on accurate Forward DMO programs (Black & Schleicher, Bagaini, ...?) and not on Reverse DMO. The few who paid attention to Reverse DMO (Ronen et al., 1991; Fomel and Bleistein, SEP 92) were either theoretical or motivated by the need to process data with irregular geometry. QQA may give further motivation to explore Reverse DMO. So far, my initial attempts to predict data from stacks have been less than satisfactory; I had relative errors above and beyond what I expected from accumulated round-off errors, even when the data were noise-free synthetics with accurate positioning and adequate spatial sampling. This probably means that either my processing or my modeling (or both) was not amplitude preserving. I hope to investigate this further on my visit to SEP.

THE IMPORTANCE OF LOG-STRETCH F-X-Y IMPLEMENTATION

DMO and Reverse DMO are time-consuming processes, and may not be suitable for on-board QQA. However, implemented in the frequency domain following a log stretch, the quality can be quantified from a small number of frequencies, to which DMO and Reverse DMO can be applied on board. This means that we are most interested in amplitude-preserving log-stretch DMO.

AVO

If the reflection coefficient of one or more reflectors depends on the angle of incidence, creating an amplitude-versus-offset effect, DMO-stacking will fail to image the reflector with the proper amplitude, and Reverse DMO will not be able to reproduce the amplitude versus offset of the data.
If this happens, the indication would be that while most reflectors (those without a strong AVO variation) are predicted well, some (those with a strong AVO variation) would show an error associated with the offset. In this case, the error may itself be a useful seismic attribute for predicting the rock properties above and below the reflector. In such cases, the model required to carry enough information for the data to be predicted cannot be the zero-offset stack. It should probably be migrated and parameterized by both the AVO intercept and gradient. DMO may still be used to produce such a model, if properly applied on offset groups before migration, and with AVO analysis replacing stacking after migration. Even in such cases, Reverse DMO can provide reliable QQA, as long as there are enough reflectors without a strong AVO variation to confirm the quality of the data and processing.
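Estimating the AVO intercept and gradient mentioned above is commonly done with a two-term (Shuey-type) approximation, R(theta) ~ A + B sin^2(theta), fit by least squares over the available incidence angles. The sketch below assumes that approximation; the function name and synthetic check are illustrative, not from the original text.

```python
import numpy as np

def avo_intercept_gradient(amplitudes, angles_deg):
    """Least-squares fit of R(theta) ~ A + B * sin^2(theta).

    Returns (A, B): the AVO intercept and gradient.
    (Two-term Shuey-type approximation, assumed for illustration.)
    """
    s2 = np.sin(np.radians(angles_deg)) ** 2
    G = np.column_stack([np.ones_like(s2), s2])   # design matrix [1, sin^2]
    coef, *_ = np.linalg.lstsq(G, np.asarray(amplitudes, dtype=float),
                               rcond=None)
    return coef[0], coef[1]

# Synthetic check: amplitudes generated with A = 0.1, B = -0.3
# should be recovered by the fit.
angles = np.array([0.0, 10.0, 20.0, 30.0])
amps = 0.1 - 0.3 * np.sin(np.radians(angles)) ** 2
A, B = avo_intercept_gradient(amps, angles)
print(A, B)  # close to 0.1 and -0.3
```

A model carrying (A, B) per migrated sample, rather than a single stacked amplitude, is one concrete form of the "intercept and gradient" parameterization the text calls for.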