
INTRODUCTION

I am interested in 3-D prestack seismic reservoir characterization (see Lumley, 1991). I am actively seeking a 3-D prestack surface seismic data set, shot over an existing reservoir in a development and/or production stage. The general project goals can be outlined as the following steps:

Several items in each of these phases will likely involve some interaction with the Stanford Center for Reservoir Forecasting (SCRF), a consortium of geostatisticians and reservoir engineers, and the Stanford Rock and Borehole Project (SRB) rock physics consortium.

The critical first phase of this project, upon which all subsequent phases depend, requires a high resolution 3-D prestack migration velocity analysis. By ``high resolution'' I mean velocity estimates with finer 3-D spatial and temporal sampling than conventional surface seismic velocity analyses provide.

To understand the scale of the computational requirements for such a high resolution reservoir velocity analysis, consider a small 3-D reservoir survey with a surface migration image area of about 4 km^2. Given a CMP spacing of 25 m both in- and cross-line (after interpolation), this represents about 6400 full-fold CMPs. If the fold is 48, the entire 3-D prestack data set contains about 300,000 traces. Given a sampling interval of 2 ms and a 4 s record length, the data set requires 2.5 Gbytes of storage. If we migrate 48 velocities at each of the 6400 CMP bin locations, producing a migration velocity analysis at every 25 m spatial interval, and require 50 flops per input trace per output pixel, the total flop count is about 10^16, or 10,000 Teraflops. Implementing the required algorithm on a supercomputer at 1 Gflop/s would require about 4 cpu months to complete the job! Clearly, we need to decrease this cpu time by at least an order of magnitude.

The run time could be reduced by: (a) a sparser spatial sampling of the velocity analysis, (b) a sparser temporal velocity resolution, (c) decreasing the number of migration velocities using a priori knowledge of stacking velocities, (d) decreasing the number of input seismic traces, and (e) using a faster computer and/or algorithm. For example, halving the number of migration velocities to 24, decimating the input traces to 8 ms, and performing the velocity analysis every 100 m reduces the job described above to about 24 cpu hours, which is much more manageable.
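To make these estimates concrete, here is a small back-of-the-envelope sketch in Python using the survey parameters quoted above. The simple flop model (output pixels times input traces times 50 flops) and the assumption that decimating the input to 8 ms also coarsens the output time sampling by a factor of four are illustrative only.

    # Back-of-the-envelope cost model for the full and reduced velocity
    # analysis jobs described above (parameters taken from the text).

    def flops(n_cmp, n_vel, n_samp, n_traces, flops_per_trace_pixel=50):
        # Output pixels (CMP locations x trial velocities x time samples)
        # times contributing input traces times flops per trace per pixel.
        return n_cmp * n_vel * n_samp * n_traces * flops_per_trace_pixel

    n_cmp = 6400                  # 4 km^2 of 25 m x 25 m CMP bins
    n_traces = n_cmp * 48         # 48-fold: about 300,000 input traces
    gflop_per_s = 1e9             # assumed 1 Gflop/s supercomputer

    # Full job: 48 velocities, 4 s record at 2 ms sampling.
    full = flops(n_cmp, 48, 2000, n_traces)
    print(full, full / gflop_per_s / 86400, "cpu days")    # ~1e16 flops, ~110 cpu days

    # Reduced job: 24 velocities, 8 ms sampling, analysis every 100 m
    # (16 times fewer analysis locations).
    reduced = flops(n_cmp // 16, 24, 2000 // 4, n_traces)
    print(reduced, reduced / gflop_per_s / 3600, "cpu hours")  # ~7e13 flops, ~20 cpu hours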

Due to the large computational effort required by migration velocity analysis, I decided to implement the process on our Connection Machine. The first section of this paper discusses the stacking velocity analysis I developed on the CM. The next section describes the CM prestack time migration velocity analysis. Both methods are based on the Kirchhoff approach. After introducing each method and its implementation on the CM, I show an application of each to a 2-D marine data set from the Gulf of Mexico that exhibits a bright spot, and discuss pertinent run times and I/O issues.
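As a point of reference before those sections, the following is a minimal NumPy sketch of a conventional semblance-based stacking velocity scan applied to a single CMP gather. The function name, array layout, and single-sample semblance window are illustrative assumptions; this is not the Kirchhoff-based CM code described in the next sections.

    import numpy as np

    def semblance_scan(gather, offsets, dt, velocities):
        # Conventional stacking velocity scan on one CMP gather: for each
        # trial velocity, apply hyperbolic normal moveout and measure the
        # stack coherence (semblance) at every zero-offset time.
        #   gather    : (n_samp, n_off) array, one trace per offset
        #   offsets   : offset of each trace (m)
        #   dt        : sample interval (s)
        #   velocities: trial stacking velocities (m/s)
        gather = np.asarray(gather, dtype=float)
        offsets = np.asarray(offsets, dtype=float)
        n_samp, n_off = gather.shape
        t0 = np.arange(n_samp) * dt
        panel = np.zeros((n_samp, len(velocities)))
        for iv, v in enumerate(velocities):
            # Hyperbolic moveout: t(x) = sqrt(t0^2 + (x / v)^2)
            t = np.sqrt(t0[:, None] ** 2 + (offsets[None, :] / v) ** 2)
            idx = np.rint(t / dt).astype(int)
            live = idx < n_samp     # moveout times beyond the record are muted
            nmo = np.where(live, gather[np.clip(idx, 0, n_samp - 1), np.arange(n_off)], 0.0)
            # Single-sample semblance; a practical code would smooth over a short window.
            panel[:, iv] = nmo.sum(axis=1) ** 2 / (n_off * (nmo ** 2).sum(axis=1) + 1e-12)
        return panel

Picking, at each time, the velocity of maximum semblance yields a stacking velocity function; the Kirchhoff-based CM implementations described in the following sections differ from this per-gather scan in both formulation and scale.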

