
Finding the prediction-error filter

The first stage of the least-squares estimation is computing the prediction-error filter; the second stage uses that filter to find the missing data. The input data space contains a mixture of known values and missing unknown ones. For the first stage, finding the filter, we generally have many more fitting equations than we need, so we can proceed by ignoring the fitting equations that involve missing data values. We ignore an equation wherever a missing input hits the filter, either at the filter's ``1'' or at one of its adjustable coefficients. To identify these undesirable fitting equations, we prepare a mask from two other masks. First we make a mask plane dfre, the size of the data plane, whose value is +1.0 where data is missing and 0.0 where data is known. Likewise we make a filter mask afre that equals +1.0 wherever there could be nonzero filter coefficients and 0.0 where filter coefficients vanish. (This mask is almost the same as the afre defined previously, but here we must also reject equations where the filter's ``1'' hits a missing data value. Such a fitting equation may be linear, yet it involves both the unknown filter and a missing data value under the filter's ``1''; here we seek only fitting equations for the filter.)

 

module misinput {	               # find a mask of missing filter inputs
  use helicon
contains
  subroutine find_mask( dd, aa, lag, misin) {
    real,    dimension( :), intent( in)  :: dd     # data, 0. where missing
    logical, dimension( :), intent( out) :: misin  # .true. where equation is invalid
    real,    dimension( :), pointer      :: aa     # filter coefficients
    integer, dimension( :), pointer      :: lag    # filter lags
    real,    dimension( size( dd))       :: rr, dfre
    integer                              :: stat
    where( dd == 0.)                   # mark missing data with 1.
       dfre = 1.
    elsewhere
       dfre = 0.
    end where
    aa = 1.                            # turn every filter coefficient "on"
    call helicon_init( aa, lag)
    stat = helicon_lop( .false., .false., dfre, rr)  # convolve mask with filter support
    misin = .false.
    where( rr > 0.)   misin = .true.   # equation touches a missing value
    aa = 0.                            # restore adjustable coefficients to zero
  }
}
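The idea behind find_mask() can be sketched in a few lines of Python. This is a hypothetical one-dimensional stand-in, not the book's code: numpy's transient convolution replaces the helix operator helicon, and the names find_mask, dd, and filter_support follow the text's conventions but are my own. An indicator of missing samples is convolved with an indicator of the filter's support (the ``1'' plus the adjustable coefficients); any equation whose result is positive touches a hole and is rejected.

```python
import numpy as np

def find_mask(dd, filter_support):
    """dd: data with 0.0 marking missing samples.
    filter_support: 1.0 where the filter (its lead "1" and its
    adjustable coefficients) can be nonzero, 0.0 elsewhere.
    Returns a boolean array, True where a fitting equation is invalid."""
    dfre = np.where(dd == 0.0, 1.0, 0.0)   # 1. where data is missing
    # transient convolution of the two masks (stands in for helicon_lop)
    rr = np.convolve(dfre, filter_support, mode='full')[:len(dd)]
    return rr > 0.0                        # equation touches a hole

dd = np.array([1., 2., 0., 3., 4., 5.])    # one missing sample at index 2
support = np.array([1., 1., 1.])           # 3-point filter support
print(find_mask(dd, support))              # True at equations touching the hole
```

Note that a single missing sample invalidates as many equations as the filter has coefficients, since the hole slides under each coefficient in turn.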

The codes here do not address the difficulty that too much data may be missing, so that all weights are zero. To add stabilization we could supplement the data volume with a synthetic ``training dataset'' or with a ``prior filter''. As things stand, if there is not enough data to specify a prediction-error filter, you will encounter the error exit from cgstep().

 
Figure 6 (hole90): Original data (left), with a zeroed hole (center), and as restored (right).

The synthetic data in Figure 6 is a superposition of two plane waves of different directions, each with a random (but low-passed) waveform. After punching a hole in the data, we find that the lost data is pleasingly restored, though a bit weak near the side boundary. This imperfection could result from the side-boundary behavior of the operator or from an insufficient number of missing-data iterations.  

module pef {              # Find prediction-error filter (helix magic)
  use hconest
  use cgstep_mod
  use solver_mod
contains
  subroutine find_pef( yy, aa, lag, mx, niter) {
    integer,                intent( in)  :: niter  # number of iterations
    real,    dimension( :), pointer      :: yy     # input data
    logical, dimension( :), pointer      :: mx     # mask of missing inputs
    integer, dimension( :), pointer      :: lag    # filter lags
    real,    dimension( :), intent( out) :: aa     # estimated filter coefficients
    call hconest_init( mx, yy, lag)
    call solver( hconest_lop, cgstep, aa, -yy, niter)  # -yy: the "1" term moved to the rhs
    call cgstep_close()
  }
}
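The essence of find_pef() can likewise be sketched in Python. This is a hypothetical illustration under simplifying assumptions, not the book's solver: numpy's lstsq stands in for the conjugate-gradient iteration, the data is assumed complete (no mask), and the names find_pef, yy, and na are mine. It shows why the Fortran code passes -yy to the solver: each fitting equation yy[t] + sum_i a[i]*yy[t-1-i] ≈ 0 puts the filter's ``1'' output on the right-hand side with its sign flipped.

```python
import numpy as np

def find_pef(yy, na):
    """Estimate the na adjustable coefficients a of a prediction-error
    filter (1, a[0], ..., a[na-1]) by least squares, minimizing
    sum_t ( yy[t] + sum_i a[i]*yy[t-1-i] )**2 over a."""
    rows, rhs = [], []
    for t in range(na, len(yy)):
        rows.append([yy[t - 1 - i] for i in range(na)])
        rhs.append(-yy[t])          # the "1" term, moved to the rhs
    a, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return a

# A signal obeying yy[t] = 0.5*yy[t-1] exactly should give a ≈ [-0.5],
# so the filter (1, -0.5) annihilates it.
yy = [1.0]
for _ in range(50):
    yy.append(0.5 * yy[-1])
print(find_pef(np.array(yy), 1))
```

In the book's setting the same normal equations are reached iteratively by cgstep(), and the equations flagged by find_mask() are simply left out of the sum.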


Stanford Exploration Project
2/27/1998