
SPACE-VARIABLE DECONVOLUTION

Filters sometimes change with time and space: we observe signals whose spectrum changes with position. A filter that changes with position is called nonstationary. We need an extension of our usual convolution operator hconest [*]. Conceptually, little needs to change besides replacing aa(ia) by aa(ia,iy). But there is a practical problem. Fomel and I decided to clutter up the code somewhat to save a great deal of memory. This should matter to people solving multidimensional problems with big data sets.

Normally there are many fewer filter coefficients than data points, but here there are na times more: a filter that varies at every data point requires na times more memory than the data itself. To make the nonstationary helix code more practical, we now require the filters to be constant in patches. The data type for nonstationary filters (constant in patches) is introduced in module nhelix, which is a simple modification of module helix [*].

module nhelix {                            # Define nonstationary filter type
  use helix
  type nfilter {                                 #  nd  is data length.
    logical,       dimension(:), pointer :: mis  # (nd) boundary conditions
    integer,       dimension(:), pointer :: pch  # (nd) patches
    type( filter), dimension(:), pointer :: hlx  # (np) filters
  }
contains
  subroutine nallocate( aa, nh, pch) {         # allocate a filter
    type( nfilter)                       :: aa
    integer, dimension(:), intent( in)   :: nh, pch  
    integer                              :: ip, np, nd
    np = size( nh); allocate( aa%hlx( np))
    do ip = 1, np
       call allocatehelix( aa%hlx( ip), nh( ip))
    nd = size( pch); allocate( aa%pch( nd)) 
    aa%pch = pch
    nullify( aa%mis)                         # set null pointer for mis.
  }
  subroutine ndeallocate( aa) {              # destroy a filter
    type( nfilter) :: aa
    integer        :: ip
    do ip = 1, size( aa%hlx)
       call deallocatehelix( aa%hlx( ip))
    deallocate( aa%hlx, aa%pch)
    if( associated( aa%mis))                 # if logicals were allocated
        deallocate( aa%mis)                  # free them
  }
}
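For readers more comfortable outside Fortran, here is a minimal Python sketch of the same idea (the class names and fields are mine, chosen to mirror the Fortran nfilter type; nothing here is library code). It makes the memory argument concrete: storage falls from nd*na coefficients for a fully variable filter to nd patch indices plus np*na coefficients.

```python
# Hypothetical Python analogue of type(nfilter) from module nhelix.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class HelixFilter:                 # analogue of type(filter) in module helix
    lag: List[int]                 # lag(nh): helix lags of the coefficients
    flt: List[float]               # flt(nh): the filter coefficients

@dataclass
class NFilter:                     # analogue of type(nfilter) in module nhelix
    pch: List[int]                 # pch(nd): patch number of every output point
    hlx: List[HelixFilter]         # hlx(np): one filter per patch
    mis: Optional[List[bool]] = None   # (nd) boundary conditions, if set

# nd data points, np patches, nh coefficients per filter.
nd, np_, nh = 1000, 10, 5
aa = NFilter(pch=[i * np_ // nd + 1 for i in range(nd)],
             hlx=[HelixFilter(lag=list(range(1, nh + 1)),
                              flt=[0.0] * nh) for _ in range(np_)])
print(len(aa.pch), len(aa.hlx))    # 1000 10
```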

What is new is the integer-valued vector pch(nd), whose size is that of the one-dimensional (helix) output data space. Every filter output point is assigned to a patch, and all output points with the same patch number share the same filter. Nonstationary helixes are created with createnhelixmod, which is a simple modification of module createhelixmod [*].

module createnhelixmod {      # Create non-stationary helix filter lags and mis
use createhelixmod
use nhelix
use nbound
contains
  function createnhelix( nd, center, gap, na, pch) result (nsaa) {
    type( nfilter)                     :: nsaa   # needed by nhelicon
    integer, dimension(:), intent(in)  :: nd, na # data and filter axes
    integer, dimension(:), intent(in)  :: center # normally (na1/2,na2/2,...,1)
    integer, dimension(:), intent(in)  :: gap    # normally ( 0,   0,  0,...,0)
    integer, dimension(:), intent(in)  :: pch    # (prod(nd)) patching
    type( filter)                      :: aa
    integer                            :: n123, np, ip
    integer, dimension(:), allocatable :: nh
    aa = createhelix( nd, center, gap, na)
    n123 = product( nd);  np = maxval(pch)
    allocate (nh (np)); nh = size (aa%lag)
    call nallocate( nsaa, nh, pch)
    do ip = 1, np
       nsaa%hlx( ip)%lag = aa%lag
    call deallocatehelix( aa)
    call nboundn( 1, nd, na, nsaa)
  }
}

Notice that the user must define the pch(product(nd)) vector before creating a nonstationary helix. For a simple 1-D time-variable filter, pch would presumably be something like $(1,1,2,2,3,3,\cdots)$. For multidimensional patching we need to think a little more.
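One plausible recipe for the multidimensional case is sketched below in Python (all names are mine, for illustration only): cut an n1 x n2 grid into p1 x p2 rectangular blocks and number the blocks. Along the fast axis it reproduces the $(1,1,2,2,3,3,\cdots)$ pattern from the 1-D case.

```python
# Hypothetical sketch: building pch for a 2-D grid cut into rectangular patches.
n1, n2 = 6, 4            # data grid dimensions
p1, p2 = 3, 2            # number of patches along each axis
pch = []
for i2 in range(n2):             # slow axis
    for i1 in range(n1):         # fast axis, matching 1-D helix ordering
        j1 = i1 * p1 // n1       # patch index along axis 1
        j2 = i2 * p2 // n2       # patch index along axis 2
        pch.append(j2 * p1 + j1 + 1)   # 1-based patch number
print(pch[:6])   # [1, 1, 2, 2, 3, 3]
```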

Finally, we are ready for the convolution operator. The operator nhconest [*] allows for a different filter in each patch.  

module nhconest {           # Nonstationary convolution, adjoint is the filter.
use nhelix
  real, dimension(:), pointer                 :: x
  type( nfilter)                              :: aa
  integer                                     :: nhmax
  integer, private                            :: np  
#% _init( x, aa, nhmax)
    np = size( aa%hlx)
#% _lop( a( nhmax, np), y(:))
    integer                        :: ia, ix, iy, ip
    integer, dimension(:), pointer :: lag
    do iy = 1, size( y) {  if( aa%mis( iy)) cycle
        ip = aa%pch( iy); lag => aa%hlx( ip)%lag
        do ia = 1, size( lag) {
            ix = iy - lag( ia);   if( ix < 1) cycle
            if( adj)    a( ia, ip) +=  y( iy)     * x( ix)
            else        y( iy)     +=  a( ia, ip) * x( ix)
            }
        }
}

A filter output y(iy) takes its filter from the patch ip=aa%pch(iy); the coefficients for the ipth patch occupy the column a(:,ip). If you are confused (as I am) about the difference between aa and a, maybe now is the time to look beyond Loptran at the Fortran version. [*]
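To make the loop structure concrete, here is a Python sketch of nhconest's inner loops (zero-based indices; the function name and argument list are mine, not the library's), followed by the dot-product test that any operator/adjoint pair should pass.

```python
# Hypothetical Python sketch of nonstationary convolution, adjoint is the filter.
def nhconest(adj, x, pch, lags, a, y):
    for iy in range(len(y)):
        ip = pch[iy]                         # which patch's filter applies here
        for ia, lag in enumerate(lags[ip]):
            ix = iy - lag
            if ix < 0:
                continue
            if adj:
                a[ip][ia] += y[iy] * x[ix]   # adjoint: correlate data with input
            else:
                y[iy] += a[ip][ia] * x[ix]   # forward: convolve patch by patch
    return a, y

# Dot-product test: <L a, d> should equal <a, L'd> to roundoff.
import random
random.seed(0)
nd, np_, nh = 20, 2, 3
x = [random.random() for _ in range(nd)]
pch = [0]*10 + [1]*10                        # two patches of ten points
lags = [[1, 2, 3]] * np_
a = [[random.random() for _ in range(nh)] for _ in range(np_)]
_, y = nhconest(False, x, pch, lags, a, [0.0]*nd)
d = [random.random() for _ in range(nd)]
at, _ = nhconest(True, x, pch, lags, [[0.0]*nh for _ in range(np_)], d)
lhs = sum(yi*di for yi, di in zip(y, d))
rhs = sum(a[i][j]*at[i][j] for i in range(np_) for j in range(nh))
print(abs(lhs - rhs) < 1e-10)   # True
```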

Because of the massive increase in the number of filter coefficients, allowing this many filters takes us from overdetermined to very underdetermined. We can estimate all these filter coefficients with the usual deconvolution fitting goal ([*])
\begin{displaymath}
\bold 0 \quad\approx\quad
\bold r \eq \bold Y \bold K \bold a +\bold r_0\end{displaymath} (15)
but we need to supplement it with some damping goals, say  
 \begin{displaymath}
\begin{array}{lll}
\bold 0 &\approx& \bold Y \bold K \bold a + \bold r_0
\\ \bold 0 &\approx& \epsilon\ \bold R \bold a
\end{array}\end{displaymath} (16)
where $\bold R$ is a roughening operator to be chosen.

Experience with missing data in Chapter [*] shows that when the roughening operator $\bold R$ is a differential operator, the number of iterations can be large. We can speed the calculation immensely by ``preconditioning''. Define a new variable $\bold m$ by $\bold a=\bold R^{-1}\bold m$ and insert it into (16) to get the equivalent preconditioned system of goals.
      \begin{eqnarray}
\bold 0 &\approx & \bold Y \bold K \bold R^{-1}\bold m \\
\bold 0 &\approx & \epsilon \ \bold m
\end{eqnarray} (17) (18)

The fitting (17) uses the operator $\bold Y\bold K\bold R^{-1}$. For $\bold Y$ we can use subroutine nhconest() [*]; for the smoothing operator $\bold R^{-1}$ we can use nonstationary polynomial division with operator npolydiv():

module npolydiv {			  # Helix polynomial division
use nhelix
integer                           :: nd
type( nfilter)                    :: aa
real, dimension (nd), allocatable :: tt
#%  _init ( nd, aa)
#%  _lop  ( xx, yy)
integer                        :: ia, ix, iy, ip
integer, dimension(:), pointer :: lag
real,    dimension(:), pointer :: flt
tt = 0.
if( adj) {
    tt = yy
    do iy = nd, 1, -1 { ip = aa%pch( iy)
        lag => aa%hlx( ip)%lag; flt => aa%hlx( ip)%flt
        do ia = 1, size( lag) {
            ix = iy - lag( ia);     if( ix < 1)  cycle
            tt( ix) -=  flt( ia) * tt( iy)
            }
        }
    xx += tt
} else {
    tt = xx
    do iy = 1, nd { ip = aa%pch( iy)
        lag => aa%hlx( ip)%lag; flt => aa%hlx( ip)%flt
        do ia = 1, size( lag) {
            ix = iy - lag( ia);      if( ix < 1)  cycle
            tt( iy) -=  flt( ia) * tt( ix)
            }
        }
    yy += tt
    }
}
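The back-substitution idea in the forward loop above can be sketched in Python (function name and variables are mine; a single patch suffices for the check) to confirm that polynomial division undoes convolution:

```python
# Hypothetical sketch of npolydiv's forward loop: y = x / A(Z) by
# back-substitution, with a possibly different filter in each patch.
def npolydiv_forward(x, pch, lags, flts):
    tt = list(x)
    for iy in range(len(tt)):
        ip = pch[iy]                      # which patch's filter applies here
        for lag, f in zip(lags[ip], flts[ip]):
            ix = iy - lag
            if ix < 0:
                continue
            tt[iy] -= f * tt[ix]          # subtract earlier outputs
    return tt

# Check: dividing by (1, .5) then convolving with (1, .5) restores the input.
x = [1.0, 2.0, -1.0, 0.5, 3.0]
pch = [0]*5                               # one patch covering everything
lags, flts = [[1]], [[0.5]]               # filter (1, 0.5): lag 1, coef 0.5
y = npolydiv_forward(x, pch, lags, flts)
back = [y[i] + (0.5*y[i-1] if i > 0 else 0.0) for i in range(5)]
print(all(abs(b - xi) < 1e-12 for b, xi in zip(back, x)))   # True
```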

Now we have all the pieces we need. Just as we previously estimated stationary filters with module pef [*], we can now estimate nonstationary PEFs with module npef [*]. The steps are hardly any different.

module npef {                                   # Estimate non-stationary PEF
  use nhconest
  use npolydiv2
  use cgstep_mod
  use solver_mod
contains
  subroutine find_pef( dd, aa, rr, niter, eps, nh) {
    integer,          intent( in)    :: niter, nh      # number of iterations
    real,             intent( in)    :: eps	       # epsilon
    type( nfilter)                   :: aa             # estimated PEF output.
    type( nfilter),   intent( in)    :: rr             # roughening filter.
    real,    dimension(:), pointer   :: dd             # input data
    integer                          :: ip, ih, np, nr # sizes and counters
    real, dimension (:), allocatable :: flt	       # np*nh filter coefs
    np = size( aa%hlx)			# number of patches
    nr = np*nh
    allocate( flt( nr))
    call nhconest_init( dd, aa, nh)
    call npolydiv2_init( nr, rr)
    call solver_prec( nhconest_lop, cgstep, x= flt, dat= -dd, niter= niter,
		      prec= npolydiv2_lop, nprec= nr, eps= eps)
    call   cgstep_close()
    call npolydiv2_close()
    call nhconest_close()
    do ip = 1, np
       do ih = 1, size( aa%hlx(ip)%lag)	
          aa%hlx( ip)%flt( ih) = flt( (ip-1)*nh + ih)
    deallocate( flt)
    }
}

Near the end of module npef is a reshape of the filter from a 1-D array to a 2-D array. If it troubles you that nhconest [*] used the filter as an already two-dimensional array during the optimization, perhaps it is again time to examine the Fortran code. The answer is that there is a conversion back and forth, partially hidden by Loptran.
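That reshape is easy to mimic in Python (a hypothetical zero-based sketch; the Fortran loop uses 1-based flt((ip-1)*nh + ih)): the solver's 1-D vector flt stores the filter table patch by patch, exactly as the loop at the end of find_pef unpacks it.

```python
# Hypothetical sketch of the 1-D to 2-D filter reshape at the end of npef.
np_, nh = 3, 2                           # patches, coefficients per filter
flt = [10, 11, 20, 21, 30, 31]           # solver output, length np_*nh
a = [[flt[ip*nh + ih] for ih in range(nh)] for ip in range(np_)]
print(a)   # [[10, 11], [20, 21], [30, 31]]
```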

Figure 15 shows a synthetic data example using these programs. As we hope for deconvolution, events are compressed. The compression is fairly good, even though each event has a different spectrum. What is especially pleasing is that satisfactory results are obtained in a truly small number of iterations (about three). The example has two free filter coefficients $(1,a_1,a_2)$ per output point. The roughening operator $\bold R$ was taken to be $(1,-2,1)$, which was factored into causal and anticausal finite differences.

 
Figure 15: Time variable deconvolution with two free filter coefficients and a gap of 6. Four iterations is better than many. (tvdecon90)

I hope also to find a test case with field data, but experience in seismology is that spectral changes are slow, which implies unexciting results. Many interesting examples should exist in two- and three-dimensional filtering, however, because reflector dip is always changing and that changes the spatial spectrum.

In multidimensional space, the smoothing filter $\bold R^{-1}$can be chosen with interesting directional properties. Sergey, Bob, Sean and I have joked about this code being the ``double helix'' program because there are two multidimensional helixes in it, one the smoothing filter, the other the deconvolution filter. Unlike the biological helixes, however, these two helixes do not seem to form a symmetrical pair.

EXERCISES:

  1. Is nhconest [*] the inverse operator to npolydiv [*]? Do they commute?
  2. Sketch the matrix corresponding to operator nhconest [*]. HINTS: Do not try to write all the matrix elements. Instead draw short lines to indicate rows or columns. As a ``warm up'' consider a simpler case where one filter is used on the first half of the data and another filter for the other half. Then upgrade that solution from two to about ten filters.


Stanford Exploration Project
12/26/2000