
Packing both missing data and filter into a CG vector

Define a roughening filter A(Z) and a data signal Y(Z) at some stage of interpolation. The regression is $0 \approx A(Z)\,Y(Z)$, where the filter A(Z) has at least one coefficient constrained to be nonzero and the data contains both known and missing values. Think of perturbations $\Delta A$ and $\Delta Y$. Neglecting the nonlinear term $\Delta A\,\Delta Y$, we have $-A(Z)\,Y(Z) \approx A(Z)\,\Delta Y(Z) + Y(Z)\,\Delta A(Z)$. To make a program such as miss1() from chapter 6, we need to pack both unknowns into a single vector $\Delta X=(\Delta Y,\Delta A)$ before calling the conjugate-gradient program. Likewise, the resulting filter and data coming out must be unpacked. Also, the gradient now has two contributions, one from $A\,\Delta Y$ and one from $Y\,\Delta A$, and these must be added. These bookkeeping operations are done in subroutine cgpack().
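The linearization can be checked numerically: the exact product $(A+\Delta A)(Y+\Delta Y)$ differs from the linearized terms by exactly the neglected bilinear term $\Delta A\,\Delta Y$, which is second-order small. Here is a minimal pure-Python sketch; the coefficient values are made-up illustration data, not from the figures:

```python
# Numerical check of the linearization 0 ~ (A + dA)(Y + dY).
# All coefficient values below are hypothetical illustration data.

def conv(a, y):
    """Transient (full) polynomial product A(Z)*Y(Z) as a coefficient list."""
    r = [0.0] * (len(a) + len(y) - 1)
    for i, ai in enumerate(a):
        for j, yj in enumerate(y):
            r[i + j] += ai * yj
    return r

a  = [1.0, -0.5]            # roughening filter A(Z)
y  = [2.0, 1.0, 3.0]        # data Y(Z)
da = [0.001, 0.002]         # small filter perturbation dA
dy = [0.002, -0.001, 0.004] # small data perturbation dY

exact = conv([ai + di for ai, di in zip(a, da)],
             [yi + di for yi, di in zip(y, dy)])
linear = [p + q + s for p, q, s in zip(conv(a, y), conv(a, dy), conv(y, da))]
neglected = conv(da, dy)    # the bilinear term dA*dY

# exact - linear equals the neglected term identically (bilinearity)
err = max(abs(e - l - n) for e, l, n in zip(exact, linear, neglected))
assert err < 1e-12
```

The identity holds term by term because polynomial multiplication is bilinear; only the product of the two perturbations is dropped.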
subroutine cgpack( iter, ny,y,dy, na,a,da, nr,r,dry,dra, s, ss)
integer iter, iy, ny, ia,na, ir,nr
real y(ny),dy(ny),dry(nr),  a(na),da(na),dra(nr), r(nr), s(ny+na),ss(ny+na-1)
temporary real x(ny+na), g(ny+na), gg(nr)
do iy=1,ny { x(iy)    =  y(iy); 	g(iy)    = dy(iy) }
do ia=1,na { x(ny+ia) =  a(ia); 	g(ny+ia) = da(ia) }
do ir=1,nr { gg(ir) = dry(ir) + dra(ir) }
call cgstep( iter, na+ny, x, g, s, nr, r, gg, ss)
do iy=1,ny { y(iy) = x(iy);		dy(iy) = g(iy) }
do ia=1,na { a(ia) = x(ny+ia);		da(ia) = g(ny+ia) }
return; end
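The same index bookkeeping reads naturally with array slices. This hypothetical Python sketch (illustrative values only; the real cgstep() solver is not reproduced) mirrors cgpack()'s packing and unpacking:

```python
# Pack data y and filter a (and their gradients) into single vectors for a
# CG step, then unpack; mirrors the index bookkeeping of cgpack().
# Illustrative values only; cgstep() itself is not reproduced here.

ny, na = 3, 2
y,  a  = [2.0, 1.0, 3.0], [1.0, -0.5]
dy, da = [0.1, 0.0, -0.2], [0.05, 0.02]

x = y + a          # x(1..ny) = y,  x(ny+1..ny+na) = a
g = dy + da        # packed gradient, same layout

# ... x and g would be handed to cgstep() here, which updates x in place ...

y_out, a_out = x[:ny], x[ny:]      # unpack the updated model
assert y_out == y and a_out == a   # round trip preserves the layout
```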

The program missif(), which made Figures 1 through 3, closely resembles miss1(). Besides using cgpack(), the difference is that, to avoid accumulating errors from the neglect of the nonlinear product $\Delta A\,\Delta Y$, the residual is recalculated inside the iteration loop instead of once at the beginning.

# find filter and missing data on 1-axis by minimizing power out.
subroutine missif( na, lag, a, ny, y, copy, niter)
integer	i, ia, lag, na,  iy,ny,  ir,nr,  iter,niter
real	y(ny)		# input: known data with zeros for missing values.
			# output: known plus missing data.
real	copy(ny)	# copy(iy) vanishes where y(iy) is a missing value.
real	a(na)		# input and output: roughening filter
temporary real	da(na), sa(na), dy(ny), sy(ny)
temporary real  dry(ny+na-1), dra(ny+na-1), dr(ny+na-1), r(ny+na-1)
temporary real  s(ny+na), ss(ny+na-1)
nr = ny+na-1
if( a(lag) == 0. )	call erexit('missif: input needs a(lag)!= 0.')
do iter = 0, niter {				# niter = about the number of missing values
	call contran( 0, na,a, ny,y,  r)	# r = a*y
	do ir=1,nr
		r(ir) = - r(ir)			# residual = - filtered data
	call contran( 1, na,a, ny,dy, r)	# dy(a,r)
	call contran( 1, ny,y, na,da, r)	# da(y,r)
	do iy=1,ny
		if( copy(iy) != 0.)		# missing data where copy(iy)==0
			dy(iy) = 0.		# can't change known data
	da(lag) = 0.				# constrained filter point
	call contran( 0, na,a, ny,dy, dry)	# dry=a*dy
	call contran( 0, ny,y, na,da, dra)	# dra=y*da
	call cgpack( iter, ny,y,dy, na,a,da, nr,r,dry,dra, s, ss)
	}
return; end
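For readers without the SEP library, the loop above can be imitated in a few lines of Python. In this sketch the data are made up, and a safeguarded steepest-descent step stands in for the cgpack()/cgstep() machinery; like missif(), it recomputes the residual on every pass and holds the known data and the constrained filter tap fixed:

```python
# Fill missing data and jointly estimate a roughening filter by minimizing
# output power, in the spirit of missif(). Example data are made up, and a
# safeguarded steepest-descent step replaces cgpack()/cgstep().

def conv(a, y):
    """Transient convolution, like contran() with conj=0."""
    r = [0.0] * (len(a) + len(y) - 1)
    for i, ai in enumerate(a):
        for j, yj in enumerate(y):
            r[i + j] += ai * yj
    return r

def adjoint(a, r, n):
    """Adjoint of y -> conv(a, y), like contran() with conj=1."""
    return [sum(ai * r[i + j] for i, ai in enumerate(a)) for j in range(n)]

def power(a, y):
    return sum(t * t for t in conv(a, y))

y     = [1.0, 0.0, 2.0, 0.0, 1.5]  # zeros stand in for missing samples
known = [1, 0, 1, 0, 1]            # like copy(): nonzero where y is known
a     = [1.0, -0.3]                # filter; a[lag] is held fixed
lag   = 0

p0 = power(a, y)
for _ in range(20):
    r  = conv(a, y)                     # recompute residual every pass
    dy = adjoint(a, r, len(y))          # gradient with respect to data
    da = adjoint(y, r, len(a))          # gradient with respect to filter
    dy = [0.0 if k else g for g, k in zip(dy, known)]  # keep known data
    da[lag] = 0.0                       # keep the constrained filter tap
    dr = [u + v for u, v in zip(conv(a, dy), conv(y, da))]
    denom = sum(d * d for d in dr)
    if denom == 0.0:
        break
    alpha = -sum(ri * di for ri, di in zip(r, dr)) / denom  # line search
    while alpha != 0.0 and power([ai + alpha * g for ai, g in zip(a, da)],
                                 [yi + alpha * g for yi, g in zip(y, dy)]) \
                           >= power(a, y):
        alpha *= 0.5                    # back off if the bilinear term hurts
        if abs(alpha) < 1e-15:
            alpha = 0.0
    if alpha == 0.0:
        break
    y = [yi + alpha * g for yi, g in zip(y, dy)]
    a = [ai + alpha * g for ai, g in zip(a, da)]

assert power(a, y) < p0   # output power was reduced
assert a[lag] == 1.0      # constrained coefficient unchanged
```

The backtracking on alpha is a safeguard the original does not need, since missif() takes its step from cgstep(); here it guarantees that each accepted step reduces the true (not merely linearized) output power.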

I'll probably return to missif() and do something about preconditioning. Convergence can be very slow, or fail altogether, if a() and y() are badly out of scale with each other.
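One simple form such preconditioning might take (my assumption, not the author's eventual fix) is a diagonal rescaling of the two halves of the packed vector so they contribute on comparable scales:

```python
# Hypothetical diagonal scaling of the packed gradient halves (dy, da) so
# the data part and the filter part contribute on comparable scales.
# Values are made-up illustration data.
import math

dy = [1e-6, 2e-6, -1e-6]   # data-gradient half (tiny scale)
da = [3.0, -4.0]           # filter-gradient half (large scale)

sy = math.sqrt(sum(v * v for v in dy)) or 1.0
sa = math.sqrt(sum(v * v for v in da)) or 1.0

g = [v / sy for v in dy] + [v / sa for v in da]   # packed, rescaled gradient

norm_y = math.sqrt(sum(v * v for v in g[:3]))
norm_a = math.sqrt(sum(v * v for v in g[3:]))
assert abs(norm_y - 1.0) < 1e-12 and abs(norm_a - 1.0) < 1e-12
```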

I have noticed that the convergence no longer appears to be quadratic, so I'll be checking into ``partan'' and other methods of optimization found in Luenberger's book. I believe quadratic convergence is extremely valuable, costs very little extra effort, and should not be neglected. The Polak-Ribiere method described in Luenberger is now being used by Biondo and Jos.
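For reference, the Polak-Ribiere update forms its conjugacy coefficient from the current gradient $g_{new}$ and the previous one, $\beta = g_{new}\cdot(g_{new}-g_{old})\,/\,(g_{old}\cdot g_{old})$, and then sets the new search direction to $-g_{new}+\beta\,s_{old}$. A minimal sketch with made-up gradient vectors:

```python
# Polak-Ribiere conjugacy coefficient, as described in Luenberger's textbook.
# g_old and g_new are gradients from successive iterations (made-up values).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def polak_ribiere_beta(g_new, g_old):
    diff = [a - b for a, b in zip(g_new, g_old)]
    return dot(g_new, diff) / dot(g_old, g_old)

g_old = [1.0, -2.0, 0.5]
g_new = [0.2, 0.1, -0.3]
beta = polak_ribiere_beta(g_new, g_old)
# new search direction would be: s_new = -g_new + beta * s_old

assert polak_ribiere_beta(g_old, g_old) == 0.0  # unchanged gradient: beta = 0
```

Unlike Fletcher-Reeves, the numerator vanishes when successive gradients are identical, which tends to reset the method toward steepest descent when progress stalls.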

Stanford Exploration Project