
Missing data program

This method of finding missing data values uses conjugate gradients and introduces the notion of simple constraints in least-squares fitting. The addition to the basic CG algorithm is uncomplicated, regardless of how much data is missing or how complicated its geometrical arrangement is. You will see that the program that made Figures 1-5 is pleasingly concise.

Formally, our problem is $\bold 0 \approx \bold F(\bold k + \bold m) = \bold F\bold k + \bold F\bold m$, where $\bold F$ is a filter (a shifted-column matrix like the one in ([*])), and $\bold k$ and $\bold m$ have elements in data space. Every point in data space is either given or missing, so wherever a component of the vector of known values $\bold k$ is nonzero, the corresponding component of $\bold m$ must be zero, and vice versa. Recall that the program cgmeth() solves systems like $\bold y \approx \bold A \bold x$. For that program, $\bold A$ is $\bold F$, $\bold x$ is $\bold m$, and $\bold y$ is $-\bold F \bold k$. The trouble with that program is that it will change the known data values unless we recode the filter operator to exclude them. Instead, we will reorganize the CG algorithm to include these constraints explicitly, so that we can use any filter program from our library.
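For a small one-dimensional example, this least-squares setup can be checked directly before bringing in conjugate gradients. The sketch below (in numpy; the function name and argument names are my own, not from the text) builds $\bold F$ as a shifted-column convolution matrix, forms the right-hand side $-\bold F\bold k$, and solves for the missing values with ordinary least squares:

```python
import numpy as np

def fill_missing_direct(a, y, known):
    """Illustrative direct solve of 0 ~ F(k + m) for the missing values m.
    a:     roughening filter coefficients
    y:     data with zeros at missing points (the vector k + 0)
    known: boolean mask, True where y is given
    """
    ny, na = len(y), len(a)
    nr = ny + na - 1
    # Build F as a shifted-column matrix: column iy holds a, shifted down by iy.
    F = np.zeros((nr, ny))
    for iy in range(ny):
        F[iy:iy + na, iy] = a
    miss = ~known
    rhs = -F @ (y * known)        # y = -F k, from the known part only
    A = F[:, miss]                # columns of F at the missing locations
    m, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    out = y * known
    out[miss] = m                 # place the solved missing values
    return out
```

With the first-difference filter $(1,-1)$, minimizing the output power interpolates linearly across a gap, which is a handy sanity check on any iterative version.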

The vector $\bold y$ is the entire data space $\bold k + \bold m$. It is input to the missing-data program containing $\bold k + \bold 0$. The program begins by loading the negative of the filtered $\bold y$ into a residual. The iterations proceed as in cgmeth() except that the matrix-multiply routine is replaced by the convolution routine contran(). The new ingredient in the missing-data subroutine miss1() is the simple constraint that the given data cannot be changed: after the gradient is computed, the components that correspond to given data values are set to zero.

# fill in missing data on 1-axis by minimizing power out of a given filter.
#
subroutine miss1( na, a, ny, y, copy, niter)
integer	na,  iy,ny,  ir,nr,  iter,niter
real	y(ny)		# in: known data with zeros for missing values.
			# out: known plus missing data.
real	copy(ny)	# in: copy(iy) vanishes where y(iy) is a missing value.
real	a(na)		# in: roughening filter
temporary real	dy(ny),sy(ny),  r(ny+na-1),dr(ny+na-1),sr(ny+na-1)
nr = ny+na-1
call contran( 0, na, a, ny, y, r)		# r = a*y   convolution
do ir= 1, nr
	r(ir) = - r(ir)				# residual = - filtered data
do iter = 0, niter {				# niter= number missing or less
	call contran( 1, na,a, ny,dy, r)	# dy(a,r)   correlation
	do iy= 1, ny
		if( copy(iy) != 0.)		# missing data where copy(iy)==0
			dy(iy) = 0.		# can't change known data
	call contran( 0, na,a, ny,dy, dr)	# dr=a*dy   convolution
	call cgstep( iter, ny,y,dy,sy, nr,r,dr,sr)	# y=y+dy; r=r-dr
	}
return; end
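The same procedure can be sketched in numpy (a hedged transcription, not the library code: all names are mine, and a minimal conjugate-direction plane search stands in for cgstep(), which I assume behaves equivalently):

```python
import numpy as np

def cgstep(it, x, g, s, r, gg, ss):
    """One conjugate-direction step: pick alpha, beta minimizing
    |r - alpha*gg - beta*ss|, then update x += alpha*g + beta*s,
    remembering the step in (s, ss) and updating the residual r."""
    if it == 0:
        gdg = np.dot(gg, gg)
        if gdg == 0.0:
            return                              # zero gradient: nothing to do
        alpha, beta = np.dot(gg, r) / gdg, 0.0  # steepest descent on iter 0
    else:
        gdg, sds, gds = np.dot(gg, gg), np.dot(ss, ss), np.dot(gg, ss)
        determ = gdg * sds - gds * gds
        if determ == 0.0:
            return                              # directions degenerate: skip
        gdr, sdr = np.dot(gg, r), np.dot(ss, r)
        alpha = (sds * gdr - gds * sdr) / determ
        beta = (gdg * sdr - gds * gdr) / determ
    s[:] = alpha * g + beta * s                 # current step in model space
    ss[:] = alpha * gg + beta * ss              # current step in residual space
    x += s
    r -= ss

def miss1(a, y, known, niter):
    """Fill missing values (where known is False) in y by minimizing
    the power out of filter a, never touching the known samples."""
    y = y.astype(float).copy()
    r = -np.convolve(a, y)                      # residual = - filtered data
    s, ss = np.zeros_like(y), np.zeros_like(r)
    for it in range(niter):
        g = np.correlate(r, a, mode='valid')    # gradient by correlation (adjoint)
        g[known] = 0.0                          # constraint: can't change known data
        gg = np.convolve(a, g)                  # filtered gradient (convolution)
        cgstep(it, y, g, s, r, gg, ss)
    return y
```

With the filter $(1,-1)$ and one missing point, a single iteration already lands on the linear interpolant, and the known samples are untouched; with two missing points, two iterations suffice, matching the rule that niter need be no more than the number of missing values.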

This prevents motion of any distance along the gradient from changing the given data. Likewise, motion along previous steps cannot perturb the given data values. So the CG method (finding the minimum power in the plane spanned by the gradient and the previous step) achieves minimum power while respecting the constraints.


Stanford Exploration Project
1/13/1998