If we are not careful, our calculation of the PEF could fall into the pitfall of using the missing data values themselves, and hence it would find the wrong PEF. To avoid this pitfall, imagine a PEF finder that uses weighted least squares, where the weighting function vanishes on those fitting equations that involve missing data and is unity elsewhere. Instead of weighting bad results by zero, we simply do not compute them. The residual there is initialized to zero and never changed. Likewise for the adjoint, these components of the residual never contribute to a gradient. Module hconest assumes it already knows the bad locations (where outputs depend on unknown inputs) and skips computation there.
module hconest {                  # masked helix convolution, adjoint is the filter.
  use helix
  real, dimension( :), pointer :: x    # known data (held fixed)
  type( filter)                :: aa   # helix filter: lags, coefficients, missing mask
  #% _init( x, aa)
  #% _lop( a, y)
  integer ia, ix, iy
  do ia = 1, size( a) {
     do iy = 1 + aa%lag( ia), size( y) {
        if( aa%mis( iy)) cycle                # skip outputs that depend on missing inputs
        ix = iy - aa%lag( ia)
        if( adj) a( ia) += y( iy) * x( ix)    # adjoint: accumulate gradient for the filter
        else     y( iy) += a( ia) * x( ix)    # forward: convolve fixed data with the filter
        }
     }
}
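To see concretely what hconest and a least-squares solver accomplish together, here is a small, standalone Fortran 90 sketch (not the book's Ratfor90 code; the program name, the synthetic data, the masked locations, and the direct 2x2 solve are invented for illustration). It estimates a three-term PEF (1, a1, a2) from data with one missing sample, using only the fitting equations whose mask is clear; the book would obtain the same coefficients iteratively with conjugate gradients.

program pef_with_mask_sketch
  implicit none
  integer, parameter :: n = 20
  real    :: x(n), a1, a2, g11, g12, g22, r1, r2, det
  logical :: mis(n)
  integer :: iy

  do iy = 1, n                     ! synthetic data: a decaying oscillation
     x(iy) = 0.9**iy * cos( 0.5*iy)
  end do
  x(12)      = 0.                  ! pretend sample 12 is missing (its value is irrelevant)
  mis        = .false.
  mis(12:14) = .true.              ! with lags 1 and 2, equations 12-14 touch x(12)

  g11 = 0.; g12 = 0.; g22 = 0.; r1 = 0.; r2 = 0.
  do iy = 3, n                     ! accumulate the masked normal equations
     if( mis(iy)) cycle            ! skip fitting equations that touch missing data
     g11 = g11 + x(iy-1)*x(iy-1)
     g12 = g12 + x(iy-1)*x(iy-2)
     g22 = g22 + x(iy-2)*x(iy-2)
     r1  = r1  - x(iy)  *x(iy-1)
     r2  = r2  - x(iy)  *x(iy-2)
  end do
  det = g11*g22 - g12*g12          ! 2x2 solve by Cramer's rule
  a1  = ( g22*r1 - g12*r2) / det
  a2  = ( g11*r2 - g12*r1) / det
  print *, 'PEF = (1,', a1, ',', a2, ')'
end program pef_with_mask_sketch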
Suppose that $y_4$ were a missing or a bad data value in the fitting goal (27). That would spoil the 4th, 5th, and 6th fitting equations, so we would want to be sure that the weights $w_4$, $w_5$, and $w_6$ were zero.

$$
\mathbf{0} \;\approx\; \mathbf{r} \;=\;
\begin{bmatrix}
w_1 &     &     &     &     &     \\
    & w_2 &     &     &     &     \\
    &     & w_3 &     &     &     \\
    &     &     & w_4 &     &     \\
    &     &     &     & w_5 &     \\
    &     &     &     &     & w_6
\end{bmatrix}
\begin{bmatrix}
y_1 & \cdot & \cdot \\
y_2 & y_1   & \cdot \\
y_3 & y_2   & y_1   \\
y_4 & y_3   & y_2   \\
y_5 & y_4   & y_3   \\
y_6 & y_5   & y_4
\end{bmatrix}
\begin{bmatrix}
1 \\ a_1 \\ a_2
\end{bmatrix}
\tag{27}
$$

Module misinput builds exactly this mask. It convolves a filter of all ones onto a vector that is one where the data is missing and zero where it is known; wherever the output of that convolution is nonzero, the fitting equation depends on missing data, and the corresponding output location is marked in aa%mis.
module misinput {                  # find a mask of missing filter inputs
  use helicon
contains
  subroutine find_mask( known, aa) {
     logical, intent( in)            :: known(:)
     type( filter)                   :: aa
     real, dimension( size( known))  :: rr, dfre
     integer                         :: stat
     where(     known) dfre = 0.     # mark free (missing) data with 1.
     elsewhere         dfre = 1.
     call helicon_init( aa)
     aa%flt = 1.                     # temporarily make the filter all ones
     stat = helicon_lop( .false., .false., dfre, rr)
     aa%flt = 0.
     where( rr > 0.) aa%mis = .true. # any hit means the output touches missing data
     }
  }
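The logic of find_mask can be checked with a few lines of plain Fortran 90. The sketch below is standalone and hypothetical (an invented program name and a hard-wired two-lag filter stand in for the helix machinery): it convolves an all-ones filter with an indicator that is one at a single missing sample, and the nonzero outputs are exactly the fitting equations that must be masked.

program find_mask_sketch
  implicit none
  integer, parameter :: n = 8, nlag = 2
  integer, parameter :: lag(nlag) = (/ 1, 2 /)   ! lags of a 3-term PEF (1, a1, a2)
  logical :: known(n), mis(n)
  real    :: dfre(n), rr(n)
  integer :: ia, iy

  known    = .true.
  known(4) = .false.               ! say y(4) is missing

  where( known)
     dfre = 0.
  elsewhere
     dfre = 1.                     ! free (missing) data marked by 1.
  end where

  rr = dfre                        ! leading "1" of the all-ones filter
  do ia = 1, nlag                  ! remaining all-ones coefficients
     do iy = 1 + lag(ia), n
        rr(iy) = rr(iy) + dfre( iy - lag(ia))
     end do
  end do

  mis = rr > 0.                    ! true at 4, 5, and 6: mask those equations
  print *, mis
end program find_mask_sketch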
to design a filter, the one-dimensional gradient along the track. Choose suitable regularization. The result should be a track-free map of the lake. If the suspicious data points were all found by the pre-editor, the spikes should be gone too. Another description of this problem is at http://sepwww.stanford.edu/sep/jon/trash/mad/
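As a rough illustration of the kind of regularization this exercise asks for, here is a standalone Fortran 90 sketch (hypothetical; it is not the lake processing itself, and the data values are invented). It fills missing samples along a single track by minimizing the energy of the one-dimensional gradient while holding known samples fixed; simple Gauss-Seidel sweeps stand in for the conjugate-gradient solver one would normally use.

program fill_missing_sketch
  implicit none
  integer, parameter :: n = 10
  logical :: known(n)
  real    :: d(n)
  integer :: i, iter

  d     = (/ 1., 0., 0., 0., 5., 0., 0., 2., 0., 3. /)
  known = (/ .true., .false., .false., .false., .true., &
             .false., .false., .true., .false., .true. /)

  do iter = 1, 500                               ! relax toward the regularized answer
     if( .not. known(1)) d(1) = d(2)
     do i = 2, n-1
        if( .not. known(i)) d(i) = 0.5*( d(i-1) + d(i+1))
     end do
     if( .not. known(n)) d(n) = d(n-1)
  end do
  print '(10f6.2)', d                            ! missing samples now vary smoothly
end program fill_missing_sketch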