Next: REFERENCES Up: Brown: Systematic error estimation Previous: Results

Discussion

I presented a new method for building maps from tracked datasets. As discussed in previous works on the Galilee bathymetry data, the biggest obstacle to building a map is systematic error between neighboring acquisition tracks. My method estimates this error by directly analyzing the difference between tracks at ``crossing points'' and using least-squares optimization to estimate the error at points in between.

As the tests on the Galilee data showed, my method effectively unbiases the data residual without a loss of resolution, and it produces the most interpretable Galilee maps of the three approaches I discussed. Although the track artifacts are not removed completely, I believe the increased resolution relative to the IRLS + track-derivative approach tips the balance in my approach's favor.

Looking to the future, I believe my approach offers a fundamental advantage over approaches of system (3)'s ilk. The philosophy behind system (3) assumes that we know the residual is biased, but that we don't necessarily understand the data errors that cause the bias. In many cases, however, we do have strong prior information on the cause, magnitude, physics, and covariance of the systematic error, and a good estimate of that error may have interpretive value in its own right.
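As a hypothetical illustration of how such prior information could enter the same least-squares machinery, the sketch below damps each (assumed constant) track bias toward zero with a strength set by an assumed prior standard deviation per track; the sigma values and crossing data are invented, not taken from the Galilee dataset.

```python
import numpy as np

# Invented crossing-point differences and an invented prior standard
# deviation per track (e.g. track 1 is believed to carry little bias).
crossings = [(0, 1, 0.8), (1, 2, -0.5)]
sigma = np.array([1.0, 0.1, 1.0])
n_tracks = 3

rows = len(crossings) + n_tracks
A = np.zeros((rows, n_tracks))
d = np.zeros(rows)
for r, (i, j, diff) in enumerate(crossings):
    A[r, i], A[r, j], d[r] = 1.0, -1.0, diff
# Prior rows: bias_t / sigma_t ~ 0, i.e. damp each bias toward zero with a
# weight reflecting the prior covariance. This also removes the null space,
# so no separate mean-zero constraint is needed.
for t in range(n_tracks):
    A[len(crossings) + t, t] = 1.0 / sigma[t]

bias, *_ = np.linalg.lstsq(A, d, rcond=None)
```

The effect is that the fit assigns most of the observed crossing-point differences to the tracks whose priors allow large errors, leaving the trusted track nearly untouched.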

One weakness of my approach is that it reeks of ``preprocessing''. A more general and rigorous approach is that of Brown and Clapp (2001), which is essentially an iterative variant of the approach I have proposed here. Unfortunately, without prior information on the character of the systematic error, such an approach would be wasted effort.


Stanford Exploration Project
9/18/2001