
Treasure hunting at Galilee

Before I go diving, digging, or dreaming at Galilee, there are a few more things to attend to. The original data is recorded at about 25-m intervals, but the maps shown here are at 50 m, so to find small treasure I should make maps at higher resolution. That aggravates the noise problem. We see that all the Galilee maps contain glitches that are suspiciously nongeological and nonarcheological. Ironically, the process of getting rid of the tracks in Figures 7 and 8 creates glitches at track ends. There would be no such glitches if the vessel switched on the depth sounder in port in the morning and switched it off after returning. The problem arises when tracks start and stop in the middle of the lake. To handle it, we should identify the individual tracks and convolve the noise PEF (approximately (1,-1)) internally within each track. In principle, the extra bookkeeping should not be a chore in a higher-level computing language. Perhaps when I become more accustomed to F90, I will build a data container that is a collection of tracks of unequal lengths, and a filter program to deal with that structure. Figure 9 shows not the tracks themselves (there are too many), but the gaps between the end of one track and the beginning of the next, wherever that gap exceeds 100 m, four times the nominal data-point separation.
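The idea of convolving internally within each track can be sketched as follows. This is only an illustration in Python, not the F90 data container the text imagines; the function name and data layout are my own assumptions.

```python
def pef_within_tracks(tracks, filt=(1.0, -1.0)):
    """Apply a short noise PEF (here the two-term filter (1, -1))
    internally within each track, so no output sample ever straddles
    the gap between the end of one track and the start of the next.
    tracks: list of lists of depth samples, one list per vessel track.
    Returns the residual for each track; a track shorter than the
    filter produces an empty residual."""
    nf = len(filt)
    out = []
    for track in tracks:
        res = []
        # internal ("valid") convolution: output only where the whole
        # filter fits inside the track
        for i in range(nf - 1, len(track)):
            res.append(sum(filt[j] * track[i - j] for j in range(nf)))
        out.append(res)
    return out

# two tracks; the residual is the first difference within each track,
# and no difference is ever taken across the gap between them
print(pef_within_tracks([[10.0, 10.5, 11.0], [20.0, 19.0]]))
```

With the (1,-1) filter this is just a first difference per track; the point is that track boundaries produce no output samples, hence no end-of-track glitches.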

 
Figure 9: The lines are the gaps between successive vessel tracks, i.e., where the vessel turned off the depth sounder. At the end of each track is a potential glitch.

Besides these processing defects, the defective data values themselves really should be removed. The survey equipment seemed to run reliably about 99% of the time, but when it failed, it often failed for multiple consecutive measurements. Even with a five-point running median (which reduces resolution accordingly), we are left with numerous doubtful blips in the data. We also see a few points and vessel tracks outside the lake (!); these problems suggest that the navigation equipment is also subject to occasional failure, and that a few tracks inside the lake may be mispositioned as well.
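For concreteness, a five-point running median can be sketched as below. This is a plain-Python illustration of the standard technique, not the survey-processing code itself; the end-handling (copying the first and last two samples unchanged) is my own choice.

```python
def running_median5(z):
    """Five-point running median along one track.
    Suppresses isolated blips up to two samples long, at the cost of
    resolution; the two samples at each end are copied unchanged."""
    n = len(z)
    out = list(z)
    for i in range(2, n - 2):
        window = sorted(z[i - 2 : i + 3])
        out[i] = window[2]  # middle value of the five
    return out

# a single bad blip of 99.0 in otherwise flat 5.0-m water is removed
print(running_median5([5.0, 5.0, 99.0, 5.0, 5.0, 5.0]))
```

As the text notes, a blip that persists for three or more consecutive measurements survives a five-point median, which is why further editing is needed.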

Rather than begin hand-editing the data, I suggest the following processing scheme: to judge the quality of any particular data point, we find the nearest data point on each of two other nearby tracks. If the depth at our point is nearly the same as the depth at either of those two points, we judge it to be good. We need two other tracks, and not two points from a single other track, because bad points often come in bunches.
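The proposed quality test can be sketched as follows. The function name, the depth tolerance, and the brute-force nearest-point search are all my own assumptions; a real implementation would choose the two nearby tracks and the tolerance from the survey geometry.

```python
def judge_point(x, y, z, other_tracks, tol=1.0):
    """Judge one data point (x, y, z) against the nearest point on
    each of two OTHER nearby tracks.  The point is judged good if its
    depth is within tol of the depth at the nearest point of either
    track.  Two separate tracks are used, rather than two points from
    one track, because bad points often come in bunches.
    other_tracks: two lists of (x, y, z) triples.  tol is a guess."""
    for track in other_tracks:
        # nearest point on this track (brute force, squared distance)
        xn, yn, zn = min(track,
                         key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        if abs(z - zn) <= tol:
            return True
    return False

track_a = [(0.0, 0.0, 10.0), (25.0, 0.0, 10.5)]
track_b = [(0.0, 50.0, 10.2)]
print(judge_point(10.0, 10.0, 10.3, [track_a, track_b]))  # consistent depth
print(judge_point(10.0, 10.0, 30.0, [track_a, track_b]))  # inconsistent depth
```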

I plan to break the data analysis into tracks. A new track will be started wherever (x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 exceeds a threshold. Additionally, a new track will be started whenever the apparent water-bottom slope exceeds a threshold. After that, I will add a fourth column to the (x_i, y_i, z_i) triplets, a weighting function w_i which will be set to zero in short tracks. As you might imagine, this will involve a little extra clutter in the programs. The easy part is the extra weighting function that comes along with the data. More awkward is that loops over data space become two loops, one over tracks and one within a track.
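The track-splitting plan can be sketched as below. All thresholds (the 100-m jump, the slope limit, the minimum track length) are illustrative guesses of mine, not values from the text, and the function is a Python stand-in for the planned F90 code.

```python
import math

def split_tracks(points, gap2_max=100.0 ** 2, slope_max=0.2, min_len=5):
    """Break a stream of (x, y, z) triples into tracks and attach a
    fourth column, the weight w_i.  A new track starts wherever the
    squared jump (x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 exceeds
    gap2_max, or wherever the apparent water-bottom slope |dz|/distance
    exceeds slope_max.  Points in tracks shorter than min_len get
    w = 0.  Thresholds here are illustrative, not from the survey."""
    tracks, cur = [], [points[0]]
    for (x0, y0, z0), (x1, y1, z1) in zip(points, points[1:]):
        d2 = (x1 - x0) ** 2 + (y1 - y0) ** 2
        slope = abs(z1 - z0) / max(math.sqrt(d2), 1e-9)
        if d2 > gap2_max or slope > slope_max:
            tracks.append(cur)
            cur = []
        cur.append((x1, y1, z1))
    tracks.append(cur)
    # fourth column: weight w_i, set to zero in short tracks
    return [[(x, y, z, 0.0 if len(t) < min_len else 1.0)
             for (x, y, z) in t] for t in tracks]

# six points at 25-m spacing, then a large jump to a short stray track
pts = ([(i * 25.0, 0.0, 10.0) for i in range(6)]
       + [(1000.0 + i * 25.0, 0.0, 12.0) for i in range(3)])
print([len(t) for t in split_tracks(pts)])
```

Note how the "two loops" awkwardness appears already: the weight assignment is one loop over tracks containing one loop within a track.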


Stanford Exploration Project
2/27/1998