
Fitting Goals

All four data sets were merged into one and sorted into quadruplets of (x,y,z,w). The x and y components are longitude and latitude in degrees, respectively. The z value is sea-surface height above a reference ellipsoid in millimeters. Lastly, the w component is a weighting value that is nonzero for any suspicious z value and at the track-ends.

I used fitting goals that are essentially the same as those applied to the Sea of Galilee data set.

      \begin{eqnarray}
{\bf W} \frac{d}{dt}\left[ {\bf L}{\bf m} - {\bf d} \right] &\approx& {\bf 0} \\
\epsilon \, {\bf A}{\bf m} &\approx& {\bf 0}
 \end{eqnarray}

The first goal is to find a map, ${\bf m}$, whose derivative along the tracks, after sampling into data space by linear interpolation (${\bf L}$), matches the derivative of the observed data, ${\bf d}$. The weighting operator ${\bf W}$ throws out any derivative values influenced by noisy samples or track-ends.
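The first fitting goal can be sketched as a sparse least-squares problem. The following is a minimal 1-D toy, not the actual implementation: it assumes the derivative $d/dt$ is a first difference along the track, uses made-up sizes and synthetic data, and solves with SciPy's `lsqr` rather than the solver used in the paper.

```python
# Hedged 1-D sketch of the fitting goals (toy sizes, synthetic data).
# Model m lives on a regular grid; L linearly interpolates it onto track
# positions; D differences along the track; W zeroes suspect samples.
import numpy as np
from scipy.sparse import lil_matrix, diags, vstack
from scipy.sparse.linalg import lsqr

np.random.seed(0)
nm, nd = 50, 200                             # model bins, data samples
x = np.sort(np.random.rand(nd)) * (nm - 1 - 1e-6)  # track positions (bin units)

# L: linear interpolation from the model grid to the track positions
L = lil_matrix((nd, nm))
i = np.floor(x).astype(int)
f = x - i
L[np.arange(nd), i] = 1 - f
L[np.arange(nd), i + 1] = f
L = L.tocsr()

# D: first difference along the track (the derivative, up to a scale factor)
D = diags([-1, 1], [0, 1], shape=(nd - 1, nd))

# W: zero weight at track-ends (and, in general, at suspicious samples)
w = np.ones(nd - 1)
w[:2] = w[-2:] = 0.0
W = diags(w)

# A: roughening operator on the model (second difference, the 1-D Laplacian)
A = diags([1, -2, 1], [0, 1, 2], shape=(nm - 2, nm))

d = np.sin(x / 8.0)                          # synthetic "height" data
eps = 0.1

# Stack the two goals:  W D (L m - d) ~ 0   and   eps A m ~ 0
G = vstack([W @ D @ L, eps * A])
rhs = np.concatenate([W @ (D @ d), np.zeros(nm - 2)])
m = lsqr(G, rhs)[0]
```

Because only the derivative of the data is fit, the map is determined up to an additive constant; `lsqr` resolves the ambiguity by returning the minimum-norm solution.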

The second goal applies a regularization operator, ${\bf A}$, which ensures that the model, with its missing data infilled, is smooth. I regularized with the 2D gradient, $({\nabla_x},{\nabla_y})$, and with the 2D Laplacian, $\nabla^2$. Finally, I began testing 2D prediction-error filters (PEFs), which were estimated on the densely sampled data.
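The gradient and Laplacian roughening operators can be sketched as sparse matrices acting on the flattened model. This is an illustrative construction on a toy grid (the paper's grid is $400 \times 400$), using Kronecker products of 1-D difference matrices; it is not the paper's implementation.

```python
# Hedged sketch: 2-D roughening operators A for the regularization goal,
# built as Kronecker products of 1-D first-difference matrices.
import numpy as np
from scipy.sparse import diags, eye, kron, vstack

nx = ny = 8                                   # toy grid (paper uses 400x400)

def d1(n):
    """First-difference matrix of shape (n-1, n)."""
    return diags([-1, 1], [0, 1], shape=(n - 1, n))

# 2-D gradient: stack the x- and y-derivatives of the flattened model
grad = vstack([kron(eye(ny), d1(nx)),         # d/dx within each row
               kron(d1(ny), eye(nx))])        # d/dy across rows

# 2-D Laplacian, up to sign: the divergence of the gradient
lap = -(grad.T @ grad)

# A constant model is perfectly "smooth": both operators annihilate it
m = np.ones(nx * ny)
print(np.abs(grad @ m).max(), np.abs(lap @ m).max())   # 0.0 0.0
```

Either `grad` or `lap` can stand in for ${\bf A}$ in the second fitting goal; the Laplacian penalizes curvature rather than slope, so it tolerates smooth trends across data gaps.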

The models in this paper have $400 \times 400$ bins, which provides enough resolution to make detailed images without slowing down the solver too much. The entire merged data set contains 537,979 samples.



 
Stanford Exploration Project
7/5/1998