
Prediction-error filtering in the frequency domain

  If the convolution $\sv f \ast \sv d = \sv r$ is expressed in matrix form as $\st D \sv f = \sv r$, where $\st D$ is the convolution matrix of $\sv d$, the filter $\sv f$ can be solved for in the least-squares sense, minimizing the misfit to $\sv r$. The normal-equations expression for the least-squares inverse is $(\st D^{\dagger} \st D) \sv f = \st D^{\dagger} \sv r$, or $\sv f = (\st D^{\dagger} \st D)^{-1} \st D^{\dagger} \sv r$. This expression for $\sv f$ decomposes into simpler expressions in the frequency domain, since $\sv f \ast \sv d = \sv r$ may be written as $f(\omega) d(\omega) = r(\omega)$. Transforming $\sv f = (\st D^{\dagger} \st D)^{-1} \st D^{\dagger} \sv r$ into the frequency domain gives $f(\omega)=(\overline{d(\omega)} d(\omega))^{-1} \overline{d(\omega)} r(\omega)$, or $f(\omega)= \overline{d(\omega)} r(\omega) / (\overline{d(\omega)} d(\omega))$. (Here $d(\omega)$ denotes a component of the Fourier transform of the data, and $\overline{d(\omega)}$ the complex conjugate of $d(\omega)$.) Canceling $\overline{d(\omega)}$ gives $f(\omega)= r(\omega) / d(\omega)$. Thus in the frequency domain, where filtering is a multiplication such as $f(\omega) d(\omega) = r(\omega)$, inversion is simply division: $f(\omega)= r(\omega) / d(\omega)$. The values of $f(\omega)$, $d(\omega)$, and $r(\omega)$ are scalars (although they are complex numbers).
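The equivalence between the time-domain convolution and frequency-domain division can be sketched numerically. This is an illustrative example, not from the original text: the series for $\sv d$ and $\sv f$ are arbitrary values, and NumPy is assumed.

```python
import numpy as np

# Illustrative series (arbitrary values, not from the text):
# d is the data, f the filter, and r = f * d their convolution.
d = np.array([1.0, 0.5, -0.3, 0.1])
f = np.array([0.8, -0.4, 0.1])
r = np.convolve(f, d)

# Pad the transforms to the length of r so that linear convolution
# matches circular convolution.
n = len(r)
d_w = np.fft.fft(d, n)
r_w = np.fft.fft(r, n)

# In the frequency domain, inversion is simply division:
# f(omega) = r(omega) / d(omega)
f_w = r_w / d_w
f_est = np.real(np.fft.ifft(f_w))[:len(f)]

print(np.allclose(f_est, f))  # True, since d(omega) has no zeros here
```

The division fails exactly at any frequency where $d(\omega)=0$; that possibility is what motivates the stabilizer discussed later in this section.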

The $\overline{d(\omega)} d(\omega)$ term in the denominator is the Fourier transform of the autocorrelation of $\sv d$. If $\st D^{\dagger} \st D$ is the identity matrix $\st I$, $\overline{d(\omega)} d(\omega)$ will be constant. This corresponds to an input with a white spectrum. If all the terms of $\st D^{\dagger} \st D$ are constant, $\overline{d(\omega)} d(\omega)$ will be non-zero only at $\omega=0$, and the inversion will be unstable. This corresponds to a data series $\sv d$ containing a constant. It can be seen that $\overline{d(\omega)} d(\omega)$ is a measure of the information available at $\omega$, and $(\overline{d(\omega)} d(\omega))^{-1}$ is a function of the uncertainty, or variance, at $\omega$. The autocorrelation matrix $\st D^{\dagger} \st D$ is the information matrix, and its inverse $(\st D^{\dagger} \st D)^{-1}$ is the covariance matrix (Strang, 1986).
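The claim that $\overline{d(\omega)} d(\omega)$ is the Fourier transform of the autocorrelation can be checked directly. The sketch below (not from the original; the random series and lengths are illustrative, and NumPy is assumed) compares the transform of the autocorrelation of a series against its power spectrum.

```python
import numpy as np

rng = np.random.default_rng(0)
d = rng.standard_normal(64)

# Full linear autocorrelation of d, covering lags -(n-1) .. n-1.
auto = np.correlate(d, d, mode="full")
n_fft = len(auto)  # 2n - 1 points, enough to avoid wrap-around

# Circularly shift so that lag 0 sits at index 0, as the DFT expects.
auto_circ = np.roll(auto, -(len(d) - 1))

# The transform of the autocorrelation equals the power spectrum
# conj(d(omega)) * d(omega) = |d(omega)|^2.
spec_from_auto = np.fft.fft(auto_circ)
power = np.abs(np.fft.fft(d, n_fft)) ** 2

print(np.allclose(spec_from_auto.real, power))  # True
```

Because the autocorrelation is real and symmetric, its transform is real (up to roundoff), matching the non-negative power spectrum.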

The expression $f(\omega)= \overline{d(\omega)} r(\omega) / (\overline{d(\omega)} d(\omega))$ will generally have a stabilizer in the denominator to keep $f(\omega)$ from approaching infinity when $\overline{d(\omega)} d(\omega)$ becomes small. Adding this stabilizer in the frequency domain corresponds to adding a small value to the diagonal of the autocorrelation matrix. In the cases discussed here, the stabilizer will seldom be needed, since random noise in the data generally keeps $\overline{d(\omega)} d(\omega)$ from going to zero.
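A stabilized division might look like the following sketch. This is illustrative and not from the original: the filter, data length, and the choice of epsilon (a small fraction of the peak power) are all arbitrary, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
d = rng.standard_normal(32)           # data containing random noise
f_true = np.array([1.0, -0.5, 0.25])  # illustrative filter
r = np.convolve(f_true, d)

n = len(r)
d_w = np.fft.fft(d, n)
r_w = np.fft.fft(r, n)

# Stabilized inversion: the small epsilon in the denominator keeps
# f(omega) bounded where conj(d(omega)) d(omega) is small.  In the
# time domain this is the same as adding a constant to the diagonal
# of the autocorrelation matrix.
eps = 1e-4 * np.max(np.abs(d_w) ** 2)
f_w = np.conj(d_w) * r_w / (np.conj(d_w) * d_w + eps)
f_est = np.real(np.fft.ifft(f_w))[:len(f_true)]

print(np.round(f_est, 3))  # close to f_true, with a small bias from eps
```

With eps set to zero, the division blows up only at frequencies where $d(\omega)$ vanishes exactly, which, as noted above, the random noise in the data makes unlikely.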


Stanford Exploration Project
2/9/2001