
FT AS AN INVERTIBLE MATRIX

Happily, Fourier sums are exactly invertible: given the output, the input can be quickly found. Because signals can be transformed to the frequency domain, manipulated there, and then returned to the time domain, convolution and correlation can be done faster. Time derivatives can also be computed with more accuracy in the frequency domain than in the time domain. Signals can be shifted by a fraction of a time sample, and shifted back again exactly. In this chapter we will see how many operations we associate with the time domain can often be done better in the frequency domain. We will also examine some two-dimensional Fourier transforms.
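As a small illustration of this exactness, here is a brief numpy sketch (added for clarity, not part of the original text; the signal values are arbitrary). A fractional-sample shift is applied as a linear phase in the frequency domain, and the opposite phase undoes it exactly:

import numpy as np

n = 8
t = np.arange(n)
b = np.exp(-0.5 * (t - 3.0) ** 2)        # arbitrary smooth example signal

shift = 0.4                              # shift by 0.4 of a time sample
omega = 2 * np.pi * np.fft.fftfreq(n)    # angular frequency of each FFT bin

# Shift: multiply the spectrum by a linear phase.  The result is kept
# complex; for even n the Nyquist bin breaks exact conjugate symmetry.
b_shifted = np.fft.ifft(np.fft.fft(b) * np.exp(-1j * omega * shift))

# Shifting back with the opposite phase recovers the input exactly
# (to floating-point roundoff).
b_back = np.fft.ifft(np.fft.fft(b_shifted) * np.exp(+1j * omega * shift))
assert np.allclose(b_back, b)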

A Fourier sum may be written
\begin{displaymath}
B(\omega) \eq \sum_t \, b_t \, e^{i\omega t}
\eq \sum_t \, b_t \, Z^t
\end{displaymath} (24)
where the complex value $Z$ is related to the real frequency $\omega$ by $Z=e^{i\omega}$. This Fourier sum is a way of building a continuous function of $\omega$ from discrete signal values $b_t$ in the time domain. Here we specify both the time and frequency domains by a set of points. Begin with an example of a signal that is nonzero at four successive instants, $(b_0, b_1, b_2, b_3)$. The transform is
\begin{displaymath}
B(\omega) \eq b_0 + b_1 Z + b_2 Z^2 + b_3 Z^3
\end{displaymath} (25)
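To see the sum (25) building a continuous function of $\omega$ from four discrete samples, here is a brief numpy sketch (an illustration added here, not in the original text; the coefficient values are arbitrary):

import numpy as np

b = [1.0, 2.0, 0.0, -1.0]                    # arbitrary values for b_0 ... b_3
omega = np.linspace(0.0, 2.0 * np.pi, 200)   # a dense grid of frequencies
Z = np.exp(1j * omega)

# B(omega) = b_0 + b_1 Z + b_2 Z^2 + b_3 Z^3, evaluated at every omega at once.
B = sum(bt * Z ** t for t, bt in enumerate(b))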
The evaluation of this polynomial can be organized as a matrix times a vector, such as  
\begin{displaymath}
\left[ \begin{array}{c}
B_0 \\ B_1 \\ B_2 \\ B_3
\end{array} \right]
\eq
\left[ \begin{array}{cccc}
1 & 1   & 1   & 1   \\
1 & W   & W^2 & W^3 \\
1 & W^2 & W^4 & W^6 \\
1 & W^3 & W^6 & W^9
\end{array} \right]
\left[ \begin{array}{c}
b_0 \\ b_1 \\ b_2 \\ b_3
\end{array} \right]
\end{displaymath} (26)
Observe that the top row of the matrix evaluates the polynomial at $Z=1$, a point where also $\omega=0$. The second row evaluates $B_1=B(Z=W=e^{i\omega_0})$, where $\omega_0$ is some base frequency. The third row evaluates the Fourier transform for $2\omega_0$, and the bottom row for $3\omega_0$. The matrix could have more than four rows for more frequencies and more columns for more time points. I have made the matrix square in order to show you next how we can find the inverse matrix. The size of the matrix in (26) is $N=4$. If we choose the base frequency $\omega_0$, and hence $W$, correctly, the inverse matrix will be
\begin{displaymath}
\left[ \begin{array}{c}
b_0 \\ b_1 \\ b_2 \\ b_3
\end{array} \right]
\eq
\frac{1}{4} \,
\left[ \begin{array}{cccc}
1 & 1     & 1     & 1     \\
1 & 1/W   & 1/W^2 & 1/W^3 \\
1 & 1/W^2 & 1/W^4 & 1/W^6 \\
1 & 1/W^3 & 1/W^6 & 1/W^9
\end{array} \right]
\left[ \begin{array}{c}
B_0 \\ B_1 \\ B_2 \\ B_3
\end{array} \right]
\end{displaymath} (27)
Multiplying the matrix of (27) with that of (26), we first see that the diagonals are $+1$ as desired. To have the off diagonals vanish, we need various sums, such as $1+W+W^2+W^3$ and $1+W^2+W^4+W^6$, to vanish. Every element ($W^6$, for example, or $1/W^9$) is a unit vector in the complex plane. In order for the sums of the unit vectors to vanish, we must ensure that the vectors pull symmetrically away from the origin. A uniform distribution of directions meets this requirement. In other words, $W$ should be the $N$-th root of unity, i.e.,
\begin{displaymath}
W \eq \sqrt[N]{1} \eq e^{2\pi i/N}
\end{displaymath} (28)
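A numerical check of this choice (a numpy sketch added here for illustration, not part of the original text): with $W=e^{2\pi i/N}$, the matrix of (27) times the matrix of (26) is the identity, because the off-diagonal sums of unit vectors cancel as described above.

import numpy as np

N = 4
W = np.exp(2j * np.pi / N)            # the N-th root of unity of (28)
k = np.arange(N)

F = W ** np.outer(k, k)               # matrix of (26): entries W^{kt}
Finv = W ** (-np.outer(k, k)) / N     # matrix of (27): entries 1/W^{kt}, scaled by 1/N

# Diagonals of the product are +1; sums such as 1 + W + W^2 + W^3 vanish,
# so the off diagonals are zero.
assert np.allclose(Finv @ F, np.eye(N))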

The lowest frequency is zero, corresponding to the top row of (26). The next-to-lowest frequency we find by setting $W$ in (28) equal to $Z=e^{i\omega_0}$. So $\omega_0=2\pi/N$; and for (27) to be inverse to (26), the frequencies required are
\begin{displaymath}
\omega_k \eq { (0, 1, 2, \ldots, N-1) \, 2\pi \over N }
\end{displaymath} (29)
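These are exactly the frequencies a standard FFT uses. One caution for a reader checking this with a library (a note and sketch added here, not in the original text): the sum (24) uses $e^{+i\omega t}$, while numpy's forward FFT uses $e^{-i\omega t}$, so the matrix of (26) reproduces $N$ times numpy's inverse FFT.

import numpy as np

N = 4
k = np.arange(N)
omega_k = 2 * np.pi * k / N           # the frequencies of (29)

W = np.exp(2j * np.pi / N)
F = W ** np.outer(k, k)               # matrix of (26), rows at frequencies omega_k

b = np.array([1.0, 2.0, 0.0, -1.0])   # arbitrary example signal
B = F @ b

# Sign convention: (24) uses exp(+i omega t), numpy's fft uses exp(-i omega t),
# so F @ b equals N times numpy's inverse FFT of b.
assert np.allclose(B, N * np.fft.ifft(b))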


 