
Spectral factorization

Calculating the autocorrelation of a function is similar to squaring a number, except that instead of losing only the sign, we lose the function's phase. And rather than just two numbers sharing the same square, an infinite family of functions shares the same autocorrelation. This leaves us with a much larger ambiguity.
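A tiny numerical illustration (the arrays are my own example): a wavelet and its time reverse differ only in phase, so their autocorrelations are identical and the autocorrelation alone cannot distinguish them.

```python
import numpy as np

x = np.array([1.0, 0.5])   # a short wavelet
y = x[::-1]                # its time reverse: different phase, same amplitude spectrum

ax = np.correlate(x, x, mode="full")
ay = np.correlate(y, y, mode="full")
# Both autocorrelations are [0.5, 1.25, 0.5]; the phase information is gone.
```

Passing either wavelet through any all-pass filter would likewise leave the autocorrelation unchanged, which is where the infinite family comes from.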

The simplest way to resolve the ambiguity is to set the phase to zero. Unfortunately, the resulting functions are acausal -- energy begins to arrive before time zero. This is counterintuitive, and inappropriate for most physical systems, as it implies a ball moves before it is kicked.
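A sketch of the zero-phase choice (illustrative code, even-length FFT grid assumed): taking the square root of the power spectrum with zero phase produces a factor that is symmetric about time zero, and therefore acausal.

```python
import numpy as np

n = 64
w = np.zeros(n)
w[0], w[1] = 1.0, 0.5              # a causal wavelet
S = np.abs(np.fft.fft(w)) ** 2     # its power spectrum

zp = np.fft.ifft(np.sqrt(S)).real  # zero-phase factor of S
# zp is symmetric about t = 0 (zp[k] == zp[n - k]), so it carries
# energy at negative lags -- it is acausal.
```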

A second way to resolve the ambiguity is to insist on a causal function, but one whose energy is packed as close to time zero as possible. This is known as the minimum-phase solution. Besides being causal and relatively compact in time, minimum-phase functions have another interesting property: the inverse of a minimum-phase function is also minimum phase, and hence causal. Because of these properties, many physical systems turn out to fit the minimum-phase model.
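To illustrate the inverse property (my own toy example): the wavelet (1, 0.5) is minimum phase, since its zero lies inside the unit circle, and its inverse filter is a causal, decaying sequence.

```python
import numpy as np

w = np.array([1.0, 0.5])                 # minimum phase: zero at z = -0.5
assert np.all(np.abs(np.roots(w)) < 1)   # the zero is inside the unit circle

# Impulse response of the inverse filter 1 / (1 + 0.5 z^{-1}),
# computed by the usual recursion:
n = 16
h = np.zeros(n)
h[0] = 1.0 / w[0]
for k in range(1, n):
    h[k] = -w[1] * h[k - 1] / w[0]
# h = [1, -0.5, 0.25, -0.125, ...]: causal and decaying.
```

Convolving w with h returns (approximately) a unit impulse; the maximum-phase wavelet (0.5, 1), by contrast, would have a growing, unstable causal inverse.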

There is only one possible minimum-phase function with a given autocorrelation, and spectral factorization is the problem of determining that unique function.

One-dimensional solutions to the spectral factorization problem are well known: for example, Claerbout (1992) describes several approaches, including the Kolmogorov algorithm (Kolmogorov, 1939), which I briefly review in Chapter [*]. For multi-dimensional signals, however, the problem itself is less clear: what exactly constitutes a causal function in a multi-dimensional space?



 
Stanford Exploration Project
5/27/2001