
Function basis

A particular form of the solution (1) arises from assuming the existence of a set of basis functions $\{\psi_k(x)\},\; k \in K$, such that the function $f(x)$ can be represented by a linear combination of these basis functions, as follows:
\begin{displaymath}
f(x) = \sum_{k \in K} c_k \psi_k(x)\;. \qquad (8)
\end{displaymath}
We can find the linear coefficients $c_k$ by taking the dot product of both sides of equation (8) with one of the basis functions, for example $\psi_j(x)$. Inverting the equality
\begin{displaymath}
\left( \psi_j(x), f(x) \right) = \sum_{k \in K} c_k \Psi_{jk}\;, \qquad (9)
\end{displaymath}
where the parentheses denote the dot product, and  
\begin{displaymath}
\Psi_{jk} = \left( \psi_j(x), \psi_k(x) \right)\;, \qquad (10)
\end{displaymath}
leads to the following explicit expression for the coefficients $c_k$:
\begin{displaymath}
c_k = \sum_{j \in K} \Psi^{-1}_{kj} \left( \psi_j(x), f(x) \right)\;. \qquad (11)
\end{displaymath}
Here $\Psi^{-1}_{kj}$ denotes the $kj$ component of the inverse of the matrix $\Psi$. The matrix $\Psi$ is invertible as long as the basis functions are linearly independent. In the special case of an orthonormal basis, $\Psi$ reduces to the identity matrix:
\begin{displaymath}
\Psi_{jk} = \Psi^{-1}_{kj} = \delta_{jk}\;. \qquad (12)
\end{displaymath}
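
As a concrete illustration, the following Python sketch builds the matrix $\Psi$ of equation (10) and recovers the coefficients $c_k$ via equation (11). It is a minimal example with assumed ingredients that are not in the text: a monomial basis $\psi_k(x) = x^k$ on $[0,1]$, the example function $f(x) = e^x$, and trapezoid-rule quadrature for the dot products.

\begin{verbatim}
import numpy as np

# Assumed setup (illustrative, not from the text): monomial basis
# psi_k(x) = x^k, k = 0..3, on [0, 1]; dot product (g, h) = int_0^1 g h dx,
# approximated with the trapezoid rule on a fine grid.
K = 4
x = np.linspace(0.0, 1.0, 2001)
psi = np.array([x**k for k in range(K)])   # sampled basis, shape (K, len(x))
f = np.exp(x)                              # example function to approximate

# Gram matrix Psi_{jk} = (psi_j, psi_k), equation (10)
Psi = np.array([[np.trapz(psi[j] * psi[k], x) for k in range(K)]
                for j in range(K)])

# Right-hand side (psi_j, f) and coefficients c_k via equation (11)
b = np.array([np.trapz(psi[j] * f, x) for j in range(K)])
c = np.linalg.solve(Psi, b)                # c = Psi^{-1} b

# Reconstruction by equation (8) and its approximation error
f_hat = c @ psi
print("coefficients:", c)
print("max abs error:", np.abs(f_hat - f).max())
\end{verbatim}

If the basis were instead orthonormal under the same dot product, $\Psi$ would be the identity, the solve step would be unnecessary, and the coefficients would reduce to $c_k = (\psi_k, f)$, as in equation (12).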

Equation (11) is a least-squares estimate of the coefficients $c_k$: one can derive it alternatively by minimizing the least-squares norm of the difference between $f(x)$ and the linear decomposition (8). For a given set of basis functions, equation (11) thus approximates the function $f(x)$ in formula (1) in the least-squares sense.
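
This least-squares connection can be checked numerically. Under a discrete, quadrature-weighted dot product, formula (11) coincides with the solution of a direct weighted least-squares fit; the sketch below (same assumed basis and grid as in the previous example) demonstrates the agreement.

\begin{verbatim}
import numpy as np

# Numerical check (assumed basis and grid, as above): with a
# trapezoid-weighted discrete dot product, the coefficients of
# equation (11) coincide with those of a direct weighted
# least-squares fit minimizing || f - sum_k c_k psi_k ||.
K = 4
x = np.linspace(0.0, 1.0, 2001)
h = x[1] - x[0]
w = np.full_like(x, h)                     # trapezoid quadrature weights
w[0] = w[-1] = 0.5 * h
A = np.stack([x**k for k in range(K)], axis=1)  # columns are sampled psi_k
f = np.exp(x)

# Route 1: Gram matrix and equation (11)
Psi = A.T @ (w[:, None] * A)
c_gram = np.linalg.solve(Psi, A.T @ (w * f))

# Route 2: weighted least squares on the same samples
sw = np.sqrt(w)
c_lsq, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f, rcond=None)

print(np.allclose(c_gram, c_lsq))          # True: the two routes agree
\end{verbatim}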

