
Scaling up to big problems

Although most of the examples in this book are presented as toys, where results are obtained in a few minutes on a home computer, we always have serious industrial-scale jobs in the back of our minds. This forces us to avoid representing operators as matrices. Instead we represent an operator as a pair of subroutines, one to apply the operator and one to apply the adjoint (transpose matrix). (This will become clearer when you reach the middle of chapter 2.)
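
To make the function-pair idea concrete, here is a minimal sketch, written in Python with NumPy rather than in the Loptran used later in this book; the operator chosen for illustration is causal first differencing, and the essential point is that neither routine ever forms or stores the bidiagonal matrix. The closing dot-product test is the usual check that the two routines really are adjoints of each other.

import numpy as np

def first_difference(x):
    # Forward operator  y = D x:  y[0] = x[0],  y[i] = x[i] - x[i-1].
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - x[:-1]
    return y

def first_difference_adjoint(y):
    # Adjoint operator  x = D' y, written directly from the loops above
    # rather than by transposing a stored matrix.
    x = np.empty_like(y)
    x[:-1] = y[:-1] - y[1:]
    x[-1] = y[-1]
    return x

# Dot-product test:  (D x) . y  should equal  x . (D' y),
# the standard check that the two routines really are adjoints.
rng = np.random.default_rng(seed=1)
x = rng.standard_normal(100000)   # model space, 10^5 elements
y = rng.standard_normal(100000)   # data space,  10^5 elements
print(np.dot(first_difference(x), y))
print(np.dot(x, first_difference_adjoint(y)))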

By taking a function-pair approach to operators instead of a matrix approach, this book becomes a guide to practical work on realistic-sized data sets. By realistic, I mean as large as or larger than those here; i.e., data ranging over two or more dimensions, with the data space and model space each larger than about $10^5$ elements, about a $300\times 300$ image. Even for these, the world's biggest computer would be required to hold in random access memory the $10^5\times 10^5$ matrix linking data and image. Mathematica, Matlab, kriging, etc., are nice tools, but it was no surprise when a curious student tried to apply one to an example from this book and discovered that he needed to abandon 99.6% of the data to make it work. Matrix methods are limited not only by the size of the matrices but also by the fact that the cost to multiply or invert is proportional to the third power of the size. For simple experimental work, this limits the matrix approach to data and images of about $10^3$ elements, a $30\times 30$ image.
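
As a rough back-of-the-envelope check on these sizes (assuming single-precision storage): a $10^5\times 10^5$ matrix holds $10^{10}$ elements, which at 4 bytes each is $4\times 10^{10}$ bytes, roughly 40 gigabytes, and direct inversion costs on the order of $(10^5)^3 = 10^{15}$ arithmetic operations. At $10^3$ elements the matrix shrinks to $10^6$ entries (a few megabytes) and inversion to roughly $10^9$ operations, which is why casual matrix experiments stop near a $30\times 30$ image.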

