A valuable goal is to isolate the CG solving program from
all physical aspects of the linear operator.
We wish to abstract the CG solver,
to put it in a compiled subroutine library
where we can feed it inputs and get back outputs,
never again seeing the internals of how it works
(as we already do with the fast Fourier transform).
Unfortunately, the primitive nature of the **Fortran**-**Ratfor** language
does not appear to allow this abstraction.
The reason is that the CG program needs to call
your linear operator routine and,
to do this, it needs to know not only the name of your routine but
how to supply its arguments.
(I recall from years ago a **Fortran** where subroutines could have several
entry points. This might help.)
Thus, everywhere in this book where I solve a model-fitting problem,
we must see some of the inner workings of CG.
To keep this from becoming too objectionable,
I found the nonstandard CG method we have been using
and coupled it with Bill **Harlan**'s idea of isolating
its inner workings into `cgstep()`.
Because of this we can see complete code for many examples,
and the code is not awfully cluttered.
Unfortunately, my CG is not Hestenes' CG.

In many of the **Fortran** codes you see in this book
it is assumed that the abstract vector input and vector output
correspond to physical one-dimensional arrays.
In real life, these abstract vectors are often matrices,
or matrices in special forms,
such as windows on a wall of data
(nonpacked arrays),
and they may contain complex numbers.
Examining equation (53), we notice that the space
of residuals for a damped problem is composed of two parts,
the residual of the original problem
and a part the size of the unknowns.
These two parts are packed, somewhat uncomfortably,
into the abstract residual vector.

A linear operator really consists of (at least) four subroutines,
one for applying the operator, one for its adjoint,
one for a dot product in the space of inputs,
and one for a dot product in the space of outputs.
Modern programming theory uses the terms ``data abstraction''
and ``**object-oriented programming** (**OOP**)''
to describe methods and languages that are well suited
to the problems we are facing here.
The linear-operator object is what the CG solver needs to be handed,
along with an instance of the input abstract vector
and a pointer to space for the output vector.
(The linear-operator object, after it is created,
could also be handed to a universal dot-product test routine.
With **Fortran** I write a separate dot-product test program
for each operator.)

Another abstraction that **Fortran** cannot cope with is this:
the CG program must allocate space for the gradient and past-steps vectors.
But the detailed form of these abstract vectors
should not be known to the CG program.
So the linear-operator object requires four more routines
(called ``methods'' in OOP)
that the CG routine uses to allocate and free memory
(to *create* and *destroy*
*objects* from the physical space of inputs and outputs).
In this way OOP allows us to isolate concepts,
so that each concept need only be expressed once.
A single version of a concept can thus be reused without being replicated
in a form blended with other concepts.

As I am going along generating examples for this book,
and as the examples get more complicated, I am wondering
just where I will drop the idea of exhibiting complete codes.
Obviously, if I switched from **Fortran** to a more modern language,
such as **C++**, I could get further.
The disadvantages of C++ are that I am not experienced in it,
few of my readers will know it,
and its looping statements are cluttered and
do not resemble mathematics.
Instead of `do i=1,n`,
C and C++ use `for( i=0; i<n; i++)`.
It would be fun to do the coding in a better way,
but for now,
I am having more fun identifying new problems to solve.

10/21/1998