Cluster computing, built from relatively inexpensive computational units, looks to be the most cost-effective high-performance computing option for at least the next five years. Unfortunately, a cluster is not the most convenient processing environment for the type of problems routinely performed in seismic exploration.
Many of the tasks we want to perform require both large memory and massive disk space. These requirements make grid-based computing generally impractical. Storing and accessing these large datasets is also problematic. The Parallel Virtual File System (PVFS) is an attempt to create a virtual file system across several machines. It is still relatively new, and it does not allow good user control of data locality (e.g., you often know you want to store frequency `x' on node one because that is where you plan to process it).
When writing an application for a cluster you often face difficult additional challenges. Problems specific to coding in SEPlib, but indicative of more widespread issues, include: each process may not be able to read a shared history file; each process needs a unique tag descriptor when writing out a temporary file; all shared binary files need to be read locally and then distributed (through some mechanism such as MPI); and writing effective check-pointing in a parallel code can be extremely cumbersome. These problems significantly increase the complexity of writing and debugging applications.
In this paper I discuss an extension of the
basic SEP data description that helps solve these problems.
It extends the definition of a SEPlib dataset
to include one that is stored on multiple machines.
It allows the same code base to be used for both
serial and parallel applications. All that changes
is which libraries you link with.
Parallel datasets can be easily created and accessed,
greatly simplifying coding and debugging in a cluster environment.
I describe how the library works,
and provide several examples of its use.
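The link-time selection described above might look like the following Makefile fragment. The library names (`-lsep`, `-lsep_mpi`) and targets are hypothetical, meant only to show that the application source is compiled once and the serial or parallel behavior is chosen when linking:

```make
# Hypothetical fragment: one object file, two link targets.
migrate_serial: migrate.o
	$(CC) -o $@ migrate.o -lsep

migrate_mpi: migrate.o
	$(MPICC) -o $@ migrate.o -lsep_mpi -lsep
```

Keeping the application code identical between the two targets is what makes debugging practical: a problem can first be reproduced and fixed in the serial build.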