My thoughts about electronic publication for the SEG

Since the SEG meeting, I've spent many evenings and half of each weekend ``surfing'' the net (and bombarding people foolish enough to respond to my queries with multiple return volleys of e-mail) trying to find out more about the current state of electronic publication, and where it might be going. What I've found is that it's currently rather a big confusing mess, with different groups doing all sorts of different things and not all that much communication going on between them. It surely looks like it should have an exciting future, though.

Reflecting the fact that it's a big confusing mess, this document will be somewhat rambling. Sorry, but I don't have time to organize it any better than this.

I'm providing my summary as a hypertext document so that while reading my thoughts you can do some ``directed surfing'' of your own. This is not meant to be a comprehensive list of all the sites I found, nor of all the e-mail I received, but just the highlights of what I thought we have the most to learn from. There are probably other sites and other people out there that would be just as interesting (or even better) that I simply didn't happen to connect with.

What's electronic publication?

The first question to answer is, what do we mean by ``electronic publication'' anyway? I've broken it down into these parts:

  - formats for accepting digital manuscripts
  - graphics
  - displaying papers over the network
  - manuscript archives
  - electronic reviewing
  - on-line journals versus printed journals

In the following sections I examine each of these topics in turn. (There is lots of overlap, of course.)

Formats for accepting digital manuscripts

Some of the people with the most experience at accepting papers are the ones that specialize in ``grey literature'' archives: large collections of generic manuscripts that have been subjected to an unspecified level of review, and that may or may not ever appear in a more ``respectable'' venue. The premier site I know of currently hosting grey literature is the e-print archive ``XXX'' at LANL. XXX caters to the broad physics community. The XXX people much prefer TeX source, but will also take PostScript if that's the best the author can do.

The problem with PostScript is that it loses all contextual information. To quote Paul Ginsparg (the fearless leader of XXX):

``... horrifying to contemplate armies of people adding hyperlink overlays `by hand' after the fact, especially when much of the contextual structure is already present in the TeX source, only to be lost in the conversion to dvi and then e.g. to PostScript.''

Mark Doyle, ``doyle@mmm.lanl.gov'', explained to me further how they avoid this problem using TeX:

``So we (and others) have developed HyperTeX to preserve all of this information and we have hacked dvips to stamp all of this information into /pdfmark operators in the PostScript so that the distiller can pick up the information and you get automatically linked documents (click on a reference number and jump to the reference.) We also convert all references to other papers on our archives into URL's to the paper's abstract and this too can be put into the PDF using the 2.1 Distiller so that clicking on that communicates with your Web browser. This effectively turns our raw database into a fully hyperlinked one, especially as it becomes more and more common for people to use the archive paper numbers in their references. Also, HyperTeX is completely retroactive, so all old papers merely need to be re-TeX'ed with the new HyperTeX macros. You can not do any of this with PostScript (well, not nearly as easily and automagically). A paper in PS is basically frozen for all time. It can't be easily updated to take advantage of newer formats like PDF.''

The XXX people also automatically flag things that look like URL's in the text and turn them into links.
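This kind of auto-linking is easy to sketch. Here is a minimal, purely illustrative Python version; the `/abs/` URL layout and the pattern for archive paper numbers are my own inventions, not XXX's actual scheme:

```python
import re

# Naive sketch of auto-linking: wrap bare URLs and e-print paper numbers
# (e.g. hep-th/9511030) in HTML anchors. The /abs/ path is hypothetical.
URL_RE = re.compile(r'(https?://\S+|ftp://\S+)')
EPRINT_RE = re.compile(r'\b([a-z-]+/\d{7})\b')

def autolink(text):
    """Turn bare URLs and e-print numbers in plain text into HTML links."""
    text = URL_RE.sub(r'<a href="\1">\1</a>', text)
    text = EPRINT_RE.sub(r'<a href="/abs/\1">\1</a>', text)
    return text
```

A real implementation would need to be more careful (trailing punctuation, paper numbers that appear inside URLs), but the idea is just pattern matching over the text.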

Another site I found that had a lot to say about electronic submission formats was the ``IEEE Journal on Selected Areas in Communications -- Special issue on the Internet''.

They also prefer TeX, but accept other submission formats as well. They comment that ``Translators exist for converting a number of formats, such as Microsoft RTF or troff, into either LaTeX or HTML.'' (RTF stands for ``Rich Text Format''. Most PC word processors can write and read documents using RTF without losing formatting information, as happens if a document is written as plain text and then read back in.) I asked the author of that comment, Henning Schulzrinne, ``schulzrinne@fokus.gmd.de'', for more information about format translation. His reply:

``I'm no expert in these matters, but generally speaking, there are no tools that directly convert from RTF or FrameMaker to LaTeX, only tools that convert to HTML and then from HTML to LaTeX. In any event, it is very likely that this will require significant hand labor, as mathematics, figures, tables and the like are unlikely to survive the two-step translation process. It is also usually difficult to decide which 'bold, big' text is a section or subsection.

Conversion from LaTeX to other formats is far easier, even if non-trivial to difficult for math and tables.

Realistically, the best non-LaTeX users can do is to send ASCII, PostScript figures and a PostScript rendition of the whole paper. This saves you from extracting the textual stuff; somebody will have to rekey the tables, figure captions and math in any event.''

Unfortunately, from his comments it sounds like RTF also has some of the disadvantages of postscript (losing context information, possible missing font problems). He suggests that to find out more we look at this list of WWW translators and the commercial product IEEE is currently using, ArborText.

Perhaps the most advanced journals at accepting electronic submissions are the AIP ones (for PRINTING, not for online reviewing, nor for electronic dissemination, although they are rapidly advancing on those fronts as well). One of the AIP journals is one many seismologists have probably seen: JASA. The AIP journals accept LaTeX (in particular LaTeX using a style called REVTeX, which GEOPHYSICS also participates in), Word, and WordPerfect.

By providing Word and WordPerfect macros and templates they appear to get around the problems mentioned above with RTF. (Knowing nothing about Word and WordPerfect myself, I can't comment further. The instructions that go with the templates are also in Word and WordPerfect!)

Chris Hamlin, ``chamlin@aip.org'', told me how AIP does things (``Xyvision'' appears to be some sort of professional typesetting language):

``Here is what we do: convert REVTeX to Xyvision. When we added the Word and Wordperfect templates, we built the translation process to go to REVTeX first. So, if the back end changes, we just need to redo one translation. But, note that we do not use REVTeX in the actual publication of anything---it is just an input format.''

When I asked what publisher AIP uses he commented that:

``AIP does everything up to providing a tape of PS files to a printer. Then negatives and plates are made by the printer, who prints and mails the books. So, in your sense of the word, AIP does its own publishing.''

I asked him if GEOPHYSICS would also be able to use their Word and WordPerfect templates:

``The templates are not that complicated, just enough to provide information on structure; e.g., this is a table, this is an abstract, etc. Making up the templates was a good bit of work (what isn't?), but then you need to be able to translate to REVTeX, which is the really hard part.

I don't know what might be needed to adapt to Geophysics use. That would mostly depend on what structure you want to be marked in the article. For example, you may have keywords in the article, which AIP (and REVTeX) do not have.''

I asked him if GEOPHYSICS could get their hands on these magical format converters (Word and WordPerfect to REVTeX, REVTeX to Xyvision):

``After lots of work it does a surprisingly good job (converts tables and math, for example). ... Any questions about licensing, etc., should go to my boss. I am just a programmer-type. His email is `bfilaski@aip.org' [Bill Filaski, 516/576-2322]. No idea if AIP would be receptive to this idea.''

They seem to be having a lot of success; Chris also told me:

``We just finished publishing a proceedings for the Acoustical Society of America where we created a macro package for submission and got more than 1/2 the abstracts in electronically (first time!). Actually, accepting things electronically is a big job and we are getting more and more requests for this sort of service, so that eats up lots of work time.''

We do know our publisher can't yet easily use LaTeX. Jerry Henry, ``76622.3721@compuserve.com'', told Bill Harlan, ``harlan@sep.stanford.edu'', the following:

``Ah, the continuing saga of the attempt to use author submitted LaTeX ms in GEO. We're still trying. Up to now, we have not succeeded in establishing a viable LaTeX processing system. We can and have processed them, but they take much longer to process and are much more expensive than double key entry from hard copy. I've discussed the problem with many publishers and commercial typesetters and they all say the same thing. The problem is the authors like to do things their own way, and the disk editing required to bring the document into conformity is expensive and time consuming. The groups that are going ahead are doing so in spite of the increased cost.

A recently acquired division of Cadmus (our printer) that has more experience with LaTeX is looking at some of our files to see what can be done to make them practically useful. My hope is that they can help us improve our macros to limit an author's options. That's just one of several alternatives being looked at. Another is automatic conversion to SGML. GEOPHYSICS will be totally digitized beginning with the Jan-Feb '96 issue. We will be publishing formats for author submitted figures.

If you'd like more details, please let me know. Meanwhile, we're keeping the pressure on.''

From these comments it appears that the trick is designing a rigid format that can be automatically converted into whatever, without lots of exceptions. Here is what Chris Hamlin had to say on the conversion problem:

``No, we won't go ahead at any cost. I think we took some cost in developing the programs and procedures. And it is not even close to effortless even now. But my impression is that at least it is not draining our life-force completely.

You need to give the authors all the rules up front, and an explanation is also good. If you tell them `no macros' then they will put them in and say that such small macros couldn't possibly harm anyone :). If you tell them why macros are no good then they are more likely to help out. Authors will in general try to help, but there will still be lots of problems.''

Given the difficulties in conversion, perhaps I should not have been surprised to discover that several fields are developing their OWN specialized markup languages to be used for submissions to their journals! The International Union of Crystallography has developed its very own markup language called ``CIF''. Another group is designing a new markup language for Chemistry journals to use.

I should note that our sister organization the AGU is working on electronic publication. The most interesting thing I heard from them was from Jon Sears, ``JSears@agu.org'', who told me:

```Earth Interactions', an interactive electronic journal for the earth system sciences, is under joint collaborative development by AGU and AMS.''

The latest AGU meeting accepted abstracts electronically, and many authors availed themselves of this possibility. (Prospective authors were given a LaTeX template to insert their text into.) They also have some journal ``supplements'' online at their site.

I'm still trying to find out further details about plans there. I couldn't find much from their web pages, but now I've been hearing from readers of this document that the AGU has a lot more stuff going on behind the scenes. Well, if anyone has better ``contacts'' at AGU, maybe you can find out for me and I can include what you find here!!

Most journals I looked at still ask for all submissions on paper. They may also ask for a floppy disk if possible, along with a written description (on paper) of what's on the disk and what format it is in. If the disk happens to be in a format they can read and convert, they use what they can from it and rekey the rest. Often they only bother with that AFTER the paper has already been reviewed and accepted in the usual way.

Graphics

As for graphics, the various sites I looked at seemed to prefer PostScript for black-and-white line drawings, TIFF (or GIF) for color graphics, and high-resolution JPEG for photographs. (I think most online journals will prefer to digitize photographs themselves, to ensure high quality.)

I should add the caveat that perhaps there were some papers out there that used color PDF, and I just didn't know it: my Acrobat PDF viewer always complained that it ``couldn't open 71 contiguous color table entries'' and so was ``running in monochrome mode''. I know in the past I've always had lots of trouble viewing color postscript on screen devices. Grey-scale figures (such as are commonly used for seismic data) often become splashes of rather unappetizing mottled pastels... bleagh.

Displaying papers over the network

I have found different journals currently use many different formats for making papers available:

  - plain HTML
  - HTML with in-line bitmaps
  - bitmap images of the printed pages
  - LaTeX source
  - PostScript
  - Adobe PDF

The most advanced places offer several of these ways to view a paper and let the viewer choose among them. There is general agreement among the people I talked to that JBC Online, published by HighWire Press, is the most advanced of the electronic journals, at least with regard to making their papers available on the network.

Unfortunately, none of the above formats are ideal yet.

Plain HTML without bitmaps is the most general, but has severe limitations. A bitmap image of a journal page lets you reproduce it, but takes up lots of space, is hard to view on a screen, and gains nothing over a FAX or xerox. LaTeX is only useful if the viewer has some simple way to process it (including any macros and style files, etc, assumed to be present in the document). HTML plus bitmaps can take a long time to download and can be hard to read if the bitmaps don't mesh well with the font the viewer happened to choose to render the text.

The most polished way to show a paper across the network is to provide Adobe PDF files that can be rendered using the free Adobe Acrobat reader. Note that standardizing on PDF leaves you at Adobe's mercy to support your users' platforms, though. (Is there Acrobat for Linux, for example? I don't think so. And the best viewer available for SunOS Sparcs is out of date, but it's all I had to use.)

On the other hand, maybe I'm underestimating how quickly the free software people can support something (the PDF specifications are publicly available). Mark Doyle tells me

``Well, versions of ghostscript/ghostview exist that handle PDF on most X-windows platforms. On NeXTSTEP (which is what I use) there are three public domain PDF viewers. The real problem is waiting for Adobe to get around to seeing the value of things like putting URL's in PDF. With 2.1 they seem to have all of the necessary bits for electronic publishing enterprises like ours.''

Also note that Adobe's programs that create PDF from PostScript are NOT free, and they are mostly meant for the IBM-PC / Macintosh world, so they don't always work as well as they should for PostScript files coming from UNIX users. (I've seen many complaints in Usenet postings that certain glyph numbers used by TeX but not by PC word processors have associated bugs that Adobe has been somewhat slow to fix, since they don't bother the customers they care about.)

In any case, whether PostScript or PDF is used, some care must be taken to generate PostScript that can be universally displayed, distills well into PDF, and avoids copyright entanglements.

Authors or electronic publishers should avoid relying on fonts that recipients may not have; without them the document won't print properly. On the other hand, if you include the fonts in the file so that it can be printed on any PostScript printer, redistribution on the network may violate copyright law, because the included fonts may be proprietary! Here are the guidelines the IEEE journal requires its authors to follow to try to avoid these problems.

PostScript generated from TeX almost always uses public-domain bitmapped fonts. That avoids both those problems, but introduces another: bitmapped fonts look awful when converted to PDF and viewed on a display screen. I was told by Berthold K.P. Horn, ``bkph@kauai.ai.mit.edu'', (the main guy at Y&Y, who sell lots of commercial fonts for TeX use, so he knows what he is talking about with fonts and licensing, etc) that newer revisions of TeX (for example ``LaTeX2e'') can use the kinds of fonts PDF works well with, specifically ``Adobe Type Manager (ATM) compatible fonts in Adobe Type 1 format''. He also notes that

``Most (but not all) fonts in the Adobe library can be used without special requests or approval. Other vendors have other policies. Specialized fonts (such as math fonts for TeX) typically can also be used by getting an `electronic journal license' (as distinguished from a single user end license).''

Alternatively, Chris Hamlin, ``chamlin@aip.org'', the AIP REVTeX guru, says all you have to do is

``Be sure to say (and do) the following: `I will be using Distiller 2.0 or higher, with font subsetting on.' This means that only the chars needed from a font are embedded in a particular file. This seems to allay font makers' fears of having fonts stolen from online documents.''

Of course, I also came across postings in comp.lang.postscript directed at Adobe complaining that simply turning font subsetting on isn't good enough; there are other rather well hidden software switches in the Distiller you have to throw to be SURE the entire font doesn't get loaded into the PDF output anyway! The Adobe people agreed and promised to try to improve the documentation.

Unfortunately, the newer versions of TeX may also require slight changes to the various LaTeX style files and macros (but not to the documents themselves). This is a very good reason to have authors using LaTeX use the GEOPHYSICS REVTeX LaTeX package, and submit TeX source, not PostScript. Then the journal can worry about converting that into the right flavor of PostScript and PDF (and those will probably change with time). The maintainers of the GEOPHYSICS REVTeX style file already have a ``LaTeX2e'' update for it.

Newer versions of HTML that support mathematics, etc, may solve all these problems. (Perhaps I'm being overly optimistic.) In the meantime, the choices seem to be either PDF or HTML plus in-line bitmaps. Chris Hamlin, ``chamlin@aip.org'', doesn't seem to think the latter is such a bad way to do things:

``This seems to be the normal way of putting things on the web now. Depending on what format you have your articles in, it should not be TOO hard to convert to html with in-line gifs (jpegs?) for math. Just take the math, convert as needed, run through TeX, use dvips, use ghostscript to convert from ps to some bitmap, use netpbm to crop, size, etc., and stick the gif in. These journals I see seem to be optimized to an extent where if there is just an alpha, for example, it uses a stock gif called "alpha.gif". If there is an equation like `alpha + 1', then a custom gif for that article is produced. That seems like a smart thing to do. And, you need an on-line database to hook to for the references.''
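The stock-versus-custom trick Chris describes above can be sketched in a few lines of Python. This is purely illustrative; the file names and the set of ``stock'' symbols are my own inventions:

```python
# Sketch of the stock-vs-custom bitmap scheme: a lone symbol reuses a
# shared image (e.g. "alpha.gif"), while anything more complicated gets
# its own per-article image. All names here are hypothetical.
STOCK_SYMBOLS = {"alpha", "beta", "gamma", "sum", "infty"}

def gif_for_math(tex, article_id, counter):
    """Return the image file to use for one piece of TeX math."""
    symbol = tex.strip().lstrip("\\")
    if symbol in STOCK_SYMBOLS:
        return "stock/%s.gif" % symbol               # shared across articles
    return "%s/eq%03d.gif" % (article_id, counter)   # rendered per article
```

The per-article images would be produced by the TeX / dvips / ghostscript / netpbm pipeline Chris outlines; the point of the lookup is just to avoid rendering the same lone Greek letter thousands of times.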

He added later that

``Even still, I don't think in-line bitmaps are very good for math, just the best thing I have seen so far (except Acrobat). While talking to Netscape, they expressed no interest at all in adding math display to their products, so things look dim from the HTML viewpoint.''

I suspect Chris has a faster and more reliable internet connection than I do! I always seem to get occasional little ``Mosaic'' symbols interspersed through the text, where there was an in-line GIF that failed to download properly because of a network hiccup. And HTML by design gives very little control over formatting. PDF is preferable for viewing scientific papers, I think.

Manuscript archives

Jon Claerbout has proposed setting up a site at SEP for distributing exploration-geophysics preprints. There are a few points to ponder first. It's instructive to see how the XXX folks do it.

First, how do we accept manuscripts?

The XXX archive accepts submissions by either ftp or e-mail. The author is identified by their e-mail address, just like with mailing lists. Thus, if you want to update or delete a paper you have submitted previously you have to do it from the same e-mail address you used before. This is a little less safe than having username/passwords for each author, but a lot easier to implement. We might want to copy that idea.
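The ownership rule is simple enough to sketch. Here is a toy Python version of the idea (not XXX's actual code; the data layout is invented):

```python
# Toy sketch of e-mail-address ownership: a paper is keyed to the address
# that submitted it, and only that address may replace it later.
class Archive:
    def __init__(self):
        self.papers = {}   # paper_id -> (owner_email, manuscript)

    def submit(self, paper_id, email, manuscript):
        self.papers[paper_id] = (email, manuscript)

    def replace(self, paper_id, email, manuscript):
        owner, _ = self.papers[paper_id]
        if email != owner:
            raise PermissionError("updates must come from the original address")
        self.papers[paper_id] = (owner, manuscript)
```

Of course e-mail addresses can be forged, which is exactly why this is ``a little less safe'' than real passwords; but it needs no account administration at all.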

Mark Doyle tells me that

``When Web browsers become available that let you send files easily (not cutting and pasting them in some small window), we will move to a username/password system. We already have one in place for announcements and for on-line refereeing that we are implementing for a new, archive-based (meaning it will be an overlay to the archives) all-electronic journal.''

How should we screen incoming manuscripts?

I asked Mark Doyle, ``doyle@mmm.lanl.gov'' at XXX, how they set reviewing standards and how much trouble they have from lunatics:

``Yes, I have heard of Abian (and Archimedes Plutonium, formerly Ludwig Plutonium, and Hannu). Surprisingly enough, we get almost none of this stuff (by outright loons that is). Part of my job is to screen all submissions for reasonable content. We are much looser than any journal. Anyway, crackpots do not seem to be attracted to a forum like ours. Either the minor technical hurdle of submitting a proper abstract is too much for them, or they assume their stuff won't make it through, or USENET newsgroups are far more effective for their purposes (they can elicit immediate reactions there). We also hide behind the "requirement" of an academic affiliation. There are a few nuisance authors who submit work that we consider crackpot, but they are mostly harmless and we will exclude them if they abuse the system (submit too many papers, cross-list excessively to other archives, etc.). They are mostly benign though. There is a well-known physicist who has done some good work, but writes a lot of junk. We can't really exclude him, so we use his junk as a "standard" for what we are willing to put up with. So far in practice we do pretty well.''

I wondered whether heavy network traffic might be an issue; evidently Los Alamos already gets so much that XXX (which generates about 50MB/hour of network traffic) does not place an undue additional burden. I was surprised, though, to discover that ``Web robots'' have been a problem at XXX.

How can we make manuscripts available for browsing?

The XXX folks can (and do) generate postscript ``on the fly'' from TeX as needed. I see no reason why they couldn't also distill postscript to PDF ``on the fly'' as well, although they don't yet. I asked Mark Doyle about this and he replied:

``We should be starting to do this soon. We need the Unix 2.1 distiller first. Also, there are some issues which you alluded to about font inclusion. We are likely to use the public domain Bakoma fonts AND not include them in the files, leaving it to users to use any fonts they like (bakoma, or if they want to spend money, Blue Sky or Y&Y's computer modern fonts). There are a few gotchas to solve, but none seem insurmountable.''

For papers that don't have TeX source, or don't have standard TeX source, they simply make available whatever they have and leave it to the viewer to sort out.

The thorniest issues, as usual, are the legal ones. If you submit a paper to a manuscript archive have you ``published'' it? The issue of copyright and disclosure was discussed at a recent AIP forum on electronic publication.

``Paul Berman, of the law firm Covington and Burling, is a copyright and patent specialist. Covington and Burling is APS's principal legal advisor. Berman gave a very thoughtful and comprehensive review of these issues, as they relate to eprint archives. His views as to what constitute publication were fairly strict; he is very confident that posting something on an electronic bulletin board for free or on any readily accessible basis most definitely constitutes publication, legally speaking. This is in principle different than a paper preprint distribution which usually is limited to a relatively small (not more than a few hundred) circulation. Electronic posting is without control, so that the potential audience for an eprint can be as large as you want. The courts would surely interpret this as publication (although this has mostly not yet been tested in the courts). Operators of eprint archives are publishers! It is very important to know who owns the copyright--we are in a new era with no definite answers. The situation is very confusing.''

In other words, authors who submit articles to a manuscript archive AND to a journal that demands signing over of the copyright (most of them) are probably breaking a legal contract. Is it the author's problem, or the manuscript archive's? (LANL, being an arm of the US government, doesn't have to worry about being sued as much as the rest of us. Also, US government workers are not allowed to sign over the copyrights to their works in any case, giving them greater freedom as authors as well.)

The SEG has already told me that it was OK for me to place a few of my old GEOPHYSICS papers online, because the SEG does not take away from the author the right to make personal copies of their own works. If I submitted these papers to the LANL e-print archive, would I be making them ``more available'' than they are now, and thus ``publishing'' them, and violating SEG's copyright? What if LANL merely carried a LINK to the papers where they are now? (Some companies feel that even downloading an inline image from their public pages ``out of context'' is violating copyright law.)

Personally, I don't think the journals can really afford to crack down too hard; they depend on the scientists to write their material and to review it and to read it, and can't afford to alienate them. A lesson of history is that it's better not to try to assert a right it's hopeless to enforce. The SEG should come up with a reasonable policy now while there's still time! GEOPHYSICS will still have the great advantage of collecting all the papers that have appeared in it together in one place, something that no other archive could match. For the next several years at least the vast majority of papers will NOT be placed by their authors online, I expect.

I'm not sure SEP knows what it's getting into in offering to run an e-print archive for the SEG. Maintaining it properly could be a lot of work! Maybe the XXX people would be willing to ``franchise'' their software? That would help cut the workload a lot. Alternatively, maybe SEG should consider asking XXX to start up an ``exploration geophysics'' area, and give SEG authors permission to place their preprints and papers there?

Mark Doyle tells me:

``Many physics journals will now accept electronic versions for submission. In fact, for Nucl. Phys. B, you apparently only have to give the xxx archive paper number, i.e. hep-th/9511030.''

Electronic reviewing

There is a noticeable lack of progress in online reviewing, I suspect because the software tools aren't there yet. Adobe is working on some ``workgroups'' products for PC and Macintosh, but they will cost money and will lag in availability for UNIX users. Adobe Express allows marking up of PDF files, but again it costs money and is targeted at PC and Mac users.

The Journal of Seismic Exploration has proven, however, that we DON'T need more technology than we've already got to get fast turnaround on reviews. Their secret is the FAX machine, first-class mail, and diligent whipping of reviewers.

For a beginning, if a manuscript has been submitted electronically in the proper format, it should be possible to turn it into postscript. Net-capable reviewers could then receive a URL (either ftp or WWW) and password via e-mail. Using that, they could download the file and print it out. They could then FAX back the marked-up manuscript (and e-mail comments). The ``form'' part of the review could be done via a web page (in fact... I've already DONE THAT once reviewing for JGR, but now I can't find that URL!! Can anyone point me to it?).
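The credential step in that workflow is trivial to implement. Here is a hypothetical Python sketch (the URL layout and storage are invented for illustration):

```python
import secrets

# Hypothetical sketch: issue each reviewer a download URL plus a
# throwaway password, then check the password before serving the file.
issued = {}   # (ms_number, reviewer_email) -> password

def issue(ms_number, reviewer_email):
    """Make credentials to e-mail to one reviewer."""
    password = secrets.token_hex(8)
    issued[(ms_number, reviewer_email)] = password
    url = "http://journal.example.org/review/%s/" % ms_number
    return url, password

def may_download(ms_number, reviewer_email, password):
    """Check a reviewer's password before serving the manuscript."""
    return issued.get((ms_number, reviewer_email)) == password
```

Everything else in the scheme (the FAX-back, the e-mailed comments, the web review form) is just existing technology glued together.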

Here is one URL I did find for a journal that appears to be doing something very much like what I describe above already. Unfortunately my connection to their site is so flaky I haven't been able to get it to work recently! Maybe you'll have more luck?

As was mentioned above, the XXX people are working hard on the problem of on-line reviewing right now. They didn't have anything on the web I could look at yet, but they are confident that (at least for the UNIX world) good solutions will be in hand very soon now. They are probably right.

I should comment that while some people think downloading a postscript file and printing it out is MORE convenient for reviewers, several people have also expressed the opinion that it is much LESS convenient, and they feel forcing reviewers to use a computer would be ``an excessive burden''. Ditto for filling out a review form on paper and mailing it back versus filling out a web-page review form.

My suspicion is that the people who prefer paper have secretaries to keep their papers organized and mail things out for them. (For myself, I love the idea of doing it electronically because it saves me having to drive to the post office and pay several dollars of my own money to mail the review form back.) In any case, whatever we do, it had better remain possible to do things the traditional way too.

On-line journals versus printed journals

The big question is, how do you keep the Society from losing money if they make it too easy to get to their papers without a subscription? One possibility is to do what the Journal of Molecular Biology does: give away abstracts for free but require proof of society membership to get access to the full text.
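That access rule is simple to state in code. A toy Python sketch, with an invented membership list and paper layout:

```python
# Toy sketch of the abstracts-free, full-text-for-members access rule.
# The membership list and paper structure are hypothetical.
MEMBERS = {"reader@example.org"}

def fetch(paper, part, user=None):
    """Serve the requested part of a paper, enforcing the access rule."""
    if part == "abstract":
        return paper["abstract"]        # abstracts are free to everyone
    if user in MEMBERS:
        return paper["fulltext"]        # full text only for members
    raise PermissionError("full text requires society membership")
```

The hard part, of course, is not the check itself but deciding on the policy and administering the membership database.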

Over at AIP, Chris Hamlin, ``chamlin@aip.org'', told me what they are doing:

``We already do publish a journal electronically, in cooperation with the OCLC. You can see it at http://www.ref.oclc.org:2000/html/ejo_demo.htm after clicking around. It is the online version of Applied Physics Letters. This is a demo. The on-line journal is run as a subscription service, so it is not free.

We already publish a journal on CD via Acrobat, and I think we are adding another one or two soon. We are considering other projects in online journals also.''

However, at the SEG meeting Sven remarked that most of our members don't read GEOPHYSICS anyway... they are members for other reasons. Maybe GEOPHYSICS can afford to be quite open? Certainly that idea has great appeal to me as a likely future author.

I think I'll have to punt on these money questions! I'm probably too idealistic...!

As for how to proceed, the easiest first step should be accepting manuscripts via e-mail. Chris Hamlin remarks:

``Another thing authors like is the ability to submit electronically, even if the file is not used later on. They do want the file to be used later, but they also want to be able to email/ftp the files to the editorial office. With this you just need to have a TeX guru at the office to work out difficult cases, and a nice PS printer. And, good internet connectivity. This service is much appreciated, for example, by authors in the FSU who say simply `The mail here does not work', and you do not have to reuse the files, if you do not want to.''

As for electronic publishing, I think we should try talking to HighWire Press. They appear to be more interested in getting the electronic revolution going than making a profit, and state in their pages that they are willing to ``franchise'' their software and methods to other organizations. (They are associated with Stanford's publishing house.) However, they are only working on the problem of publishing, not the problem of receiving manuscripts. The director of HighWire Press, John Sack, told me:

``We're not working on the `input' side -- getting the science from the scientists -- but only on the delivery/distribution side. The intake is very complicated, and there are some research projects (I know of one at MIT) and some commercial companies that are working on that; their progress is very, very slow.''

As for submission, we need to get the SEG office staff talking with the people who put together the AIP journals (the ones who gave us REVTeX). I'm sure we can learn a lot from them.

Evidently they are set up to provide some services; Chris Hamlin told me:

``We do the following for our own journals, translation journals, and member society journals:

Editing, Translation services, Indexing, Keyboarding, Proofreading, Data prep/conversion, Image scanning and inclusion, Complete PS files to printer, Web publishing. This will be growing a lot as Pinet is made available on the web.

If you are interested in any services for future projects, email to jjd@aip.org (Jim Donohue). He is head of production services here. Bill Filaski heads the Publishing Services group in Information Technology.''

Enough already! I've already spent FAR too much time on this!!! However, if you have any comments to add... do send them to me. I'll keep updating this document as I get comments and/or complaints from informed readers. I'm sure to have gotten a few things wrong!

I can be reached at ``joe@sep.stanford.edu'' (forwarded).


Thanks to Mark Doyle and Chris Hamlin for their exemplary patience in answering my many questions.


Return to ``Electronic documents and the SEG''.