Advanced research in 3-D reflection seismology requires the solution of large computational and data-intensive problems. SEP has a long tradition of early adoption and effective utilization of the leading-edge scientific computers available (PDP-11 - Vax 780 - Convex C1 - CM-5 - Power Challenge).
The late 1990s saw a shift from large shared-memory machines to inexpensive PC hardware formed into clusters running [[http://www.linux.com|Linux]] (often referred to as [[http://www.beowulf.com|"Beowulf" clusters]]). These machines have the advantage of providing inexpensive computational power, but normally require considerably more programming knowledge for full utilization.
In 1999 SEP bought five four-processor 1400L boxes from [[http://www.sgi.com|SGI]]. In 2001, SEP bought a [[specs|16-node, dual-processor machine, omu]] from [[http://www.lnxi.com|Linux Networks]]. In 2001-2002 SEP expanded its computational and storage capacity further by building its own Linux machines out of components. We built a [[specs|2 TB disk server, koko]] and a [[specs|32-node, 64-processor cluster, okok]].
In 2003 SEP started a Linux Cluster Initiative with some of its industrial sponsors. This initiative provided funds for three additional Beowulf clusters from [[http://www.californiadigital.com/|California Digital]]. The first cluster, bought in 2003, consisted of 40 dual 2.4 GHz Xeon nodes with 2 GB of RAM per node. For memory-intensive applications, in 2003 we also bought an 8-node, dual 2.4 GHz Xeon cluster with 6 GB of RAM per node (the maximum practical memory for a 32-bit chip). Our memory needs continue to grow, so in January 2005 we bought a 32-node, dual Xeon64 cluster, half of the nodes with 4 GB of RAM and the other half with 8 GB.
SEP researchers use these computer facilities to develop parallel applications to solve computationally intensive problems, such as:
  * 2-D and 3-D prestack and poststack imaging and inversion
  * 2-D and 3-D seismic tomography
  * 2-D and 3-D acoustic and elastic modeling
  * 3-D seismic visualization
  * 2-D and 3-D shear-wave processing
  * 2-D and 3-D multiple elimination
  * 4-D seismic processing
To find out more about the research performed on these topics, see the SEP [[sep:people:people#students|student]] and [[sep:people:people|faculty]] research pages.
Research projects are usually developed using SEP's seismic processing system, [[sep:software:seplib|SEPlib]]. SEP provides a unique opportunity to learn first-hand about all aspects of scientific computing, including system building and design, system administration, and software development.
If you are interested in scientific computing, and parallel computing in particular, you may also want to look at the web pages of these other research programs at Stanford: the [[http://www-sccm.stanford.edu/|Scientific Computing/Computational Mathematics]] program and the [[http://www-flash.stanford.edu/|FLASH]] project.
\\ \\
{{page>share:footer&noeditbtn&nofooter}}