The late 1990s saw a shift from large shared-memory machines to inexpensive PC hardware formed into clusters running [[http://www.linux.com|Linux]] (often referred to as [[http://www.beowulf.com|"Beowulf" clusters]]). These machines provide inexpensive computational power, but normally require considerably more programming knowledge for full utilization.
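Programming such clusters typically means explicit message passing between nodes, rather than the shared address space the older machines offered. The following is a minimal illustrative sketch, not an SEP code, assuming MPI (the de facto message-passing standard on Beowulf clusters): each process sums part of a range, and the partial results are combined on one node.

<code c>
/* Minimal sketch of message-passing on a Beowulf cluster using MPI.
   Each process sums its share of 0..999; rank 0 gathers the total.
   Build/run (typical): mpicc sum.c -o sum && mpirun -np 4 ./sum */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

    /* Each rank computes a partial sum over a strided slice of the range. */
    double local = 0.0, total = 0.0;
    for (int i = rank; i < 1000; i += size) local += i;

    /* Combine partial results on rank 0: the explicit communication step
       that a shared-memory machine lets the programmer avoid. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("total = %g\n", total);

    MPI_Finalize();
    return 0;
}
</code>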
In 1999 SEP bought five four-processor 1400L boxes from [[http://www.sgi.com|SGI]]. In 2001, SEP bought a [[specs|16-node dual-processor machine, omu]] from [[http://www.lnxi.com|Linux Networks]]. In 2001-2002, SEP expanded its computational and storage capacity further by building its own Linux machines out of components. We built a [[specs|2 TB disk server, koko]] and a [[specs|32-node, 64-processor cluster, okok]].
In 2003 SEP started a Linux Cluster Initiative with some of its industrial sponsors. This initiative provided funds for three additional Beowulf clusters from [[http://www.californiadigital.com/|California Digital]]. The first cluster, bought in 2003, consisted of 40 nodes, each with dual 2.4 GHz Xeon processors and 2 GB of RAM. For memory-intensive applications, in 2003 we also bought an 8-node dual 2.4 GHz Xeon cluster with 6 GB of RAM (the maximum practical memory for a 32-bit chip). Our memory needs continue to grow, so in January 2005 we bought a 32-node dual Xeon64 cluster, half of the nodes with 4 GB of RAM and the other half with 8 GB.
SEP researchers use these computer facilities to develop parallel applications to solve computationally intensive problems, such as:
  * 2-D and 3-D prestack and poststack imaging and inversion
To find out more about the research performed on these topics, you can look at the SEP [[sep:people:people#students|student]] and [[sep:people:people|faculty]] research pages.
Research projects are usually developed using SEP's seismic processing system [[sep:software:seplib|(a.k.a. SEPlib)]]. SEP provides a unique opportunity to learn first-hand about all aspects of scientific computing, including system building and design, system administration, and software development.
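As a rough sketch of the shape of a SEPlib-style program, here is a filter that scales every sample of a dataset streamed through standard input and output. It assumes the traditional seplib C interface (initpar, fetch, getch, putch, hclose, sreed, srite, seperr); the exact names and signatures should be checked against the SEPlib documentation, and the program name and parameters are hypothetical.

<code c>
/* Sketch of a seplib-style filter: scale every sample of the input.
   ASSUMES the traditional SEPlib C interface; verify against the docs. */
#include <stdio.h>
#include <stdlib.h>
#include <seplib.h>

int main(int argc, char **argv) {
    int n1;
    float scale;

    initpar(argc, argv);               /* parse command-line parameters */
    if (!fetch("n1", "d", &n1))        /* trace length from input history */
        seperr("n1 missing from history\n");
    if (!getch("scale", "f", &scale))  /* user parameter, e.g. scale=2. */
        scale = 1.0f;
    putch("scale", "f", &scale);       /* record parameter in output history */
    hclose();                          /* finish writing the output header */

    int nbytes = n1 * sizeof(float);
    float *trace = (float *) malloc(nbytes);
    /* Stream: read each trace, scale it, write it out. */
    while (sreed("in", trace, nbytes) == nbytes) {
        for (int i = 0; i < n1; i++) trace[i] *= scale;
        srite("out", trace, nbytes);
    }
    free(trace);
    return 0;
}
</code>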
If you are interested in scientific computing, and parallel computing in particular, you may find it interesting to look at the WWW pages of these other research programs at Stanford: the [[http://www-sccm.stanford.edu/|Scientific Computing/Computational Mathematics]] program and the [[http://www-flash.stanford.edu/|FLASH]] project.
\\ \\
{{page>share:footer&noeditbtn&nofooter}}