DrugDiscoveryOnline: The New Workhorse of Gene Sequencing, Proteomics and Drug Development

Feb 09, 2002, 01:30
(Other stories by Joshua Harr)
"A node within a Linux cluster is the basic unit of processing. Typically, it is a server or workstation dedicated to processing information to aid in the massive amounts of number crunching involved in biotech research. With information being processed at such rapid rates, scientists and engineers can devote their time and energy analyzing the information processed by the Linux clusters, not waiting or worrying about speedy or accurate results. Although some traditional supercomputer programs must be adapted for clusters, this investment is attractive for two reasons.

First, nodes consist of readily available off-the-shelf components at commodity pricing. Most users of Linux clusters benchmark a price/performance improvement of ten times over traditional alternatives. Second, the Linux operating system and cluster management software both allow clusters to scale from four processors to several hundred, so users can grow their systems as demand warrants. This scalability and price/performance ratio allow program managers to add nodes economically as needed, without changing their software.

Because of the standard components used in Linux clusters, organizations involved in biotechnology research can afford a small cluster to start processing data and scale as demands increase. Companies such as Tularik and Celera have chosen Linux clusters to meet their supercomputing needs, significantly speeding their research and development processes while increasing reliability. The risk of losing an entire processing run can be eliminated when the cluster is programmed to be forgiving of node failures: a portion of computational capacity may be lost without compromising the data or the completion of the task."
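
The quotation's closing point, tolerating node failures without losing the run, comes down to a simple scheduling pattern: keep every work unit on a master queue and re-queue a unit whenever its node dies. The Python sketch below is a minimal illustration of that pattern, not Linux NetWorX's actual software; the function name run_job, the node names, and the simulated failure rate are all hypothetical.

    import queue
    import random

    def run_job(work_units, node_names, failure_rate=0.1):
        """Dispatch work units to nodes, re-queuing any unit whose node fails."""
        pending = queue.Queue()
        for unit in work_units:
            pending.put(unit)

        results = {}
        alive = set(node_names)            # nodes still accepting work

        while not pending.empty():
            if not alive:
                raise RuntimeError("all nodes failed; job cannot complete")
            unit = pending.get()
            node = random.choice(sorted(alive))
            if random.random() < failure_rate:
                alive.discard(node)        # node is lost: capacity shrinks...
                pending.put(unit)          # ...but the work unit is not
            else:
                results[unit] = "%s processed on %s" % (unit, node)
        return results

    if __name__ == "__main__":
        out = run_job(["sequence-%d" % i for i in range(8)],
                      ["node%d" % i for i in range(4)])
        for unit in sorted(out):
            print(out[unit])

Scaling in this model is just handing run_job a longer node list; neither the work units nor the program change, which is the scalability point the quotation makes.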

Complete Story [ By the CTO of Linux NetWorX, a Linux clustering company ]
