Linux Today: Linux News On Internet Time.
LinuxPlanet: Tolerating Fault in an Intolerant World

Dec 23, 2002, 19:00 (1 Talkback[s])
(Other stories by Brian Proffitt)

"High-availability computing is something else that clusters can be used for. If you have a need for a lot of transaction-handling that need 24/365 uptime, clusters are good because if one processor fails, then the load will automatically be handled by the other nodes until the faulty processor can be repaired.

"This sounds very good, and it is. But there are some challanges to making this all work smoothly. For instance, not all software can run on a cluster. It's not something that you just bring up on the screen, type 'go,' and expect to take full advantage of the parallelism that makes a cluster really shine. There is a significant amount of work that needs to be done to re-tool an application to run in a cluster.

"For high-performance work, this sort of thing is a necessary evil. After all, you were likely going to have to port the application to a new platform anyway, and porting to a clustered Linux farm is still a lot easier than porting to a proprietary mainframe OS.

"But to have to do this sort of work for a high-availability cluster is a notion that one major computer manufacturer is challanging. In fact, this company is turning the whole notion of using clusters for high availabilty computing on its ear..."

Complete Story

Related Stories: