Thanks to John Gowin
for this link.
“Clustering is a plodding solution to the old problem of
achieving scalability. But the work being done behind the scenes on
Linux clusters is likely to make them a corporate favorite.”
“The appeal is that you can start with one server, then gain
both horsepower and reliability by linking one or more servers
together. A Linux cluster would tie together machines with both a
low-cost operating system and low-cost hardware to achieve
something that has proven highly elusive in the past: low-cost
scalability.”
“The big drawback is that clusters swiftly accumulate overhead,
which must be subtracted from their combined horsepower. If one
server needs a file, for example, it must first ask other servers
in some fashion whether the file is already in use. It takes only a
few additions to a small cluster for the overhead of this simple
question-and-answer process to mount up until it starts to
overwhelm the performance gain. In its first implementation, the
cluster resembled the office coffee klatch – lots of communication
going on but not much work getting done.”
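
The question-and-answer overhead the article describes scales with cluster size, and a back-of-the-envelope model makes the limit visible. The Python sketch below is purely illustrative, with hypothetical costs chosen for the example; it assumes each file access requires one round-trip query to every peer node before any work proceeds, and shows how the aggregate speedup flattens as that coordination cost comes to dominate.

    # Back-of-the-envelope model of cluster coordination overhead.
    # All numbers are hypothetical, chosen only to illustrate the trend.

    WORK_MS = 10.0   # useful work per file access (assumed)
    QUERY_MS = 1.0   # round trip to ask one peer "is this file in use?" (assumed)

    def effective_speedup(nodes: int) -> float:
        """Aggregate throughput of the cluster relative to one server.

        Each access costs WORK_MS of real work plus one QUERY_MS round
        trip per peer, so per-node throughput falls as nodes are added.
        """
        per_access_ms = WORK_MS + QUERY_MS * (nodes - 1)
        cluster_rate = nodes / per_access_ms   # accesses per ms, whole cluster
        single_rate = 1.0 / WORK_MS            # one server, no coordination
        return cluster_rate / single_rate

    for n in (1, 2, 4, 8, 16, 32, 64):
        print(f"{n:3d} nodes -> {effective_speedup(n):5.2f}x speedup")

Under these assumptions the speedup can never exceed WORK_MS / QUERY_MS (10x here) no matter how many machines are added: past a point, each new server spends more time answering its peers than doing work, which is the coffee-klatch effect in miniature.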
“Thus, the cluster remained a poor man’s answer to big server
computing. Due to its inherent limits, it could never really
challenge the IBM mainframe’s data swapping capability. As late as
1995, IBM’s clustering expert, Gregory Pfister, could write in
In Search of Clusters (Prentice Hall), ‘The topic of
clusters is not recognized as a coherent discipline. Its current
state mirrors that of geography in the Middle Ages . . . a ragbag
filled with odds and ends of knowledge and pseudo-knowledge.’”