
LinuxMall.com: Scalability Weighs in

[ Thanks to LinuxNews.com for this link. ]

“Posing his own answer to how far we need to scale, McVoy said, “There’s a certain amount of geeky sex appeal in saying ‘mine is bigger and goes faster than yours,’ but we need to go a lot farther than four-to-eight CPUs.” Government labs, for example, want 1,000-CPU systems, he explained, adding, “Clustering works very well on some things, but not if you have to share the data.”

“In a recent interview, VA Linux Systems’ Senior Cluster Developer John Goebel emphasized what he characterized as the “cluster computing on shared-nothing machines” view. When asked whether that kind of scaling is achievable, Goebel replied, “Maybe not. I don’t know. No one has gone past 1,024 interconnected nodes. There are problems that occur in distributed systems that researchers are knocking their heads against: system date, uptime, message synchronization and bounding problems, to name a few.”
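As a rough illustration of the message-synchronization problem Goebel mentions (this sketch is not from the article; the class and method names are my own), a Lamport logical clock is one classic way nodes can agree on the order of events even when their physical clocks disagree:

    # Illustrative sketch, assuming a message-passing system where each node
    # keeps its own counter: a Lamport logical clock for ordering events.

    class LamportClock:
        def __init__(self):
            self.time = 0

        def tick(self):
            # Local event: advance the logical clock by one.
            self.time += 1
            return self.time

        def send(self):
            # Stamp an outgoing message with the current logical time.
            return self.tick()

        def receive(self, msg_time):
            # On receipt, jump past the sender's timestamp, then advance.
            self.time = max(self.time, msg_time) + 1
            return self.time

Nodes that exchange timestamps this way obtain a consistent causal ordering of messages even when their system dates drift apart, which is one reason logical clocks rather than wall-clock time are used in large distributed machines.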

“Still, Goebel stressed the importance of the s-word’s
definition. “One of the issues is how you define scalability.
For instance, must scaling be linear? What is an acceptable
performance loss? No one that has any experience is going to
believe that you can have linear scaling past 128 – I’m being generous: 64 nodes has dramatic fall-off. Many people would be
pleased with scalability up to 80% efficiency in all the systems of
a distributed machine with 512-plus CPUs.”
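To make the efficiency figures concrete (this example is mine, not the article’s), parallel efficiency is usually defined as speedup divided by CPU count, so “80% efficiency on 512-plus CPUs” means getting at least 410 CPUs’ worth of work out of 512. The sketch below uses Amdahl’s law as a simple model, with an assumed serial fraction of 0.25%, to show why scaling stops looking linear well before 512 CPUs:

    # Illustrative sketch, assuming Amdahl's law with a 0.25% serial fraction
    # (my assumption, not a figure from the article).

    def amdahl_speedup(n_cpus, serial_fraction):
        """Speedup on n_cpus when a fraction of the work cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cpus)

    def efficiency(n_cpus, serial_fraction):
        """Efficiency = speedup / n_cpus; 1.0 would be perfectly linear scaling."""
        return amdahl_speedup(n_cpus, serial_fraction) / n_cpus

    if __name__ == "__main__":
        # Even 0.25% serial work pushes efficiency well below 80% by 512 CPUs.
        for n in (4, 64, 128, 512, 1024):
            print(f"{n:5d} CPUs: efficiency {efficiency(n, 0.0025):.0%}")

Under this model, efficiency is still around 86% at 64 CPUs, roughly 76% at 128, and about 44% at 512, which is in line with the fall-off Goebel describes; real cluster workloads add communication costs on top of this.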

Complete Story
