
Will Mindcraft II Be Better?

By Dave Whitinger and
Dwight Johnson

The Mindcraft affair is riddled with suspicious details: an NT
versus Linux performance test sponsored by Microsoft and run from a
Microsoft lab in Redmond, Linux community members not allowed to
contribute, and so on.

While the press has been reluctant to accept these tests as
accurate, Mindcraft is now conducting a re-test. They apparently
hope to gain more credibility by testing this time around with the
assistance of leading members of the Linux community. Linus
Torvalds, Alan Cox and Jeremy Allison, among others, have all been
solicited by Mindcraft.

But Linus Torvalds told Linux Today that nobody in the Linux
community is really working on the Mindcraft test per se, because
Mindcraft hasn’t allowed them access to the test site.

Quite frankly nobody in
the Linux community is really working on the mindcraft thing per se
— because mindcraft doesn’t allow us access to the test site. So
there’s really little point. — Linus
Torvalds

Linus concedes that they have given Mindcraft some hints about
different things to try, but the opaqueness of what is being
tested basically means that it is not worth their time to try to
second-guess what the problem is.

Linus told Linux Today that his impression was that “Mindcraft”
is really only one person (maybe two), and that they don’t even
have a testing laboratory: everything points to the fact that
Mindcraft ran the test at the Microsoft performance lab.

All the e-mails from Mindcraft people (Bruce Weiner and an
e-mail alias that goes by the name of “will”, which seems to be just
another account of Bruce Weiner’s) have IP addresses inside
Microsoft. If you look up the original newsgroup posting that they
refer to in the original test paper, you find that it was posted
through a site with IP address 131.107.3.71. If you then look up
that address, you see that it is “tide71.microsoft.com”.
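
For readers who want to check this sort of thing themselves, the
sketch below performs the same kind of reverse DNS lookup using
Python's standard socket module. The hostname reported in 1999 was
tide71.microsoft.com; the address may resolve differently, or not at
all, today.

    import socket

    # Reverse-resolve the IP address cited in the article. The hostname
    # reported at the time was "tide71.microsoft.com"; the lookup may
    # return something else (or fail) now.
    ip = "131.107.3.71"
    try:
        hostname, aliases, addresses = socket.gethostbyaddr(ip)
        print(f"{ip} -> {hostname}")
    except socket.herror as exc:
        print(f"No reverse DNS entry for {ip}: {exc}")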

Linus suggests that anyone who has been in e-mail contact with
Mindcraft look, just for fun, at the extended header information in
those e-mails.
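
One way to do that, sketched below with Python's standard email
module, is to save a message from your mail client and print its
Received headers, which record each relay's hostname and IP address.
The file name used here is only a hypothetical example.

    from email import policy
    from email.parser import BytesParser

    # Parse a message saved from a mail client (the file name is a
    # hypothetical example) and print its Received headers, which list
    # each relay host and IP address the message passed through.
    with open("mindcraft-reply.eml", "rb") as fp:
        msg = BytesParser(policy=policy.default).parse(fp)

    for received in msg.get_all("Received", []):
        print(received)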

Jeremy Allison, also contacted by Mindcraft, provides Linux
Today readers with additional insight:

Mindcraft was contacted by a journalist after their
original white paper. The journalist, who has interviewed Linus in
the past, suggested to Mindcraft that they contact Linus for tuning
advice. They did so, and Linus pulled Alan Cox, Dave Miller on the
kernel side, myself on the Samba side (and I pulled Andrew in) and
someone on the Apache side (sorry, not sure who) into the e-mail
discussions in order to help.

One of our early concerns was that one of us would be
present on site to help advise on the Linux box and to be a Linux
community representative in the tests.

Mindcraft was unable to allow this, claiming “other tests
under NDA” in their lab prohibited others being on site.

I personally believe (NOTE: *My opinion only* !) that this
is because the lab Mindcraft is using is actually situated on the
Microsoft campus, in which case I can well believe there are NDA
issues with a Linux representative being there.

(Evidence to support this supposition is located here. -lt ed)

We tried to advise Mindcraft as best as we could under the
circumstances (via e-mail and phone calls) how to get the best
performance on their machine using Samba and Linux.

Unfortunately, Mindcraft was not able to get credible results on
their re-test. Note, I’m defining credible here as “within the same
range as the numbers obtained by PC Week, which are close to
numbers obtained in a benchmark run on an SGI Intel-cpu Linux server
in my lab here at SGI” (see this story for the numbers obtained by
PC Week using Samba and Linux 2.2).

The PC Week numbers were run in their magazine lab, and I
obviously believe my own numbers :-). As an example, I can say that
running NT on the same SGI Intel-cpu hardware that we ran Linux on,
and applying the same tuning to NT that Mindcraft did, we get
nearly identical results from the NT server to the ones published
by Mindcraft.

The essence of scientific testing is *repeatability* of the
experiment, and I can confirm that we have reproduced Mindcraft’s
NT server numbers here in our lab. It is a shame that they cannot
reproduce the PC Week Linux numbers in theirs.

Without access to the machine there is no way to know exactly
why Mindcraft is getting lower results, and there the re-run sits
at the moment.

I’m sure that if Mindcraft can re-run their benchmark in an open lab
setting, allowing full access by the Linux people involved, we will
have a much fairer result.

Note that this doesn’t mean Linux will necessarily win (it
doesn’t when serving Win95 clients here in my lab, although it does
when serving NT clients), but that we will have a fairer
comparison.

It seems very unlikely from all of this that the Mindcraft
re-test will be any more credible than the last.

Update (May 4th): Mindcraft has published a rebuttal to this article.
