Thanks to Jason Perlow
for this information.
FYI:
Jason Perlow
Contributing Editor
ZD Sm@rt Reseller
----Original Message Follows----
From: Chris Lemmons
To: Nick Stam
CC: Jason Perlow, dvorak@dvorak.org, Steve Buehler, Jeffrey Witt, Eric Hale, Tom Ponzo
Subject: Re: contacts at PC mag for Linux performance test
Date: Mon, 19 Apr 1999 17:36:24 -0400
Greetings:
Just wanted to give a quick response to this thread. First, the
ServerBench port for Linux is available for testing. There were
some performance issues early in development that turned out to be
mainly related to the network configuration we were using. Once
these were worked out, things got a lot better for Linux compared
to NT 4.0.
We did a lot of investigation into ServerBench performance on
Linux, particularly how Linux manages its data and write caches,
since that had an impact on the ServerBench scores. What it boils
down to is that, at least for ServerBench, because of the way
Linux manages cache, it runs out of data cache before NT 4.0
does. This causes the disk subsystem to become a bigger factor
earlier in the test.
Obviously, when you have to start hitting the disk, the
performance is going to suffer, particularly if you have a poorly
tuned RAID or no RAID at all.
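If you want to watch this happen, here is a rough sketch (my own
illustration, not part of ServerBench) of a small Python script
that samples /proc/meminfo on the Linux server once a second
during a run, so you can see free memory and the buffer/page
cache levels as the test mix ramps up. The field names are the
standard /proc/meminfo labels; the one-second interval is
arbitrary.

    #!/usr/bin/env python
    # Illustrative sketch: sample /proc/meminfo once a second during
    # a benchmark run and log how much memory is free versus sitting
    # in the buffer/page cache.
    import time

    def meminfo():
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                if ":" in line:
                    key, rest = line.split(":", 1)
                    try:
                        # First token after the colon is the value in kB.
                        values[key.strip()] = int(rest.split()[0])
                    except ValueError:
                        pass  # skip lines that are not "Name: number"
        return values

    while True:
        m = meminfo()
        print("free=%d kB  buffers=%d kB  cached=%d kB"
              % (m.get("MemFree", 0), m.get("Buffers", 0),
                 m.get("Cached", 0)))
        time.sleep(1)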
We haven’t really looked at NetBench/Linux in that detail, but
plan to now that we’ve pretty much got the new network benchmarks
out the door. I do remember seeing the results where Linux beat NT
using NetBench. As I recall, the server had a single disk spindle
and only 64MB of RAM. With that amount of RAM, NT is only going to
be able to cache the data for about 2 NetBench clients after taking
what it needs for the OS, before the test grinds to a halt
because of the disk bottleneck created by the single slow disk drive. I
believe Linux has a much smaller footprint and could probably cache
the data for a few more NetBench clients. I think this is probably
the main difference in the scores.
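To put rough numbers on that reasoning, here is a
back-of-the-envelope sketch in Python. The OS footprint and
per-client data-set sizes are assumptions I picked for
illustration, not measured values; only the 64MB of total RAM
comes from the configuration described above.

    #!/usr/bin/env python
    # Rough arithmetic: how many NetBench clients' data sets fit in
    # the file cache?  All sizes except total RAM are assumed
    # placeholders for illustration.
    total_ram_mb       = 64   # RAM in the server under test (from above)
    os_footprint_mb    = 32   # assumed memory taken by the OS itself
    per_client_data_mb = 16   # assumed working set per NetBench client

    cacheable_mb = total_ram_mb - os_footprint_mb
    clients_in_cache = cacheable_mb // per_client_data_mb
    print("RAM left for the file cache: %d MB" % cacheable_mb)
    print("Clients whose data fits in cache: %d" % clients_in_cache)
    # A smaller OS footprint leaves more room in the cache, so a few
    # more clients fit before the single disk becomes the bottleneck.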
After reading the Mindcraft white paper referred to in this
thread, I'd keep a couple of things in mind for WebBench testing.
1. They concentrated solely on the WebBench static test. To
achieve the best results here, the web server needs to cache the
entire contents of the workload, which is roughly 6000 files
totaling about 60 MB. IIS 4.0/NT can cache this workload
automatically with 256MB of RAM or more. To the best of my
knowledge, Apache does not cache HTTP requests in and of itself;
it needs something like Squid configured in front of it to
perform caching (a rough sketch of that kind of setup follows
below).
I didn’t see anything in the Mindcraft white paper mentioning
that there was any caching configured for Apache.
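Purely as an illustration of what "something like Squid" could
look like, here is a minimal accelerator-style squid.conf sketch.
The hostnames, ports, and sizes are placeholders I made up, the
exact directives depend on the Squid version, and this is not a
tuned configuration (and certainly not what Mindcraft tested).

    # Sketch of Squid acting as an HTTP accelerator (cache) in front
    # of Apache.  All values below are placeholders.
    http_port 80                    # Squid answers client requests here
    httpd_accel_host localhost      # Apache listens behind Squid...
    httpd_accel_port 8080           # ...on this port
    cache_mem 64 MB                 # memory cache for hot objects
    cache_dir ufs /var/spool/squid 256 16 256   # on-disk cache store
    maximum_object_size 4096 KB     # cache objects up to 4 MB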
2. When ZD runs WebBench tests, we generally run the CGI tests
in addition to the static tests. The CGI tests are much more CPU
intensive than the static tests and can provide a more complete
picture of web server performance by showing how well the server
handles the execution of dynamic applications. Certainly this is
something being asked of web servers more and more in the real
world.
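To show what the dynamic tests exercise, here is a toy CGI script
(my own example, not one of WebBench's actual CGI executables):
every request runs the program and the response is built on the
fly instead of being served from a cached static file, which is
where the extra CPU work comes from.

    #!/usr/bin/env python
    # Toy CGI script: builds the page on every request rather than
    # serving a static file, illustrating the per-request work that
    # makes dynamic tests more CPU intensive.
    import time

    body = "<html><body>Generated at %s</body></html>" % time.ctime()
    print("Content-Type: text/html")
    print("")
    print(body)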
-Chris