SPECweb99 Benchmark clarification
Jul 05, 2000, 23:23
By Marty Pitts, Linux Today
There has been a lot of interest in the results of the SPECweb99 benchmark tests that were performed on two similar Dell PowerEdge 6400/700 servers.
At the end of the article I asked if the seemingly minor differences in the hardware could account for the large difference in the results. In addition to the talkbacks posted, I received several emails with new information about the tests as well as questions about the hardware involved.
One individual asked about the difference in the hard drive controllers. This person noted:
"The Linux machine was equipped with an Adaptec-7899 controller. This is a dual-channel, Ultra160/m controller. The Win2k machine was equipped with a Dell PERC2. This is a quad-channel, Ultra2 controller. The difference is that the Linux machine had a controller capable of 160MB/sec, while the Win2k machine had a controller capable of 80MB/sec. I do think that could have impacted the results."
As it turns out, I was able to speak with an informed source who has knowledge of the machines involved and the tests performed on them. When I relayed the concerns about the hard drive controllers, this person clarified it this way:
"Yes, but in what direction? The PERC2 had 128MB of memory and a CPU independent of the system's main processors. Anyway, the disk configuration is nearly irrelevant to the W2K results. Look at the result page - the fileset size is about 5GB, and (the machines) had 8GB of RAM. Everything was cached - the only disk activity was logging, which is pretty minimal. All the PERC2 did was get the data to memory quicker on the first access."
The individual who raised the hard drive controller question also asked about the network controllers:
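The source's caching argument is essentially arithmetic. A minimal sketch, using the approximate 5GB fileset and 8GB RAM figures from the quote above:

```python
# Back-of-the-envelope check of the caching argument quoted above:
# a ~5GB fileset against 8GB of RAM (approximate figures from the
# result page, as cited by the source).

GB = 1024 ** 3

fileset_bytes = 5 * GB   # approximate SPECweb99 fileset size
ram_bytes = 8 * GB       # RAM installed in both machines

# Once the fileset has been read once, every object fits in the OS
# page cache, so subsequent requests never touch the disk controller.
fits_in_ram = fileset_bytes < ram_bytes
headroom = (ram_bytes - fileset_bytes) / GB
print(f"Fileset fits in RAM: {fits_in_ram} ({headroom:.0f}GB to spare)")
# Fileset fits in RAM: True (3GB to spare)
```

With the whole fileset cached, the 160MB/sec vs. 80MB/sec controller difference only affects the very first pass over the data, which is the source's point.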
"The Linux machine was equipped with an AceNic 1000SX. This is a Gigabit fiber-optic network adapter. The Win2k machine was equipped with an AceNic PCI. This is an autosensing 10/100/1000Base-T adapter using UTP.
The difference here is obvious, and you did not state at what speed the Win2k box's network connection was running. That card is capable of running at 1Gbit/s, 100Mbit/s, or 10Mbit/s, autosensing. If plugged into the wrong port, or into a slower hub, or with substandard cable, it could have been running at as little as 1/100 the speed of the card in the Linux machine."
The person close to the tests responded:
"Actually, his assumption is wrong. The client and network switch setup was identical for both results. (The testers) were just trying to clarify that the fiber NIC was being used. An ACEnic PCI (PCI bus) can now be either fiber or copper - in the past there was only fiber, so 'ACEnic PCI' was enough."
One person wrote in with concerns that the Linux machine had an advantage because it was using an in-kernel HTTP cache, which would give the Linux box a definite advantage when serving static pages. While Windows does have a similar technology, doubts were raised about whether it was used for the tests.
The person close to the tests responded:
"Sun also has similar technology available in Solaris 7 and 8: SNCA (Sun Network Cache Accelerator). Linux also has similar technology in khttpd. SNCA, FRCA, SWC and NWSA were used on SPECweb96, which was 100% static content, but none have been used on SPECweb99, with its 70% static and 30% dynamic content. Certainly if they could be used, they would be - competition for top SPECweb numbers is intense.
TUX includes an in-kernel HTTP cache but is also a full-featured HTTP server itself. The others above are only caches. Ingo Molnar at Red Hat did the vast majority of the work on TUX and can answer specific TUX questions."
When asked about TUX 1.0 and the performance difference, Ingo Molnar of Red Hat responded:
> SNCA, FRCA, SWC and NWSA were used on SPECweb96, which was 100% static
> content, but none have been used on SPECweb99 with 70% static and 30%
> dynamic content. [...]
Indeed, and I believe the reason is fundamental. I'd like to refer to the following comment as background on TUX's design; it should explain some fundamental properties of TUX:
SNCA, FRCA, SWC and NWSA are standalone, static-only web-reply caches.
TUX is a very different thing: it's an 'HTTP stack' with an 'abstract object cache'.
So in our opinion TUX is a new and unique class of webserver; there is no prior art implementing this kind of 'HTTP stack' and 'abstract object cache' approach. It's, I believe, a completely new approach to webserving. Please read this comment too, which (I hope) further explains the in-kernel issue:
Also, you might want to take a look at the TUX SPECweb99 module source code to see how the TUX HTTP protocol stack works in practice:
I'd like to point out that I have no affiliation with SPEC, and we (Red Hat) picked the SPECweb99 suite because of its sophisticated and realistic workload and the independence SPEC guarantees. The SPECweb99 suite is designed to create a complex workload, and such a workload defeats in-kernel static web caches such as SWC. The workload is 'mixed up', i.e. the same web objects are requested in both static and dynamic requests, and it's forbidden to cache dynamic replies. CGI requests are part of the workload as well.
I have specifically designed TUX to be integrated into Apache. While the GPL (TUX is under the GPL) and the Apache Software License are incompatible, I do plan to allow licensing the TUX 'user-space glue code' under the ASL as well - further easing the integration of TUX capabilities into Apache 2.0 or 3.0.
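The 'mixed up' workload described above - the same objects requested both statically and dynamically, with dynamic replies forbidden from being cached - can be illustrated with a toy model. The names and numbers below are illustrative only, not SPEC code:

```python
# Toy model of why a static-only web cache loses value under a
# SPECweb99-style mix: 70% static and 30% dynamic requests over the
# same objects, with caching of dynamic replies forbidden.

def serve(requests):
    """Count cache hits for a cache that may only store static replies."""
    cache = {}          # object name -> cached static reply
    hits = 0
    for name, kind in requests:
        if kind == "dynamic":
            continue    # must be generated fresh; caching is forbidden
        if name in cache:
            hits += 1
        else:
            cache[name] = f"<static reply for {name}>"
    return hits

# 10 objects, each requested 7 times statically and 3 times dynamically.
requests = [(f"obj{i}", "static") for i in range(10) for _ in range(7)]
requests += [(f"obj{i}", "dynamic") for i in range(10) for _ in range(3)]

hits = serve(requests)
print(f"{hits} cache hits out of {len(requests)} requests")
# 60 cache hits out of 100 requests
```

Under a 100% static workload like SPECweb96, nearly every request after the first would be a hit, which is why the standalone caches listed above worked there but sit idle for SPECweb99's dynamic share.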
Please feel free to post further comments and questions on these tests below.