
The Case of the Quake Cheats

By Eric S. Raymond

The open-source community got a lump of coal in its Yule 1999
stocking from renowned hacker John Carmack, the genius behind id
Software and such games as Wolfenstein 3D, Doom, and the Quake
series. Carmack’s .plan file noted a problem that has swiftly
emerged since the Quake 1 source was released under the GPL: it
seems that some people have used their ability to modify the Quake
client as a mechanism for cheating.

This may at first sight seem a trivial issue — who cares if a
few testosterone-pumped teenagers cheat at a shoot-em-up game? But
in today’s internetworked world, countermeasures against Quake
cheating arguably provide an important laboratory model for cases
that are decidedly not trivial, such as electronic commerce,
securities trading, and banking.

The Quake model is made particularly relevant by its open-source
connection. Open source advocates (including me) have been making a
strong argument over the last two years that open-source software
such as Linux and Apache is fundamentally more secure than its
closed-source competitors. Cryptographers have long understood that
no encryption system can really be considered well-tested until it
has been energetically and repeatedly attacked by experts who have
full knowledge of the algorithms it uses. Open-source advocates
argue that there is nothing special about cryptography but its high
stakes — that, in general, open peer review is the only road to
systems which are not merely accidentally secure by obscurity, but
robustly secure by design.

Carmack, therefore, caused a bit of a flutter on Slashdot when he went on to
suggest that only a pair of closed-source encryption programs could
solve the Quake-cheating problem. The problem, as he correctly
pointed out, is that asking the open-source client to verify its
own correctness won’t work; a sufficiently clever cracker could
always write a client that would simulate the right kinds of
responses and then cheat.

A debate ensued, with several people pointing out that trusting
the client half of a client-server pair is bad security policy
whether the client code is open or closed. Fundamentally, there’s
no way for the server to be sure it isn’t talking to a clever
simulation of `correct’ behavior. Thus, opening the source to Quake
1 didn’t create security problems, it merely exposed one that was
already present (and exploitable, and for all anyone knew already
secretly exploited) in the design of the game.

Carmack weighed in to make clear that the Quake-cheating problem
is subtler than many of the debaters were assuming. It’s not
possible for a cheating client to give a player infinite ammunition
or life points; the server does not in fact trust the client about
these things, and manages them itself. This is correct design;
whether or not it’s open-source, a bank should not depend on a
customer’s client software to tell the bank what the customer’s
balance is!
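
A minimal sketch of that server-authoritative pattern might look
like the Python below. The names and numbers are invented for
illustration; the real Quake server is written in C and is far more
involved.

    # Hypothetical sketch of server-authoritative state: the server owns the
    # ammunition count and validates every "fire" command, so a modified
    # client that claims infinite ammunition gains nothing.

    class PlayerState:
        def __init__(self):
            self.ammo = 50
            self.health = 100

    def handle_fire_command(state, claimed_ammo=None):
        # The client may send a claimed_ammo field, but the server never
        # consults it; only its own record of state.ammo matters.
        if state.ammo <= 0:
            return "click"        # out of ammunition, the command is refused
        state.ammo -= 1
        return "fired"

    player = PlayerState()
    print(handle_fire_command(player, claimed_ammo=9999))   # "fired"; server-side ammo is now 49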

Carmack observes that “the [cheating] clients/proxies focus on
two main areas — giving the player more information than they
should have, and performing actions more skillfully.”

The serious “more information” cheats depend on a performance
hack. In order to hold down the number of updates of the Quake
world it has to pass to the client, the server gives the client
information about the location of objects and opponents that the
player can’t yet see, but might be able to see before the next
update. The server then counts on the client not to make those
things visible until they “should” be (e.g., until the user gets to
a particular location in the maze the client is simulating). A
cheating client can reveal an opponent seconds before the
player would turn the corner and expose himself to fire.
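
To make the bargain concrete, here is a rough sketch in Python of
what the server is trusting the client to do, and how a modified
client breaks that trust. The names, the radius, and the visibility
test are hypothetical stand-ins, not the real Quake protocol.

    import math

    # Hypothetical sketch: to lower its update rate, the server sends every
    # entity within an "anticipation radius", then trusts the client to cull
    # the ones the player cannot see yet. A cheating client skips the cull.

    def entities_to_send(player_pos, entities, radius=1000.0):
        return [e for e in entities
                if math.dist(player_pos, e["pos"]) <= radius]

    def honest_client_render(entities, line_of_sight):
        # line_of_sight() stands in for the visibility test the client is
        # trusted to perform before drawing an opponent.
        return [e for e in entities if line_of_sight(e)]

    def cheating_client_render(entities, line_of_sight):
        return list(entities)    # draw everything, including opponents behind walls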

The “more skillfully” cheats substitute the computer’s speed and
accuracy for tasks that the server and other players expect the
player’s hands and brain to be performing. Carmack talks about “aim
bots” which automatically lock the player’s gun onto visible
opponents and fire it with inhuman accuracy.
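
The mechanics are not mysterious. As a hypothetical sketch, and not
code from any actual cheat, an aim bot simply performs the
trigonometry that a player’s hands would otherwise approximate:

    import math

    # Hypothetical sketch of what an aim bot substitutes for human skill:
    # given positions the client already knows, compute the exact yaw and
    # pitch to an opponent and fire on the same frame, with zero reaction time.

    def aim_at(player_pos, target_pos):
        dx, dy, dz = (t - p for t, p in zip(target_pos, player_pos))
        yaw = math.degrees(math.atan2(dy, dx))
        pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
        return yaw, pitch

    print(aim_at((0, 0, 0), (100, 100, 20)))   # (45.0, roughly 8.05 degrees)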

And indeed it’s hard to see how either of these sorts of cheats
can be prevented given an open-source client and no way independent
of the client itself to check that the client is honest. Thus
Carmack’s suggestion of a closed-source Quake-launcher program that
would take a checksum of the client binary, communicate with the
server to make sure the binary is on an approved list, and then
handle communication with the server over a
cryptographically-secured channel.
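
Carmack’s note does not spell out a protocol, but in outline such a
launcher might look like the sketch below. The filename, port,
hostname, and verification exchange are all invented for
illustration.

    import hashlib, socket, ssl

    # Hypothetical sketch of Carmack's proposed launcher: checksum the client
    # binary, ask the server whether that checksum is on the approved list,
    # then carry traffic over a TLS-protected channel. The filename, port,
    # hostname, and "VERIFY"/"APPROVED" exchange are all invented here.

    def binary_checksum(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def open_secure_channel(host, port):
        context = ssl.create_default_context()
        sock = socket.create_connection((host, port))
        return context.wrap_socket(sock, server_hostname=host)

    checksum = binary_checksum("quake_client")
    channel = open_secure_channel("game.example.com", 27960)
    channel.sendall(("VERIFY " + checksum + "\n").encode())
    if channel.recv(1024).strip() != b"APPROVED":
        raise SystemExit("client binary is not on the server's approved list")
    # ...the launcher would then relay game traffic over this channel...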

Carmack’s argument seems watertight. What’s wrong with this
picture? Are we really looking at a demonstration that closed
source is necessary for security? And if not, what can we learn
about securing our systems from the Quake case?

I think one major lesson is simple. It’s this: if you want a
really secure system, you can’t trade away security to get
performance. Quake makes this trade by sending anticipatory
information for the client to cache in order to lower its update
rate. Carmack read this essay in draft and commented “With a
sub-100 msec ping and extremely steady latency, it would be
possible to force a synchronous update with no extra information at
all, but in the world of 200-400 msec latency [and] low bandwidth
modems, it just plain wouldn’t work.” So it may have been a
necessary choice under the constraints for which Quake was
designed, but it violates the first rule of good security design:
minimum disclosure.
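
For contrast, a minimum-disclosure server would run the visibility
test itself and disclose only what the player can see at that
instant, paying for it in update traffic and latency sensitivity.
Roughly, and purely as a hypothetical sketch:

    # Hypothetical sketch of the minimum-disclosure alternative: the server
    # runs the visibility test itself and discloses only what the player can
    # see right now, so there is nothing cached on the client to leak.

    def minimum_disclosure_update(player, entities, server_can_see):
        return [e for e in entities if server_can_see(player, e)]

    # The cost: an opponent who steps into view after this packet leaves the
    # server cannot be drawn until the next update arrives, so over a
    # 200-400 msec modem link the world visibly lags, which is exactly the
    # trade-off Carmack describes.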

When you do that, you should expect to get cracked, whether your
client is open or closed — and, indeed, Carmack himself points out
that the see-around-corners cheat can be implemented by a scanner
proxy sitting between a closed client and the server and filtering
communications from server to client.

Closing the source of the client may obscure the protocol
between client and server, but that won’t stop a clever cracker
with a packet sniffer and too much time on his hands. Carmack
confirms that even without the packet sniffer or access to source
there are a variety of ways to flush out anticipatory information,
ranging from tweaking the gamma and brightness on your screen to
banish shadows to hacking your graphics card’s device drivers to do
transforms of the world model (such as making walls
transparent).

We’re back in familiar territory here; the history of computer
security is littered with the metaphorical (and in some cases maybe
literal) corpses of people who thought security through obscurity
was sufficient. Crackers love that kind of naivete and
prey on it ruthlessly.

The aim-bot cheat is trickier to prevent. The difference between
human and aim-bot actions is measured only in milliseconds of
timing. Changing the protocol to stop it from leaking information
won’t banish aim-bots; it would take the server doing statistical
analysis of player action timings to even detect them, and (as
Carmack points out) “that is an arms race that will end with
skilled human players eventually getting identified as subtle
bots.”
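
Such server-side analysis might look like the sketch below; the
thresholds and the measurement are invented for illustration.

    import statistics

    # Hypothetical sketch of statistical aim-bot detection: record the delay
    # between an opponent becoming visible and the shot that hits him, then
    # flag players whose delays are both faster and steadier than humans
    # manage. The thresholds below are invented for illustration.

    def looks_like_aim_bot(reaction_times_ms):
        if len(reaction_times_ms) < 30:
            return False                       # not enough evidence yet
        mean = statistics.mean(reaction_times_ms)
        spread = statistics.pstdev(reaction_times_ms)
        return mean < 120 and spread < 15      # inhumanly fast and inhumanly regular

    # Tighten those thresholds to catch subtler bots and, as Carmack warns,
    # skilled human players eventually get flagged too.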

Fortunately, the aim-bot cheat is also much less interesting
from a general security point of view. It’s hard to imagine
anything but a twitch game in which the client user can cheat
effectively by altering the millisecond-level timing of command
packets. So the real lesson of both cheats may be that a
closed-source program like Carmack’s hypothetical secured program
launcher is indeed a good idea for security — but only if you’re a
hyperadrenalized space marine on a shooting spree.

(Any computer game at which computers are better than most
humans has analogous cheats, some of which aren’t even detectable
in principle. Carmack observes “correspondence chess has been
subverted from its original intent by players using computers.”
This isn’t something security design can fix.)

If Quake had been designed to be open-source from the beginning,
the performance hack that makes see-around-corners possible could
never have been considered — and either the design wouldn’t have
depended on millisecond packet timing at all, or aim-bot
recognition would have been built in to the server from the
beginning. This teaches our most important lesson — that open
source is the key to security because it changes the behavior of
developers.

Open source keeps designers honest. By depriving them of the
crutch of obscurity, it forces them towards using methods that are
provably secure not only against known attacks but against all
possible attacks by an intruder with full knowledge of the system
and its source code. This is real security, the kind
cryptographers and other professional paranoids respect.

It’s the kind of security the Linux kernel and the Apache
webserver have, and the kind people victimized by the Melissa and
Chernobyl viruses and Back Orifice and the latest
Microsoft-crack-of-the-week don’t have. If you’re betting your
personal privacy or your business’s critical functions on the
integrity of software, it’s the kind of security you want, too.

To recap, the real lessons of the Quake cheats are (a) never
trust a client program to be honest, (b) you can’t have real
security if you trade it away to get performance, (c) real security
comes not from obscurity but from minimum disclosure, and most
importantly (d) only open source can force designers to use
provably secure methods.

So, far from being a telling strike against open source, the
case of the Quake cheats actually highlights the kinds of biases
and subtle design errors that creep into software when it’s
designed for closed-source distribution and performance at the
expense of security. These may be something we can live with in a
shoot-em-up, but they’re not tolerable in the running gears of the
information economy. Avoiding them is, in fact, a good reason for
software consumers to demand open source for anything more
mission-critical than a Quake game.


