
Reinventing the Wheel

By Pim van Riezen
Rotterdam, October 1999

Software design is a peculiar business. Companies developing
software show the strangest product evolution cycle a person can
come up with: the product never fundamentally changes, it only
expands. Very rarely do any of the basic assumptions about what a
program should do and how it should treat the data it processes
change after the first release.

When the basic functionality of a program gets extended beyond
the scope of the original design, the added functionality either
becomes a system of its own, or a major amount of mindbending and
gluing is performed to make it work on top of the existing
philosophy. Software evolution is a bottom-up process most of the
time.

The basic problem stems from a misconception that evolved mostly
with the rapid growth of the industry and consumer demands. It is
the dogma preaching that reinventing the wheel is a sin and that
reuse of existing code should be maximized. This idea seems to be
on a par with the way knowledge and engineering evolved during our
Age of Technology. But is it really?

If we take a look at the history of our understanding of the
physical world, it becomes clear that science has always been about
getting to the fundamentals of a reality that is already there. The
growth of our understanding is fed from the bottom by shifts in the
definitions of the elementary processes that make the world we
register as reality theoretically possible.

The scientific evolution (a process kicked off by, and sometimes
intermingled with, philosophy) stands in contrast with the process
of software development, where the requirements keep growing. By
extending the requirements, we change the contextual reality of the
program’s “world”, until the abstracted fundamental elements are no
longer adequate to describe it. That is the point where developers
should say: “Our basic assumption sucks, let’s reimplement.”

The fear of doing this leads to legacy bloat: a growth in the
amount of work needed (both in programming and in executing) to
implement the new “world” inside the existing philosophy.

If Microsoft, when computers commonly started getting more than
640 K of memory, had taken a good look at the assumptions MS-DOS
made about RAM and had rightfully concluded: “This sucks, let’s
redesign”, the long-term usability of that OS would have grown
tremendously. Instead they gave us EMM386. When they implemented
the desktop paradigm, with icons carrying verbose descriptions of
the programs or actions they represent, Microsoft should have taken
a look at the limitations of DOS and the FAT filesystem and
reimplemented them to accommodate the new requirements. Instead
they turned icons into shortcuts pointing to 8.3 filenames.

Microsoft is not alone in being caught in this cycle. It’s an
industry-wide illness which even becomes apparent in hardware. (Or
do you really think ISA was such a good idea to begin with?) And in
software. (Do you think Netscape Communicator couldn’t possibly do
the things it does in less than a 13 MB executable?) Microsoft has
been around for quite a while, though, and the assumptions their
world started with were notably limited. The industry pushes
towards commoditization, but lacks the patience to get the
requirements straight before things are built and declared final.

Free software (in both the beer sense and the speech sense) has
an advantage over commercial software in being better able to keep
this effect from becoming perpetual. Developers always have the
choice to lay aside the ever-growing feature demand from the user
base and to stand still for a moment to reflect on the usability of
the underlying philosophy against the direction the software is
growing in. Commercial software can only refer to the competition.
If the competition adds features and disregards bugs and bloat, the
commercial developer is economically forced to follow the trend,
for fear of losing existing market share.

Taking Linux as an example, we can see a lot of distinctive
kernel features that have been reimplemented over time. New ways of
looking at the desired system functionality have made it necessary
to rethink the underlying kernel mechanisms. Linux also has the
advantage of being built on top of the UNIX philosophy, which turns
out to adapt better to modern thinking than DOS ever did (by having
fewer hardcoded assumptions and by being more generic in the way it
treats data).

If we step away from the documented merits of the Open Source
development model with regard to developer co-operation, we can see
a much more fundamental advantage: it creates a buffer between the
software designers and the end users. This implies a freedom not
only for the user, but more importantly for the developer: the
freedom to make correct technical decisions at any time, thus
prioritizing technical excellence over feature demand; the freedom
to reinvent the wheel because the old one wasn’t round enough, and
to learn a whole lot about wheels in general while doing it.

The UNIX philosophy has brought us a long way, but we should
never hang on to it so tightly that we end up repeating the same
mistakes yet again. Just because an element of the functionality we are
looking for already exists, a software developer should not
automatically assume that the existing code is philosophically
compatible with what’s happening on the program’s higher and lower
levels. More to the point: the design of a program should not by
definition be a function of the available existing tools it could
integrate.

This leads us to the conclusion that in the case of, for
example, user interface toolkits, there is no such thing as the
Right Toolkit We Should Standardize On. KDE and GNOME, for
example, are fundamentally different, even though they lead to
visually similar end results.

When starting to write Post Office, I took the route of using
neither. Not because they were particularly bad toolkits, but
because they would have forced me to work myself into a certain
design philosophy which might limit the way I wanted to do things.
I made the decision, odd as it may perhaps seem to some people, to
use FLTK. This toolkit tries to predefine as little as possible,
which means I gained a lot of freedom to approach things according
to my own world view.

Another implication of this train of thought is that the
GNU/Linux community should stop worrying so much about user
acceptance. The system and the community arrived where they are
now, not by pursuing every feature demand, but by trying to build
the Right system, just for the sake of building it.

Yes, I’m impatiently waiting for a lot of things in Linux that I
could use yesterday. But I’d rather wait another two months than
end up being forced to use some ad hoc solution which, on the
surface, seems to meet my needs. Thankfully, so far, Linus seems to
be following the same model.
