By Brian Proffitt
Managing Editor
A cautionary tale:
Somewhere in the open source application known as OpenSSL, a
line of code was found that exposed a small but exploitable
vulnerability. Specifically, the
broken code was in the pseudo-random number generator (PRNG)
implementation for the OpenSSL FIPS Object Module v1.1.1.
The bug was found by Geoff Lowe of the Secure Computing
Corporation, who notified the Open Source Software Institute (OSSI)
of the problem on November 7. The OpenSSL team members looked at
the report, determined that there was indeed a problem, and created
a single-line fix for the bug in, literally, 15 minutes.
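The offending line itself isn't reproduced here, but as a purely hypothetical C sketch of how a single line can quietly cripple a PRNG, consider seeding with a fixed constant instead of a varying source: every "random" sequence comes out identical and predictable, and the fix really is one line.

    /* Hypothetical illustration only -- not the actual OpenSSL code.
     * One bad line of seeding is enough to make a PRNG predictable. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static void prng_init(void)
    {
        srand(0x5EED);              /* BUG: fixed seed; every run is identical */
        /* srand((unsigned)time(NULL));  the one-line fix: seed from a varying
           source (real cryptographic code needs a true entropy source)       */
    }

    int main(void)
    {
        prng_init();
        for (int i = 0; i < 4; i++)
            printf("%d\n", rand()); /* the same "random" numbers, every run */
        return 0;
    }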
At this point, the story gets more interesting. As in the
Chinese-curse sense of interesting.
Normally, with open source software, after a bug is found and a
fix is created, the patch is packaged up and readied for
downloading. At the same time, an advisory is prepared and people
are notified in whatever priority order that particular open source
project deems appropriate. After the announcement goes out, the
patch is downloaded and installed, and the project becomes that
much more secure. Contentment ensues, and all is right in the
world.
But even though OpenSSL is open source, that is not what
happened in this instance. What did happen was a crisis created by
the collision of open source and proprietary ideas about security
management. A crisis caused by one big complicating factor: this
bug was located in a FIPS-compliant part of the OpenSSL
application. Because of where the bug was, an overarching set of
procedures had to take precedence. Procedures that could take four
to six months to complete. Not days. Not weeks.
Months.
According to Steve Marquess of OSSI, who related this story to
me yesterday, the fact that this bug was located inside what is
referred to as a FIPS-defined algorithm boundary means that any
changes to that module must undergo the bureaucratic approval
procedure as laid out by the Federal Information Processing
Standard. Parts of OpenSSL, which forms the basis of a lot of
cryptographic technology, are compliant with FIPS 140-2, the most
recent version of the FIPS cryptography standard.
Under the guidelines, a bug and any solutions must be tested by
an agreed-upon neutral test lab, which then submits the results to
a committee known as the Cryptographic Module Validation Program
(CMVP). The CMVP is actually just six people: three U.S. team
members from the National Institute of Standards and Technology
Computer Security Division and three from Canada’s Communications
Security Establishment. These six folks all come from hardware
engineering backgrounds, a holdover from cryptography’s
all-hardware beginnings, and they are responsible for granting
FIPS 140 certification for applications that use cryptography.
Which applications? In the U.S., all cryptography apps in the
government and military that handle unclassified data must be
FIPS-compliant. While Marquess wasn’t 100% sure, it’s a safe bet
that a similar stricture holds in Canada as well.
That’s a lot of applications.
This is not the first time a bug has been found in OpenSSL. In
those earlier cases, the bug was found, fixed, and announced in the
traditional open source way. What complicated this situation is
the location of the bug inside the algorithm boundary and the
presence of another bug in the same module. This bug,
which Marquess describes as “cosmetic,” has been known for quite
some time, but it does nothing to make the PRNG vulnerable to any
exploit. But while it’s cosmetic, the bug does put the module in
violation of FIPS 140.
For those of you keeping score at home, the November 7 bug
(RealBug) was a small but real security vulnerability inside the
algorithm boundary. The earlier cosmetic bug (CosmoBug) had no
security implications, but did break FIPS rules because it was also
inside the algorithm boundary.
When the fix for the RealBug was sent to the independent test
lab for QA, the test lab did what it had to under the CMVP
guidelines: it ruled that both RealBug and CosmoBug had to be
fixed, which meant that the entire module would be officially
non-compliant until it underwent re-approval by the CMVP.
The CMVP, though, works under rules and guidelines that are far
more conducive to the proprietary development world than the open
source one. In proprietary land, long bugfix approval times were
not harmful, because any proprietary vendor using the buggy
software would never share that information with customers or
competitors. The odds of the vulnerability being found and
exploited, their thinking went, were not that high, so best keep it
quiet. In fact, the long lead time was a bonus: it gave each vendor
time to figure out how they would distribute the fix. As a patch? A
service pack? Or wait until the next scheduled release?
In open source land, once a bug is found, the clock is ticking.
The code is open, which means if one developer finds it, anyone can
find it. That’s what gives developers the incentive to issue
patches on a very timely basis. Marquess knew that once Lowe found
the bug, it would be only a matter of time before someone else
found it: another vendor or, worse, a malicious cracker.
If another vendor found it at this point, it would not matter,
since the OSSI was in the loop. Vendors who use OpenSSL in their
products, though, have a history of not releasing information,
according to Marquess and his boss, John Weathersby. Weathersby,
the Executive Director of OSSI, told me that earlier this year
another vendor had found a bug in a FIPS-compliant open source
application and had sat on it for months, hoping no one would
notice while the bug went through the CMVP approval process.
Because of this, “we were lucky Geoff Lowe contacted us first,”
Marquess shared with me. If Lowe had contacted some other vendor
first, it’s possible nothing about this bug would be publicly
known.
But the OpenSSL team was still stuck on how to proceed. If they
did nothing and went through the CMVP procedures, it could be
months before an official patch was released. If they did it the
open source way, suddenly dozens, if not hundreds, of vendors would
be left in the unsavory position of admitting that all of their
products based on OpenSSL were no longer FIPS-compliant until they
could release a CMVP-approved patch.
The team was stymied as discussions went back and forth. Be true
to open source ideals, but possibly leave a lot of vendors in the
lurch? Or follow the CMVP procedures, and pray no black hat found
the hole first?
In the end, Marquess and his team decided on a compromise
approach: afford vendors the opportunity to close the hole. To do
this, on November 29 they came up with two potential fixes that
were not true patches, in the sense of packaged download-and-go
software.
One fix was direct: repair the busted line of PRNG code for the
RealBug inside the algorithm boundary, delivered as a cut-and-paste
text file. The other fix was a workaround that didn’t directly
touch the PRNG code (meaning it stayed outside the algorithm
boundary), also delivered as a text file. This latter solution was
conceived because the team hoped that such a workaround would still
repair the problem and yet circumvent the need to go through the
CMVP process. Unfortunately, while it was a good try, the CMVP
still revoked the module’s FIPS status.
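To make the second approach concrete, here is a minimal sketch of
what an outside-the-boundary workaround can look like, assuming it
works by having the application mix its own seed material into the
PRNG through OpenSSL’s documented public RAND API rather than by
editing the module’s source. The entropy gathering is deliberately
simplified; treat this as an illustration, not the actual OSSI
workaround.

    /* Sketch of an outside-the-boundary workaround: the application
     * seeds the PRNG itself via the public RAND API, so the FIPS
     * module's source (the algorithm boundary) is never modified.
     * Illustrative only; not the actual OSSI workaround. */
    #include <stdio.h>
    #include <openssl/rand.h>

    int main(void)
    {
        unsigned char seed[32];
        FILE *fp = fopen("/dev/urandom", "rb"); /* simplified entropy source */

        if (fp == NULL || fread(seed, 1, sizeof(seed), fp) != sizeof(seed)) {
            fprintf(stderr, "could not gather seed material\n");
            return 1;
        }
        fclose(fp);

        /* RAND_add() mixes caller-supplied entropy into the PRNG state
         * from application code, outside the algorithm boundary. */
        RAND_add(seed, sizeof(seed), (double)sizeof(seed));

        unsigned char buf[16];
        if (RAND_bytes(buf, sizeof(buf)) != 1) {
            fprintf(stderr, "RAND_bytes failed\n");
            return 1;
        }
        return 0;
    }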
There is some good news going forward. The test lab had already
completed the paperwork to submit the bug fixes to the CMVP, and
the CMVP has indicated that it will put this issue on the
“fast track,” which Marquess interprets as a week or so of
work.
It should be noted that Marquess bears absolutely no ill will
toward the CMVP. He recognizes they have a tough job to do under a
set of rules that do not mesh well with open source philosophies
and practices.
“They’re trying to do the right thing,” he stated.
“In the proprietary world this was never an issue,” Marquess
added. “Things like this were covered up until a fix was made
ready. Fortunately, the open source world doesn’t work that
way.”
This collision of two worlds is the heart of this story, and a
lesson for all of us to think about. In our daily battles on the
grand scale with the SCOs and the Microsofts, it is easy to forget
that there is a whole world of existing business and government
practices with which software developers have had to comply for
years. Being there first, of course, does not make these practices
any more right or wrong than the ones that came later.
But they are still there to be reckoned with, and this situation
with OpenSSL serves to further illuminate the disparities between
the two worlds.
It is easy to say in a high-handed manner that the “open way is
the best way,” because we know that that’s fundamentally true. But
there’s a big difference between theory and application. Open
source projects have to deal with real users and vendors with real
policies that drive their decisions, policies that cannot blithely
be ignored. Marquess and his team wanted to do the right thing, but
not at the expense of so many vendors’ bottom lines and
reputations.
As it stands, the solution the OpenSSL team and OSSI did come up
with was a good compromise: vendors and users who want to protect
their systems can copy the new code into their source, recompile,
and have safer systems. No, these fixes are not FIPS-compliant, but
in the end, the team felt that security in practice was better than
security in name.
On the larger scale, this serves as a challenge for many of us
in the open community: finding ways to bring old practices up to
speed with free and open source software ideals, without
compromising our own values. It won’t be easy, because if we are
too stringent, we could create situations where vendors and
developers might feel it’s too much trouble to deal with open
source code. Nor can we be too lax for the sake of procedures
that are out of step with how software development should work.