Security Portal: Open Source - Why it's Good for Security (Apr 18, 2000, 07:52)
(Other stories by Jay Beale, Kurt Seifried)
"Most attackers don't need the source code."
"Hiding your program's (or operating system's) source code doesn't buy you the security that you'd expect. Hackers have been reverse engineering and performing "black box" analysis for years. Just because I can't see the original C source code for a program doesn't mean that I can't run it in a debugger or a code-execution trace environment and watch its operation. The point to keep in mind is this: for a computer's processor to execute a program, it has to be able to read each instruction. Each instruction is a bit of machine code, which is easily transformed into assembly code, and some tools can even attempt to decompile that machine/assembly code into more readable C code. Since many people can read C and assembly, especially the hackers who develop exploits against programs, closing the source doesn't stop attackers from finding vulnerabilities in your program. A recent example appeared in a Bugtraq post last December, in which BindView's Todd Sabin demonstrated a vulnerability in Windows NT's SYSKEY that was discovered without source code, aided by the use of a disassembler."
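The quoted point, that anything the processor can execute can also be read back, is easy to demonstrate. A minimal sketch using Python's standard `dis` module, which plays the same role for Python bytecode that a disassembler plays for native machine code (`check_password` and its secret are hypothetical examples, not from the article):

```python
import dis

# Pretend this function is "closed source": imagine we were handed only
# the compiled code object, never the source text below.
def check_password(guess):
    return guess == "s3cret"

# The disassembler turns the compiled bytecode back into readable
# instructions; the hard-coded secret shows up as a plain constant.
instructions = list(dis.get_instructions(check_password))
constants = [i.argval for i in instructions if i.opname == "LOAD_CONST"]
print(constants)  # the embedded secret is visible without any source code
```

The same principle is why hiding a key or password inside a binary offers little protection: it must be present in the instruction stream for the program to use it.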
"Even simpler "black box" analysis is alive and well. Many vulnerable programs can be cracked, or "exploited," without the attacker needing to understand the code well at all. The Spring 2000 edition of 2600 magazine contains an article on "Finding and exploiting bugs" that focuses on attacking closed-source programs using techniques like boundary testing, in which you feed a program unexpected input to find bugs associated with boundary conditions. Most software security problems stem from a few basic programming flaws, such as buffer overflows, which this sort of analysis can detect."
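Boundary testing of the kind described above can be sketched in a few lines: feed a target progressively longer inputs and watch for a failure at some size threshold. The fixed-size copy routine below is a hypothetical stand-in for a vulnerable closed-source program, with a Python `IndexError` standing in for a native crash:

```python
# Hypothetical "closed-source" routine: copies input into a fixed-size
# buffer with no length check -- the classic buffer-overflow pattern.
def vulnerable_copy(data: bytes) -> bytearray:
    buf = bytearray(16)          # fixed 16-byte buffer
    for i, b in enumerate(data):
        buf[i] = b               # no bounds check: long input overruns buf
    return buf

def boundary_test(target, max_len=64):
    """Feed inputs of every length up to max_len; record which lengths fail."""
    crashes = []
    for length in range(max_len + 1):
        try:
            target(b"A" * length)
        except Exception:
            # In real black-box testing, a crash at a specific input
            # length flags a likely overflow worth investigating.
            crashes.append(length)
    return crashes

crashes = boundary_test(vulnerable_copy)
print(crashes[0] if crashes else "no crashes")  # first failing input length
```

The first failure appears one byte past the internal buffer size, which is exactly the signal an attacker uses to locate a boundary-condition bug without ever seeing the source.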