Where Security "Studies" Go Wrong
Sep 13, 2005, 23:30
By Brandioch Conner
There are lots of articles comparing the "security" of Windows and Linux. Almost all of them are worthless. Without access to the source code, you cannot tell which system has more "flaws" or "holes." Yet that is what most of them try to measure. They usually resort to simply counting the patches released for each.
If security were based upon the number of patches released, then Win95 would be one of the most secure OSes around. If you keep patching holes, eventually there won't be any more holes to patch. Doesn't that sound logical? Yet every year we see more patches being released.
The problem is that the people writing those articles don't know anything about "security." Security is not something you can retrofit to a system. Security is the system. Security is a process.
So, it's time to look at security from the attacker's point of view.
Step #1: What is out there?
I don't care about what Microsoft's next release will have. I don't care about what Red Hat's next release will have. What matters is what is out there today. And don't give me any of that "if there was more out there then it would be cracked more" garbage. I'm talking systems on the Internet. If they're on the Internet, they can be scanned and attacked and cracked... automatically, 24/7.
Step #2: Avenues of attack.
A system can only be attacked through openings. If you're not running any services/apps on open ports, then you cannot be attacked remotely. It's as simple as that. With no openings, the remote attacker is stopped.
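The point about openings can be demonstrated directly: a TCP port with a listener behind it accepts connections, and the same port with the listener removed refuses them. A minimal sketch in Python, using only the loopback interface and a port the kernel picks for us:

```python
import socket

def port_is_open(host, port, timeout=0.5):
    """Return True if a TCP connection to (host, port) succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# A listening service is an avenue of attack...
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))        # port 0: kernel picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(port_is_open("127.0.0.1", open_port))   # service listening: reachable

# ...and with the service gone, that avenue no longer exists.
listener.close()
print(port_is_open("127.0.0.1", open_port))   # nothing listening: refused
```

An attacker's scanner is doing essentially this, against every port, on every address it can reach.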
Step #3: Run only those services/apps that absolutely must run.
If you aren't running the service/app that is being attacked, you've defeated that attack. So all you have to do is remove any services you don't need. Easy, huh?
But because this is about attacking systems, we'll presume that you're running something on an open port. If you can use a firewall to restrict access to those ports, you may have defeated the attack.
If you cannot firewall the ports sufficiently, the attacker will be able to attempt an attack on the service.
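At its core, a firewall restricting access to a port is just an ordered set of allow rules with a default deny. Here is a conceptual sketch of that logic; the networks, ports, and addresses are hypothetical examples, not a real rule set:

```python
from ipaddress import ip_address, ip_network

# A firewall rule set reduced to its essence: which source networks
# may reach which destination ports. All values below are made up.
ALLOWED = {
    22:  [ip_network("10.0.0.0/8")],     # ssh: internal hosts only
    443: [ip_network("0.0.0.0/0")],      # https: open to everyone
}

def firewall_permits(src_ip, dst_port):
    """Return True if a packet from src_ip to dst_port passes the rules."""
    for net in ALLOWED.get(dst_port, []):
        if ip_address(src_ip) in net:
            return True
    return False                         # default deny

print(firewall_permits("10.1.2.3", 22))      # internal ssh: allowed
print(firewall_permits("203.0.113.9", 22))   # external ssh: blocked
print(firewall_permits("203.0.113.9", 25))   # no rule for port 25: blocked
```

The important property is the last line of the function: anything not explicitly allowed is dropped, which is exactly the posture Steps 2 and 3 argue for.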
Step #4: Patching.
So, the attacker has found a service that you're running and has gained access to it through your firewall. Are you current with your patches? This isn't so simple anymore, because crackers can have attacks coded and deployed a few days or even a few hours after the public release of a patch (not to mention vulnerabilities that haven't been made public). But you still have to run the patches through your testing process to make sure a patch doesn't break something else. If you are current, you've probably defeated the attack; if not, you've just been cracked. This is why Steps 2 and 3 are so important. Patching requires too much human intervention. Sure, you can set your systems to auto-patch, but that means placing a lot of trust in the OS vendor, and anyone can make a mistake. So, if you got the patch installed before the attack, you win; if not... Step 5.
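The question "am I current?" is, mechanically, just a version comparison between what's installed and what the vendor has released. A sketch of that check; the package names and version numbers are invented for illustration:

```python
# Compare installed service versions against the latest released ones.
# Everything below is a made-up example, not real package data.

def parse_version(v):
    """Turn '2.0.52' into (2, 0, 52) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

installed = {"httpd": "2.0.52", "openssh": "3.9.1"}
latest    = {"httpd": "2.0.54", "openssh": "3.9.1"}

needs_patch = [name for name, v in installed.items()
               if parse_version(v) < parse_version(latest[name])]
print(needs_patch)
```

The hard part, as the step above says, isn't this comparison; it's the race between your testing process and the attacker's exploit-deployment window.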
Step #5: The security model and implementation.
So, the cracker has managed to find some means of accessing your computer. Your only hope at that point is the security model of your OS and its implementation.
A good security model can have code errors ("holes," "bugs," whatever) but when those are patched, the model is secure.
Note that this model does not apply to trojans. Trojans can be made more difficult to pull off, but since they rely on a person making a mistake, they will be around for a long time (even if they don't spread very far).
A bad security model can be patched over and over and over and still be vulnerable to the same type of exploits. A good example of this is the number of virus signature updates released over the years by the anti-virus companies. Yet the underlying system is still vulnerable to the next virus that will be released. A virus infection is a failure of the security model.
This is the point at which most of the "studies" comparing the "security" of various systems start (Step 5). This is also the most difficult item to compare because most researchers cannot get access to all the source code of the systems being compared. Which is why most of those "studies" simply count the patches mentioned in Step 4 and decide that means something at Step 5. But they never say whether the patches they're counting are for a flaw in the implementation or the security model itself.
Now, looking back over those steps, you can see where those "studies" go wrong. The first step is removing services. If a service is not there, there is no chance it can be exploited, and all the other steps do not apply. So those "studies" should be looking at how many services are listening on each system. Then they should compare how easy it is to remove those services. Then, how the OS can protect the services that must keep running (a chroot jail in Linux, for example) and how easy it is to set up that protection. Simply counting patches is meaningless.
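To make the contrast concrete, here is the proposed metric next to the one the "studies" use. The service lists and patch counts below are invented placeholders, not measurements of any real OS; the point is only that the two metrics can rank the same systems in opposite orders:

```python
# Two ways to rank systems. All data below is made up for illustration.
default_services = {
    "system_a": {"ssh"},
    "system_b": {"file_sharing", "rpc", "messenger", "upnp", "web_admin"},
}
patch_counts = {"system_a": 40, "system_b": 25}

# Counting patches makes system_b look "better"...
by_patches = min(patch_counts, key=patch_counts.get)

# ...but counting avenues of attack says the opposite.
by_exposure = min(default_services, key=lambda s: len(default_services[s]))

print(by_patches)
print(by_exposure)
```

A system that ships with one listening service and forty patches gives an attacker far less to work with than one that ships with five listening services and twenty-five patches.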