Andrew Myers

Security, programming languages, and computer systems

Why security mechanisms matter

I hear periodically that computer security is hopeless because there is always a way for the adversary to get around whatever security mechanisms are in place. This view misunderstands the point of security mechanisms. It’s true that there is no such thing as absolute security: an adversary with unbounded power and resources can defeat all computer security—and physical security as well! But the purpose of security mechanisms is not to achieve absolute security. Rather, the purpose is to make the cost of successful attacks so high that the adversary either lacks the resources to mount the attack or simply finds the attack unprofitable. The reason why computer security is so hard is that, unlike with physical security, there are typically many attacks against computing systems that are both cheap and low-risk for the adversary. The aim of computer security mechanisms is to prevent these attacks, forcing the adversary to use means that are too risky or expensive.

All security mechanisms are based on assumptions about the powers that the adversary is able to wield—the threat model. A security mechanism can indeed be “perfect” with respect to a given threat model. To attack successfully, the adversary must then use attacks that lie outside the threat model. This is useful in two ways:

  • If the threat model includes all the low-cost attacks—for example, the attacks that can be executed anonymously from across the Internet without spending significant resources and without using physical coercion—then the adversary can only employ riskier, more expensive measures.
  • Further, the threat model guides the deployment of additional defenses. It defines the attacks that the defender no longer needs to worry about. The defender can focus on the attacks that lie outside the current threat model and develop defenses against these additional attacks, to achieve defense in depth. These additional defenses might include physical security measures or monitoring of the actions of insiders.

There may not be such a thing as absolute security, but security can be perfect in a mathematical or logical sense, relative to a threat model. And this is all that is needed to build systems that are secure enough.

This discussion also shows why timing channels are so dangerous. It is often infeasible to prevent the adversary from measuring time cheaply, accurately, and anonymously. Consequently, an adversary clever enough to draw inferences about confidential information from timing measurements is hard to defeat. The only solution seems to be to design systems in such a way that timing measurements leak little information. We’ve done some work on this problem at the system, software, and hardware levels, but there is much more to be done.
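As one small illustration of a software-level timing channel (not an example from the post itself), consider early-exit string comparison: a check that returns as soon as a byte differs takes time proportional to the length of the matching prefix, so repeated timing measurements can recover a secret byte by byte. A minimal Python sketch, with the hypothetical names `leaky_equals` and `safer_equals`:

```python
import hmac

def leaky_equals(secret: bytes, guess: bytes) -> bool:
    """Variable-time comparison: returns at the first mismatched byte,
    so response time reveals how many leading bytes of the guess match."""
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def safer_equals(secret: bytes, guess: bytes) -> bool:
    """Comparison via hmac.compare_digest, which examines every byte
    regardless of where the first mismatch occurs, so the comparison
    itself leaks little through timing."""
    return hmac.compare_digest(secret, guess)
```

Note that constant-time comparison closes only this one channel; caches, branch predictors, and shared hardware resources provide many others, which is why the paragraph above argues the problem must be attacked at the system, software, and hardware levels together.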

(I want to thank Mark Miller for his thoughts about computer security, which have influenced this note.)



A Hippocratic Oath for computer security research?

When I was a graduate student at MIT, at some point we discovered that all of our systems had been compromised. I had earlier hacked up a network monitoring tool that was a very graphical version of tcpdump, which enabled us to rapidly figure out that the attacker was coming in via a computer in Germany. The hacker was reading our email, so when I communicated this fact to others in the group via email, I suddenly got a “talk” request and found myself chatting with them. They promised to go away. But — and this was surprising to me — they thought they were doing something completely ethical and appropriate. In fact, they believed they were doing us a favor by showing us the vulnerabilities of our systems. I suppose that has a grain of truth to it, in much the same way that a burglar who breaks into your house shows you that you need better locks.

Are researchers who focus on attacking systems really any better? Yes, if they clearly explain the vulnerabilities. And doubly yes, if they show how to ameliorate the vulnerabilities. But I worry that as the pendulum of the security community swings toward demonstrating attacks, the research community is exposing vulnerabilities faster than it is fixing them. Clearly this is not a sustainable path — if continued, we all just become less secure, because vulnerabilities are being disseminated to a wide audience and solutions are not. We have the joy of knowing the truth. But pragmatically, things are being made worse for all the ordinary users relying on computers.

In many research communities, discovering the truth is all that researchers need to be concerned with. But it seems to me that the security community has a special responsibility to make security better. I would hope that every security researcher would strive to, on balance, do more good than harm. If every security researcher did work that exposed more vulnerabilities than it fixed, the world would be a worse place. If we accept Kant’s dictum — that we should act only in ways that we could will everyone to act — that observation implies that it is unethical for any security researcher to behave in this way. So my question is, do we need a version of the Hippocratic Oath for computer security research?