Andrew Myers

Security, programming languages, and computer systems


A Hippocratic Oath for computer security research?

When I was a graduate student at MIT, at some point we discovered that all of our systems had been compromised. I had earlier hacked up a network monitoring tool, a very graphical version of tcpdump, which enabled us to figure out quickly that the attacker was coming in via a computer in Germany. The hacker was reading our email, so when I communicated this fact to others in the group via email, I suddenly got a “talk” request and found myself chatting with them. They promised to go away. But — and this was surprising to me — they thought they were doing something completely ethical and appropriate. In fact, they believed they were doing us a favor by showing us the vulnerabilities of our systems. I suppose that has a grain of truth to it, in much the same way that a burglar who breaks into your house shows you that you need better locks.

Are researchers who focus on attacking systems really any better? Yes, if they clearly explain the vulnerabilities. And doubly yes, if they show how to ameliorate them. But I worry that as the pendulum of the security community swings toward demonstrating attacks, the research community is exposing vulnerabilities faster than it is fixing them. Clearly this is not a sustainable path — if it continues, we all just become less secure, because vulnerabilities are being disseminated to a wide audience and solutions are not. We have the joy of knowing the truth. But pragmatically, things are being made worse for all the ordinary users relying on computers.

In many research communities, discovering the truth is all that researchers need to be concerned with. But it seems to me that the security community has a special responsibility to make security better. I would hope that every security researcher would strive, on balance, to do more good than harm. If every security researcher did work that exposed more vulnerabilities than it fixed, the world would be a worse place. If we accept Kant’s dictum that we should act only in ways we could will everyone to act, that observation implies that it is unethical for any security researcher to behave in this way. So my question is, do we need a version of the Hippocratic Oath for computer security research?


Unhackable computers?

Neil deGrasse Tyson recently received a lot of derision for calling for “unhackable systems”. I’m a bit perplexed by this response.

On the positive side, it is widely understood that current computer systems are very far from unhackable. On the negative side, the common understanding (at least among those on Twitter) seems to be that this goal is impossible, requiring the intercession of “unicorns”. One line of argument is that “what man can create, man can destroy”. Yet methods for creating secure computer systems are a research goal of many computer scientists, including me.

On the one hand, security vulnerabilities mostly arise from implementation bugs rather than from errors in the security architecture. The existence of verified compilers and operating systems (such as CompCert and seL4) has shown that bug-free software is possible even for fairly complex software. So systems can be built that are “unhackable” at least in some sense.
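
To make that concrete, here is a toy sketch of my own (not part of the original argument) of what a machine-checked guarantee looks like, written in Lean: we define a list-reversal function and prove that reversing a list twice always gives back the original list. The point is that the property is established for every possible input, not just the inputs someone thought to test; verified compilers and kernels scale this style of reasoning up to whole languages and system interfaces.

    -- Toy illustration in Lean 4 (mine, not from the post): a small function
    -- together with machine-checked proofs of its properties.
    def rev {α : Type} : List α → List α
      | []      => []
      | x :: xs => rev xs ++ [x]

    -- Reversal distributes over append, with the pieces swapped.
    theorem rev_append {α : Type} (xs ys : List α) :
        rev (xs ++ ys) = rev ys ++ rev xs := by
      induction xs with
      | nil => simp [rev]
      | cons x xs ih => simp [rev, ih, List.append_assoc]

    -- Reversing twice is the identity, for every list whatsoever.
    theorem rev_rev {α : Type} (xs : List α) : rev (rev xs) = xs := by
      induction xs with
      | nil => simp [rev]
      | cons x xs ih => simp [rev, rev_append, ih]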

At a deeper level, though, the skeptics are right. The real problem is that we don’t know what it means for a system to be “unhackable” — we don’t have a formal definition of “unhackable” that is sufficiently permissive to capture desirable behavior of computer systems while excluding systems with security vulnerabilities. Without such a definition, the ability to formally verify software systems and prove them correct is not going to solve the problem. And unfortunately, there is relatively little research effort expended on this foundational problem.