Andrew Myers

Security, programming languages, and computer systems

Unhackable computers?

Neil deGrasse Tyson received a lot of derision for calling for “unhackable systems” recently. I’m a bit perplexed by this response.

On the positive side, the response makes clear that it is widely understood that current computer systems are very far from unhackable. On the negative side, the common understanding (at least among those on Twitter) seems to be that this goal is impossible, requiring the intercession of “unicorns”. One line of argument is that “what man can create, man can destroy”. Yet methods for creating secure computer systems are a research goal of many computer scientists, including me.

On one level, security vulnerabilities mostly arise from bugs rather than from errors in the security architecture. The existence of verified compilers such as CompCert and verified operating systems such as seL4 has shown that bug-free software is possible even for fairly complex systems. So systems can be built that are “unhackable” at least in some sense.
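To make concrete what “verified” means here, a minimal sketch in Lean 4 (my illustration, not something from CompCert or seL4): a specification is stated as a theorem and its proof is checked mechanically, and a type can rule out the out-of-bounds accesses behind many real exploits. The verified systems above apply the same idea at the scale of a C compiler and an OS kernel.

```lean
-- A toy specification with a machine-checked proof. The theorem
-- states the property; Lean's kernel checks the proof term, so the
-- property cannot silently break as the code evolves.
theorem reverse_twice_id (xs : List Nat) :
    xs.reverse.reverse = xs :=
  List.reverse_reverse xs  -- standard library lemma

-- Bounds errors are a classic source of vulnerabilities. Here the
-- type system rules them out: an index of type Fin xs.length is in
-- range by construction, so no out-of-bounds access can occur.
def safeGet (xs : List Nat) (i : Fin xs.length) : Nat :=
  xs.get i
```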

At a deeper level, though, the skeptics are right. The real problem is that we don’t know what it means for a system to be “unhackable” — we don’t have a formal definition of “unhackable” that is sufficiently permissive to capture the desirable behavior of computer systems while excluding systems with security vulnerabilities. Without such a definition, the ability to formally verify software systems and prove them correct is not going to solve the problem. And unfortunately, there is relatively little research effort expended on this foundational problem.
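As one illustration of why this is hard (my example, not from the post): the best-known formal security definition, noninterference, due to Goguen and Meseguer, demands that publicly observable behavior be completely independent of secrets.

```latex
% Noninterference: any two initial states that agree on their public
% (low) inputs must produce identical publicly observable outputs,
% no matter how their secret (high) inputs differ.
\forall s_1, s_2.\;
  s_1 \approx_L s_2 \implies
  \mathrm{obs}_L(\mathrm{run}(s_1)) = \mathrm{obs}_L(\mathrm{run}(s_2))
```

Here $s_1 \approx_L s_2$ means the two states agree on their public parts. This definition excludes leaks, but it errs on the side of strictness rather than permissiveness: a password checker must release one bit that depends on the secret, so even clearly reasonable systems violate it. Weakening such definitions enough to admit real systems, without readmitting real vulnerabilities, is exactly the foundational problem.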


Author: Andrew Myers

I am a professor of computer science at Cornell University. It is too hard to build trustworthy software systems using conventional systems APIs. I work on higher-level, language-based abstractions for programming that better address important cross-cutting concerns: security, extensibility, persistence, distribution.
