Andrew Myers

Security, programming languages, and computer systems


Why Good Programmers are Master Architects, Negotiators, Gardeners, and Detectives

Good Programmers are Master Architects

Good programmers understand that they are building a complex structure with layers stacked upon other layers. They think critically about their design, and they know they need a strong, reliable foundation to support their work. Since their systems have many interdependent parts, they design carefully to limit these dependencies so that failures and design changes do not cascade throughout the system. Their work may occasionally need repair but it rarely needs to be torn down.

Good Programmers are Master Negotiators

Good programmers understand there are always design tradeoffs to make between different parts of the system. When they are working with other programmers, they are able to work together to find the best interfaces between the different parts of the system. When dealing with competing interests, they do not take hostages; instead, they find compromises that work to the overall good. They are even able to negotiate with themselves.

Good Programmers are Master Gardeners

Good programmers know that programs grow organically. The first plantings will spread and define the shape of the whole garden, so they think ahead about the garden’s layout. Good programmers are always vigilant for weeds and remove them on sight, because spreading weeds can choke the garden and make it hard to tend to the plantings. At the end of the day, they survey their work with pride even if most people cannot appreciate how much work it was.

Good Programmers are Master Detectives

Good programmers know how to quickly find the culprit when things go wrong. They have a theory about how things are supposed to work, so they are also good at forming theories about what is wrong. They have a sharp eye for anything behaving in an unexpected way. They can summon the suspects to the drawing room and efficiently test their theories to remove suspicion from the innocent. Most of all, they relish the moment when the criminal is hauled away in handcuffs.



Deterrence

A nice article at Slate about why deterrence cannot work for computer security.

The real problem is that computing systems are generally vulnerable to attack. This is not an inevitable state of affairs, but currently no one knows how to build secure, usable systems in a cost-effective way. It is not merely an engineering problem; it is a science problem. We lack the science base to do better. Why? The government has underfunded scientific research on cybersecurity defense for decades (offense is another story). Corporations have no incentive to invest in security research either. I don’t have the figures for the SaTC (Secure and Trustworthy Cyberspace) budget, but I would guess the National Science Foundation spends several million dollars a year on computer security. That is peanuts compared to the magnitude of the problem we face. Sure, other agencies, like DARPA and NSA, spend money on security too, but most of that goes to “beltway bandits” doing more engineering than science, or to research on the offensive side. And the article linked above shows that offense-oriented research is just not going to make us safer.



The Wooden Firehouse

An allegory for computer security.

You have lived all your life in a quickly growing town, whose growth has been sped up by constructing all the buildings out of wood. Some buildings in town are huge structures that have been repeatedly expanded with new wings and towers; others are simple shacks that are put up hastily and torn down just as quickly. Unfortunately, all of these buildings have the weakness that they can and do catch on fire. And lately more and more of them have been catching on fire. It even seems that arsonists from another town are sneaking in and deliberately setting buildings on fire.

You and your friends have been warning for some time that better construction methods are needed. Building with concrete and steel would make buildings fundamentally less vulnerable to fires. Unfortunately, your warnings have not been heeded, for a variety of reasons:

  • Fires were once much less common, and buildings were smaller and farther apart, so fires caused much less damage and were taken less seriously.
  • Much of the construction effort goes into adding to existing buildings. While it might have once been easier to change to better materials, the objection is raised that it’s not practical to tear down all the existing buildings and replace them with concrete ones.
  • A quirk in your town’s legal system makes builders, no matter how careless in their design and construction work, not legally responsible when buildings catch on fire.
  • The technologies for building with concrete and steel are still in their infancy. Small demonstration buildings have been constructed, but there is skepticism that the new technologies will be cost-effective.

Thus, the builders have mostly ignored your warnings and have continued both to build new buildings and to add huge new extensions to existing ones, always using wood.

The town elders have decided that something must be done to address the increasing damage done by fires and the threat that fires might start spreading from building to building. By far the largest share of the town’s resources has been directed to firefighting, and firefighters are celebrated as heroes. Fortunately, the elders have had the foresight to realize that something must be done beyond simply fighting fires as they arise.

Several local companies therefore offer popular flame-retardant paint and inspection services to check for natural gas leaks, and citizens are encouraged to use them. While these measures do not protect against determined arsonists, they do seem to prevent some accidental fires.

The town is also supporting a small amount of basic research into the fire prevention problem. Most of this work has focused on improved smoke detectors, better fire hoses for firefighters, and better fire-retardant paint. Another focus has been demonstrations that existing fire prevention methods are inadequate—researchers have developed many clever new ways to set buildings on fire, and these are eagerly reported on by the media. Only a small fraction of the effort has gone into studying how to make it cheaper and easier to build buildings out of non-flammable substances.

And even the firehouse is still made out of wood.



The OPM disaster and computer security

The theft of data from the Office of Personnel Management is a disaster with long-lasting consequences. It is hard to imagine what event, short of one causing broad, immediate physical damage, could give the government a stronger incentive to support work on improving computer security. I’m worried the opportunity will be missed anyway.

Current computing systems are not at all secure, but almost all work on computer security focuses on “patching” inherently broken systems rather than on developing methods for building systems to be secure in the first place. Decades of experience have shown us that patching is inadequate, especially against a nation-state adversary.

My fear is that the OPM theft will now drive research funding toward intrusion detection, since the attack was discovered by a company demoing a tool for security diagnosis. That would be exactly the wrong response: the damage was already done by the time the attack was discovered. Let’s not work on better methods for closing the stable door after the horse has bolted.


Why security mechanisms matter

I hear periodically that computer security is hopeless because there is always a way for the adversary to get around whatever security mechanisms are in place. This view misunderstands the point of security mechanisms. It’s true that there is no such thing as absolute security: an adversary with unbounded power and resources can defeat all computer security (and physical security as well!). But the purpose of security mechanisms is not to achieve absolute security. Rather, the purpose is to make the cost of successful attacks so high that the adversary either lacks the resources to mount them or simply finds them unprofitable. The reason computer security is so hard is that, unlike with physical security, there are typically many attacks against computing systems that are both cheap and low-risk for the adversary. The aim of computer security mechanisms is to prevent these attacks, forcing the adversary to use means that are too risky or expensive.

All security mechanisms are based on assumptions about the powers that the adversary is able to wield—the threat model. A security mechanism can indeed be “perfect” with respect to a given threat model. To attack successfully, the adversary must then use attacks that lie outside the threat model. This is useful in two ways:

  • If the threat model includes all the low-cost attacks—for example, the attacks that can be executed anonymously from across the Internet without spending significant resources and without using physical coercion—then the adversary can only employ riskier, more expensive measures.
  • Further, the threat model guides the deployment of additional defenses. It defines the attacks that the defender no longer needs to worry about. The defender can focus on the attacks that lie outside the current threat model and develop defenses against these additional attacks, to achieve defense in depth. These additional defenses might include physical security measures or monitoring of the actions of insiders.

There may not be such a thing as absolute security, but security can be perfect in a mathematical or logical sense, relative to a threat model. And this is all that is needed to build systems that are secure enough.
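One way to make “perfect relative to a threat model” precise, in the style of provable security (my gloss, not something the argument above depends on): fix the class T of adversaries the threat model admits and a tolerance ε, and say that a system S is secure when every admissible adversary’s chance of breaking it is small:

    % A sketch of "secure relative to a threat model":
    % \mathcal{T} is the class of adversaries the threat model admits;
    % \epsilon is the failure probability we are willing to tolerate.
    \forall A \in \mathcal{T}.\quad \Pr[\, A \text{ breaks } S \,] \le \epsilon

The quantifier makes the two bullets above literal: an attack outside T is simply not quantified over, so the guarantee is silent about it, and hardening the system means either enlarging T or layering additional defenses for the attacks left outside.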

This discussion also shows why timing channels are so dangerous. It is often infeasible to prevent the adversary from measuring time cheaply, accurately, and anonymously. Consequently, an adversary clever enough to draw inferences about confidential information from timing measurements is hard to defeat. The only solution seems to be to design systems in such a way that timing measurements leak little information. We’ve done some work on this problem, at the system, software, and hardware levels, but there is much more to be done.
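To make the danger concrete, here is a minimal C sketch of the classic example (illustrative code and names, not drawn from any system mentioned above): a byte-by-byte secret comparison, as in a naive password or MAC check, whose early exit lets an adversary who can time it learn how many leading bytes of a guess are correct, next to a branch-free variant whose timing is independent of the secret.

    #include <stddef.h>
    #include <stdint.h>

    /* Leaky comparison: returns as soon as a byte differs, so the
       running time reveals the length of the matching prefix. */
    int leaky_compare(const uint8_t *secret, const uint8_t *guess, size_t n) {
        for (size_t i = 0; i < n; i++) {
            if (secret[i] != guess[i])
                return 0;  /* early exit: time depends on the secret */
        }
        return 1;
    }

    /* Constant-time comparison: always inspects every byte and uses no
       secret-dependent branches, so timing is independent of the data. */
    int constant_time_compare(const uint8_t *secret, const uint8_t *guess, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= secret[i] ^ guess[i];  /* accumulate any mismatch */
        return diff == 0;
    }

Against the leaky version, an adversary can recover an n-byte secret in roughly 256·n timed guesses rather than the 256^n of brute force, which is why cryptographic libraries ship constant-time comparison primitives. And even the constant-time version closes only this one channel; caches, branch predictors, and compilers can reintroduce timing variation, which is part of why the hardware and system levels matter too.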

(I want to thank Mark Miller for his thoughts about computer security, which have influenced this note.)



A Hippocratic Oath for computer security research?

When I was a graduate student at MIT, at some point we discovered that all of our systems had been compromised. I had earlier hacked up a network monitoring tool, essentially a graphical version of tcpdump, which let us quickly figure out that the attacker was coming in via a computer in Germany. The hacker was reading our email, so when I communicated this fact to others in the group via email, I suddenly got a “talk” request and found myself chatting with them. They promised to go away. But, and this was surprising to me, they thought they were doing something completely ethical and appropriate. In their view, they were doing us a favor by showing us the vulnerabilities of our systems. I suppose that has a grain of truth to it, in much the same way that a burglar who breaks into your house shows you that you need better locks.

Are researchers who focus on attacking systems really any better? Yes, if they clearly explain the vulnerabilities. And doubly yes, if they show how to ameliorate them. But I worry that as the pendulum of the security community swings toward demonstrating attacks, the research community is exposing vulnerabilities faster than it is fixing them. Clearly this is not a sustainable path: if it continues, we all just become less secure, because vulnerabilities are being disseminated to a wide audience and solutions are not. We have the joy of knowing the truth. But pragmatically, things are being made worse for all the ordinary users relying on computers.

In many research communities, discovering the truth is all that researchers need to be concerned with. But it seems to me that the security community has a special responsibility to make security better. I would hope that every security researcher would strive, on balance, to do more good than harm. If every security researcher did work that exposed more vulnerabilities than it fixed, the world would be a worse place. If we accept Kant’s dictum that we should act only in ways we could will everyone to act, that observation implies that it is unethical for any security researcher to behave in this way. So my question is: do we need a version of the Hippocratic Oath for computer security research?


Unhackable computers?

Neil deGrasse Tyson received a lot of derision for calling for “unhackable systems” recently. I’m a bit perplexed by this response.

On the positive side, it’s clear that it is widely understood that current computer systems are very far from unhackable. On the negative side, the common understanding (at least among those on Twitter) seems to be that this goal is impossible, requiring the intercession of “unicorns”. One line of argument is that “what man can create, man can destroy”. Yet methods for creating secure computer systems are a research goal of many computer scientists, including me.

On the one hand, security vulnerabilities mostly arise from bugs rather than from errors in the security architecture. The existence of verified compilers and operating systems (CompCert and seL4, for example) has shown that bug-free software is possible even for fairly complex systems. So systems can be built that are “unhackable” at least in some sense.

At a deeper level, though, the skeptics are right. The real problem is that we don’t know what it means for a system to be “unhackable” — we don’t have a formal definition of “unhackable” that is sufficiently permissive to capture the desirable behavior of computer systems while excluding systems with security vulnerabilities. Without such a definition, the ability to formally verify software systems and prove them correct is not going to solve the problem. And unfortunately, there is relatively little research effort expended on this foundational problem.
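As one concrete illustration of the gap (an example from the information-flow literature, not a definition I am claiming suffices): the best-known formal confidentiality property is noninterference. Writing s1 ≈_L s2 to mean that two input states agree on their public (low) parts, a deterministic program P satisfies noninterference when

    % Noninterference: public outputs are unaffected by secret inputs.
    % \approx_L relates states that agree on their public (low) components.
    \forall s_1, s_2.\quad s_1 \approx_L s_2 \;\Longrightarrow\; P(s_1) \approx_L P(s_2)

This is precise enough to verify, but too restrictive to serve as “unhackable”: even a correct password checker violates it, because accepting or rejecting a guess necessarily releases a little information about the secret. A definition permissive enough to allow such intentional releases, yet strict enough to exclude real vulnerabilities, is precisely what is missing.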