Andrew Myers

Security, programming languages, and computer systems

Worse is Better vs. Better is Better


In 1991 Richard Gabriel wrote an insightful and influential article about the difference between designing software systems in the “MIT style” and the “New Jersey style” (AT&T), terming the latter “worse is better”. He argued that when building software, the “MIT style” of getting the design “right” (at the cost of complexity in implementation) loses out to the “New Jersey” style of keeping the design easy to implement (at the cost of giving users weaker guarantees and a more complex interface). This ease of implementation allows systems built in the “New Jersey”, worse-is-better style to acquire mindshare quickly; over time, with many interested users, they are fixed to the extent possible. As Gabriel wrote, “it is often undesirable to go for the right thing first. It is better to get half of the right thing available so that it spreads like a virus. Once people are hooked on it, take the time to improve it to 90% of the right thing.”

He certainly had a point, and his article has become almost a self-fulfilling prophecy. However, a funny thing has happened since. Based on the worse-is-better argument, Gabriel predicted that the language of the future was going to be C++. It turns out that he was wrong. If you look at data on which languages are most popular for new software projects, Java is rated above C++. In fact, Java is arguably the most popular language for new projects, or is heading that way. That is certainly the impression I’ve had when speaking to programmers in industry.

Now, maybe you’re thinking “Java is an example of worse-is-better, too.” I would argue otherwise, because from the beginning, Java tackled a lot of issues that C++ largely ignores: type safety, garbage collection, reflection, security, concurrency. In each case, Java offers a simpler programming model to users at the cost of considerable implementation complexity. While C++ has continued to evolve to address its weaknesses, it’s very hard to argue that the C++ programming model has gotten simpler. Java has been adopted because it tries harder to do the right thing.
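
To make that concrete, here is a rough sketch (my own illustration, with made-up names, not an example from any of these languages’ documentation) of what Java’s simpler model gives the programmer: garbage collection and language-level monitors mean a thread-safe counter needs no memory-management discipline and no third-party threading library.

```java
// Illustrative sketch only: a thread-safe counter in plain Java.
// "synchronized" provides a built-in monitor, and the object is reclaimed
// by the garbage collector; no locks or frees are managed by hand.
public class Counter {
    private long value = 0;

    public synchronized void increment() {
        value++;
    }

    public synchronized long get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.increment();
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // always 200000
    }
}
```

The JVM pays for this simplicity with a garbage collector and a monitor implementation under the hood, which is exactly the trade-off: a simpler model for users, more complexity for the implementer.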

I was reflecting on this on the occasion of Barbara Liskov’s recent retirement party. Barbara has always been in favor of offering programmers simple interfaces, and she has been unafraid to make the implementer of those interfaces (me, in some cases!) work harder and smarter. This philosophy was certainly present in her CLU programming language. CLU was designed only 4 years after C, yet it offered a laundry list of features that were novel or unusual at the time: strong static typing, garbage collection, strong encapsulation, parametric polymorphism with constrained type parameters (generics), statically typed exceptions, coroutine iterators, and type-safe sum types. At the time, and even into the ’90s, there was a lot of skepticism about these features; many people argued that strong static typing and garbage collection were simply too restrictive and expensive. Yet in Java you can see the clear influence of CLU. Of the list of features above, only coroutine iterators have not been adopted by Java, though they are present in other popular languages: C#, Ruby, and Python. The impact of CLU suggests that “worse-is-better wins in the short run, but better-is-better eventually wins.” Certainly I’m glad that Barbara and her group tried to design The Right Thing back in 1974, and I think that computer scientists should still try to design The Right Thing now, even if it’s not going to have immediate commercial impact.
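
As a rough illustration of how CLU’s constrained type parameters surface in Java (the max method below is my own example, not CLU code), the bound “T extends Comparable<T>” plays the role of CLU’s where clause: the compiler checks, once, that max is only instantiated at types that support comparison, so callers never cast.

```java
// Illustrative example: parametric polymorphism with a constrained
// type parameter, the Java analogue of a CLU "where" clause.
import java.util.List;

public class Max {
    static <T extends Comparable<T>> T max(List<T> items) {
        T best = items.get(0);
        for (T item : items) {
            if (item.compareTo(best) > 0) {
                best = item;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(max(List.of(3, 1, 4, 1, 5)));      // prints 5
        System.out.println(max(List.of("clu", "java", "c"))); // prints java
    }
}
```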


Author: Andrew Myers

I am a professor of computer science at Cornell University. It is too hard to build trustworthy software systems using conventional systems APIs. I work on higher-level, language-based abstractions for programming that better address important cross-cutting concerns: security, extensibility, persistence, distribution.

One thought on “Worse is Better vs. Better is Better”

  1. The worse-is-better mentality (generally, and not specific to a particular language) seems obviously bad for security: get some “50% right” program out there, riddled with (design and implementation) defects, and as the vulnerability reports roll in, shrug your shoulders at the users of those programs as you fix them. It seems unlikely to me that this mentality can last. If we look at efforts like http://www.bsimm.com (the “building security in maturity model”), it seems that companies are starting to take security seriously, right from the start. I am skeptical that you can somehow get the security part right while limiting the New Jersey mentality only to the rest of the design.