Software Liability for Armchair Quarterbacks
This Sunday is Super Bowl LVIII, where millions will watch the final football game of the season. Along with the cheers, armchair experts will critique the performance of coaches and players. Paunchy men and women watching the TV screen have never played pro sports, of course, but that doesn’t discourage them from expressing their opinions about how the game should be played.
Something similar happens in cybersecurity. People who have never coded beyond the equivalent level of high-school football nonetheless have strong opinions about how the game of professional software engineering should be played.
That’s the background behind Lawfare’s “Security by Design” series. These are people pushing the political agenda of the Biden administration who believe they have the expertise to tell programmers how to write code, despite not being programmers themselves. Neither are they hackers, sysadmins, netadmins, or any sort of techie.
In football, after a play, you’ll hear shouts of “throw the ball” or “run the ball” or “call a timeout”. Viewers think they can see what should happen, something the players, inexplicably, can’t see for themselves.
The same is true of software. Every bug is obvious in hindsight. Once we see it, we wonder how the programmer could’ve been so stupid not to have seen it in the first place. Armchair programmers believe that they have some insight into the bugs that programmers lack. They believe that bugs happen because of moral weakness: laziness, sloth, greed, villainy, lust. In their mind, all it takes is a little push against moral weakness to stop bugs from happening.
That’s what Biden’s political slogan “Security by Design” means: if only we force programmers to be a bit more diligent, then we’ll solve these cybersecurity problems.
But the reality is that software engineers are already diligent. The reason for software bugs is complexity and the sheer amount of software that we demand programmers write.
Security is a tradeoff. The only way to make software more secure is to dramatically increase its cost, change the market, and reduce the amount of software being written.
These tradeoffs will not make the world a better place. For one thing, such bugs account for less than 1% of hacking, so the benefits are small. For another thing, the costs are huge: it’ll dramatically increase the price of software and kill innovation.
Path traversal
An example of these principles is this article claiming the following about a common bug known as “path traversal”.
the path traversal flaw, CWE-22, and how to avoid it have been known for decades. Allowing it into software should be grounds for liability. Yet it still emerged recently in a Microsoft product, where it was the cause of one of the most frequently exploited vulnerabilities of 2022.
Yes, the bug has been known for decades. Yes, we know how to fix it.
But we don’t know how to “avoid” it. Knowing how to fix the bug once it’s discovered is not the same thing as knowing how to avoid the bug from happening in the first place. Knowing how to fix a car engine doesn’t stop the engine from breaking.
It seems simple only if you aren’t a programmer. Path traversal bugs don’t happen because of laziness or sloth; they happen because the problem is really hard.
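To make the bug concrete, here’s a minimal sketch of how path traversal arises when untrusted input gets joined onto a filesystem path. The function, directory, and file names are hypothetical, not anybody’s actual product code:

```python
import os

UPLOAD_DIR = "/srv/uploads"   # hypothetical base directory

def read_upload(filename):
    # Looks innocent: join the user-supplied name onto the base directory.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "rb") as f:
        return f.read()

# Normal use:  read_upload("report.pdf")        opens /srv/uploads/report.pdf
# The attack:  read_upload("../../etc/passwd")  opens /etc/passwd
```

Nothing in that function looks wrong in isolation. Whether it’s a vulnerability depends entirely on where filename came from, which is the whole difficulty.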
Modern software is made up of a stack of subsystems and components. It’s not clear where in the stack this problem needs to be handled.
At the top, it’s not clear that input will be added to a path. A web form might accept a username field, but it’s not clear how the rest of the system will handle this field. It’s rare that it’ll be combined with a path, causing the path traversal problem.
We could just sanitize all input fields to remove anything that looks like path traversal, but this would break far more stuff than the problem it’s attempting to fix. Sanitization of input is a hard problem. If it were easy, we’d just blindly put application firewalls in front and walk away.
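As a hypothetical sketch of why naive sanitization fails, consider the blacklist approach of stripping anything that looks like a traversal sequence:

```python
def sanitize(value):
    # Naive blacklist: strip anything that looks like a traversal sequence.
    return value.replace("../", "")

# Bypassed: removing the inner "../" leaves a new "../" behind.
sanitize("....//etc/passwd")      # -> "../etc/passwd"

# And it mangles legitimate data that merely contains the sequence.
sanitize("see ../docs/README")    # -> "see docs/README"
```

A single string replacement both misses the attack and corrupts innocent input, and the more elaborate filters just move the failure cases around.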
At the bottom of the stack, where the problem manifests itself when path components are combined, it’s not clear where the input came from. If the input comes from a trusted part of the program, and path traversal happens, then you want it to happen. Almost every piece of software has something somewhere that does path traversal, and it’s a good thing. It’s only a problem if input comes from an untrusted, external source, such as a field in a web form. In short, you can’t simply filter all path traversal at this layer.
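What we do know is the fix to apply once you’ve already decided a particular input is untrusted: canonicalize the combined path and check that it stays under the intended directory. A minimal sketch, with a hypothetical directory name (is_relative_to needs Python 3.9 or later):

```python
from pathlib import Path

BASE = Path("/srv/uploads").resolve()   # hypothetical base directory

def read_upload(filename):
    # Canonicalize the combined path, then verify it never escapes BASE.
    candidate = (BASE / filename).resolve()
    if not candidate.is_relative_to(BASE):
        raise ValueError("refusing a path outside the upload directory")
    return candidate.read_bytes()
```

The check itself is easy. The hard part is knowing which of the thousands of places that build paths actually receive untrusted input, and that’s exactly the knowledge the lower layers don’t have.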
One solution is to track taint. This marks input from untrusted sources as dirty. This way, any component in the system can treat tainted data differently from trusted data. But this doesn’t work well in practice. Software is built from a stack of multiple languages that don’t know how to communicate taint between layers. For all that Rust is called an inherently safe programming language, it doesn’t automatically track taint or handle path traversal problems.
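A hypothetical sketch of the idea within a single language shows both the appeal and the weakness: the marker survives only as long as every layer it passes through knows about it.

```python
from pathlib import Path

class Tainted(str):
    """Marks a string as having come from an untrusted, external source."""

def read_file(base, name):
    # A taint-aware sink: refuse to build paths from untrusted data.
    if isinstance(name, Tainted):
        raise ValueError("untrusted input used in a filesystem path")
    return (Path(base) / name).read_bytes()

form_field = Tainted("../../etc/passwd")   # came in from a web form
# read_file("/srv/uploads", form_field) now raises instead of reading /etc/passwd.

# But the taint evaporates the moment the value passes through code that
# doesn't preserve it, such as string formatting, a database, or a C library:
laundered = "%s" % form_field               # back to a plain, "trusted" str
```

That last line is the real world: data crosses languages, libraries, queues, and databases constantly, and the taint marker doesn’t come along for the ride.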
There is no minimum standard of diligence here. The idea of “Secure by Design” imagines there are easy steps, like those described above, that, if everyone followed them, would make this problem mostly disappear. If only we wrote up these easy steps in a document, and forced everyone to do these things, then this problem wouldn’t keep happening.
But no such standard actually exists here. There is no easy fix. Whatever you come up with won’t actually reduce the number of path traversal bugs. What it will do is add a lot of cost to the software development process.
Specifically, the Microsoft bug mentioned above already went through “secure by design” processes far more rigorous than any government regulation. Microsoft, Google, and Apple are already the most diligent software companies in the world. The amount of money Microsoft spends on writing software would bankrupt 90% of software companies. They’ve literally written the book on software diligence. Whatever “minimal standard” you come up with will be much less than what Microsoft already does.
Whatever path traversal diligence standard the government comes up with, it’ll do little to fix the problem; it will bankrupt the smaller companies who can’t afford it and allow bigger companies like Microsoft to relax their standards.
If you disagree with my techie analysis here, you could create a techie rebuttal: simply write up a description of processes that will solve this bug. If you claim we’ve known for decades how to “avoid” the bug, then tell me what this thing is. I’m making an easily falsifiable claim here. I say we don’t know how to avoid the bug, so you can prove me wrong by showing how it’s done.
Safe haven
The regulations they are pursuing are a stick described as a carrot.
They promise that if software companies can prove they follow a standard of diligence, this will shield them from lawsuits when bugs do appear. In other words, since we know that path traversal bugs will happen even to the most diligent of companies, proving they were diligent will shield them from liability.
This is, of course, why you should vigorously oppose the law.
As mentioned above, whatever regulations the government comes up with, they’ll be less than what Microsoft, Google, and Apple already do. These companies spend more on software development than smaller companies can match. Any standard that shields them from liability will result in less security for the world’s most important software.
Software regulation won’t go anywhere with the big companies lobbying against it, so of course, the idea here is to design the rules to be friendly to these big companies. You can complain all you want that these big companies have such power, but the practical reality is that any software regulations are going to end up making them even more powerful.
Smaller companies can’t afford to implement these regulations. If such regulations were effective, then maybe that’s okay: maybe they shouldn’t be writing software if they can’t do so securely. But we can be confident that these regulations won’t actually make software more secure. That’s what the path traversal bug above shows: there’s actually no simple practice we can add to software development that’ll avoid the problem. The cybersecurity industry is already full of regulations that simply add more cost for compliance without doing much to solve cybersecurity issues.
Conclusion
Software regulation is based purely on the principle that everything needs to be regulated by the government. I hate to repeat Reagan’s tired cliche, but it’s informative here: “If it moves, tax it. If it keeps moving, regulate it. And if it stops moving, subsidize it.” Those in Washington have already begun the discussion of how to subsidize “open-source” development to compensate for how regulation will shut down small developers.
The tell that there’s nothing more to it is their lack of technical expertise. The discussion of path traversal is a good one. It’s based upon the armchair programmer who finds it inexplicable that such a simple bug keeps happening. Those of us professionals out on the field know that it’s actually really hard.
Yes, this post is pretty rude, but so are those calling for software regulation. Implicit in their regulations is the stick: they’ll punish me personally if I don’t follow their rules.