Why Biden's 'Cyber Trust Mark' is nonsense
It's about control by the cybersecurity industrial complex rather than security.
The Biden administration has announced a label for IoT products it feels are more secure, its “U.S. Cyber Trust Mark”. This post explains why it’s nonsense, as silly as the way government licenses and certifies chiropractors (chiropractic is a pseudo-science, not real medicine).
The sound of one hand clapping
The first reason you know it’s nonsense is that none of the mainstream news stories written about it are critical of it, whether from Vox or TechCrunch or the AP. It’s like dictators who get 100% of the vote: when things are this lopsided, we know that something is corrupted somewhere.
We expect journalists to get both sides of things, to be critical of government policy. But security, especially cybersecurity, is somehow exempt from critical thinking. Security is treated as an inherent good, a higher moral duty that no good person would oppose. Journalists treat security as a one-sided argument.
The cybersecurity industrial complex
In a pre-announcement briefing, Anne Neuberger told reporters “It will allow Americans to confidently identify which internet- and Bluetooth-connected devices are cybersecure”. This is a lie. Devices bearing this mark still won’t be secure. They still will be hackable. Devices bearing this mark are at best a marginal improvement in security.
The entire cybersecurity industrial complex is in on the lie. Everyone knows that the mark won’t make devices secure, but they are willing to go along with it. They’ll be supportive of Anne Neuberger instead of critical, admitting that while her claim is technically false, there’s still some moral truthiness to it.
The industry is full of “experts” who cannot be trusted. They become “activists”. They don’t see their job as a neutral cost-vs-benefit tradeoff, but see security as a higher moral duty, an end in and of itself, rather than simply a means to an end. They’ll happily lie to you and spin information as long as it serves the higher moral goal of increasing security at all cost. They love any and all government policy aimed at increasing cybersecurity, no matter the adverse impacts on society.
The large corporations who have attached their names to this love additional regulatory hurdles, because those hurdles squash the smaller competition. Getting a US Cyber Trust Mark for a product will cost a lot of money, something a large corporation can easily afford but a smaller company can’t. Even when the smaller company innovates and has better security, it won’t be able to afford getting certified by the government.
There is no special IoT cybersecurity problem
There is no great IoT cybersecurity problem. On the whole, IoT devices aren’t getting hacked. For example, the well-known yearly Verizon Data Breach report doesn’t mention it. Ransomware is the biggest cybersecurity threat right now, and it has nothing to do with IoT.
The reason is simply that IoT is almost always behind firewalls. For those of us who scan the Internet, we see the public exposure of IoT decreasing, despite the number of IoT devices increasing by billions each year. More importantly, humans rarely interact with IoT. It’s humans that are the primary vulnerability. Devices with limited human interaction are of limited threat.
That’s why security is so important to Microsoft’s Windows, Apple’s iPhone, and Google’s Chrome. It’s why you have to update/patch these products every month to stay ahead of hackers. They are the boundary between humans and the Internet, exposed to the public Internet where hackers lurk as well as the bad decisions made by their human users.
Without exposure to the public Internet and without human interaction, an IoT device has little cybersecurity risk.
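To make “public exposure” concrete: a device is exposed when an inbound TCP connection from the Internet reaches it at all. Here’s a minimal sketch of such a probe (a single TCP connect, nothing like a real Internet-wide scanner; the address and ports are placeholders):

```python
import socket

def is_exposed(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds from here."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical camera address: probe its Telnet and web ports from outside the firewall.
for port in (23, 80, 443):
    status = "exposed" if is_exposed("203.0.113.10", port) else "filtered/closed"
    print(port, status)
```

A device behind a typical home router fails every one of these probes, which is the point: the firewall, not the device’s own code, is what keeps attackers out.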
Security is a tradeoff
The issue of most concern here is patching. That’s because it’s critical to Windows/iPhone/Chrome, because they are exposed to users and the Internet.
But if a device doesn’t already have human interaction or Internet access, then not only is patching much less necessary, the patching process itself becomes the greatest risk. It means adding human interaction and Internet access.
Security is a tradeoff, but this fact is ignored by cybersecurity professionals. They can only think in terms of the benefits of patching, not the tradeoffs needed to achieve it.
This argument falls on deaf ears in the infosec community. They largely don’t care about such tradeoffs because patching is a moral duty. If that in practice adds vulnerability to a device, it doesn’t really matter, because you are still achieving the moral goal.
The cybersecurity community uses a simple model in which security failure comes from moral weakness like sloth, villainy, pride, envy, or lust. They blame corporate greed for not making patches available, and user laziness for not applying patches. Their primary demand is for government regulation to fix this moral weakness. Whether there is an actual risk, whether this makes you more secure overall, isn’t the issue; the issue is greed and laziness.
Cloud security is the most dangerous
The increasingly common solution to IoT security tradeoffs is cloud management, having the vendor manage the configuration and patches themselves. Thus, your IoT is always up-to-date with patches, because the vendor pushes them down, without needing interaction by the users.
This is even less secure. Current IoT devices are largely disconnected from the Internet by a firewall. They may otherwise allow wide-open access to hackers, but they suffer few problems in practice because the network infrastructure prevents access.
But when you surrender control to the cloud, the devices become reachable regardless of any network protections. Whoever holds the cloud account has access to the device no matter how many layers of firewall sit in between. If that vendor isn’t trustworthy or gets hacked, then evildoers can penetrate the depths of your network.
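To see why the firewall no longer helps, here’s a minimal sketch of the device side of such a cloud-managed design (the endpoint URL, file paths, and update command are all hypothetical). The device makes an outbound HTTPS connection, which stateful firewalls and NAT allow by default, and installs whatever the cloud hands back:

```python
import time
import subprocess
import urllib.request

CLOUD_URL = "https://updates.example-vendor.com/api/v1/firmware/latest"  # hypothetical endpoint
POLL_INTERVAL = 3600  # check hourly

def poll_and_apply() -> None:
    # Outbound HTTPS: to the firewall this looks like any ordinary client
    # connection initiated from inside, so no inbound port needs to be open.
    with urllib.request.urlopen(CLOUD_URL, timeout=30) as resp:
        firmware = resp.read()
    with open("/tmp/update.bin", "wb") as f:
        f.write(firmware)
    # The device trusts whatever the cloud sent. If the vendor's account or
    # build pipeline is compromised, every device installs the bad update.
    subprocess.run(["/usr/sbin/apply-firmware", "/tmp/update.bin"], check=True)

while True:
    try:
        poll_and_apply()
    except Exception:
        pass  # retry on the next interval
    time.sleep(POLL_INTERVAL)
```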
Consider Mirai vs Verkada.
Mirai was a botnet/worm that in 2016 infected 250,000 security cameras that exposed open ports to the public Internet. The problem was that devices that should’ve been behind firewalls were exposed directly to the Internet with no firewall at all.
Verkada was a “secure” camera vendor that secured its devices against Mirai-style attacks by removing all open ports on the devices. You could no longer connect to the device from the network. Instead, the device itself would initiate a connection to a cloud service. To configure the device, you’d interact with the cloud service, and the device would pull down the latest update.
But then in 2021, Verkada itself was hacked. The hacker created a malicious update that was pulled down by all 150,000 of their devices. This created a potential botnet nearly as large as Mirai. The hacker didn’t exploit this botnet and simply used it to embarrass the vendor, but they could’ve used it for more severe things like DDoS.
Consider NotPetya, which many consider to be the most damaging mass hacking event so far, with billions of dollars in damages. It was launched via a software update, installing the malware behind corporate firewalls, deep inside their networks.
Consider the Viasat hack in the recent conflict between Ukraine and Russia. At the start of the war, Russian hackers hacked Eutelsat, which managed the satellite modems, sending down an update that “bricked” them, essentially disconnecting them from the satellite network. Ukrainian hackers recently retaliated by doing the same sort of thing to a satellite service popular in Russia.
Even the most trustworthy companies have problems. For example, earlier this month was the story of Ring employees spying on female customers.
You yourself can defend your own IoT by not doing what the cybersecurity industrial complex demands. They don’t like that. They want to coerce you into placing your security in their hands.
Security superstition
We don’t know the precise criteria this Trust Mark will have, but we know the sorts of things that are under discussion. The problem is that the things under discussion are cyber-superstition rather than real security.
Patching is a good example. The above discussion points to only some of the tradeoffs. The biggest is that IoT lifetimes are longer than patch lifecycles. The average vendor will promise patches for only about 5 to 10 years, whereas the average user expects these devices to work for 20 years or more. Moreover, for the most innovative IoT products, chances are good that the original vendor will disappear. No certification program can make a vendor keep supplying security patches for 10 years after it has stopped selling the model.
Another example is default/backdoor passwords. Cybersecurity people know that this is a problem, but they ignore why it keeps happening. The Mirai botnet is a good example. Everyone knows the Mirai problem was default/backdoor passwords, but everyone is wrong. The problem is that there were two separate subsystems. One was the management system provided via the web interface visible to the user. The other was a Telnet backdoor subsystem using a completely different set of usernames/passwords. No amount of doing things right on the web subsystem would solve the problems with the Telnet subsystem. The reason the Telnet subsystem existed is that there’s no good way of doing factory resets on security cameras. Instead of doing something useful, like coming up with secure ways to factory-reset cameras, cybersecurity people ignore the real problem and pretend it’s simply the moral weakness of leaving in backdoor passwords.
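A stripped-down sketch of that two-subsystem design (purely illustrative: the function names are made up, and the hard-coded credentials merely stand in for the sort of defaults Mirai tried) shows why fixing the web password does nothing about the Telnet backdoor:

```python
# Credentials the owner can manage through the web interface.
web_users = {"admin": "user-chosen-password"}

# Credentials compiled into the Telnet/CLI subsystem for factory resets and
# field service: invisible to the owner and unchangeable through the web UI.
TELNET_BACKDOOR = {"root": "xc3511", "support": "support"}

def web_login(user: str, password: str) -> bool:
    return web_users.get(user) == password

def telnet_login(user: str, password: str) -> bool:
    # Changing the web password never touches this table; these are the
    # logins a Mirai-style worm actually tries.
    return TELNET_BACKDOOR.get(user) == password
```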
Another issue is secure boot. It solves many security problems but means the owner no longer has complete control over their own device. Owners of IoT devices will then look for jailbreaking hacks in order to regain control over devices they own. Again, security is a tradeoff, where one type of security control means weakening others.
Even simple things like encryption are misunderstood. An SSL connection requires certificates, and there’s no good way to get trusted certificates for devices sitting on a private home network. To allow consumers to log into devices, you have to teach them how to click through the web browser’s warnings about insecure certificates. Worse, the alternative is to bypass the web interface altogether and use an app on the phone, where your device fails as soon as the vendor stops updating its app for that device.
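Here’s what the certificate problem looks like from the client side, in a minimal sketch (the 192.168.x.x address is a placeholder for a device on your home network). A public CA won’t issue a certificate for a private address, so the device presents a self-signed one, and any properly verifying client rejects it, which is the same failure the browser warning reports:

```python
import socket
import ssl

ctx = ssl.create_default_context()  # verifies against the public CA store

try:
    with socket.create_connection(("192.168.1.50", 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="192.168.1.50") as tls:
            print("verified:", tls.getpeercert()["subject"])
except ssl.SSLCertVerificationError as err:
    # A self-signed device certificate fails public-CA verification. This is
    # exactly the warning consumers are taught to click through.
    print("certificate verification failed:", err)
```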
There are a lot of smart people involved in thinking about such issues. The problem is that the political process acts as a filter, guaranteeing that what comes out the end is the stupidest of cybersecurity superstitions. Everyone can understand the moral weakness of being too lazy to apply patches, so that becomes the driving factor regardless of any sophisticated understanding of the issue.
In short, the US Cyber Trust Mark is unlikely to contain anything other than superstitious cybersecurity.
Conclusion
This trust mark is mostly harmless, of course. The government isn’t forcing anybody to comply. But at the same time, it’s not pushing more secure IoT products; it’s pushing the agenda of the cybersecurity industrial complex. Their desires have a lot to do with control and little to do with security. Products with this mark are unlikely to be the best choice for consumers.
Here is a good test for journalists. Ask the experts you consult for your story about Anne Neuberger’s claim that this will make products “cybersecure”. It’s a lie; the trust mark won’t make products secure. But watch them defend her anyway.