Some notes about cybersecurity ethics
If it's not about boring practicalities, then it's probably more about politics.
Here are the main points about cybersecurity ethics:
People usually discuss politics, not ethics.
Ethics guide good people, not bad people.
Most arguments are self-serving.
We are talking about professional ethics, not academic philosophy.
When you need to violate ethics.
It’s really politics
Most discussions about ethics are really about politics, like “is it ethical to work for an oil company?”. The question assumes a neutral, unbiased judgement that oil companies are evil. But really, reasonable people disagree: some think such companies are evil, and others don’t.
Our system is based upon the principle that reasonable people disagree about important matters, like climate, environment, abortion, race, religion, unions, education, immigration, and so on. Politicians and parties on either side of these issues are equally legitimate, even if we hate them.
Most people reject this. In today’s polarized political climate, people can’t step outside the debate and say “I disagree”, but instead claim that the opposing side is unreasonable, evil, and illegitimate.
We can’t debate ethics unless we can rise above such political disagreements.
Here’s an example. I tweeted that if I worked for the RNC, I’d certainly quit because of politics. But I wouldn’t hang around and deliberately sabotage Trump’s campaign (such as by hacking RNC computers from the inside) because of ethics. Somebody replied with a disagreement, that we’d have an ethical duty to stop Trump, because he “is evil and is running a campaign to derail the country”.
Yes, yes, he is, but the problem is that many Republicans believe this same description applies to Biden. When I was at Lindell’s 2021 Cybersymposium, I talked with partisan IT people who claimed Biden was an “existential threat” to the country, and thus, their ethical duty was to stop him.
That person argued further, trying to come up with reasons that any objective person (Democrat or Republican) would use to distinguish Trump from Biden. That’s not the way it works. The fact is that those Republican IT workers saw Biden, not Trump, as the existential threat to our country.
When people can’t rise above such debates, then we can’t begin to have ethics discussions.
Ethics are defined by the example the good guys set
The above section contains a hypothetical that sounds like “both-sidesism”, pretending that both sides have equivalent arguments.
I’m not saying that. I’m not saying the arguments on both sides are equivalent (I agree Trump is objectively worse than Biden). I’m saying that beliefs are equivalent. For every Biden supporter who believes Trump is an existential threat to the country, there is a Trump supporter who believes the same of Biden. For every person who believes Trump has illegitimate business dealings with foreign countries, there’s somebody who believes the same of Biden (Hunter laptop, etc.).
Ethics are for both sides. They are clear, easy rules that the good guys agree to follow expecting the bad guys to follow them, too.
For example, consider full-disclosure in cybersecurity, whereby companies are transparent about their vulnerabilities instead of covering them up. Even the companies best at this (Google, Microsoft, Apple) are still sometimes caught trying to cover up vulnerabilities.
The reason is that employees convince themselves that full-disclosure is something bad companies should do, to stop them from being bad. But since they themselves are a good company, strict observance of ethics isn’t needed; it’s okay to have exceptions. Indeed, it would be irresponsible to be so transparent in this case, because the bad guys will misrepresent it.
A good example of such thinking is the Canadian Broadcasting Corporation tweet objecting to Twitter labeling them as “government-funded”, even though they get two-thirds of their funding from the government. They believe they are one of the “good guys”, better than Chinese- and Russian-funded media. But of course, Chinese and Russian state-funded media make the same claims. No matter how much the label hurts, the CBC shouldn’t be trying to evade this fact.
Yes, this is a really small example. There’s no big cover-up going on, with the Canadian government trying to silence anybody who points out CBC is state-funded. The point is that ethics discussions shouldn’t focus on the obvious bad guys, like Russian or Chinese state media. Ethics are defined by the example the good guys set. CBC needs to change their own Twitter description to include “state-funded”. We’d have more reason to trust them if they did that, because it labels them as somebody committed to truth, even when truth hurts. Trying to evade the truth looks very bad for a news organization committed to truth.
It’s the good guys who set the example they expect the bad guys to follow. It’s a social contract that constrains us as well as them. For example: we won’t hack your political candidates if you don’t hack ours. Inventing rules for why it’s okay for us but not for them completely defeats the principle.
By the way, I’ve seen this a lot. When facing difficult situations, people proclaim that they are the good, ethical guys, and thus the rules don’t apply to them. It’s the thing to watch out for in your organization. Everyone tries to shirk their ethical duty when it becomes difficult, but the most corrupt argument of all is “because we are the good guys”.
Most ethical arguments are self-serving
As a continuation of the above argument, we see that most arguments are self-serving.
It’s like how most people in jail believe they are “innocent”. It’s not that they claim they didn’t commit the actions. Instead, they’ve created convoluted reasons why their actions were justified. Most everyone in jail, including murderers and thieves, believes they are a good person who has been mistreated by the system.
A friend had his apartment burgled and an iPad stolen. Apple’s “Find My” feature located it at a nearby neighbor’s, who was holding the iPad in their hands when the police showed up to ask questions. Throughout the trial, the neighbor complained about how the system was treating them unfairly.
Most hackers go through the same thinking. There are always reasons to justify an attack against corporations, the government, or the rich. Even an average person becomes a legitimate target for things they say on social media, or simply for living in the wrong country. There is no hack that hackers don’t feel ethically justified in committing.
Ransomware teams in Russia feel this way. In their minds, it’s ethically allowable to hack those in foreign countries, especially adversaries like the United States. It’s also ethically allowable to hack those who don’t take cybersecurity seriously; the victims’ own weakness is to blame.
Professional ethics vs. abstract philosophy
Nothing derails ethics debates more than those who insist on academic philosophy. They claim you need to understand such things as Kant vs. Hegel to even begin talking about ethics.
This is nonsense.
They do have a point. When you debate ethics, you are going to re-invent those philosophical arguments — badly. You are going to make some statement about whether “the ends justify the means”, and the person who studied ethics in college is going to soundly defeat you in debate, to the point that you run away in embarrassment.
But it doesn’t matter. Academic philosophy has no good answer to “the ends justify the means”; it has only become expert at identifying bad answers to the question. The more academics study ethics, the less effective they become at producing coherent answers.
The thing is that practical, professional ethics are very different from academic ethics. The important questions aren’t academic but practical.
Should there be a lawyer-client privilege? Academics have nothing to add to this debate. It’s all very practical, based on the fact that lawyers wouldn’t be able to provide adequate service to clients if they could later be required to testify. Sure, there are hypothetical problems, such as when the client says “I’m going to murder somebody next Monday”, that need to be considered, but they happen so rarely that it doesn’t really matter what the answer is.
Should doctors be required to perform abortions that conflict with their religious beliefs? Different countries have different answers. It’s political, based upon practical concerns, and not one amenable to academic philosophy.
The best example is how every so often, drama erupts when a journalist publishes an email from somebody that was clearly labeled “THIS IS OFF THE RECORD”. This appears to violate the ethics of protecting sources.
No such ethics exist. Indeed, it’s quite the opposite. Journalists should identify sources whenever possible, so that the reader can hold sources accountable for the claims they make.
But sometimes, in order to report something important, a journalist needs to promise confidentiality. In that case, the ethics are that the journalist keep their promise and protect the source. The ethic here isn’t “protect sources” but “keep your promises”. If no promise was made, then the rule doesn’t apply. If somebody sends the journalist an email entitled “OFF THE RECORD” without such a promise ahead of time, then it isn’t off the record. If the journalist uses the email in a story, then the ethical thing to do (almost always) is to cite the source.
Consider this recent The Intercept piece about the Pentagon-Ukraine leak, asking “Why did journalists help the Justice department identify a leaker?”. It claims “If he’d shared the same classified materials with reporters, he would be tirelessly defended as a source”. This misunderstands ethics. Only those journalists who promised confidentiality would defend the source, the rest of the journalists would still try to out them. Certainly, they don’t need the in-depth research Bellingcat did, but outing the source isn’t inherently unethical. Everyone tried to discover Deep Throat from Watergate, for example.
The Intercept is notable here because they failed to protect Reality Winner as a source: they did a poor job of redacting documents meant to hide her identity. It’s like the classic Aesop’s fable about the fox who fails to catch a rabbit. The fox was only running for a meal; the rabbit was running for its life. The consequences to a source are dire, such as years in jail. The consequences to the journalist if their source gets outed are insignificant; they don’t protect sources as if their own lives depended on it.
Now let’s consider the classic Trolley Problem. Academics come to no good resolution, but from the point of view of professional ethics, it’s easily solvable.
If you’ll recall, the problem is that a trolley will kill 4 people unless diverted. But if you pull the lever changing tracks, 1 person will be killed. Do you flip the switch?
We already have practical experience with this in industry and the answer is that you do nothing. If you flip the switch, the family of the person who died will sue you for wrongful death. It doesn’t matter what Kant or Hegel says.
Consider “self-driving” features of cars. We are reaching the point where overall they are safer than human drivers, that they’ll save lives if we just enable them all the time. However, they still have bugs and problems that will cause deaths. In trolley terms, nobody blames them for the 4 lives that are lost when they aren’t used, but everyone blames them for the 1 life lost when they are used.
We saw this with the Toyota “unintended acceleration” issue. Toyota dramatically improved the safety of their brakes, which undoubtedly saved many lives, but were blamed for defects in those new brakes that might’ve caused the death of one person, costing them billions of dollars in payouts. No matter what you think the ethical answer should be, plaintiffs are going to grab all the emails in the company and find some that, taken out of context, hint that Toyota knew of the flaw and decided to cover it up to save $0.05 per vehicle while callously disregarding lives lost. Everyone knows big corporations are evil, and thus they’ll always be blamed for the 1 life lost and never credited for the 4 lives saved.
The point here is that professional ethics are not abstract, academic ethics, but very practical concerns for the profession.
Practically, our primary ethics are honesty and keeping promises. If you are given trusted access to a system, then you don’t violate trust.
We do have specific problems unique to our industry. Let’s say you are hired to do some sort of assessment/pentest, and the customer asks you to edit your report to remove some of the worst conclusions. Do you do what the paying customer asks? Or do you put your name to something you don’t believe?
This isn’t hypothetical but a common problem. Everyone in infosec who writes such reports regularly gets such requests.
Among the practical issues is the fact that the customer might be right. Maybe you’ve been too aggressive in phrasing things, making them seem worse than they really are. Infosec professionals are prone to this, exaggerating threats in order to make people listen. In such cases, it’s perfectly legitimate for the customer to ask you to modify the language.
Another practical issue is that maybe you’ve misheard the customer, interpreting their request as a demand that you lie, whereas in their mind they are asking for something completely different. Maybe they are asking for clarification rather than asking you to tone down your language.
The point is that professional experience poses the question. It’s not resolvable with academic debate. It’s not even resolvable with simple rules like “never lie”.
One solution is to ask the customer to put their request in writing, then attach the request as an appendix to the report. Resolving the ethical ambiguities of the situation now becomes their problem. In my experience, the customer either says “never mind”, because they recognize their request is illegitimate, or “okay”, in which case their request will still be in the report, albeit not as prominent.
The point of this section is simply that professional ethics aren’t academic ethics. They are based upon very practical concerns that professionals will experience, providing guidance on how to resolve them.
How to violate ethics
So let’s say that you’ve encountered a situation so intolerable that you have to violate ethics. Following the discussion above, let’s say that you think Trump is an existential threat to our country, and you have an opportunity to fight against it.
This is not hypothetical. Back in the 2016 election, Georgia Tech researchers who fight malware/hackers used their privileged access to DNS logs in order to attack the Trump campaign. They published an “October Surprise” claiming improper ties between Trump and a Russian bank. They tracked a Russian phone they believed to be associated with the Trump campaign.
From the discussion above, you can probably guess that I think this is a violation of ethics, and that nobody should trust Georgia Tech in the future, or the researchers involved.
But at the same time, Trump really is an existential threat to this country. There’s a good chance that in 2024 he’ll win the presidency by convincing Republican-controlled legislatures to throw out the vote and put him in power. If that were to happen, you can be certain I’ll be violating all sorts of ethics in order to fight such a power grab.
The resolution to this ethical conflict is whether you are willing to be public about it. That’s why I’m public right now about what lines I’m willing to cross. The hypothetical with Trump is one, the government outlawing encryption is the other. Had I seen the mass metadata collection and Google spying that Snowden saw, I’d like to think I’d’ve leaked those documents (and stood publicly behind my actions).
The biggest problem with the Georgia Tech researchers is that they attempted to remain anonymous. It took years for their identities to become known. They tried to benefit from both sides, presenting themselves as researchers who could be trusted with data while at the same time leaking it in pursuit of their personal politics. Trump wasn’t even the threat he has since become, simply an average Republican candidate.
People have the same confusion about “whistleblowers” as the journalistic “anonymous sources” mentioned above. They think that whistleblower anonymity is some higher ethical duty. It’s not.
Certainly, some organizations want to encourage whistleblowing. The U.S. government doesn’t want corruption; it wants to catch corrupt individuals who abuse their power in government. The government as a whole wants whistleblowers to come forward in such situations. Under specific conditions, it promises protections, sometimes including anonymity, in order to encourage whistleblowers.
But that’s only a means to an end. It doesn’t mean every whistleblower automatically deserves such protections. That’s especially true when what they reveal is political, or driven by disgruntlement.
Most leaks come from disgruntlement. After Snowden, the government analyzed his motivations to prevent such people from getting clearance in the future. Their model isn’t what Snowden’s supporters claim, that he saw something so egregious he had to act. Instead, their model is somebody who is disgruntled. You can now be denied clearance simply for being overqualified for the job, because that’s going to make you disgruntled.
Assuming whistleblowers deserve protection means that those on the other side deserve protections, too. It means that the Pfizer whistleblower who claimed the mRNA vaccine didn’t work deserves protection. It means Jack Teixeira deserves protection for trying to derail support of Ukraine. Both Chelsea Manning and Teixeira leaked things opposing involvement in foreign countries, but from largely opposite sides of the political spectrum. The ethics of both should be the same. Politics shouldn’t become ethical justification.
If you aren’t willing to risk your life, jail time, or public condemnation, then your cause probably isn’t righteous enough to justify violating ethics. Instead, what you are doing is probably just politics or disgruntlement, something you would criticize if the opposing political side did it. If you are willing to stand publicly behind your actions and accept those consequences, that’s a different matter.
The Declaration of Independence is a great guide to this ethical question. The signers point out that it’s illegitimate to do what they were doing for “light and transient causes”, that they weren’t malcontents with a few grievances. They make it clear that they should be held accountable for their actions, both by public opinion and through their “lives, fortunes, and sacred honor”.
Reality Winner’s grievances were “light and transient”. The document she leaked didn’t actually say that Russia was subverting elections. It really only said that an election company experienced phishing attempts. But all companies experience phishing attempts, thanks to things like ransomware. Election companies or county systems getting targeted doesn’t point to a conspiracy.
For Snowden, I think the answer is yes, what he revealed was worth going to jail for. For Reality Winner, I think the answer is clearly no, that there was nothing here worth violating one’s ethical obligations.
The point of this section is simply to provide guidelines for when it’s okay to violate ethics. It’d better be worth the jail time. It’d better be worth the public condemnation. If you aren’t willing to identify yourself, to come out in public and say “I did this”, then you probably aren’t justified in crossing the ethical boundaries.
Conclusion
The goal of this document isn’t to argue any specific ethics, but to describe the shape of such arguments.
Almost any discussion will be derailed by politics, academics, and self-serving arguments.
Any discussion should start with boring experience, like that time I was asked to redact a report, or that time I discovered my customer was violating the law. If we aren’t talking about boring practicalities like this, then it’s a useless debate.
Lastly, ethics are mostly a straitjacket, and that’s by design. But when we violate them, it can’t be for light and transient reasons; it must be for something that comes at great cost to ourselves, so important that we are willing to pay those costs. If you can’t publicly and proudly declare “Yes, I hacked Russia to help defend Ukraine”, then you probably shouldn’t be doing it.