Don’t let your ethical judgement go to sleep

We need to build organizations that are self-critical and avoid corporate self-deception.

By Mike Loukides
June 20, 2018

Brian LaRossa’s article “Questioning Graphic Design’s Ethicality” is an excellent discussion of ethics and design that pays close attention to designers’ professional environments. He’s particularly good on the power and dependency relationships between designers and their employers. While programmers and data scientists don’t work under the same conditions as graphic designers, most of what LaRossa writes should be familiar to anyone involved with product development.

In untangling the connection between employment, power, design, and ethics, LaRossa points to “Ethical Fading: The Role of Self-Deception in Unethical Behavior,” by Ann Tenbrunsel and David Messick. Tenbrunsel and Messick write:


Codes of conduct have in some cases produced no discernible difference in behavior. Efforts designed to reduce unethical behavior are therefore best directed on the sequence leading up to the unethical action.

This is similar to a point that DJ Patil, Hilary Mason, and I are making in an upcoming report. Oaths and codes of conduct rarely change the way a person acts. If we are going to think seriously about data ethics, we need tools, such as checklists, that force us to engage with ethical issues as we’re working on a project.

But Tenbrunsel and Messick are making a deeper point. The “sequence leading up to the unethical action” isn’t just the product development process. An “unethical project” doesn’t jump out from behind a tree and attack you. It’s rare for everything to be going just fine, and then your manager says, “Hey, I want you to do something evil.” Rather, unethical action is the result of a series of compromises that started long before the action. It’s not a single bad decision; it’s the endpoint of a chain of decisions, none of which were terribly bad or explicitly evil.

Their point isn’t that bad people make ethical compromises. The point is that good people make these compromises. Many motivations cause ethics to fade into the background. If what you’re asked to do at any stage isn’t obviously unethical, you’re likely to go ahead with it, for any number of reasons: you would rather not have a confrontation with management, you don’t trust assurances that participation in the project is a choice, you find the project technically interesting, you have friends working on the project. Developers, whether they’re programmers, data scientists, or designers, are dependent on their employers and want to move ahead in their careers. You don’t go ahead with an unethical project “against your better judgement”; you go ahead with it without judgement because these other motivations have suspended your judgement.

This is what Tenbrunsel and Messick call the “ethical fade.” They go on to make many excellent points: about the use of euphemism to disguise ethical problems (words like “externalities” and “collateral damage” hide the consequences of a decision); the “slippery slope” nature of these problems (“this new project is similar to the one we did last year, which wasn’t so bad”); the influence of self-interest in post mortems; and the way different contexts can affect how an ethical decision is perceived. I strongly recommend reading their paper. But it’s even more important to think about our own histories and to become aware of how we have put our own ethical sensibilities to sleep.

It’s easy to imagine how the “ethical fade” takes place. A group of developers might form a startup around a new technology for face recognition. It’s an interesting and challenging problem, with many applications. But they can’t build a business model around tagging family snapshots. So, they start accepting advertising and make some deals with advertisers around personal information, perhaps selling contact information for potential customers whose photos show them using the advertiser’s product at a party. That’s questionable, but easy to ignore. There are plenty of advertising-based businesses, and that’s how companies without business models support themselves. Then the stakes grow: a lucrative opportunity arises to combine face recognition with other tracking technologies, perhaps in the context of a dating app. That sounds like fun, right? But without asking a lot of serious questions about how the app is used, and who will use it, it’s a gateway for stalkers. Another version of the app could be used to track protestors and political dissidents, or to target individuals for “fake news” based on whom they’re associating with.

Applications like this aren’t built because people set out to be evil. And they’re not built because face recognition inevitably leads to ethical disaster. They’re built because questions aren’t being asked along the way, in large part because they weren’t asked at the beginning. Everyone started out with the best of intentions. Finding a business model for the would-be unicorn startup was more pressing than the possibility that their app would harm someone. Nobody wanted to question their friends when a potential client wanted to give them a big check. There were undoubtedly many smaller decisions along the way: the language they used when talking about selling data, hiring for cultural fit, and creating a group monoculture that didn’t have anyone sensitive to the risks. The developers aren’t evil; they’ve just put their ethical judgement to sleep (“broad is the way that leads to destruction”).

Ultimately, the only way to prevent self-deception is to recognize its pervasive and universal presence. We have to learn to become self-critical, and to realize that our motives and actions are almost never pure. But more than that, we need to build organizations that are self-critical and avoid corporate self-deception. Those organizations will use tools like checklists to ensure that ethical issues are discussed at every stage of a product’s life. They will take time for ethical questioning and create space for employees to question decisions—even to stop production when they see unanticipated problems appearing.

That is the biggest challenge we face: how do we address our own tendency to self-deception and, beyond that, how do we start, encourage, and maintain an ethical conversation within our organizations? It’s not an easy task, but it may be the most important task we face if we’re going to practice data science and software development responsibly.
