Chapter 4. Ethics of Behavioral Science

Researchers at Princeton developed an automated tool for searching websites for dark patterns: “user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions.” In analyzing 11,000 websites, they found 1,841 dark patterns. They even found 22 third-party companies who offer “dark patterns as a turnkey solution”; in other words, digital manipulation as a service.1

The term dark pattern was coined by UX specialist Harry Brignull, who categorizes 11 different types, from confirmshaming (guilting the user into opting in) to privacy Zuckering (you can probably guess). He hosts a “Wall of Shame” of companies clearly trying to trick their users and demonstrates how Amazon makes it nearly impossible for users to discover how to cancel their accounts, which Brignull nicely calls a “roach motel”: you can enter, but you can never leave.

Sadly, cases of such deceptive techniques aren’t hard to find in practice. A recent New York Times exposé, for example, detailed how the company thredUP generated fake users to make it look like other people had recently purchased a product and saved money, in order to encourage real customers to make a purchase themselves.2

There’s a rightful backlash against the application of psychology and behavioral techniques in the design of products and marketing campaigns. Products that seek to manipulate users—to make them buy, to addict them, or to change something deep and personal about their lives, like their emotions—have started to receive the negative scrutiny they deserve.

In April 2019, Senators Mark Warner of Virginia and Deb Fischer of Nebraska introduced legislation they called the Deceptive Experiences To Online Users Reduction (DETOUR) Act to make it illegal for large online services to:3

  • “Design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data”

  • “Subdivide or segment” users into groups for “the purposes of behavioral or psychological experiments” without informed consent

  • Operate without an independent review board for the approval of behavioral or psychological experiments

The effort by Senators Warner and Fischer is clearly targeted at social media, search, and ecommerce companies, which have been some of the worst offenders in terms of data privacy and tricking individuals into giving their consent to data usage. But the work on dark patterns, and sadly, the daily experience of anyone with an email inbox or mobile phone, shows that the deception and abuse don’t stop there. And we’re kidding ourselves if we think we’re not part of it.

Thus far in this book, we’ve talked about how to help our users succeed at their own goals; this chapter takes a different angle and seeks to accomplish four things:

  • To show the extent of unethical manipulation of users

  • To think about where things have gone wrong

  • To show that each of us is as likely to be unethical as anyone else given the right circumstances

  • To look at ways to clean up our act

Digital Tools, Especially, Seek to Manipulate Their Users

The Princeton study nicely quantifies how common dark patterns can be—but its analysis focused specifically on shopping sites in 2019. Is this a widespread problem? There don’t appear to be other large-scale quantitative studies (yet), but many government and watchdog groups have analyzed the practices of major digital companies and found them to be, nearly across the board, manipulative. For example, in a 2018 report, the Norwegian Consumer Council analyzed how Facebook, Google, and Windows discouraged people from exercising their privacy rights. Google has come under fire for obtaining consent for location tracking through deception. ProPublica has shown how Intuit has tricked people into paying for tax filing—even when it is free. Apple has even changed its App Store guidelines to prevent apps from tricking people into subscriptions.4

And while these issues have gained prominence recently, they clearly occurred before. Do you remember when people trusted Facebook, or at least didn’t think it was evil? One of the initial chinks in their armor came with an experiment they ran to manipulate user emotions, reported by the New York Times, Wall Street Journal, and others.5 The Times story started with “To Facebook, we are all lab rats,” and went downhill from there:

Facebook revealed that it had manipulated the news feeds of over half a million randomly selected users to change the number of positive and negative posts they saw. It was part of a psychological study to examine how emotions can be spread on social media…

“I wonder if Facebook KILLED anyone with their emotion manipulation stunt. At their scale and with depressed people out there, it’s possible,” the privacy activist Lauren Weinstein wrote in a Twitter post.

This study was part of a collaboration with academic researchers and was widely condemned—even if it was misunderstood and blown out of proportion.6 After that study came Cambridge Analytica and many more high-profile stories of broken trust with Facebook and other major companies. LinkedIn paid millions of dollars to settle a class action lawsuit over the tricks it used to access people’s contact lists.7

I’d challenge the reader to think of three major digital companies that don’t try to trick you into giving consent to the use of your data, sign you up for things you don’t want, or encourage you to binge on their products despite your better judgment. As the Financial Times nicely summarized, for many companies “manipulation is the digital business model.”8 And despite the recent negative attention, the unfortunate revelations continue; for example, Flo Health’s app reported its users’ intentions to get pregnant and the timing of their periods to Facebook, without informing them.9

But it’s also important to note that it’s not a problem of “those big companies over there.” In the broader product development, marketing, design, and behavioral science communities, we brag quite publicly about the ways in which we can manipulate users into doing what we want. At numerous behavioral marketing conferences, for example, speakers talk about their special expertise at changing user behavior by understanding the psychological drivers of customers and using that to drive a desired outcome. They often throw around some behavioral techniques like peer comparisons and claim a tremendous rate of success for their clients.

Similarly, marketing and design companies tout their ability to change user purchase behavior, without any concern or discussion about whether the products being sold are appropriate or wanted by the end user. One example (among many) from the field is that of System 1 Group, a marketing agency named after a core psychological (and behavioral) model of decision making, which advertises on its website how it uses “behavioral and marketing science to help brands and marketers achieve profitable growth.” As written in its promotional book, System1: Unlocking Profitable Growth,10 “Designing for System 1 (i.e., avoiding conscious thought among customers) can also boost the profitability of promotions and point-of-sale materials. To increase rate of sale, shopper marketing must help people make quicker, easier, more confident decisions.”

System 1 isn’t, I believe, a particularly egregious example. I personally know some of the people there and at similar companies in the field; they are reasonable people who are trying to catch the wave of interest in psychology (and behavioral science in particular) to do marketing and advertising more effectively for their clients. That wave of interest—and manipulative techniques that accompany it—wasn’t something they created.

While it’s easy to find examples of digital companies (or companies in the digital age that advertise online) doing questionable things, this is not a new problem. One of the great pioneer researchers in the field, Robert Cialdini, learned about the strategies and tricks of persuasion by doing field research with used car salesmen and other in-person persuaders,11 and many analyses have been written about the physical and psychological designs of casinos.12 What’s different is that behavioral science is both documenting and explicitly contributing to these efforts—especially in digital products.

Researchers and other authors like me have actively spread these techniques. In our community, we write books on topics such as how to:13

  • Make games and apps “irresistible” so you can “get users…and keep them”

  • Build habit-forming products based on “how products influence our behavior”

  • Apply behavioral research to adjust packaging, pricing, and other factors to create consumption

  • “Harness the power of applied behavior psychology and behavior economics to break through these nonconscious filters and drive purchase behaviors”

I’ve known some of these authors over the years, and they aren’t bad people; they are sharing techniques that can help product designers make better products—ones that people enjoy using and want to use. They seek to develop relevant and engaging marketing campaigns that are tailored to their audience’s interests. But empirically, the techniques we talk about are used in many other ways as well, which aren’t so beneficial.14

My own writing—including the first edition of this book—clearly falls into this category as well. We may want to help users, but we shouldn’t be blind to what has actually happened.

Where Things Have Gone Wrong: Four Types of Behavior Change

In the Preface and throughout this book, we’ve talked about two different types of behavioral products:

  • Behavior change is the core value of the product for users

  • Behavior change is required for users to extract the value they want from the product effectively

In the first case, products explicitly seek to help users change something about their lives, like exercise bands, sleep habit apps, and mindfulness apps. In the second case, products use behavioral techniques so the user can be more effective at using the product itself; these types of applications and products vary from making app menus manageable to helping users customize their displays and focus on what they care about.

Both of them have something clearly in common: they seek to help the user do something they already wanted to do. That’s the focus of this book: voluntary and transparent change. There’s another type of behavior change that hasn’t been our focus thus far, but now must be:

  • Behavior change is about helping the company achieve something, and the user doesn’t know about it or doesn’t want it.

From what I’ve seen in the industry, that’s the most common type of all, and it’s time to call a spade a spade. Our industry uses consumer psychology, behavioral science, and whatever other techniques it can to push people into doing things they don’t fully realize and wouldn’t want to do if they were fully aware.15

Facebook’s emotion study? That was about Facebook, not about helping its users. Marketing campaigns that use psychology to push a product (regardless of what the product is and who the audience is)? That’s clearly about helping the business’s profits, without thinking about the user and their goals and needs. If we refer to the issues that make people uncomfortable (effective persuasion or coercion, often hidden from users), that study rings alarm bells on all counts.

To be fair, there are many cases where the unethical use of behavioral techniques isn’t intentional: by accident or through competitive pressure, firms adopt an approach that tricks their users without setting out to do so. Teaser rates are an example, where competitive pressures and historical practice in the credit card industry lead to unsustainably low opening interest rates. Such teaser rates are only viable for a company because they are replaced with much higher rates later (as sophisticated buyers know, but the unsophisticated fall for), or because user behavior triggers punishingly high rates that make up the difference in profits. In theory, credit card companies would be better off if they all ditched teasers and used more transparent pricing, but if any single company did so without the others, it would lose market share. Manipulative techniques don’t always signal malice; still, unintended but obvious manipulation is our responsibility just as intentional manipulation is.16

So, is the solution simply “don’t do that”? If only it were that easy. Instead, we have bad actors that poison the water for everyone else, products that seek to be addictive, and problems of incentives in our industry that lead us back to problematic uses.

Poisoning the Water

Applied behavioral science has a reputation problem; there’s no easy way for users to distinguish between “good” and “bad” actors—between companies that are using behavioral science to help them and those that are using it to hurt them. And, especially when companies overhype how effective behavioral science is at changing behavior (as marketers often do), people can assume that behavioral techniques are inherently coercive—that is, able to make people do things they don’t want. What else should people expect from titles like The Persuasion Code: How Neuromarketing Can Help You Persuade Anyone, Anywhere, Anytime and How to Get People to Do Stuff: Master the Art and Science of Persuasion and Motivation?

That’s based on hype, though, and not the real state of the research. The main debate in the research community is over techniques that don’t replicate (i.e., that don’t seem to have a real effect at all), that may not generalize (i.e., all effects are context specific, and we don’t fully understand in which contexts a particular technique helps or not), or that backfire (i.e., something that has a positive effect in one context but actually makes things worse in another). That’s why throughout this book we talk about the importance of experimentation: behavioral science has an incredible set of tools, but they aren’t magic wands. The hype in the industry makes it seem like our tools are magic, and that makes us all look bad.

Beyond telling thoughtful companies to simply stop using behavioral science in ways that users won’t approve of, we have a problem of how to stop the bad actors who don’t want to stop, or at least how to differentiate everyone else from them.

Addictive Products

In addition to products that go against what users want in the short term, there’s another highly problematic category of uses:

  • Behavior change helps the user do something that they currently want to do, but that we know they are likely to regret in the future.

What’s an example of this? Any product that seeks to addict or hook its users without their asking for it, from mobile games to social media. We can see the backlash and discomfort in the field, from Ian Bogost’s labeling of cell phones (BlackBerry, at that time) as the “Cigarette of This Century” to more recent stories in the New York Times, the Washington Post, and on NPR.17 These products can harm users directly (as cigarettes harm people’s lungs) or indirectly by dominating what’s known as the attention economy:18 eating into their users’ time and attention to the point that they crowd out meaningful interactions with friends, family members, and others.

Now, the term addict is used loosely in the field and often doesn’t refer to what a medical doctor would call addiction. There’s also healthy debate on whether technological products are actually as addictive as some researchers say.19

From the perspective of those of us who design for behavior change, the point remains, though: if a product or a product designer tries to addict users (even if true addiction can’t be achieved in the medical sense), there’s probably something wrong. The language in the field of designers seeking to hook people is disturbing—to the extreme of Mixpanel’s congratulatory report called “Addiction.”20 Digital products seek to hook people on something they want right now (even if they may not have wanted it before the advertising campaign or sense of FOMO took root) but that hurts them in the long run. Naturally, the individual user makes a series of choices that lead to that bad outcome. As a field, we should take responsibility for our actions; that doesn’t imply that others shouldn’t take responsibility for theirs as well.

So we could simply say, “Don’t addict people.” And indeed, some brave voices in the field do, like Matt Wallaert, chief behavioral officer at Clover Health.21 But even more than products that explicitly go against their users’ wishes, this is a hard one to tackle. There’s an easy path to self-justification, and the business incentives are huge: products that really could hook their users would be immensely profitable.

That brings us to the question of incentives. Simply put, would a company avoid designing for behavior change if that means hurting their business? When we would objectively see an action as ethically dubious, would the product managers, designers, and researchers working on the project see it as such in the moment of action?22 It appears that, in many cases, the answer to these questions is no. To understand and address these challenges, let’s take a short detour from the ethics of behavioral science into the behavioral science of ethics.

The Behavioral Science of Ethics

There is significant research literature on how ethical behavior is influenced by our environment, both explicitly in behavioral science and in the older social psychology research tradition.

Researchers find that our environment impacts not only everyday behavior but also moral behavior. There is a long history of work showing how factors in our environment shape whether we act responsibly or not; for example:23

  • When people hear the cries of someone having an epileptic seizure in another room, the more people who hear it, the less likely anyone is to respond.

  • People are more likely to help when the person requesting help gives a reason, even a meaningless one, rather than simply asking without a reason.

  • People are more likely to cheat on a test when they can’t be caught, when they see others cheat, and when they can rationalize it as helping someone else.

My personal favorite is the story of the seminary students.24 In that study, researchers had seminary students do an activity, at the end of which they had to go to another building (not knowing that the travel to the other building was, in fact, the key part of the study itself). The researchers varied the degree of urgency with which the students were asked to move and varied the activity the students undertook before traveling to the other building. In one version of the pretravel activity, the students prepared to discuss seminary jobs; in another, they prepared to discuss the story of the Good Samaritan. The requests to travel had one of three levels of urgency. In each case, the seminary students passed by a man slumped in an alleyway. He moaned and coughed, and the researchers had observers record whether the seminary students would stop and help the individual.

The level of urgency mattered—the more urgently the student was supposed to reach the other building, the less likely they were to stop. The pretravel activity (thinking about the Good Samaritan story) did not. The ineffectiveness of thinking about the Good Samaritan story makes the research more dramatic and interesting, but the truly important finding was how simply being asked to hurry changed the behavior of presumably moral and helpful people. In particular:

  • In the least-urgent situation, 63% of the people helped the man slumped in an alley.

  • In the medium-urgency situation, 45% stopped and helped.

  • In the highest-urgency situation, 10% did.

A temporary and, in the grand scheme of things, largely unimportant detail (whether the person was asked to hurry or not) had a massive effect on the person’s behavior. To put it bluntly, these things shouldn’t matter—not if we’re good and thoughtful people, right? Yet they do. And as much as we might condemn the students in this famous study, I’m sure we can all remember similar times in our own lives—when we had something on our minds and didn’t take the opportunity to help someone in need.

The research on moral behavior and how it’s shaped by our environment ranges from the comical to the truly troubling. It raises serious concerns: how can people be ethical in one situation but unethical in another situation that is only slightly different?

It also raises questions about what it means to be a good or moral person. Gil Hamms comes to this conclusion about avoiding the evil within: “we should seek out situations in which we will be good and shun those in which we won’t,” or as Kwame Appiah puts it:

A compassionate person can be helped by this research, by using it to provide a “perceptual correction” on how we see the world, and using it to reinforce the good in our behavior and avoid the bad.25

We’ll Follow the Money Too

What does this literature tell us? The first lesson is that good intentions aren’t enough; people’s environments affect their ethical behavior just as they affect other areas of behavior. And “people” includes us (heck, most of us aren’t as ethical as seminary students to begin with). Thinking otherwise requires a potent combination of arrogance and self-deception.

What environment are we in? The environment in which we apply behavioral science is, by and large, not directed at helping individuals flourish and prosper. Many companies sincerely want their users to be happy and successful, but their first priority in applying behavioral science is to increase profits, either for themselves directly or by serving as consultants and providers to other firms. The most money is to be made either by using behavioral science to increase the profitability of products (regardless of the user’s interests and needs) or by hooking users on products that look interesting in the short run but can cause significant downsides in the long run.

It’s neither a new discovery nor necessarily a negative thing that companies want to increase their profits; both good and bad come from our system. There is a problem, however, when those of us in the product, design, and research communities ignore the fact that we are affected by our environment. We should expect ourselves to follow the money just like anyone else and to use behavioral science in unethical or questionable ways. We shouldn’t be naïve about how our behavior will diverge from our intentions.

The second lesson is much more hopeful, though. Despite our impressive ability to self-deceive and the many ways in which our environment can nudge us to act unethically, we can also design our environment to encourage ethical behavior—that is, to turn our intentions to act ethically into action.

A Path Forward: Using Behavioral Science on Ourselves

How might a company or individual change their environment to support ethical uses of behavioral science? We can find many such techniques once we start to think about the problem as a behavioral one; in particular, as a gap between our intentions and our subsequent actions.

Assess Intention

As with any intention–action gap, the first question we should answer is whether we intend and prioritize helping users succeed. In other words, is the company actually concerned with ethical behavioral science as we’ve defined it here; that is, does the company want to help the end user change behavior in a transparent and voluntary way?

This isn’t a glib question, nor one where the opposing side is evil or full of bad people. Many companies find their true north in accomplishing something that’s never been done before or in providing stable jobs for their employees. Similarly, most consulting companies are first and foremost concerned with providing value to their clients and not in judging what that means for the end user. These aren’t inherently bad companies; they are just companies for which the rest of this section won’t be relevant. Behavior change, even within our own companies, should be voluntary and transparent.

Assess Behavioral Barriers

Your company might already be applying behavioral science, and you might have a sense of where trouble could be in the future (or present). If the challenge is one of not taking a particular ethical action, debug it with the CREATE Action Funnel we’ve used throughout this book. If the challenge is one of existing and problematic habits, look to the cues of those habits and disrupt them. Above all, check the core incentives. Despite all of the nuance that behavioral science can provide about how people’s decisions are shaped by social cues and other factors, often the simplest economic reason is the best place to start: we do what we’re paid to do. If your company hasn’t started applying behavioral science, but you’re concerned about where things might go in the future, again, basic incentives (not intentions) are often the best place to start.

The specific barrier or challenge matters: there isn’t a magic wand here any more than there is in any other part of behavior change work. That being said, we can point to some techniques that might help, depending on the particular behavioral barriers you face in your company.

Remind Ourselves with an Ethics Checklist

A simple way to keep ethical uses of applied behavioral science front and center is to remind ourselves, such as with a humble checklist. What do you consider important in a project? Write it out. Condense it into a few questions to evaluate each project. That’s something we’ve done on my team at Morningstar. Then, with that checklist or set of questions, print it out, post it prominently, and if possible, set up a process where other people outside of the team review it.

As with other behaviors, often we simply fail to take action in the way we desire because we get distracted by other things and lose focus or forget; a checklist helps fix that.

Several groups in the field have drawn up ethical guidelines, from the Behavioral Scientist’s Ethics Checklist by Jon Jachimowicz and colleagues to the Dutch Authority for Financial Markets’ Principles Regarding Choice Architecture.26

Here are some rules that I find useful for this purpose:

  • Don’t try to addict people to your product. This should be obvious, but clearly needs to be reiterated.

  • Don’t harm your users. The phrase I use with the team is to always keep our work “neutral to good”: either explicitly helping users or doing something that they don’t mind and that won’t cause harm. It can be difficult to know for sure that you’re helping the user, but if even your own team doesn’t think the work will help, or if people wonder whether it might be harming users, that’s a big warning sign.

  • Be transparent: tell users what you’re doing. Directly telling the user what you’re doing shouldn’t cause a problem and is a good, simple check on excess. A related technique is to imagine that your work becomes front page news—would your users be upset? Would your company survive? This technique is useful, but it’s hypothetical. Even better is to tell them up front.27

  • Make sure the action is voluntary. The user should be able to decide whether or not to participate in the product or service. For example, an app that’s required at work to monitor employee productivity isn’t optional; the job may be “optional,” but the app isn’t.

  • Ask yourself whether you’d want someone else to encourage you to use the product. Is this product really designed to help you? Would you encourage your child or parents to use it?

  • Ask others, especially strangers, if they would trust the application.

Create a Review Body

Checklists are great, but not very valuable if you don’t use them or you get into the habit of marking off all the questions by rote. Having an external review body—external to your team, or even external to your company—can help here. In the academic community, the Institutional Review Board (IRB) serves this function, with an independent board that reviews the ethical considerations of any research study.

Most private companies do not work with an IRB or other external body. But they can create an internal one without much difficulty. However, remember the research on self-deception: the more closely aligned the review body is with the people being reviewed (the more the reviewers see the reviewed as friends or colleagues to support), the less valuable it is. In other words, it helps to have strangers on the review body. Or, if strangers can’t be found, try adding a few jerks!

Remove the Fudge Factor

One of the key lessons in the self-deception research of Ariely, Mazar, and others is that self-deception relies on a “fudge factor”: the capacity to bend the rules a bit and still see ourselves as honest people.28 That fudge factor is largest in ambiguous situations (where it isn’t clear whether you really are bending the rules or not) and in situations where it’s easy to rationalize the rule-bending (when you’re helping others or when you see others bend the rules—as previously mentioned).

To limit self-deception, we should try to limit the fudge factor, especially ambiguity and rationalization. To remove ambiguity, we can make the rules crystal clear with a plain-language internal policy, or we can create feedback processes to frequently check whether we’ve strayed from our internal rules. To remove rationalization, we can intentionally set an ethical reference group that is the best of the best; we can ensure that senior leaders set the tone not only that unethical behavior won’t be tolerated, but that it won’t help the company or other employees in the long run.

Raise the Stakes: Use Social Power to Change Incentives

Another technique we can use is to intentionally raise the stakes against straying from an ethical path by telling others about our commitments. In other words, don’t just set a checklist: tell your customers, your employees, and your friends and family that you’ve made a particular commitment. Tell them the rules you’ll follow for designing products and applying behavioral science.

If your company has a reputation for honesty (as ideally it does), this means using that reputation to help keep yourself in line; it’s at risk if you stray. As part of this technique, welcome attention—being transparent about how you’re applying behavioral science is good in its own right, but it also raises the stakes against straying. If it fits your company culture, you can also be a bit preachy: call out the abuses of other groups in the field. In addition to perhaps helping clean up our field, this has another effect: because people really dislike hypocrites, it makes straying very risky.

Remember the Fundamental Attribution Bias

Working on behavior change in a company adds an extra layer of complexity: within the company, there can be a thoughtful range of opinions and priorities. Colleagues may not like your approach to improving ethical behavior, thinking it unnecessary or actually counterproductive. And when they don’t like it, it’s all too easy to see them as naïve, deceived, or unethical themselves. In other words, to conclude that there’s something wrong with them. I find that, as a behavioral scientist, it’s valuable to start with the assumption that we’re all wrong; that the other person seeks to do good but is just as imperfect as I am.29 It’s a small step toward countering the fundamental attribution bias: our tendency to assume that other people’s “bad behavior” is because they are bad people, while excusing away our own.

Use Legal and Economic Incentives as Well

Behavioral science provides a great set of tools to close the gap between intention and action. Sometimes, though, there really are bad actors who have no intention of doing right by their customers, their employees, or others.30 And in these situations, we shouldn’t be afraid of more traditional techniques for regulating abusive practices: legal penalties and economic incentives. Legal approaches include the proposed DETOUR Act, which, at the time of writing, is imperfect but could be revised and restructured to provide thoughtful legal oversight and penalties where they are lacking. Economic incentives might include taxes on the use and transfer of personal data (making some deceptive practices less lucrative).

Why Designing for Behavior Change Is Especially Sensitive

Thus far, we’ve talked about abuses in the field and how to potentially counter them. However, are these abuses particular to behavioral science? I’d argue that they really aren’t. We shouldn’t design products that addict people—whether we use behavioral science or not. We shouldn’t trick users into buying things they don’t want, nor into giving uninformed “permission” to hawk their data.

Behavioral science offers a set of ideas for design (what to change) and measurement (how to know if it worked). Where the idea for a design comes from shouldn’t matter as much as whether the targeted change (the ends) and how you do it (the means) are themselves ethical. In other words, the simplest answer to the question “When is it ethical to use behavioral science to try to change user behavior?” is this: “in the same situations where it’s ethical to use non-behavioral techniques to try to change user behavior.” Unethical efforts shouldn’t be tolerated either way, and ethical designs should be fine with or without a layer of behavioral science.

While theoretically correct, that answer isn’t terribly helpful. There is something different about behavioral science in product design that makes people uncomfortable. We can think about and understand why. From the perspective of a user, four factors are likely at work:

Persuasion
It’s inherently unsettling to think that any product is trying to “make” us do something.
Effectiveness
It’s especially unsettling when a technique appears to be universally effective; that is, when it compels us to act and we have no control over it.
Transparency
It’s worse when the technique is hidden; we never know it’s happening or know only after the fact (and thus feel tricked).
Attention
Behavioral science has the word behavioral in it and talks explicitly about trying to change behavior. That draws our attention to it, whereas in another case, we might not know about it.

The first three factors—persuasion, effectiveness, and transparency—aren’t really issues of behavioral science at all. We should always be concerned about them; doing something against someone’s will (effective coercion), especially without their knowledge (i.e., lacking transparency), should raise hard questions.

In terms of attention, though, behavioral science is special; people pay more attention to, and thus are more unsettled by, sketchy uses of behavioral science. In the design and research community, we should embrace that critical attention. Rather than try to dismiss it as behavioral science being treated unfairly, let’s use this attention to have an honest conversation about persuasion, effectiveness, and transparency.

If there’s something that we’re doing that relies on people not paying attention, that’s a pretty clear sign we shouldn’t be doing it. And yes, there are absolutely such cases—because of behavioral techniques or not—where people are rightfully upset once they become aware of how products were designed and function. In other words, let’s apply behavioral science to ourselves: welcome the attention and scrutiny our field gets, and use it to raise the stakes against unethical conduct. Because sadly, we need it.

A Short Summary of the Ideas

Here’s what you need to know:

  • While there are always gray areas, ethical behavior change is not a subjective, squishy thing. There are manipulative, shady practices in our industry, and they are rightfully being called out by journalists and regulators. We need to clean up our act.

  • Other disciplines also have manipulative practices—Cialdini, for example, learned from used car salesmen—but designing for behavior change is drawing particular scrutiny because we do so intentionally and at massive scale. We should welcome that scrutiny; obscurity is never a solid ethical defense.

  • We do need guidelines for our work. For example:

    • Don’t try to addict people to your product.

    • Only apply behavioral techniques where the individual will benefit.

    • Tell users what you’re doing.

    • Make sure the action is optional.

    • Ask yourself and others if they’d want to use the product.

  • We’re all human though, and guidelines aren’t enough. We will fudge things and stretch the rules just like anyone else. Applying behavioral science on ourselves means:

    Fix the incentives
    If you’re paid to drive sales, you’re going to drive sales. If there aren’t clear goals or incentives to ensure clients will benefit from the product or service, then it’s too easy to fall into murky territory.
    Draw bright lines
    Ensure that whatever guidelines you set are straightforward and clear, so any reasonable observer can tell whether they are being violated or not.
    Set up independent review
    Is there a third party, not on your team, who reviews the applications of behavioral science in your work?
    Support regulation
    Yes, I said it. While bills like DETOUR are flawed, some regulation is coming, and it’s better that we make it thoughtful and effective. Like it or not, the best way to align incentives, draw bright lines, support independent review, etc., is to hold our organizations legally liable for not doing so. Regulation and penalties force attention to the issue.

Avoiding coercion doesn’t mean that you must encourage users to do anything they want to do. Your company will have, and must have, a stance on the behaviors it wants to encourage. “Dieting” and “eating everything you want” aren’t two equally valid options for weight loss. One works (sometimes) and the other doesn’t. You can talk about and be up front about that stance. If you’re helping people diet, don’t be ashamed about it. Do it, and do it proudly, but be transparent and make participation optional.

Many types of products, even those that are explicitly coercive, can be good and useful. The ankle bracelets used for home detentions probably fall into that category. On net, society is better off because of their use. But that’s a different type of product than we aim to develop here, and they deserve scrutiny and thought. Here, my goal is to spur ideas about products that enable voluntary behavior change so that we are clear about what we’re doing and the means we’re using to affect user behavior.

Talking about product ethics may be an unusual topic for a book aimed at practitioners. But we can’t outsource ethics. We should feel proud of our work. Part of that means double-checking that the product is truly voluntary, is up front about the behaviors it tries to change, and seeks to make a beneficial change for its users.

1 Mathur et al. (2019)

2 Valentino-DeVries (2019)

3 These quotes come from Reuters (2019) and GovTrack (2019).

4 Norwegian Consumer Council: Forbrukerrådet (2018); Google: Meyer (2018); Intuit: Elliot and Waldron (2019); Apple: Lanaria (2019)

5 Goel (2014); Albergotti (2014)

6 Kramer et al. (2014); My thanks to Ethan Pew for pointing out that the effect size was much smaller than was represented in the media.

7 Roberts (2015)

8 Murgia (2019)

9 Schechner and Secada (2019); h/t Anne-Marie Léger

10 Kearon et al. (2017)

11 Cialdini (2008)

12 See Schüll (2014), for example.

13 These quotes come from the Amazon book descriptions for Lewis (2014), Eyal (2014), Alba (2011), and Leach (2018), respectively, as of June 2019.

14 Nir Eyal’s book provides perhaps the clearest example. As one author puts it in describing his book: “the well-known book by user experience expert Nir Eyal was a hit because it showed developers exactly how to create addictions. Yet readers often forget that Eyal gave ethical guidelines for using this ‘superpower,’” Gabriel (2016).

15 In addition to Brignull’s Dark Patterns site mentioned earlier, an examination of how (usually unintentional) bad design can harm users can be found in Tragic Design by Shariat and Savard Saucier (2019); h/t Anne-Marie Léger.

16 See Gabaix and Laibson’s (2005) shrouded attributes model; h/t Paul Adams.

17 Bogost (2012); see Alter (2018) for a book-length analysis of addictive products and their repercussions.

18 Thanks to Florent Buisson for the suggestion of including the attention economy here.

19 Gonzalez (2018)

20 This report has been removed from Mixpanel’s website, but was up when I last accessed it in June 2019, touting the benefits of addicting your users.

21 Wallaert (2019)

22 Or hide behind either intentional ignorance or a cynical take on the mantra that “no design is neutral” and therefore all designs are permissible?

23 See Appiah (2008) for a summary. These examples come from Latané and Darley (1970), Langer et al. (1978), and Ariely (2013); the last study also provides a nice summary of how self-deception operates in daily life, what exacerbates it (ambiguity, cheating to help others, seeing others cheat), and what minimizes it (clear feedback on what dishonesty is, supervision/surveillance).

24 Darley and Batson (1977)

25 Appiah (2008)

26 Jachimowicz et al. (2017), Dutch Authority for the Financial Markets (2019); h/t Julián Arango

27 Telling people what you’re doing doesn’t necessarily mean to have a big banner saying, “Look here, we’re testing the impact of this button color on speed of response.” That creates an annoying experience for the user and an unrealistic environment for measuring the impact of an intervention (see Chapter 13 on experiments). Rather, it means being clear that you’re running tests and what you’re trying to accomplish overall; h/t Fumi Honda for raising the issue.

28 See Ariely (2013) for a summary.

29 Aka, Hanlon’s Razor: “Never attribute to malice that which can be adequately explained by stupidity”; h/t Paul Adams. See Wikipedia for a brief history of this aphorism.

30 Thanks to Clay Delk for stressing the need for tools to handle intentional bad actors. While there are a variety of frameworks out there for organizing the tools for regulating behavior and society, my favorite framework comes from Lawrence Lessig’s Code (Basic Books, 1999): law, market, architecture, norms. Behavioral science tends to focus on architecture and norms; we should never forget the power of the law and market.
