Chapter 1. Deciding and Taking Action

On the day I got married, I was lying on the bathroom floor of the church because my back hurt too much to move. I’d been out of commission and in bed for almost three weeks, but now my family and my wife-to-be, Alexia, were waiting for me in the aisles. My best man Paul had to pull me up from the floor and get me out there to say my vows. My back had seized up weeks before because I hadn’t been getting enough exercise.

Now, I was born skinny, but that hides the fact that I’ve had musculoskeletal problems all my life—lower back problems, pinched nerves in my hands and neck, and so forth. I’ve seen many doctors over the years, and they’ve all said about the same thing: you’ll be OK if you just exercise regularly.

So I’ve long known about the importance of exercise; I don’t have a problem with motivation. There’s nothing more motivating (or scarier) than almost canceling your own wedding. I’ve certainly intended to exercise. But, like many other people who struggle with this, I haven’t done as much as I should.

For me, and for many others, there’s a gap between our sincere intention to act and what actually happens. There’s something more going on in our heads and in our lives than a simple cost–benefit analysis about what we should do. Even though the benefits clearly outweigh the costs, we struggle. To change this pattern—to help ourselves and others take action when needed—we must first understand how our minds are wired.

In my research, and in that of many other behavioral scientists, we’ve found that people don’t always make decisions and take action in a straightforward way. People struggle to turn their intentions into action. They sometimes struggle to make good decisions—even if, at another time, they might have done fine.

We recognize this in ourselves and our own lives, but we tend to forget it when it comes to our users. We assume that if they like our products, they’ll use them. If they want to do something, they’ll figure out how. But they don’t.

I’m not the only person who struggles with a lack of exercise. Many of your users might too. Or they struggle with poor eating habits, bad sleep, or distractions that keep them from their family and friends. Often, motivation isn’t the problem: like me, they know what they should do and even want to do it. Other things get in their way. This book is about how to help your users, and all of us, change behavior when we need to.

Behavior Change…

All around us, people try to change our behavior and one another’s. Negative examples are often obvious, from ads that entice us to buy stuff we don’t need to apps that try to swallow our attention and time. Positive examples are there too, but perhaps less obvious: the loving parent teaching a child to share; support programs helping addicts break free from their demons; apps helping us track our weight and encouraging us as we diet and exercise.

In a sense, we’re all in the “behavior change” business. When we’re falling short of our own goals and want to make a change in our lives, that usually means our behavior must change first. Moreover, we’re a social species; in order to achieve our goals, even altruistic goals of helping another person succeed, often someone must do something differently. To effect change is to effect behavior change.

Yet we rarely talk about it that way. In the product world, we talk about features delivered, user needs met, and so forth. Those things are all important, certainly, but none of them matters unless people adopt and apply our products (i.e., we need our users to change their behavior in a meaningful way).

Perhaps we don’t talk about behavior change so directly because it’s uncomfortable: we don’t want to be seen as manipulative or coercive. So we end up with sanitized conversations distanced from real people changing their behavior because of our products and communications: key performance indicators for adoption and retention; objectives and key results for click-through rates and usage.

It shouldn’t be that way. When we don’t talk about what we’re actually doing, we are both less effective at helping others when we should and more likely to try to change behavior in ways we shouldn’t. This book is about designing products intentionally to change behavior—how to ethically and thoughtfully help others succeed, without, I hope, falling into trickery or manipulation.

In this book we’ll have an open discussion about how to help people decide what to do and how to help them act on their intentions and personal goals. We’ll talk about how to build products that influence that process and how to assemble and run a team that does so. Nothing presented here is perfect, but I hope this book can help you make better products and better serve your users.

…And Behavioral Science

One of the best toolsets to accomplish this task—intentionally designing for behavior change—comes from behavioral science. And, in addition to being useful, behavioral science is fascinating.

Behavioral science is an interdisciplinary field that combines psychology and economics, among other disciplines, to gain a more nuanced understanding of how people make decisions and translate those decisions into action. Behavioral scientists have studied a wide range of behaviors, from saving for retirement to exercising.1 Along the way, they’ve found ingenious ways to help people take action when they would otherwise procrastinate or struggle to follow through.

One of the most active areas of research in behavioral science is how our environment affects our choices and behavior, and how a change in that environment can then affect those choices and behaviors. Environments can be thoughtfully and carefully designed to help us become more aware of our choices, shape our decisions for good or for ill, and spur us to take action once we’ve made a choice. We call that process choice architecture, or behavioral design.

Over the past decade, there has been a tremendous growth of research in the field and also of best-selling books that share its lessons, including Richard Thaler and Cass Sunstein’s Nudge, Daniel Kahneman’s Thinking, Fast and Slow, and Dan Ariely’s Predictably Irrational.2 They give fun introductions to the field, including anecdotes of how:

  • Putting a picture of a fly in the center of a men’s urinal reduces the mess that men make far more effectively than exhorting them not to make a mess.

  • Giving people many small bags of popcorn makes them eat less of it.3

In fact, Thaler and Kahneman have each won the Nobel Prize in Economics, largely for their work in behavioral science.

That said, we’re not trying to re-create Nudge or Predictably Irrational here. This book is about how to apply lessons from behavioral science to product development; in particular, how to help our users do something they want to do but struggle with, whether that’s dieting, spending time with their kids, or breaking a social media app’s hold on their lives. It’s about arming you with a straightforward process to design for behavior change.

Some of those lessons are what you’d expect: when designing a product, look out for unnecessary frictions or for areas where a user loses self-confidence. Build habits via repeated action in a consistent context. Some of those lessons are far less expected, and you may not even want to hear them; for example, most products, most of the time, will have no unique impact on their users’ lives. For that reason, we need to test early and often, and use rigorous tools to do so. Other lessons are simply fun and surprising; for example, make text harder to read if it’s important that users make a careful and deliberative decision.

With that, let’s dive into a primer on behavioral science!

Behavioral Science 101: How Our Minds Are Wired

Last summer, my family and I were on vacation and having a great time. One afternoon, we decided we’d eaten out way too much and we wanted something cheaper and more familiar than another restaurant meal. So we went to a grocery store.

Now, the first thing we looked for was cereal. We found the aisle and there were far too many options to choose from. As they often do, our kids were running up and down the aisle, pulling and swinging each other around. Somehow, all of that movement makes them unable to hear us telling them to stop. It’s clearly loads of fun—until they crash into something. So we had to make a quick decision.

Unfortunately, my kids and I have lots of allergies. My allergies are lethal; my kids’ allergies cause pain but thankfully not much more. So as we stood in the aisle, trying to make a choice and keep our kids out of trouble, my wife and I were torn: we simply couldn’t read all of the boxes for their ingredients.

Thankfully, we have some simple rules we know to follow. First, any cereal with cartoons on the box is automatically out; those are often crammed full of sugar, and our kids have enough energy already. Second, cereals that are gluten free (which one of our sons needs) usually proclaim it proudly on the box—easy to scan for. And third, after decades of practice, I have a really useful habit: I automatically pick up food and scan the ingredient list for anything that would kill me. It takes only a split second, and I hardly think about it unless I see something that’s a problem.

After a little while, we picked up a nice bag of corn flakes, grabbed a box of some granola-like stuff, and went on to the next aisle. No problem. Unfortunately, we did forget to grab milk and a few other things in that aisle. We’d intended to get them, but in the moment, we missed those items on our mental checklist.

Now, when we got home, the granola stuff was actually really good. The corn flakes were terrible—in all of the hurry, we missed a key sign: dust on the bag. They’d been sitting there a long time, and everyone else clearly knew not to buy them.

In everyday life and in (true) stories like this one, we can find the core lessons of behavioral science if we know where to look. I like to start with a basic, and often overlooked, one: as human beings, we’re all limited. We can’t instantly know which cereal is best just by thinking about it. We have to take time and energy to sort through the options and make a decision. That time and energy is scarce—if we spend too much time on one thing, there’s a cost (like our kids crashing into the shelves). Similarly, we’re limited in our attention, our willpower, our ability to do math in our heads, and so on. You get the picture.

Our limitations aren’t bad, per se; they just are facts of life. For example, I can’t even imagine what it would mean to have unlimited attention—to be simultaneously aware of absolutely everything at once. That’s just not how we’re made.

Given these limitations, our minds are really good at making the best of what we have. We economize on our time, attention, and mental energy by using simple rules of thumb to make decisions; for example, by excluding cereals with cartoons. As researchers, we call these rules of thumb heuristics. Another way our minds economize is by making split-second nonconscious judgments; for example, nonconscious habits are automated associations in our heads that trigger us to take a particular action when we see a specific cue (like scanning for deadly ingredients whenever I pick up unfamiliar food). Habits free up our conscious minds to think about other things.

While these economizing techniques are truly impressive, they aren’t perfect. They falter in two big ways. First, we don’t always make the right decision; for example, sometimes we don’t pay attention to something important (dust on the bag). As researchers, we often call the results of a heuristic or other shortcut going awry a cognitive bias: a systematic difference between how we’d expect people to behave in a certain circumstance and what they actually do.4 Second, even when we make the right choice, our inherent human limitations mean we don’t always follow through on our intentions (getting the milk). We call that the intention–action gap.

And finally, context matters immensely. It mattered that our kids were running around; we had less of our limited attention to pay to the task (reading ingredients, remembering milk). If milk were in a different aisle, we might have seen it and remembered it. If our kids weren’t running around…never mind. That wouldn’t happen.

So, if I were to put decades of behavioral research into a few bullet points (please forgive me, my fellow researchers!), it would be these:

  • We’re limited beings: limited in attention, time, willpower, and more.

  • We’re of two minds: our actions depend on both conscious thought and nonconscious reactions, like habits.

  • In both cases, our minds use shortcuts to economize and make quick decisions because of these limitations.

  • Our decisions and our behavior are deeply affected by the context we’re in, worsening or ameliorating our biases and our intention–action gap.

  • One can cleverly and thoughtfully design a context to improve people’s decision making and lessen the intention–action gap.

Let’s look at each of these points in a bit more detail.

We’re Limited

Who hasn’t forgotten something at some point in their lives? Heck, who hasn’t forgotten something in the last hour, or the last five minutes? Forgetfulness is one of our many human frailties. Personally, the older I get, the longer that list seems to grow. There are, sadly, many ways in which our minds are limited and lead us to make choices that aren’t the best, including limits on our attention, cognitive capacity, and memory.

These limitations compound one another. In terms of our attention, there is a nearly infinite number of things we could be paying attention to at any moment. We could be paying attention to the sound of our own heartbeat, the person who is trying to speak to us, the interesting conversation someone else is having near us, or the report that’s overdue and needs to be completed. Unfortunately, researchers have shown again and again that our conscious minds can really pay proper attention to only one thing at a time. Despite all of the discussion in the popular media, multitasking is a myth.5 Certainly we can switch our attention back and forth; we can move from focusing on one thing to focusing on another—and we can do so again and again. But switching focus frequently is costly; it slows us down, and it makes it harder for us to think clearly. Given that we can focus on only one thing at a time and that there are so many things we could focus on (many of them urgent and interesting), it’s no wonder that sometimes we aren’t thinking about what we’re doing.

Similarly, our cognitive capacity is limited: we simply can’t hold many unrelated ideas or pieces of information in our minds at the same time. You may have heard the famous story about why phone numbers in the United States are seven digits plus an area code: researchers found that we could hold seven unrelated numbers in our heads at a time, plus or minus two.6 And, of course, there are many other ways in which our cognitive capacity is limited. For one, we have a particularly difficult time dealing with probabilities and uncertain events, and with realistically predicting the likelihood of something happening in the future. We tend to overestimate the likelihood of rare but vivid and widely reported events like shark attacks, terrorist attacks, and lightning strikes.7

In addition, we can become overwhelmed or paralyzed when faced with a wide range of options, even as we consciously seek out more choices and options. Researchers call this the paradox of choice: our conscious minds believe that having more choices is almost always better, but when it actually comes time to make a decision and we’re faced with our limited cognitive capacity and the difficulty of the choice ahead of us, we may balk.8

Lastly, our memories simply aren’t perfect, and nothing is going to change that. For most of us, in fact, “not perfect” is a significant understatement. Our memories usually aren’t crystal-clear videos but a set of crib notes from which we reconstruct mental videos and pictures. We remember events that occur frequently (like eating breakfast) in a stylized format, losing the details of the individual occurrences and remembering instead a composite of that repeated experience. Additionally, in some circumstances, we remember the peak and the end of an extended experience, not a true record of its duration or intensity.9

What do all of these cognitive limitations mean? They are important to product people for two main reasons. First, these cognitive limitations mean that sometimes our users don’t make the best choices, even when something is in their best interest. It’s not that they’re bad people; it’s that they are, simply, people. They get distracted, they forget things, they get overwhelmed. We shouldn’t interpret a few bad choices as a sign that they are fundamentally uninterested in doing better (or in using our product); it may just be their simple human frailties at work. We can design products to avoid overburdening users’ limited faculties.10

Second, our limitations matter because our minds cleverly work around them by having two semi-independent systems in the brain and by using lots and lots of shortcuts. When developing products and communications, we should understand those shortcuts and use them to our advantage or work around them.

We’re of Two Minds

You can think about the brain as having two types of thinking: one is deliberative and the other is reactive; it’s a useful metaphor for a complex underlying process.11 Our reactive thinking (aka intuitive, or System 1) is blazingly fast and automatic, but we’re generally not conscious of its inner workings. It uses our past experiences and a set of simple rules of thumb to almost immediately give us an intuitive evaluation of a situation—an evaluation we feel through our emotions and through sensations around our bodies like a “gut feeling.”12 It’s generally quite effective in familiar situations, where our past experiences are relevant, and does less well in unfamiliar situations.

Our deliberative thinking (aka conscious, or System 2) is slow, focused, self-aware, and what most of us consider “thinking.” We can rationally analyze our way through unfamiliar situations and handle complex problems with System 2. Unfortunately, System 2 is woefully limited in how much information it can handle at a time—like struggling to hold more than seven unrelated numbers in short-term memory at once! It thus relies on System 1 for much of the real work of thinking. These two systems can work independently of each other, in parallel, and can disagree with each other—like when we’re troubled by the sense that, despite our careful thinking, “something is just wrong” with a decision we’ve made.13

What this means is that we’re often not “thinking” when we act. At least, we’re not choosing consciously. Most of our daily behavior is governed by our intuitive mode. We’re acting on habit (learned patterns of behavior), on gut instinct (blazingly fast evaluations of a situation based on our past experiences), or on simple rules of thumb (cognitive shortcuts or heuristics built into our mental machinery).14 Researchers estimate that roughly half of our daily lives are spent executing habits and other intuitive behaviors, and not consciously thinking about what we’re doing.15 Our conscious minds usually become engaged only when we’re in a novel situation, or when we intentionally direct our attention to a task.16

Unfortunately, our conscious minds believe that they are in charge all the time, even when they aren’t. In his book The Happiness Hypothesis (Basic Books, 2006), social psychologist Jonathan Haidt builds on the Buddha’s metaphor of a rider and an elephant to explain this idea. Imagine a huge elephant with a rider sitting atop it. The elephant is our immensely powerful but uncritical, intuitive self. The rider is our conscious self, trying to direct the elephant where to go. The rider thinks it’s always in charge, but it’s the elephant doing the work; if the elephant disagrees with the rider, the elephant usually wins.

To see this in action, you can read fascinating studies of people whose left and right brain hemispheres have been surgically separated and can’t (physically) talk to one another. The left side makes up convincing but completely fabricated stories about what the right side is doing.17 That’s the rider standing on top of an out-of-control elephant crying out that everything is under control!18 Or, more precisely, crying out that every action the elephant takes is absolutely what the rider wanted it to do—and the rider actually believes it.

Thus, we can do one thing and think about something different at the same time. We might be walking to the office, but we’re actually thinking about all of the stuff we need to do when we get there (Figure 1-1). The rider is deeply engaged in preparing for the future tasks, and the elephant is doing the work of walking. In order for behavior change to occur, we need to work with both the rider and elephant.19

Figure 1-1. While the mind consciously thinks about what needs to be done at work, the subconscious mind keeps the body walking (habits and skills), avoids shadowy alleys (an intuitive response), and follows the sweet smell of a bakery (habit)

We Use Shortcuts, Part I: Biases and Heuristics

Both our conscious minds and our nonconscious minds rely heavily on shortcuts to make the best of our limited faculties. Our minds’ myriad shortcuts (heuristics) help us sort through the range of options we face on a day-to-day basis and make rapid, reasonable decisions about what to do.

These heuristics are a mix. Some are broad rules we use throughout our lives, with obvious consequences:

Status quo bias
If you’re faced with many options to choose from and you can’t devote time and energy to think them through, or you aren’t sure what to do with them, what’s generally the best thing to do? Don’t change anything. We should generally assume people will stick with the status quo. That’s true whether it’s a deep-seated historical status quo or one that is arbitrarily chosen and presented as the status quo: to change is to risk loss.20
Descriptive norms—we’re deeply affected by social signals
Another way we handle uncertainty in a decision is to look at what other people are doing and try to do the same (aka descriptive norms).21 This is one of our most basic shortcuts. For example, “People here are drinking and having a good time, so it’s OK if I do as well.”
Confirmation bias
We tend to seek out, notice, and remember information already in line with our existing thinking.22 For example, if someone has a strong political view, they may notice and remember news stories that support that view and forget those that don’t. In a sense, this tendency allows our minds to focus on what appears to be relevant in a sea of overwhelming information. It has a side effect, though: it leads us to ignore new information that might help us gain a truer picture of the world or try new things.
Present bias
Our limited attention also applies to time: we can focus on only a single moment at once. Our minds appear to use a simple shortcut: what seems to be most important? The present. We give undue attention to the present and value things we get now over future benefits, even if that puts our long-term health and welfare at risk. While formally studied in economics since the 1990s, the concept is an ancient one: the desire for instant gratification, and the procrastination that comes with it.23 (A small sketch of the formal model appears after this list.)
Anchoring
It’s often quite difficult to make a clear and thorough assessment from scratch. And so, when we don’t know a number—like the probability of an event or the price of an object—we often start with an initial estimate (whether our own or one provided to us) and adjust it up or down based on additional information and feedback. Unfortunately, these adjustments are often insufficient—and the initial anchor has a strong effect on the results.24 Anchoring is one of many ways in which we make judgments relative to a reference point.
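
To make present bias concrete, here is a minimal sketch, in Python, of the quasi-hyperbolic (“beta-delta”) discounting model that economists such as Laibson use to formalize it. The parameter values are illustrative assumptions, not estimates from any study; the point is the signature preference reversal: impatience when “now” is on the table, patience when the same tradeoff sits in the distant future.

```python
# A minimal sketch of quasi-hyperbolic ("beta-delta") discounting,
# a standard formalization of present bias. The parameters below
# (beta=0.7, delta=0.99 per week) are assumptions for illustration.

def value(amount, weeks_away, beta=0.7, delta=0.99):
    """Subjective value today of `amount` received `weeks_away` weeks from now."""
    if weeks_away == 0:
        return float(amount)  # the present is never discounted
    # Any delay at all pays the extra one-time beta penalty.
    return beta * (delta ** weeks_away) * amount

# Asked today, $90 now beats $100 next week (instant gratification)...
print(value(90, 0), value(100, 1))    # 90.0 vs ~69.3

# ...but the same one-week tradeoff a year out flips: we sincerely
# plan to wait for the larger reward (and later fail to follow through).
print(value(90, 52), value(100, 53))  # ~37.4 vs ~41.1
```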

Others are interesting and seemingly narrow shortcuts that guide how we act, like these:

Availability heuristic
When things are particularly easy to remember, we believe that they are more likely to occur.25 For example, if I’d recently heard news about a school shooting, I’d naturally think that it is much more common than it actually is.
IKEA effect
When we invest time and energy in something—even if our contribution is objectively quite small—we tend to value the resulting item or outcome much more.26 For example, after we’ve assembled IKEA furniture, we often value it more than similar furniture someone else assembled (even if it’s of higher quality)—our sweat equity doesn’t matter in terms of market value, but it does to us.
Halo effect
If we have a good impression of someone (or something) overall, we sometimes judge other characteristics of the person (or thing) too positively—as if they have a “halo” of skill and quality.27 For example, if we like someone personally, we might overestimate their skill at dancing, even if we know nothing about their dancing ability.

There are over a hundred of these shortcuts (heuristics) or other tendencies of the mind (biases) that researchers have identified. Unfortunately, these shortcuts can also lead us astray as we try to make good choices in our lives. For example, if you’re a religious person living in a place where people don’t speak about religion, descriptive norms apply a subtle (or not-so-subtle) pressure to avoid doing so yourself. Or a homeless person might look and smell dirty, and the (negative) halo effect could lead others to think negatively about them; they might see the person as less honest and less smart than they really are. While I’ve mentioned some negative outcomes from our shortcuts and biases, it’s important to understand that, at their root, our shortcuts are clever ways to handle the limited resources that our minds have.

Let’s take a closer look at another way in which our minds economize: habits.

We Use Shortcuts, Part II: Habits

We use the term habit loosely in everyday speech to mean all sorts of things, but a concrete way to think about one is this: a habit is a repeated behavior that’s triggered by cues in our environment. It’s automatic—the action occurs outside of conscious control, and we may not even be aware of it happening.28 Habits save our minds work; we effectively outsource control over our behavior to cues in the environment.29 That keeps our conscious minds free for other, more important things, where conscious thought really is required.

Habits arise in one of two ways.30 The first is simple repetition: whenever you see X (a cue), you do Y (a routine). Over time, your brain builds a strong association between the cue and the routine and doesn’t need to think about what to do when the cue occurs—it just acts. For example, whenever you wake up in the morning (cue), you get out of bed at the same spot (routine). Rarely do you find yourself lying in bed, awake, agonizing over exactly where you should exit the bed. That’s how habits work—they are so common, and so deeply ingrained in our lives, that we rarely even notice them.

The second way adds a third element to the cue and routine: a reward, something good that happens at the end of the routine. The reward pulls us forward—it gives us a reason to repeat the behavior. It might be something inherently pleasant, like good food, or the completion of a goal we’ve set for ourselves, like putting away all of the dishes.31 For example, whenever you walk by the café and smell coffee (cue), you walk into the shop, buy a double mocha espresso with cream (routine), and feel chocolate-caffeine goodness (reward). We sometimes notice the big habits—like getting coffee—but other, less obvious habits with rewards (checking our email and receiving the random reward of an interesting message) may go unnoticed.

Once the habit forms, the reward itself doesn’t directly drive our behavior; the habit is automatic and outside of conscious control. However, the mind can “remember” previous rewards in subtle ways, intuitively wanting (or craving) them.32 In fact, the mind can continue wanting a reward that it will never receive again, and may not even enjoy when it does arrive!33 I’ve encountered that strange situation myself—long after I formed the habit of eating certain potato chips, I still habitually eat them even though I don’t enjoy them and they actually make me sick.34 This isn’t to say that rewards aren’t important after the habit forms—they can push us to consciously repeat the habitual action and can make the habit even more resistant to change.

The same characteristics that make habits hard to root out can be immensely useful. Thinking of it another way, once “good” habits are formed, they provide the most resilient and sustainable way to maintain a new behavior. Charles Duhigg, in The Power of Habit (Random House, 2012), gives a great example. In the early 1900s, advertising man Claude C. Hopkins moved American society from being one in which very few people brushed their teeth to a majority brushing their teeth in the span of only 10 years. He did it by helping Americans form the habit of brushing:35

Figure 1-2. Pepsodent advertisement from 1950, highlighting a cue to trigger the habit of tooth-brushing (courtesy of Vintage Adventures)
  • He taught people a cue—feeling for tooth film, the somewhat slimy, off-white stuff that naturally coats our teeth (apparently, it’s harmless in itself) (Figure 1-2).

  • When people felt tooth film, the response was a routine—brushing their teeth (using Pepsodent, in this case).

  • The reward was a minty tingle in their mouths—something they felt immediately after brushing their teeth.

Over time, the habit (feel film, brush teeth) formed, strengthened by the reward at the end. And so did a craving—wanting the cool tingling sensation that Pepsodent caused, a sensation people associated with having clean, beautiful teeth.

Stepping back from Duhigg’s example, let’s look again at the three pieces of a reward-driven habit:

  • The cue tells us to act now. The cue is a clear and unambiguous signal in the environment (like the smell of coffee) or in the person’s body (like hunger). BJ Fogg and Jason Hreha categorize behaviors by their cues into cue behaviors and cycle behaviors: either the cue is some other event that tells you it’s time to act (brushing your teeth after eating breakfast), or it occurs on a schedule, like a specific time of day (preparing to go home at 5 p.m. on a weekday).36

  • The routine can be something simple (hear phone ring, answer it) or complex (smell coffee, turn, enter Starbucks, buy coffee, drink it), as long as the scenario in which the behavior occurs is consistent. Where conscious thought is not required (i.e., consistency allows repetition of a previous action without making new decisions), the behavior can be turned into a habit.

  • The reward can occur every time—like drinking our favorite brand of coffee—or on a more complex reward schedule: the frequency and variability with which a reward follows the behavior. For example, when we pull the arm or press the button on a slot machine, we are randomly rewarded: sometimes we win, sometimes we don’t. Our brains love random rewards. In terms of timing, rewards that occur immediately after the routine are best—they help strengthen the association between cue and routine.

Researchers are actively studying exactly how rewards function, but one of the likely scenarios goes like this: when these three elements are combined, over time the cue becomes associated with the reward.37 When we see the cue, we anticipate the reward and it tempts us to act out the routine to get it. The process takes time, however—varying by person and situation from a few weeks to many months. And again, the desire for the reward can continue long after the reward no longer exists.38
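
To tie the three pieces together, here is a toy sketch in Python of the cue-routine-reward loop. It is not a published model of habit formation; the linear “association” update and every number in it are assumptions, chosen only to show how rewarded repetition, on either a fixed or a random (slot-machine-style) schedule, strengthens the cue-routine link over time.

```python
# A toy illustration (not a published model) of the cue-routine-reward loop.
import random

class HabitLoop:
    def __init__(self, cue, routine, reward_probability=1.0):
        self.cue = cue                                # signal in the environment
        self.routine = routine                        # the behavior it triggers
        self.reward_probability = reward_probability  # 1.0 = fixed; <1.0 = variable schedule
        self.association = 0.0                        # strength of the cue-routine link

    def encounter(self, signal):
        """Run the routine on a matching cue; an immediate reward deepens the link."""
        if signal != self.cue:
            return None
        if random.random() < self.reward_probability:
            self.association += 0.1  # assumed linear strengthening, for illustration
        return self.routine

# Repetition in a consistent context builds the association over time,
coffee = HabitLoop("smell of coffee", "buy a double mocha")
# and a random reward schedule, like a slot machine's, still builds it.
slots = HabitLoop("see a slot machine", "pull the lever", reward_probability=0.3)

for _ in range(50):
    coffee.encounter("smell of coffee")
    slots.encounter("see a slot machine")

print(round(coffee.association, 1), round(slots.association, 1))  # 5.0 and roughly 1.5
```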

We’re Deeply Affected by Context

And finally, we turn to the last big lesson: the importance of context for our behavior. What we do is shaped by our environment in obvious ways, like when the architecture of a building focuses our attention and activity toward a central courtyard. It’s also shaped in nonobvious ways: by the people we talk and listen to (our social environment), by what we see and interact with (our physical environment), and by the habits and responses we’ve learned over time (our mental environment). These nonobvious effects can show themselves even in slight changes in the wording of a question. We’ll examine how the environment affects human behavior throughout this book, but let’s start with one famous example:

Suppose there’s an outbreak of a rare disease, which is expected to kill six hundred people. You’re in charge of crafting the government’s response.

You have two options:

  • Option A will result in two hundred people saved.

  • Option B will result in a one-third probability that six hundred people will be saved and a two-thirds probability nobody will be saved.

Which option would you choose?

Now suppose there’s another outbreak of a different disease, which is also expected to kill six hundred people. You have two options:

  • Option C will result in the certain death of four hundred people.

  • Option D will result in a one-third probability that nobody dies and a two-thirds probability that everyone dies.

Which option would you choose now?

Presented with these options, people generally prefer Option A in the first situation and Option D in the second. In Tversky and Kahneman’s early studies using these scenarios,39 72% of people chose A (versus 28% for B), but only 22% chose C (versus 78% for D). Which, as you’ve probably caught on, doesn’t make much sense: in both A and C, four hundred people face certain death and two hundred will be saved. Logically, someone who prefers A should also choose C. But that isn’t what happens, on average.
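
If you want to check the equivalence yourself, the arithmetic takes only a few lines. In expected lives saved, out of the six hundred at risk, all four options are identical:

```python
# Expected lives saved under each option in the disease problem.
TOTAL = 600

a = 200                          # 200 saved for certain
b = (1/3) * TOTAL + (2/3) * 0    # one-third chance all 600 are saved
c = TOTAL - 400                  # 400 certain deaths leaves 200 saved
d = (1/3) * TOTAL + (2/3) * 0    # one-third chance nobody dies

print(a, b, c, d)                # 200 200.0 200 200.0
```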

Many researchers believe there is such a stark difference in people’s choices for these two mathematically equivalent options (A and C) because of how the choices are framed. One is framed as a gain of two hundred lives and the other is framed as a loss of four hundred lives.40 The text of C leads us to focus on the loss of four hundred lives (instead of the simultaneous gain of two hundred), while the text of A leads us to focus on the gain of two hundred lives (instead of the loss of four hundred). And people tend to avoid uncertain or risky options (B and D) when there is a positive or gain frame (A versus B) and seek risks when faced with a negative or loss frame (C versus D).

That’s, well, odd. It shows how relatively minor changes in wording can lead to radically different choices. It is especially odd because this isn’t a difference people would knowingly endorse. Faced with both sets of choices, they wouldn’t say, “Well, I recognize that A and C have exactly the same outcomes, but I just intuitively don’t like thinking about a loss, even when I know it’s a trick of the wording.” Instead, a person might simply say: “Knowing that I can save people is important (A), and I really don’t like the thought of knowingly letting people go to certain death (C).”

Or to use the rider and elephant metaphor again, the rider thinks they’re in control, but the elephant really is. Our conscious rider explains our behavior after it’s happened, without knowing the real reason. We are, as social psychologist Tim Wilson nicely puts it, “strangers to ourselves.”41 To bring this back to product development, our users take actions that they don’t understand but will try to explain after the fact.

Our lack of self-knowledge also extends to what we’ll do in the future. We’re bad at forecasting the level of emotion we’ll feel in future situations and at forecasting our own behavior.42 For example, people can significantly overestimate the impact of future negative events, such as a divorce or a medical problem, on their emotions. Not only are we affected by the details of our environment; we often don’t recognize that our environment has affected us in the past, so we don’t consider its influence when thinking about what we’ll do in the future. In a product development context, this means that asking people what they will do, or what they think will happen to them in the future, is fraught with problems.

Tversky and Kahneman’s study demonstrates one of the key principles of behavioral science: reference dependence. In absolute terms, outcomes of options A and C are identical. But the first frame sets the reference point as people dying and the potential to save them; the second sets the reference point as people living and the potential to let them die. Whether something is a loss or a gain depends on our reference point. And, as you can see in Tversky and Kahneman’s study, that reference point is malleable: it’s subject to design.

We Can Design Context

Because our environment affects our decision making and behavior, redesigning that environment can change decision making and behavior. We can thoughtfully develop product designs and communications that take this knowledge into account and help people make better decisions, use habits wisely, and follow through on their intentions to act, which is the focus of the rest of this book.
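
As a concrete, though hypothetical, illustration, consider one of the simplest pieces of choice architecture: the default. The switching rate below is an assumption for demonstration, not a figure from a specific study; the pattern follows from status quo bias, since most people keep whichever option is preselected.

```python
# Hypothetical sketch: the same retirement-plan choice, presented with
# enrollment either opt-in or opt-out. The 10% switching rate is assumed.

def fraction_enrolled(default_enrolled, switch_rate=0.10):
    """Fraction who end up enrolled if most people keep the default."""
    if default_enrolled:
        return 1.0 - switch_rate  # a few opt out; most stay enrolled
    return switch_rate            # a few opt in; most stay out

print(fraction_enrolled(default_enrolled=False))  # 0.1
print(fraction_enrolled(default_enrolled=True))   # 0.9
```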

What Can Go Wrong

We’ve already touched on some of the areas in which the quirks of our minds can lead to bad outcomes. It’s useful to make these areas explicit and clear, though, since that understanding is the foundation of making things better. We can distinguish two major branches of research, both of which are useful for our purposes. Broadly, behavioral science helps us understand quirks of decision making and quirks of action.

Quirks of Decision Making

The shortcuts that our minds use lead us to rapid and generally good decisions without evaluating all of the options thoroughly. They’re necessary, letting us make the best of our limited faculties, and are generally very helpful—until they aren’t.

Think about what happens when you’re visiting a new town and you’re walking around looking for a bite to eat:

  • You might look on your phone to see which restaurant has the highest rating and the most reviews. Or, you might peek in the window to see where other people are eating—it’s never a good sign if a place is deserted, right? That’s the social proof heuristic: if you aren’t sure what to do, copy what other people are doing.

  • You might have seen ads touting a particular restaurant, and when you pass by the sign, it catches your eye. If you’ve at least heard of it, that’s good! The availability heuristic supports that feeling.

  • You might notice a chain you’ve been to and liked recently and figure that what’s been good recently probably will be good again. That, among other things, is a recency bias helping you choose.

  • You might look at their prices and see that one place is offering burgers at $10, and another at $2. You’re all for saving money, but that’s just too much: there must be something wrong with the $2 place. That’s the shortcut of price as a signal of quality.

In each case, the shortcuts help us make quick decisions. None of them is perfect, certainly. We could find ways to make them better, but by and large, these are all reasonable choices. Most importantly, they are fast: they save us from spending hours and hours reviewing all of the pros and cons of each restaurant in the city, judging them in terms of their taste and nutrition on 50 dimensions, the time to reach each one, their ambiance and likely social environment, and so on. Shortcuts like these make decisions possible and avoid decision paralysis.

Switching contexts, let’s think about investing money in the stock market. You’ve recently received a windfall and you’re looking to invest some of it for the future. You don’t have much experience in investing, so what do you do?

  • You might look online to see what everyone else is talking about and investing in, using social proof. Awesome! Except that’s how bubbles are made: think Bitcoin or the dot-com bubble.

  • You might invest in things you’ve heard of, using the availability heuristic. Excellent! Except, again, that’s how bubbles are made.

  • You might look at what’s performed well in the past and invest in that, using recency bias. The problem is that past performance doesn’t predict future performance. Not a great guide.

  • You might look at prices—if a stock has a really high price, it must be a good investment, right? OK, you know where that goes.

And so forth. You get the picture.

When shortcuts work well, we often don’t notice them; we effortlessly and quickly come to a decision. Or, in the rare cases we do notice them, we call them clever. In the research community, we refer to them, in the positive sense, as fast and frugal heuristics (where heuristic is another word for shortcut).43

When the same shortcuts get us into trouble, however, we call them foolish: how could I have been so stupid as to follow the crowd? Since you’re reading this book, you’ve probably come across the term bias, which is intimately related to these shortcuts. A bias, strictly speaking, is a tendency of thought or action. It is neither positive nor negative; it just is. Most people, including many researchers, use it explicitly in the negative sense, as in a “deviation from an objective standard, such as a normative model” of how people should behave.44 A shortcut or heuristic gone awry creates a bias.

Once we call something a bias, it’s easy to jump to the logical conclusion: well, we just need to get rid of them! Remove people’s biases! It’s not that simple. It’s precisely because these shortcuts are so clever and smart that they are hard to change. If we simply did foolish things, we’d eventually learn not to do them (either in our own lives or across the history of our species). But these shortcuts are not foolish at all: they are immensely valuable. They are just sometimes applied out of context, at the wrong time. We can’t, and shouldn’t be able to, simply switch off social proof or the availability heuristic. That would wreak more havoc than it would solve.

The reality is that we can’t completely avoid using mental shortcuts. Rather, by understanding how our conscious and nonconscious shortcuts are triggered by the details of our environment, we can learn to avoid some of the negative outcomes that result from their use. So the first challenge of behavioral science—of designing for behavior change—is to help people make better decisions, given the valuable but imperfect shortcuts we all use.

Quirks of Action

I, myself, am made entirely of flaws, stitched together with good intentions.

Augusten Burroughs

Behavioral science also helps us understand the quirks of our behavior, above and beyond our decisions, especially why we decide to do one thing and actually do something else. This understanding starts with the same base as for decisions: that we’re limited beings with limited attention, memory, willpower, etc. Our minds still use clever shortcuts to help us economize and avoid taxing our limited resources. But these facts make themselves felt in different ways after we’ve made a decision. In particular, we have errors of inaction and errors of unintentional action.

In the research literature, the intention–action gap is one of the major errors of inaction. We’ve all felt this gap in one way or another. For example, do you have a friend with a gym membership, or exercise equipment at home, that they just don’t use that often? Do they really enjoy giving money to a gym? Of course not. They really intended to go to the gym when they first signed up, or to use that fancy machine when they first bought it. It’s just that they didn’t. The benefits of the gym are still clear. And despite all of their past failures, they keep hoping and believing that they’ll get it together and go regularly. But something else gets in the way that isn’t motivation. So they keep paying—and keep failing to go.

With the intention–action gap, the intention to act is there, but people don’t follow through on it. It’s not that people are insincere or lack motivation; the gap happens because of how our minds are wired. It illustrates one of the core lessons of behavioral science: good intentions and the sincere desire to do something aren’t enough.

And unintentional action? I don’t mean revelry that we regret the next morning. Rather, I mean behaviors that we don’t intend even while we’re doing them, often because we aren’t aware or thinking about them. One cause of these we’ve already looked at: habits. Our habits allow us to take action without thought—effortlessly riding a bike, navigating the interface of common applications, or playing sports. But naturally, they can also go awry.

Do you know someone who just can’t stop eating junk food? Each night, when they get home tired and need a break, on the way to the couch, they pick up a candy bar and a bag of chips and sit down with the laptop to watch videos. An hour or so later, they take a break and notice the crumpled-up wrapper and bag and throw them away. They’re still hungry and hardly noticed the snacks on their way into their mouth.

There are many other examples, like when we get hooked on cigarettes (it appears the habit is more powerful than the nicotine, in fact),45 on late-night TV binging, or on incessantly checking social media apps. Habits, as learned patterns of behavior, are inherently neutral. We learn bad habits just as we learn good habits: through repetition. Our minds automate them in order to save us cognitive work. For the person eating junk food, maybe a particularly rough time at work set up the automated routine, or maybe it formed when they first moved to the city and didn’t know where to get good groceries. Regardless of the source, once the junk food habit was set, it was hard to shake.

Just as with our decision-making shortcuts, try to imagine a world in which we didn’t have habits—one where we had to carefully think through every decision, every action, as if we were a teenager first learning to drive a car. We’d be exhausted wrecks in no time. We can’t not rely on habits, nor can we ask the users of our products to do so. Rather, as developers of behavior-changing products and communications, we need to understand habits and work with (or around) them. Shortcuts gone awry, habits that people wish they didn’t have, and the yawning gap between people’s intentions and their actions: these are the problems that we’re here to solve. These are why we design for behavior change.

A Map of the Decision-Making Process

We’ve talked about the different ways in which our minds make decisions, from careful deliberative thought to shortcuts to fully automated habits. We can think about these mental tools as part of a spectrum, based on how much thought is involved. Unfamiliar situations (like, for most people, math puzzles) require a lot of conscious thought. Walking to your car doesn’t. Similarly, high-stakes decisions like “Which job should I take?” pull in more conscious thought than “Which bagel should I eat?” Frequently repeated, low-stakes decisions like “Which way should I hold my toothbrush this morning?” don’t require much thought at all and can turn into habits.

The spectrum in Figure 1-3 shows the default, lowest-energy way our minds would respond if we didn’t intentionally do something differently.

Figure 1-3. In familiar situations, our minds can use habits and intuitive responses to save work

Here are some simple examples, for a person who is thinking about going on a diet and doesn’t have much past experience with diets:

Eating potato chips out of a bag
Very familiar. Very little thought. Habit.
Picking out what to get at your favorite buffet bar
Familiar. Little thought. Intuitive response or assessment.
Signing up for dieting workshops at the office
Semi-familiar. Some thought. Self-concept guides choice.
Judging whether a cheeseburger will violate your diet’s calorie limit for the day
Unfamiliar. Thought required, but with easy ways to simplify.46 Heuristic.
Making a weekly meal plan for the family based on the individual calorie and nutrient counts of hundreds of foods
Unfamiliar. Lots of attention and thought. Conscious, cost–benefit calculations.

Table 1-1 provides a bit more detail on where each of the tools on the spectrum are likely to be used.

Table 1-1. The various tools the mind uses to choose the right action, and where each is most likely to be used

Habits: Familiar cues trigger a learned routine
Other intuitive responses: Familiar and semi-familiar situations, with a reaction based on prior experiences
Active mindset or self-concept: Ambiguous situations with a few possible interpretations
Heuristics: Situations where conscious attention is required, but the choice can be implicitly simplified
Focused, conscious calculation: Unfamiliar situations where a conscious choice is required, or very important decisions we direct our attention toward

This spectrum doesn’t mean that we always use habits in familiar situations, or that we use our conscious minds only in unfamiliar ones. Our conscious minds can and do take control of our behavior and focus strongly on behaviors that otherwise would be habitual. For example, I can think very carefully about how I sit in front of the computer to improve my posture; that’s something I normally don’t think about because it’s so familiar. That takes effort, however. Remember that our conscious attention and capacity are sorely limited. We bring in the big guns (conscious, cost–benefit calculations) only when we have a good reason to do so: when something unusual catches our attention, when we really care about the outcome and try to improve our performance, and so on.

As behavior change practitioners, it’s a whole lot easier to help people take actions that are near the “eat another potato chip in the bag” side of the spectrum, rather than the “thoughtfully plan meals” side. But it’s much harder for people to stop actions on the potato chip–eating side than on the meal-planning side. The next two chapters will look at both, though: how to create the good and how to stop the bad.

A Short Summary of the Ideas

Behavioral science provides a powerful set of tools, both to help us understand how people make decisions and take action, and to help them make better decisions and follow through on their intentions to act, if they would like our help.

Here’s what you need to know:

We’re limited beings
We have limited attention, time, willpower, etc. For example, there are nearly an infinite number of things that your users could be paying attention to at any moment.
Our minds use shortcuts (aka heuristics)
We use them to economize and make quick decisions because of our limitations. Heuristics applied in the wrong context are one cause of biases: negative and unintended tendencies in behavior or decision making. Often because of these biases, there’s a significant gap between people’s intentions and their actions.
We’re of two minds
What we decide and what we do depends on both conscious thought and nonconscious reactions, like habits. What this means is that your users are often not “thinking” when they act. At least, they’re not choosing consciously.
Decision and behavior are deeply affected by context
This worsens or ameliorates our biases and our intention–action gap. What your users do is shaped by their contextual environment in obvious ways, like when the architecture of a site directs them to a central home page or dashboard. It’s also shaped in nonobvious ways: by the people they talk and listen to (their social environment), by what they see and interact with (their physical environment), and by the habits and responses they’ve learned over time (their mental environment).
We can cleverly and thoughtfully design a context
We do so to improve people’s decision making and lessen the intention–action gap. And that is the point of Designing for Behavior Change and this toolkit.

1 There are hundreds, if not thousands, of papers and books one can draw from. Benartzi and Thaler (2004) is a good start for retirement research.

2 Ariely (2008), Thaler and Sunstein (2008), Kahneman (2011)

3 Krulwich (2009); Soman (2015)

4 Soman (2015). Not all biases are directly caused by heuristics gone awry, but many can be traced back to time- or energy-saving devices in the mind. One major category that isn’t from heuristics consists of identity-preserving biases (mental quirks that make us feel better about ourselves), like overconfidence bias.

5 Hamilton (2008)

6 Miller (1956)

7 For example, see Manis et al. (1993). These outcomes are actually the result of reasonable but imperfect shortcuts that our minds use to counter our limitations; we’ll talk about those shortcuts shortly.

8 See Schwartz (2004, 2014), Iyengar (2010), and Solman (2014). As we should expect with all behavioral mechanisms and lessons, the paradox of choice isn’t universal or without disagreement.

9 Kahneman et al. (1993)

10 As many designers have argued, including Krug (2006).

11 That is, the family of theories referred to as dual process theory in psychology. Dual process theories give a useful abstraction—a simplified but generally accurate way of thinking about—the vast complexity of our underlying brain processes.

12 Damasio et al. (1996)

13 There are great books about dual process theory and the workings of these two parts of our mind. Kahneman’s Thinking, Fast and Slow (Farrar, Straus and Giroux, 2011) and Malcolm Gladwell’s Blink (Back Bay Books, 2005) are two excellent places to start; I’ve created a list of resources on how the mind works (including dual process theory).

14 The boundaries between “habit” and other processes (intuition, etc.) are somewhat blurry; but these terms help draw out the differences among types of System 1 responses. See Wood and Neal (2007) for the distinction between habits and other automated System 1 behaviors; see Kahneman (2011) for a general discussion of System 1.

15 Wood (2019); Dean (2013)

16 I’m indebted to Neale Martin for highlighting the situations in which the conscious mind does become active. See his book Habit (FT Press, 2008) for a great summary of the literature on when intuitive and deliberative processes are at play.

17 Gazzaniga and Sperry (1967)

18 This isn’t to say that the rider is equivalent to the left side of the brain and the elephant to the right side. Our deliberative and intuitive thinking isn’t neatly divided in that way. Instead, this is just one of the many examples of how rationalizations occur when our deliberative mind is asked to explain what happens outside of its awareness and control. Many thanks to Sebastian Deterding for catching that unintended (and wrong!) implication of the passage.

19 Heath and Heath (2010)

20 See Samuelson and Zeckhauser (1988) for the initial work on status quo bias.

21 Gerber and Rogers (2009)

22 Wason (1960)

23 See Laibson (1997) for initial modeling work in economics; O’Donoghue and Rabin (2015) for a relatively recent summary.

24 Tversky and Kahneman (1974)

25 Tversky and Kahneman (1973)

26 Norton et al. (2011)

27 Nisbett and Wilson (1977)

28 See Bargh et al. (1996) for a discussion of the four core characteristics of automatic behaviors, such as habits: uncontrollable, unintentional, unaware, and cognitively efficient (doesn’t require cognitive effort).

29 Wood and Neal (2007)

30 There are nice summaries at News in Health and CBS News.

31 Ouellette and Wood (1998)

32 There’s an active debate in the field about how exactly the notion of a reward affects a person after the habit is formed. See Wood and Neal (2007) for a discussion.

33 See Berridge et al. (2009) on the difference between wanting and liking. The difference between wanting and liking is a possible explanation for why certain drugs instill strong craving in addicts although taking them long stopped being pleasurable.

34 And yes, for those of you who recall this example from the first edition of the book, it’s still true today.

35 Duhigg’s story also is an example of the complex ethics of behavior change. Hopkins accomplished something immensely beneficial for Americans and American society. He was also wildly successful in selling a commercial product in which demand was partially built on a fabricated “problem” (the fake problem of tooth film, which is harmless, rather than tooth decay, which is not).

36 Fogg and Hreha (2010)

37 This is one form of motivated cueing, in which there is a diffuse motivation applied to the context that cues the habit (Wood and Neal 2007). There is active debate in the field on how, exactly, motivation affects habits that have already formed.

38 Duration is covered in Lally et al. (2010), and delay in Berridge et al. (2009); Wood (2019).

39 Tversky and Kahneman (1981)

40 Many researchers accept this explanation, but not all. As often happens in science, there is a divergence of opinion on why framing effects like this occur. An alternative view is that people make a highly simplified analyses of the options and the two different options have two different simplified answers. See Kühberger and Tanner (2010) for one such perspective.

41 Wilson (2002); see Nisbett and Wilson (1977b) for an early summary.

42 Emotion and other examples are covered in Wilson and Gilbert (2005), and behavior in Wilson and LaFleur (1995).

43 Gigerenzer and Todd (1999); Gigerenzer (2004)

44 Soll et al. (2015), building on Baron (2012)

45 Wood (2019)

46 One such commonly used heuristic is the volume of the food—yes, how big it is. Barbara Rolls, director of the Penn State Laboratory for the Study of Human Ingestive Behavior, developed a diet that leverages this heuristic to help people lose weight (see Rolls 2005).
