In the previous chapter, we discussed the powerfully negative effect that unchecked assumptions can have on our product decision-making. The best way to remain objective about your assumptions is to write hypotheses that can be tested with customer feedback.
Jeff Gothelf, coauthor of Lean UX: Designing Great Products with Agile Teams, sums it up perfectly:1
Expressing your assumptions [using a hypothesis] turns out to be a really powerful technique. It takes much of the subjective and political conversation out of the decision-making process and instead orients the team toward feedback from the market. It also orients the team toward users and customers.
Sometimes it can be difficult to separate our own internal or technical limitations from those of the customer. For example, we might believe that the lack of an effective search algorithm is causing customers to be frustrated.
We may construct a hypothesis that focuses on what we are lacking:
We believe that, because our search algorithm produces ineffective results, customers are unwilling to create an account on our website.
This inwardly focused hypothesis already biases us toward one solution (adjust the search algorithm so it produces better results). It doesn’t explore who the customer is, what they’re trying to do, and how poor search results affect them.
This hypothesis is ineffective in its ability to help us gain greater insight about our customers. Are we sure that search results are the thing that’s preventing customers from subscribing? If so, how effective do the search results need to be to get customers to purchase a subscription?
You may have heard the phrase “correlation is not causation.” Be careful that you’re not ascribing undesirable customer behavior to your own product’s limitations. The two may be completely unrelated.
For example, you could invest a great deal of resources, increasing the speed and quality of search results, only to find that it had little effect on increasing subscriptions.
That’s not to suggest that improving product quality is unimportant; however, we must continually ensure that the quality of our product reflects what the customer wants to achieve with it. We shouldn’t invest time and energy on things that have little customer impact. If spending months to shave a couple of milliseconds off your search results has no discernible impact on customer satisfaction, it may not be worth your investment.
Imagine this hypothesis instead:
We believe that customers without an account prefer to see their results in order of closest to farthest from their location when searching for providers on our website.
This hypothesis connects the customer’s motivation to one of our site’s limitations. In other words, the customer wants to see results from closest to farthest, but we can’t provide that functionality, because they don’t have an account.
If we validated this hypothesis, we might decide that offering this functionality is a high priority. Additionally, the specifics of this hypothesis help the engineering team understand what functionality we’re looking to enable. It focuses their attention on one piece of functionality where we can improve search results (i.e., allowing customers to sort by location without having to log in to an account).
In each of our playbooks, we give you an activity to help you collect your team’s ideas or assumptions. Having a chance to “diverge” and get everything out there is a great way to see all the possible ideas and perspectives. We then like to “converge” by getting the teams to focus on a handful of possibilities presented. Typically, we do this by trying to find similarities or patterns in the group’s thinking.
Throughout our individual playbooks, you’ll find this continual “diverge” and “converge” pattern (Figure 2-1).
To help with formulating hypotheses, we provide our teams with a hypothesis template for each stage of the HPF (Figure 2-2). This helps them take their assumptions and formulate them into consistent hypotheses that can be tested. You can use these templates to help you get started, but the important thing to note isn’t the syntax of the hypothesis, but rather the specific parameters we focus on at each stage. In short, you may need to alter the language of the hypothesis template to meet your needs, but we highly encourage you to keep the parameters.
We believe [type of customers] are motivated to [motivation] when doing [job-to-be-done].
We start each of our templates with the statement, “We believe.” Some teams struggle with this because they might not have enough information to make a bold statement about what they believe. For example, if a team had been working on desktop software for over a decade and was suddenly tasked with creating a mobile app, they might feel uncomfortable making claims about what they believe mobile customers want.
These hypothesis templates shouldn’t feel like legally binding contracts. A hypothesis is simply a statement that illustrates what you believe given the information available to you. In the early stages of development, you might be operating on your gut instincts. That’s perfectly okay, but it’s better to capture those instincts so that you can appropriately validate them.
Most likely, your first hypotheses will be proven wrong, so you’ll need to iterate on them over time. Writing hypotheses is a skill that can be sharpened, and you’ll find that you get better at it the more you do it.
Each of our hypothesis templates has parameters that are highlighted at the stage they reside in. In the case of the Customer hypothesis, there are three parameters: [type of customers], [motivation], and [job-to-be-done]. These parameters will be carried throughout, helping inform the remaining stages. This progression is what gives the Hypothesis Progression Framework its name.
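To make the structure of the template concrete, here is a minimal sketch of the Customer hypothesis as a small data structure whose fields are the template’s parameters. The class and field names are our own illustrative choices, not part of the HPF itself, and the rendering drops the template’s “doing” because the job phrase in our examples already carries its own verb:

```python
from dataclasses import dataclass

@dataclass
class CustomerHypothesis:
    """One Customer-stage hypothesis, held as its three parameters."""
    type_of_customers: str
    motivation: str
    job_to_be_done: str

    def render(self) -> str:
        # Fill the template: "We believe [type of customers] are
        # motivated to [motivation] when [job-to-be-done]."
        return (
            f"We believe {self.type_of_customers} are motivated to "
            f"{self.motivation} when {self.job_to_be_done}."
        )

h = CustomerHypothesis(
    type_of_customers="working parents who have children under 12",
    motivation="find quality childcare for an affordable price",
    job_to_be_done="searching the internet for service providers in their area",
)
print(h.render())
```

Keeping the parameters as separate fields, rather than as one free-form sentence, makes it easy to carry [type of customers] and [job-to-be-done] forward into the later stages unchanged while iterating on the others.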
In Chapter 5 through Chapter 8, we cover each parameter in its respective stage, but there are three parameters that are shared throughout the HPF: [type of customers], [job-to-be-done], and [problem] (Figure 2-3). We call these the “shared parameters” because they create a common thread throughout the HPF. Let’s look at them a bit more closely.
Continually learning about your customers is paramount to the success of our framework. Throughout all stages we’re interested in getting to know the specifics of the customer we’re targeting. The Customer stage is where we define the customers we’re targeting; however, it’s important to keep that definition consistent throughout to avoid confusion. For example, we might be looking at building an app for education. If our app would serve both students and teachers, it would be a mistake to talk to only students. There would be characteristics that differ between students and teachers, and it would be important to know what those are. By continually identifying and segmenting the customer you are talking about, you can ensure the team has a shared understanding of whom you’re targeting (and whom you’re not).
The job-to-be-done parameter, based on Clayton Christensen’s Jobs Theory, is the task the customer employs to reach their goal. Essentially, Christensen says that customers don’t simply use products, they hire products to complete a job for them. Therefore, the Jobs Theory suggests that a job is the progress that a person is trying to make in a particular circumstance.2
For example, a customer doesn’t want a drill, they want a quarter-inch hole. Therefore, they “hire” a drill to achieve that for them. Depending on the merits of the drill they choose, it will perform that job well or poorly.
We could write an entire book on just Jobs Theory alone, but there are plenty of them already out there. For now, all you need to know is that it’s important to separate the job you’re exploring from the motivation of your customer.
Imagine we work on a website portal that helps customers find local service providers in their area. Think of services like lawncare, childcare, housecleaning, and others.
Let’s say we’re interested in improving our customers’ experience searching our website for local providers. We might ask our customers all sorts of questions, like how they use the search feature, what results they clicked on, or whether they use the site’s advanced search tools. We might conclude that our customers’ motivation is to “quickly find what they’re looking for.” However, this perspective is too general and lacks the specifics needed to make a meaningful impact with the customer.
It’s true that customers want fast search results, but it’s the underlying motivations for those results that provide the insights we need to make innovative products.
For example, a customer comes to our site because he’s motivated to find quality childcare. He could “hire” our search feature and quickly receive a list of providers in his area. However, we still didn’t help him achieve his goal, because he’s unable to sort providers by satisfaction rating from highest to lowest. Effectively, he has no easy way to discern which provider offers the highest-quality care.
In this example, if we separate the job-to-be-done (searching the site) from the motivation (finding quality childcare), we could track our product’s performance against these parameters separately. This separation allows us to track multiple jobs as they relate to the customer’s motivation. Our customer may have tried searching for a provider, but he might’ve also asked for recommendations on the member forum or read through provider reviews.
We’d want to know how each of these jobs performed in helping the customer achieve his goal.
If you’re in the business of creating products, then you’re in the business of solving problems. Therefore, it’s important that you continually track the problem you’re trying to solve and continually validate that problem with customers. This will help keep your development on track and away from feature or scope creep.
To effectively use the Hypothesis Progression Framework, it’s important that you and your team become competent at writing hypotheses. Writing great hypotheses can be a bit of an art form, but with a little practice, you’ll find that you’re able to create them very quickly.
A great hypothesis:
can be tested;
separates the person from their behavior;
focuses on the customer’s limitations, not your own; and
can be measured.
Let’s review these principles a little more closely.
When you begin to test your assumptions with your customers, it’s easy to fall into a false belief that, to “win,” your assumptions must be proven right. This is an improper mindset for writing and testing hypotheses. In fact:
An invalidated hypothesis is just as valuable as a validated one.
We’ll repeat this, in case you didn’t read it the first time:
An invalidated hypothesis is just as valuable as a validated one.
There are two positive outcomes for the results of a hypothesis. If you get results that align with what you expected, then your hypothesis has been confirmed. If your results are unexpected, then you’ve made a discovery. Both outcomes are equally important.
We learn just as much when we’re proven wrong as we do when we’re proven right. In some cases, we learn more.
The HPF should be used for continual learning and exploration. You should absolutely document when a hypothesis has been proven wrong. This will save everyone from treading on old ground or repeating the same mistakes. Create a culture that celebrates not just when a hypothesis has been validated, but also when a discovery has been made because a hypothesis has been invalidated.
Note that when a hypothesis has been validated or invalidated, it doesn’t become fact. You should think of hypotheses as an instrument to reduce risk. If you talk with 20 customers who all validate your hypothesis, you can have greater confidence that it’s true. Therefore, you must continually test your hypotheses to “de-risk” your product strategy. By validating or invalidating what you believe to be true, you grow your understanding of what may be successful and what may not be.
A validated hypothesis is not a guarantee, it’s a window into what could possibly be true. You should prioritize accordingly. The riskier the decision you’re trying to make, the more you want to try to validate the hypothesis that supports that decision.
We should strive to have the highest possible confidence in what we know to be true before we launch. It allows us to set expectations, position our products effectively, and better predict the outcome.
When working with teams that are writing hypotheses for the first time, we find they try to “cast a wide net.” The belief is that if a hypothesis applies to more people, it stands a greater chance of being validated. But a nonspecific hypothesis leads to nonspecific answers.
Let’s go back to our previous example: working on a website that helps customers find local service providers like lawncare, childcare, and petsitting.
Let’s say we want to explore why customers might use this type of website. So we decide to try to validate the following hypothesis:
People want to save money on services.
There’s a high likelihood that this hypothesis will be validated. After all, who doesn’t want to save money on services?! If we chose to use the validation of this hypothesis as justification to pursue our idea, we would be on shaky ground. In a sense, all we’ve validated is that people want to save money on services, not that they want to use a website to search for service providers.
Additionally, this type of general hypothesis will lead to uninformative conversations with our customers. It doesn’t drill down to the specifics of the customer’s motivation and will generate customer feedback that is all over the map or unhelpful.
Let’s take another look at the Customer hypothesis template and see how it can be used to drive at a more specific hypothesis:
We believe [type of customers] are motivated to [motivation] when doing [job-to-be-done].
We believe working parents who have children under 12 are motivated to find quality childcare for an affordable price when searching the internet for service providers in their area.
What if we found this hypothesis to be partially invalidated? Imagine we talked to working mothers and we discovered that they didn’t trust online searches when it came to their childcare needs; instead, they often preferred to use recommendations from their family and friends. They wanted to keep their search limited to people whose opinion they valued and trusted.
However, when we talked to fathers, we found that they valued having the greatest selection of results over personal recommendations. Fathers were more concerned about “missing out” on a great childcare provider, because none of their friends or family knew about it.
That would be an important discovery. We would want to iterate our hypothesis and start tracking mothers and fathers separately:
We believe working mothers who have children under 12 are motivated to find quality childcare for an affordable price when searching the internet for service providers in their area.
This segmentation will help us continually appreciate that mothers and fathers will use our site differently when it comes to searching for childcare services. It could affect the entire strategy of our website, what features we create, and how we position them (or don’t) for each customer segment.
That is why specific hypotheses are important. These subtle distinctions have huge consequences.
It’s very easy to create an identity around someone’s actions or behaviors. This can lead to convoluted hypotheses that are difficult to draw specific conclusions from. It can obfuscate underlying issues and motivations that are more critical.
For example, imagine we want to create a retention program to encourage customers not to leave our website that connects them to local service providers. We call these customers “churners”; they’ve created an account and engaged with the site for a couple of weeks, but haven’t returned in over a month. We might have a hypothesis about why they haven’t returned:
We believe churners left our website because they are no longer looking for a service provider.
This seems like a legitimate reason for leaving our website; however, it’s lacking because it focuses only on the behavior of churning—not who the customers are and what motivated them to engage with us in the first place.
How old are these churners? What is their level of skill or expertise with using the internet? Where do they live or work? What was their motivation for coming to our website in the first place? Did they fail to find a provider because there wasn’t a desirable one in their area, or because the search tool was too confusing to navigate?
It’s important to resist the urge to put the behaviors you’re trying to correct (e.g., slow adoption, bad reviews, refusal to upgrade to paid service) in your hypotheses. It wraps the customer’s identity around the negative behavior and makes it difficult to understand who they are and what truly motivates them.
If our strategy were to focus only on correcting behaviors that don’t align with our business goals, we would end up distancing ourselves from our customers and fall into a combative “us versus them” mentality.
The key is to align our business goals with their goals.
A better hypothesis would be:
We believe customers who have limited knowledge of the internet find it difficult to search for providers on our website, because they don’t know what keywords can be used to narrow their search and produce meaningful results.
This hypothesis gets at the heart of the type of customer and the unique problem they’re having (which is resulting in churn).
If this hypothesis were validated, we could explore how we might provide keyword suggestions to help customers conduct better searches or automatically provide customers with local providers without requiring them to conduct a search at all.
Hypotheses must include criteria that can be measured objectively. Without such criteria, how do we know whether our hypothesis has been validated or invalidated?
Imagine our goal is to help customers learn about premium features on our website. This is functionality that is available through a paid subscription. Let’s say we had a hypothesis like this:
We believe that customers using a free account are frustrated when searching for local service providers, because they need to see review comments, which are only provided through a paid subscription.
This is a good Problem hypothesis; however, what customer comments or behaviors will help us validate or invalidate whether it’s true?
Qualitative data can help here. This data is used to approximate or characterize your customers. It may be characteristic attributes or customer quotes that perfectly capture a common sentiment. We often say that while a picture is worth a thousand words, a direct customer quote can be worth ten thousand words.
Qualitative data helps you tell a rich and complete story that can raise your team’s customer empathy. It brings customer depth by highlighting characteristics that allow you to make an emotional connection.
Imagine you’re sharing with your leadership team something you learned from your customer development, regarding your paid subscription model.
You could say:
We learned that our customers don’t find the paid subscription valuable.
Or, you could share this direct quote from one of your customers:
I really don’t see the value of a paid subscription. I mean, you guys are charging a lot and most of the stuff I see here—these premium listing things—I mean, you should just be offering that for free. Look at your competitors! They offer all this stuff for free! Not only would I not pay for this, I would probably tell everyone to avoid your website, because you’re clearly trying to rip people off. Seeing this kind of makes me angry, to be honest.
Which quote do you believe would compel your leadership team to act?
As you begin to talk with customers, it’s important that you’re not simply checking boxes and adding up totals, but that you’re engaging in active listening and trying to capture their voice and unique perspectives.
These comments and quotes help enrich your data and boost your confidence that your hypothesis has been validated.
During product development, we tend to rely on more traditional quantitative measures like satisfaction ratings or scores that determine intent to use. These numerical scores can be easily monitored and measured throughout your exploration.
You can certainly supplement the measurements we use in our playbooks with your own KPIs (key performance indicators), goals, or business metrics.
For example, you may have a survey asking customers to rate how valuable each of your premium subscription offerings is.
“Soft” quantitative data is data that doesn’t have statistical significance. These are numbers that allow us to track if there’s a signal, without bringing in heavy formulas or statistical rigor. “Soft quant” measures are great when you’re trying to measure the effectiveness of a design iteration or the number of times a sentiment is expressed by a customer.
For example, we may decide we’re going to count the number of times customers express that a feature should be provided for free. If we talked to 10 service portal customers and 8 of them mentioned that our premium listings should be free, that would be a signal worth investigating.
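The tallying described above can be sketched in a few lines. The interview notes below are hypothetical data invented to mirror the 8-of-10 example; the sentiment tags are our own illustrative labels, not part of any real dataset:

```python
from collections import Counter

# Hypothetical notes from 10 customer interviews, each tagged with the
# sentiments that customer expressed.
interview_notes = [
    ["premium_should_be_free", "likes_search"],
    ["premium_should_be_free"],
    ["confusing_navigation"],
    ["premium_should_be_free", "confusing_navigation"],
    ["premium_should_be_free"],
    ["likes_search"],
    ["premium_should_be_free"],
    ["premium_should_be_free"],
    ["premium_should_be_free"],
    ["premium_should_be_free"],
]

# Count how many times each sentiment appears across all interviews.
counts = Counter(tag for notes in interview_notes for tag in notes)
n = len(interview_notes)
signal = counts["premium_should_be_free"] / n
print(f'{counts["premium_should_be_free"]} of {n} customers '
      f"({signal:.0%}) said premium listings should be free")
```

No statistical rigor is applied here, and that’s the point of “soft quant”: a simple tally is enough to tell you whether a sentiment is a one-off remark or a signal worth investigating.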
Identifying measurable criteria for your hypotheses doesn’t have to be complicated. Before testing your hypotheses, you should consider the signals you think you may hear from customers to either validate or invalidate your assumptions. One of the best ways to identify these types of signals is to formulate a Discussion Guide.
The Discussion Guide is a tool that we use in each of our playbooks to help formulate the types of questions we will ask customers to validate our hypotheses. What questions we ask and, more importantly, how we ask them, is a critical component of how we test our hypotheses.
Having a solid strategy before talking with customers is a great way to ensure you and your team come back with meaningful results. We use our Discussion Guides to help teams develop a shared understanding of the questions they want answered. Building the guide first ensures that teams are asking the same questions, in the same way, so that their results can be compared efficiently and effectively.
Let’s go back to our example website that helps customers find service providers. Imagine we wanted to talk to customers about any negative experiences they may have had using our search engine.
If we were to ask, “What do you dislike about our search engine?” this, of course, implies that the customer disliked something. They may feel compelled to find something they didn’t like, even if they thought the overall experience was fine. Effectively, we’re biasing customers toward our conclusion that we believe something is wrong with our search engine experience. This type of question would only seek to confirm that bias, because customers may feel obligated to name something so they could properly answer our question. Here are some examples of nonleading questions that we could include in our Discussion Guide:
Tell me about the last time you tried to search for a provider on our website. What was that experience like?
How often are you able to successfully find a service provider you are looking for? How do you know you’ve been successful?
How confident do you feel when searching our site? Do you feel like you’re able to find the best results? What makes you feel that way?
Have you ever had trouble finding a provider on our website? How did that make you feel?
If you could improve one thing about our search experience, what would it be?
Notice how these questions are open-ended. They don’t evoke simple yes or no answers. Our Discussion Guides are intended to evoke a conversation. You want to open the space that you’re exploring with customers and have them fill in the gaps with their own experiences and perspectives. Trust that customers will naturally talk about what matters to them, and structure your questions to help them do that.
As we navigate from customer development to product development, we need ways to help us generate ideas on how to effectively respond to the customer’s problem. We employ activities like “How might we?” exercises, sketching, and storyboarding to help formulate ideas at this stage.
The best way to remain objective about your assumptions is to write hypotheses that can be tested with customer feedback.
Each stage of the Hypothesis Progression Framework includes a hypothesis template that helps you formulate your assumptions into hypotheses. Each template is composed of parameters; some are shared throughout the framework, while others are unique to a specific stage.
The [job-to-be-done] parameter, shared throughout the framework, is inspired by Clayton Christensen’s Jobs Theory. His theory suggests that customers “hire” products to complete a job.
A hypothesis that has been invalidated is just as valuable as one that has been validated. Invalidated hypotheses can prevent you from making costly mistakes or heading in a misguided direction.
Writing hypotheses and validating them with customer data is all about reducing risk in your product decision-making. You’ll never be 100% confident that a product decision will be a success, but by validating or invalidating your assumptions, you can increase your confidence that you’re heading in the right direction.
Hypotheses should be specific. Having a hypothesis that is too broad does not provide actionable information or insight. Including characteristics like the type of customer and their motivations, tasks, and frustrations will give you specific details that can help you make informed decisions.
It’s important to separate the person from the behavior you’re trying to observe.
Your hypotheses should have measurable criteria so that they can be effectively tested.
A great hypothesis focuses on the customer’s limitations, not your own. It’s important that you don’t inject your technical or political limitations into the customer’s experience. For example, don’t say, “Customers are frustrated because our search algorithm doesn’t support location.” Instead say, “Customers are frustrated when they cannot find specific search results using their location.”