What Makes a Good Metric?

Here are some rules of thumb for what makes a good metric—a number that will drive the changes you’re looking for.

A good metric is comparative. Being able to compare a metric to other time periods, groups of users, or competitors helps you understand which way things are moving. “Increased conversion from last week” is more meaningful than “2% conversion.”

A good metric is understandable. If people can’t remember it and discuss it, it’s much harder to turn a change in the data into a change in the culture.

A good metric is a ratio or a rate. Accountants and financial analysts have several ratios they look at to understand, at a glance, the fundamental health of a company.[5] You need some, too.

There are several reasons ratios tend to be the best metrics:

  • Ratios are easier to act on. Think about driving a car. Distance travelled is informational. But speed—distance per hour—is something you can act on, because it tells you about your current state, and whether you need to go faster or slower to get to your destination on time.

  • Ratios are inherently comparative. If you compare a daily metric to the same metric over a month, you’ll see whether you’re looking at a sudden spike or a long-term trend. In a car, speed is one metric, but speed right now over average speed this hour shows you a lot about whether you’re accelerating or slowing down.

  • Ratios are also good for comparing factors that are somehow opposed, or for which there’s an inherent tension. In a car, this might be distance covered divided by traffic tickets. The faster you drive, the more distance you cover—but the more tickets you get. This ratio might suggest whether or not you should be breaking the speed limit.

Leaving our car analogy for a moment, consider a startup with free and paid versions of its software. The company has a choice to make: offer a rich set of features for free to acquire new users, or reserve those features for paying customers, so they will spend money to unlock them. Having a full-featured free product might reduce sales, but having a crippled product might reduce new users. You need a metric that combines the two, so you can understand how changes affect overall health. Otherwise, you might do something that increases sales revenue at the expense of growth.
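
Here's a minimal sketch in Python of what such a combined metric could look like; the ratio and the figures are our own illustration, not any particular company's formula. One simple candidate is paid conversions per new free signup, watched alongside raw signups:

    # Hypothetical combined metric: paid conversions per new free signup.
    # All numbers are invented for illustration.

    def paid_per_signup(new_signups: int, new_paying: int) -> float:
        """Fraction of this period's new signups that converted to paid."""
        return new_paying / new_signups if new_signups else 0.0

    last_week = paid_per_signup(new_signups=1200, new_paying=36)   # 3.0%
    this_week = paid_per_signup(new_signups=900, new_paying=45)    # 5.0%

    print(f"last week: {last_week:.1%}  this week: {this_week:.1%}")
    # A rising ratio on falling signups warns that you may be trading
    # growth for revenue, which is exactly the tension a single number
    # like this can expose.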

A good metric changes the way you behave. This is by far the most important criterion for a metric: what will you do differently based on changes in the metric?

  • “Accounting” metrics like daily sales revenue, when entered into your spreadsheet, need to make your predictions more accurate. These metrics form the basis of Lean Startup’s innovation accounting, showing you how close you are to an ideal model and whether your actual results are converging on your business plan.

  • “Experimental” metrics, like the results of a test, help you to optimize the product, pricing, or market. Changes in these metrics will significantly change your behavior. Agree on what that change will be before you collect the data: if the pink website generates more revenue than the alternative, you’re going pink; if more than half your respondents say they won’t pay for a feature, don’t build it; if your curated MVP doesn’t increase order size by 30%, try something else.

Drawing a line in the sand is a great way to enforce a disciplined approach. A good metric changes the way you behave precisely because it’s aligned to your goals of keeping users, encouraging word of mouth, acquiring customers efficiently, or generating revenue.

Unfortunately, that’s not always how it happens.

Renowned author, entrepreneur, and public speaker Seth Godin cites several examples of this in a blog post entitled “Avoiding false metrics.”[6] Funnily enough (or maybe not!), one of Seth’s examples, which involves car salespeople, recently happened to Ben.

While finalizing the paperwork for his new car, the dealer said to Ben, “You’ll get a call in the next week or so. They’ll want to know about your experience at the dealership. It’s a quick thing, won’t take you more than a minute or two. It’s on a scale from 1 to 5. You’ll give us a 5, right? Nothing in the experience would warrant less, right? If so, I’m very, very sorry, but a 5 would be great.”

Ben didn’t give it a lot of thought (and strangely, no one ever did call). Seth would call this a false metric, because the car salesman spent more time asking for a good rating (which was clearly important to him) than he did providing a great experience, which was supposedly what the rating was for in the first place.

Misguided sales teams do this too. At one company, Alistair saw a sales executive tie quarterly compensation to the number of deals in the pipeline, rather than to the number of deals closed, or to margin on those sales. Salespeople are coin-operated, so they did what they always do: they followed the money. In this case, that meant a glut of junk leads that took two quarters to clean out of the pipeline—time that would have been far better spent closing qualified prospects.

Of course, customer satisfaction or pipeline flow is vital to a successful business. But if you want to change behavior, your metric must be tied to the behavioral change you want. If you measure something and it’s not attached to a goal, in turn changing your behavior, you’re wasting your time. Worse, you may be lying to yourself and fooling yourself into believing that everything is OK. That’s no way to succeed.

One other thing you’ll notice about metrics is that they often come in pairs. Conversion rate (the percentage of people who buy something) is tied to time-to-purchase (how long it takes someone to buy something). Together, they tell you a lot about your cash flow. Similarly, viral coefficient (the number of people a user successfully invites to your service) and viral cycle time (how long it takes them to invite others) drive your adoption rate. As you start to explore the numbers that underpin your business, you’ll notice these pairs. Behind them lurks a fundamental metric like revenue, cash flow, or user adoption.
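
To see how the second pair drives adoption, consider a deliberately simplified compounding model; the function and numbers below are illustrative assumptions, not real data:

    # Simplified viral-growth model: every `cycle_days`, each user invites
    # `k` new users who join. Real adoption curves are messier.

    def users_after(days: float, initial_users: int, k: float,
                    cycle_days: float) -> float:
        return initial_users * (1 + k) ** (days / cycle_days)

    # Same viral coefficient, half the cycle time: far faster adoption.
    print(round(users_after(30, 1000, k=0.5, cycle_days=10)))  # 3375
    print(round(users_after(30, 1000, k=0.5, cycle_days=5)))   # 11391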

If you want to choose the right metrics, you need to keep five things in mind:

Qualitative versus quantitative metrics

Qualitative metrics are unstructured, anecdotal, revealing, and hard to aggregate; quantitative metrics involve numbers and statistics, and provide hard numbers but less insight.

Vanity versus actionable metrics

Vanity metrics might make you feel good, but they don’t change how you act. Actionable metrics change your behavior by helping you pick a course of action.

Exploratory versus reporting metrics

Exploratory metrics are speculative and try to find unknown insights to give you the upper hand, while reporting metrics keep you abreast of normal, managerial, day-to-day operations.

Leading versus lagging metrics

Leading metrics give you a predictive understanding of the future; lagging metrics explain the past. Leading metrics are better because you still have time to act on them—the horse hasn’t left the barn yet.

Correlated versus causal metrics

If two metrics change together, they’re correlated, but if one metric causes another metric to change, they’re causal. If you find a causal relationship between something you want (like revenue) and something you can control (like which ad you show), then you can change the future.

Analysts look at specific metrics that drive the business, called key performance indicators (KPIs). Every industry has KPIs—if you’re a restaurant owner, it’s the number of covers (meals served) in a night; if you’re an investor, it’s the return on an investment; if you’re a media website, it’s ad clicks; and so on.

Qualitative Versus Quantitative Metrics

Quantitative data is easy to understand. It’s the numbers we track and measure—for example, sports scores and movie ratings. As soon as something is ranked, counted, or put on a scale, it’s quantified. Quantitative data is nice and scientific, and (assuming you do the math right) you can aggregate it, extrapolate it, and put it into a spreadsheet. But it’s seldom enough to get a business started. You can’t walk up to people, ask them what problems they’re facing, and get a quantitative answer. For that, you need qualitative input.

Qualitative data is messy, subjective, and imprecise. It’s the stuff of interviews and debates, and it’s hard to quantify or measure. If quantitative data answers “what” and “how much,” qualitative data answers “why.” Quantitative data abhors emotion; qualitative data marinates in it.

Initially, you’re looking for qualitative data. You’re not measuring results numerically. Instead, you’re speaking to people—specifically, to people you think are potential customers in the right target market. You’re exploring. You’re getting out of the building.

Collecting good qualitative data takes preparation. You need to ask specific questions without leading potential customers or skewing their answers. You have to avoid letting your enthusiasm and reality distortion rub off on your interview subjects. Unprepared interviews yield misleading or meaningless results.

Vanity Versus Actionable Metrics

Many companies claim they’re data-driven. Unfortunately, while they embrace the data part of that mantra, few focus on the second word: driven. If you have a piece of data on which you cannot act, it’s a vanity metric. If all it does is stroke your ego, it won’t help. You want your data to inform, to guide, to improve your business model, to help you decide on a course of action.

Whenever you look at a metric, ask yourself, “What will I do differently based on this information?” If you can’t answer that question, you probably shouldn’t worry about the metric too much. And if you don’t know which metrics would change your organization’s behavior, you aren’t being data-driven. You’re floundering in data quicksand.

Consider, for example, “total signups.” This is a vanity metric. The number can only increase over time (a classic “up and to the right” graph). It tells us nothing about what those users are doing or whether they’re valuable to us. They may have signed up for the application and vanished forever.

“Total active users” is a bit better—assuming that you’ve done a decent job of defining an active user—but it’s still a vanity metric. It will gradually increase over time, too, unless you do something horribly wrong.

The real metric of interest—the actionable one—is “percent of users who are active.” This is a critical metric because it tells us about the level of engagement your users have with your product. When you change something about the product, this metric should change, and if you change it in a good way, it should go up. That means you can experiment, learn, and iterate with it.
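
Here's a minimal sketch, with invented numbers, of why the ratio is the one worth watching:

    # Invented figures showing a vanity metric versus an actionable one.
    total_signups = 10_000     # vanity: can only go up and to the right
    active_users = 1_300       # needs your own definition of "active"

    percent_active = active_users / total_signups
    print(f"percent active: {percent_active:.1%}")  # 13.0%
    # Change the product, re-measure: if this ratio moves, you've learned
    # something you can act on.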

Another interesting metric to look at is “number of users acquired over a specific time period.” Often, this will help you compare different marketing approaches—for example, a Facebook campaign in the first week, a reddit campaign in the second, a Google AdWords campaign in the third, and a LinkedIn campaign in the fourth. Segmenting experiments by time in this way isn’t precise, but it’s relatively easy.[7] And it’s actionable: if Facebook works better than LinkedIn, you know where to spend your money.

Actionable metrics aren’t magic. They won’t tell you what to do—in the previous example, you could try changing your pricing, or your medium, or your wording. The point here is that you’re doing something based on the data you collect.
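
For example, a back-of-envelope comparison of those weekly campaigns might look like this sketch; the spend and signup figures are invented:

    # Invented weekly campaign data: one channel per week, equal spend.
    campaigns = [
        ("Facebook", 500.0, 210),   # (channel, spend in $, signups)
        ("reddit",   500.0,  90),
        ("AdWords",  500.0, 160),
        ("LinkedIn", 500.0,  55),
    ]

    for channel, spend, signups in campaigns:
        print(f"{channel:>8}: ${spend / signups:.2f} per signup")
    # Actionable: put next month's budget into the cheapest channel.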

Eight Vanity Metrics to Watch Out For

It’s easy to fall in love with numbers that go up and to the right. Here’s a list of eight notorious vanity metrics you should avoid.

  1. Number of hits. This is a metric from the early, foolish days of the Web. If you have a site with many objects on it, this will be a big number. Count people instead.

  2. Number of page views. This is only slightly better than hits, since it counts the number of times someone requests a page. Unless your business model depends on page views (i.e., display advertising inventory), you should count people instead.

  3. Number of visits. Is this one person who visits a hundred times, or are a hundred people visiting once? Fail.

  4. Number of unique visitors. All this shows you is how many people saw your home page. It tells you nothing about what they did, why they stuck around, or if they left.

  5. Number of followers/friends/likes. Counting followers and friends is nothing more than a popularity contest, unless you can get them to do something useful for you. Once you know how many followers will do your bidding when asked, you’ve got something.

  6. Time on site/number of pages. These are a poor substitute for actual engagement or activity unless your business is tied to this behavior. If customers spend a lot of time on your support or complaints pages, that’s probably a bad thing.

  7. Emails collected. A big mailing list of people excited about your new startup is nice, but until you know how many will open your emails (and act on what’s inside them), this isn’t useful. Send test emails to some of your registered subscribers and see if they’ll do what you tell them.

  8. Number of downloads. While it sometimes affects your ranking in app stores, downloads alone don’t lead to real value. Measure activations, account creations, or something else.

Exploratory Versus Reporting Metrics

Avinash Kaushik, author and Digital Marketing Evangelist at Google, says former US Secretary of Defense Donald Rumsfeld knew a thing or two about analytics. According to Rumsfeld:

There are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don’t know. But there are also unknown unknowns—there are things we do not know we don’t know.

Figure 2-1 shows these four kinds of information.

Figure 2-1. The hidden genius of Donald Rumsfeld

The “known unknowns” call for a reporting posture—counting money, or users, or lines of code. We know we don’t know the value of the metric, so we go find out. We may use these metrics for accounting (“How many widgets did we sell today?”) or to measure the outcome of an experiment (“Did the green or the red widget sell more?”), but in both cases, we know the metric is needed.

The “unknown unknowns” are most relevant to startups: exploring to discover something new that will help you disrupt a market. As we’ll see in the next case study, it’s how Circle of Friends found out that moms were its best users. These “unknown unknowns” are where the magic lives. They lead down plenty of wrong paths, and hopefully toward some kind of “eureka!” moment when the idea falls into place. This fits what Steve Blank says a startup should spend its time doing: searching for a scalable, repeatable business model.

Analytics has a role to play in all four of Rumsfeld’s quadrants:

  • It can check our facts and assumptions—such as open rates or conversion rates—to be sure we’re not kidding ourselves, and check that our business plans are accurate.

  • It can test our intuitions, turning hypotheses into evidence.

  • It can provide the data for our spreadsheets, waterfall charts, and board meetings.

  • It can help us find the nugget of opportunity on which to build a business.

In the early stages of your startup, the unknown unknowns matter most, because they can become your secret weapons.

Circle of Moms Explores Its Way to Success

Circle of Friends was a simple idea: a Facebook application that allowed you to organize your friends into circles for targeted content sharing. Mike Greenfield and his co-founders started the company in September 2007, shortly after Facebook launched its developer platform. The timing was perfect: Facebook had become an open, viral place to acquire users quickly and build a startup. There had never been a platform that was so open and had so many users (about 50 million at the time).

By mid-2008, Circle of Friends had 10 million users. Mike focused on growth above everything else. “It was a land grab,” he says, and Circle of Friends was clearly viral. But there was a problem. Too few people were actually using the product.

According to Mike, less than 20% of circles had any activity whatsoever after their initial creation. “We had a few million monthly uniques from those 10 million users, but as a general social network we knew that wasn’t good enough and monetization would likely be poor.”

So Mike went digging.

He started looking through the database of users and what they were doing. The company didn’t have an in-depth analytical dashboard at the time, but Mike could still do some exploratory analysis. And he found a segment of users—moms, to be precise—that bucked the poor engagement trend of most users. Here’s what he found:

  • Their messages to one another were on average 50% longer.

  • They were 115% more likely to attach a picture to a post they wrote.

  • They were 110% more likely to engage in a threaded (i.e., deep) conversation.

  • They had friends who, once invited, were 50% more likely to become engaged users themselves.

  • They were 75% more likely to click on Facebook notifications.

  • They were 180% more likely to click on Facebook news feed items.

  • They were 60% more likely to accept invitations to the app.

The numbers were so compelling that in June 2008, Mike and his team switched focus completely. They pivoted. And in October 2008, they launched Circle of Moms on Facebook.

Initially, numbers dropped as a result of the new focus, but by 2009, the team grew its community to 4.5 million users—and unlike the users who’d been lost in the change, these were actively engaged. The company went through some ups and downs after that, as Facebook limited applications’ abilities to spread virally. Ultimately, the company moved off Facebook, grew independently, and sold to Sugar Inc. in early 2012.

Summary

  • Circle of Friends was a social graph application in the right place at the right time—with the wrong market.

  • By analyzing patterns of engagement and desirable behavior, then finding out what those users had in common, the company found the right market for its offering.

  • Once the company had found its target, it focused—all the way to changing its name. Pivot hard or go home, and be prepared to burn some bridges.

Analytics Lessons Learned

The key to Mike’s success with Circle of Moms was his ability to dig into the data and look for meaningful patterns and opportunities. Mike discovered an “unknown unknown” that led to a big, scary, gutsy bet (drop the generalized Circle of Friends to focus on a specific niche) that was a gamble—but one that was based on data.

There’s a “critical mass” of engagement necessary for any community to take off. Mild success may not give you escape velocity. As a result, it’s better to have fervent engagement with a smaller, more easily addressable target market. Virality requires focus.

Leading Versus Lagging Metrics

Both leading and lagging metrics are useful, but they serve different purposes.

A leading metric (sometimes called a leading indicator) tries to predict the future. For example, the current number of prospects in your sales funnel gives you a sense of how many new customers you’ll acquire in the future. If the current number of prospects is very small, you’re not likely to add many new customers. You can increase the number of prospects and expect an increase in new customers.

On the other hand, a lagging metric, such as churn (which is the number of customers who leave in a given time period) gives you an indication that there’s a problem—but by the time you’re able to collect the data and identify the problem, it’s too late. The customers who churned out aren’t coming back. That doesn’t mean you can’t act on a lagging metric (i.e., work to improve churn and then measure it again), but it’s akin to closing the barn door after the horses have left. New horses won’t leave, but you’ve already lost a few.

In the early days of your startup, you won’t have enough data to know how a current metric relates to one down the road, so measure lagging metrics at first. Lagging metrics are still useful and can provide a solid baseline of performance. For leading indicators to work, you need to be able to do cohort analysis and compare groups of customers over periods of time.

Consider, for example, the volume of customer complaints. You might track the number of support calls that happen in a day—once you’ve got a call volume to make that useful. Earlier on, you might track the number of customer complaints in a 90-day period. Both could be leading indicators of churn: if complaints are increasing, it’s likely that more customers will stop using your product or service. As a leading indicator, customer complaints also give you ammunition to dig into what’s going on, figure out why customers are complaining more, and address those issues.
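
If you have a few periods of history, one quick way to sanity-check whether complaints actually lead churn is to correlate complaint volume in one period with churn in the next. Here's a sketch with invented figures (statistics.correlation requires Python 3.10 or later):

    # Invented figures: complaints per 90-day period, and churn (%) per period.
    from statistics import correlation   # Python 3.10+

    complaints = [12, 15, 14, 22, 30, 41]
    churn = [1.8, 1.9, 1.8, 2.1, 2.6, 3.4]

    # Pair complaints in period t with churn in period t+1.
    r = correlation(complaints[:-1], churn[1:])
    print(f"complaints vs. next-period churn: r = {r:.2f}")
    # A strong r suggests complaints lead churn -- a prompt to investigate,
    # not proof of causation.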

Now consider account cancellation or product returns. Both are important metrics—but they measure after the fact. They pinpoint problems, but only after it’s too late to avert the loss of a customer. Churn is important (and we discuss it at length throughout the book), but looking at it myopically won’t let you iterate and adapt at the speed you need.

Indicators are everywhere. In an enterprise software company, quarterly new product bookings are a lagging metric of sales success. By contrast, new qualified leads are a leading indicator, because they let you predict sales success ahead of time. But as anyone who’s ever worked in B2B (business-to-business) sales will tell you, in addition to qualified leads you need a good understanding of conversion rate and sales-cycle length. Only then can you make a realistic estimate of how much new business you’ll book.
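
Here's that estimate as a back-of-envelope sketch, using hypothetical numbers:

    # All numbers hypothetical.
    qualified_leads = 80     # leading indicator: leads qualified this quarter
    close_rate = 0.25        # historical lead-to-close conversion rate
    avg_deal = 40_000        # dollars
    cycle_quarters = 1       # typical sales-cycle length

    forecast = qualified_leads * close_rate * avg_deal
    print(f"expected bookings {cycle_quarters} quarter(s) out: ${forecast:,.0f}")
    # 80 leads * 25% * $40,000 = $800,000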

In some cases, a lagging metric for one group within a company is a leading metric for another. For example, we know that the number of quarterly bookings is a lagging metric for salespeople (the contracts are already signed), but for the finance department focused on collecting payment, it’s a leading indicator of expected revenue (since the revenue hasn’t yet been realized).

Ultimately, you need to decide whether the thing you’re tracking helps you make better decisions sooner. As we’ve said, a real metric has to be actionable. Lagging and leading metrics can both be actionable, but leading indicators show you what will happen, reducing your cycle time and making you leaner.

Correlated Versus Causal Metrics

In Canada, the use of winter tires is correlated with a decrease in accidents. People put softer winter tires on their cars in cold weather, and there are more accidents in the summer.[8] Does that mean we should make drivers use winter tires year-round? Almost certainly not—softer tires stop poorly on warm summer roads, and accidents would increase.

Other factors, such as the number of hours driven and summer vacations, are likely responsible for the increased accident rates. But looking at a simple correlation without demanding causality leads to some bad decisions. There’s a correlation between ice cream consumption and drowning. Does that mean we should ban ice cream to avert drowning deaths? Or measure ice cream consumption to predict the fortunes of funeral home stock prices? No: ice cream and drowning rates both happen because of summer weather.

Finding a correlation between two metrics is a good thing. Correlations can help you predict what will happen. But finding the cause of something means you can change it. Usually, causation isn’t a simple one-to-one relationship; many factors conspire to cause something. In the case of summertime car crashes, we have to consider alcohol consumption, the number of inexperienced drivers on the road, the greater number of daylight hours, summer vacations, and so on. So you’ll seldom get a 100% causal relationship. Instead, you’ll get several independent metrics, each of which “explains” a portion of the behavior of the dependent metric. But even a degree of causality is valuable.

You prove causality by finding a correlation, then running an experiment in which you control the other variables and measure the difference. This is hard to do because no two users are identical; it’s often impossible to subject a statistically significant number of people to a properly controlled experiment in the real world.

If you have a big enough sample of users, you can run a reliable test without controlling all the other variables, because eventually the impact of the other variables is relatively unimportant. That’s why Google can test subtle factors like the color of a hyperlink,[9] and why Microsoft knows exactly what effect a slower page load time has on search rates.[10] But for the average startup, you’ll need to run simpler tests that experiment with only a few things, and then compare how that changed the business.
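
For a simple two-variant test, a two-proportion z-test is one common way to judge whether a difference in conversion rates is likely real; the counts below are invented for the sketch:

    # Invented counts: variant A converts 200/5000, variant B 250/5000.
    from math import sqrt, erf

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed
        return p_b - p_a, p_value

    lift, p = two_proportion_z(200, 5_000, 250, 5_000)
    print(f"lift: {lift:+.2%}, p-value: {p:.3f}")   # +1.00%, p ~ 0.016
    # Draw the line in the sand first, e.g., ship B only if p < 0.05.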

We’ll look at different kinds of testing and segmentation shortly, but for now, recognize this: correlation is good. Causality is great. Sometimes, you may have to settle for the former—but you should always be trying to discover the latter.
