Chapter 4. Measures and Decision Making

What you choose to measure has an impact on both the timeliness of decisions and, as a consequence, how weak a signal you can detect. The weaker the signal you can detect, the earlier you can act and the more value you can deliver to customers. Digital transformation is in part about capturing weak but important signals so that you can make more time-critical decisions in the moment.

The thin-slice approach gives you the opportunity to test and learn in an end-to-end environment. As early as possible, you need to rely on some kind of signals to know whether you are moving closer to the outcome or further away from it. Measuring the correct signal will allow you to make better decisions earlier to maximize the value you deliver to your customers.

Of course, the challenge is not the lack of signals, but often the opposite: there are too many signals and the system is fairly noisy. The challenge lies in capturing those signals aligned with measuring customer value. The strong, easy-to-measure signals are not always the appropriate ones to measure; in fact, the strength of the signal has almost no correlation to the relevance of the signal for progressing the outcome.

In the early days of Taylorism, the Industrial Age optimized for labor productivity and economic efficiency based on logic, rationalism, and empiricism that could be assessed with standardized “certainty” of process and tasks. In the Digital Age that certainty and standardized practice have diminished, yet we still try to apply many of the theories from that time. Take software development, for example—we have known for a long time that the value of a piece of software is not proportional to the number of lines of code it contains. Still, we see example after example in the industry in which “lines of code” or “Velocity” is incorrectly used as a key signal of a programmer’s productivity. It’s as if the most difficult part of software programming were typing. Perhaps the same can be said for how finances are managed, where spending the budget exactly—neither over nor under—is celebrated in lieu of any customer value delivered as a result.

Value measures are often confused with the need for our work to appear valuable, or for ourselves to be valued as individuals. Herein lies an epiphany: true value is rarely a quantitative measure, but rather a yes/no question (was the value received?) and, as such, needs to be asked each time and depends on the transaction at hand. That’s the real issue: customer value can be difficult to measure; that is, the proper measure is not always easy to get, so organizations tend to favor simpler, less-relevant, unaligned alternatives. This is especially true when you want to focus more on customer outcomes—often because of the time lag between the transaction and the realization of the value. There are no obvious metrics that can gauge the value of peace of mind, the value of personalization, the value of convenience, or the value of feeling in control. You must live with ambiguity and sometimes use approximations.

One example, Net Promoter Score (NPS), has been widely adopted as an indicator of customer satisfaction and loyalty to a company’s products and services. Many factors can affect a customer’s overall satisfaction and loyalty over an extended period of time. Although NPS is a great tool for gauging longer-term trends in customer satisfaction, it’s not always useful for getting real-time or early feedback on the products you develop, the improvements you make, and the more granular-level tasks you do. We have seen situations in which a customer left comments saying they were completely satisfied and scored a 7 (on a scale of 0 to 10, with only 9s and 10s counting as “promoters”). If an unhappy customer also left a 7, reporting would show them as equal when clearly they are not. Worse is linking these types of trend-based analyses to an individual’s performance, given that the customer’s state and the resulting score often have nothing to do with the employee; the customer is rating their feeling toward the company, not necessarily the person. It is not difficult to understand why you hear an agent state, “Please remember the survey is about your interaction with me,” or why unhappy calls are cut off before the survey is offered.
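To make the flattening effect concrete, here is a minimal sketch of the standard NPS calculation; the survey scores below are hypothetical:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Scores of 7 and 8 are "passives" and do not count either way."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# A delighted 7 and a disgruntled 7 are indistinguishable to the metric:
# both land in the passive bucket and move the score identically.
print(nps([10, 9, 7, 7, 3]))  # 2 promoters, 1 detractor out of 5 -> 20
```

The point is not that the arithmetic is hard, but that the bucketing throws away exactly the granular, per-transaction sentiment you need for early feedback.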

An all-too-common scenario for traditional organizations is that their structures, policies, and behaviors make it too difficult to truly measure customer value (on top of the fact it’s difficult to measure to begin with). So it gets replaced by a combination of different types of measurements that appear as proxies to show progress, such as benefits to the business like revenue and profits, individual KPIs for performance, or activities like tasks, work completed, and budget spent.

To be clear, we are not advocating that traditional measures are bad and new ones are good; you need to apply the appropriate measure for the time and need. Good measures are often temporary and directly linked to the outcome—after it’s done or it’s good enough, the measurement could become meaningless. That will be the right time to throw them away and focus on the other (or next) important measure for the other or new goals. The key is to review measurements regularly to make sure that they still serve a good purpose and not just blindly use traditional long-lived measures for a purpose they were not intended. The critical step is to separate measures back into their original intent—the horses for courses approach; for example:

  • Customer value should be the determining measure of what work we do next to meet the outcome. It’s how we decide what projects, programs, initiatives, and experiments to invest in.

  • Financial measures are for company health, leading to fiscal responsibility.

  • Individual KPIs are to help people improve.

  • Improvement measures show whether the company is getting better—more efficient and more effective—in preference to activity measures.

Of course, these all need to stay aligned to the outcome dial; that is, the work we are doing is moving the needle to tell us we are achieving our strategy. Let’s now dive into some details of some of the different types of measures such as customer value, financial and individual KPIs, and improvement. We then examine the impact they have on the purpose of delivering more value to customers.

Customer Value Measures

Value is what customers pull from your business; benefits are what you receive in return, such as revenue, profit, and so on. Value to customers should lead to benefits to business. The leap of faith you need to take is that the more value you deliver to customers more often, the more profitable and successful your business will be (i.e., the more benefits you will get).

As described earlier, a major shift in corporate governance took place in the last three decades of the twentieth century focusing on maximizing short-term returns to shareholders. The shareholder-value-maximization theory dominated management principles and defined the ultimate success for a company as profit maximization. As a result, most companies focused on measuring revenue and profit-related metrics, sometimes at the cost of customer, employee, and society-oriented metrics.

However, revenue and profit metrics lag far behind the immediate impact of a company’s actions to customers’ value delivery; in fact, the short-term correlation is often the opposite. Building a new feature that improves customers’ experience will cost money and reduce profit instantly, and the future revenue as a result of increased customer satisfaction and loyalty might not be reflected in the near-term revenue.

We recently observed the use of leading indicators in an airline company. The company was trying to increase customers booking flights directly through its online channel. However, after many feature releases, search engine optimization, and other marketing efforts, conversion rate and revenue from the online digital channel remained flat. After doing some analytics, the company found out that there were definitely more visitors to the website, more people searching for flights, more people booking flights, and even more people selecting seats. However, 75% of people dropped out of the payment step—much worse than the industry average of 25%. It turned out that as the team added more features to the payment function, they also introduced small defects and delays. Even though the defect rate and delays were not higher than average compared with other user journey segments, somehow customers lost patience (or became frustrated) more easily at this step and dropped out at a faster pace. Minor defects and issues from one team ended up offsetting the great work done by half a dozen other teams. If this were caught earlier by measuring the leading indicators rather than just the revenue, the customer booking success during this period could have increased by 300%!
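The funnel analysis that surfaced the payment-step leak can be sketched as follows; the step names and counts are illustrative, not the airline’s actual figures:

```python
# Hypothetical booking funnel: (step name, customers reaching that step)
funnel = [
    ("visited site", 100_000),
    ("searched flights", 60_000),
    ("selected flight", 20_000),
    ("selected seat", 12_000),
    ("completed payment", 3_000),
]

# Step-to-step drop-out rate: a leading indicator you can watch per release,
# long before the lagging revenue number moves.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop = 1 - next_count / count
    print(f"{step} -> {next_step}: {drop:.0%} drop-out")
```

With numbers like these, the 75% drop-out at payment stands out immediately against the other segments, even while the top-line revenue figure stays flat.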

Pirate metrics, as explained in the preceding sidebar, are a great example of leading indicators, but we need to reiterate the need for them to be linked to customer value measures. The talent you need to build is using these types of cascading leading measures with a strong link to customer value; used in isolation, they are benefit measures and might not reflect value to the customer.

When you focus too much, or exclusively, on revenue and profit measurements, the balance will often tilt toward benefits to the business as opposed to delivering value to customers. What you choose to measure decides what you optimize. Optimizing the benefits to the business in the short term could significantly undermine customer value and long-term health of the business. This is why we are seeing corporate behaviors like “80% of managers willing to cut discretionary spending like R&D, advertising, and maintenance to hit the numbers.”1

But business benefit is, of course, something you need to watch so that you don’t provide customer value at unsustainable cost. Without any constraints, the easiest way to maximize value delivered to customers is to give everything away for free!

In 2014, McDonald’s tested a new menu called “Create Your Taste” in a few countries, allowing customers to build their own burger from 30 premium ingredients ordered from in-store kiosks. The strategy outcome was offering a value add to higher-end customers. The company started the rollout to China, Europe, and the US in more than two thousand restaurants in 2015. Over the course of the rollout, it became clear that besides the higher cost to produce the burger (about twice as much as a Big Mac), store owners also had to invest about $125,000 per restaurant to just install the kiosks, and it slowed down kitchen operations significantly. So although customer value was there, the cost to offer it made it prohibitive. McDonald’s decided to hold back on the rollout plan and eventually dropped the “Create Your Taste” menu altogether, replacing it with “Signature Crafted Recipes,” high-end options on a predetermined menu.

Finding the right connection between business benefit and customer value is at times very obvious and easy to synchronize; other times there is no obvious connection. In these situations, the ability to find the leading indicators becomes crucial. You need to be able to hypothesize that a change in one measure will eventually lead to a change in the greater measure, and ultimately in business benefits like profit. This hypothesis should be tested and then validated or proven wrong quickly, allowing further hypotheses to be formed and tested. Testing and identifying the appropriate leading indicators early on can help to find the correct approach to deliver customer value and generate more business benefits at the same time. Some of the potential leading indicators in the “Create Your Taste” example could be as follows:

  • Percent of customers who are aware of the new menu item in a restaurant

  • Percent of customers who stopped by at the kiosk

  • Percent of customers who used the kiosk to customize the burger but stopped at the payment step

  • Percent of customers who ordered a customized burger and then ordered a regular burger in the following visit:

    • Percent of customers who attributed it to wait time

    • Percent of customers who attributed it to price

Some of these potential leading indicators could be difficult to measure, but digital technology is making it possible and easier to capture these weak signals. Some of them could have helped to discover that “Signature Crafted Recipes” was a better offer than “Create Your Taste” before the high investment was made.

We recognize that this test-and-learn style comes up regularly throughout this book. It would be much easier if there were just a set of answers already sitting there, but such is the fast-changing digital world we live in today. It requires smaller bets with clearer measurement and the ability to validate that the connection is true. It takes time and patience to establish an undeniable link from customer value to business benefit, but after you have it, the payback period is much shorter and the return on investment far greater.

Financial Measures

It can be very easy to blame the bean counters for all the woes that stop agility. The pressure to “make the numbers” becomes the dominant conversation especially as the end of the financial year approaches.

The impact of these conversations is often that work is slowed or delayed, money is quickly spent to use up budgets, and organizations generally make poor decisions on low-value work for which budget balancing is more important than for high-value work. In the case of a major telecommunications company, we witnessed some common patterns of fiscal behavior:

  • Operating expense (OPEX) and capital expense (CAPEX) budgets being decided by different teams with no correlation; in other words, no link between the capital work you want to do and the operational cost of doing it. So you are forever struggling to pay for the staff needed for work you committed to do in business cases.

  • Overspending in the first half of the fiscal year, causing budgets to be cut in the middle of the fiscal year. This generally leads to scope pressure on the in-flight work and delays to new work, reducing value delivered for the year.

  • The predictable January hiring freeze (first month of the second half of the fiscal year). This causes functional units to feel pressure based on the luck of the draw with outstanding vacancies, not based on the value of the work.

  • The Q4 EBITDA target rise. This causes the inevitable scramble and readjustment of goals and priorities for spending.

  • The end-of-fiscal-year work slowdown due to the need to reduce spending and meet the targets, resulting in the giant hockey stick of work in month one of the following year, which in turn takes you back to the first bullet point.

That said, financial measures are important; they describe the health of the business and in large part are what you are held to account for as an indicator of business performance. It’s just that they are not the only measure and should not be used to prioritize or decide work. Their timing is usually such that they are too lagging; that is, they occur too long after the work is done to accurately attribute that work to the movement in the financial measures. Financial measures are lagging measures that ultimately should improve as a result of the value you deliver to customers.

Managers and teams need to be clearly aware of the ongoing fiscal measurements to understand the collective fiscal accountability. They should also have a clear understanding that these are constraints that limit some of their options for delivering more value to clients, not targets by which to align everything in order to maximize them. Financial measures should be reviewed periodically to ensure fiscal responsibility.

It should also be clear that financial measurements are largely lagging indicators. The key is to find leading indicators and establish the connection to the financial measurements so that the team can focus on using the former to provide feedback on the day-to-day work.

In reality, you could make this a very powerful and insightful category by making the environment “safe to fail.” Finding incentives for transparency and visibility is probably one of the more productive cultural changes a company can go through. This would allow the business to understand the relationship between the outcome desired, money spent, and value delivered along the way.

Take Figure 4-2, which shows money spent versus value delivered. This is what we all should aspire to being able to use as the key communications and reporting construct for the business. At its heart lies the concept that money spent is linear, so you are funding known capacity rather than project by project, and that you can incrementally track value delivered at points in time. The two measurements are percentages, allowing you to compare apples and oranges, and when added together at any point, lead you to a conversation about the value trajectory. It also allows you to roll the graphs up and down; one graph can be the accumulation of percentages of multiple graphs below it, an outcome graph made up of the sum of the work being done to achieve it, right up to a single view across the business.

Figure 4-2. Cost versus value graph: using percentages to track value and money allows comparisons across work to make sure you are investing in the most valuable things

If value is tracking below spend, you need to ask whether what you are doing is not having the value impact you expected and whether you should stop or pivot. If it is tracking well above, perhaps you should be looking to add further investment to accelerate and increase the value impact. Or, if as per Figure 4-2 you are starting to see a plateau of the value line, you want to be able to ask whether enough is enough. For example, if you look at the 90/70 coordinates, you want to know whether it is worth spending the last 30% of the money for only a further 10% of the value. Sometimes it is, but the question should at least be explored.
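The “enough is enough” check at the 90/70 point can be made explicit by looking at marginal value per point of spend along the curve; the (spend %, value %) trajectory below is illustrative, not taken from Figure 4-2:

```python
# Hypothetical cumulative (spend %, value %) points along a cost-vs-value curve.
trajectory = [(0, 0), (30, 35), (60, 75), (70, 90), (100, 100)]

# Marginal value per point of spend on each segment; a value well below 1.0
# near the end of the curve is the cue to ask whether enough is enough.
for (s0, v0), (s1, v1) in zip(trajectory, trajectory[1:]):
    marginal = (v1 - v0) / (s1 - s0)
    print(f"spend {s0}%->{s1}%: {marginal:.2f} value per spend point")
```

Here the final segment returns roughly 0.33 value points per spend point—the last 30% of the money buys only 10% more value—which is exactly the conversation the 90/70 coordinates should trigger.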

Of course, Agile is a dependency for this kind of view because it is based on the incremental delivery of consumable value through appropriate design slicing. Work needs to be done in small autonomous chunks of value for which relative value percentage can be reasonably predicted and understood. Where many digital-native companies have succeeded is in scaling Agile in ways that allow them to continue to achieve beneficial value with continued spend. Most organizations reach points of diminishing returns as coordination costs overwhelm value beyond a certain point, but digital-native companies can use such measures to realize when it’s worth investing in shared self-service tooling, for example.

In this context, meeting commitments is more about providing a safe environment in which you can be transparent enough to track the progress toward a realized outcome, building in a way that allows you to change course when needed. (We refer to this more when we discuss finance constraints in Chapter 8.) Then, with a lightweight governance process in place, you should be able to review and realign customer value delivery and fiscal responsibility every two months or quarterly—certainly more frequently than once or twice a year. It will help mitigate, if not remove, some of the fiscal-year behavior patterns mentioned earlier.

Individual KPIs

The need for a business to introduce measurements to gauge people’s performance can make value measurements complicated. Individual KPIs are often associated closely with recognition, financial reward, career progress, power, and status. KPIs need to be designed to measure and reflect the value of an individual’s activity in isolation from the rest of the team, division, and company. This need to isolate and break out an individual’s contribution tends to make the KPIs too granular and sometimes disconnected from the team and company goal: delivering more customer value. They also tend to be designed more to measure the person’s contribution to the business than value to the customer. For example, in a global transportation service provider, there was a clear theoretical understanding of the importance of customer experience because one of the company’s four strategic goals was to increase customer satisfaction. But when we looked into the division’s team and individual KPIs that were being measured regularly, they were mostly revenue, margin, and efficiency improvement targets. Most of the people talked about value-based, customer-centric outcomes and goals, but this was not being reflected sufficiently in the metrics.

KPIs were invented in a time when the pace was slow and much could be planned and scoped. They measure the success, output, quantity, or quality of processes or activities already in place.

Google, LinkedIn, Twitter, and a few other companies have been using a simple and adaptive framework—objectives and key results (OKR). OKR is a goal system used to create alignment and engagement around measurable goals. The main difference from the traditional planning process is that OKRs are frequently set, tracked, and reevaluated—usually quarterly. OKR is a simple, fast-cadence process that engages each team’s perspective and creativity.

When designing a performance review process and KPIs, you need to shift the focus away from managing performance to fueling performance. Indications are now that individual performance measures are suboptimal when compared to team-based performance measures. Performance comes from shared success, from the ability of people to use their talents, and from alignment to the purpose of the company. Individuals need to be able to see the value in their performance and their work as the relationship to the value they are delivering to customers.

Performance reviews should be designed around a more regular review of a person’s ability to help teams achieve their contribution toward the outcome instead of their ability to optimize a fixed set of processes and activities throughout an entire year. If the outcome is achieved, the team and the individuals should succeed; KPIs should not be possible to achieve if the outcome is not. How often do we see bonuses being paid to individuals when the organization has performed poorly, or an individual achieves their target but customers are suffering? All too often there is conflict between doing the right thing by customers and achieving a bonus. People are forced to choose between the right reaction and disadvantaging themselves.

We witnessed an interesting experience of a top-performing call center operator. He took a call from a woman claiming to have a great offer from a competitor. She wanted to know if this company could reward her loyalty with a better deal. Without hesitation, the operator replied yes and offered to have someone call her to discuss it further—in four days’ time. She huffed and abruptly hung up. The operator was angry, threw his headset, and stormed off. You see, the operator was paid on sales leads generated, and his performance was measured by average handling time. Even with the ability to help the customer immediately, he would have been penalized for having her on the phone too long and lost his “lead bonus.” Sales leads and efficiency are measurements of benefits to the business, not value to customers, and these can put agents into ridiculous situations that hurt the customer, the organization, and the agents’ performance.

Individual KPIs are quite possibly the greatest constraint to being able to measure value, especially given their attachment to pay raises or performance ratings. You need to find better ways to measure individual performance and align it better to the goals and purposes of customers so that it doesn’t get in the way of doing real, meaningful work serving the customers. Based on what we have seen, the best form of performance review is teaching people how to give feedback in the moment to peers, which enables teams to help each other improve as they work rather than at point-in-time reviews. And the best bonus structure is to attach targets to team goals one level up so that individuals are accountable for the collective impact of the entire team. This tends to make it easier to connect to customer outcomes.

Improvement Measures

Improvement versus achievement is a constant tension that should exist in the organization. Achievement is about whether you are progressing toward the outcomes, whereas improvement metrics inform you as to whether you are getting better at doing that. They make you better.

There is a difference between how much you have done and how much value you have delivered. In the absence of value measurement—because value is difficult to measure and sometimes nonquantitative—activities, or work being done, become the replacement measure.

The Waterfall software development approach is a classic example of measuring work completed rather than value delivered. It breaks a product cycle into multiple steps: requirements gathering, feature and architecture design, coding and implementation, testing, and, finally, going live. This is a perfect model if there is no ambiguity about the requirements, no change of strategy or market conditions during development, no surprises in technology platforms and integration points, and no difference between production and development environments. We learned a long time ago that this would never actually be the case, and that’s partially how the Agile software development methodology was started: addressing Waterfall’s lack of flexibility and adaptability.

The other interesting symptom of the Waterfall approach is that projects almost always appear to be on track until they are about to finish. Requirements gathering, design, coding—none of these phases delivers any value to customers because customers simply cannot use the system yet. However, from the work-completed perspective, the project team is humming along smoothly, celebrating one milestone after another—neat documents and dashboards are produced and presented perfectly along the way. Alarms generally start to go off in the late stages of coding and testing. Suddenly, instead of 80% finished, only 20% of the features are production ready—if the software is ready to go live at all. Project status turns from green to deep red overnight. The schedule is delayed; the budget is increased. According to a Harvard Business Review analysis, the average project cost overrun was 27%, with one in six projects overrunning cost by 200% and schedule by almost 70%.

Using classic Agile and Lean continuous-improvement thought patterns—understanding current state, setting target states, and iterating toward them—is a foundational mindset for digital transformation. It supports the capture of weaker signals, helps you to identify constraints much earlier, encourages test and learn activities, and breeds experimental mindsets.

Improvement metrics have the added benefit of being more translatable into the traditional quantitative measures that people unfamiliar with this way of working are used to, while still giving you qualitative insights.

Some useful examples of improvement metrics include:

Quality

Defect reduction, automated test coverage, reduction in calls, work stopped because you learned new information.

Productivity

Cumulative flow diagrams for identifying areas for improvement in wait time or handoffs, the amount of work that needs to leave a team to be completed, capabilities you have versus those you need.

Throughput

The amount of value delivered in a period; the time it takes to validate ideas.

Predictability

Much like the cone of uncertainty, you should see predictability increase with time whether that be in capacity planning, skills planning, financial planning, or the amount of ideas you are able to exploit.

Financials

Money spent versus value delivered, cost per unit of value, portfolio balance between exploring and sustaining the business, return on investment (ROI) in outcomes, the impact each piece of work delivers against the customer value outcome.

Agile software delivery and continuous delivery (CD) methodologies emphasize the importance of taking a thin slice of the work, finishing it end to end, and delivering value before moving on to the next thin slice. That is also the root of the thin-slice approach we are advocating in the digital transformation journey (see Chapter 3). Using such an iterative, incremental approach creates the foundation and possibility of measuring end-to-end value instead of activities and work completed. But it does not guarantee that. In fact, we have seen too many Agile software projects measured and reported on using activities done and work completed in a similar, if not exactly the same way, as Waterfall projects.

There seems to be a strong institutional desire to keep track of how much work is done and how much effort is put in. This is where we have seen gaming behaviors. When this tracking is tied to the bidding war for budget, you end up with fictional commitments and spend a large part of the time managing the message to demonstrate progress—or to justify the lack of it.

Improvement measures are a key muscle that organizations need to build in order to optimize for customer value and remove the constraints to its delivery. They tell you whether your digital transformation is driving the intended change and improvements. In Chapter 12, we discuss the importance of building a continuous learning and continuous improvement culture. Recognizing and implementing improvement metrics will help to reinforce that culture.

Putting It All Together

When the lights go out across the city, the power companies are at their best because the measure is clear to everyone: get the lights back on. A clear measure and a clear purpose make cross-functional teams bound into action. Talent and skills come to the fore, irrespective of roles and individual KPIs, and everyone just gets it done.

Random or unrelated measures lead to random and unaligned work. The focus becomes being busy instead of effective, complete instead of valuable. The key is to provide a set of measures that helps teams make in-the-moment decisions to ensure they do work that achieves the outcome—not merely work that is complete—and that they get better at delivering it. A useful measure aligns directly to outcome achievement—both the effectiveness and efficiency aspects. It is represented by value to the customer and is leading enough that it can be measured in a timely fashion to know that the work is achieving its intent.

The measurements should be leading indicators on customers getting the value they want. There could be many potential leading indicators—some stronger, some weaker—to consider. You need to make sure that you test the connection with business benefit indicators (often lagging), and find the ones that can link them together. You also need to be aware that work completed is not equivalent to value delivered. There is a lot to learn from Agile software development methodology in terms of both the ability to measure value delivered and the environment to encourage visibility and transparency to mitigate gaming behavior.

Achieving your outcomes and improving how you achieve them should not be mutually exclusive. Improvement measures will complement achievement measures as well as reinforce the continuous improvement culture. In addition, you need to be aware of the impact of personal KPIs and, if possible, redesign them to be more team based, customer focused, and adjustable through regular reviews.

Here is our final thought on measures: a great way to recognize whether a measure is correct is to apply the reverse and see if it holds true, much like the scientific method. So if you didn’t do it, would the measure still hold? If you did nothing, could the measure still change? If you did the opposite, would the measure reduce? These are all helpful checkpoints to ensure you are applying a suitable measure.

Key Points

Following are the key points that we hope you take away from this chapter as well as two actions for you to take to begin implementing what we’ve discussed:

  • Optimize your business to measures of customer value.

  • Many traditional measures, in the way they are currently applied, undermine teams’ ability to deliver customer value.

  • Apply the appropriate type of measure for the decision being made.

Here are two actions to take:

  • Create measures to tell you what work to do and whether you are improving: Redesign your organization’s current measures into their categories of “intent” and by the leading/lagging nature of them, separating them from the measures that determine what work to do.

  • Learn how to measure your return on value delivered: Practice the cost versus value delivered graphs by trying to build some for your current key programs.

1 John Graham et al., “The Economic Implications of Corporate Financial Reporting,” Journal of Accounting and Economics (September 15, 2005).
