Weights and measures. (source: Pixabay)

The problem: How do I measure engineering team success?

I’m running an engineering organization for the first time, and my boss is a first-time startup CEO with a non-technical background. The CEO wants to report statistics to the board on how the engineering team is doing, so she’s asking me for metrics. Neither of us really knows what I should be measuring and reporting, though. I know that we need to hire more, and it always feels like we aren’t shipping as much code as we should, but every measure I look at seems to be lacking something. What are the critical things I should look at to get a good sense of “velocity” and team performance? What should I be reporting to show success?

The solution: Measure what matters while it matters

Measurement is a dangerous game, or, as @raffi put it in a tweet: “You can’t game what you don’t measure.” Humans can and will manipulate the rules of any system you put in front of them. This doesn’t mean you should give up on measurement. Rather, it means that measurement needs to be focused on the right goals for right now, and you should expect that what you measure will change frequently as the state of your systems and the business changes.

So, for starters, this push to measure may need to be met with a counter-push. If the business itself does not have goals that it is measuring, and especially if the business itself is not aligned across departments on those goals, the first step in good measurement is getting company goal alignment. What are you, as a business, trying to achieve this quarter? The engineering department is not a standalone function, and it’s certainly not a factory cranking out code widgets. Divorcing the goals set for engineering productivity from their relationship to the overall business goals is the fastest way to create busywork and company dysfunction.

Business goal setting is not easy, but there are some methodologies out there that you can try. One method, popularized by Intel and Google, is that of Objectives and Key Results (OKRs). This practice of setting big goals and breaking them down into measurable results requires the whole company to align on what is important. The value of doing a good OKR process is that it forces focus, at least for a quarter at a time, on goals whose results can actually be measured.

All that said, there will be times when you have business goals but are still struggling for tech-specific goals. What should you do then?

Practical advice: Avoid most software development-specific measures

There are plenty of classic measures of software development velocity. If you are following a variant of Agile, it might be the rate of tickets closed over time against a particular milestone, or perhaps the number of “points” that your team is accomplishing in each sprint. Personally, I hate almost all of those measures for reporting outside of the immediate team because they are at the extreme wrong end of the measurement spectrum. Tickets not getting closed can indicate anything from the ticket size being too big to a lack of detail in the tickets to the engineers being mired in technical debt and too busy putting out production fires. Until you understand the actual cause of this lack of “velocity,” measuring it only tells you that there might be some problems. Furthermore, each team will have slightly different ways of defining these measurements depending on what works for them, and I don’t think it is wise to force every team to standardize to one very specific Agile measurement process. These measurements are intended for use by the team itself, not for reporting to outside interests.

What do you really want? What is it that makes your business succeed? For most companies, the thing that drives success more than anything else is the frequency of releasing code. There’s a reason that continuous deployment is such a hot topic, that people have spent a great deal of energy describing how to get code into production quickly. The act of releasing frequently forces reckonings throughout your software and process stack. If the code is too brittle to release easily, you will need to fix that. If there are too many places that require manual QA, you’ll need to fix that. If you have a complex series of steps required to get code into production, you will need to fix that. If developers don’t break their changes down into small chunks, and projects are contingent on several teams coordinating work together to release, you’ll definitely need to fix that. None of these things shows up precisely in a pure ticket-closing measure of velocity, but they are all critical to the thing you actually care about: getting features in front of your customers quickly so you can start to learn from them.

If you want to report something related to engineering productivity to your executive team, I would start by reporting on release frequency, especially if releases are scheduled only once a week (or less!) and you often miss them due to issues preparing the code. Note, however, that if you spend all of your time working on internal tools and processes to make releases faster or more automated without actually then releasing features to your customers, you have missed the point. The point is not the perfect continuous deployment system. The point is enabling engineers to build features and release them easily so that your business can learn.
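Release frequency is also cheap to compute from whatever deploy record you already keep (deploy-tool logs, git tags, a spreadsheet). A minimal sketch, using a made-up list of release dates:

```python
from datetime import date

# Hypothetical release log: the dates your team shipped to production.
# In practice, pull these from your deploy tooling or git tags.
release_dates = [
    date(2024, 5, 1), date(2024, 5, 3), date(2024, 5, 10),
    date(2024, 5, 17), date(2024, 5, 24), date(2024, 5, 28),
]

# Span covered by the log, in days, then normalize to releases per week.
span_days = (max(release_dates) - min(release_dates)).days
releases_per_week = len(release_dates) / (span_days / 7)

print(f"{len(release_dates)} releases over {span_days} days "
      f"= {releases_per_week:.1f} per week")
```

Tracked over several quarters, a number like this tells an executive audience far more than sprint points: is the team shipping more often, or less?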

One other metric that can be critical to technical success, of course, is uptime of your systems. If you have constant production outages or customer-facing errors, it is likely that your technical instability is causing your customers to abandon your product. It’s probably also causing your engineering and operations teams to burn out from alerts and being on-call. Even if you don’t report uptime and stability metrics to the board, the process of gathering these metrics should be taken seriously for the operational health of your engineering team. Similar to releasing more frequently, improving uptime and reducing incident frequency often forces the team to fix things that are currently slowing down development of features.
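Uptime is similarly easy to derive once you record incident durations. A sketch with hypothetical incident data, assuming a 30-day month and that downtime minutes are logged per incident:

```python
# Hypothetical incident log: (description, minutes of customer-facing downtime).
incidents = [
    ("checkout errors", 42),
    ("API outage", 15),
    ("login failures", 8),
]

minutes_in_month = 30 * 24 * 60  # assuming a 30-day reporting window
downtime = sum(minutes for _, minutes in incidents)
uptime_pct = 100 * (1 - downtime / minutes_in_month)

print(f"{len(incidents)} incidents, {downtime} minutes down, "
      f"uptime {uptime_pct:.3f}%")
```

Reporting incident count alongside uptime matters: many short outages and one long one can produce the same percentage but point to very different problems.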

Hiring and retention are two areas where metrics can really shine a light on your areas for improvement. If you have a goal to increase the size of your team, you can start to notice when and why you are failing to hire quickly. Are you not getting enough candidates? Are you moving too slowly on offers? Are you rejecting most people? Are your offers getting rejected? What is the average tenure of your team? What is the average experience level of your new hires? All of these statistics can teach you a lot about potential failures in the hiring process as well as gaps in developing and retaining existing talent. 
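The funnel questions above reduce to pass-through rates between stages. A sketch with hypothetical quarterly counts, showing where candidates drop out:

```python
# Hypothetical hiring funnel for one quarter: (stage, candidate count).
funnel = [
    ("applied", 200),
    ("phone screen", 60),
    ("onsite", 20),
    ("offer", 6),
    ("accepted", 3),
]

# Pass-through rate from each stage to the next.
rates = {}
for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    rates[next_stage] = 100 * next_n / n
    print(f"{stage} -> {next_stage}: {rates[next_stage]:.0f}%")
```

An unusually low rate at one stage localizes the problem: a weak top of funnel, an overly harsh interview loop, or uncompetitive offers each show up in a different place.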

Wrap-up

All of the measures I listed above can go wrong. You can over-focus on the ability to ship code to the extent that too many people are working on the toolchain and not enough people are building features. If your team can’t tolerate downtime or incidents, you will be tempted to release infrequently to prevent errors. Teams that focus on story points might cause developers to inflate their project estimates so they will not be rushed to finish things, or to make themselves look good for completing so many points. Focusing on hiring, but ignoring retention, can result in hiring a bunch of people who quit within a year because they are unhappy with the workplace.

Ultimately as a leader, you must figure out how to tie the activities of your team to the success of the overall business, and focus on measuring and improving those related activities. You are unlikely to find a single golden set of unchanging metrics, but the search for measurements that are appropriate for your current situation has value in itself. And finally, don’t forget to reflect on what you learn over time from these measurements. It’s one thing to set a goal or make a plan, but if you don’t ever go back and look to see not only how you did against that goal but what you learned in the process, you are play-acting at measurement, when you should be using it to learn.
