Building a better code review process

By approaching change as if you're the customer and knowing the risks involved when making changes to your code base, you can reduce the amount of churn in the review process and increase the chances of shipping code that sticks around.

By Gregory Brown
June 1, 2016
Traffic Light Tree (source: Kevan, CC BY 2.0)

When you conduct a code review, do you look at the implementation code, the tests, or the documentation first? Hopefully by the end of this article, I can convince you that “none of the above” is the right answer to that question.

In the last few years, I’ve done thousands of code reviews for hundreds of developers—in commercial projects, in open source projects, and in classroom settings. Although there are some exceptions, the overwhelming majority of programmers I’ve worked with have assumed that my main focus would be helping them drive up their code quality. That’s a worthwhile goal, but it’s never my main priority, and so I often catch reviewees by surprise with my way of doing things.


Start with production testing

The first thing I insist on is being able to see the code running in as close to a production-like environment as possible. In some cases, this means deploying code behind a feature flipper and “doing it live”; in other cases, a staging or development environment will do. In some situations, a video or a screenshot may suffice, but that’s a last resort for me.
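
For readers who haven’t used a feature flipper before, here is a minimal sketch of the idea, assuming a hypothetical flag name, user groups, and recipe-rendering feature; a real flag store would live in a database or a dedicated service rather than a dictionary:

ENABLED_FLAGS = {"new_recipe_notes": {"reviewers", "beta_testers"}}

def flag_enabled(flag_name, user_group):
    """True if the named flag is switched on for the given user group."""
    return user_group in ENABLED_FLAGS.get(flag_name, set())

def render_recipe(recipe, user_group="everyone"):
    # The flagged-on branch is the change under review; the default stays as-is.
    if flag_enabled("new_recipe_notes", user_group):
        return f"{recipe['title']} -- notes: {recipe.get('notes', '')}"
    return recipe["title"]

if __name__ == "__main__":
    recipe = {"title": "Miso soup", "notes": "Use white miso for a milder broth."}
    print(render_recipe(recipe))                           # existing behavior
    print(render_recipe(recipe, user_group="reviewers"))   # new behavior, flag on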

From there, I take a close look at any sample data that is needed to try out the new feature or bug fix. If it’s missing entirely and I’m expected to guess at it, that’s a giant red flag. If some data is present but it’s contrived or anemic, I still remain deeply suspicious and typically ask to see real examples that reflect what we’d expect to see in production after this change has been rolled out and in use for quite a while.

Sometimes realistic examples are readily available and the test data was kept minimal just for the sake of developer convenience. In those cases I look for ways to automate the construction of more realistic data sets, or attempt to grab sanitized snapshots of real production data. But in other cases—and this happens often—the lack of realistic examples comes from an underspecified requirement. For example, a bit of “lorem ipsum” text for a notes field on a cookbook recipe looks just fine in a mockup, but without knowing what kind of notes you expect to attach to those recipes… it’s impossible to evaluate how best to implement it, or if it ought to be implemented at all.
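
As an illustration of automating more realistic data sets, here is a rough, hypothetical seeding sketch for the cookbook example above; the Recipe structure and the sample notes are invented, and a real script would draw on sanitized production snapshots or domain research rather than a hard-coded list:

import random
from dataclasses import dataclass

@dataclass
class Recipe:
    title: str
    notes: str

REALISTIC_NOTES = [
    "Doubled the garlic; reduce the salt next time.",
    "Needs about 10 extra minutes in a convection oven.",
    "Substituted maple syrup for honey -- worked fine.",
    "Too spicy for the kids; halve the chili flakes.",
]

def build_recipes(count=25, seed=42):
    """Generate recipes whose notes resemble what real users actually write."""
    rng = random.Random(seed)
    return [
        Recipe(title=f"Weeknight recipe #{i}", notes=rng.choice(REALISTIC_NOTES))
        for i in range(count)
    ]

if __name__ == "__main__":
    for recipe in build_recipes(count=3):
        print(recipe)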

Approach the change as if you’re the customer

If we make it this far, we’ve got code running in a production-like environment, with (possibly imaginary but realistic) production-like data. At this point, I probably haven’t looked at the code for the change under review at all, but in most cases I’ve already helped the reviewee catch some bugs, misconceptions, or holes in their knowledge that can be filled in before I even start interacting with whatever they’ve built.

From here, it becomes a game of “Let’s pretend we’re the customer, what is the job we’re trying to do?” For this, I usually don’t ask the developer to answer that question themselves, but instead take a look at any notes from actual feature requests and bug reports, or I try to learn a bit about the problem space so that I can imagine myself in the shoes of the person who will ultimately use this new code.

With that basic background information, I go to work just trying out the software… using my best guess at the typical use cases, the edge cases, and a handful of pathological cases that are still within the realm of reasonable behavior. I take notes on things that break, things that work awkwardly, as well as anything that confuses me along the way.
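
To give a flavor of what that exploration might cover, here is a hypothetical sketch of typical, edge, and pathological-but-plausible inputs for the recipe-notes example; the add_recipe_note function and its 500-character cap are stand-ins for whatever the change under review actually does, not part of any real system:

def add_recipe_note(note):
    """Stand-in for the feature under review: validate and store a recipe note."""
    if note is None or not note.strip():
        raise ValueError("note must not be empty")
    return note.strip()[:500]  # assume notes are capped at 500 characters

CASES = {
    "typical": "Doubled the garlic; reduce the salt next time.",
    "edge: whitespace only": "   ",
    "edge: very long note": "x" * 10_000,
    "pathological but plausible: markup pasted in": "<b>great</b> with extra lime",
}

if __name__ == "__main__":
    for label, note in CASES.items():
        try:
            stored = add_recipe_note(note)
            print(f"{label}: accepted ({len(stored)} chars stored)")
        except ValueError as err:
            print(f"{label}: rejected ({err})")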

Know the risks of the change you’re deploying

Assuming this process doesn’t reveal major issues that would prevent me from continuing my review, I then look at the code for the first time. What I’m looking for on my first pass is any red lines in the diff… indicating a change or deletion to some existing code. A second quick pass helps me see whether any new code has been added that might affect other areas of the system… things like updated configuration files, any modifications to global objects, changes that might cause new errors to be raised, etc.
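
One way to mechanize that first pass, assuming the change lives on a branch in a Git repository with a main base branch, is to scan the diff for files that lose lines or that look like configuration. The path patterns below are rough guesses, not a definitive list:

import subprocess

CONFIG_HINTS = (".yml", ".yaml", ".ini", ".toml", ".env", "settings", "config")

def scan_diff(base="main"):
    """Return (files with deleted lines, files that look like configuration)."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    deletions, config_changes = [], []
    for line in out.splitlines():
        _, deleted, path = line.split("\t", 2)
        if deleted not in ("0", "-"):   # existing lines were changed or removed
            deletions.append(path)
        if any(hint in path.lower() for hint in CONFIG_HINTS):
            config_changes.append(path)
    return deletions, config_changes

if __name__ == "__main__":
    deleted, config = scan_diff()
    print("Files with removed or changed lines:", deleted or "none")
    print("Configuration-looking files touched:", config or "none")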

If I suspect that the change under review isn’t completely self-contained, then it’s a little tricky to decide how to proceed. In simple cases, it might mean doing a little more manual testing in areas that could be affected by the change to make sure they’re still working as expected, and then doing a quick review of any relevant automated tests to see whether the coverage is adequate. In more complicated cases, it means having a longer conversation with the reviewee to make sure they have thought through the potential consequences of the modifications and have some sort of test plan for them.

If a change is self-contained, then I’m slightly more comfortable but I still feel the need to scan for potential risk factors. In practice, this means looking at any system ingress and egress points for potential danger (e.g. user input, system I/O, email alerts, web service calls, etc.)—and there is a whole host of criteria to be applied there that I really should turn into a checklist some day.

Putting all of this together, I end up with an informal heuristic for evaluating the risk factors of a particular change. This informs how carefully I should audit tests, error handling, documentation, overall code quality, etc. in my review. Mostly safe changes might get a light pass of revision suggestions along with a handful of clarifying questions. More dangerous changes are gone over with a fine-tooth comb, and the question of “is there a safer way of doing this?” gets asked before we even head down that road.
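
As a purely illustrative example of such a heuristic (the signals and weights below are invented, not a prescribed scale), the scoring might look something like this:

RISK_SIGNALS = {
    "deletes_or_changes_existing_code": 3,
    "touches_global_config_or_state": 3,
    "handles_user_input": 2,
    "performs_io_or_network_calls": 2,
    "sends_emails_or_notifications": 1,
}

def review_depth(observed):
    """Map the risk signals observed in a change to how deep the review should go."""
    score = sum(weight for signal, weight in RISK_SIGNALS.items() if signal in observed)
    if score == 0:
        return "light pass: style suggestions and clarifying questions"
    if score <= 3:
        return "standard pass: check tests, error handling, and docs"
    return "deep audit: first ask whether there is a safer way to do this"

if __name__ == "__main__":
    print(review_depth({"handles_user_input"}))
    print(review_depth({"deletes_or_changes_existing_code", "performs_io_or_network_calls"}))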

Stay focused on the product, even as you review the code

The questions I find essential to ask before even thinking about issues like code quality are the ones that concern the overall external quality and utility of your software:

  • Does this change solve a real problem for the customer?
  • Is the solution robust, or merely adequate?
  • How do we know that this change is a good investment, relative to other things we could be doing instead?
  • What new problems might arise as a result of this change, and how will we mitigate them?
  • Can the change be easily reversed and/or gracefully degraded if something goes wrong?
  • Are there ongoing maintenance costs associated with this change? What are they?

Simply asking these questions makes it possible to catch issues early on in the review process before you bother investing energy in things like coding style, design patterns, opportunities for refactoring, or strategies for automated testing. It also forces a strong outside-in focus, which is very much in the interest of the people who will end up using your software.

This process also nearly ensures that you will only end up doing detailed code reviews on patches that have a high probability of shipping and sticking around for the long haul. This reduces the amount of churn in the code review process, and helps avoid cleaning up code for the sake of cleaning it up rather than to serve a real need.

So what do you look at first when reviewing a new change to a software system? That’s easy: You start with the customer’s need, you proceed to the customer’s happiness, and then you work your way down to the code from there.


Editor’s note: Gregory Brown is working on a book about the non-code aspects of software development, called “Programming Beyond Practices.” Follow its progress here.
