Software quality takes time, and good products come from properly working feedback loops. Timely feedback can mean clarity instead of confusion; validated assumptions can mean shorter development cycles.
For example, let’s say you have a project that needs to be delivered next month, but you and your development team know it will take at least two more months to complete. How do you communicate this to key stakeholders?
First off, you need to establish a shared understanding of goals and quality among all involved participants. As a developer, you tend to base your behavior, and build products and architectures, around values and assumptions. If these values and assumptions are not aligned and validated, you will never end up with what you intended—let alone on time and within budget. If you simply assume your assumptions are accurate, you can get carried away and spend far too much time on something before gathering feedback. But honestly, when would you rather hear that all of your effort was a waste: after working on it for a day, or after working on it for a week?
A feedback loop is straightforward: it uses its output as one of its inputs. In its simplest form, a developer changes a code base and then gets feedback from the system by unit testing. This feedback becomes input for the developer’s next steps to improve the code. However, reality is not that simple. Plus, humans have an irrepressible tendency to include as many people as possible in one loop.
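That simplest loop can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed workflow; the `apply_discount` function and its test are hypothetical:

```python
# A minimal tool-based feedback loop: change the code, run the test,
# and let the result (pass or fail) drive the next change.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage."""
    return price * (1 - percent / 100)

def test_apply_discount():
    # This assertion is the feedback signal: a failure tells the
    # developer exactly where the code and the assumption diverge.
    assert abs(apply_discount(100.0, 20.0) - 80.0) < 1e-9

test_apply_discount()
```

Running the test after every small change keeps the loop short: the feedback arrives seconds after the change that caused it, while the context is still fresh.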
If you follow such a course, you’ll end up with feedback chaos: massive “loops” including every potential player make it impossible to control, validate assumptions, and create a shared sense of reality. Quite simply, there’s too much going on. But there’s a solution: reflection. Reflection helps you identify existing feedback loops and determine who needs to be included. The shorter the feedback loop, the better.
There are two forms of feedback: personal and tool-based. Personal feedback is given on an interpersonal level—people discussing code, products, or processes and identifying where things can be improved. Tool-based feedback, such as static analysis, provides code-level feedback and tells you where to improve your code (or specific parts of it) to increase quality. Personal feedback is often specific to a project, more sensitive to context, and offers concrete suggestions to implement. Tool-based feedback enables faster feedback loops, allows for scalability through iteration, and is more objective. But which form of feedback is better?
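To make the tool-based side concrete, here is a toy static-analysis check, a sketch rather than a real linter; the `long_functions` helper and the 15-line threshold are assumptions for illustration:

```python
# A toy static-analysis check: flag functions whose bodies run longer
# than a threshold. Tool-based feedback like this is objective and
# produces the same result on every run, which makes it scalable.
import ast

MAX_LINES = 15  # assumed threshold, purely for illustration

def long_functions(source: str, max_lines: int = MAX_LINES):
    """Return names of functions that span more than max_lines lines."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                offenders.append(node.name)
    return offenders

# A two-line function passes the check; a 30-line one would be flagged.
print(long_functions("def tiny():\n    return 1\n"))
```

Unlike a human reviewer, a check like this never tires and never varies, but it also cannot tell you whether a long function is long for a good reason; that judgment is where personal feedback comes in.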
There is a false dichotomy between full automation and human intervention. Successful quality control combines tool-based measurement with manual review and discussion. At the end of the day, the most effective feedback loops are a mixture of daily best practices, automation, tools, and human intervention.
In an upcoming follow-up post, I’ll discuss specific practices that integrate personal and tool-based feedback. These practices will help you bolster your code and architectural quality.
This post is a collaboration between O'Reilly and SIG. See our statement of editorial independence.