Aggregating Evidence
One way to deal with the limitations of evidence is to combine different forms or sources: to aggregate results from different studies of the same question. The notion behind combining evidence is that if different forms of evidence, or evidence from independent sources, agree (or at least do not contradict each other), they accumulate a kind of "weight" and are more credible together than any one of them is alone.
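One common way this intuition is made quantitative is fixed-effect meta-analysis, in which each study's estimate is weighted by the inverse of its variance, so that more precise studies count for more and the pooled estimate is more precise than any single study. The following minimal Python sketch illustrates the arithmetic; the study numbers are invented for illustration and are not taken from any real review.

    import math

    # Hypothetical per-study results: (effect_size, standard_error).
    # These values are illustrative only.
    studies = [
        (0.40, 0.20),
        (0.25, 0.15),
        (0.35, 0.25),
    ]

    # Fixed-effect meta-analysis: weight each study by the inverse
    # of its variance, so more precise studies count for more.
    weights = [1.0 / (se ** 2) for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)

    # The pooled standard error shrinks as independent evidence
    # accumulates -- the "weight" described above.
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")

Here the pooled standard error (about 0.11) is smaller than that of any individual study, which is the quantitative sense in which independent, agreeing evidence becomes more credible in aggregate. Note that this calculation assumes the studies are independent and measure a comparable effect; when those assumptions fail, pooling can mislead rather than strengthen.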
Software engineering does not yet have a consolidated body of evidence, but various efforts at consolidation are under way. Barbara Kitchenham and colleagues have been fuelling a movement for systematic literature reviews, which examine and aggregate published evidence on a specific topic using a framework of assessment against specified criteria. For instance, Jørgensen and Shepperd's work on cost modeling compiles and integrates the evidence comparing the performance of models and humans in effort estimation, and suggests that the two are roughly comparable [Jørgensen and Shepperd 2007]. Notably, doing systematic reviews (and tertiary reviews of systematic reviews, as Kitchenham et al. have done) exposes weaknesses in the evidence base itself [Kitchenham et al. 2009]:
Little credible evidence exists on a variety of topics in software engineering.
Concerns about the quality of published evidence plague the endeavor.
Concerns about the quality of the reporting of evidence (e.g., whether the method is described fully and accurately) limit its evaluation.
Systematic reviews are not ...