Evaluating cybersecurity product claims with a critical eye

Questions to help you weigh the true value of “scientifically proven” security solutions.

By Josiah Dykstra
September 15, 2016

You probably encounter new cybersecurity products and services every day, whether from peers, online, or at conferences. RSA Conference 2016, for example, had more than 475 exhibitors! With so many options, how do you see through the tactics and claims competing for your attention?

The appendix to my book Essential Cybersecurity Science discusses bad science, scientific claims, and marketing hype: in essence, approaches for evaluating the validity and value of cybersecurity product claims.

Here is an excerpt from the appendix that provides some example questions you can use when presented with a cybersecurity product or solution. These questions can help you bring a healthy skepticism to your own evaluation of other people’s claims.

Clarifying questions for salespeople, researchers, and developers

Your experience and expertise are valuable when learning about and evaluating new technology. The first time you read about a new cybersecurity development or see a new product, chances are your intuition will give you a sense of its value and utility for you. Vendors, marketers, and even researchers are trying to convince you of something. It helps to have some clarifying questions ready that probe beneath the sales pitch. Whether you’re chatting with colleagues, reading an academic paper, or talking with an exhibitor at a conference, these questions can help you decide for yourself whether the product or experimental results are valid.

  • Who did the work? Are there any conflicts of interest?
  • Who paid for the work and why was it done?
  • Did the experimentation or research follow the scientific method? Is it repeatable?
  • How were the experimental or evaluation data sets or test subjects chosen?
  • How large was the sample size? Was it truly representative?
  • What is the precision associated with the results, and does it support the implied degree of accuracy?
  • What are the factually supported conclusions, and what are the speculations?
  • What is the sampling error?
  • What was the developer or researcher looking for when the result was found? Was he or she biased by expectations?
  • What other studies have been done on this topic? Do they say the same thing? If they are different, why are they different?
  • Do the graphics and visualizations help convey meaningful information without manipulating the viewer?
  • Are adverbs like “significantly” and “substantially” describing the product or research sufficiently supported by evidence?
  • The product seems to be supported primarily by anecdotes and testimonials. What is the supporting evidence?
  • How did you arrive at causation for the correlated data/event?
  • Who are the authors of the study or literature? Are they credible experts in their field?
  • Do the results hinge on rare or extreme data that could be attributed to anomalies or non-normal conditions?
  • What is the confidence interval of the result?
  • Are the conclusions based on predictions extrapolated from data other than the actual data?
  • Are the results based on rare occurrences? What is the likelihood of the condition occurring?
  • Has the result been confirmed or replicated by multiple, independent sources?
  • Was there no effect, no effect detected, or a nonsignificant effect?
  • Even if the results are statistically significant, is the effect size so small that the result is unimportant? (See the sketch after this list.)
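
To make that last question concrete, here is a minimal sketch in Python of how a large sample can make a tiny difference statistically significant even when the effect size is too small to matter. The scenario, the detection rates, and the sample sizes are entirely hypothetical, invented only for illustration; the test is a standard two-proportion z-test computed with SciPy.

    # Hypothetical illustration: statistical significance vs. effect size.
    # All numbers are made up; they do not describe any real product.
    from math import sqrt
    from scipy.stats import norm

    # Two hypothetical detectors, each evaluated on 1,000,000 samples.
    n_a, detected_a = 1_000_000, 910_000   # Detector A: 91.0% detection rate
    n_b, detected_b = 1_000_000, 913_000   # Detector B: 91.3% detection rate

    p_a, p_b = detected_a / n_a, detected_b / n_b

    # Two-proportion z-test (pooled standard error) for H0: the rates are equal.
    p_pool = (detected_a + detected_b) / (n_a + n_b)
    se_pooled = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pooled
    p_value = 2 * norm.sf(abs(z))          # two-sided p-value

    # 95% confidence interval for the difference in detection rates.
    se_diff = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    margin = norm.ppf(0.975) * se_diff
    ci_low, ci_high = (p_b - p_a) - margin, (p_b - p_a) + margin

    print(f"difference = {p_b - p_a:.4%}, z = {z:.2f}, p-value = {p_value:.1e}")
    print(f"95% CI for the difference: [{ci_low:.4%}, {ci_high:.4%}]")

Run as written, the 0.3 percentage-point difference comes out highly significant (p-value far below 0.05), yet the 95% confidence interval of roughly 0.22 to 0.38 percentage points describes an improvement that may be operationally unimportant, which is exactly the distinction the question is asking you to probe.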

For more red flags of bad science, see the Science or Not blog.
