reporting negative results

Two of my recent posts have reported negative results, meaning that no meaningful effects were found during the investigations. Had these investigations been framed as hypothesis tests, we would have failed to reject the null hypotheses. Sounds boring.

However, there are good reasons to report these results. The first is that negative results still generate knowledge about a system or process; for instance, you might learn that your hypothesized explanatory variables do not explain the variance in the data, which might send you back to the drawing board to look for new explanatory variables. Reporting this progress helps others move faster, since they can avoid strategies you've already tried and go straight to alternative approaches.

Second, suppose an experimental result published in a journal is significant at the 5% significance level. That means there is a one in twenty chance the null hypothesis was incorrectly rejected based on the data. Reporting additional experiments, whether in publications or on the web in blog posts, helps to mitigate the risk of having one paper's possibly erroneous conclusion bias a whole section of research. This is vital since many journals publish only positive results, so it is up to alternative channels of communication (like blogging) to complete the picture.
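To make the "one in twenty" point concrete, here is a minimal simulation sketch (all names and parameters are my own, hypothetical choices): we repeatedly run a two-sided z-test at the 5% level on data where the null hypothesis is actually true, and count how often we reject it anyway.

```python
import random
import math

random.seed(42)

n_experiments = 10_000  # number of simulated studies
n = 30                  # sample size per study

rejections = 0
for _ in range(n_experiments):
    # Null hypothesis is TRUE: sample drawn from N(0, 1), so mean really is 0
    sample = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic for the sample mean (sigma = 1 known)
    z = (sum(sample) / n) * math.sqrt(n)
    # Two-sided test at the 5% significance level
    if abs(z) > 1.96:
        rejections += 1

rate = rejections / n_experiments
print(f"False rejection rate: {rate:.3f}")
```

The printed rate hovers around 0.05: roughly one in twenty of these null-true studies still produces a "significant" result. That is the error rate that extra reports, including negative ones, help the community calibrate against.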

Finally, I use this blog to keep track of where I've been in my analyses and to help guide future ones. Among other things, I treat this blog like an enhanced lab journal, which is why I post code for most of what I do. A good lab journal keeps track of negative results, so I do the same.

Scientific inquiry often leads to dead ends; that is part of the game. Reporting such negative outcomes is part of the game as well.

Post Author: badassdatascience
