Here’s another consideration. We all live in a world driven by outcomes and productivity, and this is especially true for the academic science community. Many researchers are not enthusiastic about publishing negative findings. According to a comment in the Q&A forum on Editage Insights, negative results may be seen as poorly reflecting a researcher’s skills, which, in turn, could limit opportunities and reduce chances of recognition or grant awards. Therefore, instead of publishing negative results on their own, some scientists pad them with positive results when submitting a manuscript.
Alternatively, many investigators, including myself, prefer to pursue another way of proving our hypothesis rather than writing a manuscript that explains why the original hypothesis was wrong.
Few platforms exist for publishing negative results, as journals aren’t particularly open to them. This pattern likely stems from the inclination to publish innovative, novel findings rather than an article describing how and why the original hypothesis didn’t hold up.
Taken together, these factors may be drivers of scientific misconduct, including falsifying and fabricating data or figures to inflate apparent impact or statistical significance (PLoS ONE. 2013;8:e68397).
What Can Be Done?
To start, we could consider building a dedicated platform exclusively for publishing negative results. This approach would yield two advantages: It could help prevent other researchers in similar fields from repeating the same mistakes, and it could give authors credit for the considerable time invested in their research projects.
The National Institutes of Health (NIH) plays an important role in providing guidance as well. Principal investigators applying for NIH research grants or mentored career development awards (K-grants) are now instructed to describe, within the research strategy section, their plans to address any weaknesses in the rigor of prior research. The NIH strives to exemplify and promote the highest level of scientific integrity, public accountability, and social responsibility in the conduct of science. For the past few years, the guidelines for the rigor and reproducibility sections of NIH grant applications have been updated almost annually.
Publishing high-quality research in a journal should involve more rigorous and transparent reporting guidelines for experimental design and statistical analysis, for example, making the raw data publicly available. The journal impact factors (JIFs) of most journals, including those in our specialty, have been inflated recently by adding online publication citations to the calculations. Journals with higher JIFs should be held to stricter standards, with accurate, robust, and transparent descriptions of methods and outcomes.