All research has weaknesses: gaps (unexplored ideas), flaws (problems with study design), and limitations (factors that constrain the applicability of study findings). Most researchers discuss these weaknesses, knowing they interest readers as the basis for future studies. That is, where one research project ends, another begins to fill the gap, correct the flaw, or address the limitations, thereby furthering scientific advancement. Thus, although we want to do exceptional research, we also understand that research weaknesses are fruitful grounds for continuing research; weaknesses do not necessarily mean the research is unreliable or useless.
As an editor (and a researcher, scientific writer, and reviewer), I know all studies have weaknesses. Still, I strive to ensure that articles published in Nursing Research have limited weaknesses or at least well-explained ones. The great challenge is determining what is too weak versus what is, in fact, exceptional. At Nursing Research, we received over 500 papers in 2020, and we are on target for that number in 2021; because of page limitations, we will publish only about 60 articles yearly. Thus, reviewing research papers for high-quality work is critical.
Determining what is an exceptional paper takes effort. I look for specific characteristics when conducting my first review. The first characteristic I consider is author knowledge or, rather, the expression of that knowledge. All research begins with an idea or question; reviewers and readers appreciate interesting questions with answers that advance science. What researchers may fail to appreciate, however, is that questions that interest them are likely not new. Rather, much of what we study has actually been studied before. Of course, researchers may explore new variations on "old" questions, they may use new methods to address those questions, or they may study different people or variables. What I look for is a substantive, yet concisely written, review of existing research. Moreover, the review needs to include a critique of the rigor of prior research and evidence that the researchers intended to address the weaknesses found in prior studies.
Next are methods, perhaps the most important characteristic to consider. There are many ways to get sloppy with methods in research reports. Researchers can fail to name and follow a research design, forget to state inclusion and exclusion criteria, or be unclear about participant recruitment, enrollment, and retention. Researchers may also fail to specify a sample size or fail to describe interventions or study measures and how they performed in the sample. Researchers may fail to specify an analytic plan or to stick with a stated plan. They may fail to address normality of the data or to report missing data, withdrawals, or dropped participants. Fortunately, science can advance with a good bit of methods messiness if we know about it. Silence about method weaknesses, however, is not good; transparent reporting of methodological mishaps is essential.
One of the most concerning methods weaknesses is the failure to implement adequate bias control measures. Bias is defined as systematic error caused by incorrect research methods (Higgins et al., 2011); it can be assessed by a number of tools (Spratling & Hallas, 2021). Researchers should take all appropriate actions to minimize bias. For example, measures to control bias in clinical trials include randomization of participants to intervention and control conditions, blinding of investigators (and participants) to the intervention, and having a credible control condition. Bias in design is particularly problematic; what looks like a fishing expedition generally is. However, some of the most worrisome biases are the results of unplanned, undocumented, or unacknowledged protocol changes. The smallest variations in technique or procedure can lead to major differences in results. Thus, researchers' clarity about what they did and why is important. Reviewers and readers want to know enough to judge whether divergence from a protocol materially reduced the quality or completeness of the data or changed the results. Particularly concerning are changes that affected a participant's safety, rights, or welfare; these changes need full documentation, including a statement about how the changes were vetted through an institutional review board.
Reviewers at Nursing Research are particularly sensitive to overstated results. Researchers know to avoid the use of "trends in significance" (Nead et al., 2018), and yet I still see that statement in papers, as well as statements that extend results well beyond the characteristics of the described sample. This push to make significant what is not and to apply findings to everyone, whether appropriate or not, often results in a negative review. Researchers also need to contextualize their results within the framework of existing knowledge. This is the hardest part of writing for me, and I suspect that is true for many others. Yet, when a discussion section is not done well, a research report is weak. Spending thoughtful time writing clearly about the meaning of research results is critical to advancing science.
There are, of course, flaws that are unrecoverable, making a research paper unpublishable. For example, we cannot publish ethically questionable papers. Bad ethics is bad science. We are also unlikely to publish papers with significant conflicts of interest, particularly financial conflicts. Finally, any study involving an intervention must be preregistered on http://clinicaltrials.gov or a similar international trial registry (see https://www.hhs.gov/ohrp/international/clinical-trial-registries/index.html). Trial registration allows authors to document that analyses were determined in advance and are scientifically sound. All trials must be registered within 1 month of enrollment of the first participant; that has been the rule since 2017 (http://www.icmje.org/about-icmje/faqs/clinical-trials-registration/).
Perhaps a trickier aspect of evaluating research for high quality is theory use. We know that theory informs research in a number of ways, including providing a rationale for the study, defining the aim and research questions, considering the methodological stance, developing data collection and generation tools, and providing a framework for data analysis and interpretation (Stewart & Klein, 2016). However, is theory needed for an exceptional research report? Maybe. In 1847, before the germ theory of infection was developed, Ignaz Semmelweis suggested that by washing their hands before examinations, physicians could save the lives of many women and infants after childbirth. Semmelweis came to this conclusion after observing that physicians' wards produced much higher infection and death rates than midwives' wards; midwives were willing to wash their hands, and physicians were not. Published articles showing that handwashing reduced maternal mortality to less than 1% supported Semmelweis' advice. However, Semmelweis' advice offended physicians, and they rejected it (Best & Neuhauser, 2004). Semmelweis had no explanation for his advice about handwashing. He could describe the benefit of handwashing, but he could not explain it. He needed a theory.
At Nursing Research, we encourage authors of research reports to include an appropriate checklist of their methods. In time, these checklists will be required, and we will also ask reviewers to use checklists to guide their review. Many of these checklists can be found on the EQUATOR Network (Enhancing the Quality and Transparency of Health Research; https://www.equator-network.org/), which was created to improve the reliability and value of published health research by promoting transparent and accurate reporting of research. Specific assessment tools also exist for the critical appraisal of potential methodological flaws (e.g., Higgins et al., 2011). Critical appraisal tools can also guide the evaluation of research studies, typically to determine the best evidence for practice. Critical Appraisal Skills Programme checklists are useful for evaluating research studies, including qualitative studies and randomized trials (https://casp-uk.net/casp-tools-checklists/). These guidelines very briefly address many of the areas noted above, including questions about bias in research design, analysis, and reporting. The most frequently used tool for assessing risk of bias in randomized trials is the Cochrane risk of bias tool (Higgins et al., 2011).
Recently, the revised risk of bias assessment, RoB 2.0, was published (https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool?authuser=0). This free, online tool was designed to improve the efficiency of bias assessment, particularly in randomized trials. The assessment tool helps reviewers (and researchers) determine the bias risk in a trial. A study of RobotReviewer's performance in the appraisal of clinical trials in nursing showed favorable results; it could certainly serve as an adjunct to human review (Hirt et al., 2021).
Research provides the foundation for our discipline and practice and determines its course and value. Inaccurate findings based on poorly executed methods may lead to imprecise applications and end in further errors in scientific knowledge. Publishing high-quality, exceptional research is critical for scientific advancement and for improved outcomes for persons, families, and communities, the focus of nursing science. Although a journal editor makes the final decision to accept or reject a research paper, all of us who conduct, write about, review, read, and use science in our practice or in our own research have an obligation to ensure that the highest quality of research is published.
ORCID iDs
Rita H. Pickler https://orcid.org/0000-0001-9299-5583
REFERENCES