We don't often receive letters to the editor at JNSD. I wish we did! Receiving such communications from readers warms an editor's heart … they prove that people are reading the publication and responding to what they read. Sometimes the letters commend an author; sometimes they criticize; but they are always welcome, and this age of e-mail makes them so much easier to send. Whatever the content, the letters are taken seriously by the journal staff.
So it was with the letter to the editor from Jeff Zurlinden, MS, RN, clinical coordinator, Women's Programs and Education, Prentice Women's Hospital, Northwestern Memorial Hospital, Chicago, about two of the articles published in JNSD. Jeff said:
I'm concerned about the validity of the statistical measures used to report two articles in the July/August 2009 issue of JNSD:
"Clinical Nurse Educators' Perceptions of Research Utilization" by R. Strickland and C. O'Leary-Kelley and "Barriers to Research Utilization Among Registered Nurses Practicing in a Community Hospital" by H. Schoonover.
Both articles relied on ordinal-level data generated by a survey that asked for answers on a 1-to-4 Likert scale, upon which the investigators calculated means and standard deviations. They then ranked the responses based on those means. This seems inappropriate; I wanted to see frequency distributions and modes.
I was further confused by Table 3 in the Schoonover article. The data in the first column seem meaningless. The second column, "Reporting Item as Moderate or Great," seems like the right way to rank the individual barriers, much better than using means, although the data were further collapsed to the nominal level by combining "moderate" and "great." But I'm uncertain of the denominator. The table states that n = 79, but the text seems to imply that the denominator excludes nurses who responded "No Opinion," implying that each item had a different, unreported n. Did I misunderstand?
Overall, I commend the authors for their work. I was most interested in the ranking of individual barriers. But I would have had greater confidence in a simpler, more straightforward statistical approach. Ranking them on means based on ordinal data did not convince me that the results reflected an accurate ranking.
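The letter's central statistical point can be illustrated with a small sketch. The data below are hypothetical Likert responses invented for illustration (they are not drawn from either study): two items can share the same mean yet have very different frequency distributions and modes, which is why ranking ordinal items by mean alone can mislead.

```python
# Hypothetical 1-to-4 Likert responses (1 = to no extent ... 4 = to a great extent).
# Illustrates the letter's point: identical means can hide very different
# frequency distributions, so modes and frequencies are more informative
# for ordinal data.
from collections import Counter
from statistics import mean, mode

item_a = [1, 1, 4, 4, 4, 1, 4, 1]  # polarized responses
item_b = [2, 3, 2, 3, 2, 3, 3, 2]  # clustered responses

for name, responses in [("Item A", item_a), ("Item B", item_b)]:
    freq = dict(sorted(Counter(responses).items()))
    print(name,
          "mean =", round(mean(responses), 2),
          "mode =", mode(responses),
          "frequencies =", freq)
# Both items have a mean of 2.5, yet their distributions differ sharply.
```

A mean-based ranking would treat these two items as equivalent barriers; a frequency table would not.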
As is journal policy, the authors of the articles were contacted, provided with the concerns expressed, and asked for a response. Neither author was aware of the writer of the letter or of the other author mentioned in the letter. The authors sent their responses as follows:
"Clinical Nurse Educators' Perceptions of Research Utilization" by R. Strickland and C. O'Leary-Kelley. Rosemary Strickland responded:
Hundreds of research studies have incorporated the BARRIERS to Research Utilization Scale and have followed the instructions of the author of the tool when scoring the results. To be consistent, the study investigators followed this process exactly to calculate the mean and standard deviation for each BARRIERS item, loaded each item to its specific factor category, and then calculated each factor mean according to the scoring instructions. This allowed for a more consistent listing of factor means and is comparable with other studies. By listing each barrier item with the number of responses, the mean, and the standard deviation, the perceived extent of each barrier is clear (1 = to no extent, 2 = to a little extent, 3 = to a moderate extent, and 4 = to a great extent). Prior to conducting the study, we performed an exhaustive literature review, of which only a sampling was cited in this study. It is true that there are alternative ways to present the data statistically; however, the methodology used in this peer-reviewed study was believed to be consistent with the formation of the factor data, was statistically sound, was comparable to other studies, and represented well the response to each barrier item.
"Barriers to Research Utilization among Registered Nurses Practicing in a Community Hospital" by H. Schoonover. Heather Schoonover responded:
Thank you for taking the time to read the article and provide your thoughtful feedback. Information about the validity and scoring of the Barriers tool can be found on the original creator's website, http://www.unc.edu/depts/rsc/funk/barrier1.html.
Regarding Table 3, I did not eliminate those nurses who replied "no opinion" from the rank ordering of the means. I could have been clearer about this. Your feedback regarding frequency distributions and modes is appreciated, and I will consider it for future work regarding barriers to research utilization/evidence-based practice.
The letter to the editor and responses to the letter to the editor are generally published in a separate section of the journal, but I thought it might be instructive for readers to learn about the process for handling these communications. That process is followed with all letters to the editor, whether complimentary or critical.
Rest assured that your letters to the editor will receive the same careful handling as the one described above. Do consider writing one; we'd love to hear from you!