In every good seed there is always a piece of bad.
I once heard this quote from Marian Edelman and thought, "Wow, that is a downer!" Today, however, I return to it as I think about Artificial Intelligence (AI) and the potential harm it poses to advancing nursing science.
Artificial intelligence is the generation of information by machines rather than by humans. Machines (computers) draw on information that already exists and then create something new from it. Areas of AI use we are already familiar with include voice recognition, translation applications, automobiles, smartphones and televisions, and so on. Artificial intelligence is not new; it has been around for decades and has become an important feature in areas such as marketing, customer support, and weather alerts. There are also bots that carry malware and other malicious content. This brings me back to the Edelman quote: when the accuracy of scientific manuscripts is placed in jeopardy, steps must be taken to intervene.
Many discussions are occurring about when a bot may be useful. Even academic institutions are having conversations about the use of bots in academia to "jump-start" an idea. It is clear that bots will not go away, so learning how to use them without compromising the literature is imperative.
At our recent Orthopaedic Nursing editorial board meeting, I wanted to have a discussion on ChatGPT and other AI bots to help us understand what they are and what they mean for scholarly work. Our Senior Content and Publishing Associate, Jonathan Kemmerer-Scovner, took it upon himself to create a short manuscript using ChatGPT for demonstration purposes. By entering a few terms into ChatGPT, he was able to create a document that would make most editors "cringe." It was shocking! I was grateful that he demonstrated this for our board members to deepen our understanding of the topic. To be honest, on first read I did not pick up on the fact that it was written by a bot. Since that meeting, I have spent considerable time learning about this topic and how to identify potential issues.
The use of AI in publishing came to a head in November 2022 with the launch of ChatGPT, which was developed by OpenAI. There was a tremendous amount of interest from authors, students, and others. For editors and publishers, the dilemma is now determining who exactly is the author of a manuscript. Did a bot create the document, or was the information created by a person who can vouch for its content? How will an editor know? For editors of scholarly medical journals, the concern is whether the information is accurate and, if it is not, what harm to patients could result. What we do know is that a bot cannot be responsible for the accuracy of a document it created. This is where the ethics of the publishing process must be held to the highest standard, and the author must be able to attest to the authenticity of their work.
Platform creators such as Turnitin are developing tools to identify documents created by AI. Earlier this year, a new application called GPTZero was launched to do the same.
Revised Author Guidelines have been created to clarify for authors that the use of AI in the preparation of manuscripts for Orthopaedic Nursing will not be allowed. Bots such as ChatGPT may not be used in the final written preparation of a manuscript. Authors will need to certify that AI was not used in the preparation of the data, the development of ideas for the manuscript, or the writing of the manuscript itself. We will also be instituting an authentication process at the time every manuscript is submitted. We are committed to maintaining the integrity of Orthopaedic Nursing by ensuring that its content is of the highest quality.
Resource