INANE Conference: Tuesday PM 8/5/25

Speakers:
- Cindy Munro, co-editor-in-chief of the American Journal of Critical Care, and Emeritus Dean at the University of Miami School of Nursing and Health Studies
- Mary Fran Tracy, PhD, RN, APRN, CCNS, FCNS, FAAN: Assistant Dean for the PhD Program and Director of Graduate Studies and Associate Professor at the University of Minnesota School of Nursing. Editor in Chief of AACN Advanced Critical Care
- Annette Bourgault, PhD, RN, CNL, FAAN: Associate professor and PhD program director, University of Central Florida, College of Nursing, Editor in Chief of Critical Care Nurse
Abstract: In the evolving landscape of publishing, editors of nursing journals face a myriad of complex challenges. In many instances, clear guidelines are available or consensus exists in the editorial community. A conundrum (defined as a confusing and difficult problem or question) may arise when guidelines are lacking or evolving, and when the editorial community has not reached agreement on a particular issue. By examining prominent editorial conundrums, this presentation aims to illuminate underlying principles, provide insights, and foster discussion among attendees on developing best practices and guidelines. Ultimately, it seeks to equip editors with strategies to navigate these challenges, build consensus where possible, and ensure the integrity and advancement of nursing journals.
Summary: In advance of the meeting, the presenters solicited examples of conundrums through email to the INANE listserve. Conundrums were then selected from listserve suggestions and the presenters’ experiences and discussed in the session.
“Conundrum: a confusing and difficult problem or question.”
CONUNDRUM #1: AI HALLUCINATIONS
- Journal policy requires authors to disclose the use of AI and, if AI was used, to fact-check the content.
- In this conundrum, the authors stated that AI was used only for minor grammar and word-choice edits, but after publication, three references were found to cite non-existent studies.
Dr. Tracy asks: To what extent are editors seeing use of AI in manuscript preparation?
- One editor mentioned encountering this exact scenario (non-existent studies in the reference list) and asked what she should do about it.
- A peer reviewer spoke about finding hallucinated references and the pressure to check every single reference as the peer reviewer. Whose job should that be?
- Another editor distinguished between using AI for grammar and syntax (which she is not particularly concerned about) and using generative AI to build content, format text, or create references.
- Another spoke about seeing manuscripts with paragraphs full of data but no citations… is it AI generated?
- Another spoke about teaching students and having them sign academic integrity paperwork, and asked: can we do the same thing for authors? If they attest that they are submitting original work and that turns out to be untrue, that’s it: one strike and you’re out.
- Another asked what we should do when we identify AI use that was not disclosed. She personally emails the author to raise her concerns rather than writing back and accusing the author of unethical behavior.
- Another editor has a managing editor who checks all the references for her, rather than leaving it to the peer reviewers.
- What do you do if someone cites an article that does exist… but the data isn’t actually from that cited reference?
- Another editor uses AI-detection tools to identify AI-generated text, but this technology has limitations. So what do you do? It really depends on the situation: you could email the author or reject the paper outright. But what is best practice? Many English language learners use AI to improve syntax and grammar, and that is an appropriate use, so we shouldn’t restrict it. The upcoming generation is going to use AI; we need to acknowledge that they are doing it and teach them to use their own thoughts and skills to make their work better.
- Another editor says we need to reframe what “using AI” means. We all use AI. He points out that there is a difference between AI and generative AI. AI is not inherently bad. We need to reframe the discussion.
- Someone else suggested thinking about cross-reference checking.
- Another speaker spoke about authors using ChatGPT to analyze qualitative data. But the question is, how much of our expertise is lost when we delegate this type of analysis?
- Dr. Bourgault encountered a similar situation, and she used ChatGPT to identify the erroneous AI-generated references. However, we have to be careful about inputting anything into AI tools and make sure the information entered will not be used to train the model.
The Outcome: Dr. Tracy said that in this particular conundrum, the authors had put their paper into an AI grammar checker, and because the references were listed in that document, the AI program took the liberty of adding its own references!
——————————————————
CONUNDRUM #2: A journal receives multiple submissions from one author in one month. All are based on data from a large public clinical data repository. They appear to have used the same template. The first submission was desk rejected. Three more followed in the next two weeks. What is the appropriate response?
- Dr. Munro encountered this situation.
- Since this happened, she has read an article reporting that NHANES data has been used in the same way to create these “plug-and-play” papers. The New York Times also ran an article about a paper examining questionable publications and how they are growing exponentially.
Question: What is the appropriate response to this??
- One editor had an author send her five studies in one month. She then looked in her editorial management system because this was a red flag. Her editorial decision was that she will not accept any papers from that author, because she can no longer trust that the studies are genuine.
- Another editor said that she automatically rejects any subsequent paper from the same author on the same topic after even one rejection.
- Another editor said she reads manuscripts many times: before peer review, after peer review, and after revision. Even so, it is hard to realize that someone is submitting multiple similar papers, because it’s hard to keep track in her head, and she asked for advice on how to stay organized. Dr. Munro answered that in this situation, the papers were submitted so close together that it was very obvious. You just have to be really careful, she said, and she is extra cautious about MIMIC-IV papers.
- A qualitative researcher spoke about having developed “that sense of a pattern.” She keeps a file of downloaded articles so that she can go back to them. What she has found is that the same authors will pick two different topics but use the same template, and because of her filing system, she can pull the old studies and notice possible unethical behavior.
- Another person recommended asking your publisher whether they have tools you can use to help weed out these kinds of submissions (such as paper-mill papers that share the same template).
- A managing editor said that when she checks papers in, she is alerted to these activities through ScholarOne. She then alerts the editor to the “perceived problem,” because that is for the content experts to decide.
- Someone asked what feedback to give authors who do this. Dr. Munro said that she just sends the standard desk-rejection email; however, she worries about where those papers go next. Someone else asked whether there is a way to send a rejection letter that is explicit about the paper being rejected for ethical issues.
The Outcome: Dr. Munro said that in this situation, she simply rejected all these papers.
——————————————————
CONUNDRUM #3: STUDENT LETTERS TO THE EDITOR. A journal receives multiple (twenty!) letters to the editor within one week. All commented positively on articles from the journal. All appeared to have used the same template for their letters. All of the authors were from the same institution, and their cover letters said it was for a class assignment. The letters were also in APA format, which was not the journal’s style.
Question: What do we do about this?
- Respond to the students that there is no way the journal can publish these letters, but that we appreciate them reading the journal and writing.
- Someone else said to contact the faculty member directly and suggest that next time the students peer review each other’s work and submit only the best letter as a class.
- Someone else said to write to the faculty member and explain that this shouldn’t be done.
- Someone else said that this practice needs to be stopped, that faculty should not require this type of practice for a class, and that it unnecessarily bogs down the journal staff.
- One person did reach out to a faculty member, who said she was “too busy to speak to the editor!” So the editor instead emailed each student individually to encourage them to keep reading the journal, while explaining that their letter would not be published.
The Outcome: Dr. Bourgault said that she reached out to the faculty member and asked for a Zoom meeting (she has done this a few times over several years) and explained that the faculty member was setting the students up to fail by having so many submissions sent to the same journal on similar topics. She encourages faculty to discuss journal due diligence with their class, including which journals they might consider submitting to. She also asked the faculty member to become a peer reviewer, and it worked!
Your INANE 2025 reporter is Melissa Anne DuBois, BSN, RNC-OB, PhD Student.
Content for this post was obtained from the INANE 2025 website, the conference guidebook, internet searches, speaker submitted bios, and live reporting from each session. Any errors in content are purely accidental and not intended to offend. If you notice an error you would like corrected, please contact Melissa Anne at melissadubois2 at gmail dot com and she will be happy to make corrections.
