Misuse of Methods in Academic Publishing: A Growing Concern

Roger Watson, BSc, PhD, FRCP Edin, FAAN

Mark Hayter, RN, BA(Hons), MMedSci, PhD

Writer’s Camp Counselors


Nursing editors, reviewers, and academics should challenge method misuse.


As Editors-in-Chief of two leading nursing journals, we have noticed a troubling trend, especially within the last three years. Among the thousands of submissions we receive is a steady flow of manuscripts reporting studies that are methodologically rich and supported by complex statistical procedures with striking diagrams, yet contribute little to our field. We regularly exchange notes on this pattern, and we have written several editorials outlining our concerns about specific methods. For example, our concern about the overuse and misuse of methods led us to co-publish an editorial1 in Nurse Education in Practice (edited by RW) and Journal of Clinical Nursing (edited by MH).

Specifically, this editorial discussed studies that use variable, latent cluster, and bibliometric analyses.1 More recently, we have observed a similar pattern in manuscripts employing network analyses. All these methods are legitimate and can generate valuable insights, but they must be used in the right context. The extent to which results are meaningful depends on the theoretical underpinnings of the study and the genuine need for the research question. As the adage goes: put garbage in, and you are likely to get garbage out. In this article, we explore just that: the misuse of the methods we’ve identified (variable, latent cluster, bibliometric, and network analyses) in nursing publications. We’ll also provide some ideas to address this growing problem.

Misuse of Methods

The misused methods have certain features in common: they are multivariate, they can be illustrated through sophisticated diagrams, and they are supported by user-friendly software packages. In fact, the availability of such software has likely contributed to studies that look technically advanced but offer little substance. Instead of beginning with a substantive problem and clear research question, authors start with a statistical package and then hunt for data to run through it. We have described this pattern as the emergence of “methods in search of a research question.” The outcome is predictable: superficial results, repetition across studies, and the erosion of originality.

Another issue lies in the relentless “publish or perish” mentality that characterizes academic life in many parts of the world. In some regions, the sheer volume of publications, provided they appear in impact-factor journals, overrides considerations of originality, rigour, and contribution. The volume of manuscripts we receive reflects this culture: publishable outputs are pursued at all costs, even if the research questions are weak and the results are trivial.

Additionally, our concern about the culture of method misuse extends to the behaviour of some authors. We sometimes receive misguided questions: authors ask about “research hot spots,” the most publishable designs, or whether their paper will be accepted before submission. Other authors treat editors like research supervisors, seeking advice on study design or even offering co-authorship in exchange for editorial help.

We find these practices deeply problematic. Editors are not gatekeepers to be manipulated, nor are they responsible for teaching authors how to conduct research. Rejection is a normal part of academic life, not something that can be negotiated away. When authors attempt to bypass the rigours of scholarship through such shortcuts, they undermine both their own credibility and the integrity of publishing.

Misused methods leave editors and reviewers overloaded with studies that add little knowledge to the discipline, cluttering the literature with noise rather than signal. This harms the nursing discipline and, ultimately, the outcomes it seeks to improve.

Variable Analyses

Now that you have a general overview of the issues, let’s look at one of the commonly misused methods in more detail: variable analyses.2 One of our recurring concerns relates to the proliferation of studies built on multivariate path analyses framed around mediating variables. At their best, such approaches can clarify mechanisms and identify how one factor influences another through intermediate variables. Yet too often, the manuscripts we see appear to be methods in search of a question rather than researchers seeking to answer one. Authors appear to take advantage of large datasets, sometimes secondary in nature, running endless models until they find a configuration that yields a statistically significant pathway.

This practice carries a high risk of Type I errors, producing findings that are misleading rather than substantive. While there are examples where mediation analyses are grounded in strong theory and yield meaningful insights, these are the exception. More commonly, we see papers that reflect the pursuit of quantity over quality, designed to tick boxes in a performance evaluation rather than to advance knowledge. Our message has been consistent: editors and reviewers should look carefully at whether a manuscript is anchored in theory and genuine research questions, and reject those that are not.

Latent Cluster Analyses

Next, let’s explore another commonly misused method: latent cluster analyses.3 This method has become popular in recent years and can be valuable for uncovering patterns within data that are not immediately obvious. However, the way it is typically used in nursing research raises concern. A familiar pattern recurs: a large sample completes a questionnaire, clusters of high, medium, and low scorers inevitably emerge, regression analyses follow, and the article ends with a superficial narrative.

This formula produces results that are both predictable and trivial. Worse, it misapplies the method, which is intended to reveal meaningful and novel groupings rather than to generate obvious categories that could be detected by inspecting raw data. In our view, this approach not only adds little to knowledge but, when pursued systematically for the sake of publication, comes close to research misconduct. We have cautioned authors against this misuse and signalled our intent, as editors, to reject such submissions.

Bibliometric Analyses

We have also critiqued the rising tide of bibliometric analyses.1 At first glance, bibliometric analyses seem impressive: colourful maps of co-authorship, citation patterns, or keyword networks can be generated quickly and presented attractively. But beneath the surface, many such papers are atheoretical, descriptive, and repetitive. The novelty of such analyses has worn off, leaving behind a flood of manuscripts that add little to nursing knowledge.

This is not to say the method is without value. Bibliometric analyses can illuminate collaboration patterns across countries, identify neglected research areas, or assess the contribution of nursing scholarship to broader fields. Yet these benefits arise only when the analyses are guided by substantive questions. When conducted simply because software makes them easy, bibliometric analyses become another vehicle for an easy publication. Our position is that bibliometric analyses are worthwhile only if they are tied to meaningful, theoretically informed questions.

Network Analyses

More recently, we have turned our attention to network analyses, which are increasingly appearing in nursing research. Network analyses can be powerful, especially in fields where relationships and influence are central, such as epidemiology, informatics, or sociology. In the right context, they can reveal hidden structures and offer actionable insights.

Yet many of the manuscripts we receive use network analyses simply to produce visually striking diagrams. Variables, authors, or concepts are connected in elaborate webs, but the underlying research question is either absent or trivial. The temptation to “let the software speak” results in networks that are open to multiple interpretations without clear conclusions. Such studies illustrate once again the problem of methods being used for their own sake, rather than as tools to address genuine scholarly questions.

Addressing the Problem

We’ve discussed the misuse of methods, both in general and as it applies to the specific approaches we identified: variable, latent cluster, bibliometric, and network analyses. Although we have published editorials on these issues, we know that change can be hard. We understand the incentives that drive academic publishing, especially in environments where volume is prized over quality. Nevertheless, we believe editors, reviewers, and academic leaders play a role in pushing back against this problem.

First, editors and reviewers should be vigilant for these issues as they screen manuscripts. Submissions that lack theoretical underpinnings, clear research questions, or substantive contributions should not make it through peer review, no matter how sophisticated the methods appear.

Second, academic leaders should reconsider incentive systems that reward sheer output rather than meaningful impact. So long as output remains the metric of success, the temptation to misuse methods will persist. Nursing academics should partner with institutional leadership to revise these practices, for example by changing promotion and tenure criteria. Champions can help outline alternative practices that safeguard the discipline.

Finally, we suggest that editors, reviewers, and academic leaders offer further education on the misuse of methods. We ourselves deliver workshops and seminars on academic publishing, and we incorporate messages such as those outlined here: that methods are tools, not ends in themselves; that theory and research questions matter; and that integrity is more important than volume. These messages can be offered as standalone presentations or incorporated into existing teaching and outreach.

Conclusion

The misuse of methods in academic publishing is not a trivial matter. It wastes resources, undermines trust, and clutters the literature with work that looks impressive but adds little. The methods themselves are not the problem. The problem lies in how they are deployed, often in the absence of theory, originality, or necessity.

As editors, we will continue to challenge this culture. We call on our colleagues—authors, reviewers, and academic leaders—to do the same. Only by resisting the lure of easy publications and reaffirming the primacy of good questions, sound design, and theoretical integrity can we ensure that our field produces research that truly advances knowledge and improves outcomes.

Declaration

The generative AI package ChatGPT was used, with oversight, to summarize existing work and to check the accuracy and coherence of the manuscript.

References

  1. Watson R, Hayter M. Bibliometric analyses: Do they contribute to knowledge? Journal of Clinical Nursing. 2025; 34(4): 1101-1102. doi:10.1111/jocn.17423
  2. Watson R, Hayter M. How many studies of mediating factors is enough? Nurse Education in Practice. 2025; 82: 104178. doi:10.1111/jocn.17570
  3. Watson R, Hayter M. Latent cluster analysis: Another method in search of a research question? Nurse Education in Practice. 2025; 82: 104219. doi:10.1016/j.nepr.2024.104219

Authors: Roger Watson and Mark Hayter.

Reviewed and Edited by: Jenny Chicca

Copyright © 2026 Writer’s Camp and Roger Watson and Mark Hayter. CC-BY-ND 4.0

Citation: Watson R, Hayter M. Misuse of methods in academic publishing: A growing concern. The Writer’s Camp Journal. 2026; 2(1):2. doi:10.5281/zenodo.18039465
