The Transformative Impact of Generative AI on Academic Writing

By Roger Watson, BSc PhD FRCP Edin FAAN

Writer’s Camp Counselor


Artificial intelligence (AI) is now an inescapable fact of life.


This is not because it exists – people have been working on it and with it for decades – but because it is now so widely and easily available.

Until the middle of 2022, few outside academic or science-fiction circles had heard of AI. Fewer still had ever used it, or realized they were using it when search engines such as Google employed it. But one of those rare events – call it a ‘step change’ or a ‘singularity’ – took place on 30 November 2022. The tech company OpenAI launched ChatGPT and the world, certainly with respect to AI, changed.

ChatGPT was a huge and instant success due to its availability on a range of devices, from desktops to smartphones. It was free to use for certain capabilities, its interface was user-friendly, requiring only text prompts written in plain English, and its capabilities ranged from answering simple questions (e.g. “What will the weather be like today?”) to drafting responses to letters and emails and performing complex calculations almost instantly.

Those willing to pay a subscription could upload documents to the platform and ask for summaries of articles, analysis of spreadsheets, the generation of images, and many more functions limited only by the imagination of the user. Many use it to plan anniversaries, build travel itineraries, or find recipes.

Generative AI

Whereas search engines and commercial sites such as Amazon used AI in the form of machine learning, both to learn about the user and to improve their functions, ChatGPT offered something new: generative AI. Generative AI, while still capable of retrieving information and improving its functions through machine learning, also offered the generation of text.

Thus, people use it to write policy documents and strategies and the responses provided are text-based. ChatGPT can also alter the tone of what it writes from, for example, conversational to formal. With a subscription, the results of ChatGPT sessions can be saved as Word documents, PDFs, Excel files, or PowerPoints.

A wide range of generative AI platforms now exists, and some, such as Gemini, Grok, and Meta AI, are integrated into other platforms: Google, X (formerly Twitter), and WhatsApp, respectively. Soon after the launch of ChatGPT, on 7 February 2023, Microsoft released a free standalone AI platform, Microsoft Copilot, which offers a wider range of facilities to those with a Microsoft subscription. The most recent such platform to be launched, in China and with considerable media attention, was DeepSeek.

AI as an Aid to Writing

The range of potential uses of generative AI in writing is wide. With appropriate prompting (the instruction you type in), it can do almost anything, from checking a piece of writing for typographical and grammatical errors to improving its flow and overall structure. The result can be either a list of points or a rewritten document.

For writers whose first language is not English, all of the above are very useful features. Generative AI is also very good at translating: a prompt can mix languages, and the output can be produced in either one or both of the languages used.

Generative AI can also produce whole documents or sections of documents, such as academic manuscripts, either de novo or in response to an uploaded document. In this case a prompt to “write me an introduction and discussion” could be used, having uploaded the methods and results sections of a study. Alternatively, generative AI can write a complete manuscript if prompted. Moreover, it can write a manuscript formatted to the style of a particular journal. The results are often remarkably plausible.

Clearly, there is a spectrum of uses, from what is no more than an editor could provide through to what an experienced academic could provide. In parallel, as should be obvious, there is a spectrum of acceptability in these uses of generative AI. In addition to the challenges AI poses in academia generally regarding its use by students, it is also proving to be a challenge for editors and publishers of academic journals.

The Publishing Challenge Posed by Generative AI

It is fair to say that generative AI caught the academic publishing industry by surprise. Early in 2023 reviewers, editors, and publishers began to realize, not only the potential of generative AI, but that it was clearly being used by authors. The problem was that, given the difficulties of detecting the use of generative AI, it was not clear by whom and to what extent it was being used.

Suspicion that AI was being commonly used to generate text in manuscripts or, indeed, whole manuscripts, arose because of detectable changes in the use of language in manuscripts. Generative AI tends not to ‘spare the blushes’ of the author and tends to use terms that are superlative (e.g. ‘the most important’) and evaluative (e.g. ‘this meticulous study’) and to use them often. One study detected an 83.5% increase in at least one such term and a 468.4% increase in more than one such term in academic articles published in 2023 compared with 2022.

The Publishing Response to Generative AI

The responses of the academic publishing industry, mirroring those of academia, ranged from the draconian to the liberal. I participated in several publishing forums in 2023; at one end of the spectrum, people said the use of generative AI should be banned completely while, at the other, people said some accommodation was possible. The major problem, alluded to above, for those wishing to ban generative AI is that its use is virtually undetectable, and careful use – for example, editing its outputs so that they do not read like AI-generated text – can make it almost invisible.

The same problem exists for those proposing more liberal use. Inappropriate use of AI would still be undetectable, and it would be very hard to know in which manuscripts authors had ‘crossed the line’ from acceptable to unacceptable practices. A compromise had to be reached.

The compromise, as demonstrated by two major academic publishers of nursing journals – Wiley and Elsevier – has been to incorporate statements into their author guidelines stating what the limits of the use of generative AI are and asking authors to declare the extent to which they may have used generative AI. The statements are similar in that they say that generative AI should only be used to improve already written material and not to generate manuscripts. Use of AI for any such purposes must have human oversight. Authors are asked to declare which AI platform was used; what it was used for; and that there was human oversight of the submitted material.

Naturally, this is not a perfect solution. Authors are already asked to make declarations with respect to the ethical conduct of their research and aspects of publication such as copyright and conflicts of interest; nevertheless, the number of academic articles being retracted continues to increase. But declarations regarding the use of generative AI do throw the responsibility for its appropriate use back onto the author. It is possible that the use of AI in academic writing will become more detectable and, of course, that detection could be applied retrospectively. This may behove authors to be honest in their declarations.

Generative AI and Ethics

Several ethical issues arise in the use of AI and these are attracting increasing attention. Websites such as The Scholarly Kitchen, Retraction Watch, The LSE Blog, and The Source regularly publish articles on the use (and misuse) of generative AI in academic writing.

The ethical issues include the question of who owns the material used by generative AI platforms: not only whether the platform has some claim to it, but also what sources the platform drew on to obtain the information used in writing an article. Uploading an article to a generative AI platform for suggested improvements means that the information contained in the article is then stored and used by the platform. This raises a potential copyright issue for journals to which the manuscript is submitted.

With respect to specific aspects of publication ethics such as plagiarism, fabrication and falsification, generative AI raises problems in each of these areas. Using material which a platform has stored could be considered plagiarism; using generative AI to create a manuscript could be considered fabrication; and use to improve writing could verge on falsification. These issues remain to be fully considered and resolved.

Conclusion

The rise of generative AI has introduced both unprecedented opportunities and profound challenges for academic writing. On one hand, its capabilities as a linguistic assistant, translator, summariser, and stylistic enhancer are undeniable — especially for those working in second languages or seeking efficiency in an increasingly pressurised publishing environment. On the other hand, its potential for misuse, lack of transparency, and ethical ambiguity raises legitimate concerns for researchers, editors, and publishers alike. As policies and detection tools catch up, it will be essential for the academic community to strike a balance between innovation and integrity. Ultimately, the responsible use of AI in academic writing will depend not only on technological regulation, but on the willingness of researchers to uphold the values of authorship, originality, and accountability in a changing scholarly landscape.

Declaration

ChatGPT was used to check the article for typographical and grammatical errors and to generate the Conclusion…with human oversight.

© 2025 by Roger Watson and Writer’s Camp. This work is licensed under CC BY-ND 4.0 

Author: Roger Watson

Editor: Leslie H. Nicoll

Citation: Watson, R. The Transformative Impact of Generative AI on Academic Writing. The Writer’s Camp Journal, 2025, 1(1), 4. https://doi.org/10.5281/zenodo.15366886
