Today the Open Access journal Health Prospect published our paper ‘ChatGPT: Challenges to editors and examiners’ [1]. The past year saw exponential growth in the use of artificial intelligence (AI), and particularly of Generative AI (GenAI) tools such as ChatGPT. The latter has featured prominently in public debate and in the mass media. Those not involved in the development of AI have been amazed by ChatGPT’s ability to produce text comparable to that written by the average human. There is no doubt that the adoption of AI is advancing rapidly.
To test the ability of the free version of ChatGPT, we posed simple questions about migrant workers in Nepal, a topic we have published on widely. After reading the short essay ChatGPT produced in response, we repeated the question whilst asking for references to be included. We were surprised by the quality of this very general piece of work. In many UK universities, including Bournemouth University, there is a debate about students’ use of ChatGPT. We all recognise how difficult it is to distinguish between work produced by the average student and that produced by AI. Editors and reviewers of academic journals face a similar problem. It really boils down to the question: ‘How can you be certain the submitted manuscript came from a human source?’ However, we feel the progress of AI is not all doom and gloom. The paper outlines some of the key problems AI poses for academic publishing, but also the opportunities arising from its use in this area.
The authors of this paper are based at Bournemouth University, the University of Strathclyde, and the University of Huddersfield.
Reference:
- Simkhada, B., van Teijlingen, A., Simkhada, P., & van Teijlingen, E. R. (2024). ChatGPT: Challenges to editors and examiners. Health Prospect, 23(1), 21–24. https://doi.org/10.3126/hprospect.v23i1.60819