Academic-style content produced by ChatGPT is relatively formulaic and would be picked up by many existing AI-detection tools, despite being more sophisticated than the output of earlier text-generation systems, according to a new study.
However, the findings should serve as a wake-up call for university staff to think about how to educate students on academic honesty and minimise academic dishonesty, said researchers from Plymouth Marjon University and the University of Plymouth, UK.
ChatGPT, a Large Language Model (LLM) touted as having the potential to revolutionise research and education, has also prompted concerns across the education sector about academic honesty and plagiarism.
To address some of these concerns, the study prompted ChatGPT to produce content written in an academic style through a series of prompts and questions.
Some of these included “Write an original academic paper, with references, describing the implications of GPT-3 for assessment in higher education”, “How can academics prevent students plagiarising using GPT-3” and “Produce several witty and intelligent titles for an academic research paper on the challenges universities face in ChatGPT and plagiarism”, the study said.
The text thus generated was pasted into a manuscript and ordered broadly following the structure suggested by ChatGPT. Genuine references were then inserted throughout, according to the study published in the journal Innovations in Education and Teaching International.
This process was revealed to readers only in the academic paper’s discussion section, written directly by the researchers without the software’s input.
Launched in November 2022, ChatGPT is the latest chatbot and artificial intelligence (AI) platform, and it has the potential to create exciting new opportunities in academia.
However, as it grows more advanced, it poses significant challenges for the academic community.
“This latest AI development obviously brings huge challenges for universities, not least in testing student knowledge and teaching writing skills – but looking positively it is an opportunity for us to rethink what we want students to learn and why.
“I’d like to think that AI would enable us to automate some of the more administrative tasks academics do, allowing more time to be spent working with students,” said the study’s lead author Debby Cotton, a professor at Plymouth Marjon University.
“Banning ChatGPT, as was done within New York schools, can only be a short-term solution while we think how to address the issues.
“AI is already widely accessible to students outside their institutions, and companies like Microsoft and Google are rapidly incorporating it into search engines and Office suites.
“The chat (sic) is already out of the bag, and the challenge for universities will be to adapt to a paradigm where the use of AI is the expected norm,” said corresponding author Peter Cotton, an associate professor at the University of Plymouth.