Universities Confront the Challenge of ChatGPT Academic Dishonesty
I appreciate your coverage of the AI cheating scandal in UK universities, but calling it merely the “tip of the iceberg” understates the problem. Freedom of information requests reveal which universities are identifying instances of AI-related cheating; the greater issue lies with the institutions that are failing to address it at all.
In 2023, Turnitin, a widely used assessment platform, introduced an AI-detection indicator, reporting high reliability based on extensive testing. Yet many universities chose not to use the indicator without ever putting it to the test. Although concerns about excessive “false positives” were raised, independent studies have refuted them (Weber-Wulff et al. 2023; Walters 2023; Perkins et al. 2024).
The underlying motivation may lie with institutions that depend on high tuition fees from international students: see no cheating, hear no cheating, lose no revenue. These political dynamics within higher education are fuelling a scandal of dubious degree awards and a widespread decline in graduate competency. Universities like mine that remain committed to rigorous assessment bear real costs in doing so, but the costs of neglecting that responsibility will ultimately be far higher.
If our pilots could not fly planes, or our surgeons lacked essential knowledge, we would be rightly alarmed. We naturally expect our lawyers, teachers, engineers, nurses, accountants and social workers to possess genuine expertise and skills as well.
A transformative shift is under way in the sector, as some universities return to traditional examinations, often dismissed as outdated or rote-focused, which effectively test what students can actually do. Institutions reluctant to abandon their convenient but flawed assessment practices may eventually have to justify themselves at a public inquiry.