Artificial intelligence has left teachers and academics around the world with a puzzle: how do you check whether a student completed an assignment independently? Careless cheating is often easy to catch, especially when the AI starts hallucinating (that is, producing meaningless or made-up content), but a significant share of coherent work remains whose true origin is hard to determine at first glance.
A critical review of tools for detecting AI-generated text shows that they are not good enough. Worse, they are easy to fool with little extra effort, which is a serious problem for anyone trying to verify authorship.
A group of researchers led by Debora Weber-Wulff (HTW Berlin – University of Applied Sciences) set out to test the effectiveness of programs that claim to detect AI-generated writing, evaluating fourteen such tools. Most of them work by looking for characteristic features of AI-generated text (for example, specific kinds of repetition) and then computing a probability of AI authorship from those features.
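To make the general idea concrete, here is a minimal toy sketch of such a feature-based approach: it scores how often word n-grams repeat in a text and maps that score to a crude "probability of AI authorship". This is purely illustrative and assumed for the sake of the example; it is not the method used by any of the fourteen tools the researchers tested, and real detectors rely on far richer signals.

```python
from collections import Counter


def ngram_repetition_score(text: str, n: int = 3) -> float:
    """Fraction of word n-grams that occur more than once in the text.

    A toy surface feature: heavy repetition of phrasing is one of the
    signals detectors are said to look for.  Illustrative only.
    """
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    # Count every n-gram occurrence that belongs to a repeated n-gram.
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)


def toy_ai_probability(text: str, threshold: float = 0.1) -> float:
    """Map the repetition score to a crude probability-like value in [0, 1].

    The threshold is an arbitrary, made-up calibration constant.
    """
    score = ngram_repetition_score(text)
    return min(1.0, score / threshold) if threshold > 0 else 0.0
```

A highly repetitive text scores near 1.0 while varied prose scores near 0.0, which also hints at why such detectors are fragile: a paraphrase that breaks up the repeated surface patterns destroys exactly the feature the score depends on.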
And the result? For text copied directly from ChatGPT, the tools are reasonably effective. However, even a small modification is enough for the work to pass as original: a light paraphrase at the grammatical level (including the use of AI paraphrasing tools such as Quillbot) drops detection accuracy from 74% to 42%, which effectively rules out proving dishonesty. Interestingly, human-written texts were correctly classified 96% of the time on average. For now, it seems, dishonest students can sleep soundly.
Source: MIT Technology Review