OpenAI, the company behind the AI chatbot ChatGPT, has released a tool for determining whether a text was written by AI. However, the research lab warns that the tool is not perfect.
In a recent blog post, OpenAI introduced the classifier, a tool designed to distinguish text written by humans from text generated by a range of AI systems, not just ChatGPT.
According to OpenAI researchers, reliably identifying all AI-written text is difficult, but effective classifiers can pick up on certain telltale signs. The tool could help detect academic cheating involving AI, as well as cases where AI chatbots impersonate humans.
They did admit, however, that the classifier has limitations: in OpenAI's evaluations it correctly identified only 26% of AI-written English texts, while incorrectly labelling 9% of human-written texts as AI-generated.
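To illustrate what those two rates mean in practice, here is a hedged Python sketch. The 26% true-positive rate and 9% false-positive rate come from OpenAI's reported figures; the 1,000-document corpus and its even split are hypothetical assumptions chosen purely for the arithmetic:

```python
# Hypothetical corpus: 500 AI-written and 500 human-written texts
# (the corpus size and split are assumptions, not OpenAI data).
ai_texts, human_texts = 500, 500

true_positive_rate = 0.26   # share of AI text correctly flagged (OpenAI's figure)
false_positive_rate = 0.09  # share of human text wrongly flagged (OpenAI's figure)

flagged_ai = ai_texts * true_positive_rate         # 130 AI texts caught
flagged_human = human_texts * false_positive_rate  # 45 human texts wrongly flagged

# Of everything the classifier flags, what share is actually AI-written?
precision = flagged_ai / (flagged_ai + flagged_human)
print(f"Flagged: {flagged_ai + flagged_human:.0f}, precision: {precision:.0%}")
# → Flagged: 175, precision: 74%
```

Under these assumptions, roughly a quarter of flagged texts would be false accusations, which underlines why OpenAI cautions against treating the tool's verdict as conclusive.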
The researchers said:
The reliability of our classifier typically improves as the length of the input text increases. This new classifier is significantly more reliable on text from more recent AI systems than our previously released classifier.
OpenAI acknowledged the classifier tool’s limitations, including its ineffectiveness on texts shorter than 1,000 characters and its tendency to mislabel human-written text as AI-generated.
The researchers emphasised that the tool should be used only for English, because it performs poorly in other languages, and that it is unreliable on code.
It should not be used as a primary decision-making tool, but rather as a complement to other methods of determining a text's source.
OpenAI has asked educational institutions to share their experiences with incorporating ChatGPT into their classrooms. Although the majority of institutions have banned AI, some have adopted it.