The creator of ChatGPT has built a tool that can detect when written works are generated by artificial intelligence.
OpenAI has released a new tool that attempts to determine whether a piece of text was produced by a human or by an AI, including its own ChatGPT chatbot.
The artificial intelligence research company said in a 31 January statement that while it is “difficult” to reliably detect all AI-authored content, good detection tools “can inform mitigations for misleading claims” that AI-generated text was written by a person.
Such a tool might be used to detect automated misinformation campaigns, identify instances of academic dishonesty in educational settings, and even expose AI chatbots posing as humans.
However, OpenAI cautioned that its newly released tool has a number of significant shortcomings and is not yet fully reliable.
“It should not be used as a major decision-making tool, but rather as a supplement to other means of detecting the source of a piece of text,” the company said.
According to OpenAI’s study of English texts, its program accurately classified 26 percent of AI-written text as “likely AI-written,” while wrongly labeling human-written text as AI-written 9 percent of the time.
Importantly, the tool is highly unreliable on short texts of fewer than 1,000 words.
“The reliability of our classifier often rises as the length of the input text increases,” OpenAI said. “This new classifier is substantially more dependable on text from more contemporary AI systems than our previously released classifier.”
According to OpenAI, the classifier should only be used on English text, as it performs “much worse” in other languages.
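The reported rates help explain OpenAI’s caution. As a rough illustration (not part of OpenAI’s statement), Bayes’ rule shows what a 26 per cent true-positive rate and 9 per cent false-positive rate imply about a text the classifier flags as “likely AI-written”; the `prior_ai` fraction of AI-written texts in the pool is an assumed input.

```python
# Hedged sketch: what the article's reported rates could imply for a flagged text.
# Assumptions (not from the article): texts come from a pool in which a fraction
# `prior_ai` are AI-written, and the 26%/9% rates apply uniformly to that pool.

def p_ai_given_flagged(prior_ai: float, tpr: float = 0.26, fpr: float = 0.09) -> float:
    """Bayes' rule: P(AI | flagged) = TPR*prior / (TPR*prior + FPR*(1 - prior))."""
    flagged_ai = tpr * prior_ai            # AI-written texts correctly flagged
    flagged_human = fpr * (1 - prior_ai)   # human-written texts wrongly flagged
    return flagged_ai / (flagged_ai + flagged_human)

if __name__ == "__main__":
    for prior in (0.1, 0.5, 0.9):
        print(f"prior={prior:.1f} -> P(AI | flagged) = {p_ai_given_flagged(prior):.2f}")
```

Under these assumptions, a flag is far from conclusive: if only 10 per cent of texts in a pool were AI-written, roughly three quarters of flagged texts would in fact be human-written, which is consistent with OpenAI’s advice to treat the tool as a supplement rather than a primary decision-maker.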
Since ChatGPT became available to the public, educational institutions around the world have expressed worry that it could lead to exam or assessment cheating.
To prevent students from exploiting AI, lecturers in the UK are being urged to reassess how their courses are graded, while some universities have banned the technology outright and returned to pen-and-paper exams.
One lecturer at Australia’s Deakin University estimated that one in five of the assessments she graded over the summer had used AI assistance.
A number of scientific journals have likewise banned the use of ChatGPT in the text of submitted papers.
Recognizing ChatGPT’s impact in academic circles, where it has been misused for assignments and subsequently banned at some institutions, OpenAI is now working with educators to gather feedback on the technology.
“We are interacting with educators in the United States to discover what they are seeing in their classrooms and to discuss ChatGPT’s strengths and limitations, and we will continue to widen our outreach as we learn,” the company added, inviting direct feedback from educators.
“These are critical conversations to have, because part of our aim is to deploy large language models safely, in direct contact with affected people.”