ChatGPT creator OpenAI has abruptly shut down its AI classifier, a tool meant to distinguish AI-generated writing from human writing. According to an updated blog post, OpenAI’s AI detector is no longer available “due to its low rate of accuracy.”
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” reads the update. The startup hasn’t released a replacement solution for AI detection.
Prior to shutting down the AI classifier, OpenAI had been enthusiastic about improving it with more data. The company has also acknowledged the tool’s limitations in the past, admitting it was unreliable on short texts (below 1,000 characters) and could falsely flag human-written text as AI-generated. It seems the classifier was wrong more often than it was right, which ultimately led to its shutdown.
Skepticism is growing around the theory that AI-generated text contains recognizable patterns that can be reliably detected. Sophisticated models like GPT-4 make AI detection more challenging than ever.
With ChatGPT surpassing 100 million users, entire industries are wondering how to protect themselves from AI misuse. Educators, in particular, worry about students using ChatGPT to fabricate homework and test answers. Some schools have even banned ChatGPT, but they have no way of knowing whether someone uses it off school grounds. Among the general public, a major concern is AI spreading misinformation at lightning speed.
Governments have yet to establish rules, regulations, and protective measures regarding AI, leaving it to individuals and organizations to fight AI misuse on their own for now. In the meantime, AI-related lawsuits are on the rise.
OpenAI is the mastermind behind one of the most sophisticated AI technologies to date, but, like its competitors, it is struggling to develop tools that can recognize that technology’s output. Tech giants race to release new AI features daily, yet somehow fall behind when it comes to AI security.
At least all the major AI players have pledged to develop watermarking and detection methods. But no one, not even OpenAI, the pioneer of the generative AI craze, has managed to introduce a reliable solution. That said, OpenAI recently formed a Superalignment team to fight rogue AI, which might take on this issue.