Even Generative AI cannot figure out if a piece of text was written by AI or a human.
I know, it sounds scary 🤯
OpenAI had developed an AI Classifier tool designed to differentiate between human-written and AI-written text.
Last month, OpenAI shut it down: the classifier was retired on July 20th due to its low accuracy. The tool was simply not effective at catching AI-generated text.
According to OpenAI: “In our evaluations on a ‘challenge set’ of English texts, our classifier correctly identifies 26% of AI-written text (true positives) as ‘likely AI-written’, while incorrectly labeling human-written text as AI-written 9% of the time (false positives).”
Detection is challenging because there are countless permutations and combinations to validate; and thanks to prompt engineering, output created jointly by human prompts and AI models is becoming increasingly difficult to detect.
The ability to detect AI-generated content could improve over time; however, it could go one of two ways:
a) We successfully build better AI detection techniques that help us grade any work output based on whether it is human-generated or AI-generated.
b) We do not succeed at AI detection, which would mean human-AI collaboration becomes the de facto standard for creating text, images, video, and of course, code. This also raises questions about trust, ethics, and the future of AI.
What are your thoughts on our ability, or inability, to differentiate between AI-generated and human-generated output?
#generativeai #ethicsinai #ethics #aigovernance