Future Tech

ChatGPT makers launch tool to spot AI texts – but it's not very good

Tan KW
Publish date: Sat, 04 Feb 2023, 03:57 PM

NEW YORK: Despite all the enthusiasm surrounding ChatGPT, experts have been ringing the alarm bells about the risks of such AI-generated texts. The makers of this potentially revolutionary piece of software now want to help us tell which texts are from humans and which aren't.

Faced with widespread concerns that potentially inaccurate AI-written texts are about to flood the internet and be indistinguishable from human-written texts, the makers of the text-writing software ChatGPT say they have a solution.

The software company OpenAI has launched a program that is supposed to be able to distinguish whether a text was written by a human or a computer.

ChatGPT can mimic human language so well that there are concerns it could be used, among other things, to cheat on school and university papers or to mount large-scale disinformation campaigns.

Despite its best efforts, OpenAI admitted in a blog post on Tuesday that its recognition tool still works rather poorly.

"Our classifier is not fully reliable," ChatGPT says, noting that in test runs, the software correctly identified texts written by a computer in 26% of the cases.

At the same time, 9% of texts written by humans were wrongly attributed to a machine. For now, the company therefore advises against relying primarily on the so-called classifier's verdict when evaluating texts.
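To put those two figures in perspective, here is a minimal sketch of what a 26% detection rate combined with a 9% false-alarm rate means in practice. The batch sizes are invented for the example; only the two rates come from OpenAI's post.

```python
# Hypothetical illustration of the reported rates: a 26% true-positive
# rate on AI-written text and a 9% false-positive rate on human text.
# The batch sizes are invented for this example.

ai_texts = 1000       # assumed number of AI-written samples
human_texts = 1000    # assumed number of human-written samples

true_positive_rate = 0.26   # AI texts correctly flagged as "likely AI-written"
false_positive_rate = 0.09  # human texts wrongly flagged as AI-written

caught = ai_texts * true_positive_rate            # AI texts detected
missed = ai_texts - caught                        # AI texts that slip through
false_alarms = human_texts * false_positive_rate  # humans wrongly accused

# Of all texts the classifier flags, what share is actually AI-written?
precision = caught / (caught + false_alarms)

print(f"AI texts caught:     {caught:.0f} of {ai_texts}")
print(f"AI texts missed:     {missed:.0f}")
print(f"false alarms:        {false_alarms:.0f} of {human_texts} human texts")
print(f"precision of a flag: {precision:.0%}")
```

Under these assumed batch sizes, roughly three out of four flags would be correct, but nearly three quarters of the AI-written texts would slip through undetected, which is why OpenAI warns against treating the tool's verdict as decisive.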

"Our work on the detection of AI-generated text will continue, and we hope to share improved methods in the future," the company says.

OpenAI says the need for such AI recognition software is clear, since its AI could be used for "automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human."

The company's AI-based software, trained to mimic human speech on massive amounts of text and data, has prompted waves of debate as to how education, media and other text-reliant sectors could be changed.

OpenAI made ChatGPT available to the public as a demo late last year, sparking both admiration of the software's capabilities and concerns about misuse.

While the linguistic quality is impressive, IT experts have warned that users cannot (yet) rely on ChatGPT for accurate or even ethical answers. Critics also complain that ChatGPT does not cite any sources for its statements.

Especially in the field of education, there have been calls for a way to quickly expose texts written by an AI. Classic plagiarism scanners, which can otherwise be used to check the authenticity of texts, are of no help here.

These scanners only check whether the text, or parts of it, already appears in other sources, as the sketch below illustrates. ChatGPT, however, produces original texts that have never been formulated in exactly this way before.
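The following is a minimal sketch of the exact-match principle such scanners rely on; the reference corpus and sample sentences are invented for the example. A text lifted verbatim scores high, while freshly worded text, even with identical meaning, scores zero.

```python
# Minimal sketch of the exact-match principle behind classic plagiarism
# scanners: flag a text only if chunks of it already exist verbatim in a
# reference corpus. The corpus and sample texts are hypothetical.

def ngrams(text: str, n: int = 5):
    """Yield every run of n consecutive words in the text."""
    words = text.lower().split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def overlap_score(text: str, corpus: list[str], n: int = 5) -> float:
    """Fraction of the text's n-grams found verbatim in the corpus."""
    known = {g for doc in corpus for g in ngrams(doc, n)}
    grams = list(ngrams(text, n))
    if not grams:
        return 0.0
    return sum(g in known for g in grams) / len(grams)

corpus = ["the quick brown fox jumps over the lazy dog near the river"]

copied = "the quick brown fox jumps over the lazy dog"     # lifted verbatim
novel = "a speedy auburn fox leaps across a sleepy hound"  # freshly worded

print(overlap_score(copied, corpus))  # 1.0: every chunk matches the corpus
print(overlap_score(novel, corpus))   # 0.0: unique wording evades the check
```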

OpenAI is also considering a kind of digital watermark for ChatGPT's output that would be invisible to human readers. Special verification software could then detect the watermark and flag a text as AI-generated.
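OpenAI has not disclosed how such a watermark would work. One approach discussed in the research literature is a "green list" bias over tokens (in the style of Kirchenbauer et al., 2023): a hash of the previous token marks part of the vocabulary "green", generation favours green tokens, and a detector simply counts how many green tokens a text contains. The sketch below is illustrative only; the toy vocabulary and the always-green sampling rule are simplifications.

```python
# Illustrative sketch of a "green list" text watermark, NOT OpenAI's
# actual (undisclosed) scheme. The previous token seeds a hash that
# marks half the vocabulary "green"; a watermarked generator prefers
# green tokens, and a detector counts the share of green tokens.

import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Deterministically mark a fraction of the vocabulary 'green',
    seeded by a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(length: int = 50) -> list[str]:
    """Toy generator: always pick a green token, simulating the bias a
    watermarked language model would apply during sampling."""
    out = ["tok0"]
    for _ in range(length):
        out.append(random.choice(sorted(green_list(out[-1]))))
    return out

def green_share(tokens: list[str]) -> float:
    """Detector: fraction of tokens drawn from the previous token's green list."""
    hits = sum(tokens[i] in green_list(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / (len(tokens) - 1)

watermarked = generate()
unmarked = ["tok0"] + [random.choice(VOCAB) for _ in range(50)]

print(f"watermarked: {green_share(watermarked):.0%} green")  # ~100%
print(f"unmarked:    {green_share(unmarked):.0%} green")     # ~50%
```

One appeal of such a scheme noted in the literature is that detection needs no access to the model itself, only to the hashing rule; a drawback is that paraphrasing the text can wash the watermark out.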

Google parent company Alphabet, alarmed by the potential of ChatGPT and the ways it could be leveraged by major rival Microsoft, now wants to make stronger use of its own AI research.

Google has been developing software that can write and speak like a human for years, but has so far refrained from using it. Now, however, the internet company is letting employees test a chatbot that functions similarly to ChatGPT, the broadcaster CNBC reported late on Tuesday, citing an internal email saying that a response to ChatGPT was a priority. Google is also experimenting with an AI-based version of its search engine.

 - dpa
