
OpenAI is very smug after thwarting five ineffective AI covert influence ops

Tan KW
Publish date: Fri, 31 May 2024, 08:28 AM

OpenAI on Thursday said it has disrupted five covert influence operations that were attempting to use its AI services to manipulate public opinion and elections.

These influence operations (IOs), the super lab said, neither significantly boosted audience engagement nor amplified the reach of their manipulative messages.

"Over the last three months, our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts," the biz said.

The five campaigns have been attributed to two operations in Russia, one in China, one in Iran, and a commercial company in Israel.

One operation out of Russia dubbed "Bad Grammar" focused on Telegram, targeting people in Ukraine, Moldova, the Baltic States, and the United States. The other, known as "Doppelganger," posted content about Ukraine on various internet sites.

The Chinese threat actor, referred to as "Spamouflage," praised China and slammed critics of the country.

The influence operation from Iran, known as the International Union of Virtual Media, celebrated Iran and condemned Israel and the US.

And the Israel-based firm STOIC created content about the Gaza conflict and Histadrut, Israel's national trade union federation.

According to OpenAI, these manipulation schemes rated only a two on the Brookings Breakout Scale, a framework for quantifying the impact of IOs that ranges from one (spreads within one community on a single platform) to six (provokes a policy response or violence). A two on this scale means the fake content appeared on multiple platforms but did not break out to authentic audiences.

The OpenAI report [PDF] into this whole affair finds that these influence operations are often given away by errors their human operators have failed to address. "For example, Bad Grammar posted content that included refusal messages from our model, exposing their content as AI-generated," the report says.
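That point about sloppy tradecraft suggests a simple heuristic: a post that contains a pasted-in model refusal effectively labels itself as AI-generated. Below is a minimal Python sketch of that idea; the refusal-phrase list and the flag_refusal_leaks helper are hypothetical illustrations, not OpenAI's actual detection tooling.

```python
# Minimal sketch: flag posts that contain tell-tale LLM refusal phrases,
# the kind of operator error described in the report. The phrase list and
# helper name are illustrative assumptions, not OpenAI's detection pipeline.

REFUSAL_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "i'm sorry, but i can't help with that",
]

def flag_refusal_leaks(posts: list[str]) -> list[str]:
    """Return the posts that appear to contain a pasted model refusal."""
    flagged = []
    for post in posts:
        text = post.lower()
        if any(phrase in text for phrase in REFUSAL_PHRASES):
            flagged.append(post)
    return flagged

if __name__ == "__main__":
    sample = [
        "Great analysis, totally agree with the author!",
        "As an AI language model, I cannot fulfill this request.",
    ]
    print(flag_refusal_leaks(sample))  # prints only the second, leaked post
```

A real pipeline would need far more signal than string matching, of course, but the example shows why such leaks make campaigns like Bad Grammar easy to spot.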

"We all expected bad actors to use LLMs to boost their covert influence campaigns - none of us expected the first exposed AI-powered disinformation attempts to be this weak and ineffective," observed Thomas Rid, professor of strategic studies and founding director of the Alperovitch Institute for Cybersecurity Studies at Johns Hopkins University’s School of Advanced International Studies, in a social media post.

OpenAI's determination that these AI-powered covert influence campaigns were ineffective was echoed in a May 2024 report on UK election interference by The Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute.

"The current impact of AI on specific election results is limited, but these threats show signs of damaging the broader democratic system," the CETaS report found, noting that of 112 national elections that have either taken place since January 2023 or will occur in 2024, AI-based meddling was detected in just 19 and there's no data yet to suggest election results were materially swayed by AI.

That said, the CETaS report argues that AI content creates second-order risks, such as sowing distrust and inciting hate, that are difficult to measure and have uncertain consequences.

Rid suggested that as more competitors develop tools for synthetic content creation and OpenAI's share of the market declines, the Microsoft-championed lab will be less able to detect abuses of this sort. He also noted that OpenAI, in its discussion of IOs, doesn't address other forms of synthetic content abuse, including fake product reviews, ad bots, fraudulent marketing copy, and phishing messages. ®

https://www.theregister.com//2024/05/30/openai_stops_five_ineffective_ai/
