
TikTok becomes first platform to require watermarking of AI content

Tan KW
Publish date: Fri, 10 May 2024, 10:30 PM

TikTok intends to begin labelling AI-generated images and videos uploaded to its video-sharing service.

"TikTok is starting to automatically label AI-generated content (AIGC) when it's uploaded from certain other platforms. To do this, we're partnering with the Coalition for Content Provenance and Authenticity (C2PA) and becoming the first video sharing platform to implement their Content Credentials technology," revealed the company.

The Chinese short-form video platform said it plans to extend the feature to audio-only content "soon." TikTok already labels AI-generated content made in-app and requires creators to label realistic AI content as such. How effective the latter requirement is remains debatable.

Content Credentials was created by the C2PA, which was co-founded by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. Its goal is to establish an open, royalty-free technical standard to fight disinformation.

The technology works like a watermark by attaching metadata to content, which TikTok can read to instantly recognize and label the material as AIGC.

"They tell you who made it, when it was made, edits that were made and whether AI was used or not," explained Adobe chief trust officer Dana Rao in an TV interview. He compared it to a nutrition label for content.

The age of deepfakes has arrived

There is growing concern globally about humans' ability to spot deepfakes - whether they take the form of fraudulent remote IT job applicants, scammers out for money, or fabricated pornography.

Just this week, the internet was entertained by a stream of AI-generated images of celebrities at the Met Gala who were not in attendance. The fakes were so realistic that even pop star Katy Perry's mom was fooled.

"The AI generated fake photos from the Met Gala are a low-stakes prelude for what's going to happen between now and the elections," observed one US-based individual.

Microsoft Threat Analysis Center manager Clint Watts warned last month that deepfake election subversion is disturbingly easy. Microsoft should know: its VASA-1 tool is considered too dangerous to release due to ethical considerations.

It is a scenario playing out in India right now, where AI deepfakes of Bollywood stars endorse political parties and level criticism at others, against the backdrop of an election that will determine the fate of current prime minister Narendra Modi.

OpenAI, meanwhile, released model safety guidance earlier this week while acknowledging that it's looking into how to support the creation of NSFW, or "not safe for work," content.

Government intervention

Concerns about the images and videos used to replicate both Bollywood actors and lawmakers prompted India's Ministry of Electronics and IT (MeitY) to issue an advisory last fall stating that social media companies must remove deepfakes from their platforms within 36 hours of them being reported.

Failure to act means an organization can be held liable for third-party information hosted on its platform.

Meanwhile, US entrepreneur Cassey Ho recently found herself in the middle of a TikTok deepfake nightmare after one of her clothing designs went viral. She found TikTok videos in which images of her body were superimposed with a different face, posted by counterfeiters of her skirt design who needed promotional content.

She described it as feeling like she was "in an episode of Black Mirror," and urged her followers to report the incident.

"Your use of the report button is just as strong as mine. Any power we may be able to have is going to be our strength in numbers," implored Ho.

"Honestly, it's time for the Department of Commerce to really crack down on counterfeits," said one fed up follower.

The US Department of Commerce requested [PDF] an additional $62.1 million in fiscal year 2025 "to safeguard, regulate, and promote AI, including protecting the American public against its societal risks."

In her testimony defending the budget before the House Appropriations Committee, United States Secretary of Commerce Gina Raimondo said those funds would go toward the AI Safety Institute.

"Everybody including myself is worried about synthetic content so we want companies to watermark what's AI generated. Well, what's adequate watermarking? What's adequate red teaming? We're going to build a team - the AI safety Institute - to develop standards so that Americans can be safe," she explained.

"We're also investing in scientists and we're investing in policy people at [the National Telecommunications and Information Administration (NTIA)] to help us develop policies for AI," she added.

Watermarks not foolproof

Unfortunately, watermarking may not be the savior it has been billed as. A team at the University of Maryland in the US looked into the reliability of watermarking techniques for digital images and found they were not that robust.

The researchers developed an attack to break watermarks and were successful at toppling every existing one they encountered.
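One illustrative approach in this line of work is "purification": drown the watermark in random noise, then denoise the image so it still looks clean but no longer carries the hidden signal. The toy Python sketch below stands in for that idea under loose assumptions - it substitutes a simple Gaussian blur for the diffusion-model denoiser a real attack would use, and the file names are hypothetical - so treat it as a conceptual illustration, not the researchers' actual pipeline.

```python
import numpy as np
from PIL import Image, ImageFilter

def purify(path_in: str, path_out: str, sigma: float = 25.0) -> None:
    """Noise-then-denoise 'purification': perturb the pixels enough to
    drown out a low-amplitude watermark signal, then smooth the result
    back into a natural-looking image."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)

    # Add Gaussian noise strong enough to swamp the embedded watermark.
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0, 255)

    # Stand-in denoiser: a diffusion model would reconstruct a clean image
    # whose pixels no longer encode the watermark pattern.
    out = Image.fromarray(noisy.astype(np.uint8))
    out.filter(ImageFilter.GaussianBlur(radius=1.5)).save(path_out)

purify("watermarked.png", "purified.png")  # hypothetical file names
```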

"Similar to some other problems in computer vision (eg, adversarial robustness), we believe image watermarking will be a race between defenses and attacks in the future," said the boffins. ®

 

https://www.theregister.com//2024/05/10/tiktok_ai_watermarks/
