
Devaluing content created by AI is lazy and ignores history


It's taken less than eighteen months for human- and AI-generated media to become impossibly intermixed. Some find this utterly unconscionable, and refuse to have anything to do with media that contains any generative content. That ideological stance rests on a false hope: that this is a passing trend, an infatuation with the latest new thing, and that it will soon fade.

It isn't, and it won't. What has to pass is how we approach AI-generated content.

To understand why, consider what happened when my publisher recently returned from the London Book Fair with a great suggestion: recording an audiobook version of my latest printed work. We had a video call to work through the specifics. Would I like to record it myself? Yes, very much. When could I get started? Almost immediately. And I had a great idea: I'd use the very cool AI voice synthesis software from Eleven Labs to synthesize unique voices for the Big Three chatbots - ChatGPT, Copilot and Gemini.

The call went quiet. My publisher looked embarrassed. "Look, Mark, we can't do that."

"Why not? It'll sound great!"

"It's not that. Audible won't let us upload anything that's AI-generated."

An anti-AI policy makes sense where there's a reasonable chance of being swamped by tens of thousands of AI-voiced texts - that's almost certainly Audible's fear. (There's also the issue of putting voice artists out of work - though employers appear rather less concerned about job losses.)

My publisher will obey Audible's rule. But as it becomes increasingly difficult to differentiate between human and synthetic voices, other audiobook creators may adopt a more insouciant approach.

Given how quickly the field of generative AI is improving - Hume.AI's "empathetic" voice is the latest notable leap forward - this policy looks more like a stopgap than a sustainable solution.

It may seem like generative AI and the tools it enables have appeared practically overnight. In fact, generating a stream of recommendations is where this all got started - way back in the days of Firefly. Text and images and voices may be what we think of as generative AI, but in reality they're simply the latest and loudest outcomes from nearly three decades of development.

Though satisfying, drawing a line between "real" and "fake" betrays a naïveté bordering on wilful ignorance about how our world works. Human hands are in all of it - as both puppet and puppeteer - working alongside algorithmic systems that, from their origins, have been generating what we see and hear. We can't neatly separate the human from the machine in all of this - and never could.

If we can't separate ourselves from the products of our tools, we can at least be transparent about those tools and how they've been used. Australia's Nine News recently tried to blame the sexing up of a retouched photograph of a politician on Photoshop's generative fill and expand features, only to have Adobe quickly point out that Photoshop wouldn't do that without guidance from a human operator.

At no point had the public been informed that the image broadcast by Nine had been AI-enhanced, which points to the heart of the issue. Without transparency, we lose our agency to decide whether or not we can trust an image - or a broadcaster.

My colleague Sally Dominguez has recently been advocating for a "Trust Triage" - a dial that slides between "100 percent AI-generated" and "fully artisanal human content" for all media. In theory it would give creators a way to be completely transparent about both process and product, and give media consumers a sensible, grounded basis for deciding what to trust.
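
Dominguez's dial is a concept, not a specification, but it's easy to sketch what such a label might look like in practice. The following Python fragment is purely illustrative - every name and field in it is my own assumption, not part of any actual proposal or standard:

from dataclasses import dataclass, field

@dataclass
class ProvenanceLabel:
    """Hypothetical 'Trust Triage' style label: 0.0 means fully
    artisanal human content, 1.0 means 100 percent AI-generated."""
    ai_fraction: float                              # position on the dial, 0.0..1.0
    tools: list[str] = field(default_factory=list)  # generative tools used, if any

    def __post_init__(self) -> None:
        if not 0.0 <= self.ai_fraction <= 1.0:
            raise ValueError("ai_fraction must be between 0.0 and 1.0")

    def describe(self) -> str:
        if self.ai_fraction == 0.0:
            return "fully artisanal human content"
        if self.ai_fraction == 1.0:
            return "100 percent AI-generated"
        tools = ", ".join(self.tools) or "undisclosed"
        return f"{self.ai_fraction:.0%} AI-assisted (tools: {tools})"

# An audiobook narrated mostly by a human author, with synthetic chatbot voices mixed in
label = ProvenanceLabel(ai_fraction=0.2, tools=["voice synthesis"])
print(label.describe())  # 20% AI-assisted (tools: voice synthesis)

The point isn't the code; it's that even a crude, machine-readable dial like this would give consumers something an unlabelled feed never does: a declared starting point for trust.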

That's something we should have demanded when our social media feeds went algorithmic. Instead, we got secrecy and surveillance, dark patterns and addiction. Always invisible and omnipresent, the algorithm could operate freely.

In this brief and vanishing moment - while we can still know the difference between human and AI-generated content - we need to begin a practice of labelling all the media we create, and suspiciously interrogate any media that refuses to give us its particulars. If we miss this opportunity to embed the practice of transparency, we could find ourselves well and truly lost. ®

https://www.theregister.com//2024/04/17/devaluing_ai_content_is_lazy/
