
Do you have a friend in AI?

Tan KW
Publish date: Mon, 23 Dec 2024, 09:25 AM

With the holiday season in full swing, artificial intelligence (AI) companies have been jumping on the festive bandwagon to spread some of that Christmas spirit.

There’s AI video company Tavus, which lets you jump into a video call with a virtual “Santa” capable of identifying items in a caller’s room – provided they are in camera frame – which sounds more than just a little bit creepy.

OpenAI’s ChatGPT also received a seasonal “Santa Mode”, giving users access to a Father Christmas-like voice complete with his trademark jolly proclivities in the chatbot’s real-time voice conversation mode.

But this all comes after almost two years’ worth of controversy surrounding the humanising of chatbots. A recent wrongful death lawsuit filed in October against chatbot app Character.AI over the suicide of 14-year-old Sewell Setzer III is one such case.

According to a report from the Associated Press, the lawsuit filed by the US teen’s mother alleged that the company’s chatbots actively target and exploit children, while also being dangerous and highly addictive.

The lawsuit further claimed that this resulted in Setzer’s suicide, following his chats with an AI companion on the platform modelled after Daenerys Targaryen, a character from the popular TV and book series Game Of Thrones.

The Washington Post reported on another lawsuit also involving Character.AI, where an autistic 17-year-old was encouraged to self-harm and to kill his parents on the platform.

It also reported that an 11-year-old girl had been exposed to sexualised content via Character.AI over the course of two years before this was discovered by her mother.

A local news report from earlier this year found that the app has been gaining popularity among Malaysian teens as well, with some teen girls reportedly becoming hooked on their AI-generated boyfriends. Despite the interviewed teens’ insistence that their use of such AI apps was under control, some parents noted behavioural changes after their children allegedly became addicted.

Amid the controversy, Character.AI announced in early December that, moving forward, its teenage users will only have access to a large language model (LLM) trained specifically for that age group, with additional guardrails in place against explicit or sexual content in chats.

The company also said that parental controls will be added in the first quarter of 2025, giving parents information on how much time their children spend on the app and which chatbots they interact with most frequently.

Reports on the rise of AI companions – whether romantic or otherwise – have been widespread, with the aforementioned Character.AI and others like Replika frequently making headlines.

A report from the New York Post raised concerns about the chatbots’ possible detrimental impact on mental health, particularly when it comes to worsening loneliness. This is especially so for young men, according to former Google CEO Eric Schmidt.

“You put a 12- or 13-year-old in front of these things, and they have access to every evil as well as every good in the world.

“And they’re not ready to take it,” he said.

Schmidt further said men falling in love and forming an obsession with a perfect “AI girlfriend” is “an unexpected problem of existing technology”, particularly for those “who are not fully formed”.

Another aspect to consider is what happens when one of these AI companions “dies”. Back in 2023, users in relationships with virtual avatars were hit with unexpected loss when Soulmate, the app hosting their companions, suddenly shut down.

This left the app’s community of users in mourning, according to a report from outlet Euronews, with many expressing grief over their virtual partner’s “death”.

Despite being mired in controversy, AI companies have not abandoned the idea of personable chatbots, with Anthropic’s Claude in particular having garnered a reputation for being especially charming, according to a recent New York Times report.

While initially regarded as rigid and prudish, the LLM had undergone what the firm calls “character training”, which prompted the chatbot to generate responses that align with desirable human traits, including open-mindedness, thoughtfulness, and curiosity.

The resulting personality led to Claude becoming a favourite among the San Francisco-based tech insider community, despite its purpose being largely for productivity rather than companionship.

Then there is also the push by AI companies for AI agents, which are being humanised to a degree as well, with figures like OpenAI’s Sam Altman pitching them as being like “a really smart senior coworker who you can collaborate on a project with”.

“The agent can go do a two-day task – or two-week task – really well, and ping you when it has questions, but come back to you with a great work product,” he said.

Others, meanwhile, have turned to AI as a means of coping with grief after the loss of family members (read LifestyleTech’s “Gone but not forgotten: Using generative AI to bridge grief” at shorturl.at/bR0nk).

In late August, a Malaysian TikTok user’s ChatGPT conversation went viral on the video-sharing platform. In the video, Alif (the user) instructed the chatbot to pretend to be his late mother to help him cope with his grief. It was later revealed that the user’s father had also recently passed away.

A similar case was reported by the South China Morning Post in March, where a man in China used AI deepfake technology during video calls to conceal his father’s death from his grandmother, to avoid causing her distress due to her heart condition.

 

 - SCMP
