This boy’s chatbot girlfriend enticed him to suicide

Character.AI, Chai AI, and Replika’s Luka Inc have all pledged to do better, to “take the safety of our users very seriously”, to “work our hardest to minimise harm” and “to maintain the highest ethical standards”. And yet, as of today, Character.AI markets its companionship app as “AIs that feel alive”, powerful enough to “hear you, understand you, and remember you”. Replika is marketed as “the AI companion who cares”, with sexually suggestive advertising and conversations deceiving users into believing their AI fling is conscious and genuinely empathetic. Such manipulative advertising lures vulnerable people into growing extremely attached, to a point where they feel guilty over quitting (that is, “killing” or “abandoning”) the product.

In a world where one in four young people feel lonely, the appeal of a perfect synthetic lover is clear. No less so for the corporations monetising our most intimate desires by selling erotic interaction as a premium feature for a subscription fee.

To avoid legal repercussions while keeping those juicy subscription fees flowing in, AI chatbot providers will put disclaimers and terms & conditions on their website. They will claim that generative AI’s answers cannot be fully controlled. They will claim that platforms with millions of users embrace “the entire spectrum of human behaviour”, implying that harm is unavoidable.

Some companies, like OpenAI, have made strides to curb harmful interactions by implementing content moderation. These voluntary guardrails aren’t perfect yet, but it’s a start.

Attempts by the AI companies to play down the risks recall Meta CEO Mark Zuckerberg’s theatrical performance in the US Congress. He apologised to the parents of teenagers whose suicides have been directly linked to Meta’s social media platforms, but without changing the underlying business model that keeps fuelling the epidemic of social media-related suicides.

But companion AI’s mental health effects may be like social media’s on steroids. More research is urgently needed, but early evidence suggests users can develop intense bonds and unhealthy dependency on their chatbots.

Many are quick to shame users for turning to AI for companionship and intimacy. A more productive response is to tackle the root cause: AI companions can be designed to simulate empathy, which makes users emotionally exploitable. Calling for AI regulation has become something of a platitude; few dare to suggest exactly how it should work. My team’s research at the University of Sydney suggests some low-hanging fruit:


  1. Ban false advertising: Misleading claims that AI companions “feel” or “understand” should incur hefty penalties, with repeat offenders shut down. Clear disclosures of what the system can and cannot do should be mandatory.
  2. Guarantee user data sovereignty: Given the personal nature of conversations, users should own their data, allowing them to retain control over its storage and transfer.
  3. Mandate tailored support: Algorithms can already predict from social media posts, with astonishing precision, whether someone intends to die by suicide; the same is possible with AI chatbots. It is not enough merely to classify AI applications by risk level. Vulnerable people need tailored support. AI providers should be obliged to intervene – by shutting down the exchange and referring the user to a professional counsellor – when symptoms of a mental health crisis become evident.
  4. For parents, maintaining an open, respectful dialogue about online behaviour is essential. Teens may explore AI companions as a safe space for expressing themselves, but they should be reminded that this space is not yet safe, just as it is not safe to buy mental health medication off the back of a truck rather than with a doctor’s prescription.

As we move towards a future where human-AI relationships may become prevalent, let’s remember to be gentle on the troubled individual, but tough on the sick system.

Raffaele Ciriello is a senior lecturer in business information systems at the University of Sydney.

If this article has raised concerns for you, or about someone you know, call Lifeline on 13 11 14.


