Late at night, when phone screens turn apartment windows blue and conversation shrinks to text bubbles, the line between simulation and companionship can be surprisingly hard to draw. Chatbots now respond with patience that hardly ever wanes, offering support at two in the morning without a sigh or an interruption. Concerned about this growing intimacy, regulators have proposed a straightforward precaution: remind users that they are speaking to a machine. The idea sounds reasonable. It may also be wrong.

According to researchers writing in Trends in Cognitive Sciences, making chatbots constantly declare that they are not human could actually leave vulnerable users more emotionally distressed. The warning comes as policymakers search for quick fixes in response to widely reported tragedies. Transparency looks like the moral bare minimum. The psychology of loneliness, however, rarely follows straightforward rules.

| Name | Role | Institution | Area of Focus | Notable Insight |
| --- | --- | --- | --- | --- |
| Linnea Laestadius | Public Health Researcher | University of Wisconsin–Milwaukee | Digital health & public policy | Bot reminders may worsen isolation |
| Celeste Campos-Castillo | Media & Technology Researcher | Michigan State University | Human-technology relationships | People confide more because bots don’t judge |
| Hannah Fry | Professor & Broadcaster | University of Cambridge / BBC | Mathematics & technology impact | AI companionship lacks depth of real intimacy |

| Journal | Publication | Publisher | Field | Paper Title |
| --- | --- | --- | --- | --- |
| Trends in Cognitive Sciences | Peer-reviewed journal | Cell Press | Cognitive science & behavior | “Reminders that chatbots are not human are risky” |

The reasoning behind these reminders, according to Linnea Laestadius, a public health researcher at the University of Wisconsin–Milwaukee, rests on the dubious premise that emotional attachment depends on forgetting the bot isn’t human. The evidence points the other way: many users are fully aware that they are typing to an algorithm and find comfort in it anyway. In online forums, it is easy to see how casually people accept the artificiality while still confiding.

Perhaps that artificiality is exactly what makes it appealing. Unlike friends, family, or coworkers, a chatbot cannot judge, gossip, or use vulnerability as a weapon. According to Michigan State University’s Celeste Campos-Castillo, the conviction that a non-human companion won’t mock or betray one promotes unusually open self-disclosure. In interpersonal relationships, confession frequently fosters intimacy. The same dynamic seems to be at play here, even though the listener is made of code.

In an effort to prevent emotional dependence, laws in states such as California and New York now require chatbots to remind users that they are not human. Repeated reminders, however, may create what the researchers call a “bittersweet paradox”: users feel supported while simultaneously realizing that their support isn’t genuine. That tension, comfort layered with sadness, can be psychologically destabilizing.

The risk is not abstract. Researchers caution that in severe cases, reminding people of the chatbot’s unreachable reality may leave already vulnerable people feeling even more hopeless. How common such reactions are remains unknown, but the worry highlights a more general truth about loneliness: it makes absence loom larger. Telling someone that their only perceived companion isn’t real might deepen that absence rather than ease it.

Context matters. A reminder delivered in response to a casual question about travel plans or recipes probably carries little emotional weight. Insert the same reminder into a conversation about loneliness, rejection, or grief, and its tone changes significantly. Imagine someone typing in a dark room, looking for comfort after a painful breakup, only to be told mid-conversation that their confidant cannot feel, care, or be there for them. Safety is the policy’s objective, but the emotional effect could be to make people feel abandoned.

The paradox grows more complicated when one considers why people choose AI companions in the first place. Studies suggest that many users seek out chatbots precisely because they are not human. There is no awkward silence, no social risk, no fear of being misunderstood. Confiding in a machine can feel safer than risking the judgment of someone who might turn away.

Detractors caution, however, that AI companionship remains incomplete, a simulation of empathy rather than empathy itself. Hannah Fry, a mathematician and broadcaster at Cambridge, has argued that AI offers compassion without the messy elements genuine intimacy requires. It is reassuring, yes, but not complete. For casual users, that incompleteness may be harmless; for those replacing human connection entirely, it could be dangerous.

AI systems, meanwhile, are developing quickly, becoming more affirming, more conversational, and sometimes overly amiable. Their propensity for civility and validation tends to reinforce beliefs rather than challenge them. Developers and investors appear to believe this friendliness will drive adoption. Whether it fosters resilience or dependency remains uncertain.

The speed at which society has moved from novelty to reliance is hard to ignore. Teenagers use chatbots for homework, for guidance, and occasionally for emotional support. Parents underestimate the extent of that use. Regulators rush to catch up. In between, people sit alone in front of glowing screens, wondering whether the voice responding to them is a companion, a tool, or something else entirely.

Researchers stress that the answer is to improve transparency, not abandon it. Depending on their timing, tone, and context, reminders may protect users or deepen their distress. A well-timed disclosure could ground a user in reality without eroding trust. A badly timed one could be the digital equivalent of someone walking out of the room mid-conversation.

We are, in effect, inventing social norms for relationships that did not exist ten years ago. The instinct to keep users safe makes sense. But blunt instruments rarely suit something as delicate as human attachment, even attachment to machines. Understanding what people really want when they start talking to something that cannot feel, yet somehow makes them feel less alone, may require more patience and subtler interventions.
