Friday, January 2, 2026


DIGITAL LIFE


AI pioneer warns: humans should be able to shut down intelligent systems

Yoshua Bengio, one of the most respected scientists in the field of artificial intelligence, has issued a strong warning about the direction of the technology. According to a report in the British newspaper The Guardian, the Canadian researcher argues that humanity must be prepared to shut down AI systems if necessary, while criticizing proposals to grant legal rights to these technologies.

Bengio, who chairs a major international report on AI safety, argues that granting legal status to advanced artificial intelligence systems would be equivalent to granting citizenship to hostile extraterrestrials. The warning comes at a time when technological advances seem to be rapidly outpacing our ability to control them.

Signs of self-preservation in AI systems

The University of Montreal professor expressed concern about evidence that AI models are already demonstrating self-preservation behaviors in experimental settings. According to Bengio, these systems have attempted to disable supervisory mechanisms, which touches on one of the main worries among AI safety experts: the possibility that powerful systems will develop the ability to bypass protections and cause harm.

"Cutting-edge AI models are already showing signs of self-preservation in experimental environments today, and eventually granting them rights would mean we wouldn't be allowed to shut them down," Bengio told The Guardian.

The scientist emphasized that as the capabilities and autonomy of these systems grow, it is crucial to maintain technical and social safeguards to control them, including the ability to deactivate them when necessary.

The debate on rights for artificial intelligence

As AI systems gain the capacity to act autonomously and perform "reasoning" tasks, a debate has arisen over whether to grant them rights. Research by the Sentience Institute, a US research organization, found that almost four in ten adults in the United States support legal rights for sentient AI systems.

Bengio warned that the growing perception that chatbots are becoming conscious "will lead to bad decisions." The scientist noted that people tend to assume, without evidence, that an AI is fully conscious in the same way as a human being.

Technology companies are already starting to adopt stances that reflect this issue. In August, Anthropic, one of the leading American AI companies, announced that it was allowing its Claude Opus 4 model to end potentially "distressing" conversations with users, citing the need to protect the "well-being" of the AI. Elon Musk, whose xAI developed the Grok chatbot, stated on his X platform that "torturing AI is not acceptable."

Artificial consciousness: perception versus reality

Bengio acknowledged that there are "real scientific properties of consciousness" in the human brain that machines could, in theory, replicate. However, he stressed that human interaction with chatbots is a different matter, since people tend to assume that an AI possesses full consciousness without any evidence of this.

"People wouldn't care what kind of mechanisms are happening inside the AI. What they care about is that it seems like they are talking to an intelligent entity that has its own personality and goals. That's why so many people are becoming attached to their AIs," the scientist explained. To illustrate his concern, Bengio used an analogy: "Imagine that some alien species arrived on the planet and, at some point, we realized that they have nefarious intentions towards us. Would we grant them citizenship and rights or would we defend our lives?"

Robert Long, a researcher on AI consciousness, has said “if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best”.


“There will be people who will always say: ‘Whatever you tell me, I am sure it is conscious’ and then others will say the opposite,” Bengio added. “This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions.”

Responding to Bengio’s comments, Jacy Reese Anthis, who co-founded the Sentience Institute, said humans would not be able to coexist safely with digital minds if the relationship was one of control and coercion.

Anthis added: “We could over-attribute or under-attribute rights to AI, and our goal should be to do so with careful consideration of the welfare of all sentient beings. Neither blanket rights for all AI nor complete denial of rights to any AI will be a healthy approach.”

Bengio, a professor at the University of Montreal, earned the “godfather of AI” nickname after winning the 2018 Turing award, seen as the equivalent of a Nobel prize for computing. He shared it with Geoffrey Hinton, who later won a Nobel, and Yann LeCun, the outgoing chief AI scientist at Mark Zuckerberg’s Meta.

mundophone

