DIGITAL LIFE

“The danger of AI is not the machine — it’s our laziness to think”
While the world debates the advance of artificial intelligence, Spanish journalist and writer Laura G. de Rivera issues a simple but powerful warning: the greatest risk lies not in the technology, but in human stupidity. Her book Slaves of the Algorithm is a manifesto against blind dependence on automated systems, and an invitation to recover something that seems to be on the verge of extinction: critical thinking.
Imagine you decide to go out to dinner. Your partner may not know what you want to eat, but the AI does, because that afternoon you watched a taco video on Instagram. This kind of behavior, Rivera says, shows how much control over our decisions we cede to machines that do nothing more than analyze data and patterns.
“If we don’t make decisions, others will make them for us,” she writes in the book. And who are these “others”? Digital platforms, technology companies, and systems that learn from our clicks, searches, and likes.
Research by psychologist Michal Kosinski of Stanford University has already shown that an algorithm can predict your preferences more accurately than your mother. It seems practical, but Rivera sees a high price for this convenience: "We lose freedom, the ability to be ourselves — and even imagination."
According to the author, we live in a reality where "we work for free for Instagram." Every photo posted, every like, and every second of scrolling feeds systems that transform our data into profit — but without us realizing it.
The problem, Rivera explains, is that we have become lazy. We no longer think in waiting rooms, we don't get bored, we don't stay alone with our ideas. "We pick up our cell phones all the time, and the moments that used to serve for reflection have been taken over by constant stimuli," she says.
Her proposal for resistance is almost ironic in its simplicity: thinking. It's not about abandoning technology, but about regaining awareness of what we do online. "Only critical thinking can defend individual freedom against algorithmic control."
Rivera believes the first step is understanding how the platforms work. “Many people don’t realize that by spending hours on TikTok, they are working for the company. Behavioral data has economic value — that’s why Google is one of the richest companies in the world, even without charging for its services.”
For her, the solution lies in digital education and transparency. Learning to read the “terms of use,” rejecting cookies when possible, and limiting the sharing of personal information are small gestures that help curb the abusive collection of data.
The real danger is not AI, but human complacency
“Artificial intelligence won’t do anything on its own; it’s just a sequence of zeros and ones,” says Rivera. “The real danger is our laziness.”
She criticizes what she calls the “numbing of human will”: we accept being watched, monitored, and influenced because it’s easier to let technology think for us. “We prefer to receive orders. It’s an old fear of freedom.”
The writer cites the philosopher Erich Fromm, author of Fear of Freedom, who argued back in the twentieth century that human beings are afraid of deciding for themselves. “Today we have simply replaced the boss or the State with the algorithm,” Rivera summarizes.
When the computer decides for you
The danger of blindly trusting automated systems is not theoretical. Studies show that people tend to trust an answer given by a computer more than their own intuition, even when the result is absurd.
Rivera warns of the risk of delegating critical decisions to algorithms, especially in areas such as health, public safety and justice. "When we let an AI decide, we may be handing over life-or-death issues to a system that only understands statistics."
Whistleblowers and resistance within big tech companies
The journalist recalls emblematic cases of professionals who confronted tech giants to denounce abuses. Among them:
- Edward Snowden, who revealed the mass surveillance scheme of US agencies;
- Sophie Zhang, a former Facebook employee who warned about the use of fake accounts by governments to manipulate public opinion [https://en.wikipedia.org/wiki/Sophie_Zhang_(whistleblower)];
- Timnit Gebru, fired from Google after denouncing racial and gender discrimination in algorithms;
- Guillaume Chaslot, a former YouTube employee who showed how the recommendation system pushed users toward radical content and conspiracy theories [www.linkedin.com/in/guillaume-chaslot-6774b982].
These cases, says Rivera, show that the problem is not in the machines, but in the people who control them — and in the lack of ethics of companies that prioritize engagement and profit above all else.
How to resist algorithmic manipulation
For Rivera, it is not possible to disconnect completely, but it is possible to make the platforms' job harder. She suggests some simple measures:
- Use browsers that block tracking;
- Reject cookies whenever possible;
- Control the time spent on social networks;
- And, above all, understand the business model behind each application.
"When you understand the game, you're no longer a pawn," she says. "Knowledge is the only form of resistance."
Creativity, empathy, and solidarity: what AI will never have
Despite her criticism, Rivera is not against technology, but she reminds us that AI will never be able to create something genuinely new or compassionate.
"A computer program cannot invent what does not exist in the data. It lacks creativity, empathy, or solidarity. These are human qualities, and they are exactly what we need to preserve."
Thinking is the new act of rebellion
Laura G. de Rivera doesn't want the reader to flee from technology; she wants them to reclaim the power to decide. For her, resisting the algorithm begins with a simple, almost banal, yet revolutionary gesture: thinking before swiping a finger across the screen.
After all, as she concludes, "artificial intelligence may be powerful, but nothing is more dangerous than human stupidity when it ceases to think for itself."
mundophone