Saturday, December 27, 2025

 

DIGITAL LIFE


Fake videos made by AI: new tools show how easy it is to manipulate public perception, creating an alternate 'reality' in seconds

A TikTok video from October appeared to show a woman being interviewed by a television reporter about the use of food assistance. Neither woman was real. The conversation never happened. The video was generated by artificial intelligence. Still, people seemed to believe it showed a real conversation about selling food assistance for cash, which would constitute a crime.

In the comments, many reacted to the video as if it were real. Despite subtle warning signs, hundreds began labeling the woman a criminal—some with explicit racism—while others attacked government assistance programs, just as a national debate intensified around President Donald Trump's planned cuts to the program.

Videos like this fake interview, created with OpenAI's new app, Sora, show how public perception can be easily manipulated by tools capable of producing an alternate reality from a series of simple commands.

In the two months since Sora's arrival, misleading videos have skyrocketed on TikTok, X, YouTube, Facebook, and Instagram, according to experts who monitor this type of content. The deluge has raised concerns about a new generation of misinformation and fabrications.

Most major social media companies have policies requiring disclosure of the use of artificial intelligence and broadly prohibit content intended to deceive. But these safeguards have proven utterly insufficient in the face of the technological leap represented by OpenAI's tools.


Video of a mother holding the hand of a newborn baby — Photo: Reproduction/Sora

While many videos are silly memes or cute—but fake—images of babies and pets, others aim to incite the kind of hostility that often marks online political debate. They have already appeared in foreign influence operations, such as Russia's ongoing campaign to demoralize Ukraine.

Researchers who track misleading uses of the technology say it is now up to the companies to do more to ensure people know what is real and what is not.

“Could they do a better job moderating misinformation content? Yes, clearly they are not doing that,” said Sam Gregory, executive director of Witness, a human rights organization focused on the threats of technology. “Could they be more proactive in seeking out AI-generated information and labeling it themselves? The answer is also yes.”

The video about reselling food stamps was one of several that circulated as the standoff over the U.S. government shutdown dragged on, leaving actual beneficiaries of the Supplemental Nutrition Assistance Program (SNAP) struggling to feed their families.

Fox News ran a similar fake video, treating it as an example of public outrage over alleged abuses of the food stamp program, in an article that was later removed from the site. A Fox spokesperson confirmed the removal but did not provide further details.

The hoaxes have been used to ridicule not only poor people but also Trump. A video on TikTok showed the White House with what appeared to be narration in Trump's voice reprimanding his cabinet for releasing documents involving Jeffrey Epstein, the disgraced financier convicted of sex crimes.

According to NewsGuard, a company that monitors misinformation, the video — which was not labeled as AI — was viewed by more than 3 million people in just a few days.

Until now, platforms have relied primarily on creators to inform viewers that published content is not real — but they don't always do so. And while there are ways for platforms like YouTube and TikTok to detect that a video was made with artificial intelligence, this isn't always immediately signaled to viewers.

"They should have been prepared," said Nabiha Syed, executive director of the Mozilla Foundation, the technology security organization behind the Firefox browser, referring to social media companies.

The companies behind the AI tools say they are trying to make it clear to users which content is computer-generated. Sora and Google's competing tool, Veo, embed a visible watermark in the videos they produce.

Sora, for example, stamps a "Sora" label on each video. Both tools also embed invisible, machine-readable metadata that identifies each video's origin.
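
For readers who want to see what that machine-readable metadata looks like in practice: OpenAI has said Sora videos carry C2PA Content Credentials, an open provenance standard. Below is a minimal Python sketch for inspecting a downloaded file, assuming the C2PA project's open-source c2patool command-line utility is installed; the exact output and exit behavior vary by version, the filename is only an example, and platforms often strip this metadata when videos are re-uploaded.

    import json
    import subprocess

    def read_content_credentials(path: str):
        """Return the file's C2PA manifest as a dict, or None if absent."""
        # c2patool prints any embedded C2PA manifest store as JSON.
        result = subprocess.run(
            ["c2patool", path], capture_output=True, text=True
        )
        if result.returncode != 0:
            # No manifest found, or the metadata was stripped on re-upload.
            return None
        return json.loads(result.stdout)

    manifest = read_content_credentials("downloaded_video.mp4")  # example path
    if manifest is None:
        print("No Content Credentials found; that alone proves nothing.")
    else:
        # The claim generator field typically names the tool that produced
        # the video, which is how a Sora clip can be identified.
        print(json.dumps(manifest, indent=2))

Note the asymmetry: the absence of a manifest is not evidence a video is real, since ordinary re-encoding routinely discards this metadata. That is one reason researchers argue labeling cannot be left to metadata alone.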

The emergence of realistic videos has been a boost for disinformation, hoaxes, and foreign influence operations. Sora videos have already appeared in recent Russian disinformation campaigns on TikTok and X.

One of them, with its watermarks crudely obscured, sought to exploit a growing corruption scandal among Ukraine's political leadership. Others depicted frontline soldiers crying.

Two former members of a now-defunct State Department office that combated foreign influence operations, James P. Rubin and Darjan Vujica, argued in a new article in Foreign Affairs that advances in AI are intensifying efforts to undermine democratic countries and divide societies.

They cited AI videos in India that attacked Muslims to inflame religious tensions. One recent video on TikTok appeared to show a man preparing biryani in the street with water from a sewer. Although the video bore Sora's watermark and its creator said it was AI-generated, it was widely shared on X and Facebook by accounts that commented on it as if it were real.

“They are creating things, and will continue to create things, that make the situation much worse,” Vujica said in an interview, referring to the new generation of AI-generated videos. “The barrier to using deepfakes as part of disinformation has crumbled, and once disinformation spreads, it’s difficult to correct the record.”

mundophone
