Monday, December 8, 2025

 

TECH


Netflix mulled buying EA before Warner Bros. acquisition, as it grows AAA games library

Before the proposed Netflix Warner Bros. acquisition, the streamer pondered bidding for EA. Bloomberg details how its leadership had second thoughts after weighing stock valuations. EA’s AAA games include sports franchises, but Warner Bros. Games may be a better complement to Netflix content.

Netflix’s potential $82 billion deal to acquire Warner Bros. could boost the streaming giant’s AAA gaming portfolio. WB Games has produced titles for lucrative movie and TV franchises like Harry Potter and Batman. However, according to a paywalled Bloomberg report, Netflix previously expressed interest in EA as well.

As 2025 winds down, the biggest potential acquisitions of the year appear to be the $55 billion deal for EA by Saudi Arabia's Public Investment Fund and other investors, and the more recent proposed $82 billion agreement for Netflix to buy Warner Bros. Studios. Now, a new report suggests that Netflix once considered buying EA as well as two movie studios before the Warner Bros. negotiations came together.

According to Bloomberg, Netflix executives previously debated whether the streamer should place bids for EA, Fox, and Disney. However, the management team reportedly couldn't agree on a deal and feared that its interest in those companies would hurt its standing with investors.

The report doesn't say when Netflix considered these acquisitions. The Disney idea is the most surprising, since Disney is usually the company doing the buying, having absorbed studios including Pixar, Marvel, and Lucasfilm. Disney purchased Fox's entertainment assets in 2019 following an extended approval process from government agencies around the world.

Neither the Warner Bros. deal nor the EA deal is guaranteed to go forward, since both still have to win regulatory approval and are facing heavy scrutiny. Both, however, have major implications for the video game industry. If the EA deal is completed, Saudi Arabia's PIF will reportedly gain majority control of the company. Senators Richard Blumenthal and Elizabeth Warren have raised concerns about "foreign influence" on EA. In a regulatory filing last month, EA claimed it would retain creative control of the company.

As for Netflix and Warner Bros., the deal would leave the former in charge of the Mortal Kombat, Batman, and Harry Potter video game franchises. The streamer showed greater gaming ambitions until 2024, when it shut down a AAA multiplayer shooter in development at a team that included Halo and Destiny veteran Joe Staten. In October, Boss Fight, the studio behind Netflix's Squid Game: Unleashed mobile game, was closed by the streamer.

Bloomberg’s article, by Lucas Shaw, reveals that Netflix considered bidding for “every major asset put up for sale, including Electronic Arts Inc. and Fox.” Although it seems less likely, the company even contemplated a Disney takeover. However, executives balked at paying too much for stocks that had previously been available at lower prices. The report doesn’t specify when Electronic Arts first appeared on Netflix’s radar.

Would gamers have benefited from a Netflix EA deal? In September, a group that includes Saudi Arabian investors proposed an Electronic Arts buyout. The development raised concerns about the publisher’s future. Analysts have predicted cost-cutting that could involve job losses and more reliance on generative AI. With the latest news, gamers are now debating whether Netflix would have been a preferable suitor.  

So far, Netflix’s games catalog has been viewed as lackluster by critics. Most titles are basic mobile-quality experiences that don’t cater to dedicated gamers. The recent addition of Red Dead Redemption attracted attention, but may not lure players away from the PS5, Xbox, or Switch 2.

The Warner Bros. Games lineup would align with the streamer’s TV and movie prowess. It’s easy to see how upcoming projects like the Hogwarts Legacy sequel could tie into existing content. With EA, on the other hand, agreements to make Lord of the Rings or Star Wars titles have come and gone over the years. The publisher’s most profitable releases are sports franchises led by EA Sports FC and Madden NFL.

Regardless of how the subscription service evolves, not all observers believe it can effectively manage AAA games. In 2024, Netflix decided to shutter Team Blue, which had talent that worked on Halo, God of War, and other major titles. The move foreshadowed a focus on lower-budget smartphone projects. The potential Netflix Warner Bros. acquisition and the earlier curiosity about EA suggest the streamer still lacks a clear direction.

mundophone

 

DIGITAL LIFE


Trump's offensives only vulgarize what big tech has been doing forever

It's 2012: Facebook researchers conduct a psychological manipulation experiment on users through the platform's algorithms, seemingly forgetting that the platform itself was created to rank students in a misogynistic way.

It's 2015: although the Cambridge Analytica scandal would only come to light in 2018, the groundwork for the collection and misuse of data was already being laid. Facebook removed a personality-test app in 2015 after finding that user data it had gathered for academic purposes had been passed on to third parties without consent.

It's 2016: Facebook admitted that Cambridge Analytica – a political consulting firm that ran Trump's 2016 digital campaign – used an app to collect private information from 87 million users without their knowledge. The company then used this data to send users specially tailored political advertising and create detailed reports to help Trump win the election against Democratic candidate Hillary Clinton.

It's 2017: massacres in Myanmar lead to the flight of hundreds of thousands – Amnesty International points to the role of algorithmic hate promotion. It's 2018: the Cambridge Analytica scandal reinforces the power of big tech in influencing elections. It's 2021: Google and Amazon employees protest against Project Nimbus, which offers AI and digital infrastructure to Israeli apartheid. It's 2025: the Grok tool exalts Hitler on X, formerly Twitter. This case is reminiscent of Microsoft's famous Tay Bot, which did the same on Twitter in 2016.

In 2019, Cambridge Analytica pleaded guilty to failing to comply with an order from the UK's data protection regulator (ICO), which mandated that it disclose information it held about an American professor, David Carroll. Carroll had requested to know what data the company held about him and how it had obtained it.

It's 2023: big tech companies ally with the far right to block regulatory initiatives that sought to extend the application of the Brazilian penal code to digital platforms. In Congress, conservative lawmakers spread the idea that regulation would ban passages from the Bible in Brazil. The document that fabricated this false connection was conceived by a lobbyist for Meta and distributed by a lobbying entity representing Amazon, Meta, Google, Kwai, and TikTok.

It's January 2025: Mark Zuckerberg (Meta), Sam Altman (OpenAI), Elon Musk (X), Sundar Pichai (Google), Jeff Bezos (Amazon), and Peter Thiel (Palantir) attend Donald Trump's inauguration, forge new contracts, and abandon superficial commitments – never truly fulfilled – related to sustainability, diversity, and transparency. In Brazil, the Attorney General's Office holds a public hearing in response to the sudden change in the platforms' terms of use. The companies do not send representatives.

June 2025: Brazil launches, with little input from civil society, an updated version of the "Brazilian Artificial Intelligence Plan." The plan includes several positive mentions of companies like OpenAI and, despite talk of "reducing external dependence," its actions and investments are dwarfed by the enormous amount the country spends on big tech. In 2024 alone, R$ 10 billion went to tools, software, and cloud services that still compromise the data of Brazilians. Shortly before, the Minister of Finance met with representatives from Amazon and Nvidia to present a plan for tax incentives for the construction of data centers in Brazil. Brazilians are still unaware of the plan's conditions.

The vulgarity of Trump's offensives has very evident partners and beneficiaries. In a statement full of misinformation, the White House stated that the motivation for disproportionately taxing Brazil includes the country's alleged actions to "tyrannically and arbitrarily coerce US companies into censoring political speech, deplatforming users, handing over sensitive data, or changing their moderation policies."

Brazil suffers from Stockholm syndrome with Big Tech – the sad psychological phenomenon where victims of kidnapping or abuse develop positive feelings and dependence on their abusers. Evidence of the role of big tech and financial capital eager to use digital technologies and AI to further exploitation seems to be ignored by public policies, while organized civil society lacks space to participate – or at least access to information that should be transparent.

The poor decisions of the Brazilian state regarding big tech do not occur in a vacuum. These companies are far more sophisticated in their political influence than the most aggressive expression of Trumpism suggests. A significant portion of the Brazilian National Congress has been captured by the influence of these corporations, which have privileged access to decision-making spaces and are able to set the legislative agenda in favor of their interests.

The abuse of economic power by groups such as Meta, Alphabet, and Microsoft manifests itself not only in the pressure exerted on parliamentarians but also in the influence exerted over other actors in the field of digital technology governance. Big tech's investment in think tanks, research groups at private universities, data privacy conferences, and other spaces for debate on technology creates an environment that promotes ignorance and forgetfulness about the anti-democratic history and present of big tech and what they represent.

In this context, the progress of social media platform regulation in Brazil, often branded by its opponents as prior censorship, appears to be shaped by lobbying and pressure from the private sector, revealing how its power infiltrates institutions and erodes the capacity to defend an effective democratic process.

The negative impacts of Trump's offensive on so many different Brazilian sectors, the resulting rise in public support for the federal government, and the weakening of trust in the United States open a unique window of opportunity. Public officials, civil society, and academic researchers, as well as responsible national businesses, can make better decisions that recognize that the problem goes far beyond Trump – and that we need a technological vision of the future that meaningfully includes digital sovereignty.

https://diplomatique.org.br/

Sunday, December 7, 2025


TECH


Avoiding marine collisions with system powered by radar and machine learning

Collisions between marine vessels and stationary structures, like offshore oil platforms and depleted wellheads, are becoming increasingly common. These collisions come with a cost—including the financial burden of lost goods and potential loss of life.

Ocean engineering researchers at Texas A&M University are developing a smarter system to combat these collisions and their costs. By combining raw radar imaging data with advanced machine learning, researchers have created SMART-SEA, a system that gives seafarers real-time guidance on how and when to maneuver their vessel.

A paper describing this work is published in the journal Process Safety and Environmental Protection.

To design a practical system for seafarers, researchers conducted a focus group with Texas A&M Galveston faculty members, many of whom are former seafarers. Researchers also collaborated with industry experts, the U.S. Navy and the U.S. Coast Guard. Their experience assisted researchers in defining practical decision-making skills—like when to yield and how far to turn—and implementing them into the SMART-SEA system.

"Many of these collisions are caused by human error," said Dr. Mirjam Fürth, an assistant professor of ocean engineering. "By using data to provide seafarers with real-time instructions, we hope to reduce marine collisions."

At its core, the SMART-SEA system aims to provide seafarers with the ideal maneuvers to ensure vessel safety, without controlling movements autonomously. SMART-SEA provides the information visually on a dashboard, but the decision and steering of the vessel is controlled by the seafarer.

Key data points used by SMART-SEA to provide maneuvering suggestions are raw radar images and vessel maneuverability—determined through a tiered model based on seafarer experience, state-of-the-art computational fluid dynamics models, and machine learning trained on past vessel motions.

Raw radar images are processed using a machine learning tool that identifies and classifies stationary objects near the vessel. Once identified, the vessel's maneuverability and seafarer's experience level are considered to recommend the safest action for the vessel.
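
As a rough illustration of this pipeline, the sketch below shows how such an advisory loop could be structured in Python. It is not the published SMART-SEA code: the class names, the canned detector output, the safety margins, and the simple bearing-and-range check are hypothetical stand-ins for the radar-classification, maneuverability, and recommendation steps the researchers describe.

"""Minimal, hypothetical sketch of a SMART-SEA-style advisory loop.
All names, thresholds, and the canned detector output are illustrative only."""
from dataclasses import dataclass


@dataclass
class Detection:
    label: str           # e.g. "platform", "wellhead" (from the radar classifier)
    bearing_deg: float   # relative bearing from the bow; positive = starboard
    range_m: float       # distance to the object in meters


@dataclass
class VesselProfile:
    turning_radius_m: float   # from experience tier, CFD model, or learned motion history
    speed_kn: float
    crew_experience: str      # "novice" gets a wider safety margin


def classify_radar_frame(raw_frame) -> list[Detection]:
    """Stand-in for the machine-learning step that identifies and classifies
    stationary objects in a raw radar image; a real system would run a trained model."""
    return [Detection("wellhead", bearing_deg=5.0, range_m=900.0)]


def recommend_maneuver(detections: list[Detection], vessel: VesselProfile) -> str:
    """Combine detections with vessel maneuverability to produce advisory text only;
    the seafarer keeps control of the helm and the final decision."""
    margin = 4.0 if vessel.crew_experience == "novice" else 2.5   # in turning radii
    safe_range = margin * vessel.turning_radius_m
    for det in sorted(detections, key=lambda d: d.range_m):
        if abs(det.bearing_deg) < 15.0 and det.range_m < safe_range:
            side = "port" if det.bearing_deg >= 0 else "starboard"
            return (f"ALTER COURSE to {side}: {det.label} {det.range_m:.0f} m ahead, "
                    f"inside the {safe_range:.0f} m safety envelope")
    return "MAINTAIN COURSE: no stationary hazard inside the safety envelope"


if __name__ == "__main__":
    vessel = VesselProfile(turning_radius_m=250.0, speed_kn=10.0, crew_experience="novice")
    print(recommend_maneuver(classify_radar_frame(raw_frame=None), vessel))

In this toy run, a wellhead detected 900 m ahead of a slow-turning vessel with a novice crew falls inside the widened safety envelope, so the dashboard text advises an early course change while leaving the helm entirely to the seafarer.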

Image: SMART-SEA gives seafarers real-time guidance on how and when to maneuver their vessel. Credit: Rachel Barton / Texas A&M Engineering

Fürth and her team, including former seafarer and Texas A&M Galveston Professor of Practice Ryan Vechan, tested SMART-SEA aboard the Texas A&M research vessel Trident, with preliminary data supporting the prototype as a way to reduce marine collisions.

SMART-SEA can detect stationary objects in all weather conditions, and seafarers can choose how to receive the data—either visually, audibly or a combination of the two.

"I do think SMART-SEA could reduce marine collisions and possibly pave the way for more autonomous vessels," said Fürth.

Researchers hope to continue testing SMART-SEA on other vessels and to improve the system. Fürth believes that the system's low costs could allow it to be adapted for recreational vessels, reducing boating accidents.

"I hope we get to continue this research in the future. I think we just scratched the surface," said Fürth.

Provided by Texas A&M University

 

TECH


The strategy that could change the future of networks in Europe — and push back giants like Huawei and ZTE

The European Union is redesigning its digital infrastructure to reduce geopolitical risks and strengthen its technological autonomy. The decision involves limiting certain suppliers, strengthening strategic alliances, and preparing the ground for 6G. The move promises to transform the global balance of telecommunications — and it has already begun.

The new generation of mobile networks will be decisive for the competitiveness, security, and technological sovereignty of countries. The European Union, aware of its vulnerability, has adopted a strategy to protect its critical infrastructures and reduce external dependencies considered risky. This change includes new rules, international alliances, and open technologies that aim to prepare the continent for the 6G era without losing participation in the global innovation ecosystem.

The EU has reinforced its cautious stance towards suppliers classified as "high risk," especially Chinese manufacturers like Huawei and ZTE. The concern is not only technical: Chinese laws that require companies to cooperate with the state generate fears about possible undue access to strategic data.

Although it hasn't explicitly declared a veto, the bloc has drastically expanded restrictions. The 5G security toolbox already limits the use of these suppliers in the core of networks, and new rules foreseen in the future Cyber Resilience Act should expand control to areas such as cloud, AI, servers, and defense-related equipment between 2026 and 2027.

“Open Strategic Autonomy”: independence without isolation...The European philosophy does not seek total self-sufficiency, but rather to prevent any country or company from having the power to interrupt the supply of critical technology. This autonomy rests on three pillars:

-Own capabilities: encouraging research and startups, and strengthening Nokia and Ericsson.

-Protection: exclusion of risky suppliers and strengthening cybersecurity.

-Alliances: cooperation with reliable partners, such as the US, Japan, and especially South Korea.

At the same time, the EU wants to abandon closed telecommunications models, betting on technologies such as Open RAN and vRAN, which allow combining equipment from multiple manufacturers without dependence on a single provider.

South Korea: the ideal partner for the 6G race...South Korea already occupies a key position in the development of 6G standards and has technological giants capable of competing with Chinese suppliers. The partnership with Europe gained strength with the Samsung–Vodafone agreement to expand Open RAN networks in Germany, anticipating thousands of sites in the coming years.

These solutions reduce technological lock-in and increase efficiency, as virtualization and AI-based tools optimize energy consumption and performance.

A more distributed network — and increasingly political...The choice of who provides antennas, cloud and AI systems has become a political act. To avoid strategic vulnerabilities, the EU is diversifying its supply chain and investing in programs such as IRIS² and GOVSATCOM, which guarantee its own satellite communications.

The path to 6G will be shaped by technology, diplomacy and regulation. Europe does not want to walk alone — but it wants to carefully choose its travel companions.

mundophone

Saturday, December 6, 2025

 

DIGITAL LIFE


Mass layoffs in global companies aided by AI

Artificial intelligence (AI) is a technological milestone for humanity, driving significant advances across sectors, but according to analysts, the spread of the tool may also be behind mass layoffs at technology companies. Corporations such as Google, Microsoft, and Amazon have announced reductions in their workforce in the last two years, citing the need to reallocate resources, including jobs, to AI-related initiatives.

Amazon confirmed in October that it plans to reduce its global workforce by "approximately 14,000 positions." The decision fueled a long-standing concern: that artificial intelligence (AI) is beginning to replace workers. Hewlett-Packard (HP) announced in late November that it intends to lay off between 4,000 and 6,000 employees—about 10% of its current workforce—by the end of 2028, in an AI adoption plan aimed at increasing productivity.

Other companies in the sector, such as Chegg, Salesforce, and United Parcel Service (UPS), have announced that they are cutting or will cut significant numbers of employees, showing a pattern in the market. The logistics company UPS, for example, has laid off 48,000 people since last year. Chegg, in the education sector, will reduce its workforce by 45%.

Economist and professor at the University of Brasília (UnB), César Bergo, believes that, in the next five years, some sectors will be "drastically affected," especially those that depend on intellectual production. "There will be an impact in the field of consulting, in design, especially industrial design, also in architecture and engineering. Basically, jobs that depend on intellectual production will suffer a direct impact, because AI will facilitate and speed up this production," he explained.

In the academic's assessment, AI is a revolution that is here to stay. "There's no point in crying about it; it's really necessary to seek ways to improve and acquire knowledge related to this area, because there will be other activities that can be performed without a significant influence from artificial intelligence," he advised.

The CEO of Inteligência Comercial, Luciano Bravo, also believes that, in the next five years, the sectors most affected will be those based on routine, standardized, and highly digitalizable tasks, such as customer service, telemarketing, and technical support, significantly changing the job market.

For Martha Gimbel, executive director of the Budget Lab, an economics research center at Yale University in the US, extrapolating from executives' statements during layoffs is "possibly the worst way" to determine the effects of AI on jobs, as the dynamics of each company tend to influence these moves.

In Bravo's assessment, the replacement of workers by AI is, to a large extent, alarmism. For him, AI tends to redefine and complement human work rather than eliminate entire jobs. "Historically, disruptive technologies create new jobs, increase productivity, and displace functions rather than completely destroying them, and this is likely to happen again, requiring adaptation, training, and reorganization of tasks," he explained. 

According to him, the State should guarantee a just transition, creating robust retraining programs and incentives for technological education. The Ministry of Labor and Employment (MTE) was contacted but did not comment on the matter.

by Caetano Yamamoto, Brazil

 

DIGITAL LIFE


Big tech, AI, and digital colonialism: from the fable of progress to the nightmare of global inequality

If the Third Industrial Revolution, in the second half of the 20th century, represented the advent of the Internet and computers, the Fourth would be characterized by the virtually borderless projection of the Internet and the establishment of Artificial Intelligence. Big Tech is the clearest expression of the so-called Industry 4.0 – a revolutionary stage that, being so recent, we are still understanding and learning to deal with. Once again, the fable that has been told to us is very much related to the one about the technological revolution and globalization presented by Milton Santos. Industry 4.0 is presented to us as the pinnacle of human progress and technological neutrality, as well as a tool for emancipation and freedom. It thus erases the representation of technology as the result of social relations with political and economic purposes to maintain a capitalist structure, while replacing living labor with objectified labor, distancing the worker, more and more, from the appropriation of the fruits of their labor.

“Outsourcing, for example, prevalent in 18th and 19th century England, whereby the working class labored at home, outside the factory space, without any rights and under conditions of unlimited exploitation, has now become the pompous crowdsourcing, also devoid of protective legislation, adulterating the arduous global history of work.”

Big Tech companies have a close link to political power in the United States and serve as a tool for maintaining that country's hegemony. Far from the technical neutrality that the fable of the Fourth Industrial Revolution portrays, they play an important role in preserving the class system and in the (neo)colonial dominance of the United States in the periphery. As expected, they prospered from public resources while privatizing the profits. The US government was the first major investor in Silicon Valley during the Cold War, in an attempt to contain Soviet technological advancement. The Department of Defense was responsible for research and the production of the Internet, computers, and GPS. The Trump administration merely made explicit the public-private relationship with Big Tech. In his 2024 election campaign, he received substantial resources from these companies, which were promptly reciprocated in grants for the technological modernization of government institutions, such as the Pentagon itself.

It was within this intricate relationship between the public and private sectors that OpenAI donated US$1 million to Trump's inauguration fund. One of the world's most valuable companies, OpenAI, when created in 2015, spread the tech fable that Artificial Intelligence and technological advances would be used for the benefit of all humanity. In a document presented to the Trump administration on March 13, 2025, however, the fable unravels: the company advocates using this same technology to maintain US hegemony and leadership.

It may seem strange that a technology giant would defend the use of this same technology for political, economic, and even military purposes. In fact, we are witnessing, simultaneously, the modernization of US defense technology and the militarization of Big Tech companies in service of the former. Meta, Google, and OpenAI have removed clauses from their corporate policies that prohibited the use of Artificial Intelligence in weaponry. In April 2025, an executive order from Trump established a technological modernization plan for the Defense sector (for example, drone technology) with an investment of US$1 billion—largely directed to Silicon Valley. 

In August 2025, in Virginia, executives from Meta, OpenAI, Thinking Machines Lab, Palantir, and others were appointed lieutenant colonels of the Army's newly created Technical Innovation Unit (Detachment 201). As lieutenant colonels, they swore allegiance to the USA in a formal ceremony. Then, in September 2025, the CEOs of Meta, Apple, Google, Microsoft, and OpenAI attended a dinner with Trump at which US$1 trillion in investments in the US by these companies was agreed upon.

Friday, December 5, 2025

 

DIGITAL LIFE


China is increasing its use of artificial intelligence to strengthen social control mechanisms, expanding the reach of censorship and surveillance over the population

This conclusion is part of a new report from the Australian Strategic Policy Institute (ASPI), which details how the Beijing government has integrated cutting-edge technologies into the country's digital monitoring system and judicial apparatus.

According to the study, artificial intelligence is being used to make the tracking and suppression of politically sensitive content more efficient. This practice, already a hallmark of the Chinese censorship apparatus, gains speed and precision with systems capable of scanning large volumes of data, identifying keywords, and reducing the dissemination of messages critical of the government or leader Xi Jinping.

Nathan Attrill, senior analyst at ASPI and co-author of the research, told the Washington Post that the technology does not inaugurate a new model of repression, but intensifies already established methods. According to him, AI allows the Chinese Communist Party to monitor "more people, more closely, and with less effort," deepening control patterns previously executed mostly by human teams.

The technological dispute with the United States also influences this scenario. While the US and China compete for global leadership in artificial intelligence, Beijing is expanding the domestic use of the technology, relying on the collaboration of large national companies such as Baidu, Tencent, and ByteDance. These companies receive access to immense sets of government data, which accelerates the development of more advanced models.

The report highlights that the companies act as "assistant sheriffs," responsible for moderating content that goes beyond the scope normally adopted by platforms in the West. While social networks in other countries only remove illegal material, such as pornography, Chinese firms also need to eliminate content that could irritate the central government. Tools such as Tencent's content security audit system, for example, assign risk scores to users and track repeat offenders across various platforms.

This surveillance apparatus has also become a business. Companies like Baidu market automated moderation systems to other companies, expanding the reach of censorship and, according to researchers, placing "market principles in service of authoritarianism." Despite increasing automation, the adopted model is hybrid: human teams remain essential to interpret political nuances, identify codes used to circumvent supervision, and compensate for technical flaws.

Surveillance is even more intense over ethnic minorities, such as Uyghurs and Tibetans, targets of expanded monitoring in recent years. Because language barriers hindered tracking, the government is investing in the development of language models specific to regional languages. A laboratory created in Beijing in 2023 works with languages such as Mongolian, Tibetan, and Uyghur, with the aim of analyzing public opinion and promoting what the government calls "ethnic unity."

The report also details how AI is being incorporated into the criminal justice system. Technology is already appearing in the identification of suspects at protests through facial recognition, in the screening of judicial documents by "smart courts," and even in prisons capable of predicting the emotional state of detainees. Researchers who had access to one of these systems in Shanghai warned that the tools could compromise judicial impartiality by introducing multiple "black boxes" that are impossible to audit.

Experts point out that the use of AI in the Chinese justice system has gone through distinct phases: it began with enthusiasm and exaggerated expectations, followed by a period of caution, and is now experiencing a stage of reflection on limitations and risks. The accelerated adoption of the technology, encouraged by guidelines from the central government, often leads local authorities and companies to exaggerate its capabilities to obtain contracts, making it difficult to measure the real impact of these systems.

The report concludes that, despite advances and efficiency in some processes, the expansion of AI in China raises profound concerns about privacy, transparency, and discrimination. For researchers, the lack of clarity about how the models work and the risk of inherent bias make the ecosystem particularly dangerous, even more so because Chinese companies have global ambitions and export these systems to other countries.

Key developments include:

Accelerated Monitoring: AI allows the Chinese government to scan and analyze vast volumes of digital content in real time, identifying and suppressing politically sensitive material much faster than manual methods.

Predictive Control: Authorities are using algorithms to analyze patterns of online behavior and sentiment, aiming to anticipate and neutralize dissent or protests before they occur, which experts describe as "preventive repression."

Minority Surveillance: Reports indicate that the government is developing specific AI tools to deepen the monitoring of ethnic minorities, such as Uyghurs and Tibetans, including through language models in their native languages, both inside and outside China.

Integration into the Judicial System: AI is being implemented in courts and prisons to assist in processes, from drafting documents to recommending verdicts and sentences, raising serious questions about impartiality and accountability.

Multimodal Censorship: In addition to text, new Chinese AI systems are capable of censoring politically sensitive images and videos, adding a new layer to the country's "Great Firewall."

Collaboration with Technology Companies: Large Chinese technology companies, such as Tencent, Baidu, and ByteDance, are developing and selling AI-based censorship platforms to other organizations, creating a domestic market for these control tools.

These actions have led to discussions and restrictions on the use of Chinese technologies in other countries, such as the United States and the European Union, due to concerns about privacy and alignment with the values of the Chinese Communist Party. China, in turn, advocates for the creation of a global organization for AI governance, but emphasizes the need for the technology to respect "fundamental socialist values."

mundophone
