Tuesday, February 3, 2026

 

DIGITAL LIFE


Understanding how feminist AI in Latin America works

Online spaces perpetuate stereotypes about who creates them and the data that is inserted into them — currently, predominantly men. This global phenomenon has been driving the construction of technological alternatives, such as the Feminist AI Network of Latin America and the Caribbean.

The technological literature is full of examples of gender bias. Image recognition systems have difficulty accurately identifying women, especially Black women, which has already resulted in misidentifications with serious consequences for law enforcement.

Voice assistants have long used exclusively female voices, reinforcing the stereotype that women are more suited to service roles.

In image generation, AIs often associate the term "CEO" with a man, while a search for "assistant" returns images of women.

— Artificial intelligence feeds on data that is not neutral: it reflects societies marked by historical inequalities and power relations. If a company wants to achieve fair results, it needs to analyze datasets, verify their representativeness, and actively intervene when this is not the case. Equity doesn't just appear on its own: it needs to be designed — Ivana Bartoletti, an international expert in AI governance and author of a Council of Europe study on artificial intelligence and gender, told the ANSA news agency.
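Bartoletti's prescription — analyze datasets, verify representativeness, intervene — can be made concrete. Below is a minimal sketch of such an audit in Python; the data, group names, and target shares are entirely made up for illustration:

```python
from collections import Counter

def representation_gap(labels, expected):
    """Compare observed group shares in a dataset against target shares.

    Returns, per group, how far its observed share deviates from the
    target (positive = over-represented, negative = under-represented).
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: round(counts.get(group, 0) / total - share, 6)
            for group, share in expected.items()}

# Hypothetical training set for a hiring model: 80% of examples are men.
labels = ["man"] * 80 + ["woman"] * 20
# Target: parity with an assumed 50/50 applicant pool.
gaps = representation_gap(labels, {"man": 0.5, "woman": 0.5})
print(gaps)  # {'man': 0.3, 'woman': -0.3}
```

A real audit would of course cover many attributes and intersections (gender and race together, for example), but the principle is the same: measure the skew before the model learns it.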

For Bartoletti, the recent case of Grok — Elon Musk's AI that allowed the generation of fake images of naked women and minors, a function that was later discontinued — “shows what happens when the safety and rights of women are not considered in the design of systems.”

— If there are tools to undress women, they will be used. Deepfake nudes are a form of humiliation and control. The implicit message is dangerous: you are online, therefore you deserve this. This is how many women are silenced and abandon the digital space — she explained.

It is in this context that technological alternatives emerge to rethink artificial intelligence and transform it into a space of struggle and shared power.

Feminist AIs...In Latin America and the Caribbean, for example, the Feminist AI Network emerged, supporting dozens of projects focused on transparency and public policies. Tools like AymurAI, Arvage AI, and SofIA apply a gender perspective to legal analysis and expose the biases and discrimination inherent in algorithms.

Afrofeminism has also been reclaiming artificial intelligence as a space for self-determination, with assistants like AfroféminasGPT, trained based on the knowledge and voices of Black people.

— They demonstrate that we can organize ourselves to use AI for the benefit of all, share data collectively, and develop solutions centered on real needs. But the key remains power. The feminist issue in AI is a question of power: women need to have more power. Not on the margins, but at the top of companies and in the spaces where technological policies are decided. We need diversity in decision-making environments, not just among programmers. Artificial intelligence is not just technology, it's a choice about how we want to transform society — concluded Ivana Bartoletti.

by La Nacion — Buenos Aires

Monday, February 2, 2026

 

DIGITAL LIFE


Notepad++ confirms it was hijacked by Chinese state-sponsored hackers

Notepad++ reported that its built-in auto-update feature had been hijacked by Chinese state-sponsored hackers from June to September of 2025, and that credentials gathered by the bad actors enabled further exploits until December 2nd, 2025. To thwart similar issues moving forward, Notepad++ has moved to a hosting provider "with significantly stronger security practices", which has been in place since Notepad++ version 8.8.9. If you followed an auto-update prompt, or started one through Notepad++, within the vulnerable timeframe, though, you'll very much want to scan your system for malware.

For existing Notepad++ users, the developers advise manually installing version 8.9.1, which includes a secured WinGUp updater, instead of auto-updating through your current version. As a Notepad++ user myself, I was able to install the new version over my old installation without issue, and my system scanned clean before and after doing so. Notepad++ notes that the compromise occurred at the hosting provider level rather than through vulnerabilities in Notepad++ code itself, but the application still received the aforementioned security upgrades after being moved to a more secure provider, in hopes of preventing a recurrence.

This isn't the first time Notepad++ and its users have been targeted by cybercriminals; last time it was through "notepad.plus", a "fan" site that actually hosted malicious advertising and attempted to infect those looking for the legitimate Notepad++. This time the attack was more direct, though the full scale of the harm done remains unknown. Similar to recent DarkSpectre stories, it raises concerns about how existing auto-update infrastructure can be exploited, even against legitimate applications. At least Notepad++ was informed of the breach by its old hosting provider and was able to move to a more secure host.

Notepad++, a popular text editor among programmers and technology professionals, was the target of a sophisticated cyberattack that lasted six months. Between June and December 2025, hackers sponsored by the Chinese government managed to hijack the software's update mechanism to distribute malware to specific targets.

The attack involved infrastructure-level compromise that allowed malicious actors to intercept and redirect update traffic destined for the notepad-plus-plus[.]org website.

Don Ho, creator and maintainer of Notepad++, revealed full details of the incident on Monday. The information was shared after an investigation conducted in collaboration with external security experts and the shared hosting provider that was used at the time.

Highly targeted attack...The compromise occurred at the hosting provider level, not through vulnerabilities in the Notepad++ code itself. The attackers gained access to the shared hosting server and, from there, established the ability to selectively redirect update traffic from specific target users to servers controlled by them.

Instead of compromising all Notepad++ users at once, which would have been quickly detected, the hackers chose specific targets.

Traffic originating only from certain users was routed to malicious servers that delivered components disguised as legitimate updates, while other users continued to receive genuine updates normally.

Multiple independent security researchers have assessed that the threat actor is likely a Chinese state-sponsored group, identified as Violet Typhoon, also known as APT31. This group primarily targeted telecommunications and financial services organizations in East Asia.

How the attack was discovered...Security researcher Kevin Beaumont was the first to report, in early December 2025, that some organizations using Notepad++ were being targeted with malicious software updates.

According to Beaumont, hackers linked to China had exploited Notepad++ to gain initial access to the systems of telecommunications and financial services companies in East Asia.

The discovery prompted Don Ho to quickly release Notepad++ version 8.8.9 to address an issue that resulted in traffic from WinGUp, the Notepad++ updater, occasionally being redirected to malicious domains.

Specifically, the problem stemmed from how the updater verified the integrity and authenticity of the downloaded update file, allowing an attacker capable of intercepting network traffic between the updater client and the update server to trick the tool into downloading a different binary.

According to the detailed statement provided by the hosting provider, the shared server where Notepad++ was hosted was compromised until September 2, 2025.

On this date, the server underwent scheduled maintenance where the kernel and firmware were updated. After this update, the suspicious patterns in the logs disappeared, indicating that the attackers lost direct access to the server.

Even though they lost direct access to the server after September 2, the attackers still retained access credentials to the provider's internal services that they had captured during the initial compromise period.

With these credentials, even without directly controlling the server, they were able to continue redirecting some of the traffic destined for the Notepad++ update address to their own malicious servers until December 2, 2025.

The hosting provider confirmed that it found no evidence that other clients on the shared server were targeted.

The attackers specifically searched for the notepad-plus-plus.org domain in order to intercept traffic, demonstrating prior knowledge of existing vulnerabilities in the Notepad++ update verification system.

Remedial measures implemented...To definitively resolve the problem, Don Ho took several concrete steps. First, he migrated the entire Notepad++ website to a new hosting provider with significantly more rigorous security practices.

Next, he enhanced WinGUp in version 8.8.9 to verify both the digital certificate and the signature of the downloaded installer. Digital certificates and signatures are cryptographic mechanisms that ensure that a file actually came from the person who claims to have created it and that it has not been altered.
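The idea behind such checks can be illustrated with a simplified sketch. This is not WinGUp's actual code; it shows only integrity pinning against a known SHA-256 digest, a deliberately simplified stand-in for the full certificate-and-signature verification described above:

```python
import hashlib
import hmac

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest of a byte payload, as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    # Reject the download unless its digest matches the pinned value;
    # compare_digest avoids timing side channels in the comparison.
    return hmac.compare_digest(sha256_hex(payload), expected_sha256.lower())

# Simulated scenario: the client only accepts an installer whose digest
# matches one obtained out-of-band over a trusted channel. An attacker
# who swaps the binary in transit cannot also forge the pinned digest.
genuine = b"genuine installer bytes"
tampered = b"attacker-substituted installer bytes"
pinned = sha256_hex(genuine)

print(verify_update(genuine, pinned))   # True
print(verify_update(tampered, pinned))  # False
```

A pinned hash alone cannot authenticate who produced the file, which is why the real fix layers certificate and signature checks on top: the signature proves origin, the digest proves integrity.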

Furthermore, the XML returned by the update server is now signed using XMLDSig, a digital signature standard for XML documents. Certificate and signature verification will be mandatory starting with the next version 8.9.2, scheduled to be released approximately one month after the announcement.

Supply chain attacks...This incident falls under a category of attacks known as software supply chain attacks. This type of attack does not directly target the program that users use, but rather the infrastructure that distributes updates to that program.

State-sponsored groups, especially from China, North Korea, and Russia, have shown increasing interest in compromising software supply chains as a way to gain access to target organizations.

Instead of attempting to directly infiltrate organizations' networks, attackers find vulnerable points in the supply chain and use this as an initial entry vector.

Don Ho published full details of the investigation and took public responsibility for what happened, even though technically the failure was at the hosting provider.

"I deeply apologize to all users affected by this hijacking," Ho wrote in the official statement. He concluded by saying he believed the situation had been completely resolved with all the changes and security reinforcements implemented, but maintained a humility appropriate for someone who had just dealt with a state-sponsored hacking attack.

Notepad++ users should ensure they are running at least version 8.8.9 of the software, which includes critical security fixes for the update mechanism.

mundophone


DIGITAL LIFE


Big tech's data center: boon or burden for society?

Big Tech data centres are booming in the hearts of Manitoba Premier Wab Kinew and American President Donald Trump. The dynamic duo of data centre supporters is matched by prominent detractors: Democratic Socialist Senator Bernie Sanders of Vermont and Conservative Republican Governor Ron DeSantis of Florida. Sanders and DeSantis, polar opposites on the political spectrum, agree on nearly nothing, but both oppose the unchecked expansion of Big Tech data centres that Trump champions, with construction accelerating under minimal scrutiny.

Acres of sprawling stand-alone windowless buildings packed with silicon chips stacked on server racks chilled by internal building cooling systems. These are data centres, the foundation of big tech’s scaling up of the artificial intelligence industry, consuming electricity, water and land to rival small cities.

In the early 2000s, data centres were 'micro': the size of a shipping container or smaller, often co-located with other uses in a building. Data centres are now often stand-alone buildings sized as either 'enterprise' (5,000 to 50,000 square feet) or 'hyperscale' (greater than 1,000,000 square feet). For comparison, the Winnipeg Richardson International Airport terminal is approximately 550,000 sq. ft. Several data centre buildings sited together comprise a data centre campus, driving cloud computing and AI research and computational resources.

Big Tech selects data centre sites on large, flat, stable, cleared lands in rural areas: places such as Port Washington, Wisconsin (pop. 12,500), Mount Pleasant, Wisconsin (pop. 28,184), Council Bluffs, Iowa (pop. 62,665) and Cedar Rapids, Iowa (pop. 137,710). Agricultural lands are often re-designated and re-zoned 'industrial' after local government public hearings consider the land-use planning, prior to permitting building construction.

The Port Washington, WI data centre broke ground in December 2025 for phase 1 construction of four data centre buildings on 672 acres of former farmland, within a total project land scope of 2,000 acres. (Waverley West in southwest Winnipeg is 2,500 acres.) This new Port Washington data centre, plus the data centres being built in Mount Pleasant, WI, will require enough energy to power 4.3 million homes in Wisconsin, a state which has only 2.8 million homes.

Data centres require a persistent supply of electric power to run servers, networking equipment and cooling systems. Complete islanding (no grid connection) of data centres is not possible. A hyperscale data centre requiring 960 megawatts is plugged directly into the Susquehanna nuclear plant in eastern Pennsylvania. Manitoba's newest hydroelectric generating station, the Keeyask Generating Station, completed in 2022 at a cost of $8.7 billion CAD, generates only 695 megawatts.

Voracious appetites for electricity and land are matched by data centres' insatiable thirst for freshwater. Freshwater cooling systems prevent equipment damage by managing the high heat generated by equipment and by maintaining constant humidity to prevent static electricity discharges. An enterprise-sized data centre consumes around 110 million gallons of water per year, equivalent to the annual freshwater usage of approximately 1,000 households. A hyperscale data centre of 2.5–2.9 million square feet, such as the one in Council Bluffs, Iowa, uses 1.3 billion gallons of freshwater annually, equivalent to 50,000 households.
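The household equivalence for the enterprise figure can be sanity-checked with simple arithmetic. The conversion factor here, roughly 300 gallons per household per day, is an assumption for illustration; the article does not state the one it used:

```python
# Back-of-the-envelope check on the enterprise data centre water figure,
# assuming ~300 gallons per household per day (an illustrative assumption).
GALLONS_PER_HOUSEHOLD_YEAR = 300 * 365  # 109,500 gallons per year

enterprise_gallons_per_year = 110_000_000
households = enterprise_gallons_per_year / GALLONS_PER_HOUSEHOLD_YEAR
print(round(households))  # about 1,000 households, in line with the article
```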

Data centre proponents Kinew, Trump and Big Tech point to several benefits: digital sovereignty, increased municipal revenues, and construction job creation. A hyperscale data centre in Cedar Rapids, Iowa, currently under construction, will take 3-5 years to build, with 3,000 tradespersons on-site at any given time, while paying $8.5 million in building permit fees on a minimum capital investment by the proponent of $576 million. The local government secured a community betterment agreement with the company: annual payments of $400,000 per data centre building for 15 years, to a maximum of $6 million per data centre campus and a total of $36 million.

Skeptics of data centres, such as environmentalists, consumer groups, labour unions and local residents, point out several concerns: loss of productive agricultural lands; lack of transparency and accountability in operations and government agreements; few permanent post-construction jobs within data centres compared to construction, roughly one for every 30 construction jobs; and massive consumption of freshwater sourced from surface waters and ground aquifers, depleting local supplies.

The most significant concern is higher energy costs for households. A 2025 University of Michigan study found that utility companies ramp up electric grid infrastructure, increasing power generation and transmission, to meet the enormous new power demands of data centres. These infrastructure costs have been passed down to utility customers, who are forced to pay increasingly higher electricity rates. A 2024 independent study commissioned by the Commonwealth of Virginia showed that by 2040, Virginia residents will be paying up to $37 more per month for energy because of data centres.

With average people putting the cost of living at the top of their list of concerns, every increase in the household costs of food, shelter and energy matters. Manitoba Hydro electricity rates rose by 4.0 percent on January 1, 2026 due to severe drought and debt. The Crown corporation carries billions in debt now and is expected to borrow billions more over the next decade.

Sanders has called for a national moratorium on the construction of data centres in America. Over 230 organizations from across 50 states have called on the American Congress to impose a national moratorium on data centre siting and construction. More than 40 states, led by Republicans and Democrats alike, have passed nearly 150 laws aiming to regulate AI. DeSantis in Florida is championing a bill to protect local communities' right to block data centre construction.

Hospitable locations with suitable lands, power and water for data centres are diminishing quickly around the USA. Big Tech is looking north, and Premier Wab Kinew has opened his heart to welcome data centres to Manitoba. "You'll see servers and data centres in Manitoba in the future," Kinew told reporters last October after releasing a report from data experts supporting growing data centres in Manitoba. The Premier is channelling his inner Trump, failing to provide any details on how his government's support for Big Tech's data centre boom will keep average Manitobans from going bust.

Author: John Wintrup is a lifelong Winnipegger, urbanist, globetrotting city explorer, Harvard student, and professional planner with a M.Sc. Planning degree and holder of multiple planning accreditations in both Canada and the United States.

Sunday, February 1, 2026

 

TECH


What AI is doing to health information

For decades, searching for symptoms or treatments online meant diving into long texts, technical articles, and unfriendly institutional pages. This scenario, however, is beginning to change discreetly and profoundly. Artificial intelligence incorporated into search engines is reorganizing priorities and changing the path users take to medical information. The result is not only visual—it's also behavioral. What once required careful reading is now presented in a different way, more direct, more accessible, and, in many cases, more engaging.

The recent evolution of search engines has introduced automatic summaries generated by artificial intelligence, capable of condensing complex content into a few lines. This functionality, which seemed like just a tool for speed, has begun to influence something bigger: the format of the information presented.

Analyses from platforms specializing in SEO and digital behavior indicate that, in a large part of the queries related to health and well-being, these summaries have begun to highlight audiovisual content more frequently than traditional texts. Instead of simply providing links for reading, the system now suggests materials that explain concepts through visual demonstrations, animated diagrams, and accessible language.

This change is not accidental. The algorithmic logic has begun to consider not only the veracity of the information, but also the user's ability to understand it. In medical matters—such as symptoms, procedures, or prevention—visual clarity tends to reduce ambiguities and increase content retention. The practical effect is a silent reordering of the information hierarchy: the text ceases to be the protagonist and begins to share space with formats that were previously seen as complementary.

Another relevant point is the selection criterion. Unlike what one might imagine, prioritization does not simply fall on popular or viral content. Studies indicate that artificial intelligence tends to favor materials produced by professionals with verifiable credentials, recognized institutions, and specialized channels with academic backing.

This movement reveals an attempt to balance two factors that are often opposed in the digital environment: accessibility and scientific rigor. In this context, video acts as a translator between technical language and the general public, allowing complex concepts to be explained with visual examples and a didactic tone without necessarily losing precision.

For clinics, hospitals, and healthcare professionals, the message is clear: future relevance in searches will depend not only on well-written articles, but also on the ability to communicate knowledge in more dynamic formats. Artificial intelligence does not eliminate textual content, but redefines the means by which it gains visibility.

In the emerging scenario, learning and staying informed about health tends to become an increasingly multimodal experience. Reading remains important, but seeing and hearing take on similar weight. Online medical research is no longer just a solitary reading experience and is becoming more like a quick, direct, and visual lesson—a sign that the way we assimilate specialized knowledge is, once again, transforming.

Artificial intelligence (AI) is transforming health information into an active diagnostic and preventative tool, processing large volumes of data that would be impossible to analyze manually.

The main actions of AI with this information include:

Precision diagnosis: AI analyzes imaging exams (such as CT scans and MRIs) to detect subtle patterns and identify diseases such as cancer, Alzheimer's, and heart problems early.

Personalized treatments: By cross-referencing medical record data with clinical guidelines, AI suggests individualized therapies and predicts the likelihood of disease recurrence, as in the case of cancer.

Drug development: Tools like Google DeepMind's AlphaFold accelerate the discovery of new drugs by predicting the structure of proteins.

Hospital management and Efficiency: Algorithms optimize patient flow, reduce hospitalization times, and automate administrative tasks, such as filling out medical records.

Epidemiological surveillance: The cross-referencing of clinical and environmental data allows for the prediction of disease outbreaks and the planning of preventive public health actions.

Privacy and ethics...The use of this sensitive data requires compliance with laws such as the LGPD (Brazilian General Data Protection Law) and guidelines from the World Health Organization (WHO), focusing on transparency and protecting patient autonomy.

mundophone


DIGITAL LIFE


Mark Zuckerberg has already forgotten about the metaverse?

After changing the company's name and investing billions in digital worlds, Meta begins 2026 retreating from the metaverse. Layoffs, record losses, and a strategic shift toward AI-powered devices are reshaping the future of virtual reality. Experts explain why the original vision failed—and why this may, paradoxically, strengthen the sector.

The turn of the calendar to 2026 marked a turning point for the metaverse. What Zuckerberg presented as the next chapter of human interaction lost prominence within Meta itself. The company reduced teams, absorbed billions in losses, and repositioned priorities. Still, the end of the "metaverse era" does not mean the end of virtual reality—on the contrary, it may be the beginning of a more pragmatic phase.

In early January, Meta announced cuts of about 10% in its Reality Labs division, affecting data engineers, software engineers, and game developers. Shortly after, the fourth-quarter balance sheet confirmed the magnitude of the blow: the virtual reality area accumulated losses of US$19.1 billion in 2025, with US$6.2 billion in the last quarter alone.

In the conference call with investors, Zuckerberg made it clear that the company will continue investing in extended reality (XR), but with an increasing focus on wearable devices with AI — such as smart glasses partnerships with Ray-Ban — while virtual worlds lose centrality.

According to experts interviewed by Euronews Next, the change of course does not need to be read as a structural defeat for VR, but rather as the end of a narrative inflated by unrealistic expectations.

Billion-dollar investment and low adoption...Since its announcement, Horizon Worlds has been ridiculed by users around the world. Beyond the sparse interface and poor graphics, the main criticism was the size of the investment for such a result: according to the company's own annual reports, Meta invested $36 billion in the project. For comparison, that amount is greater than NASA's annual budget for 2024, which was almost $25 billion. All this to show a world where the avatars didn't even have legs.

Those who tried it and gave it a chance were also disappointed. Users reported headaches and dizziness after prolonged use of the virtual reality headsets, and a sparsely populated, lifeless world without much purpose: nothing presented there was revolutionary or necessary, and nothing offered a more practical way to do a job or even to socialize. In reality, it was more complicated to do things through Horizon Worlds than through conventional means. In 2023, a YouTuber conducted an experiment, spending a week living on the platform, and found the population almost nonexistent: fewer than 1,000 daily users.

Decentraland – a rival metaverse project, and one of the most expensive – cost $1.3 billion and, in October 2022, had 38 daily users.

Security...Although the company has since taken more effective measures to protect users' data and their safety on the platform, initially Meta Horizon Worlds was accessible to teenagers and children without parental controls. In 2024, Meta announced the official opening of its metaverse to children, and with it a series of measures to give parents control over the type of content children could access within the platform.

In 2022, however, none of this existed. Newer users of the platform also report that the parental control system's features are insufficient or can be bypassed entirely, depending on the information given to the company's software, such as age.

Why the metaverse didn't take off...When Meta doubled down on the metaverse, the context seemed favorable. The world was still emerging from the COVID-19 pandemic, remote work was growing, and socialization had migrated to screens and platforms like Zoom. For George Jijiashvili, senior analyst at the consulting firm Omdia, that was the perfect time to try to create the next great computing platform.

There was also a strategic incentive: to reduce dependence on the mobile ecosystems controlled by Apple and Google. The ambition was clear: to lead the "post-smartphone era." The problem is that the leap was too big.

The technology is still expensive, uncomfortable for long periods of use, and unconvincing to the general public. In addition, there was a lack of truly indispensable applications. Without a "daily reason" to put on a headset, the metaverse remained restricted to niches — gamers, enthusiasts, and companies in pilot projects.

Add to that unintuitive interfaces, graphics below expectations, and a learning curve that alienated ordinary users. The result was a promising product on paper, but far from mass adoption.

Meta's retreat opens space for a more down-to-earth approach. Instead of persistent universes and ubiquitous avatars, the market tends to advance through specific use cases: corporate training, industrial design, immersive education, healthcare, and entertainment.

Lighter devices, mixed reality experiences, and integration with artificial intelligence point to a less grandiose—and more useful—future. Smart glasses, for example, can gain traction by combining computer vision, AI assistants, and contextual information in the real world, without requiring complete user isolation.

For developers and startups, the departure of the "totalizing" metaverse can be liberating. With less pressure to build a single universe, the diversity of solutions, platforms, and business models grows.

Meta paid a high price for trying to accelerate the future. But its investment also pushed the sector forward, funding research, hardware, and talent. Now, the industry has the chance to learn from its mistakes: focus on concrete experiences, reduce barriers to entry, and deliver immediate value.

No one seems to lament the end of the metaverse as a universal promise. In its place, a more modest, fragmented, and practical virtual reality emerges—exactly the type of evolution that, in the long run, tends to transform experimental technologies into everyday tools.

by mundophone

Saturday, January 31, 2026

 

TECH


'Thermal diode' design promises to improve heat regulation, prolonging battery life

New technology from University of Houston researchers could improve the way devices manage heat, thanks to a technique that allows heat to flow in only one direction. The innovation is known as thermal rectification, and was developed by Bo Zhao, an award-winning and internationally recognized engineering professor at the Cullen College of Engineering, and his doctoral student Sina Jafari Ghalekohneh. The work is published in Physical Review Research.

This new technology gives engineers a new way to control radiative heat with the same precision that electronic diodes control electrical currents, which means longer-lasting batteries for cell phones, electric vehicles and even satellites. It also has the potential to change our approach to AI data centers.

"This will be a very useful technology for thermal management and for building a logical system for radiative heat flow," said Zhao, assistant professor of mechanical and aerospace engineering. "For example, you would be able to keep your cell phone's battery at a comfortable temperature without overheating it, especially if it's being used in a very hot environment."

Prior to this discovery, traditional materials allowed radiative heat to travel freely in multiple directions, creating challenges for electronics, vehicles and energy systems trying to stay cool under stress. Zhao's technology pushes heat to flow forward while completely blocking it from moving in the opposite direction.

The way Zhao's team accomplished this was by using semiconductor material placed under a magnetic field, which changes how energy moves at the microscopic level and allows heat flow to be directed with more control than previously possible.
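A diode-like figure of merit for such devices is the thermal rectification ratio, a standard definition from the thermal-rectification literature (not stated in the article), which compares forward and reverse heat flux under the same temperature difference:

```latex
% Rectification ratio: R = 0 for an ordinary reciprocal material;
% R -> \infty for an ideal thermal diode that fully blocks reverse flow.
R \;=\; \frac{q_{\mathrm{fwd}} - q_{\mathrm{rev}}}{q_{\mathrm{rev}}}
```

On this measure, the "completely blocked" reverse flow described above corresponds to the ideal limit where the denominator vanishes.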

Schematic of the system consisting of nonreciprocal surfaces. Credit: Physical Review Research (2025)

From rectifiers to heat circulators...Additionally, Zhao's team is developing a device known as a circulator, which pushes radiative heat to move in a continuous loop in only one direction. This could improve next-generation energy technologies that rely on radiative heat transfer.

"Basically, you have a hot side, a cold side and something in the middle," Zhao said. "If you look at a triangle, you want to have heat to transport counterclockwise from surface one to surface two, then surface two to surface three—you can't have it go from two to one. It essentially creates a heat loop."

The team's success isn't limited to radiative heat transfer. In a companion study published in Physical Review B, Zhao and his team demonstrated that similar principles can induce asymmetric thermal conductivity in materials and enable conduction heat rectification. This specific finding bridges the gap to everyday electronics, offering a potential solution for the conductive heat generated by high-performance microchips and batteries.

Towards real-world applications...These concepts have so far only been demonstrated theoretically, but Zhao aims to build experimental platforms to show the innovation in action. Once developed, the technology could have important implications for consumer technology ranging well beyond cell phones. For example, electric vehicles would be able to maintain a stable temperature to operate safely and efficiently.

Zhao expects the technology to be particularly valuable for space systems, where satellite electronics must stay cool despite constant exposure to sunlight. It will allow internal heat to escape while blocking solar heat from entering, thus improving reliability and reducing the risk of overheating.

Bo Zhao, assistant professor of mechanical and aerospace engineering, expects his heat regulating technology to be a game changer for devices ranging from cell phones to satellites. Credit: University of Houston

Potential to reshape AI in space

And although the technology was not explicitly developed with AI in mind, Zhao speculated that it could help regulate heat in AI hardware, which places high demands on thermal management.

That could create new opportunities for the development of AI data centers in outer space, where the vacuum provides no air for convection, making it difficult to shed heat. This, coupled with the technology's potential to better manage solar heating, could take humanity's AI prowess to new frontiers.

"This is a very innovative technology," Zhao said. "Nobody has done it, so we're very excited about it."

Provided by University of Houston


DIGITAL LIFE


Creative talent: Has AI knocked humans out?

Are generative artificial intelligence systems such as ChatGPT truly creative? A research team led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal, and including AI pioneer Yoshua Bengio, also a professor at Université de Montréal, has just published the largest comparative study ever conducted on the creativity of large language models versus humans.

Published in Scientific Reports, the findings reveal that generative AI has reached a major milestone: it can now surpass average human creativity. However, the most creative individuals still clearly outperform even the best AI systems.

AI reaches the threshold of average human creativity

The study tested the creativity of several large language models (including ChatGPT, Claude, Gemini, and others) and compared their performance with that of 100,000 human participants. The results mark a turning point: some AI models, such as GPT-4, now exceed the average creative performance observed in humans on tasks of divergent linguistic creativity.

"Our study shows that some AI systems based on large language models can now outperform average human creativity on well-defined tasks," explains Professor Jerbi.

"This result may be surprising—even unsettling—but our study also highlights an equally important observation: even the best AI systems still fall short of the levels reached by the most creative humans."

Analyses conducted by the study's two co-first authors—postdoctoral researcher Antoine Bellemare-Pépin (Université de Montréal) and Ph.D. candidate François Lespinasse (Université Concordia)—reveal a new and intriguing reality. While some generative AI systems now surpass average human creativity, the highest levels of creativity remain distinctly human.

In fact, the average performance of the most creative half of participants exceeds that of all AI models tested, and the top 10% of the most creative individuals open an even wider gap.

"We developed a rigorous framework that allows us to compare human and AI creativity using the same tools, based on data from more than 100,000 participants, in collaboration with Jay Olson from the University of Toronto," says Professor Jerbi, who is also an associate professor at Mila.

How do you measure human and AI creativity?

To compare human creativity with that of AI systems, the research team relied on several complementary approaches. The main one is the Divergent Association Task (DAT), a tool used in psychology to measure divergent creativity—the ability to generate many, varied, and original ideas from a single starting point.

Developed by study co-author Jay Olson, the DAT asks participants—human or AI—to produce ten words that are as semantically different from one another as possible. For example, a highly creative participant might suggest: "galaxy, fork, freedom, algae, harmonica, quantum, nostalgia, velvet, hurricane, photosynthesis."

Crucially, human performance on this task correlates with performance on other well-established creativity tests used in idea generation, writing, and creative problem solving.

In other words, although the task is language-based, it does not simply measure vocabulary skills: it engages general cognitive mechanisms of creative thinking, relevant far beyond the linguistic domain. Another major advantage is that the test is quick—taking only two to four minutes—and easily accessible online to the general public.
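A DAT-style score can be sketched as the average pairwise semantic distance among the submitted words. The snippet below is an illustration under stated assumptions: the published DAT uses pretrained word embeddings (such as GloVe) for real vocabulary, whereas here tiny made-up 2-D vectors stand in for word embeddings.

```python
# Sketch of DAT-style scoring (assumption: the real DAT uses pretrained
# word embeddings; these tiny 2-D vectors are purely illustrative).

from itertools import combinations
import math

def cosine_distance(u, v):
    """1 minus cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def dat_score(embeddings):
    """Mean pairwise cosine distance, scaled by 100 as in the published DAT."""
    pairs = list(combinations(embeddings, 2))
    return 100.0 * sum(cosine_distance(u, v) for u, v in pairs) / len(pairs)

# Hypothetical embeddings: semantically scattered "words" score higher
# than a cluster of near-synonyms.
diverse = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
similar = [(1.0, 0.0), (0.9, 0.1), (1.0, 0.05)]
print(dat_score(diverse) > dat_score(similar))  # True
```

The key design choice is that every pair of words is compared: one creative outlier is not enough, since a high score requires the whole set to be mutually distant.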

Following this logic, the researchers then asked whether AI performance on this very simple task—generating a small set of semantically distinct words—would generalize to more complex creative activities closer to real-world creative practices.

They therefore directly compared AI models and human participants on creative writing tasks, including haiku composition (a short three-line poetic form), movie plot summaries, and short stories. Here again, the most skilled human creators retained a clear advantage, even though AI systems can sometimes outperform average human creativity.

Is AI creativity a matter of tuning?

These findings naturally led the researchers to a key question: can AI creativity be modulated? The study shows that it can—notably by adjusting the model's temperature, a technical parameter that controls how predictable or daring the generated responses are.

At low temperatures, AI produces cautious and predictable outputs; at higher temperatures, it introduces more randomness, taking greater risks and moving beyond well-trodden paths to generate more varied and original associations.
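The temperature mechanism can be illustrated with a generic softmax sketch (my assumption of the standard sampling scheme, not the study's specific setup): logits are divided by the temperature before being turned into probabilities, so low temperatures concentrate probability on the top choice while high temperatures flatten the distribution.

```python
# Generic illustration of sampling temperature (standard softmax sampling;
# the logits below are hypothetical next-word scores).

import math

def softmax_with_temperature(logits, temperature):
    """Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate words

cold = softmax_with_temperature(logits, 0.2)  # nearly deterministic
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform: more "daring"
print(max(cold) > max(hot))  # True: cold sampling locks onto the top choice
```

With a flatter distribution, unlikely words get sampled more often, which is exactly the "more varied and original associations" the researchers describe.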

The study also shows that how instructions are phrased strongly influences AI creativity. For instance, a prompting strategy based on etymology—encouraging the model to draw on the origins and structure of words—leads to less obvious associations and higher creativity scores.

Together, these findings highlight a central point: AI creativity depends closely on how humans guide and parameterize these systems, making human–AI interaction a key element of the creative process.

Will human creators be replaced?

These results provide a nuanced perspective on concerns about the potential replacement of creative workers by artificial intelligence. While some AI systems can now rival human creativity on specific tasks, the study also underscores the current limits of machines and the central role of humans in creativity.

"Even though AI can now reach human-level creativity on certain tests, we need to move beyond this misleading sense of competition," says Professor Jerbi. "Generative AI has, above all, become an extremely powerful tool in the service of human creativity: it will not replace creators, but profoundly transform how they imagine, explore, and create—for those who choose to use it."

Rather than announcing the disappearance of creative professions, the study invites us to rethink AI as a creative assistant, capable of expanding possibilities for exploration and inspiration. The future of creativity may lie less in opposition between humans and machines than in new forms of creative collaboration, where AI enriches human ingenuity instead of replacing it.

"By directly confronting human and machine capabilities, studies like ours push us to rethink what we mean by creativity," concludes Professor Jerbi.

Provided by University of Montreal
