Wednesday, April 22, 2026


DIGITAL LIFE


Half of the major digital platforms fail in transparency regarding advertising and user data, according to an international study

A survey conducted by researchers from Brazil and the United Kingdom reveals that social networks operate with low levels of transparency, hindering independent investigations and the fight against disinformation. In Brazil, the limitations are even more pronounced and worrying.

The influence of digital platforms on the flow of information has never been so evident — and, at the same time, so difficult to examine closely. A new international study sheds light on this paradox by showing that, although these companies collect enormous volumes of data on users, they offer little visibility into their own practices.

The research, entitled Data Not Found, was conducted by NetLab, from the Federal University of Rio de Janeiro, in partnership with the Minderoo Centre for Technology & Democracy, in the United Kingdom. The objective was to analyze, in an unprecedented way, how large digital platforms make data on content and advertising available.

Fifteen platforms operating in Brazil, the European Union, and the United Kingdom were evaluated, including popular names such as TikTok, Instagram, Facebook, YouTube, Kwai, and Telegram. Comparing these regions allows us to understand how different regulatory contexts influence access to information.

The European Union, for example, has one of the world's most advanced legislative frameworks, notably the Digital Services Act (DSA), which establishes stricter transparency rules. The United Kingdom, on the other hand, adopts a more flexible approach, based on specific assessments by regulatory authorities. Brazil, in turn, still faces a developing regulatory landscape.

Limited transparency and incomplete data...To measure the level of openness of the platforms, the researchers used the Social Media Transparency Index, which assesses factors such as availability, quality, and accessibility of data.
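As an illustration of how such an index aggregates its factors, the sketch below computes a weighted score over the three dimensions named above. The weights and example scores are invented for illustration; they are not the published methodology of the Social Media Transparency Index.

```python
# Illustrative scoring sketch for a transparency index.
# Dimension names come from the article; the weights and the
# example scores are hypothetical, not the published methodology.

WEIGHTS = {"availability": 0.4, "quality": 0.3, "accessibility": 0.3}

def transparency_score(scores: dict) -> float:
    """Weighted average of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# A platform with a public ad library (high availability) but poorly
# standardized fields (low quality) and API hurdles (low accessibility)
# still ends up with a middling overall score.
platform = {"availability": 0.8, "quality": 0.3, "accessibility": 0.2}
print(round(transparency_score(platform), 2))  # 0.47
```

The point of such a weighted composite is that a platform can score well on raw availability yet rank poorly overall if its data is hard to use, which mirrors the pattern the study describes.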

The results point to a widespread problem: in virtually all the platforms analyzed, the data is incomplete, difficult to access, and poorly standardized. This includes flaws in ad libraries, lack of clarity on campaign financing and targeting, as well as obstacles to tracking essential information.

In Brazil, the situation is even more critical. Some tools available in other countries simply do not exist here, or function in a more limited way. This significantly reduces the ability of independent researchers to analyze the impact of these platforms.

An opaque system by nature...According to the study, the lack of transparency is not isolated, but structural. Even when mechanisms for accessing data exist, they are often inconsistent and unreliable.

This scenario creates a clear imbalance: while platforms accumulate detailed information about their users, the internal workings of these companies remain virtually inaccessible to the public. In practice, the platforms themselves define what can or cannot be investigated about them.

Many of these transparency initiatives end up functioning more as image strategies than as real tools for accessing information. The result is an appearance of openness that does not translate into useful data for analysis.

Impacts for research, regulation, and society...The opacity of platforms has direct consequences for different sectors. Researchers face difficulties in validating studies and investigating social impacts, while regulatory authorities lack information to conduct audits or open investigations.

This prevents, for example, the effective mapping of disinformation campaigns, abusive advertising practices, or the exposure of vulnerable audiences—such as children and adolescents—to harmful content.

Without reliable and accessible data, it becomes almost impossible to understand the true extent of these problems or to develop effective public policies to address them.

The global debate on the power of digital platforms has reinforced the importance of transparency as a central element in ensuring the integrity of information. Organizations such as the UN already recognize that access to quality data is essential for accurate diagnoses of the digital environment.

However, the study highlights that it is not enough to simply release data: it is essential that it be complete, standardized, and truly useful for analysis. Currently, many available tools offer limited resources, hindering deeper investigations.

Furthermore, even in regions with advanced legislation, such as the European Union, access to data still largely depends on the decision of the platforms themselves—which represents a significant limitation.

An urgent and global challenge...Faced with this scenario, researchers advocate for the creation of more robust and effective regulations, especially in countries like Brazil. At the same time, they suggest that the platforms themselves adopt higher standards of transparency on a voluntary basis.

The lack of uniformity across regions also exacerbates inequalities: while some researchers gain access to data, others—especially in the Global South—remain excluded, even when dealing with more vulnerable contexts.

Ultimately, transparency cannot be treated as a corporate choice. In a world increasingly dependent on digital platforms for information and public debate, it needs to be seen as an essential condition for protecting the collective interest.

Research indicates that around half or more of major digital platforms (including Meta, Google, TikTok, X, and others) fall short in providing adequate transparency regarding advertising and user data. A 2024 analysis found that opacity is the norm rather than the exception, particularly regarding how user data is used for targeting and the lack of accessible advertising repositories for independent researchers. 

Key findings on ad transparency failures:

Lack of repositories: Several major platforms, including Telegram, TikTok, X (Twitter), and Spotify, have failed to provide functional, comprehensive, and public advertising repositories in many regions, notably in the Global South.

Pinterest: Pinterest has faced significant scrutiny and legal complaints regarding its transparency in user data tracking and advertising practices, particularly within the European Union. Critics, including digital rights advocacy group noyb (None Of Your Business), have accused the platform of violating GDPR by engaging in "secret tracking" and failing to provide adequate information on how data is shared with third parties.

Inadequate data access: Even where libraries exist, such as Meta’s Ad Library, the data supplied is often limited, offering insufficient information on ad targeting, total spend, or reach.

"Transparency-washing": Researchers argue that platforms often employ "transparency-washing," creating limited, self-regulated tools to avoid stricter, mandatory, and more comprehensive oversight.

API restrictions: Social media platforms are increasingly restricting access to their Application Programming Interfaces (APIs), which are essential for independent data collection, effectively blocking researchers from auditing their systems.

Ephemeral ads: Ephemeral (short-lived) ads are often missed by transparency tools, creating significant blind spots for monitoring disinformation or illegal content. 

Source: The Conversation


DIGITAL LIFE


Generative AI may cut costs in machine-learning systems, but it increases risks of cyberattacks and data leaks

Using generative AI to design, train, or perform steps within a machine-learning system is risky, argues computer scientist Michael Lones in a paper appearing in Patterns. Though large language models (LLMs) could expand the capabilities of machine-learning systems and decrease costs and labor needs, Lones warns that using them reduces transparency and control for the people developing and using these systems and increases the risk of malicious cyberattacks, data leaks, and bias against underrepresented groups.

"Machine-learning developers need to be aware of the risks of using GenAI in machine learning and find a sensible balance between improvements in capability and the risks that might come with that," says Lones, a computer scientist at Heriot-Watt University in Edinburgh, UK. "Given the current limitations of generative AI, I'd say this is a clear example of just because you can do something doesn't mean you should."

How generative AI is being integrated...Machine-learning systems are algorithms that learn to recognize patterns in data, which they can then use to make predictions and decisions regarding new data. Machine learning has been around for decades, and most people encounter it in their daily lives in the form of spam filters, product recommendations on e-commerce websites, and social media newsfeeds. In the last two or so years, there has been a push to incorporate generative AI (in the form of LLMs) into machine-learning systems, but doing so carries risks and limitations that developers and the general public should be aware of, Lones says.

Lones explores four ways in which generative AI is currently being applied in machine learning: as a component within a machine-learning pipeline, to design and code machine-learning pipelines, to synthesize training data, and to analyze machine-learning outputs. All of these applications carry risks, Lones says, and these risks are compounded if LLMs are used for multiple tasks within a machine-learning system, or if LLMs are "agentic"—meaning they can autonomously use external tools to solve problems.

Complex systems and high‑stakes sectors..."If you have GenAI working in a number of different ways within your machine-learning workflows or system, then they can interact in unpredictable and hard to understand ways," says Lones. "My advice at the moment is to avoid adding too much complexity in terms of how we use GenAI in machine learning, particularly if you're in a sector that has high stakes that impact people's lives and livelihood."

One of the biggest risks is simply that LLMs sometimes make mistakes and bad decisions, and fabricate or "hallucinate" information. Lones says that these errors aren't necessarily predictable and may be difficult to evaluate because LLMs operate in a non-transparent way, which presents an additional issue for legal compliance.

"In areas like medicine or finance, there are laws about being able to show that the machine-learning system is reliable, and that you can explain how it reaches decisions," says Lones. "As soon as you start using LLMs, that gets really hard, because they're so opaque."

Security, privacy, and public awareness...Lones advises machine-learning developers to always manually evaluate LLM-generated code and outputs. He also warns that bigger, remotely hosted LLMs often store and share data, which means that using them opens up opportunities for cybersecurity breaches and the leakage of data and sensitive information.

"It's important for people in the general public to be aware of the limitations of GenAI systems," says Lones. "Companies will deploy these systems to do things like cut costs, and this may improve the experience that end users get, but it may also have negative consequences, such as bias and unfairness."

Generative artificial intelligence (GenAI) can indeed reduce costs in machine learning (ML) systems, but the savings come with new operational and financial risks. While traditional ML focuses on analysis and prediction, GenAI acts on creation and synthesis, transforming the software development lifecycle and business management.

How generative AI reduces costs... GenAI reduces expenses primarily by automating tasks that previously required skilled human intervention or slow manual processes:

Software and IT development: GenAI tools accelerate workflows by generating repetitive (boilerplate) code, creating test scripts, and writing technical documentation. Some companies report reductions of 30% to 45% in development costs.

Data management and R&D: GenAI can synthesize training data, which is crucial when historical data is scarce or protected by privacy, reducing research and development costs by about 10% to 15%.

Customer operations: Advanced chatbots based on LLMs can manage a higher percentage of complex queries without constant human supervision, decreasing the cost per ticket by up to 60%.

Mechanical and structural design: The use of generative AI allows for the optimization of material use, creating lighter and more resistant designs that reduce waste and production costs.

The hidden side of costs and risks...Despite the potential for savings, experts warn of the "cost iceberg" of GenAI:

Uncertainty and scale: Computing costs can skyrocket when moving from pilots to production systems, with predictions of a nearly 90% increase in cloud spending between 2023 and 2025 due to GenAI.

Security and privacy risks: The use of LLMs increases the opacity of systems, making it difficult to control sensitive data and opening doors to leaks and cyberattacks.

Continuous maintenance: Unlike traditional software, AI models require constant retraining and monitoring. It is estimated that up to 75% of the initial investment must be sustained as ongoing support to prevent model degradation.

Biases and hallucinations: The lack of transparency in LLMs can introduce biases or "hallucinations" (false information), which generates legal and compliance risks, especially in sensitive sectors such as finance and medicine.

To maximize return on investment (ROI), organizations are adopting strategies such as intelligent model routing (using smaller and cheaper models for simple tasks) and the use of AI gateways to centralize governance and spending control.
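The routing strategy mentioned above can be sketched in a few lines. The model names, per-token prices, and the word-count heuristic below are all hypothetical; production routers typically use trained classifiers rather than query length.

```python
# Sketch of intelligent model routing: send simple queries to a small,
# cheap model and complex ones to a large one. Model names and
# per-1k-token prices are invented for illustration.

MODELS = {
    "small": {"cost_per_1k_tokens": 0.0002},
    "large": {"cost_per_1k_tokens": 0.0100},
}

def route(query: str) -> str:
    """Toy heuristic: short, single-question queries go to the small model."""
    is_simple = len(query.split()) < 30 and query.count("?") <= 1
    return "small" if is_simple else "large"

def estimated_cost(query: str, expected_tokens: int = 500) -> float:
    """Rough per-query cost estimate under the chosen model's price."""
    model = route(query)
    return MODELS[model]["cost_per_1k_tokens"] * expected_tokens / 1000

print(route("What are your opening hours?"))      # small
print(route(" ".join(["word"] * 80) + " why?"))   # large
```

Even with a crude heuristic like this, routing the bulk of simple traffic to the cheap model is what drives the spend reduction; the governance layer (the "AI gateway" in the article) sits in front of such a router to enforce budgets centrally.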

Provided by Cell Press 

Tuesday, April 21, 2026


TECH


Industrial electrification is now a security imperative, finds analysis

Industrial electrification is becoming a matter of economic security as well as decarbonization, according to new Oxford analysis. Continued reliance on fossil fuels leaves 75% of global industry exposed to recurring price shocks, while electrification offers a pathway to stable and resilient energy costs.

The latest disruption linked to tensions around the Strait of Hormuz is only the most recent example of a broader pattern. The 2022 Russian gas crisis forced widespread factory closures and production shifts across Europe and beyond, with many energy-intensive industries yet to fully recover. The authors argue that such shocks are not isolated events, but symptoms of a structural vulnerability tied to fossil fuel dependence.

The impact has been global and persistent. In Asia, the 2022 LNG price spike forced factory shutdowns in Pakistan and Bangladesh and drove up costs for manufacturers in Japan and South Korea. Now, tensions around the Strait of Hormuz are once again feeding through into higher energy prices, renewing pressure on industrial producers across the region. The message is clear: fossil fuel shocks are not one-off events, but a repeated risk.

"Industry has now lived through two major fossil fuel shocks in three years. First the 2022 gas crisis and now Hormuz," says Jan Rosenow, Professor of Energy and Climate Policy at the University of Oxford. "At some point you have to ask: how many times does the alarm have to go off before we change the system?"

Industry's slow shift away from fossil fuels...Industry runs almost entirely on fossil fuels and is therefore uniquely exposed to these risks, the authors say. Yet, despite the exposure, it has been among the slowest sectors to transition.

The analysis highlights that the technologies needed to electrify industry are already becoming available at scale. Recent developments include the delivery of a 95-tonne, 16-meter evaporator for one of the world's most powerful industrial heat pumps at BASF's Ludwigshafen chemical site, and the commissioning of Southeast Asia's first industrial heat battery at a cement plant in Saraburi, Thailand, built entirely with local supply chains in just eight months. These projects demonstrate that industrial electrification is moving beyond the pilot stage and into a global industrial shift.

"The technology to electrify industry exists today," says Professor Rosenow. "What's missing is the political will to fix the price signals and build the grids that would make it happen at scale."

Report findings...The new Oxford report, "High Voltage," provides the evidence base behind that shift. Drawing on more than 1,600 global climate scenarios alongside a systematic engineering review, the report finds that up to 90% of industrial energy demand could be electrified with existing and emerging technologies.

"What surprised us most in this research is how strong the convergence is across two completely independent lines of evidence," says Cassandra Etter-Wenzel, Researcher at the Environmental Change Institute, University of Oxford. "Detailed engineering studies and 1,600 global climate scenarios both point to the same conclusion: up to 90% of industrial energy demand could ultimately be electrified. The potential is not the constraint. The question is whether policy moves fast enough to realize it."

The authors emphasize that key electrification technologies, like heat pumps, electric boilers, heat batteries, and resistance heating, are already proven and commercially available. But deployment is being held back by policy and market failures.

Price, grid and finance barriers...Electricity prices remain artificially high relative to gas in many regions due to legacy tax and levy structures that disproportionately burden electricity. Reforming these price signals through electricity pricing reform, carbon pricing, and targeted support for electrified industrial heat will be crucial.

Grid access is another major constraint. Even where the technology exists and the economics work, long connection timelines stall industrial projects. Governments need to streamline permitting, enable anticipatory grid investment, and prioritize industrial connections to unlock progress.

Finally, first-of-a-kind industrial electrification projects face technology and integration risks that private capital won't bear alone. Instruments such as Carbon Contracts for Difference (as used to support the BASF heat pump), grants, and concessional finance are essential to de-risk early deployment and drive down costs for what follows.

Electrification as a resilience strategy...The authors stress that reducing fossil fuel dependence is not only about emissions, but also about resilience. Each unit of fossil fuel replaced with domestic clean energy reduces exposure to geopolitical disruption and price volatility.

"The industries that electrify fastest will stop being victims of the next crisis," says Professor Rosenow. "Every unit of fossil fuel eliminated from an industrial process is a unit that can no longer be held hostage by a pipeline shutdown, a Strait closure, or a price spike."

Industrial electrification has evolved from a matter of decarbonization to a strategic security imperative. New research from institutions like the University of Oxford argues that fossil fuel dependence is now a structural vulnerability, leaving 75% of global industry exposed to recurring price shocks and geopolitical disruptions. 

The security case for electrification...Modern energy security is no longer just about securing oil and gas supplies; it is about reducing the need to import them at all.

Resilience against geopolitical shocks: Unlike fossil fuels, which can be "held hostage" by pipeline shutdowns, strait closures, or price spikes, domestic clean energy eliminates exposure to external geopolitical leverage.

Economic stability: Electrification offers a pathway to stable and predictable energy costs. In the EU, large-scale electrification could cut fossil fuel dependence by two-thirds by 2040, delivering net savings of €29 billion per year in fuel imports.

"Security dividend": Transitioning to a distributed, electrified energy system provides a "security dividend" by creating a more resilient, decentralized network that is less vulnerable to centralized infrastructure sabotage. 

Industrial and defense implications...The shift toward electricity is increasingly viewed through the lens of national defense and industrial survival. 

Defense integration: Groups like Eurelectric have noted that energy systems are now a "second line of defense". There are calls to allocate a portion of defense spending (such as NATO's 1.5% GDP investment goal) toward energy infrastructure and clean innovation to bolster civil preparedness and military resilience.

Industrial competitiveness: Access to reliable, low-cost electricity is becoming a primary determinant for corporate site selection. Countries like China are pulling ahead, electrifying their energy systems by 10 percentage points each decade to anchor global manufacturing dominance.

Operational benefits: Electric equipment often provides better precision, safety, and energy efficiency, in some cases three to four times higher than fossil fuel systems.

Key strategic challenges...While the security benefits are clear, several "bottlenecks" remain to achieving this at scale:

Grid capacity: The global grid must add or replace 80 million kilometers of lines by 2040 to handle the new load.

Technology gaps: While 60% of industrial heat can be electrified today, high-temperature processes still require further innovation.

New dependencies: The transition creates a new reliance on critical raw materials and technologies, currently dominated by China.

Provided by University of Oxford 


TECH


Energy-efficient cooling elements developed from a 3D printer

Visitors to this year's Hannover Messe can experience a sudden drop in temperature at first hand—all brought about by simply stretching a metal alloy and then releasing it again. The underlying elastocaloric technology offers a cleaner, greener alternative to traditional cooling and heating systems. Professor Paul Motzki and his team at Saarland University are key players in the field and are driving developments ever closer towards real-world applications. Working with 3D-printing specialists led by Professor Dirk Bähre, they are also developing novel, energy-efficient geometries for the cooling elements. The team is showcasing their technology at Hannover Messe from 20 to 24 April (Hall 11, Stand D41).

The shiny cubes, each with a striking geometry, could easily be taken for stylish decorative items. For the researchers who work with these 3D-printed structures, however, their appeal lies in their functionality rather than their aesthetics. The manufacturing engineers in Professor Bähre's team and the smart materials specialists led by Professor Motzki are interested in how these metal structures behave in the innovative cooling and heating systems currently being developed in Saarbrücken.

"This is the next stage in the development of elastocaloric technology. The research we are currently undertaking on these new structures is still in the realm of basic research—but we are already thinking about practical use and developing solutions for real-world applications," explains Motzki. The novel geometries of these new cooling and heating elements are designed to boost heat transfer efficiency by maximizing the surface area over which thermal energy is exchanged.

Instead of cooling with refrigerants that are harmful to our climate, or heating with fossil fuels like oil or gas, elastocaloric systems use components manufactured from the shape-memory alloy nickel-titanium. Until now, Motzki's team at Saarland University has been researching the elastocaloric properties of bundles of ultrathin wires and thin sheets made from this alloy. These components release heat when pulled or compressed, and they absorb heat when the mechanical load is removed. The Saarbrücken engineers are using the elastocaloric effect to transport heat from one location to another—for example, to transfer heat out of a cooling chamber.

The research teams at Saarland University and at the Saarbrücken Center for Mechatronics and Automation Technology (ZeMA) have been investigating the elastocaloric effect for more than 15 years, with the long-term aim of cooling and heating cars, buildings and industrial facilities in an environmentally friendly and energy-efficient way. At this year's Hannover Messe, the team is demonstrating that their technology has moved beyond pure fundamental research and is already well on its way towards real-world applications.

Cool new materials...Enormous quantities of energy are consumed worldwide for cooling and heating—and as the climate changes, demand is set to rise further. Unlike conventional cooling and heating methods, elastocaloric technology promises significantly higher efficiency. Powered solely by electricity, elastocaloric systems are as clean as the electricity that is used to power them. The European Commission has identified elastocaloric cooling as the most promising alternative to conventional cooling technologies, and the World Economic Forum listed it among the "Top Ten Emerging Technologies." The technology is based on the special properties of nickel-titanium—an alloy that, when deformed, behaves very differently from conventional metals.

Nickel-titanium is what is known as a "shape-memory alloy": the material can be deformed and will then return to its original shape, thanks to a reversible phase transformation between two solid crystal lattice structures. This phase transformation is accompanied by heat transfer.

Doctoral research students Thorben Trodler (left) and Michael Fries (right) are involved in optimizing these delicate heat-exchange structures made from nickel-titanium alloy, through which air and water can flow. Credit: Oliver Dietze

"At room temperature, the alloy is in its high-temperature phase. When we apply tensile or compressive stress to the material, we force it to adopt the low-temperature phase. This is an exothermic process in which the material warms up and releases heat to the surroundings. Once the material has cooled back down to ambient temperature, we release the mechanical stress. This enables the alloy to transform back to its high-temperature phase and—as this is an endothermic process—the material cools down," explains Motzki.

Put simply: when a nickel-titanium wire is stretched, it releases heat to the air or liquid flowing past it; when the stress is removed, it cools down and is able to absorb heat from its surroundings. This mechanical deformation cycle of repeated tensile loading and unloading is the key principle behind the new technology. No additional sensors are required, as the material itself has its own intrinsic sensing properties.

"Each deformation of the wires corresponds to a specific electrical resistance value. So the resistance measurements can tell us exactly how the material is deforming at any given moment. That means a position sensor is effectively built in," Motzki explains.
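The self-sensing principle can be illustrated with a small sketch: a calibration table maps measured resistance to wire strain by piecewise-linear interpolation. The resistance and strain values below are invented placeholders, not measured nickel-titanium data.

```python
# Sketch of resistance-based self-sensing: each deformation level
# corresponds to a resistance value, so an inverse lookup recovers
# the strain. Calibration points (ohms, strain in %) are hypothetical.

CALIBRATION = [(10.0, 0.0), (10.5, 1.0), (11.2, 2.5), (12.0, 4.0)]

def strain_from_resistance(r: float) -> float:
    """Piecewise-linear interpolation over the calibration table."""
    if r <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    for (r0, s0), (r1, s1) in zip(CALIBRATION, CALIBRATION[1:]):
        if r <= r1:
            return s0 + (s1 - s0) * (r - r0) / (r1 - r0)
    return CALIBRATION[-1][1]  # clamp beyond the last calibration point

print(strain_from_resistance(10.25))  # 0.5 (halfway between the first two points)
```

This is the sense in which the position sensor is "built in": once the calibration curve is known, the control electronics only need to measure resistance to know the wire's deformation state.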

The researchers in Saarbrücken aim to maximize thermal energy transfer by maximizing surface area. The larger the surface area, the more efficiently heat can be transferred to the working medium—air or water. Up until now, the team has increased surface area by creating bundles containing many ultrathin shape-memory wires. In the next generation of these devices, the cooling and heating elements will provide even more contact area by incorporating a porous geometric nickel-titanium structure.
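The benefit of splitting one thick wire into a bundle can be quantified directly: for a fixed total cross-section, dividing the material into N thinner wires multiplies the lateral surface area by the square root of N. A one-line calculation makes the geometry explicit.

```python
import math

def lateral_area_ratio(n_wires: int) -> float:
    """
    One wire of radius R has lateral area 2*pi*R per unit length.
    Splitting the same cross-sectional area into n wires of radius
    R/sqrt(n) gives n * 2*pi*R/sqrt(n) = 2*pi*R*sqrt(n) per unit
    length, i.e. sqrt(n) times more surface for the same alloy mass.
    """
    return math.sqrt(n_wires)

print(lateral_area_ratio(100))  # 10.0 -> a 100-wire bundle exposes 10x the surface
```

The same scaling logic motivates the porous 3D-printed lattices: internal channels add surface area without adding alloy, pushing the ratio beyond what wire bundles alone can reach.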

To achieve this goal, Motzki's research group is working with Bähre's team to develop an intricate nickel-titanium structure through which the heat-transfer medium (air or water) can flow. The researchers are refining and optimizing the design of these delicate alloy lattices. A variety of complex geometries are undergoing experimental testing to determine which structures yield the most efficient heat transfer. The three-dimensional alloy structures are produced layer by layer using additive manufacturing in a 3D printer.

Preparing the technology for real-world applications...While laboratory experiments and testing are ongoing, Motzki and his team are also working to develop the emergent field of elastocalorics for real-world deployment. The materials that will be used in future elastocaloric cooling systems will need to be suitable for continuous operation in refrigerators and cooling units.

"We are working to develop materials and designs that are robust enough for continuous use and for ease of maintenance. We build questions about potential future applications into the development process right from the outset; it's a core principle of our research and it also shapes the curricula of our degree programs such as Systems Engineering and Sustainable Materials and Engineering," says Motzki, who, like Bähre, involves numerous doctoral researchers as well as undergraduate students in this work.

One of the questions being addressed experimentally is how to mechanically load the materials in ways that ensure a long service life. This involves matching the properties of the alloy to the tensile and compressive cycling regimes. "For example, in designs that use wire bundles, we want to achieve a lifetime of more than one million cycles," says Motzki.

At some point, however, even the best material will fatigue. "That's why we are also developing a simple and fast replacement concept. We are designing the relevant components so that they can be exchanged easily, because maintainability is a key factor in determining whether this new technology can translate into reliable day-to-day deployment," explains Motzki.

Provided by Saarland University

Monday, April 20, 2026


TECH


Swiss biomethane shows potential for domestic energy self-sufficiency through biomass

By using biomass intelligently, Switzerland could meet a substantial percentage of its own gas needs. This is the conclusion reached by a study led by the Paul Scherrer Institute PSI. Gas imports could be significantly reduced as a result, making Switzerland less dependent on the global market. The study was commissioned by the Swiss Federal Office of Energy (SFOE) and published at the beginning of this year.

The current turbulent global situation is leading to sharp fluctuations in the energy markets. The rise in oil and gas prices is dampening the economic outlook and increasing the risk of inflation. "But there are ways of reducing our dependence on fossil fuel imports and so substantially immunizing our economy against such events," says Tilman Schildhauer.

The chemical engineer works at the PSI Center for Energy and Environmental Sciences, where he conducts research in the field of methanation and power-to-X technologies. Working with two colleagues, he has carried out a new study analyzing in detail the hidden potential of biomass, such as wood, sewage sludge and green waste, to replace fossil gas and thus release less climate-damaging carbon dioxide. Their findings are encouraging: wood gasifiers, biogas plants and similar facilities could supply a substantial proportion of Switzerland's future gas demand, which the study projects will decrease by a factor of three to five.

The study was conducted by PSI and Verenum AG on behalf of the Swiss Federal Office of Energy (SFOE) and published on the SFOE website at the beginning of this year. The researchers carried out a detailed analysis of a wide range of different technologies, considering all their respective advantages and disadvantages. Converting wood residues, green waste, sewage sludge and other biomass does not just generate electricity and heat; it can also produce biomethane.

Reducing dependency..."We won't achieve complete self-sufficiency when it comes to gas, but we can significantly reduce today's extreme dependency," Schildhauer explains.

The study suggests that this would require two steps. First, the energy system as a whole needs to be switched to more efficient electrical technologies such as heat pumps. This alone will significantly reduce the demand for gas. And second, as much biomethane as possible should be produced from biomass.

This is because many processes will continue to depend on gas in the future. "This doesn't just include gas-fired power stations, which have to step in during a power drought (dunkelflaute) when renewable sources supply too little electricity," says Christian Bauer, who contributed to the study and works on life cycle assessments at PSI. Many high-temperature industrial processes and synthesis processes in the chemical and pharmaceutical industries will continue to depend on gas.

However, due to its population density and topography, Switzerland cannot grow plants solely to produce energy. "Nevertheless, we can replace a large part of the natural gas we import today with biomethane from our own sources," says Bauer. The study found that around a quarter to half of expected future gas demand could be met by domestic sources. The rest does not have to be imported by gas tanker from distant countries but could also be sourced from other European countries with more agricultural and forest land.
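To make these proportions concrete, here is a back-of-envelope sketch in Python. The absolute demand figure is an assumed placeholder chosen purely for illustration; the study reports its own numbers.

```python
# Back-of-envelope sketch of the proportions described above.
# ASSUMPTION: today's gas demand is set to 35 TWh/year purely for
# illustration; this figure is not taken from the study.
today_demand_twh = 35.0

# The study projects future demand at roughly a third to a fifth of today's.
future_low = today_demand_twh / 5   # lower bound of future demand
future_high = today_demand_twh / 3  # upper bound of future demand

# Domestic biomethane could cover about a quarter to half of that demand.
domestic_low = 0.25 * future_low
domestic_high = 0.50 * future_high

print(f"Future demand: {future_low:.1f}-{future_high:.1f} TWh/year")
print(f"Domestic biomethane: {domestic_low:.1f}-{domestic_high:.1f} TWh/year")
```

Under these illustrative assumptions, domestic biomethane would replace only a few TWh per year in absolute terms, which is why the efficiency-first step (heat pumps before biomethane) matters so much to the overall picture.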

Intelligently combining facilities and infrastructure...But how can existing biomass be utilized as intelligently as possible? "It is important to always keep the overall system in mind and not to take a compartmentalized view of local options," says Schildhauer, explaining the results of his analysis.

It makes little sense, for example, to use transportable wood instead of heat pumps to produce hot water in a heating network, when elsewhere a large industrial company needs the wood, or the biomethane produced from it, for high-temperature processes and is forced to import energy sources instead.

Wood gasifiers are available as small units—typically with an output of around 35 kilowatts to 1 megawatt—or as large-scale projects. In small units, combustion usually takes place in the same vessel as gas production. This results in a gas mixture that is only partially combustible and cannot be fed directly into the gas grid. In larger installations, on the other hand, combustion is usually physically separated from gasification.

"That gives you a product gas that is free of nitrogen and is also very suitable for methanation," says Schildhauer. Nickel-based catalysts can be used to convert the carbon monoxide and carbon dioxide in the gas into methane and water. The water can then be separated out through condensation, resulting in biomethane. "We can feed this directly into the gas grid, but naturally that depends on having the appropriate grid infrastructure in place."

The researcher is keen to point out that the biomass used to produce methane does not compete with food or animal feed production. "The material streams we are talking about would otherwise go to waste, and these volumes certainly have great potential," he says. The required facilities have already reached a high level of technical maturity. Several new types of gasifiers could be ready for the market within the next few years. Following some initial investments, the energy system would gradually be restructured, which would significantly smooth out price fluctuations during global crises.

Biomethane for domestic use is a renewable and sustainable alternative to fossil natural gas and LPG (cooking gas). It is obtained by purifying biogas, removing impurities such as CO2 and moisture to reach a methane concentration above 90%, which makes it interchangeable with natural gas in practice.
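A minimal sketch of the upgrading arithmetic, assuming a typical raw-biogas composition of about 60% methane and 40% CO2 (illustrative values, not taken from the article):

```python
# Illustrative calculation: removing CO2 from raw biogas raises the
# methane fraction. The starting composition is an assumption.
CH4_RAW = 0.60  # assumed methane share of raw biogas
CO2_RAW = 0.40  # assumed CO2 share of raw biogas

def methane_purity(co2_removed: float) -> float:
    """Methane fraction after removing a given share (0..1) of the CO2."""
    remaining_co2 = CO2_RAW * (1.0 - co2_removed)
    return CH4_RAW / (CH4_RAW + remaining_co2)

# Removing roughly 90% of the CO2 already lifts purity past the 90% mark.
print(f"{methane_purity(0.90):.1%}")
```

The sketch ignores minor constituents such as H2S and moisture, which real upgrading plants also remove, but it shows why CO2 separation is the dominant step in reaching grid-quality gas.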

Domestic applications...Biomethane can be used in homes in two main ways:

Injection into the grid: Because it has the same properties as natural gas, it can be injected directly into the existing gas pipeline infrastructure without the need to replace stoves, heaters or boilers.

Home biodigesters: Equipment such as HomeBiogas allows you to convert food scraps and animal waste into gas for cooking on-site, generating 2 to 3 hours of gas per day.

Advantages and benefits:

Energy independence: Reduces dependence on imported fuels and oil/dollar price fluctuations.

Sustainability: It is a 100% renewable source that reduces greenhouse gas emissions by up to 99% compared to fossil fuels.

Circular economy: It transforms organic waste (kitchen waste) into energy and produces liquid biofertilizer as a byproduct for gardens.

Access in remote areas: Compressed biomethane (virtual pipeline) allows gas to be brought to regions not served by traditional pipelines.

Challenges and disadvantages:

Initial cost: The investment to install biodigesters or purification plants can be high.

Temperature sensitivity: Production in domestic biodigesters decreases significantly in very cold climates, as the bacteria require constant heat.

Space and maintenance requirements: Home biodigesters need a sunny location and daily feeding of waste to maintain stable production.

Provided by Paul Scherrer Institute 


TECH


What your hobbies say about your mind may surprise you

Some everyday activities can reveal much more about how the brain works than it seems. Identified patterns reveal curious clues about how certain minds think.

Intelligence doesn't always appear in tests or numbers. Often, it reveals itself subtly, in the choices made in free time. There are activities that go beyond entertainment and function as true mental training. The most interesting thing is that, with the advancement of artificial intelligence, it is becoming possible to identify patterns in these choices — and better understand what they say about those who practice them.

Some hobbies are often underestimated, but hide a high level of complexity. Logic games, crossword puzzles, riddles, and brain teasers are clear examples of this.

Behind the apparent simplicity, these activities require pattern recognition, hypothesis formulation, and constant adaptation of strategies. It is not enough to follow an obvious path; often it is necessary to test possibilities and switch approaches quickly.

According to analyses based on artificial intelligence, this type of practice stimulates so-called lateral thinking. This refers to the ability to see solutions outside the traditional pattern, connecting ideas that, at first glance, do not seem related.

Furthermore, these games activate processes such as deduction and induction, which are fundamental for solving new problems. People who engage in this type of challenge tend to develop what is called fluid intelligence — the ability to deal with unfamiliar situations without relying exclusively on prior knowledge.

It is a constant training in adaptation and reasoning.

The silent habit that reconfigures thought...Among the most recurrent behaviors in profiles with high cognitive capacity, one stands out for its consistency: deep reading.

It's not about consuming content quickly, but about immersing oneself in denser texts, following complex ideas over time. This type of reading requires concentration, interpretation, and active construction of meaning.

When reading, the brain not only absorbs information — it creates scenarios, anticipates events, and establishes connections between different concepts. It is a dynamic mental activity that involves memory, imagination, and analysis.

Studies indicate that this habit strengthens verbal comprehension, expands vocabulary, and improves the ability to abstract. Furthermore, it contributes to more organized and coherent thinking.

For artificial intelligence, frequent readers show a greater ability to maintain focus and structure ideas consistently. In other words, reading functions as a kind of continuous mental simulation.

Learning something new as an exercise in flexibility...Another behavior that frequently appears is an interest in learning new languages independently.

This process goes far beyond memorizing words. It requires adaptation to new grammatical structures, different sounds, and alternative ways of expressing ideas. The brain needs to reorganize its own patterns to accommodate this new system.

This type of practice develops so-called cognitive flexibility — the ability to switch between different ways of thinking. By dealing with more than one language, the person constantly trains the exchange of contexts and the mental control necessary to avoid interference between them.

In addition, learning a new language expands the ability to process complex information more quickly and accurately. It is a continuous exercise in adaptation.

Among the most complete hobbies from a cognitive point of view is musical practice.

Playing an instrument involves motor coordination, auditory perception, and real-time structural analysis. It's not just an artistic expression—it's a highly integrated mental exercise.

During musical performance, the brain needs to anticipate patterns, adjust movements precisely, and maintain rhythm. All of this happens simultaneously, demanding speed and control.

This type of activity strengthens memory, improves processing speed, and increases the ability to predict sequences. In a way, music functions as a logical system that unfolds over time.

But there is an element that connects all these behaviors.

The invisible factor behind it all...More than any specific activity, there is a common trait among people with high cognitive performance: intellectual curiosity.

This drive leads to a constant search for new knowledge, the exploration of varied themes, and the questioning of established ideas. It's not just superficial interest, but a need to understand more deeply.

This type of curiosity fuels continuous learning and the connection between different areas of knowledge. Whether through books, independent study, or exploring new challenges, it acts as an engine for intellectual development.

Ultimately, these hobbies don't, by themselves, determine someone's level of intelligence. But they help reveal interesting patterns about how certain minds operate.

And that's exactly what the title suggests—and answers: what we do in our free time can say much more about our way of thinking than we imagine.

Hobbies are much more than simple pastimes; they are "medicine" for the brain, revealing and influencing our mental structure, emotional health, and cognitive well-being. The regular practice of enjoyable activities acts on the limbic system, regulating emotions, reducing stress, and activating neurotransmitters linked to reward and pleasure, such as dopamine and serotonin.

Here's what different hobbies say about our minds and how they impact them:

Need for relaxation and stress management: Relaxing and hands-on hobbies (gardening, knitting, pottery, meditation) indicate a search for ways to reduce cortisol, lowering the activity of the amygdala, the area of the brain associated with fear and stress. Studies indicate that 45 minutes of hobby time a day can reduce cortisol by up to 30%.

Focus, creativity, and "flow": Activities such as painting, puzzles, drawing, and writing stimulate the "flow state," where the mind focuses intensely, pushing away negative thoughts and anxieties.

Intelligence, memory, and cognition: Hobbies that require mental engagement (logic games, chess, reading, playing instruments) strengthen cognitive reserve, improving memory and problem-solving skills, and preventing neurodegenerative diseases such as Alzheimer's.

Presence and escape valve: Manual and physical hobbies help to get out of the "autopilot" of routine, providing moments of presence and focus on the "here and now."

Emotion management: Dedication to personal hobbies helps increase self-esteem and emotional regulation, being an effective tool to alleviate symptoms of anxiety and depression.

Summary: Your hobbies show that your mind seeks balance, creative stimulation, or calm, and investing time in them is essential to maintain mental health and cognitive agility.

by mundophone

Sunday, April 19, 2026


SAMSUNG


The Galaxy Z TriFold is now discontinued

The tech world woke up to news that caught many enthusiasts by surprise, but which, upon closer inspection, reveals a lot about the current strategy of the Korean giant. The Galaxy Z TriFold, the device that promised to be the greatest exponent of foldable screen engineering, has been officially discontinued by Samsung. After disappearing in the blink of an eye from virtual shelves in South Korea and the United States, the brand confirmed that the stock has permanently sold out and there will be no new units on the way. If you were saving up for this unique device, I regret to inform you that the train has already left — and this time there seems to be no return ticket.

The Galaxy Z TriFold's journey was as intense as it was brief. The device, which stood out for its triple folding system and two distinct hinges, was never designed to be a mass success like the Galaxy Z Fold7 or the Galaxy S26 Ultra. Samsung has always treated it as an experimental project with a very limited run, almost like a functional prototype placed in the hands of anyone willing to pay the price of exclusivity.

Sales history shows that demand far exceeded the supply controlled by the brand:

-Launch in South Korea: Sold out in record time as soon as the first units became available.

-North American Market: Samsung kept small batches for sale, but the last restock, which occurred on April 10, disappeared in minutes.

-Price and Exclusivity: In China, the device cost around 19,999 Yuan (approximately 2,600 euros), competing directly with offerings such as the Huawei Mate XT.

You might wonder why a company would stop selling a product that sells out instantly. The answer lies in complexity. The Galaxy Z TriFold is an incredibly difficult piece of engineering to manufacture on a large scale. With two articulation points on the screen, the risk of mechanical failures and the production cost of the processor and flexible panels make its profitability questionable for mass production.

By discontinuing the model now, Samsung protects its brand reputation, avoiding long-term durability problems in a device that was still in the "real-world testing" phase. Furthermore, it can focus its resources on what really sells: conventional foldable models that have already proven themselves in everyday use.

If your goal was to have a tablet that fits in your pocket, Samsung is now pointing to safer paths. The brand's official recommendation for TriFold orphans is the Galaxy Z Fold7, which offers a mature software experience, or the Galaxy S26 Ultra, for those who prioritize photography and raw processor power without sacrificing a generous screen.

However, we know very well that neither of these replacements delivers that "gadget of the future" feeling that the TriFold provided. The absence of an immediate successor leaves a void in the luxury market, but behind-the-scenes information suggests that Samsung hasn't given up on the format. The knowledge gained from this first generation will certainly be applied to future projects.

The seed planted for the Galaxy Z TriFold 2...Don't be discouraged, as this goodbye may only be a "see you soon." There are solid reports that Samsung is already working hard on a second version. The goal for the successor is clear: to smooth out the rough edges of this first attempt. The eventual Galaxy Z TriFold 2 is expected to be significantly thinner and lighter, correcting the excessive thickness that was the main criticism of the lucky few who managed to get their hands on the original model.

This short life cycle serves as a lesson for the market. Innovation has a cost, and sometimes that cost is ephemerality. The Galaxy Z TriFold thus goes down in history as a technological milestone that, although it didn't reach the pockets of most, proved that the limit of what we can fold is still far from being reached. If you weren't able to buy one, now you just have to wait for the next iteration, which promises to be more practical and, hopefully, more readily available.

mundophone
