Monday, December 15, 2025

 

TECH


Perfect atomic layers pave the way for the next generation of quantum chips

For decades, progress in electronics has been linked to the miniaturization of components. Increasingly smaller transistors have enabled faster, more efficient, and cheaper chips. However, this strategy is reaching a delicate physical limit. When devices reach the atomic scale, almost invisible imperfections begin to seriously compromise performance. In technologies such as quantum computing, these defects can be simply fatal.

It is in this context that the recent advance by a group of researchers from South Korea gains relevance. For the first time, they managed to grow continuous, virtually flawless atomic layers of a semiconductor at a size compatible with industrial production.

The center of the discovery is molybdenum disulfide, known as MoS₂. It is a two-dimensional material, just a single atom thick, roughly a hundred thousand times thinner than a human hair.

For years, MoS₂ has sparked interest because, unlike graphene, it is a "complete" semiconductor: it allows for controlled switching of electrical current on and off, something essential for transistors. The problem has always been manufacturing. Producing large areas of this material, uniform and without structural defects, seemed unfeasible outside the laboratory.

Microscopic defects, giant impacts...On an atomic scale, small flaws make a huge difference. In MoS₂, defects usually arise at the boundaries between crystalline domains. Although invisible to the naked eye, these imperfections interrupt the movement of electrons and destroy fundamental quantum properties.

For quantum chips, this means noise, loss of coherence, and processing errors. Eliminating these defects required something beyond point adjustments: it was necessary to control the positioning of atoms during the growth of the material.

The solution came from improving the so-called van der Waals epitaxy, applied to a special type of slightly inclined sapphire, known as a vicinal substrate. At the atomic level, this surface exhibits natural “steps” that act as invisible guides.

These steps orient the MoS₂ atoms during growth, forcing a more ordered organization. With precise control of temperature, pressure, and deposition, the researchers were able to form continuous, uniform, and virtually perfect monolayers in areas the size of a silicon wafer.

When the material proves its worth...Definitive validation came from electronic tests. The produced layers exhibited coherent quantum transport, with signs of phenomena such as weak localization and early indications of the quantum Hall effect. This indicates that electrons can move without losing their quantum phase—something essential for stable quantum chips.

In addition, the material exhibited high electron mobility. To demonstrate practical viability, the researchers fabricated complete arrays of transistors, which functioned efficiently at room temperature, close to the material's theoretical limits.

Why this matters for the future...Quantum computing requires extremely stable materials, and every defect is a potential source of error. A two-dimensional semiconductor, free of imperfections and capable of being manufactured on a large scale, removes one of the biggest bottlenecks in the sector.

More than a one-off breakthrough, the method can be adapted to other two-dimensional materials, expanding its impact on sensors, advanced memories, and low-power electronics. It doesn't mean perfect quantum chips tomorrow, but it shows that precise atomic manufacturing is already an industrial reality—and no longer just a scientific promise.

Key Atomic Layer Technologies:

Perfect Semiconductors: Researchers are producing continuous layers of semiconductors with the thickness of a single atom, with minimal defects, increasing the stability of qubits.

Artificial Atoms (Quantum Dots): Use electrons in silicon chips to create "atoms" that act as qubits, improving reliability compared to single-electron qubits.

Superconducting Qubits: Circuits made of materials such as aluminum, niobium, or tantalum, deposited on substrates (silicon/sapphire), which become superconductors at cryogenic temperatures, forming qubits in resonators.

Majorana Qubits: Nanowires formed by indium arsenide and aluminum that, at very low temperatures, generate quasiparticles (Majoranas) that store quantum information.

Ion Traps: Silicon chips with electrodes and waveguides (optical wiring) that use lasers to trap and manipulate individual ions, forming stable and scalable qubits.

Challenges and Requirements:

Stability: Qubits are extremely sensitive to vibrations, electromagnetic noise, and heating, requiring isolation and extreme cooling (near absolute zero).

Control: Precise manipulation of quantum states with lasers or microwaves for quantum operations.

Scalability: Industrial fabrication of perfect layers and large-scale control are crucial for practical quantum computers.

These approaches, combining the microfabrication of classical chips with new materials and precise atomic manipulation, are the basis for the next generation of quantum computing.

mundophone

 

DIGITAL LIFE


Tech-savvy users have most digital concerns, study finds

Digital concerns around privacy, online misinformation, and work-life boundaries are highest among highly educated, Western European millennials, finds a new study from researchers at UCL and the University of British Columbia.

The research, published in Information, Communication & Society, also found individuals with higher levels of digital literacy are the most affected by these concerns.

Study methodology and data sources...For the study, the researchers used information from the European Social Survey (ESS)—a project that collects nationally representative data on public attitudes, beliefs and behavior from thousands of people across Europe every two years.

They analyzed responses from nearly 50,000 people in 30 countries between 2020 and 2022.

For the ESS, participants were asked how much they thought digital tech infringes on privacy, helps spread misinformation, and causes work-life interruptions. Combining responses to the questions into a single index, the researchers generated a digital concern scale, ranging from 0 to 1, where a higher score indicates greater concern.
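The scale construction can be sketched in a few lines, purely as an illustration (the article does not give the ESS's exact scoring method): assume each of the three items is answered on a 1–5 scale, min-max normalize each answer, and average them into a 0–1 index.

```python
# Illustrative sketch only, NOT the ESS's published method: combine
# three Likert-scored items (1-5) into a 0-1 "digital concern" index
# by min-max normalizing each item and averaging.

def concern_index(privacy, misinformation, work_life, lo=1, hi=5):
    items = (privacy, misinformation, work_life)
    normalized = [(x - lo) / (hi - lo) for x in items]
    return sum(normalized) / len(normalized)

# A respondent highly worried about privacy and misinformation,
# moderately worried about work-life interruptions:
print(round(concern_index(5, 5, 3), 2))  # 0.83
```

Under this toy scoring, a respondent answering the minimum on all three items scores 0 and one answering the maximum scores 1, matching the 0–1 range described above.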

To establish their digital literacy and digital exposure, the respondents were asked how often they use the internet and to rate their familiarity with preference settings on a computer, advanced search on the internet, and using PDFs. At the country level, digital exposure was captured through the percentage of the population using the internet in each country.

The researchers looked at the levels of concern across different countries, as well as how the concern varies across social groups. They also looked at patterns relating to people's digital literacy and their exposure to digital tech.

Key findings on digital concerns...They found millennials (those aged 25–44 in 2022) reported greater concerns, compared to younger (15–24) and older adults (75+). They found no significant differences in the level of digital concerns between men and women, nor between income groups or between urban and rural residents.

Across the board, people were more concerned about the potential harms of digital technologies than not. Bulgaria was the only country in the study that did not exceed the mid-point (0.5) on the digital concern scale (0–1). Of all the countries studied, digital concern was lowest in Bulgaria (with a score of 0.47) and highest in the Netherlands (0.74), followed by the UK (0.73).

Compared with native-born citizens, migrants reported lower levels of digital concern, and those who were in work had a lower level of digital concerns than those out of work.

People with middle/high school education and those with a university degree reported greater levels of worry compared to their peers with no education or only primary school education.

The researchers found that those with greater tech know-how are more concerned about the negative impacts of digitalization, but this association is only observed among people who use digital technology on most days or on a daily basis.

Interpretation and expert commentary...The findings suggest that individuals may perceive the potential harms of digitalization as something that is beyond their control. So, the more they know about and are exposed to the issues, the more powerless and concerned they may feel.

Lead author Dr. Yang Hu (UCL Social Research Institute) said, "Our findings call into question the assumption that greater exposure to the digital world reduces our concern about its potential harm.

"Rather than becoming desensitized, greater use of digital technology seems to heighten our concerns about it, particularly among people who have a high level of digital literacy.

"Anxieties about digitalization have become a defining feature of today's world. As our use and understanding of technology grows, concern about its potential harm can impact individuals' mental health and quality of life, as well as wider societal well-being.

"As businesses, governments, and societies embrace new technologies, tech has become ubiquitous and digital literacy is essential for most people. The rapid development of AI is undoubtedly accelerating this process, so digital concern is not an issue that can be ignored."

Co-author Dr. Yue Qian (University of British Columbia, Canada) said, "Our results reveal dual paradoxes: those who are supposedly most vulnerable to digital harms—young people, older adults, and those with a low level of digital literacy—appear least concerned about the harms, while those with advanced digital skills report the most concern.

"While mainstream efforts at improving digital literacy have focused on bolstering practical skills, authorities should not ignore people's concerns about what rapid digitalization means for the subjective well-being of individuals and societies."

Provided by University College London

Sunday, December 14, 2025


TECH


'Periodic table' for AI methods aims to drive innovation

Artificial intelligence is increasingly used to integrate and analyze multiple types of data formats, such as text, images, audio and video. One challenge slowing advances in multimodal AI, however, is the process of choosing the algorithmic method best aligned to the specific task an AI system needs to perform.

Scientists have developed a unified view of AI methods aimed at systematizing this process. The Journal of Machine Learning Research published the new framework for deriving algorithms, developed by physicists at Emory University.

"We found that many of today's most successful AI methods boil down to a single, simple idea—compress multiple kinds of data just enough to keep the pieces that truly predict what you need," says Ilya Nemenman, Emory professor of physics and senior author of the paper.

"This gives us a kind of 'periodic table' of AI methods. Different methods fall into different cells, based on which information a method's loss function retains or discards."

An AI system's loss function is a mathematical equation that measures the error rate of the model's predictions. During training of an AI model, the goal is to minimize its loss by adjusting the model's parameters, using the error rate as a guide for improvement.
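As a minimal illustration of that training loop (a generic toy example, not any specific multimodal system), here is a one-parameter model fitted by gradient descent on a mean-squared-error loss:

```python
# Toy example of training-by-loss-minimization: fit a one-parameter
# model y = w * x by gradient descent on a mean-squared-error loss.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x

def mse_loss(w):
    # the loss function: average squared prediction error
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0     # initial parameter
lr = 0.05   # learning rate
for _ in range(200):
    # analytic gradient of the MSE loss with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adjust the parameter to reduce the loss

print(round(w, 3))  # converges toward 2.0
```

Each step nudges the parameter in the direction that lowers the error rate, which is exactly the role the loss function plays during training.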

"People have devised hundreds of different loss functions for multimodal AI systems and some may be better than others, depending on context," Nemenman says. "We wondered if there was a simpler way than starting from scratch each time you confront a problem in multimodal AI."

A unifying framework...The researchers developed a unifying mathematical framework for deriving problem-specific loss functions, based on what information to keep and what information to throw away. They dubbed it the Variational Multivariate Information Bottleneck Framework.

"Our framework is essentially like a control knob," says co-author Michael Martini, who worked on the project as an Emory postdoctoral fellow and research scientist in Nemenman's group. "You can 'dial the knob' to determine the information to retain to solve a particular problem."
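For reference, the classic single-variable information bottleneck objective, which multivariate frameworks of this kind generalize, makes the "knob" explicit as a Lagrange multiplier β that trades compression of the input against prediction of the target:

```latex
% Classic (single-variable) information bottleneck objective.
% Z is the compressed representation of the input X; Y is the target.
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta \, I(Z;Y)
% beta is the "control knob": a large beta retains more predictive
% information; a small beta compresses more aggressively.
```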

"Our approach is a generalized, principled one," adds Eslam Abdelaleem, first author of the paper. Abdelaleem took on the project as an Emory Ph.D. candidate in physics before graduating in May and joining Georgia Tech as a postdoctoral fellow.

"Our goal is to help people to design AI models that are tailored to the problem that they are trying to solve, while also allowing them to understand how and why each part of the model is working," he says.

AI-system developers can use the framework to propose new algorithms, to predict which ones might work, to estimate the needed data for a particular multimodal algorithm, and to anticipate when it might fail.

"Just as important," Nemenman says. "It may let us design new AI methods that are more accurate, efficient and trustworthy."

Eslam Abdelaleem led the work as an Emory graduate student. The day of the final breakthrough, the AI health tracker on his watch recorded his racing heart as three hours of cycling. “That’s how it interpreted the level of excitement I was feeling,” Abdelaleem says. Credit: Barbara Conner

A physics approach...The researchers brought a unique perspective to the problem of optimizing the design process for multimodal AI systems.

"The machine-learning community is focused on achieving accuracy in a system without necessarily understanding why a system is working," Abdelaleem explains. "As physicists, however, we want to understand how and why something works. So, we focused on finding fundamental, unifying principles to connect different AI methods together."

Abdelaleem and Martini began this quest—to distill the complexity of various AI methods to their essence—by doing math by hand.

"We spent a lot of time sitting in my office, writing on a whiteboard," Martini says. "Sometimes I'd be writing on a sheet of paper with Eslam looking over my shoulder."

The process took years, first working on mathematical foundations, discussing them with Nemenman, trying out equations on a computer, then repeating these steps after running down false trails.

"It was a lot of trial and error and going back to the whiteboard," Martini says.

Doing science with heart...They vividly recall the day of their eureka moment. They had come up with a unifying principle that described a tradeoff between compression of data and reconstruction of data.

"We tried our model on two test datasets and showed that it was automatically discovering shared, important features between them," Martini says. "That felt good."

As Abdelaleem was leaving campus after the exhausting, yet exhilarating, final push leading to the breakthrough, he happened to look at his smartwatch. It uses an AI system to track and interpret health data, such as his heart rate. The AI, however, had misunderstood the meaning of his racing heart throughout that day.

"My watch said that I had been cycling for three hours," Abdelaleem says. "That's how it interpreted the level of excitement I was feeling. I thought, 'Wow, that's really something!' Apparently, science can have that effect."

Applying the framework...The researchers applied their framework to dozens of AI methods to test its efficacy.

"We performed computer demonstrations that show that our general framework works well with test problems on benchmark datasets," Nemenman says. "We can more easily derive loss functions, which may solve the problems one cares about with smaller amounts of training data."

The framework also holds the potential to reduce the amount of computational power needed to run an AI system.

"By helping guide the best AI approach, the framework helps avoid encoding features that are not important," Nemenman says. "The less data required for a system, the less computational power required to run it, making it less environmentally harmful. That may also open the door to frontier experiments for problems that we cannot solve now because there is not enough existing data."

The researchers hope others will use the generalized framework to tailor new algorithms specific to scientific questions they want to explore.

Meanwhile, they are building on their work to explore the potential of the new framework. They are particularly interested in how the tool may help to detect patterns of biology, leading to insights into processes such as cognitive function.

"I want to understand how your brain simultaneously compresses and processes multiple sources of information," Abdelaleem says. "Can we develop a method that allows us to see the similarities between a machine-learning model and the human brain? That may help us to better understand both systems."

Provided by Emory University 


SONY


New Sony A7 V

Sony's new A7 V full-frame mirrorless camera has been getting rave reviews since it launched. However, third-party lens compatibility seems to be one aspect that has been overlooked, as one photographer recently discovered while reviewing the premium hybrid mirrorless camera.

The Sony a7 V is an enthusiast-tier camera with a new, full frame, 33MP 'partially stacked' CMOS sensor, with a focus on high burst rates, capable autofocus and a complete suite of video features.

Like its predecessor, the a7 V features a 33MP sensor, but with extra readout circuitry to improve readout speeds (which were one of the a7 IV's weak points). We've seen this "partially stacked" technology in 24MP cameras like the Nikon Z6III and Panasonic S1II, but here it's being applied to a higher-resolution sensor.

Sony says this allows the a7 V to achieve much higher burst rates than its predecessor – 30 fps, up from 10 – and to do so with a full 14-bit readout, rather than requiring Sony's destructively lossy Raw compression. The maximum e-shutter speed has been increased to 1/16000 sec, too. The company also promises it won't have the same dynamic range reduction we saw with the Z6III, where increased read noise was evident if you pushed the shadows in post. Though we'll have to see if these claims are borne out in testing.

The IBIS system has also been upgraded, now stabilizing the sensor by 7.5EV, up from 5.5EV with the a7 IV.

The Sony A7 V only recently launched, and so far the reviews have been quite positive, with many reviewers praising both the full-frame camera's photo and video performance. DPReview called it "a genuine hybrid priced for mortals," thanks to its impressive all-round abilities. However, a photography review by Kai W on YouTube has revealed that Sony may have implemented some firmware changes that make the A7 V a no-go for E-Mount users who rely on third-party lenses, which are often a whole lot cheaper than Sony's in-house designs.

Kai starts off his review with a lot of praise for the hybrid camera's speed, autofocus, and video performance. However, at around the 16:15 mark, when Kai tests the A7 V with third-party lenses, problems start to crop up. When testing E-mount lenses from a variety of Chinese manufacturers, including popular brands like Viltrox and Sirui, Kai found that the camera exhibited a variety of malfunctions, all resulting in a failure to capture an image. It should be noted that this incompatibility appears limited to autofocusing lenses, which makes sense, because most non-AF lenses aren't electrically coupled in any meaningful way.

The Sony a7 V looks like a hugely capable all-rounder, promising high resolution for its class, paired with fast shooting, the latest AF features and the ability to shoot fast, smooth video. It represents an appreciable step forward for Sony shooters, and perhaps it needed to.

The a7 IV was the first mid-range full-frame camera to push beyond 24MP, but this somewhat undercut its video, where any gain in detail was offset by rolling shutter levels higher than those of its preexisting rivals. And while, back in 2021, you could fairly confidently answer the impossibly complex question "which of these models has the best autofocus?" with the simple answer "the Sony," much has changed since then.

In the four years since its launch, the rival offerings from Canon and Nikon have caught up in terms of generic subject tracking, and moved ahead in terms of the range of subjects they recognize. Both brands have also made big advances in video, offering faster speeds, smoother readout and Raw video capture. Canon's recent EOS R6 III finally matched the a7 IV's remaining standout quality: photo resolution.

There is speculation that the incompatibility with third-party lenses may be the result of pre-release firmware. However, the A7 V is already available in Europe, and there appears to be no firmware update available for the new hybrid camera. Sony also states on its website that "a software update may be required for some lenses," so it's entirely possible that responsibility for the incompatibility lies with the lens manufacturers, although this disclaimer only references the A7 V's new 60 fps continuous shooting mode. One could then argue that it would likely have been fairly straightforward for Sony to detect whether a lens is compatible with the new system, warn the user, and offer the option to drop to a slower shooting mode.

It's worth noting, though, that options such as open gate shooting, native resolution video and internal Raw capture that are becoming common elsewhere aren't present here. Maybe Sony (perhaps correctly) doesn't believe enough mid-market hybrid shooters are going to need these features, or perhaps they're being saved for a future FX series camera. Either way, it feels like the a7 IV story all over again, with the a7 V looking competent, rather than excellent for video.

The a7 V uses the same 3.69M dot viewfinder as its predecessor, with the optics giving 0.78x magnification. It gains a tilting cradle on which its slightly larger, fully articulated rear screen is mounted. This means it can be tilted up or down, close to the back of the camera for waist-level or overhead stills shooting as well as flipping out to the side for videos or selfies. The added movement also lets you move the screen away from the camera, reducing the risk of the screen fouling your cables when you flip it out. The new panel has around 2.1M dots, giving around a 1024 x 682px resolution.

The a7 V uses the same NP-FZ100 battery that the a7 series has used for several generations, now. It's a fairly substantial 16.4Wh unit that powers the camera to a rating of 750 shots per charge if you rely on the rear screen and 630 shots per charge if you use the viewfinder. These are both impressive figures for a camera in this class, especially given that the CIPA-defined tests tend to significantly underestimate the number of shots most people find they actually get. Everyone's usage differs, of course, but so long as you don't spend lots of your time reviewing the images you just shot, it's not unusual to get double the rated number of shots.

That's why I keep stressing the a7 V's appeal to existing Sony shooters, because while it looks to do pretty much everything very well, there's not a lot, beyond its impressive battery life, that you can point to that screams "it's better than its peers at..."

Maybe we're past the point at which each new camera reaches greater heights than the competition, but Sony's latest feels like a camera that clears the current bar, rather than raising it. The Sony a7 V looks like a hugely capable all-rounder, but that's likely to be more exciting to Sony users than to the wider market, because so do its peers.

When asked about the incompatibility, Sony stated that "we do not guarantee third-party lens compatibility," seemingly neither confirming nor denying intentional incompatibility.

mundophone

Saturday, December 13, 2025


DIGITAL LIFE


Forget Santa, HP warns hackers are coming for your cookies

HP Threat Research (threatresearch.ext.hp.com) just issued a new security report detailing a growing trend of attackers hijacking session cookies as an alternative to tried-and-true credential theft. Hackers have a bigger appetite for session cookies because today's hybrid work environment has brought changes that make stealing cookies more appealing than the old way of doing things.

Citing its 2025 Work Relationship Index, HP says one in five employees now work flexibly across office, home, and mobile environments. Meanwhile, enterprises are increasingly moving their core infrastructures to the cloud for the convenience of managing IT chores with a web browser, rather than utilizing on-premise domain controllers. This has led to a change in the way threat attackers breach organizations.

"Rather than steal credentials, attackers are now increasingly focusing on stealing authentication cookies. In this type of attack, a threat actor no longer needs to steal credentials or bypass MFA. Instead, they simply need the browser cookie that proves the target user (e.g. a system administrator) is logged in. Once they have it, they effectively have the privileges and access of that user," HP says.

This doesn't mean MFA (multi-factor authentication) is no longer important. HP notes that despite the draw of warm cookies, old-fashioned credential theft is still popular. It's also preventable with MFA. However, organizations would be wise to also assess the risks inherent in today's hybrid work environments.

Whenever a user logs into a system, an authentication session is opened for that user and is used to keep the login active while interacting with the system. There are different ways of storing an open authentication session, such as locally on the user's device. However, storing sessions in the form of a cookie "is standard practice and used by most web applications," as it negates the need for manual session management. Therein lies the risk.

"If an attacker can obtain the authentication cookie, they can take over the active session and gain unauthorized access to a system. This gives the attacker the same access to the system as the initial user. So, if the user is a Microsoft Entra administrator, the attacker can gain critical permissions to an entire organization. In such a case, the attacker could weaken or bypass security controls, gain elevated privileges, or set up a persistent backdoor," HP warns.

Even worse, MFA offers absolutely no protection against this type of attack, since the threat actor has taken over the active session. And according to HP's data, token theft is the most common technique employed by hackers to bypass MFA and infiltrate Microsoft 365 services.

How is this done, though? HP says threat actors are using information stealers to swipe session cookies.

"Attackers infect an endpoint with malware that is capable of either directly taking over a session and injecting commands, or exfiltrating relevant active cookies from the system to an attacker-controlled server," HP explains.

HP's report goes on to highlight notable documented incidents of session cookie theft, including one that impacted Electronic Arts and resulted in hackers stealing 780GB of data. It also outlines preventative steps organizations can take, such as binding active sessions to a specific context, reducing how long sessions are active before needing to be validated again, and requiring re-authorization for sensitive actions such as adding a new administrator or changing passwords.
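A minimal sketch of the session-binding idea, with hypothetical names throughout (HP's report does not prescribe an implementation): sign the session together with the context it was issued in, and reject cookies replayed from a different context or after a validity window.

```python
# Hypothetical sketch of one mitigation described above: binding a
# session to the context it was issued in, so a stolen cookie replayed
# from an attacker's machine fails validation. All names illustrative.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"   # assumption: never sent to the client
MAX_AGE = 15 * 60                # re-validate sessions after 15 minutes

def issue_session(user, ip, user_agent):
    issued = int(time.time())
    context = f"{user}|{ip}|{user_agent}|{issued}".encode()
    sig = hmac.new(SECRET, context, hashlib.sha256).hexdigest()
    return {"user": user, "issued": issued, "sig": sig}

def validate(cookie, ip, user_agent):
    if time.time() - cookie["issued"] > MAX_AGE:
        return False             # session too old: force re-auth
    context = f'{cookie["user"]}|{ip}|{cookie["issued"]}'
    context = f'{cookie["user"]}|{ip}|{user_agent}|{cookie["issued"]}'.encode()
    expected = hmac.new(SECRET, context, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cookie["sig"])

cookie = issue_session("admin", "203.0.113.7", "Firefox")
print(validate(cookie, "203.0.113.7", "Firefox"))    # True: same context
print(validate(cookie, "198.51.100.9", "curl/8.0"))  # False: new context
```

A real deployment would bind to more stable signals than a raw IP (which can change legitimately), but the principle is the same: the cookie alone is no longer sufficient proof of identity.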

https://threatresearch.ext.hp.com/tracing-the-rise-of-breaches-involving-session-cookie-theft/

mundophone

 

CES 2026


Displace Pro TV 2: first TV with native AI to be revealed at CES; features gesture interaction and high personalization

There's another innovative novelty promised for the 2026 edition of the Consumer Electronics Show, which begins on January 6th. Displace, which calls itself the world's first company to launch a wireless television, returns to Las Vegas with another innovation. The technology company promises to show, among other things, the Pro TV 2, which it presents as "the first television with native artificial intelligence".

The Pro TV 2 features a 65-inch 4K OLED screen, with twice the brightness of the Pro TV 1, voice assistant features, and dedicated native NPUs and TPUs, which will allow for local AI processing, voice and gesture control, and more personalized content.

The company also emphasizes the privacy concerns associated with the design of the entire concept, noting that the local processing of operations and Displace's proprietary browser-based operating system ensure that all confidential user data always remains on the device.

In this new TV, whenever a video is paused, the TV displays relevant products from the scene, based on the user's personal preferences.

The user can also search for content using their voice and interact with the TV through gestures, without needing to use the remote control. Computer vision recognizes the user's face and gestures.

One of the demonstrations you'll see at CES is the ability to configure media sources of your choice (text-based news sites) and ask the TV to automatically create personalized video news channels based on the chosen content.

“Displace is redefining TV with a cutting-edge smart screen, using integrated AI chips that deliver real-world environmental experiences without compromising user privacy,” highlights Balaji Krishnan, founder and CEO of the technology company.

“The Pro TV 2 offers highly personalized AI experiences that feel easy and intuitive, paving the way for a new era where the TV becomes a true smart computer on your wall,” continues the executive in the press release about their presence at the fair.

During CES, Displace will be giving several live demonstrations of the technology. The equipment will also be available for purchase at the fair.

Displace's first television was also presented at CES in 2023, and it truly marked an innovation by eliminating all the wires needed to connect the equipment, including the power cord, as the model comes equipped with a battery. This model is the one featured in the photo accompanying this article.

Following its successful CES 2025 appearance, Displace, the creator of the world's first wireless television, today announced its return to CES 2026, the world's most powerful consumer tech event. During the annual conference on Jan. 6-9, 2026, Displace will showcase its new products, including Pro TV 2, the first AI-native TV, and conduct live demos of its AI features, signaling that the future of TV goes beyond passive entertainment.

Pro TV 2 arrives as the demand for TVs to do more, including function like phones, increases. Consumers want more interactive TV experiences, such as direct purchasing opportunities, highly personalized content and productivity tools, and Pro TV 2 delivers, with powerful, privacy-first, multimodal intelligence directly on the wall. This innovative product's dedicated native NPUs and TPUs enable local AI processing for voice and gesture control, personalized content using computer vision and fine-tuned local models. Coupled with the OS 2.0, the system transforms the TV from a passive display into an intelligent, ambient computing hub.

mundophone

Friday, December 12, 2025

 

DIGITAL LIFE


Fairness in AI: Study shows central role of human decision-making

AI-supported recommender systems should provide users with the best possible suggestions for their inquiries. These systems often have to serve different target groups and take other stakeholders into account who also influence the machine's response: e.g. service providers, municipalities or tourism associations.

So how can a fair and transparent recommendation be achieved here?

Researchers from Graz University of Technology (TU Graz), the University of Graz and Know Center investigated this using a cycling tour app from the Graz-based start-up Cyclebee. They conducted research into how the diversity of human needs can be taken into account by AI. The study was awarded a Mind the Gap research prize for gender and diversity by TU Graz.

The findings are published in the journal Frontiers in Big Data.

Impact on numerous groups..."AI-supported recommender systems can have a major influence on purchasing decisions or the development of guest and visitor numbers," says Bernhard Wieser from the Institute of Human-Centered Computing at TU Graz.

"They provide information on services or places worth visiting and should ideally take individual needs into account. However, there is a risk that certain groups or aspects are under-represented."

In this context, an important finding of the research was that the targeted fairness is a multi-stakeholder problem: not only end users play a role, but also numerous other actors.

These include service providers such as hotels and restaurants along the routes and third parties such as municipalities and tourism organizations. And then there are stakeholders who don't even come into contact with the app but are nevertheless affected, such as local residents who could feel the effects of overtourism.

According to the study, reconciling all these stakeholders cannot be solved with technology alone.

"If the app is to deliver the fairest possible results for everyone, the fairness goals must be clearly defined in advance. And that is a very human process that starts with deciding which target group to serve," says Wieser.

Involving all actors in the design...This target group decision influences the selection of the AI training data, its weighting and further steps in the algorithm design. In order to involve the other stakeholders as well, the researchers propose the use of participatory design, in which all actors are involved, in order to harmonize their ideas as well as possible.

"Ultimately, however, you have to decide in favor of something, so it's up to the individual," says Dominik Kowald from the Fair AI group at the Know Center research center and the Institute of Digital Humanities at the University of Graz. "Not everything can be optimized at the same time with an AI model. There is always a trade-off."

Ultimately, it is up to the developers to decide what this trade-off looks like, but according to the researchers, it is important for end users and providers that there is transparency. Users want to be able to adapt or influence the recommendations, and providers want to know the rules according to which routes have been set or providers ranked.

"Our study results are intended to support software developers in their work in the form of design guidelines, and we also want to provide guidelines for political decision-makers," says Wieser.

"It is important that we make recommender systems increasingly available to smaller, regional players thanks to technological developments. This would make it possible to develop fair solutions and thus create counter-models to multinational corporations, which would sustainably strengthen regional value creation."

Provided by Graz University of Technology
