Friday, December 12, 2025

 

DIGITAL LIFE


Fairness in AI: Study shows central role of human decision-making

AI-supported recommender systems should provide users with the best possible suggestions for their inquiries. These systems often have to serve different target groups and take into account other stakeholders who also influence the system's recommendations, such as service providers, municipalities or tourism associations.

So how can a fair and transparent recommendation be achieved here?

Researchers from Graz University of Technology (TU Graz), the University of Graz and the Know Center investigated this question using a cycling tour app from the Graz-based start-up Cyclebee, examining how AI can take the diversity of human needs into account. The study was awarded a Mind the Gap research prize for gender and diversity by TU Graz.

The findings are published in the journal Frontiers in Big Data.

Impact on numerous groups..."AI-supported recommender systems can have a major influence on purchasing decisions or the development of guest and visitor numbers," says Bernhard Wieser from the Institute of Human-Centered Computing at TU Graz.

"They provide information on services or places worth visiting and should ideally take individual needs into account. However, there is a risk that certain groups or aspects are under-represented."

In this context, an important finding of the research was that the fairness being sought is a multi-stakeholder problem: not only end users play a role, but also numerous other actors.

These include service providers such as hotels and restaurants along the routes and third parties such as municipalities and tourism organizations. And then there are stakeholders who don't even come into contact with the app but are nevertheless affected, such as local residents who could feel the effects of overtourism.

According to the study, reconciling the interests of all these stakeholders is not something technology alone can achieve.

"If the app is to deliver the fairest possible results for everyone, the fairness goals must be clearly defined in advance. And that is a very human process that starts with deciding which target group to serve," says Wieser.

Involving all actors in the design...This target group decision influences the selection of the AI training data, its weighting and further steps in the algorithm design. To involve the other stakeholders as well, the researchers propose participatory design, in which all actors take part so that their ideas can be harmonized as far as possible.

"Ultimately, however, you have to decide in favor of something, so it's up to the individual," says Dominik Kowald from the Fair AI group at the Know Center research center and the Institute of Digital Humanities at the University of Graz. "Not everything can be optimized at the same time with an AI model. There is always a trade-off."

Ultimately, it is up to the developers to decide what this trade-off looks like, but according to the researchers, transparency is important for end users and providers alike. Users want to be able to adapt or influence the recommendations, and providers want to know the rules by which routes are selected or providers ranked.

"Our study results are intended to support software developers in their work in the form of design guidelines, and we also want to provide guidelines for political decision-makers," says Wieser.

"It is important that we make recommender systems increasingly available to smaller, regional players thanks to technological developments. This would make it possible to develop fair solutions and thus create counter-models to multinational corporations, which would sustainably strengthen regional value creation."

Provided by Graz University of Technology

 

DIGITAL LIFE


Publishers fight big tech with small local language models

As 2025 closes, referrals from social media and organic search are dead or dying, and generative AI is coming for facts. But 2026 may grant publishers an opportunity Silicon Valley has persistently ignored: local knowledge.

Journalism and Big Tech have long been frenemies. For 15 years, Facebook and its peers have wielded immense market power behind polite smiles and self-serving terms. But the wheel of progress turns, and generative AI has recently disrupted news publishers and tech platforms alike. The AI bubble may soon pop, but conversational interfaces powered by large language models (LLMs) are here to stay, and with them, an opportunity for publishers to break free from the grip of the tech titans.

The key is the Model Context Protocol (MCP), an open-source project from Anthropic that allows generative AI tools to interact with more traditional software systems via any standard application programming interface (API). Barely a year old, MCP has seen rapid market adoption and the support of major platforms from Azure to WhatsApp.

The magic is that an MCP server is a dictionary, translating GenAI requests into actions that the API of an external service can provide. In effect, it makes LLMs infinitely extensible via seamless integration with any digital tool available on the internet.
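To make the "dictionary" idea concrete, here is a minimal sketch of an MCP server using Anthropic's official TypeScript SDK. The tool name and newsroom API endpoint are illustrative assumptions, not a real service:

```typescript
// Minimal MCP server sketch: wraps a (hypothetical) newsroom search API as a
// tool that any MCP-capable chatbot can discover and call.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "local-news", version: "1.0.0" });

// The "dictionary" entry: translates a GenAI request into an ordinary API call.
server.tool(
  "search_archive",
  { query: z.string().describe("Search terms, e.g. 'Elm Street development'") },
  async ({ query }) => {
    // Hypothetical endpoint; a real newsroom would point this at its archive API.
    const res = await fetch(
      `https://api.example-newsroom.test/search?q=${encodeURIComponent(query)}`
    );
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Serve over stdio so a locally configured chatbot can connect.
await server.connect(new StdioServerTransport());
```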

Software developers have been the first to adopt the tool, and can “create a Jira ticket for a WordPress site, build it in a GitHub repo, register a domain on AWS, and deploy the app to EC2” on command. That prompt is an oversimplification, but not an exaggeration.

For news consumers, it could mean asking Siri, Google, or ChatGPT for the latest news and seeing updates from their preferred local or regional news sources. Or: “What’s being built on Elm Street?”, or “When is the farmers market open?”, or any other question tied to specifically local interests. This everyday information is invaluable to the community, but its commercial value is tied to local proximity, so it rarely appears in the large datasets that feed search indexes or train LLMs.

But think of a local newsroom as a human LLM. Journalists collect, organize, and publish select details across a vast array of local topics. Beyond decades of news archives, our digital shelves include event calendars, obituaries, verified lists of local people, places, and institutions, civic meeting agendas and minutes, election results, building permits, restaurant inspections, local ordinances, development projects, and more.

Right now, the value of this information remains largely untapped on our own sites, and readers rarely come to us for it — they’re on other platforms when they ask the questions our local data and reporting might answer. Either individually or in regional collaborations, newsrooms should create knowledge bases — structured repositories of information — trained on local reporting and local data, available to the community through freemium or subscription products. “Subscribe to our website and get access to our local knowledge base — now also available on your favorite chatbot or search engine.”
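What a "structured repository" might contain is easier to picture with a concrete record shape. This TypeScript interface is purely illustrative; the field names are assumptions, not any newsroom standard:

```typescript
// Illustrative record shape for a local knowledge base (all names hypothetical).
interface LocalRecord {
  id: string;
  kind: "article" | "event" | "obituary" | "permit" | "inspection" | "meeting";
  title: string;
  body: string;                 // extracted or structured text
  sourceUrl: string;            // link back to the original reporting
  publishedAt: string;          // ISO-8601 date
  location?: { lat: number; lon: number; label?: string };
  tags?: string[];              // e.g. ["zoning", "Elm Street"]
}
```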

These local services will run on small language models. SLMs are cheaper to build, easier to maintain, and grounded in a narrowly defined domain, making them far less prone to factual improvisation than LLMs. By design, SLMs are only economically viable at a local level, giving large tech platforms little incentive to compete in the space. What they will have is an incentive to provide their users access to this layer of local intelligence — so long as the administrative and financial demands are reasonable.

And that is the power of open standards. MCP can be thought of as RSS for LLMs: a lightweight, universal way for any model or chatbot to discover, connect to, and use local structured knowledge without bespoke integrations, contract negotiations, or exclusive partnerships. Signup can be automated. Payments (if any) become small, predictable, and standardized. This lowers the barriers for publishers and platforms, and gives readers the choice to enrich their chatbot with trusted local intelligence.
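As a sketch of what "discover, connect to, and use" looks like in practice, here is a minimal MCP client using the same TypeScript SDK; the server command and tool name match the hypothetical server sketched above:

```typescript
// Minimal MCP client sketch: connect, discover tools, call one (names hypothetical).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["local-news-server.js"], // the server sketched earlier, built to JS
});

const client = new Client({ name: "chatbot", version: "1.0.0" });
await client.connect(transport);

// Discovery: no bespoke integration, just ask the server what it offers.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // ["search_archive"]

// Use: the chatbot can now route a user question to local knowledge.
const result = await client.callTool({
  name: "search_archive",
  arguments: { query: "farmers market hours" },
});
console.log(result.content);
```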

If publishers embrace small language models and open standards, they may regain some control over how local knowledge is collected, delivered, and valued. For decades, news organizations have tried to win while playing by Big Tech’s rules, but MCP and SLMs give them something new in the digital era: a home field advantage. The platforms own the pipes, but publishers can own the intelligence that matters most to our communities.

Local knowledge is journalism’s superpower. Newsrooms that invest in structured data, local SLMs, and MCP-enabled delivery will define a new, durable model for digital journalism, free from platform dependency and focused on accurate and trusted information about the places people actually live.

by Damon Kiesow (https://bsky.app/profile/damon.kiesow.net)

Thursday, December 11, 2025

 

TECH


Google at risk of heavy fine: EU demands Play Store follow Apple's example

The world of technology regulation in Europe is full of twists and turns. After years of being the main target of the European Commission's antitrust investigations, Apple seems to have managed, through its recent and drastic changes to the App Store, to become the "model student." Now, it's Google that's in the spotlight.

Google Play has been under scrutiny from the European Commission since March of this year over the payment options allowed for app purchases and the level of its customer acquisition fees. In August, Google introduced some flexibility in this area, but European regulators are dissatisfied with the results of these measures and want broader concessions, pointing to Apple's similar actions as a model. European authorities are expected to accept Apple's measures to align its business practices with regional antitrust laws as sufficient.

Google may formally offer additional concessions to avoid a hefty fine in the EU; if it does not, the corresponding penalties could be imposed as early as the first quarter of next year. Google representatives have expressed not only a willingness to continue cooperating with the European Commission, but also concern that further opening up Google Play could create more favorable conditions for the distribution of malware and the theft of user data. Fines for violating the Digital Markets Act (DMA) can reach 10% of a company's annual global revenue. Beyond this case, the European Commission is also investigating Google regarding the legality of prioritizing its own search engine, the use of online content by its AI tools, and the advertising policies of the American internet giant.

According to an exclusive Reuters report, the search giant is at risk of incurring a heavy fine from the European Union as early as next year. The reason? The changes Google made to the Play Store are not enough to comply with the Digital Markets Act (DMA), and regulators are using Apple's changes as the new benchmark that Google must meet.

For those following the regulatory saga, this development is surprising. Apple was fined €500 million earlier this year and has been fiercely fighting against opening up its ecosystem. However, the “comprehensive changes” that the Cupertino company ended up implementing in Europe — which include new fee structures and greater freedom for third-party stores — seem to have convinced regulators, at least partially.

Now, the European Commission is looking at Google and asking: “Why can’t you be more like Apple?”

Google announced changes to the Play Store in August in an attempt to appease Brussels. These changes included:

-Fee reduction: Cuts in the “initial acquisition fee” from 10% to 3%.

-New models: A two-tier fee system for in-app purchases (IAPs).

However, Reuters sources indicate that these measures “still fall short” of expectations. The Commission considers that Google has not done enough to ensure that developers can direct customers to alternative channels fairly and without excessive friction, something that Apple's new framework (despite its complexity) seems to have addressed more satisfactorily in the eyes of the law.

Google is now in a race against time. The report suggests that the company still has an opportunity to avoid the financial penalty. Google can offer new changes and concessions to regulators before the fine is formally applied, which is expected to happen in the first quarter of 2026.

If Google fails to match the “Apple standard” in time, it could face one of those astronomical fines for which the EU has become famous in the tech sector.

mundophone


DIGITAL LIFE


'Big Tech's backyard', digital extractivism: data centers face dilemmas in Brazil

TikTok announced investments exceeding R$ 200 billion to build its first data center in Brazil and Latin America in Caucaia, in the state of Ceará, in what the industry sees as the start of a wave of projects landing in the country, a movement with the potential to expand domestic technological infrastructure fourfold to fivefold. These projects all seek Brazil's clean and abundant energy—but also the tax benefits granted by the Lula government, which, according to Finance Minister Fernando Haddad, could unlock R$ 2 trillion in investments for the country.

However, this euphoria arrives at a complex crossroads, where the shine of trillion-real investments overshadows growing tensions about the real cost-benefit for the nation.

Behind the billion-dollar figures, civil society, national industry, and the infrastructure sector are engaged in a heated debate around three fundamental dilemmas that could define Brazil's technological future. The first, most immediate, is political. The second, medium-term, concerns the effect on Brazilian equipment manufacturers. And the third, long-term, is about the environmental impact.

For the various sides of this dispute, it is not just about putting up buildings to house servers focused on artificial intelligence, but about deciding what role Brazil will play in the digital economy: a sovereign power or a colony for processing other people's data.

The impact of data centers on natural and energy resources...The most forceful criticism comes from Idec (Brazilian Institute for Consumer Protection), which warns of the danger of Brazil becoming a "data center backyard" for big tech companies: a country full of facilities with high water and energy consumption and extensive territorial occupation, but whose wealth is generated for other countries.

"If data is the new oil, data centers are the new refineries." The production of a good, which is artificial intelligence, follows the same logic as the colonial plantations of the 1500s (?). They used the strength of enslaved people and nature to concentrate energy in sugarcane and send it to the metropolis to generate value. It's the same thing with data centers: they are using Brazilian energy, water, and soil to be transformed into value abroad ''...Júlia Catão Dias, Coordinator of the Responsible and Sustainable Consumption Program at Idec.

The five data centers approved in Ceará, one of which belongs to TikTok, are prime examples of this dynamic, says Júlia. The projects will be built in water-stressed zones (Caucaia has been in a state of emergency due to drought in 16 of the last 21 years) and, because of how the export processing zones (EPZs) are configured, will only export data processing services. The criticism also extends to Redata, the special tax regime for data centers, because those benefiting from the exemption are only required to allocate 10% of their capacity to local data processing; going beyond that is optional.

Tossi, from the data center industry association ABDC, admits that Brazil has become a preferred target for the installation of data centers due to its energy potential. "The world is experiencing a moment where we are trying to attract investments here, because Brazil has the two main inputs that the global market demands: available and renewable electricity. (...) This is what big tech companies are looking for, because they have goals to meet under the Paris Agreement. In addition, energy here has a relatively competitive price when compared to the United States."

mundophone

Wednesday, December 10, 2025

 

TECH


Star power: how energy efficient is your home?

Ever wondered how energy efficient your home is? CSIRO's new Energy Rating Finder puts the power in your hands. It indicates the energy performance of your home's thermal shell—the walls, floors, roof, windows and insulation. These features influence how much energy it takes to keep your home at a comfortable temperature.

It's like an appliance star label, but for your home. If your address doesn't appear in the database, your home may not have an energy rating yet. But don't worry, there are other ways to estimate its efficiency—we'll get to that shortly.

Your home's thermal shell—walls, roof, floors, windows, and insulation—plays a big role in energy performance and comfort.

A win for people and the planet...CSIRO data scientist and platform developer Melissa James said the goal was to make energy performance information accessible. "The system is easy to use—simply enter an address, and if data is available, you'll receive an energy-efficiency rating out of 10," James said.

A zero-star home offers little protection from external temperatures, while a 10-star home stays comfortable with minimal, if any, heating or cooling.

You can also search by postcode or street name to help locate the address you're interested in and view its energy-efficiency rating. James hopes this data sparks curiosity and encourages upgrades.

"We want people to see how their home rates and start thinking about what changes could make a difference," James said. "Energy-efficient homes use less power, cost less to heat and cool and are more comfortable to live in.

"They also produce fewer carbon emissions, which is better for the planet. In addition, energy-efficient homes can help reduce energy infrastructure costs by lowering overall demand," she said.

Powered by a decade of data...The Energy Rating Finder includes headline certificate data from the Nationwide House Energy Rating Scheme (NatHERS), providing a snapshot of your home's energy performance.

"All new homes and many undergoing major renovations must demonstrate that they meet the minimum standard specified under the building regulations (currently seven stars for most states). Most do this using software accredited by NatHERS," James said.

Average ratings have improved dramatically since the standard was established in 2003, from 1.8 stars for older homes to seven stars for most new builds in 2025.

James cautioned that not all properties are included in the database.

"CSIRO has been collecting this data since 2016, so only homes built or renovated after this date and assessed using NatHERS—which accounts for 80% of assessments nationwide, will appear in the system," she said.

NatHERS assessments are also being expanded to existing homes. Eventually this data may be included in the Energy Rating Finder.

If your property isn't covered, try RapidRate, a CSIRO tool that estimates your home's energy efficiency using basic information.

Together, these resources give Australians more insight into home energy performance than ever before.

Have your say...CSIRO is inviting feedback on the Energy Rating Finder, and you can easily opt out if you don't want your property's data shown publicly.

Amp up your home's energy efficiency...Thinking about improving your home's energy efficiency? James said performance depends on many factors, from insulation and window design to shading and ventilation. Even roof color matters.

So what impacts your home's energy rating? Roof color is one example: CSIRO's Dr. Mahsan Sadeghi found that dark roofs absorb and retain heat, contributing to urban heat islands.

While structural changes are easiest during building or renovating, CSIRO offers tips for keeping older houses warm or cool without major work.

And energy-efficient homes don't just lower bills and emissions—they can also boost the property's value.

Improving home energy efficiency—it's the bright thing to do.

Energy efficiency ratings:

-A-G or 0-10 Stars: Many systems, such as those in the European Union and Australia (NatHERS), use a scale from A (most efficient) to G (least efficient), or from 0 to 10 stars for residences.

-More stars = More savings: A higher rating means the house (or appliance) consumes less energy to maintain comfort, resulting in lower utility bills and a smaller carbon footprint.

-Example House (Australia): A 0-star rated house offers little thermal protection, requiring a lot of energy to heat or cool. A 10-star house remains comfortable year-round with little or no need for mechanical heating or cooling.

Tips to improve energy efficiency:

Regardless of your current rating, you can increase your home's efficiency with the following measures:

-Improve insulation: Add insulation to walls, floors, ceilings, and roofs to reduce heat loss.

-Seal air leaks: Use caulk and sealing tape to seal gaps around windows, doors, and wall outlets.

-Upgrade to energy-efficient appliances: When buying new appliances, look for those with the highest star rating (or A rating on the European label).

-Install LED lighting: Replace incandescent bulbs with LEDs, which use up to 75% less energy and last much longer.

-Use smart thermostats: Program your heating and cooling to adjust automatically when you are not at home or sleeping.

-Consider renewable energy: Installing solar panels can reduce or eliminate your reliance on traditional energy sources.

Provided by CSIRO 


TECH


Pixel tracking can significantly increase data breach risk on hospital websites

Researchers find that tracking pixels—small pieces of embedded code that can transmit user data to third parties—significantly increase data breach risk on hospital websites.

Digital tracking technologies have revolutionized online engagement, enabling organizations to collect user behavior data to enhance marketing and operational efficiencies. For instance, tracking pixels—small pieces of embedded code that transmit user data to external vendors—are widely used across industries. While broadly accepted in e-commerce and social media, their integration into the healthcare sector introduces unique ethical, security, and regulatory concerns. Yet, their implications for patient privacy and hospital cybersecurity remain largely unexplored.

Hilal Atasoy and colleagues analyzed 12 years of archived website data from 1,201 large U.S. hospitals between 2012 and 2023, examining the adoption of tracking pixels and their relationship to data breaches. The findings are published in the journal PNAS Nexus.

The authors found pixel tracking in 66% of hospital-year observations, despite stringent privacy regulations. Hospitals using third-party pixels experienced at least a 1.4 percentage point increase in breach probability, representing a 46% relative increase compared to the 3% baseline breach rate.
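As a quick check of that arithmetic, using the rounded figures reported above:

```typescript
// Relating the percentage-point increase to the relative increase (rounded inputs).
const baseline = 3.0;  // baseline breach rate, in percent
const increase = 1.4;  // percentage-point increase with third-party pixels
console.log(`${((increase / baseline) * 100).toFixed(0)}% relative increase`);
// Prints "47%"; the reported ~46% presumably reflects unrounded inputs.
```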

Third-party pixels, which transmit patient data to vendors like Meta and Google, significantly increased breach risk, while first-party pixels that keep data within the hospital showed no significant relationship with breaches. Physical breaches caused by misplaced documents or devices showed no relationship with pixel use, supporting the digital transmission mechanism.

According to the authors, the findings reveal a critical regulatory gap in health care privacy protections, as tracking pixels operate outside traditional Health Insurance Portability and Accountability Act safeguards. The authors recommend hospitals strengthen data governance policies to protect patient information.

Background: pixel tracking in healthcare...HIPAA was enacted in 1996 to protect patients' protected health information (PHI), setting national standards for privacy and security in healthcare. The law applies to healthcare providers and their business associates, requiring safeguards to prevent unauthorized data access. The privacy rule mandates patient consent before PHI is disclosed, while the security rule enforces technical protections. However, these regulations were designed for traditional healthcare IT systems and do not explicitly account for third-party digital tracking technologies.

Recent concerns have emerged regarding tracking pixels, small embedded code snippets that collect user behavior data and transmit it to external vendors. Unlike cookies, which store data locally and can be deleted by users, pixels operate server-side, making them harder to detect and block. As third-party cookies are increasingly phased out due to privacy concerns, pixels have become a dominant tracking tool for measuring user engagement and targeting advertisements (14). Many hospitals use tracking pixels for marketing and website analytics, but these tools can inadvertently share PHI—such as IP addresses, appointment requests, and browsing behavior—with third-party platforms like Meta and Google.
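Mechanically, a pixel fire is just a tiny resource request that carries page context to a vendor's server. This browser-side sketch shows the general pattern; the endpoint is hypothetical, not any vendor's actual tag:

```typescript
// Generic sketch of a third-party tracking pixel (hypothetical vendor endpoint).
// On a hospital site, the page path alone can leak sensitive context,
// e.g. "/appointments/oncology".
const params = new URLSearchParams({
  page: location.pathname,
  ref: document.referrer,
  ts: Date.now().toString(),
});

const pixel = new Image(1, 1); // invisible 1x1 image
pixel.src = `https://collect.example-vendor.test/p?${params.toString()}`;
// The GET request itself is the transmission: the vendor logs the URL
// parameters along with the user's IP address and user agent.
```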

Regulatory agencies have taken notice. The US Department of Health and Human Services (HHS) issued a bulletin in December 2022 clarifying that IP addresses linked to hospital webpages could be considered PHI, raising questions about HIPAA compliance (15). In 2023, HHS and the Federal Trade Commission (FTC) sent warning letters to 130 healthcare providers, cautioning that their tracking pixels could compromise sensitive patient data (16). HIPAA mandates that hospitals remain responsible for breaches of PHI, regardless of whether the data loss occurs within their own IT systems or through third-party tracking tools. HHS has explicitly stated that hospitals transmitting PHI via pixels without safeguards may be subject to breach reporting requirements.

Real-world evidence confirms these risks: Community Health Network (CHN) disclosed in 2023 that a data breach affecting ∼1.5 million patients was directly linked to third-party tracking pixels transmitting PHI (17). In another case, Advocate Aurora Health reported a 3-million-patient breach in 2022, citing tracking pixels embedded on their website as the mechanism for unauthorized PHI disclosure to third parties such as Meta (18). Both hospitals reported these breaches under HIPAA, reinforcing that hospitals bear responsibility for data disclosures caused by tracking technologies.

Despite recent HHS guidance, legal challenges (e.g. a 2024 American Hospital Association lawsuit against HHS over its guidance) have created uncertainty about how HIPAA applies to pixel tracking.

Beyond regulatory concerns, tracking pixels pose cybersecurity risks by increasing hospitals' exposure to third-party data breaches. Once patient data are transmitted to external vendors, hospitals have limited oversight of how they are stored or shared, making them vulnerable to security lapses in third-party systems. Cross-site tracking further heightens these risks, as third-party vendors can aggregate data from multiple websites, making it possible to reconstruct behavioral patterns and infer sensitive health details. Even when hospitals attempt to de-identify patient data, advanced tracking methods can re-identify individuals.

These risks emphasize the need for stronger governance mechanisms to ensure that tracking technologies do not inadvertently expose patient data. As the debate over digital privacy intensifies, healthcare organizations must carefully weigh the benefits of online engagement against the potential risks to patient confidentiality and regulatory compliance. To empirically assess these risks, the authors analyze hospitals' pixel use and its relationship with data breaches.

Provided by PNAS Nexus

Tuesday, December 9, 2025

 

DIGITAL LIFE


Big tech companies' remittances abroad soar and taxes decline in Brazil

Large technology companies — the so-called big techs — significantly increased the share of revenue earned in Brazil that is sent abroad between 2014 and 2024. Over the same period, the share of taxes on these remittances decreased, even as the platforms' revenue in the Brazilian market grew significantly.

The data, obtained by Folha de S. Paulo from the Federal Revenue Service, show a consistent movement of expansion of remittances and changes in the tax composition of the sector, which is now embedded in a scenario of discussions about competitiveness, tax burden and regulation of global technology companies.

Revenue sent abroad exceeds half of the total...According to the Revenue Service figures, the share of big techs' Brazilian revenue remitted abroad more than tripled in the last decade. In 2014, 17.12% of these companies' Brazilian revenue was sent to other countries; in 2024, the proportion reached 55.66%, exceeding half of the total.

The peak occurred in 2023, when remittances represented 61.87% of the total revenue. In absolute terms, the jump was even more evident: from R$ 2.8 billion in 2014 (or R$ 4.93 billion adjusted for inflation) to R$ 80.3 billion in 2024.

Revenue grows 585% in ten years...The gross revenue of the platforms followed this trend. Between 2014 and 2024, the increase was 585%, rising from R$ 21.327 billion (adjusted for inflation) to R$ 144.3 billion.

At the same time, the federal tax burden on gross revenue rose from 17.9% to 22.7%, an increase of almost five percentage points. The data considers the operations of Amazon (including AWS), Apple, Facebook, Google, Google Cloud, Microsoft, and Nvidia in Brazil.

Although big tech companies frequently cite the weight of taxes in the country, taxes specifically linked to remittances decreased proportionally during the period. They represented 30.42% in 2014 and dropped to 22.13% in 2024.

The rates, however, did not change. What changed was the profile of the remittances. Remittances for purposes such as royalties, which have a lower average incidence, began to have a greater share. Remittances related to labor income, which usually have higher taxation, lost ground.

Comparison with other sectors of the economy...The Firjan survey — with data from the Federal Revenue Service, National Treasury, Confaz, Caixa, and IBGE — indicates that the technology sector is not among the most heavily taxed in the country.

The largest total burden falls on the manufacturing industry, with 49.2%. At the other end are agriculture and extractive industries, with 8%. Services, a category that includes technology companies, financial institutions, and other segments, appear with 29.7%.

Considering only federal taxes, industry also leads, with 23.2%. Services are in second place, with 16.9%.

In a statement, Jonathas Goulart, chief economist at Firjan, stated that "industry already appears as the sector that pays the most taxes in Brazil," highlighting the weight of ICMS (a state-level sales tax) on industrial production.

Position of technology companies...The Brazilian Chamber of the Digital Economy (camara-e.net), which represents companies such as Amazon, Google, and Meta, argues that the sector is among the country's largest taxpayers and plays a relevant role in Brazilian economic development.

The entity forwarded a technical report prepared by the consulting firm LCA, with data from the Federal Revenue Service, indicating that digital service companies collect, on average, 16.4% of their gross revenue in federal taxes — a value considered more than double the average of other sectors (6.1%).

In companies classified under the actual profit regime, the burden reaches 18.3% of revenue, above the rate for companies under the presumed profit regime, of 12.8%.

Regarding the increase in remittances, the entity states that this practice is part of the natural functioning of global companies, especially in the case of technology operations intensive in intellectual property and international services.

Ana Luiza Figueiredo, Brazil
