Wednesday, February 18, 2026


DIGITAL LIFE


How will chatbots and AI-generated content influence the 2026 elections in Brazil?

ChatGPT's response to the question of who to vote for in the 2026 presidential elections is something like "I can't comment." However, the artificial intelligence profiles President Luiz Inácio Lula da Silva (PT) and Senator Flávio Bolsonaro (PL), summarizes electoral polls, lists strengths and weaknesses, and suggests what to consider when making a decision.

The consultation leads to the following conclusion: for those who prioritize "administrative experience and broad social policies," Lula would be the strongest candidate. If the focus is on a "conservative agenda, security, and less state intervention in certain areas," Flávio would be the most aligned.

Other chatbots, such as Gemini and Claude, follow a similar approach. The tools don't suggest a name, but they provide assessments of each candidate, list pros and cons, and propose how the user can reach a conclusion.

While the Superior Electoral Court (TSE) finalizes the rules on the use of artificial intelligence in the 2026 elections, the debate is growing about how the spread of AI could influence the election — whether through chatbots or the advancement of AI-generated content.

The influence of chatbots...Two recent studies show that talking to an AI can be more persuasive than traditional election propaganda.

A study from the University of Oxford, published in December, analyzed this effect in 76,977 people. Before the conversation with the AIs, participants' opinions on politics were measured on a scale of 0 to 100. After the dialogue with the systems, which were programmed to defend certain points, the response was measured again. In the most persuasive scenario, the average of the responses changed by 15.9 points.

Research published in Nature, also in December, with six thousand voters, showed a similar effect. The focus was on the USA, Canada, and Poland.

In the American experiment, pro-Donald Trump and pro-Kamala Harris voters had to interact with systems that advocated for the rival candidate. The conversations altered the participants' preferences by up to 3.9 points on a scale of 0 to 100—about four times the average effect of electoral advertising recorded in the country in the two previous elections. In experiments in Canada and Poland, the change reached about 10 points.

The results do not indicate that conventional AI tools act as campaign workers. But they reveal how persuasive language models can be when they are configured to defend a point of view.

In the 2024 elections, the TSE (Superior Electoral Court) restricted the use of bots posing as candidates to contact voters. It also prohibited deepfakes and mandated warnings about the use of AI in electoral advertising.

This year, organizations such as NetLab UFRJ are also advocating that chatbots be prohibited from endorsing candidates and that electoral advertisements within AI systems be banned.

Trust in AI...In Brazil, one of the concerns is the level of trust that voters have placed in these tools, which operate under an aura of neutrality and authority, although they can make mistakes and reproduce biases.

A recent study by Aláfia Lab shows that 9.7% of Brazilians see AI systems, such as ChatGPT and Gemini, as a source of information. Matheus Soares, coordinator of the laboratory and co-coordinator of the AI in Elections Observatory, says that "confusing and inaccurate" answers can end up being interpreted as real in the electoral context.

In addition to errors, there are doubts about bias, says anthropologist David Nemer, a professor at the University of Virginia. He cites another Oxford study that identified regional distortions: in an analysis of millions of interactions, ChatGPT attributed less intelligence to Brazilians from the North and Northeast. The risk is that this type of bias will surface in a contest with thousands of candidates spanning not only the Executive branch but also the Legislative branch.

"This is a space where people trust that what is produced is true. But this 'truth' is based on a system whose origin and foundations are opaque," says the researcher. He adds that, unlike the disputes on social media, chatbots are usually seen as "neutral."

Fernando Ferreira, a researcher at UFRJ's NetLab, adds that the presence of AI has expanded on the internet beyond chatbots. Search engines, such as Google, present answers generated by artificial intelligence, while tools like Grok, from X, are used for fact-checking. And the answers are seen as "a source of truth."

In 2024, Google even restricted answers about elections in Gemini. The filter, however, failed: the AI answered about some candidates in the municipal race, but not about others. In the case of OpenAI, the models can deal with politics, but must be neutral. In an October publication, the company stated that internal tests identified that less than 0.01% of ChatGPT's answers showed signs of political bias.

Gender violence and electoral integrity...Another concern is disinformation, in videos, audio or images, amplified by artificial intelligence. The researchers' interpretation is that the impact of technology on this electoral process should be greater than it was two years ago. One of the reasons is the spread of AI content, which has become more accessible, faster, and more believable.

Nemer says that, in addition to misinformation about candidates, he is concerned about the spread of deepfakes (realistic content generated by artificial intelligence) that question the integrity of the electoral system. He cites, as an example, manipulated videos that simulate failures in electronic voting machines, which could undermine voter confidence.

For Soares, a point of concern is deepnudes (hyperrealistic images that simulate nudity), which were already exploited in 2024 and could intensify gender-based political violence this year. Both expect candidates and supporters to use more AI tools to produce political material.

Two weeks ago, Agência Lupa showed that the share of fake content produced with AI, among the organization's fact-checks, jumped from 4.65% to 25.77% in one year. Almost 45% of the cases had a political bias.

For Laura Schertel, a professor at IDP and UnB, the main challenge for the TSE (Superior Electoral Court) will be the implementation of existing rules. Among the proposals submitted to the Court, the researcher cites the creation of a mandatory compliance plan, in which companies explain in advance how they will apply and monitor electoral rules.

— The TSE is not a digital regulator. So there is a great challenge, which is how to ensure that this court, which issues new rules, has the capacity to implement them — says the lawyer.

mundophone

Tuesday, February 17, 2026


TECH


Demonstration of mass connectivity for the 6G era

The National Institute of Information and Communications Technology (NICT) has developed a hybrid signal processing method that integrates an annealing-based quantum computer with classical computing for next-generation mobile communication systems. By implementing this method in a base station, the team successfully demonstrated simultaneous communications with 10 devices in outdoor experiments, addressing the massive connectivity requirements anticipated for the 6G era.

The proposed approach utilizes quantum annealing to efficiently solve the combinatorial optimization problem arising in signal detection under multi-antenna and multi-carrier transmission. This result represents a significant step toward realizing large-scale machine-to-machine communications in future 6G networks, including applications involving drones, robots, and XR devices.

Photograph of the outdoor OTA experiment. Credit: National Institute of Information and Communications Technology (NICT)

This work was presented on January 9, 2026, at the IEEE Consumer Communications & Networking Conference (CCNC) 2026.

With the widespread adoption of drones, robots, and XR devices, next-generation wireless systems (6G) are expected to provide significantly enhanced uplink mass connectivity. Compared with current fifth-generation mobile communication systems (5G), device density in 6G networks is anticipated to increase by more than an order of magnitude.

One promising technology for addressing this challenge is non-orthogonal multiple access (NOMA), which enables multiple devices to transmit simultaneously over the same time and frequency resources. In such scenarios, signals from multiple devices are superposed at the base station and must be individually detected.

New hybrid signal processing method combining a quantum annealing machine and a classical computer. Credit: National Institute of Information and Communications Technology (NICT)

If the number of devices is denoted by K and the modulation order of each device by M, the number of signal combinations grows exponentially as M^K. Consequently, the computational complexity increases rapidly with the number of connected devices, leading to large processing latency and making real-time detection difficult.

Previously, the research team had developed a hybrid signal processing method combining a quantum annealing machine with classical computing (hereafter referred to as the "previous method"). In this framework, the quantum annealing machine efficiently explores candidate signal combinations, while a classical computer performs post-processing to estimate the probability distributions required for signal detection, thereby achieving both high detection accuracy and fast processing.
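NICT has not released code for this detector, so the sketch below is a hypothetical Python toy of ours, not NICT's implementation: uplink detection is posed as a search over candidate symbol vectors, and plain simulated annealing stands in for the quantum annealer. To keep the toy simple, BPSK replaces QPSK (one bit per device) and eight real-valued observations stand in for the real system's four antennas plus multi-carrier structure.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy uplink model: K devices transmit at once (NOMA-style) and their
    # signals superpose at the receiver. BPSK gives 2^K candidate vectors;
    # the NICT setup uses QPSK, i.e. M^K = 4^8 combinations.
    K, N = 8, 8                                 # devices, observations (assumed)
    H = rng.normal(size=(N, K)) / np.sqrt(N)    # channel, assumed already estimated
    x_true = rng.choice([-1.0, 1.0], size=K)    # transmitted symbols
    y = H @ x_true + 0.05 * rng.normal(size=N)  # superposed signal plus noise

    def energy(s):
        # Maximum-likelihood metric: residual power of candidate vector s.
        r = y - H @ s
        return float(r @ r)

    # Simulated annealing over candidate vectors -- the combinatorial search
    # that the (simulated or D-Wave) annealer performs in the hybrid method,
    # with classical processing around it.
    s = rng.choice([-1.0, 1.0], size=K)
    e = energy(s)
    best, best_e = s.copy(), e
    n_steps = 2000
    for step in range(n_steps):
        T = max(1e-3, 1.0 - step / n_steps)     # linear cooling schedule
        k = rng.integers(K)
        s[k] *= -1                              # propose: flip one device's symbol
        e_new = energy(s)
        if e_new <= e or rng.random() < np.exp(-(e_new - e) / T):
            e = e_new                           # accept the flip
            if e < best_e:
                best, best_e = s.copy(), e
        else:
            s[k] *= -1                          # reject: undo the flip

    print("all symbols recovered:", bool(np.array_equal(best, x_true)))

In the hybrid framework described above, this annealing stage would only propose low-energy candidates; the probability estimation needed for soft-decision decoding is then done classically.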

However, the effectiveness of the previous method had only been demonstrated for limited communication scenarios. Its applicability to multi-antenna and multi-carrier transmission, which is an essential component of both 5G and future 6G systems, had remained unclear.

To address this limitation, NICT developed a new hybrid signal processing method that integrates the quantum annealing machine with a classical computer and applies to multi-antenna and multi-carrier transmission. The proposed method also incorporates essential components of modern mobile communication systems, such as channel estimation using reference signals, making it suitable for practical 6G deployment.

Comparison between the proposed method and the conventional LMMSE method based on computer simulations. Credit: National Institute of Information and Communications Technology (NICT)

The team evaluated block-error-rate performance of the proposed method through numerical simulations under the following conditions: four receive antennas at the base station, QPSK modulation (M=4), and eight connected devices (K=8). This setting corresponds to a combinatorial optimization problem involving 4^8, or approximately 65,000, possible signal combinations.

In these simulations, simulated quantum annealing (SQA) was used as the annealing method. The results confirmed that the proposed method achieves higher detection performance than the widely used linear minimum mean square error (LMMSE) approach.
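For reference, the LMMSE baseline is a single linear filter, x_hat = (H^T H + sigma^2 I)^(-1) H^T y, followed by a hard decision. The toy below (again real-valued BPSK with assumed dimensions, not NICT's exact setup) shows why linear detection struggles in the overloaded case of eight devices on four antennas:

    import numpy as np

    rng = np.random.default_rng(1)
    K, N, sigma2 = 8, 4, 0.01                  # overloaded: more devices than antennas
    H = rng.normal(size=(N, K)) / np.sqrt(N)
    x = rng.choice([-1.0, 1.0], size=K)
    y = H @ x + np.sqrt(sigma2) * rng.normal(size=N)

    # Textbook LMMSE: solve (H^T H + sigma^2 I) x_hat = H^T y, then slice.
    x_lmmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(K), H.T @ y)
    x_hat = np.sign(x_lmmse)
    print("symbol errors:", int(np.sum(x_hat != x)))

With only four observations for eight unknowns, the linear filter typically leaves residual symbol errors that an exhaustive or annealing-based maximum-likelihood search can avoid, which is consistent with the performance gap reported here.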

Subsequently, the proposed method was implemented at a base station in a wireless communication experimental system, and outdoor over-the-air (OTA) experiments were conducted. Using the same system parameters as in the simulations, performance was evaluated using both SQA and the D-Wave quantum annealing machine. The experiments demonstrated error-free signal detection for both annealing methods. Further experiments confirmed successful simultaneous communication with up to 10 devices.

These results demonstrate that the proposed hybrid signal processing method can effectively support the massive connectivity expected in the 6G era, corresponding to a tenfold increase in device density compared with 5G systems.

Comparison of the proposed method and the LMMSE method on outdoor OTA experiments. Credit: National Institute of Information and Communications Technology (NICT)

Future prospects...This achievement represents a major step toward realizing the massive connectivity required in the 6G era and is expected to enable a wide range of machine-type communications involving drones, robots, and XR devices. Going forward, the researchers will continue to advance experimental demonstrations aimed at supporting even larger-scale mass connectivity.


Provided by National Institute of Information and Communications Technology (NICT)

 

TECH


What big tech is hiding from you in 2026

The technology universe is going through a moment of unprecedented transformation. In fact, the first weeks of February 2026 brought revelations that directly affect privacy, employment, and the future of global entertainment. Artificial intelligence, legal disputes, and corporate crises dominate the spotlight as industry giants make decisions that impact billions of people.

Furthermore, the American political landscape intensifies the pressure on companies like Google, Meta, and OpenAI, creating an environment of regulatory uncertainty that promises to reshape the rules of the technological game. Consequently, consumers and investors need to follow every move to understand what lies ahead.

ByteDance's Seedance 2.0 threatens Hollywood...Firstly, the advancement of artificial intelligence has reached a critical point for the entertainment industry. An AI-generated video showing Tom Cruise fighting Brad Pitt, produced with ByteDance's Seedance 2.0 tool, went viral and caused panic in Hollywood. In response, Disney and Paramount-Skydance have already sent cease and desist notices to the Chinese company for intellectual property infringement.

The launch of Seedance 2.0, the new AI-powered video generator from ByteDance (owner of TikTok), generated an immediate crisis with Hollywood in mid-February 2026. The technology is considered a direct threat to the traditional film industry due to its ability to create hyper-realistic content that, according to studios and unions, violates copyright on a large scale.

Main points of the conflict (below):

Legal battle: Giants like Disney and Paramount have already initiated legal action and sent "cease and desist" notices against ByteDance.

Use of image: The trigger for the controversy included viral videos generated by AI simulating confrontations between real actors, such as Tom Cruise and Brad Pitt, without authorization.

Reaction of the unions: SAG-AFTRA (the Hollywood actors' union) publicly condemned the tool, alleging that Seedance 2.0 uses protected works for training without compensation or consent from the artists.

ByteDance's retreat: Under strong legal pressure and industry criticism, ByteDance announced it will strengthen security protocols and copyright protections in Seedance 2.0 to prevent the generation of infringing content.

On the other hand, Mustafa Suleyman, Microsoft's head of AI, made an alarming prediction: most administrative jobs will be automated within 18 months. Thus, millions of professionals face an uncertain future as corporations accelerate the adoption of these technologies.

Meta and YouTube in the dock for child addiction...A landmark trial began on February 9, 2026, in Los Angeles Superior Court, where Meta (parent company of Instagram and Facebook) and YouTube are accused of deliberately designing their platforms to be "addictive machines" that harm children's mental health. 

The case is being watched as a "bellwether" that could influence hundreds of similar lawsuits nationwide and potentially reshape the digital landscape for minors. Internal documents revealed that a senior researcher at Meta alerted executives to as many as 500,000 daily cases of child sexual exploitation on Facebook and Instagram. In other words, the company knew the seriousness of the problem and did little.

In court, the head of Instagram, Adam Mosseri, defended the company by arguing that "problematic" use differs from "clinical addiction," a distinction that critics consider insufficient.

Digital privacy under silent attack...Digital privacy is currently under a "silent attack" characterized by stealthy, sophisticated methods that often operate below the threshold of user awareness or traditional security detection. As the world becomes increasingly connected, personal data is being compromised, stolen, and leaked with regularity, often in exchange for convenience. The Nancy Guthrie case exposed worrying vulnerabilities: the FBI managed to retrieve videos from a Google Nest camera even without an active cloud storage subscription, accessing Google's "internal systems." The takeaway is that smart home devices store far more data than consumers realize.

Meanwhile, the FTC expanded its investigation into advertising boycotts against conservative media, while Attorney General Pam Bondi vowed to work to end warrantless surveillance of American citizens.

Electric vehicles in crisis and semiconductors on the rise...In contrast to the optimism surrounding AI, the electric vehicle sector is facing severe turbulence. Honda reported a 42% drop in profits, while Ford admitted that its electric division will continue to generate losses for years. Meanwhile, Western Digital has exhausted its entire stock of hard drives for 2026, driven by the insatiable demand from AI data centers.

Finally, the global semiconductor industry is on track to reach the historic milestone of $1 trillion in annual revenue by 2026. In other words, while some technology sectors are faltering, the infrastructure that underpins the artificial intelligence revolution is experiencing its most lucrative moment—and the impact of this race has barely begun to be felt.

mundophone

Monday, February 16, 2026

 

TECH


Why DDR4 memory is the smartest choice for your PC in 2026

If we go back to 2023, the narrative in the tech world was clear: DDR4 was on its way out, and DDR5 was the inevitable path for anyone who wanted a system minimally prepared for the future. At the time, many enthusiasts—myself included—invested in new platforms, such as AMD's AM5, believing that the transition would be quick and that prices would fall as production increased. However, the scenario we find ourselves in today, in 2026, is radically different and, in a way, ironic.

The technological evolution, which should have made the new standard accessible, followed an inverse path. Instead of a price drop, we saw an explosion in the costs of DDR5 memory, while the "old" DDR4 consolidated itself as the only viable alternative for the average user's budget. Building a PC with next-generation components became a prohibitive luxury, elevating DDR4 to the status of queen of cost-effectiveness.

The Impact of Artificial Intelligence on Your Hardware...You might question why a technology that's only been on the market for five years is gaining traction again. The answer lies not in the home consumer market, but in the gigantic data centers dedicated to Artificial Intelligence (AI). The AI "boom" in 2025 has shifted the priorities of semiconductor factories. These data centers devour all the bandwidth and high-performance memory they can find, focusing almost exclusively on the DDR5 standard and HBM (High Bandwidth Memory).

For manufacturers, the choice is purely economic: profit margins selling to AI companies are immensely higher than those obtained in the gaming or personal productivity markets. This has resulted in a massive diversion of production, leaving ordinary consumers with DDR5 memory kits costing absurd amounts, often between 400 and 500 euros.

In this logistical chaos, DDR4 has become a safe haven. Since it is produced in older, more mature manufacturing nodes—which are not suitable for next-generation AI accelerators—its stock remains stable and prices remain rational. The future they promised us has literally been diverted to the backstage of AI servers.

Despite the theoretical speed difference, the practical truth in the gaming field favors staying with the previous standard. Currently, most gamers are migrating to resolutions of 1440p or higher. In these scenarios, the main effort falls on the graphics card (GPU) and not so much on the processor or the system's RAM.

When you play at high resolutions and use technologies such as frame generation or intelligent upscaling, the performance difference between a 3600MHz DDR4 memory and a top-of-the-line DDR5 is minimal. Recent tests show that this variation is often between 1% and 3%. This is where you should ask the crucial question: does it make sense to pay a 400% premium in price to get only a 2% gain in the smoothness of your game?
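To put that question in numbers, here is a back-of-envelope calculation in Python, using assumed figures in line with the 400-500 euro range cited above (illustrative values, not retailer quotes):

    # Assumed prices for a 32 GB kit and an assumed ~2% fps uplift at 1440p.
    ddr4_price, ddr5_price = 90.0, 450.0    # euros (assumed)
    fps_ddr4, fps_ddr5 = 100.0, 102.0       # frames per second (assumed)

    premium = (ddr5_price - ddr4_price) / ddr4_price * 100
    gain = (fps_ddr5 - fps_ddr4) / fps_ddr4 * 100
    per_fps = (ddr5_price - ddr4_price) / (fps_ddr5 - fps_ddr4)
    print(f"price premium: {premium:.0f}%, performance gain: {gain:.0f}%")
    print(f"cost per extra frame per second: {per_fps:.0f} euros")

At those assumed figures, the premium is 400% for a 2% gain, or roughly 180 euros per additional frame per second.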

Modern video cards can compensate for almost all system bandwidth limitations, making DDR4 memory a component that, far from being a bottleneck, performs its function admirably for a fraction of the cost.

The maturity of the ecosystem and the used market...Another determining factor for the current success of DDR4 is the maturity of the platform. After years on the market, all compatibility issues and BIOS errors have been resolved. When you buy a DDR4 motherboard today, you are acquiring an extremely reliable product.

In addition, the second-hand market is flooded with high-quality components. Large companies are upgrading their computer parks, releasing DDR4 processors and memory at bargain prices. You can easily find powerful processors from the late AM4 platform era that, combined with a good amount of cheap RAM, offer a premium user experience that rivals newer, more expensive systems.

In 2026, being an informed user means ignoring aggressive marketing of new features and looking at real performance and price data. DDR4 isn't just a nostalgic choice; it's the only sensible decision for those who value their money without sacrificing quality. If you're thinking about upgrading your PC this year, take a close look at what you already know. Sometimes, taking a step back in generation is the biggest leap forward you can make for your wallet.

DDR4 remains the smartest, most budget-friendly choice for PCs in 2026, saving $200–$400 in build costs while offering mature, reliable options with low latency, high capacity, and strong performance for gaming and office work. It is also a cost-effective way to upgrade older systems.

Why DDR4 is the smartest choice in 2026 (below):

Significant cost savings: DDR4 is considerably cheaper than DDR5, helping to bypass high memory prices in a tight market, potentially saving you over $200 on a full build.

Mature & reliable technology: With over a decade of maturity, DDR4 offers excellent stability, avoiding the high costs, high temperatures, and potential initial BIOS headaches associated with newer platforms.

Solid performance: High-quality DDR4 (e.g., 3200MHz/3600MHz) still provides excellent performance, and with low latency, it often outperforms or matches budget-level DDR5.

Excellent upgrade path: Perfect for upgrading existing systems (AM4/LGA1700), allowing users to boost performance in older, yet still capable, rigs without replacing the entire machine.

Reduced Total Cost of Ownership: By opting for cheaper DDR4 kits, you can reallocate funds towards a better GPU or CPU, providing a more balanced, high-performance system for the price. 

Key considerations: While DDR4 offers great value, it is near the end of its life cycle, which limits future scalability compared to DDR5. However, for the majority of users, the performance and price benefits make it an exceptionally practical choice in 2026. 

by mundophone

 

DIGITAL LIFE


Study maps seven roles for generative AI in fighting disinformation

Generative AI can be used to combat misinformation. However, it can also exacerbate the problem by producing convincing manipulations that are difficult to detect and can quickly be copied and disseminated on a wide scale. In a new study, researchers have defined seven distinct roles that AI can play in the information environment and analyzed each role in terms of its strengths, weaknesses, opportunities and risks.

"One important point is that generative AI has not just one but several functions in combating misinformation. The technology can be anything from information support and educational resources to a powerful influencer. We therefore need to identify and discuss the opportunities, risks and responsibilities associated with AI and we need to create more effective policies," says Thomas Nygren, Professor at Uppsala University, who conducted the study together with colleagues at the University of Cambridge, U.K., and the University of Western Australia.

The study, published in Behavioral Science & Policy, is an overview in which researchers from a range of scholarly disciplines have reviewed the latest research on how generative AI can be used in various parts of the information environment. These uses range from providing information and supporting fact-checking to influencing opinion and designing educational interventions, and the study considers the strengths, weaknesses, opportunities and risks associated with each use.

The researchers chose to work with a SWOT framework as this leads to a more practical basis for decisions than general assertions that "AI is good" or "AI is dangerous." A system can be helpful in one role but also harmful in that same role. Analyzing each role using SWOT can help decision-makers, schools and platforms discuss the right measures for the right risk.

"The roles emerged from a process of analysis where we started out from the perception that generative AI is not a simple 'solution' but a technology that can serve several functions at the same time. We identified recurrent patterns in the way AI is used to obtain information, to detect and manage problems, to influence people, to support collaboration and learning, and to design interactive training environments. These functions were summarized in seven roles," Nygren explains.

Seven roles (below):

1. Informer: Strengths/opportunities: Can make complex information easier to understand, translate and adapt language, can offer a quick overview of large quantities of information.

Problems/risks: Can give incorrect answers ('hallucinations'), oversimplify and reproduce training data biases without clearly disclosing sources.

2. Guardian: Strengths/opportunities: Can detect and flag suspect content on a large scale, identify coordinated campaigns and contribute to a swifter response to misinformation waves.

Problems/risks: Risk of false positives/negatives (irony, context, legitimate controversies), distortions in moderation, and lack of clarity concerning responsibility and rule of law.

3. Persuader: Strengths/opportunities: Can support correction of misconceptions through dialogue, refutation and personalized explanations; can be used in pro-social campaigns and in educational interventions.

Problems/risks: The same capacity can be used for manipulation, microtargeted influence and large-scale production of persuasive yet misleading messages—often quickly and cheaply.

4. Integrator: Strengths/opportunities: Can structure discussions, summarize arguments, clarify distinctions, and support deliberation and joint problem-solving.

Problems/risks: Can create false balance, normalize errors through 'neutral synthesis,' or indirectly control problem formulation and interpretation.

5. Collaborator: Strengths/opportunities: Can assist in analysis, writing, information processing and idea development; can support critical review by generating alternatives, counterarguments and questions.

Problems/risks: Risk of overconfidence and cognitive outsourcing; users can fail to realize that the answer is based on uncertain assumptions and that the system lacks real understanding.

6. Teacher: Strengths/opportunities: Can give swift, personalized feedback and create training tasks at scale; can foster progression in source criticism and digital skills.

Problems/risks: Incorrect or biased answers can be disseminated as 'study resources'; risk that teaching becomes less investigative if students/teachers uncritically accept AI-generated content.

7. Playmaker: Strengths/opportunities: Can support design of interactive, gamified teaching environments and simulations that train resilience to manipulation and misinformation.

Problems/risks: Risk of simplifying stereotypes, ethical and copyright problems, and that gaming mechanisms can reward the wrong type of behavior if the design is not well considered.

AI must be implemented responsibly...The point of the roles is that they can serve as a checklist: they help us to see how each role can contribute to strengthening the resilience of society to misinformation, but also how each role entails specific vulnerabilities and risks. The researchers therefore analyzed each role using a SWOT approach: what strengths and opportunities it embodies, but also what weaknesses and threats need to be managed.

"We show how generative AI can produce dubious content yet can also detect and counteract misinformation on a large scale. However, risks such as hallucinations, in other words, that AI comes out with 'facts' that are wrong, reinforcement of prejudices and misunderstandings, and deliberate manipulation mean that the technology has to be implemented responsibly. Clear policies are therefore needed on the permissible use of AI."

The researchers particularly underline the need for:

-Regulations and clear frameworks for the permissible use of AI in sensitive information environments;

-Transparency about AI-generated content and systemic limitations;

-Human oversight where AI is used for decisions, moderation or advice;

-AI literacy to strengthen the ability of users to evaluate and question AI answers.

"The analysis shows that generative AI can be valuable for promoting important knowledge in school that is needed to uphold democracy and protect us from misinformation, but having said that, there is a risk that excessive use could be detrimental to the development of knowledge and make us lazy and ignorant and therefore more easily fooled. Consequently, with the rapid pace of developments, it's important to constantly scrutinize the roles of AI as 'teacher' and 'collaborator,' like the other five roles, with a critical and constructive eye," Nygren says.

Provided by Uppsala University

Sunday, February 15, 2026

 

TECH


Interpol backroom warriors fight cyber criminals 'weaponising' AI

From perfectly spelled phishing emails to fake videos of government officials, artificial intelligence is changing the game for Interpol's cat-and-mouse fight against cybercrime at its high-tech war rooms in Singapore.

Their foe: crime syndicates, structured like multinational firms, which are exploiting the fast-evolving technology to target individuals, states and corporations for billions of dollars.

"I consider the weaponization of AI by cybercriminals... as the biggest threat we're seeing," Neal Jetton, Interpol's Singapore-based director of cybercrime, told AFP.

"They are using it in whatever way they can," added Jetton, who is seconded to Interpol from the US Secret Service, the federal agency in charge of presidential protection.

AFP was granted a look inside the global organization's multi-pronged cybercrime facility, where specialists pore through massive amounts of data in a bid to prevent the next big ransomware attack or impersonation scam.

Jetton said the "sheer volume" of cyber attacks worries him the most.

"It's going to only expand, and so you just need to get the word out to people," so they understand "how often they're going to be targeted," he said.

AI technology is allowing criminals around the world to create sophisticated voice and video copies of well-known figures to endorse scam investments, and helping make dodgy online messages appear more genuine.

Jetton warned that even low-skilled criminals can purchase ready-made hacking and scamming tools on the dark web—and anyone with a smartphone can be a target.


'Black market'...The facility is part of the Interpol Global Complex for Innovation, not far from the Singapore Botanic Gardens.

It is the organization's second headquarters after Lyon in France, and houses the Cyber Fusion Center, a nerve center for sharing intelligence of online threats among 196 members.

Another office in the complex studies emerging online threats, while a digital forensics lab extracts and analyzes data from electronic devices like laptops, phones and even cars.

A command-and-coordination center, like a mini space mission control with staff facing big screens, monitors global developments in real time during Asian hours.

Intelligence analysts scrutinize millions of data points—from web addresses and malware variants to hacker code names—that could provide leads in active investigations.

Christian Heggen, coordinator of the Cyber Intelligence Unit, said they are up against a "large ecosystem of cyber criminals" who use "a number of different attack vectors".

"They get quite creative. It's a whole black market of spying and selling stolen data, buying and selling malware. We have to understand that ecosystem," he said.

To strengthen its capabilities, Interpol partners with private firms in finance, cybersecurity and cryptocurrency analysis.

"It's always a cat-and-mouse game, always continually developing. That's why a department like this is quite important, because we can provide the latest intelligence and information," Heggen said.


'AI has no soul'...Last year, Interpol's cybercrime directorate coordinated "Operation Secure" in Asia, which saw 26 countries work together to dismantle more than 20,000 malicious IP addresses and domains linked to data-stealing syndicates.

Another anti-cybercrime operation across Africa, called "Operation Serengeti 2.0" coordinated from Singapore, saw authorities arrest 1,209 cybercriminals who targeted nearly 88,000 victims. More than $97 million was recovered and 11,432 malicious infrastructures were dismantled.

Jetton said Interpol supported the crackdown on the online scam centers in Southeast Asia through intelligence-sharing and resource development.

The Innovation Center's head, Toshinobu Yasuhira, a Japanese officer seconded from the National Police Agency, said advances in deepfake technology have become a growing concern, but one of his deeper worries lies ahead: AI acting beyond human control.

"Should we arrest people who program the AI, or who utilize AI, or should we arrest the AI itself?" he said in an interview.

"It's kind of very difficult because AI doesn't have any soul, heart."

Paulo Noronha, a digital forensics expert from Brazil's Federal Police, demonstrated some of the lab's high-tech tools designed to keep investigators a step ahead.

Experts at the lab are working on the further use of virtual reality, augmented reality and quantum technology against cybercriminals.

"It's up to us to stay ahead of criminals," he said. "That's why we have systems like these."

For Jetton and his colleagues, the fight rarely enters the public eye, but is vital to global security.

"We try to be as confidential as we can," one intelligence analyst said.

"We're providing key support for operations and investigations around the world."

© 2026 AFP


TECH


Big Tech isn’t hiring like it used to, unless you say the magic words

When Big Tech started slashing jobs in late 2022, it felt like a brief (and painful) correction to the pandemic-era hiring binge, when Apple, Amazon, Meta, Microsoft, and Alphabet collectively added more than 960,000 jobs during the peak of the digital demand boom.

According to TechCrunch, more than 22,000 US tech workers have been let go just this year — including Intel slashing 20% of its workforce, Meta trimming Reality Labs, Amazon's ~100 job cuts, Google's back-to-back downsizing rounds, and just last week, Microsoft laying off 6,000 employees globally.

Looking across a selection of the largest public US technology firms, it’s easy to see that headcount growth has either slowed or outright reversed in the past two years for many.

In fact, that divergence has been playing out within America’s tech companies as well. If you’re close to the action in AI, your stock is probably rising internally. But if you’re in an operational role, administrative job, or even in a field of software engineering that’s more exposed to AI, you might not be feeling as secure.

Since February 2020, US job listings for software development roles have fallen nearly 40%, and IT help desk roles are down over 30%, according to data from hiring platform Indeed. That’s significantly worse than other sectors like finance or legal, and well below the broader job market, where listings are up 6%.

Postpandemic, much of the tech world’s obsession with getting lean — CEO Mark Zuckerberg called 2023 the “year of efficiency” for Meta — came from rising interest rates, margin pressure, and a reckoning with Covid-era overhiring. But now, something else is reshaping the tech job market, which some experts are calling “a very powerful ChatGPT effect.”

According to the University of Maryland’s January research, the number of IT job postings dropped 27% from the end of 2022 to 2024, while AI-related roles jumped 68%.

Researchers see this divergence as “clear evidence” of ChatGPT’s growing influence, as the chatbot’s late-2022 debut prompted companies to rethink how they build (and staff) their tech stacks — starting with the lowest-hanging tasks for machines to take over. That has only accelerated in the wake of rival chatbots like DeepSeek, Claude, Perplexity, and others.

Kanary in the coal mine?...Take Klarna, the Swedish “buy now, pay later” firm that’s been leaning hard on AI, so much so that a hyperrealistic AI-generated avatar of CEO Sebastian Siemiatkowski presented the company’s quarterly earnings earlier this week. 

Beyond its actual results, what grabbed investors’ attention — or at least, what Klarna execs probably hoped would be the focus ahead of its long-awaited IPO — was a whopping 154% jump in its revenue per employee over the past two years, which reached an impressive $877,000.

Under a mixed agriculture/military metaphor titled "Reaping the benefits of spearheading AI," Klarna touted that it had reduced its workforce by roughly 40% in just two years.

The biggest cost savings in Q1 came from customer service, where Klarna replaced human agents with its in-house AI chatbot, cutting service costs by 40% since 2023.

That’s a sector that has long been cited as one of the most vulnerable to AI, with Klarna saying its chatbot now does the work of 700 people. Following complaints about its “lower quality,” however, Klarna recently said it will bring back real people — though it’s unclear how many bots (and humans) the company will ultimately retain.

Despite the slimmed-down workforce, Klarna’s net loss more than doubled in Q1, to some $99 million.

From phones to keyboards...But it’s not just about call centers anymore; AI is creeping into corporate jobs, too, the kind of work once considered out of reach for automation. As part of larger global layoffs, Microsoft recently cut ~2,000 jobs in its home state of Washington and software engineers bore the brunt of the pain, accounting for 40% of the cuts, per Bloomberg. CEO Satya Nadella revealed that AI now writes up to 30% of the company’s code on certain projects. Over at Google, Chief Scientist Jeff Dean said in March that AI could soon match the performance of junior engineers.

It raises the real question: is any of this shift showing up in actual hiring data? To the relief of the 2.2 million software developers in the US, it seems they haven’t entirely been sidelined just yet — though AI is reshaping the rules of who gets in the door.

According to a new report from venture capital firm SignalFire, Big Tech’s hiring for software engineering roles still grew about 3% year over year last year, while there was a 27% surge in AI hires, and less technical functions like marketing and sales fell by double digits.

And while tech hiring hasn’t collapsed across the board, early-career workers are taking the hardest hit — when the overall labor market is already freezing out job seekers fresh out of college. Per SignalFire, new-grad hiring at Big Tech fell 25% last year and is now more than 50% below prepandemic levels. Meanwhile, mid- and senior-level hiring is surging — up 27% year over year for those with two to five years of experience, and 34% for those with five to 10 years — as companies opt for seasoned engineers who can hit the ground running, rather than training juniors when AI can handle the basics.

Author: Hyunsoo Rim
