Tuesday, March 25, 2025

 

SONY


Exmor T-packed Sony Xperia 1 VII likelihood improves thanks to new rumor

A new rumor about the Sony Xperia 1 VII has seemingly helped partially confirm a previous claim about the upcoming 2025 flagship smartphone. While the camera-related rumor about the Mark 7 variant of the Xperia 1 has surfaced before, it is the new source of the information that should give fans reason to be hopeful.

There are always plenty of rumors circulating about the next iteration of the Xperia 1 line, and it is no different for the expected Sony Xperia 1 VII. While some of the information comes from spurious sources, details occasionally crop up from more trustworthy tipsters. This latest rumor comes from one of the best-known tech leakers, Digital Chat Station (DCS), who has been prolific in terms of sharing leaks over the years. However, there is one caveat to keep in mind about this relevant post from DCS, which will be discussed below.

Firstly, there is the post itself, which can be found on Threads and comes off the back of a follower asking Digital Chat Station to post some news about the Sony Xperia 1 VII. The leaker later stated “Sony Xperia 1 VII to get Exmor T for all cameras”, which is something we reported on back in February. With the Sony Xperia 1 VI (currently available on Amazon) “only” having one Exmor T sensor, this would be a significant upgrade to replace the remaining two Exmor RS units with Exmor T for mobile sensors. Users could expect improved night-time photographs with the Xperia 1 VII as Sony claims Exmor T is “2x better than [the] predecessor in low light”.

But, as mentioned above, there is that caveat. Digital Chat Station is known as a reliable source and has been cited in countless tech articles. However, the leaker does not add anything specific to the information, such as sensor size or resolution, which raises the question: where did DCS source this specific Sony Xperia 1 VII rumor? Does the tipster actually know that the Sony Xperia 1 VII will be coming with three Exmor T sensors, or have they simply repeated the previous rumor from February that made its way around social media and numerous tech news sites but originated from a somewhat hit-and-miss source? Either way, at least DCS's involvement lends some credibility to this potentially exciting rumor.

Sony's global smartphone shipments fell by around 40% between 2020 and 2023, according to data from research firm Counterpoint Research. The Japanese company is now concentrating on refining its mid-range line to retain current customers and attract new ones looking for compact devices geared toward multimedia.

The leak does not reveal a release date, but since the Xperia 1 VI was launched in May last year, it is possible that its successor will be made official around the same time of year. More news about the model should emerge in the coming weeks.

mundophone

 

DIGITAL LIFE


"MyTerms" draft standard wants to fix what Do Not Track couldn't

The now-deprecated Do Not Track browser non-standard was designed to provide a quick and easy way for netizens to opt out of ad tracking on the web. However, the feature had no chance of succeeding because compliance was entirely voluntary. Today, privacy advocates are proposing a new alternative based on a two-party, machine-readable "contract."

The IEEE P7012 draft introduces a new standard for Machine Readable Personal Privacy Terms, offering a novel way for users to express their privacy preferences to third-party entities such as websites or mobile apps. Nicknamed MyTerms by Doc Searls – who chairs the standard's working group – this approach is founded on the idea that online services should agree to users' terms, not the other way around.

MyTerms addresses contractual interactions between individuals and service providers on a network, outlining how both parties can agree on a mutually accepted, privacy-respecting contract. The standard treats individuals as "first parties," while service providers are considered the second party.

Users can declare their privacy requirements through a digital contract, selecting from a library of standard agreements maintained by an independent, non-commercial organization.

Searls cites the NoStalking agreement from the Customer Commons platform as an example of a MyTerms contract. This agreement communicates to websites that the user does not wish to be tracked while being served ads. The agreement "is good for you, because you don't get tracked, and good for the site because it leaves open the advertising option," Searls explained.

The MyTerms draft – nicknamed similarly to how the IEEE 802.11 standard became known as "Wi-Fi" – aims to give internet users true agency in their online interactions, Searls said.

It focuses solely on the machine-readable layer of these interactions, allowing websites, browser developers, CMS makers, and other stakeholders the freedom to implement their own solutions in pursuit of the same goal.

As clearly stated in the IEEE draft, the standard's purpose is to facilitate the negotiation of preferred contracts between users and internet companies. Direct negotiations over terms or the creation of additional agreements fall outside the scope of the technology. The draft does state, however, that when both parties agree to a specific contract, the agreement should be digitally signed by the parties or their authorized "agents."
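
The draft itself defines only the machine-readable layer, not a concrete data format or API, so any code can only be speculative. As a purely hypothetical sketch in Python, the flow described above might look like this: the user (first party) points to a standard contract from an independent registry, the site (second party) accepts it or not, and an accepted agreement is recorded for both parties' agents to sign. The registry name, contract identifier, and function names below are illustrative assumptions, not part of IEEE P7012.

```python
# Hypothetical illustration only: IEEE P7012 / MyTerms does not prescribe
# this schema or these names; they are assumptions made for the sketch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedTerms:
    registry: str      # independent, non-commercial library of standard contracts
    contract_id: str   # e.g. "NoStalking", the Customer Commons agreement cited above
    version: str

def negotiate(user_terms: ProposedTerms, site_accepted_ids: set) -> Optional[dict]:
    """Return an agreement record if the site accepts the user's terms, else None."""
    if user_terms.contract_id not in site_accepted_ids:
        return None  # no agreement; the site may decline service or negotiate off-protocol
    return {
        "contract": f"{user_terms.registry}/{user_terms.contract_id}@{user_terms.version}",
        "signatures": [],  # to be digitally signed by both parties or their agents
    }

# A user proposing the NoStalking terms to a site that accepts them.
agreement = negotiate(ProposedTerms("customercommons.org", "NoStalking", "1.0"),
                      {"NoStalking"})
print(agreement)
```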

A coalition of consumer advocacy groups attempted something similar in 2007 with the proposal of a Do Not Track list for online advertising. However, the HTTP-based technology was eventually abandoned by all major browsers after it proved practically useless. Whether MyTerms will fare better remains to be seen in the years to come.

Alfonso Maruccia

 

TECH


Starlink Mini dish price is slowly coming down in the US, but it still falls short of Musk’s budget-friendly vision

Initially teased by Elon Musk as a potential game-changer in affordable satellite internet, the Starlink Mini hasn't exactly lived up to that billing in the US market. While the price is trending downwards, it's still significantly more expensive than the Standard dish, especially with recent, deep discounts on the latter.

Starlink Mini debuted in the US in June last year and while it was originally positioned as a budget-friendly alternative to the Standard Kit—with Elon Musk suggesting a price point between $300 and $350—it ended up being nearly twice as expensive with a $599 price tag. However, the price seems to be gradually coming down.

Starlink appears to have permanently discounted the Mini Kit by $100, bringing it down to $499 from $599. And it’s even cheaper at third-party retailers, with Walmart currently offering the dish for $450. While this represents a $150 overall reduction from the launch price, the Mini still feels far from the budget-friendly option Elon Musk initially alluded to. In fact, the price gap between the Mini and Standard Kit is wider than ever now that SpaceX is offering the Standard dish for as low as $149 in 27 US states until March 31st. 

The price disparity between the Standard and Mini dishes is not as stark in other markets. In parts of Europe, the Mini is actually more affordable than the standard kit. For example, in Germany, the Mini and Standard kits are priced at 279 and 349 euros, respectively. Similarly, in Sweden, the Mini is priced at SEK 3,359, compared to SEK 3,999 for the Standard. In Canada, Australia, New Zealand, and the UK, the Mini does cost more, but the difference isn't as drastic.

SpaceX says that in high-usage regions like the US, a higher price is necessary because it helps the company slow adoption and prevent potential network overload. Starlink's primary focus remains on serving rural users, where options for superfast internet are limited or prohibitively expensive. That might explain why the company is aggressively pushing the Standard dish with subsidized hardware and affordable plans.

It's the portable nature and ease of use that make the Mini a dream for digital nomads and off-the-grid adventurers. However, if portability is not a primary concern, the Standard dish, with its faster speeds and far cheaper hardware, seems like the more practical choice for most Americans. Granted, the Mini is more affordable than it was at launch, but it remains a relatively pricey, niche proposition. Hopefully, prices will come down even further in the coming months; until then, the promise of truly portable, affordable satellite internet remains somewhat distant for US customers.

mundophone


TECH


Korean researchers develop a processor for real-time hologram generation

Korean researchers have developed a digital holography processor that converts two-dimensional (2D) videos into real-time three-dimensional (3D) holograms. This technology is expected to play a key role in the future of holography, as it enables the instantaneous transformation of ordinary 2D videos into 3D holograms.

The Electronics and Telecommunications Research Institute (ETRI) has announced the development of a programmable semiconductor-based digital holographic media processor (RHP) using Field Programmable Gate Array (FPGA) technology. This processor can convert 2D video into 3D holograms in real-time.

The real-time holography processor is the world's first to utilize high-bandwidth memory (HBM) to generate real-time, full-color 3D holograms from 2D video. Notably, all the hardware required for hologram generation is integrated into a single system-on-chip (SoC).

Real-time hologram generation. Credit: Electronics and Telecommunications Research Institute (ETRI)

The real-time holography processor (RHP) extracts the three primary colors (red, green, and blue) and depth information from 2D video, then reconstructs holographic video information at 4K resolution with a latency of just 30 milliseconds (ms). It is capable of rendering holograms at a processing speed of up to 30 frames per second (FPS), positioning it among the most advanced holography processors in the world.

This processor performs complex numerical calculations for wave propagation to convert 2D information into 3D holograms. Notably, by utilizing high-performance HBM memory instead of DDR memory, the processor enables high-speed processing of large-scale complex-number holographic computations.
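
ETRI has not published the exact algorithm the RHP implements, but the core operation it describes, numerically propagating an optical wavefront from depth-sliced 2D content to a hologram plane, can be sketched in a few lines of NumPy. This is a generic layer-based computer-generated-hologram sketch, not the ETRI pipeline; the layer count, wavelength, pixel pitch, and distances are assumed, illustrative values.

```python
# Minimal layer-based computer-generated hologram sketch (NumPy).
# Generic angular-spectrum propagation; parameters are illustrative, not ETRI's.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a complex optical field over distance z (scalar diffraction)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)           # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)        # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layered_hologram(intensity, depth, wavelength, pitch, z0, layers=8):
    """Slice an intensity+depth frame into depth layers and sum their propagated fields."""
    hologram = np.zeros_like(intensity, dtype=complex)
    edges = np.linspace(depth.min(), depth.max() + 1e-9, layers + 1)
    for i in range(layers):
        mask = (depth >= edges[i]) & (depth < edges[i + 1])
        layer = np.sqrt(intensity) * mask       # amplitude of this depth slice
        z = z0 + 0.5 * (edges[i] + edges[i + 1])
        hologram += angular_spectrum_propagate(layer, wavelength, pitch, z)
    return hologram                             # complex field; encoding depends on the display

# Illustrative use on a random 512x512 test frame (green channel, 8 um pixel pitch).
rng = np.random.default_rng(0)
frame = rng.random((512, 512))
depth = rng.random((512, 512)) * 0.05           # depths spread over 5 cm
holo = layered_hologram(frame, depth, wavelength=532e-9, pitch=8e-6, z0=0.1)
```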

This breakthrough achieves high processing speeds and performance dozens of times faster than conventional software-based hologram generation on standard computers, while also significantly reducing power consumption.

In a demonstration using the real-time holographic media processor, the researchers confirmed that any video displayed on a computer screen—including YouTube/Netflix content, video calls, and more—could be converted into 3D holographic video without delay.

Through this research, ETRI has developed a core holographic media technology that enables real-time rendering of ultra-high-resolution holograms using dedicated hardware. Moving forward, the researchers plan to enhance this technology by integrating direct acquisition of natural light-based holograms and high-definition hologram rendering techniques.

Real-time hologram generation. Credit: Electronics and Telecommunications Research Institute (ETRI)

ETRI researchers explained that they achieved this technological breakthrough after three years of research and development. The technology was showcased at the 2023 SID Display Week I-Zone exhibition and at SID Display Week 2024 in San Jose, U.S., where it received an enthusiastic response from visitors.

Their research findings were also presented at SIGGRAPH Asia 2024 in Tokyo, Japan, last December, where they garnered significant attention. These achievements demonstrate that ETRI's holographic processor technology has established itself as a groundbreaking innovation on the global stage.

Kwon Won Ok, the principal researcher at ETRI's Digital Holography Research Section, stated, "Our goal is to develop a dedicated holographic media processor chip (ASIC) for general-purpose holographic displays by incorporating hardware-based holographic image enhancement technology in the future."

Hong Kee Hoon, the director of ETRI's Digital Holography Research Section, added, "The development of the holography processor will enable the creation of real-time holograms with low power consumption and a compact form factor, marking a significant step toward the practical application of holography technology."

Provided by National Research Council of Science and Technology  

Monday, March 24, 2025

 

DOSSIER


DIGITAL LIFE


Can energy-hungry AI help cut our energy use?

It takes 10 times more electricity for ChatGPT to respond to a prompt than for Google to carry out a standard search. Still, researchers are struggling to get a grasp on the energy implications of generative artificial intelligence both now and going forward.

Few people realize that the carbon footprint of digital technology is on par with that of the aerospace industry, accounting for between 2% and 4% of global carbon emissions. And this digital carbon footprint is expanding at a rapid pace. When it comes to power use, the approximately 11,000 data centers in operation today consume just as much energy as the entire country of France did in 2022, or around 460 TWh. Will the widespread adoption of generative AI send those figures soaring?

The new technology will clearly affect the amount of energy that's consumed worldwide, but exactly how is hard to quantify. "We need to know the total cost of generative AI systems to be able to use them as efficiently as possible," says Manuel Cubero-Castan, the project manager on Sustainable IT at EPFL.

He believes we should consider the entire life cycle of generative AI technology, from the extraction of minerals and the assembly of components—activities whose impact concerns not only energy—to the disposal of the tons of electronic waste that are generated, which often gets dumped illegally. From this perspective, the environmental ramifications of generative AI go well beyond the power and water consumption of data centers alone.

The cost of training...For now, most of the data available on digital technology power use relates only to data centers. According to the International Energy Agency (IEA), these centers (excluding data networks and cryptocurrency mining) consumed between 240 TWh and 340 TWh of power in 2022, or 1% to 1.3% of the global total. Yet even though the number of centers is growing by 4% per year, their overall power use didn't change much between 2010 and 2020, thanks to energy-efficiency improvements.

With generative AI set to be adopted on a massive scale, that will certainly change. Generative AI technology is based on large language models (LLMs) that use power in two ways. First, while they're being trained—a step that involves running terabytes of data through algorithms so that they learn to predict words and sentences in a given context. Until recently, this was the most energy-intensive step.

Second, while they're processing data in response to a prompt. Now that LLMs are being implemented on a large scale, this is the step requiring the most energy. Recent data from Meta and Google suggest that this step now accounts for 60% to 70% of the power used by generative AI systems, against 30% to 40% for training.

ChatGPT query vs. conventional Google search...A ChatGPT query consumes around 3 Wh of power, while a conventional Google search uses 0.3 Wh, according to the IEA. If all of the approximately 9 billion Google searches performed daily were switched to ChatGPT, that would increase the total power requirement by 10 TWh per year.
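
As a quick sanity check of the arithmetic behind that estimate, taking the per-query figures quoted above as assumptions, the extra demand works out to roughly 9 TWh per year, in line with the IEA's round figure of 10 TWh:

```python
# Back-of-the-envelope check of the IEA comparison quoted above.
# Assumed inputs: 3 Wh per ChatGPT query, 0.3 Wh per Google search,
# and roughly 9 billion Google searches per day.
wh_chatgpt, wh_google = 3.0, 0.3
searches_per_day = 9e9

extra_wh_per_day = searches_per_day * (wh_chatgpt - wh_google)
extra_twh_per_year = extra_wh_per_day * 365 / 1e12
print(f"Additional demand: ~{extra_twh_per_year:.1f} TWh per year")  # ~8.9 TWh
```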

Goldman Sachs Research (GSR) estimates that the amount of electricity used by data centers will swell by 160% over the next five years, and that they will account for 3% to 4% of global electricity use. In addition, their carbon emissions will likely double between 2022 and 2030.

According to IEA figures, total power demand in Europe decreased for three years in a row but picked up in 2024 and should return to 2021 levels—some 2,560 TWh per year—by 2026. Nearly a third of this increase will be due to data centers. GSR estimates that the AI-related power demand at data centers will grow by approximately 200 TWh per year between 2023 and 2030. By 2028, AI should account for nearly 19% of data centers' energy consumption.

However, the rapid expansion of generative AI could wrong-foot these forecasts. Chinese company DeepSeek is already shaking things up—it introduced a generative AI program in late January that uses less energy than its US counterparts for both training algorithms and responding to prompts.

Another factor that could stem the growth in AI power demand is the limited amount of mining resources available for producing chips. Nvidia currently dominates the market for AI chips, with a 95% market share. The three million Nvidia H100 chips installed around the world used 13.8 TWh of power in 2024—the same amount as Guatemala. By 2027, Nvidia chips could burn through 85 to 134 TWh of power. But will the company be able to produce them at that scale?
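
For scale, the 13.8 TWh figure implies an average draw of roughly 500 W per installed H100 over the year, plausible given the chip's rated power of up to 700 W. A rough check, treating the quoted totals as given:

```python
# Average power per installed H100 implied by the figures quoted above.
chips = 3e6                  # installed Nvidia H100 units (as quoted)
energy_twh = 13.8            # their 2024 consumption in TWh (as quoted)
hours_per_year = 365 * 24

avg_watts = energy_twh * 1e12 / (chips * hours_per_year)
print(f"~{avg_watts:.0f} W average per chip")   # ~525 W, vs. a rated draw of up to 700 W
```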

Not always a sustainable choice...Another factor to consider is whether our aging power grids will be able to support the additional load. Many of them, both nationally and locally, are already being pushed to the limit to meet current demand. And the fact that data centers are often concentrated geographically complicates things further. For example, data centers make up 20% of the power consumption in Ireland and over 25% in the U.S. state of Virginia. "Building data centers in regions where water and power supplies are already strained may not be the most sustainable choice," says Cubero-Castan.
There's also the cost issue. If Google wanted to be able to process generative AI queries, it would need to set up 400,000 additional servers—at a price tag of some 100 billion dollars, which would shrink its operating margin to zero. An unlikely scenario.

Untapped benefits...Some of the increase in power consumption caused by generative AI could be offset by the benefits of AI in general. Although training algorithms requires an investment, it could pay off in terms of energy savings or climate benefits.
For instance, AI could speed the pace of innovation in the energy sector. That could help users to better predict and reduce their power use; enable utilities to manage their power grids more effectively; improve resource management; and allow engineers to run simulations and drive advances at the leading edge of modeling, climate economics, education and basic research.

Whether we're able to leverage the benefits of this kind of innovation will depend on its impacts, how extensively the new technology is adopted by consumers, and how well policymakers understand it and draft laws to govern it.
The next-generation data centers being built today are more energy efficient and allow for greater flexibility in how their capacity is used. By the same token, Nvidia is working to improve the performance of its chips while lowering their power requirement.

And we shouldn't forget the potential of quantum computing. When it comes to data centers, the IEA calculates that 40% of the electricity they use goes to cooling, 40% to running servers and 20% to other system components including data storage and communication.
At EPFL, Prof. Mario Paolone is heading up the Heating Bits initiative to build a demonstrator for testing new cooling methods. Five research groups and the EcoCloud Center have teamed up for the initiative, with the goal of developing new processes for heat recovery, cogeneration, incorporating renewable energy and optimizing server use.


Keeping the bigger picture in mind...Another (painless and free) way to cut data centers' power use is to clear out the clutter. Every day, companies worldwide generate 1.3 trillion gigabytes of data, most of which ends up as dark data, or data that are collected and stored but never used. Researchers at Loughborough Business School estimate that 60% of the data kept today are dark data, and storing them emits just as much carbon as three million London–New York flights. This year's Digital Cleanup Day was held on 15 March, but you don't have to wait until spring to do your cleaning!

Cubero-Castan warns us, however, to keep the bigger picture in mind: "If we begin using generative AI technology on a massive scale, with ever-bigger LLMs, the resulting energy gains will be far from enough to achieve a reduction in overall carbon emissions. Lowering our usage and increasing the lifespan and efficiency of our infrastructure remain essential."

The energy impact of generative AI mustn't be overlooked, but for now it's only marginal at the global level—it's simply adding to the already hefty power consumption of digital technology in general. Videos currently account for 70% to 80% of data traffic around the world, while other major contributors are multiplayer online games and cryptocurrency. The main drivers of power demand today are economic growth, electric vehicles, air-conditioning and manufacturing. And most of that power still comes from fossil fuels.

Provided by Ecole Polytechnique Federale de Lausanne

 

DIGITAL LIFE


Researcher Alarmingly Tricks DeepSeek And Other AIs Into Building Malware

Modern AI is far from science-fiction AGI, and yet it can still be an incredibly powerful tool. Like any tool, if misused, it can pose a threat to legitimate users, as when we recently covered photographers' concerns that Google's Gemini 2.0 Flash could be used to easily remove watermarks from copyrighted photographs. In another example, a threat report from a network of researchers at Cato CTRL has revealed that threat actors can easily manipulate large language models (LLMs) including DeepSeek, ChatGPT, and others to create malicious code and carry out sophisticated attacks.

According to the report, one of the firm's researchers with no prior coding experience was tasked with jailbreaking LLMs. "Jailbreaking" in this context describes the methods used to evade safety measures built into AI systems. The report revealed that the researcher was able to manipulate AI models like DeepSeek's R1 and V3, Microsoft Copilot, and ChatGPT-4o to generate a Google Chrome infostealer. In case the name doesn't give it away, that's malware that steals information, like login details, payment information, and other personal data.


The infostealers that the LLMs created were successfully tested in attacks against Chrome version 133, just one version behind the latest release at the time. The team devised a novel jailbreak method called "immersive world" that uses narrative engineering to bypass built-in LLM security controls. This method creates a controlled environment and presents an "alternative context" to the LLMs, which in turn tricks them into providing information they were designed not to produce.

The report also highlighted that during the test the researcher did not provide any special instructions, such as "how to extract and decrypt the password"; rather, the "simple instructions and code output provided" led the LLMs to produce malicious code. Cato CTRL also noted how easily these models can be manipulated to further an illegal or unethical cause, even by unskilled threat actors.

According to the report, the Cato CTRL team's success in creating a Chrome infostealer shows that the method is effective, and the discovery is significant given the Chrome browser's billions of users. The real takeaway from the threat report, however, is that a user with no particular expertise was able to create an effective piece of malware. Cato Networks refers to this as "the rise of the zero-knowledge threat actor."


Regarding the vulnerability found in the Chrome 133 browser, the report revealed that Cato reached out to Google, and while Google acknowledged the findings, it refused to review the code. Cato also revealed that it reached out to other companies captured in the research; Microsoft and OpenAI apparently acknowledged the report while DeepSeek supposedly did not respond.

The report is another stark reminder that the guardrails on AI systems cannot be relied upon to ward off malicious actors (https://www.catonetworks.com/resources/2025-cato-ctrl-threat-report-rise-of-zero-knowledge-threat-actor/). These tech firms, as well as others not covered by the research, are expected to look into their AI models and implement further tests to strengthen their reliability.

mundophone

 

TECH


In the 1980s, Japan was the leader in the chip industry; now it has only one monopoly: photoresist

Japan is investing more money in its integrated circuit sector than the US, Germany, France or the UK. JSR Corporation, Tokyo Electron, Rapidus, Canon and Nikon are the spearheads of the Japanese chip industry.

In the late 1980s, Japan dominated the semiconductor industry with overwhelming force. In 1988, NEC, Toshiba, Hitachi, Fujitsu, Mitsubishi, Matsushita and other Japanese companies accounted for no less than 50% of the chip industry. Today, however, none of these companies are among the leaders in an industry dominated with an iron fist by Taiwanese, American, Dutch, South Korean and German companies.

In all likelihood, technology enthusiasts who follow innovations in integrated circuits think of the Dutch company ASML when lithography equipment manufacturers come up, of Taiwan's TSMC when the subject is chip manufacturing, or of the American Applied Materials when it comes to advanced materials for semiconductors. Japan's strength lies in other directions.

Japan’s JSR is a world leader in photoresist materials for chips... In integrated circuit factories, such as those Intel has in Kiryat Gat (Israel) or Kulim (Malaysia), ASML’s lithography machines are usually accompanied by processing equipment manufactured by the Japanese company Tokyo Electron. The same is true at the factories of TSMC, Samsung, SK Hynix or Micron. Canon is another important Japanese manufacturer of lithography equipment. In fact, in the last year it has been making a lot of noise with its nanoimprint lithography machines.

Despite the similar name, the photoresist in question here has nothing to do with a photoresistor (LDR), the light-dependent resistor used as a light sensor in hobbyist electronics. A photoresist is a light-sensitive chemical coating, typically a polymer-based formulation, that is spread over a silicon wafer as a thin, uniform film at the start of every photolithography step.

When the coated wafer is exposed to light projected through a mask, whether with deep ultraviolet (DUV) or extreme ultraviolet (EUV) sources, the solubility of the exposed areas changes. In a positive resist the exposed regions dissolve in the developer; in a negative resist they harden and remain. The pattern that survives development acts as a stencil that protects the underlying layers during etching or ion implantation, after which the resist is stripped away. A modern chip repeats this cycle dozens of times.

The most advanced nodes, patterned with EUV light at a wavelength of 13.5 nm, demand resists of extreme sensitivity, resolution and purity, and only a handful of companies can supply them. Japanese firms, led by JSR, control the vast majority of this market, which is why photoresist is often described as Japan's remaining near-monopoly in the chip supply chain.

mundophone
