
Smartphone and Technology
TECH
Five serious flaws Google needs to fix in the Pixel 11
Leaks are unforgiving, and the first digital mockups of the Pixel 11 series are already circulating online. If you've looked at the images of the Pixel 11, Pixel 11 Pro, and Pixel 11 Pro XL, you've probably noticed that the aesthetics remain virtually unchanged since the ninth generation. And, to be honest, that's not a problem. The brand's visual language is mature and easily recognizable.
The real challenge for the search giant isn't the external appearance, but the chronic flaws that continue to overshadow the experience of those who use these devices daily.
The eternal problem of raw performance...Since Google abandoned Qualcomm processors to create its own Tensor line, the journey has been turbulent. For years, we dealt with phones that overheated when recording a simple high-resolution video or navigating with GPS. The Tensor G5, present in the Pixel 10 series, finally solved this thermal problem, delivering a device that stays cool. However, now that the heat is under control, the lack of power has become evident.
When we compare Google's current processor with the Snapdragon 8 Elite Gen 5, the performance gap is undeniable. It's true that the brand never tried to sell these phones as machines for intensive video games, focusing instead on artificial intelligence. But when a user invests in a top-of-the-line device, they expect impeccable longevity and speed in any task.
Rumors indicate that the future Tensor G6 will be manufactured by TSMC with 2-nanometer (2nm) lithography. This extreme reduction in transistor size allows for more processing power in the same physical space, consuming less energy. It's a golden opportunity for Google to close the performance gap with the competition.
Next-generation batteries and decent charging...If your current phone lasts a full day of use, you might think that's enough. But the industry has already moved far beyond that goal. Several competing brands now offer two to three days of battery life, and Google needs to catch up urgently. The technical solution already exists and goes by the name of silicon-carbon batteries. Unlike traditional lithium-ion batteries, which use graphite in the anode, this new chemistry allows for much higher energy density without increasing the physical volume of the cell. It is thanks to this advanced chemistry that brands like OnePlus have managed to put 7300 mAh in their latest model, and that Honor has integrated 6660 mAh into a foldable phone. In contrast, the Pixel 10 Pro XL offered a modest 5200 mAh. The transition to silicon-carbon in the Pixel 11 is not a luxury; it is an absolute necessity to ensure that you don't run out of battery in the middle of a demanding day.
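To put those capacity figures in perspective, here is a quick back-of-the-envelope comparison. The mAh numbers come from the article; converting them to watt-hours assumes a nominal 3.85 V cell voltage, which is an illustrative assumption rather than a published spec:

```python
# Battery capacities cited in the article, in mAh.
# Converting to Wh assumes a nominal 3.85 V cell voltage (an assumption,
# not a manufacturer spec).
NOMINAL_VOLTAGE_V = 3.85

capacities_mah = {
    "OnePlus (silicon-carbon)": 7300,
    "Honor foldable (silicon-carbon)": 6660,
    "Pixel 10 Pro XL (lithium-ion)": 5200,
}

def to_wh(mah, volts=NOMINAL_VOLTAGE_V):
    """Convert a capacity in mAh to energy in watt-hours."""
    return mah / 1000 * volts

pixel = capacities_mah["Pixel 10 Pro XL (lithium-ion)"]
for name, mah in capacities_mah.items():
    gap = (mah - pixel) / pixel * 100
    print(f"{name}: {mah} mAh ~ {to_wh(mah):.1f} Wh ({gap:+.0f}% vs Pixel)")
```

The OnePlus pack works out to roughly 40% more capacity than the Pixel 10 Pro XL's, which is the kind of gap silicon-carbon chemistry makes possible without growing the phone.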
The urgency of consistent charging...Having a larger battery necessarily requires a faster way to charge it. Currently, Google's charging speeds are frustrating, to say the least. The basic Pixel 10 and the Pro model are limited to 30W via cable, while the Pro XL version only reaches 45W. Worse than the numbers on paper is the consistency: the system throttles charging power very aggressively to protect the cell, resulting in long wait times.
When Samsung already offers 65W in its Galaxy S26 Ultra and other brands easily surpass the 80W barrier, waiting more than an hour to have your phone ready for use is unacceptable in a high-end device.
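Why aggressive tapering hurts more than the headline wattage suggests can be sketched with a toy two-phase charging model. All numbers here (pack size, taper point, taper power) are illustrative assumptions, not measured Pixel behavior:

```python
# Toy model of why "peak watts" overstate real charge speed: many phones
# taper power as the battery fills. All figures below are illustrative
# assumptions, not measured behavior of any specific phone.
def charge_minutes(capacity_wh, peak_w, taper_start=0.6, taper_w=None):
    """Estimate minutes to full charge: full power up to `taper_start`
    state of charge, then reduced power for the remainder."""
    if taper_w is None:
        taper_w = peak_w / 2  # assume power halves after the taper point
    fast_wh = capacity_wh * taper_start
    slow_wh = capacity_wh * (1 - taper_start)
    hours = fast_wh / peak_w + slow_wh / taper_w
    return hours * 60

# ~20 Wh pack (roughly 5200 mAh at 3.85 V) at three peak-power levels
for watts in (30, 45, 65):
    print(f"{watts} W peak -> ~{charge_minutes(20.0, watts):.0f} min")
```

In this toy model, raising peak power from 30W to 65W roughly halves the wait, but a more aggressive taper would erode much of that gain, which is why consistency matters as much as the peak figure.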
Useful software and storage for today...The big selling point of any Pixel has always been its exclusive software. Photo editing tools and real-time translation are features we use every day. However, the latest releases have left much to be desired. Features like Magic Cue have proven impractical in the real world, and Daily Hub was such a failure that the company removed it from the system just a week after launch. The Pixel 11 series needs to go back to basics and offer software tools that truly make your routine easier, instead of mere marketing gimmicks.
Finally, we have to talk about the space where you store your photos and apps...In 2026, with cameras capturing images with tens of megapixels and videos in ultra-high resolution, basic storage needs to keep up with this evolution. The pricing and capacity policy needs an urgent review, as you can see from this direct comparison with the main competition:
-The Pixel 10 and 10 Pro line arrived on the market with only 128GB of base storage, charging top-tier prices.
-Apple's iPhone 17 family established 256GB as the new minimum standard for all models, without inflating the price list.
As one of the most valuable companies in the world, Google has the financial leeway to absorb the cost of these memory chips. If direct competitors can offer twice the storage for the same price, there is no excuse for Google's next flagship not to do the same.
by mundophone
NIKON
New 70-200mm f/2.8 VR S II: the second of Nikon's “Holy Trinity” professional-grade zoom lenses
Nikon's “Holy Trinity” of professional-grade zoom lenses receives its second installment with the announcement of the new NIKKOR Z 70-200mm f/2.8 VR S II lens. Faster, lighter, smaller, and more refined than its F-mount and Z-mount predecessors, this S-line workhorse is an S-tier choice for professional multimedia creators in a broad range of applications.
Nikon achieved this slimmed-down lens body by overhauling the optical design, introducing thinner lens elements and new glass materials while reducing the number of moving groups and redesigning the front element. All told, Nikon claims the changes amount to a 26% weight reduction (about 12.7 oz), not to mention shaving half an inch from the overall length.
The updated 18-element, 16-group optical design also benefits image quality. New specialized elements improve sharpness, color accuracy, and contrast, while Nikon has given special attention to the lens’s bokeh rendering, emphasizing dimensionality and smoothness as the lens renders outward from the zone of focus. An 11-blade rounded diaphragm gives the lens’s bokeh a more rounded appearance, while the optical design also yields significant reductions in ghosting and flare.
Sloughing off the weight and reducing the number of moving elements has also reaped benefits in autofocus. Nikon’s Silky Swift Voice Coil Motor system leverages magnets instead of gears to achieve improvements in speed, accuracy, and noise control. Aptly named, the autofocus system also benefits video production with reduced breathing when rack focusing.
Characteristic of its class, the 70-200mm f/2.8 VR S II also sports improved Vibration Reduction, compensating for six stops of camera shake and benefitting low-light shooting at the tele end of the zoom range.
The lens’s build quality lives up to the reputation of its S-line moniker, delivering advanced weather sealing, a host of customizable buttons and rings, a redesigned lens hood with a filter adjustment window, and an Arca-type tripod foot with a protective cover bayonet sleeve.
The 70-200mm f/2.8 VR S II is also compatible with both the 1.4x and 2x Nikon Z-series teleconverters.
The new telephoto zoom lens joins the NIKKOR Z 24-70mm f/2.8 S II standard zoom lens as the latest in Nikon's “trinity” lenses. The epithet refers to the professional-grade sets of zoom lenses (wide-angle, standard, and telephoto) that offer near-comprehensive focal length coverage with a fixed, wide maximum aperture throughout.
Nikon’s second-gen standard zoom lens was announced just six months ago in August 2025, opening the possibility that the company could complete the new trinity before the year’s end.
First Impressions of the Nikon 70-200mm f/2.8 S II: The tripod foot is compatible with Arca-Swiss type tripod heads for added convenience. It's important to note that this lens also accepts Nikkor teleconverters, and I used the 2x converter extensively during my tests. Since our test unit was pre-production, we couldn't fully evaluate sharpness, but the test photos would at least give me a good idea of the level of detail.
My subjects, wolfdogs in their natural habitat, could not be lured closer, so the 400mm-equivalent focal length achieved with the 2x converter was absolutely essential for getting most of the close-up shots.
This is definitely a professional-level lens, so everything is robust and fully weather-sealed. It has a smooth manual focus ring, several customizable buttons on the body, and the usual focus limiter and AF/MF controls are easy to locate. There's also a customizable control ring, which I found very useful as an aperture control ring.
The lens hood has a push-button locking mechanism, and there's now a window that allows you to rotate polarizing filters. You can mount any 77mm diameter filter on the front of the lens. The 70-200mm also includes six-stop rated image stabilization. All these features fit neatly into a body that still manages to be 12mm shorter than the previous model.
Nikon is applying a full range of lens coatings to this professional zoom lens, including Nano Crystal, ARNEO, and the latest Meso-Amorphous coating. Regardless of the technical jargon used, the reflection control is excellent. There is no significant loss of contrast when shooting at f/2.8, and ghosting is minimal at smaller apertures. This lens handles bright conditions and direct light sources well, without unwanted reflections or color saturation that compromise images. Longitudinal chromatic aberration (LoCA) is also minimized, so you don't have to worry about color fringing in the out-of-focus areas of your images. Nikon is doing a great job of avoiding color issues with this new lens.
The close-up functionality really enhances the overall versatility of this lens. You can get quite close in the 200mm range with a wide working distance and still achieve a macro ratio of approximately 1:4. At the 70mm end, you can get a little closer with a ratio of about 1:3.3, at the cost of a slightly shorter working distance. For occasional detail shots or close-up portraits, this new lens perfectly meets your needs.
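To make those reproduction ratios concrete: a 1:4 ratio means the subject is rendered on the sensor at one quarter of its real size, so the smallest subject that fills a full-frame (36 x 24 mm) sensor is four times the sensor's dimensions. A quick sketch of that arithmetic:

```python
# Reproduction ratio arithmetic: at 1:n, a subject n times the sensor's
# dimensions exactly fills the frame. Assumes a full-frame sensor.
SENSOR_MM = (36.0, 24.0)  # full-frame sensor width and height in mm

def subject_field_mm(ratio_denominator, sensor=SENSOR_MM):
    """Smallest subject area (w, h) in mm that fills the frame at 1:n."""
    return tuple(side * ratio_denominator for side in sensor)

w, h = subject_field_mm(4)
print(f"1:4 at the 200mm end fills the frame with a {w:.0f} x {h:.0f} mm subject")
w, h = subject_field_mm(3.3)
print(f"1:3.3 at the 70mm end fills the frame with a {w:.0f} x {h:.0f} mm subject")
```

So even without a dedicated macro lens, the 200mm end can fill the frame with a subject roughly the size of a credit card.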
by mundophone
TECH

Waste water to clean energy: Japanese engineers harness the power of osmosis
A Japanese water plant is harnessing the natural process of osmosis to generate renewable energy that could one day become a common power source.
The possibility of generating power from osmosis—when water molecules pass from a less salty solution to a more salty one—has long been known.
But actually generating energy from that has proved more complicated, in part due to the difficulty of designing the membrane through which the molecules pass.
Engineers in the city of Fukuoka and their private partners think they might have cracked it, and have opened what is only the world's second osmotic power plant.
It generates power from the transfer of molecules between treated sewage water and concentrated seawater, a waste product from a desalination plant in the city.
"If osmotic power generation technology advances to the point where it can be practically used with ordinary seawater... this, in turn, would represent a major contribution to efforts against global warming," said Kenji Hirokawa, a manager at the city's sea water desalination plant.
Infographic: how an osmotic power station works. © 2026 AFP
Osmosis is familiar to most people. It is the process that, for example, causes water to seep out of a cucumber or eggplant when sprinkled with salt.
Water molecules move across a membrane from the less concentrated solution to the more concentrated one.
At scale, that movement can be significant enough to turn a turbine and thereby generate electricity.
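The pressure driving that turbine can be estimated with the van 't Hoff relation pi = i * M * R * T. The sketch below assumes seawater at roughly 0.6 mol/L NaCl and desalination brine at about twice that; these concentrations are illustrative assumptions, not figures from the Fukuoka plant:

```python
# Back-of-the-envelope osmotic pressure via van 't Hoff: pi = i*M*R*T.
# Concentrations are illustrative (seawater ~0.6 mol/L NaCl; desalination
# brine roughly double), not measured values from the plant.
R_L_BAR = 0.08314  # gas constant in L·bar/(mol·K)

def osmotic_pressure_bar(molarity, ions_per_formula=2, temp_k=298.0):
    """Van 't Hoff estimate of osmotic pressure in bar (NaCl -> i = 2)."""
    return ions_per_formula * molarity * R_L_BAR * temp_k

def max_energy_kwh_per_m3(pressure_bar):
    """Theoretical ceiling: 1 bar acting on 1 m^3 is 100 kJ; convert to kWh."""
    return pressure_bar * 100 / 3600

seawater = osmotic_pressure_bar(0.6)  # ~30 bar vs pure water
brine = osmotic_pressure_bar(1.2)     # ~59 bar vs pure water

print(f"seawater: {seawater:.1f} bar, brine: {brine:.1f} bar")
print(f"theoretical max vs brine: {max_energy_kwh_per_m3(brine):.2f} kWh/m^3")
```

Around 1.5 kWh per cubic meter of transferred water is a theoretical ceiling; real plants recover far less, which is why membrane design is the crux of the technology.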
Desalination solution...Fukuoka is particularly well-placed to benefit from the technology because it has a readily available source of extremely salty water—the brine leftover from desalination.
With no major rivers to sufficiently source its water, the city and wider Fukuoka region of 2.6 million people have relied on a major desalination plant to produce drinking water since 2005.
That left the city with large quantities of concentrated saline waste water to deal with.
Ordinarily it is diluted and released back to the sea. Previous attempts to find alternatives, including salt making, failed to gain traction.
Then engineering firm Kyowakiden Industry approached the city about harnessing the salty wastewater for osmotic power.
"When our company rolls this out as a business, we aim to build plants roughly five to 10 times the scale of this current facility," said Tetsuro Ueyama, research and development manager at the Nagasaki-based company.
In Fukuoka's system, a generator is attached to a local desalination plant located near a sewage treatment facility.
It draws in highly saline waste water from the desalination plant and receives treated sewage.
The two separate streams of liquid go through a number of chambers separated by semi-permeable membranes through which water molecules travel from the treated sewage toward the salty water.
That process increases the volume, pressure, and speed of the saline water flow, spinning a turbine that generates electricity before the now-diluted mixture is discharged to sea.
The brine left over from desalination gives Fukuoka a readily available source of extremely salty water (image above). © 2026 AFP
The 700-million-yen ($4.4 million) power generation system came online last August, and once running at full capacity it should generate up to 880,000 kilowatt-hours annually, equivalent to the electricity consumption of roughly 300 households.
For now, however, its output is devoted to the power-hungry desalination facility itself, covering just a tiny fraction of that plant's energy needs.
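As a sanity check on the stated figures (880,000 kWh per year for about 300 households), the per-household and average-power numbers below are derived here, not taken from the source:

```python
# Derived figures from the plant's stated annual output.
annual_output_kwh = 880_000
households = 300

per_household_kwh = annual_output_kwh / households  # yearly use per household
average_power_kw = annual_output_kwh / (365 * 24)   # continuous-output equivalent

print(f"~{per_household_kwh:.0f} kWh per household per year")
print(f"average continuous output ~{average_power_kw:.0f} kW")
```

Roughly 2,900 kWh per household per year is a plausible consumption figure for Japan, and an average output near 100 kW underlines how small this pilot is next to grid-scale generation.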
Not "a pipe dream"...The engineers involved, however, are dreaming big. The system will go through a five-year test to monitor its performance, including costs and maintenance, particularly for the membrane and other parts exposed to salt.
Financial details of the project have not been disclosed, but engineers admitted that for now the system's power costs "a lot more" than either fossil fuel or renewable energy.
Pumping the water into the system also uses energy itself, and scaling up osmotic power for grid-level energy production has not yet been done anywhere in the world.
Still, officials and experts believe the power source has a future, noting that unlike solar and wind, it is not dependent on weather or light.
And the current high costs are partly because the company had to build a one-of-a-kind power plant, Ueyama said.
Osmotic power has often been seen as primarily useful for estuary areas, where freshwater river flows meet the salty ocean.
But Ueyama said the technique being used in Japan could be useful for countries with large desalination facilities like Saudi Arabia and other Middle Eastern nations.
Kyowakiden is also working on technology that could generate similar power levels from less salty regular seawater.
"First we want to popularize this technology from Fukuoka to the rest of Japan. In order for us to do that, we want to further upgrade our technology to create osmotic power generation that can use ordinary ocean water to generate electricity," he said.
"We don't think this is a pipe dream."
© 2026 AFP
XEROX

Xerox has announced the Proficio™ PX300 and Proficio™ PX500
Xerox announced the strengthening of its production ecosystem with the launch of next-generation printers and new software solutions, betting on automation, artificial intelligence, and workflow integration to make printing environments more efficient and connected.
The new Xerox Proficio PX300 is a compact digital color printer designed for smaller operations that require ultra HD image quality and versatility. It offers an optional fifth color station, allowing for higher value-added jobs without requiring large infrastructures.
The Xerox Proficio PX500 is positioned at a higher level, with high volume and advanced automation. Equipped with an optional fifth color station and robust support for various media types, the PX500 is designed for growing operations looking to expand into premium printing applications.
The Proficio™ PX300 and PX500 Production Presses represent a new generation of production devices designed to support the evolving needs of commercial printers and graphic arts providers.
A production press designed around progress...Rather than simply increasing speed or adding features, Xerox designed the Proficio Production Series to help print providers adapt to changing customer demands. From automation to intelligent controls, the PX300 and PX500 are built to reduce manual intervention, maintain consistent output, and support a wider range of applications.
Why print providers are paying attention...Today’s production environments demand more than just speed. Printers are looking for technology that allows them to:
-Deliver consistent colour across long and repeat jobs
-Reduce setup time and operator dependency
-Take on higher‑value work without increasing complexity
-Scale production confidently as volumes grow
The Proficio PX300 and PX500 were built with these exact challenges in mind.
Performance that supports everyday production...The PX300 and PX500 are mid‑production colour sheetfed presses designed for reliability and predictability — not just peak performance.
PX300: Up to 85 pages per minute
PX500: Up to 100 pages per minute
Ultra HD print resolution ensures sharp text, smooth gradients, and professional image quality
More importantly, these presses are designed to maintain quality at speed, helping operators avoid constant adjustments and reprints.
More ways to differentiate with fifth‑colour capability...One of the standout benefits of the Proficio line is the optional fifth colour station, which allows print providers to expand beyond traditional CMYK jobs.
With options such as clear varnish, satin varnish, and fluorescent pink, printers can:
-Add premium finishes without outsourcing
-Create high‑impact marketing pieces
-Offer specialty applications that command higher margins
This flexibility allows print shops to differentiate their services while making better use of existing production capacity.
Automation that reduces complexity...The Proficio PX300 and PX500 are built on a unified platform that incorporates automation and intelligent controls to support consistent output.
Key benefits include:
-Automatic adjustment of colour density and registration
-Real‑time performance monitoring without slowing production
-Improved handling of specialty and synthetic papers
The result is a smoother production experience with less operator intervention and fewer production issues.
A modern controller for modern workflows...Both presses are powered by a newly developed Fiery‑based print controller, designed specifically for the Proficio platform.
For print providers, this means:
-Faster file processing
-Improved colour consistency
-Easier integration into digital and hybrid workflows
These improvements support shorter turnaround times and more predictable production schedules.
mundophone
TECH

Artemis 2 enters an unexplored region of Earth's magnetic field
Launched on Wednesday (the 1st), NASA's Artemis 2 mission is heading toward the Moon. The Orion crew capsule left Earth orbit on Thursday (the 2nd), after a translunar injection burn of approximately six minutes. With this, the crew passed beyond the protection of Earth's magnetic field, and NASA intensified its monitoring of solar activity.
Now the spacecraft is in a little-explored region of Earth's magnetosphere: the so-called magnetotail. This comet-like extension of the planet's magnetic field stretches for millions of kilometers, formed as the solar wind compresses and elongates the field on the planet's night side.
Beyond the halfway point...The spacecraft passed the halfway point between Earth and the Moon this Saturday, a milestone that makes the four Artemis 2 astronauts the first humans to leave Earth's orbit since the Apollo 17 crew traveled to the Moon in 1972.
Unlike that mission, Artemis 2 will not land on the Moon; it will fly around our satellite before returning to Earth, in a total journey of about ten days.
Current situation (April 4): The crew is traveling toward the Moon to perform a flyby, expected to swing around its far side around April 6.
Speed and position: The spacecraft is traveling at approximately 6,000 km/h, more than 170,000 km from Earth and rapidly approaching the Moon's gravitational influence.
After leaving Earth's orbit, the spacecraft is on a "free return" trajectory, which allows Orion to use the Moon's gravity to orbit it before returning to Earth without propulsion.
In summary:
Artemis 2 left Earth's orbit;
The spacecraft entered Earth's magnetotail;
It's like a comet's tail;
Solar storms can make it dangerous;
Artemis will be able to explore this unprecedented region.
Magnetotail offers risks and protection...According to the space weather and climatology platform Spaceweather.com, the magnetotail is dynamic and unstable. It oscillates with the solar wind, offering some protection to the crew while they are inside it, but none outside this field. During extreme storms, the internal magnetic fields can become entangled and release energy violently, in a phenomenon called "magnetic reconnection."

A profile view of Earth's magnetosphere. The Artemis 2 mission is passing through a region of Earth's magnetic field never before traversed by humans (image above) – Credit: NASA
In addition, the Moon crosses the magnetosphere for five or six days every month. During this period, especially around the full moon, lunar dust can become electrified and be ejected from the surface, generating a so-called "lunar dust wind" near the terminator, the line that separates lunar day and night.
Artemis 2 advances where no one has gone before...Artemis 2 will be able to observe these effects up close. Previous missions, including some Apollo flights, passed through this region but never remained inside it for long. This makes Artemis a pioneer in the exploration of this mysterious region of space.
The mission is on track to explore the Moon. During this journey, the crew will have direct contact with the effects of Earth's extended magnetic field, providing unprecedented data about this area of space.
As of early April 2026, the NASA Artemis II mission is traveling beyond the protective influence of Earth's dense magnetic field, entering the deep space environment. The Orion capsule, carrying four astronauts, is venturing into regions of space not visited by humans since the Apollo era, exposing the crew to higher levels of cosmic radiation and solar particles outside the Earth's protective magnetosphere.
Key aspects of the journey(below):
Leaving protection: Artemis II marks the first time in over 50 years that humans are leaving the Earth's main magnetic field.
Radiation safety: The crew and Orion spacecraft are outfitted with radiation trackers as ground teams monitor solar eruptions 24/7. In the event of a significant solar particle event, the crew is prepared to create a "pillow fort" of protective shielding inside the cabin.
Magnetic anomaly monitoring: While not directly landing in one, the mission occurs against a backdrop of increasing concern about the South Atlantic Anomaly (SAA), a growing region of lower magnetic intensity that NASA is closely watching, which can affect satellite instruments.
Scientific opportunity: The mission's journey, which includes passing around the far side of the Moon (up to 4,600 miles beyond it), allows for scientific studies on how deep space radiation impacts the human body, as well as testing of spacecraft shielding.
Aurora imaging: The crew has already captured unprecedented images of auroras from both hemispheres, aided by a strong geomagnetic storm that makes these features easier to observe, demonstrating the unique viewpoints available from their trajectory.
Artemis II is scheduled for a 10-day mission, with a planned splashdown in the Pacific Ocean in April 2026, testing the systems required for future sustained lunar and Martian exploration.
This experience will help to understand how the magnetotail affects astronauts and equipment under real flight conditions. The information will be used to plan future missions to the Moon, Mars, and beyond, ensuring greater safety and knowledge about unexplored regions of space.
Details of the crossing and objectives:
Unprecedented area for humans: The Orion spacecraft is crossing the magnetotail, an extension of Earth's magnetic field that is "stretched" by the solar wind.
Radiation and Space Weather: One of the great mysteries is how the interaction between the magnetic field and electrified lunar dust (the "lunar dust wind") can impact the safety of astronauts.
Constant monitoring: The crew and capsule are equipped with high-resolution radiation trackers, such as the M-42 EXT sensor, to measure exposure to heavy ions, which are particularly dangerous.
Preparation for Mars: The data collected in this magnetic "shadow zone" are fundamental to understanding the radiation risks of long-duration journeys, such as a future mission to Mars.
mundophone
DIGITAL LIFE
'Moltbook' risks: The dangers of AI-to-AI interactions in health care
A new report examines the emerging risks of autonomous AI systems interacting within clinical environments. The article, "Emerging Risks of AI-to-AI Interactions in Health Care: Lessons From Moltbook," appears in the Journal of Medical Internet Research. The work explores a critical new frontier: as high-risk AI agents begin to communicate directly with one another to manage triage and scheduling, they create a "digital ecosystem" that can operate beyond active human oversight.
Authored by Tejas S. Athni, the report uses the 2026 "Moltbook" experiment—a social network designed for AI-to-AI interaction—as a powerful proof-of-concept for the health care sector. The analysis warns that while these interconnected systems can improve efficiency, they also introduce a lethal trifecta of risks including the rapid propagation of errors, accelerated data leaks, and the spontaneous development of unintended hierarchies.
The hidden hazards of interconnected medical AI...The analysis points to several significant hurdles that arise when autonomous AI agents share data and decisions without a human in the loop, including:
The propagation of errors: In a networked system, a single misinterpretation by a diagnostic AI (e.g., mislabeling a fracture) can be blindly accepted and amplified by downstream agents responsible for bed allocation and triage, leading to systemic medical errors.
Accelerated data leaks: Interconnected agents often share or withhold data in ways unanticipated by their creators. Adversarial actors could exploit these "agentic" pathways to execute model inversion or membership inference attacks, compromising protected health information (PHI) at unprecedented speeds.
Emergent hierarchies: Observations from Moltbook suggest that AI agents can spontaneously develop dominant or subordinate roles. In a hospital, an AI responsible for ICU allocation might begin to override diagnostic agents, creating de facto priorities that conflict with ethical standards and clinical protocols.
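The first risk, error propagation, compounds quickly even with individually reliable agents: if each of n agents in a chain blindly accepts upstream output and misfires independently with probability p, the chance that at least one uncorrected error survives is 1 - (1 - p)^n. A minimal illustration, with purely hypothetical error rates:

```python
# Toy illustration of error amplification in a chain of AI agents that
# blindly accept upstream output. The 2% per-agent error rate is a
# hypothetical number for illustration only.
def chain_error_probability(per_agent_error, n_agents):
    """Probability that at least one agent in the chain errs:
    1 - (1 - p)^n, assuming independent errors."""
    return 1 - (1 - per_agent_error) ** n_agents

for n in (1, 3, 5, 10):
    p = chain_error_probability(0.02, n)
    print(f"{n} agents at 2% each -> {p:.1%} chance an error survives")
```

With ten agents each wrong only 2% of the time, nearly one decision in five carries an uncaught error, which is why the report insists on human checkpoints rather than longer autonomous chains.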
Toward preventive digital health design...The article argues for a proactive shift in how medical AI is built, moving away from reactive patching toward "preventive design." Experts suggest that as autonomous systems become integrated into health care, the focus must remain on transparency and robust safeguards.
To bridge this gap, the report calls for:
Human-centric guardrails: Reinforcing requirements for human validation (e.g., a radiologist reviewing an AI's classification) before any autonomous decision is finalized.
Aggressive stress-testing: Utilizing red-teaming to uncover vulnerabilities in AI-to-AI communication protocols before they are deployed in live clinical settings.
Decision audit trails: Maintaining clear, trackable records of every interaction and decision made by autonomous agents to ensure accountability.
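The three safeguards above can be combined in a few lines of code. This is a minimal sketch; every function, agent, and field name here is hypothetical, not part of any real clinical system:

```python
# Minimal sketch of the report's safeguards: a human-validation gate,
# a decision audit trail, and traceable agent actions. All names are
# hypothetical; no real clinical system is represented.
import datetime

audit_log = []  # decision audit trail: every agent action is recorded

def record(agent, action, payload):
    audit_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "payload": payload,
    })

def ai_classify(image_id):
    """Stand-in for a diagnostic model; its output needs human sign-off."""
    result = {"image": image_id, "label": "fracture", "confidence": 0.91}
    record("diagnostic-ai", "classify", result)
    return result

def human_validate(result, approved_by):
    """Human-centric guardrail: a clinician signs off before anything acts."""
    result = {**result, "validated_by": approved_by}
    record(approved_by, "validate", result)
    return result

def allocate_bed(result):
    """Downstream agent: structurally refuses unvalidated AI output."""
    if "validated_by" not in result:
        raise PermissionError("unvalidated AI output blocked")
    record("allocation-ai", "allocate_bed", result)
    return f"bed assigned for {result['image']}"

finding = ai_classify("xr-1042")
approved = human_validate(finding, "radiologist-07")
print(allocate_bed(approved))
print(len(audit_log), "audit entries")
```

The key design choice is that the guardrail is structural: the allocation agent cannot act on unvalidated output at all, rather than merely being advised not to, and every step leaves an audit entry.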
"The risks of AI-to-AI interactions must be taken seriously as autonomous systems become integrated into health care," the report concludes. "The Moltbook experiment offers a critical lens to ensure these digital dangers do not translate into real-world patient harm."
Provided by JMIR Publications