Wednesday, November 6, 2024

 

DIGITAL LIFE



Advances in energy-efficient avalanche-based amorphization could revolutionize data storage

The atoms of amorphous solids like glass have no ordered structure; they arrange themselves randomly, like scattered grains of sand on a beach. Normally, making materials amorphous—a process known as amorphization—requires considerable amounts of energy.

The most common technique is the melt-quench process, which involves heating a material until it liquefies, then rapidly cooling it so the atoms don't have time to order themselves into a crystal lattice.

Now, researchers at the University of Pennsylvania School of Engineering and Applied Science (Penn Engineering), the Indian Institute of Science (IISc) and the Massachusetts Institute of Technology (MIT) have developed a new method for amorphizing at least one material—wires made of indium selenide, or In2Se3—that requires a power density up to one billion times lower, a result described in a paper in Nature.

This advancement could unlock wider applications for phase-change memory (PCM)—a promising memory technology that could transform data storage in devices from cell phones to computers.

In PCM, information is stored by switching the material between amorphous and crystalline states, functioning like an on/off switch. However, large-scale commercialization has been limited by the high power needed to create these transformations.
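In code terms, a PCM cell behaves like a resistance-programmable bit. The sketch below is a toy model, assuming illustrative resistance values (not real device figures) and the common convention that the high-resistance amorphous state encodes one logic level:

```python
# Toy model of a phase-change memory (PCM) cell. Resistance values and
# the bit-to-state mapping are illustrative assumptions, not device data.

AMORPHOUS_OHMS = 1_000_000    # disordered atoms -> high resistance
CRYSTALLINE_OHMS = 1_000      # ordered lattice  -> low resistance
READ_THRESHOLD_OHMS = 50_000  # above this, the cell reads as amorphous

class PCMCell:
    """A single cell whose stored bit is its material phase."""

    def __init__(self):
        self.resistance = CRYSTALLINE_OHMS  # start crystalline

    def write(self, bit: int) -> None:
        # Writing 0 amorphizes the cell (traditionally the power-hungry
        # melt-quench step); writing 1 recrystallizes it.
        self.resistance = AMORPHOUS_OHMS if bit == 0 else CRYSTALLINE_OHMS

    def read(self) -> int:
        # Reading is non-destructive: compare resistance to a threshold.
        return 0 if self.resistance > READ_THRESHOLD_OHMS else 1

cell = PCMCell()
cell.write(0)
print(cell.read())  # -> 0: the cell is now in the amorphous state
cell.write(1)
print(cell.read())  # -> 1: back to crystalline
```

The energy cost the researchers attack is concentrated in the `write(0)` step, which is why a low-power amorphization pathway matters for commercialization.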

"One of the reasons why phase-change memory devices haven't reached widespread use is due to the energy required," says Ritesh Agarwal, Srinivasa Ramanujan Distinguished Scholar and Professor in Materials Science and Engineering (MSE) at Penn Engineering and one of the paper's senior authors.

For more than a decade, Agarwal's group has studied alternatives to the melt-quench process, following their 2012 discovery that electrical pulses can amorphize alloys of germanium, antimony and tellurium without needing to melt the material.

Several years ago, as part of those efforts, one of the new paper's first authors, Gaurav Modi, then a doctoral student in MSE at Penn Engineering, began experimenting with indium selenide, a semiconductor with several unusual properties: it is ferroelectric, meaning it can spontaneously polarize, and piezoelectric, meaning that mechanical stress causes it to generate an electric charge and, conversely, that an electric charge deforms the material.

Modi discovered the new method essentially by accident. He was running a current through In2Se3 wires when they suddenly stopped conducting electricity. Upon closer examination, long stretches of the wires had amorphized.

"This was extremely unusual," says Modi. "I actually thought I might have damaged the wires. Normally, you would need electrical pulses to induce any kind of amorphization, and here a continuous current had disrupted the crystalline structure, which shouldn't have happened."

Untangling that mystery took the better part of three years. Agarwal shipped samples of the wires to one of his former graduate students, Pavan Nukala, now an Assistant Professor at IISc and member of the school's Center for Nano Science and Engineering (CeNSE) and one of the paper's other senior authors.

"Over the past few years we have developed a suite of in situ microscopy tools here at IISc. It was time to put them to test—we had to look very, very carefully to understand this process," says Nukala.

"We learned that multiple properties of In2Se3—the 2D aspect, the ferroelectricity and the piezoelectricity—all come together to design this ultralow energy pathway for amorphization through shocks."

Ultimately, the researchers found that the process resembles both an avalanche and an earthquake. At first, tiny sections—measured in billionths of a meter—within the In2Se3 wires begin to amorphize as electric current deforms them.

Due to the wires' piezoelectric properties and layered structure, the current nudges portions of these layers into unstable positions, like the subtle shifting of snow at the top of a mountain.

When a critical point is reached, this movement triggers a rapid spread of deformation throughout the wire. The distorted regions collide, producing a sound wave that moves through the material, similar to how seismic waves travel through the earth's crust during an earthquake.

This sound wave, technically known as an "acoustic jerk," drives additional deformation, linking numerous small amorphous areas into a single one measured in micrometers—thousands of times larger than the original areas—just like an avalanche gathering momentum down a mountainside.
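As a rough sketch of that avalanche dynamic, the toy model below shows how many tiny over-threshold sites can cascade into one large amorphized region. The threshold and strain-transfer numbers are hypothetical illustration values, not quantities from the paper:

```python
def avalanche(strain, threshold=1.0, transfer=0.6):
    """Toy 1-D cascade: a site whose strain reaches the threshold
    amorphizes and hands a fraction of its strain to its neighbours,
    which may then topple in turn -- merging many nanoscale events
    into one large amorphous region, like a growing avalanche."""
    amorphized = [False] * len(strain)
    active = [i for i, s in enumerate(strain) if s >= threshold]
    while active:
        i = active.pop()
        if amorphized[i]:
            continue
        amorphized[i] = True
        for j in (i - 1, i + 1):  # pass strain to nearest neighbours
            if 0 <= j < len(strain) and not amorphized[j]:
                strain[j] += transfer * strain[i]
                if strain[j] >= threshold:
                    active.append(j)
    return amorphized

# Sites just below threshold, with a single unstable site in the middle:
profile = [0.9, 0.9, 1.0, 0.9, 0.9]
print(avalanche(profile))  # -> [True, True, True, True, True]
```

One over-threshold site is enough to topple the whole line, whereas a uniformly sub-threshold profile stays crystalline everywhere, mirroring the critical-point behavior described above.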

"It's just goosebump stuff to see all these phenomena interacting across different length scales at once," says Shubham Parate, an IISc doctoral student and co-first author of the paper.

The collaborative effort to understand the process has created fertile ground for future discoveries. "This opens up a new field on the structural transformations that can happen in a material when all these properties come together. The potential of these findings for designing low-power memory devices is tremendous," says Agarwal.

Provided by University of Pennsylvania 

 

SAMSUNG


Exynos 2500 faces production setbacks: Samsung’s 3nm yield falls below 20%

While the Galaxy S25 Ultra is likely to feature Qualcomm’s Snapdragon 8 Elite chip, Samsung’s plans for the base Galaxy S25 and S25+ models remain unclear. Initially, the Exynos 2500 chip was expected to power some regional variants, but recent reports suggest that production issues with Samsung’s 3nm process may force a shift to alternative processors like MediaTek’s Dimensity 9400.

According to a report from Korean media outlet NewsWay, Samsung’s 3nm process yield has fallen below expectations, with a reported yield rate of less than 20%. This low output rate has discouraged Samsung from moving forward with large-scale production of the Exynos 2500 chip, which was initially slated for certain regional versions of the Galaxy S25 series.

The Exynos 2500’s production setbacks could have far-reaching implications. Samsung’s foundry division, already facing substantial financial challenges, could incur significant losses due to this issue, with estimates suggesting losses of up to 1 trillion won in the third quarter of this year alone.

Samsung has traditionally aimed to balance its reliance on third-party chipmakers like Qualcomm by producing its own Exynos processors, but low yields in its 3nm process may prompt a shift toward a more externally sourced chipset strategy.

The low yield rate is not just a challenge for Samsung’s System LSI division (responsible for Exynos chip design) but also impacts other customers dependent on Samsung’s foundries.

This situation has increased speculation about alternative processors, with industry insiders suggesting that Samsung might replace the Exynos 2500 with MediaTek’s Dimensity 9400 for the Galaxy S25 and S25+ in specific regions.

In fact, the Dimensity 9400 features performance improvements, power efficiency gains, and advanced AI capabilities that could make it a practical choice for Samsung.

Geekbench appearance of Galaxy S25+...Samsung is testing the European version of the Galaxy S25+ with the Exynos 2500, but this could be part of an internal evaluation process rather than a final decision. Benchmark scores for the Exynos 2500 also hint at significantly lower performance than the Snapdragon flagship. This disparity raises questions about whether the Exynos 2500 can meet the high-performance expectations associated with Samsung's flagship lineup.

Ultimately, the final chipset selections may depend on how quickly Samsung can resolve its 3nm yield issues. In the meantime, Qualcomm's Snapdragon 8 Elite remains the primary candidate for the Galaxy S25 Ultra, while MediaTek's Dimensity 9400 could see adoption in Samsung's lineup as a replacement for the Exynos 2500 in the S25 and S25+.

mundophone

Tuesday, November 5, 2024

 

TECH


Leaker reveals technical specifications of MediaTek's Dimensity 8400 chipset

MediaTek is expected to announce a less-powerful version of the Dimensity 9400 which should arrive as the Dimensity 8400. Now, a leak has revealed details of its CPU and GPU and they hint at a significant improvement in performance.

MediaTek already unveiled its flagship chipset, the Dimensity 9400, and it has already appeared in a handful of smartphones such as the Oppo Find X8 and Find X8 Pro. As is the tradition, there will be a less powerful variant of the processor which is expected to be called the Dimensity 8400. MediaTek is yet to unveil it but there have been multiple reports about it, with the latest revealing details about the CPU and GPU.

The Dimensity 8400 will have an octa-core CPU with a 1+3+4 configuration, according to a post on Weibo by Digital Chat Station. The post also revealed that the 4nm chipset, which will be produced by TSMC, will feature Arm's Cortex-A725 cores. However, in a reply in the post's comments, the leaker clarified that the Dimensity 8400 will actually have four Cortex-A725 cores and four Cortex-A720 cores.

One of the Cortex-A725 cores will be overclocked to over 3 GHz, while the remaining three Cortex-A725 cores will run at a lower clock speed that is still in the 3 GHz range. The Cortex-A720 premium efficiency cores, which replace the Cortex-A510 cores of the Dimensity 8300 inside the Poco X6 Pro, will have a 2.x GHz clock speed.

For its GPU, the Dimensity 8400 is reported to use the same GPU IP as the Dimensity 9400 which is the Arm Immortalis-G925. However, it is not known if it will have the same number of cores. The source does mention that the performance of the chipset is still being optimized.

There are guesses as to which manufacturer will be the first to announce a phone powered by the Dimensity 8400. There are suggestions that it may be one of the Redmi K80 models while some say it might be one of the models in the upcoming Oppo Reno 13 series. More details should surface in the coming weeks.



Kirin 9100 specs revealed: 6 nm chip tipped to launch with a Cortex-X1 prime core...A new leak has revealed important specifications of Huawei's upcoming Kirin 9100. It comes with an 8-core CPU manufactured on SMIC's N+3 (6 nm) node. For the GPU, it will use an in-house Maleoon 910 GPU from last year.

Rumours about Huawei's new SoC have been picking up steam ahead of the Mate 70 series' launch. While some have been pure fluff made up by Chinese netizens, others paint a more accurate picture. A Telegram user who goes by @spektykles has now revealed what look like the Kirin 9100's official specifications.

The chip is codenamed HiSilicon Baltimore and it comes with an 8-core CPU with 1x Cortex-X1 (2.67 GHz), 3x Cortex-A78 (2.32 GHz) and 4x Cortex-A55 (2.02 GHz). It is restricted to Arm v8 (and older CPU cores) due to ongoing sanctions. For the GPU, it uses a Maleoon 910, presumably the same one found on last year's Kirin 9010.

Performance-wise, the Kirin 9100 will trade blows with the Snapdragon 8 Gen 2 in CPU and Snapdragon 8+ Gen 1 in GPU. It will be manufactured on SMIC's N+3 node (6 nm) and not 5 nm as previously rumoured. This will result in a bump in power efficiency, placing it between the Snapdragon 8 Gen 2 and 8 Gen 3. It is a bit too early to estimate its benchmark numbers, but a leak from earlier this year said it scores around 1.1 million points in AnTuTu.

mundophone

 

DIGITAL LIFE


Moving Quantum Computing from Hype to Prototype

Researchers address quantum computing security challenges

Alongside artificial intelligence, quantum computing is one of the fastest-growing fields in the high-performance computing community. But what happens when this relatively new and powerful computing method outpaces the cyberinfrastructure and network security capabilities of today?

Researchers at the National Center for Supercomputing Applications are addressing this issue before it becomes a problem.

"The problem is urgent because practical quantum computers will break classical encryption in the next decade," said NCSA Research Scientist Phuong Cao.

"The issue of adopting quantum-resistant cryptographic network protocols or post-quantum cryptography (PQC) is critically important to democratizing quantum computing. The grand question of how existing cyberinfrastructure will support post-quantum cryptography remains unanswered."

Cao and Jakub Sowa, a University of Illinois Urbana-Champaign undergraduate student and participant in the Illinois Cyber Security Scholars Program as well as the CyberCorps: Scholarship for Service, presented a paper on this topic at September's IEEE International Conference on Quantum Computing and Engineering in Montreal.

Their findings proposed the design of a novel PQC network instrument housed at NCSA and the University of Illinois, and integrated as a part of the FABRIC testbed; showcased the latest results on PQC adoption rate across a wide spectrum of network protocols; described the current state of PQC implementation in key scientific applications like OpenSSH and SciTokens; highlighted the challenges of being quantum-resistant; and emphasized discussion of potential novel attacks.

"The main challenges of adopting PQC lie in algorithmic complexity and hardware, software and network implementation," Cao said. "This is the first large-scale measurement of PQC adoption at national-scale supercomputing centers and our results show that only OpenSSH and Google Chrome have successfully implemented PQC and achieved an initial adoption rate of 0.029% at this time."
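One concrete way to probe PQC support, in the spirit of such measurements, is to ask a local OpenSSH client which key-exchange algorithms it offers and look for the hybrid post-quantum one (sntrup761x25519-sha512@openssh.com, shipped since OpenSSH 8.5). This is an illustrative check of a single host, not the national-scale network instrument the paper describes:

```python
import shutil
import subprocess

# Hybrid Streamlined NTRU Prime + X25519 key exchange, OpenSSH's
# post-quantum option (default from OpenSSH 9.0 onward).
PQC_KEX = "sntrup761x25519-sha512@openssh.com"

def supports_pqc_kex() -> bool:
    """Return True if the local OpenSSH client lists the hybrid
    post-quantum key exchange among its supported algorithms."""
    if shutil.which("ssh") is None:
        return False  # no OpenSSH client installed
    try:
        out = subprocess.run(
            ["ssh", "-Q", "kex"],  # -Q queries supported algorithms
            capture_output=True, text=True, check=True,
        ).stdout
    except subprocess.CalledProcessError:
        return False  # very old clients don't support -Q
    return PQC_KEX in out.split()

print(supports_pqc_kex())
```

Scaling a check like this across protocols and sites is essentially what an adoption-rate measurement aggregates.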

Cao is the principal investigator for a project on "Quantum-Resistant Cryptography in Supercomputing Scientific Applications." This will enable a network instrument to measure the adoption rate of PQC and allow universities and research centers to switch to PQC in order to safeguard sensitive data and scientific research. The project will set a national example of migrating cyberinfrastructure to be quantum resistant and build public trust in the security of scientific computing by demonstrating the increased adoption rate over time.

Cao is joined by co-principal investigators and NCSA researchers Anita Nikolich, Ravishankar Iyer and Santiago Núñez-Corrales.

"Transitioning to PQC algorithms across sectors will be a lengthy process," Nikolich said. "Our work will be the first step to understanding the scope of the problem in the scientific infrastructure community. FABRIC touches multiple locations across the globe, which will give us good points of visibility into the challenge."

"Quantum computing's inherent uncertainty presents a unique opportunity to both obscure cryptographic computations and develop novel applications that exploit this uncertainty," Iyer said. "This proposal aims to explore similar challenges, leveraging NCSA's world-class computing resources to investigate new attacks targeting supercomputing workloads that were previously impractical."

"This project opens a new avenue into NCSA's quantum strategy. Potential future risks introduced by quantum technologies are already reconfiguring our understanding of the landscape of trust and security in advanced computing," Núñez-Corrales said.

"Mapping the adoption of PQC protocols will provide valuable information toward hardening cyberinfrastructure nationally. We anticipate this to be a significant and lasting contribution. In addition, and as collaborators within the Illinois Quantum Information Science and Technology Center (IQUIST), our project creates opportunities to interface the expertise of theorists in Quantum Information Science on campus with security concerns found in the regular operation of leadership-class supercomputing facilities."

"This project will provide valuable input to plans for transitioning SciTokens to PQC, ensuring that our federated ecosystem for authorization on distributed scientific computing infrastructures is prepared to resist quantum computing attacks," said Jim Basney, NCSA principal research scientist and principal investigator of the SciTokens project.

"Understanding the efficiency of token signing and verification, along with the impact on token length, will be essential for planning a smooth transition."

In August, the U.S. Department of Commerce's National Institute of Standards and Technology (NIST) finalized its principal set of encryption algorithms designed to withstand cyberattacks from a quantum computer. The results of an eight-year effort by NIST, these encryption standards are an example of the necessary commitment to future computing security, which Cao is involved in through the NIST High Performance Security Working Group.

Provided by National Center for Supercomputing Applications

 

DIGITAL LIFE



Google Claims World First As AI Finds 0-Day Security Vulnerability

An AI agent has discovered a previously unknown, zero-day, exploitable memory-safety vulnerability in widely used real-world software. It’s the first example, at least to be made public, of such a find, according to Google’s Project Zero and DeepMind, the forces behind Big Sleep, the large language model-assisted vulnerability agent that spotted the vulnerability.

If you don’t know what Project Zero is and have not been in awe of what it has achieved in the security space, then you simply have not been paying attention these last few years. These elite hackers and security researchers work relentlessly to uncover zero-day vulnerabilities in Google’s products and beyond. The same accusation of lack of attention applies if you are unaware of DeepMind, Google’s AI research labs. So when these two technological behemoths joined forces to create Big Sleep, they were bound to make waves.

Google Uses Large Language Model To Catch Zero-Day Vulnerability In Real-World Code...In a Nov. 1 announcement, Google’s Project Zero blog confirmed that the Project Naptime large language model assisted security vulnerability research framework has evolved into Big Sleep. This collaborative effort involving some of the very best ethical hackers, as part of Project Zero, and the very best AI researchers, as part of Google DeepMind, has developed a large language model-powered agent that can go out and uncover very real security vulnerabilities in widely used code. In the case of this world first, the Big Sleep team says it found “an exploitable stack buffer underflow in SQLite, a widely used open source database engine.”

The zero-day vulnerability was reported to the SQLite development team in October, which fixed it the same day. "We found this issue before it appeared in an official release," the Big Sleep team from Google said, "so SQLite users were not impacted."

AI Could Be The Future Of Fuzzing, The Google Big Sleep Team Says...Although you may not have heard the term fuzzing before, it’s been part of the security research staple diet for decades now. Fuzzing relates to the use of random data to trigger errors in code. Although the use of fuzzing is widely accepted as an essential tool for those who look for vulnerabilities in code, hackers will readily admit it cannot find everything. “We need an approach that can help defenders to find the bugs that are difficult (or impossible) to find by fuzzing,” the Big Sleep team said, adding that it hoped AI can fill the gap and find “vulnerabilities in software before it's even released,” leaving little scope for attackers to strike.
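In its simplest form, that idea fits in a few lines. The sketch below is a minimal random fuzzer over a deliberately buggy toy parser (both invented here for illustration); real fuzzers such as AFL or libFuzzer add coverage feedback and mutation strategies on top of this core loop:

```python
import random

def fuzz(parser, trials=500, max_len=64, seed=0):
    """Throw random byte strings at a parser and collect every input
    that triggers an unexpected exception (a crash, as opposed to a
    clean rejection of malformed input)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256)
                     for _ in range(rng.randrange(1, max_len)))
        try:
            parser(data)
        except ValueError:
            pass                  # rejecting bad input is expected
        except Exception:
            crashes.append(data)  # anything else is a bug
    return crashes

def buggy_parser(data: bytes) -> str:
    """Toy parser with a planted bug: it blows up on a 0xFF prefix."""
    if data[0] == 0xFF:
        raise IndexError("simulated crash")
    if not data.isascii():
        raise ValueError("not ASCII")
    return data.decode()

for bad_input in fuzz(buggy_parser):
    assert bad_input[0] == 0xFF  # every crash shares the planted bug
```

The limitation the Big Sleep team points at is visible even here: random inputs only hit bugs whose trigger conditions are statistically easy to reach, which is exactly the gap an LLM-guided agent aims to fill.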

“Finding a vulnerability in a widely-used and well-fuzzed open-source project is an exciting result,” the Google Big Sleep team said, but admitted the results are currently “highly experimental.” At present, the Big Sleep agent is seen as being only as effective as a target-specific fuzzer. However, it’s the near future that is looking bright. “This effort will lead to a significant advantage to defenders,” Google’s Big Sleep team said, “with the potential not only to find crashing test cases, but also to provide high-quality root-cause analysis, triaging and fixing issues could be much cheaper and more effective in the future.”

The Flip Side Of AI Is Seen In Deepfake Security Threats...While the Big Sleep news from Google is refreshing and important, as is that from a new RSA report looking at how AI can help with the push to get rid of passwords in 2025, the flip side of the AI security coin should always be considered as well. One such flip side is the use of deepfakes. I've already covered how Google support deepfakes have been used in an attack against a Gmail user, a report that went viral for all the right reasons. Now, a Forbes.com reader has got in touch to let me know about some research undertaken to gauge how AI technology can be used to influence public opinion. Again, I covered this recently as the FBI issued a warning about a 2024 election voting video that was actually a fake backed by Russian distributors. The latest VPNRanks research is well worth reading in full, but here are a few handpicked statistics that certainly get the grey cells working.

50% of respondents have encountered deepfake videos online multiple times.

37.1% consider deepfakes an extremely serious threat to reputations, especially for creating fake videos of public figures or ordinary people.

Concerns about deepfakes manipulating public opinion are high, with 74.3% extremely worried about potential misuse in political or social contexts.

65.7% believe a deepfake released during an election campaign would likely influence voters’ opinions.

41.4% feel it’s extremely important for social media platforms to immediately remove non-consensual deepfake content once reported.

When it comes to predictions for 2025, global deepfake-related identity fraud attempts are forecast to reach 50,000, and in excess of 80% of global elections could be impacted by deepfake interference, threatening the integrity of democracy.

Davey Winder

 

DIGITAL LIFE


FBI Issues Warning On Fake Election Security Videos, What You Need To Know

Over the weekend, the Federal Bureau of Investigation (FBI) issued a warning that deceptive videos are in circulation "falsely claiming to be from the FBI relating to election security." Two misleading videos, pertaining to ballot fraud and to Kamala Harris' husband, have been identified as false, with no further action taken beyond the announcement.

Like it or not, every presidential election period has marketing that lays it on so thick that it can get hard to separate the wheat from the chaff. From moving messages to poor acting (and everything in between), U.S. citizens are treated to all kinds of information intended to sell some kind of political angle. Unfortunately, politically motivated fake information, often spread alongside doctored imagery, continues to be part of the mix.

Ahead of the election, the FBI called out two videos—currently in circulation claiming to be FBI security videos—as fakes, stating that the content "they depict is false." One of the videos claims that the FBI has apprehended three linked groups committing ballot fraud, while the other relates to Second Gentleman Douglas Emhoff.

According to the FBI's post on X, it is currently "working closely with state and local law enforcement partners to respond to election threats and protect our communities as Americans exercise their right to vote." 

This post comes a day after news broke that the Office of the Director of National Intelligence (ODNI), the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA) intercepted a video that falsely showed individuals claiming to be from Haiti voting illegally in several counties in Georgia. Another video accused an individual associated with the Democratic presidential campaign of receiving a bribe from a U.S. entertainer. It was found that both videos were created by Russian influence actors in order to spread further division among Americans.

In any case, the FBI encourages everyone to stay vigilant when it comes to seeking election and voting information. Reliable sources, such as local election offices, would be your best bet. The FBI also asks anyone who suspects criminal activity to contact their state/local law enforcement or local FBI field office.

mundophone

Monday, November 4, 2024

 

TECH



Amazon's new Kindle Colorsoft e-reader slammed over screen discoloration issues

Amazon's first-ever color e-reader, the $280 Kindle Colorsoft Signature Edition, has recently gone on sale, and a lot of people aren't happy with it. According to reports, some users are discovering a discolored, yellow strip appearing at the bottom of the device, leading to a slew of negative reviews.

Amazon introduced the Kindle Colorsoft last month. Featuring a 7-inch display (300 PPI black and white, 150 PPI color), Amazon said the device features an oxide backplane with custom waveforms to boost performance and contrast, while nitride LEDs and custom algorithms help enhance color and improve brightness without washing out fine details.

A less welcome element in the Kindle Colorsoft that's being discovered by several buyers is the weird yellow tinge that appears at the bottom of the screen. There are also reports of discoloration along the vertical edge.

As highlighted by The Verge, one Redditor said it's easy to notice the yellow hue when the page is supposed to be evenly lit and colored like a piece of paper. Another said they only noticed it when using the Colorsoft's edge lighting.

There are numerous 1-star reviews of the Kindle Colorsoft on its Amazon page, almost all of which mention the yellow banding. There are also complaints about dead pixels and the general blurriness of some text. Of the 447 global reviews, the largest share, 32%, gave a single star, with the next most common rating being 2 stars. As a result, the e-reader currently has a customer rating of just 2.6 out of 5.
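Those figures are consistent with a simple weighted average over the star histogram. The per-star counts below are hypothetical, reconstructed only to match the reported shape (447 reviews, roughly 32% one-star, 2-star next most common), not Amazon's actual data:

```python
def average_rating(histogram: dict) -> float:
    """Weighted mean of a star-rating histogram {stars: count}."""
    total = sum(histogram.values())
    return sum(stars * count for stars, count in histogram.items()) / total

# Hypothetical per-star counts summing to 447 reviews:
reviews = {1: 143, 2: 112, 3: 54, 4: 49, 5: 89}
print(round(average_rating(reviews), 1))  # -> 2.6
```

Any distribution this bottom-heavy lands well under the roughly 4-plus-star average shoppers expect from an established Kindle product line.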

A lot of angry buyers are exchanging their Kindle Colorsoft units or just returning them for a refund. While we still don't know if this is a software or hardware problem, a screenshot on Reddit shows an Amazon customer service agent confirming that the company is aware of the issue and is working on a fix. Interestingly, several users in another thread reported that the problem did not appear until after an initial software update.

Amazon spokesperson Jill Tornifoglio told The Verge that customers who notice the yellow banding on their device can reach out to Amazon's customer service team.

It's unclear how widespread the issue is, but it's certainly affecting more than a handful of Colorsoft units. Some say that the yellow tinge is very difficult to spot – The Verge said the discoloration is more obvious in pictures than in real life – but it's not something you'd expect to find in a new device, especially one that costs $280.

mundophone
