Wednesday, November 12, 2025


TECH


Tesla video shows how FSD technology can "Survive" road accidents

Tesla's telemetry data reveals that vehicles driving with "Full Self-Driving" engaged averaged 4.92 million miles between accidents over the past year, roughly seven times the American national average.

Tesla published an impressive video on social media demonstrating the significant advances of its "Full Self-Driving" (FSD) autonomous driving system, accompanied by statistics that reveal a drastic reduction in the accident rate when compared to traditional human driving.

The video published by Tesla shows various driving situations, urban and otherwise, in which vehicles are driven fully autonomously through complex scenarios: intersections with obstacles such as pedestrians and cyclists, a marathon, misjudged overtaking maneuvers that left oncoming vehicles in the wrong lane, and other abnormal occurrences. These images seek to convey the company's confidence in the maturity of the technology, which has been continuously improved through over-the-air updates based on the billions of miles driven by the global fleet.

To reinforce this confidence, the company uses all the data collected by its connected vehicles to understand the different ways accidents occur and to develop features that help drivers mitigate or avoid them altogether. Since October 2018, Tesla has voluntarily released quarterly safety data to provide critical information to the public.

According to this telemetry data, between November 2024 and November 2025 Tesla vehicles with the FSD system active recorded an average of 4.92 million miles driven between accidents, a figure that contrasts with 2.23 million for vehicles equipped only with active safety systems, 1.01 million for vehicles without any active safety systems, and just 0.70 million miles for the national average in the United States.

These numbers mean that Teslas with FSD enabled are involved in accidents roughly one-seventh as often, per mile, as the average vehicle on American roads. Even Teslas without FSD active, but with the brand's safety systems, prove significantly safer than the national average, registering more than three times the miles traveled between accidents.
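As a quick check on the "about seven times" figure, the ratios follow directly from the numbers above. A minimal Python sketch (the figures are those quoted in the article; the rounding is ours):

# Average miles driven between accidents (in millions), as reported by Tesla
# for the period November 2024 to November 2025.
miles_between_accidents = {
    "FSD active": 4.92,
    "Tesla, active safety features only": 2.23,
    "Tesla, no active safety features": 1.01,
    "US national average": 0.70,
}

baseline = miles_between_accidents["US national average"]

for category, miles in miles_between_accidents.items():
    ratio = miles / baseline
    print(f"{category}: {miles:.2f}M miles between accidents ({ratio:.1f}x the US average)")

# FSD active works out to roughly 7.0 times the national average, and Teslas with
# only active safety features to roughly 3.2 times, in line with the figures above.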

However, it is important to emphasize that, despite the technological advances and statistics presented by Tesla, European and Portuguese legislation still does not allow fully autonomous driving on public roads. Even in the most recent Tesla models sold in Europe, including those with the designation "Full Self-Driving Capability," the system operates in supervised mode, requiring the driver to keep their hands on the steering wheel and be attentive to the road at all times.

European regulations classify these systems as level 2 autonomy, which means they are considered advanced driver assistance systems, but not autonomous driving systems. The driver always retains full responsibility for controlling the vehicle and must be prepared to intervene at any time.

mundophone

 

DIGITAL LIFE


“The danger of AI is not the machine — it’s our laziness to think”

While the world discusses the advancement of artificial intelligence, Spanish journalist and writer Laura G. de Rivera issues a simple but powerful warning: the greatest risk lies not in the technology, but in human stupidity. Her book Slaves of the Algorithm is a manifesto against the blind dependence on automated systems — and an invitation to recover something that seems to be on the verge of extinction: critical thinking.

Imagine you decide to go out to dinner. Your partner may not know what you want to eat, but the AI does, because in the afternoon you watched a taco video on Instagram. This type of behavior, says Rivera, shows how much we cede control of our decisions to machines that only analyze data and patterns.

“If we don’t make decisions, others will make them for us,” she writes in the book. And who are these “others”? Digital platforms, technology companies, and systems that learn from our clicks, searches, and likes.

Research by psychologist Michal Kosinski of Stanford University has already shown that an algorithm can predict your preferences more accurately than your mother. It seems practical, but Rivera sees a high price for this convenience: "We lose freedom, the ability to be ourselves — and even imagination."

According to the author, we live in a reality where "we work for free for Instagram." Every photo posted, every like, and every second of scrolling feeds systems that transform our data into profit — but without us realizing it.

The problem, Rivera explains, is that we have become lazy. We no longer think in waiting rooms, we don't get bored, we don't stay alone with our ideas. "We pick up our cell phones all the time, and the moments that used to serve for reflection have been taken over by constant stimuli," she says.

Her proposal for resistance is almost ironic in its simplicity: thinking. It's not about abandoning technology, but about regaining awareness of what we do online. "Only critical thinking can defend individual freedom against algorithmic control."

Rivera believes the first step is understanding how the platforms work. “Many people don’t realize that by spending hours on TikTok, they are working for the company. Behavioral data has economic value — that’s why Google is one of the richest companies in the world, even without charging for its services.”

For her, the solution lies in digital education and transparency. Learning to read the “terms of use,” rejecting cookies when possible, and limiting the sharing of personal information are small gestures that help curb the abusive collection of data.

The real danger is not AI, it's human complacency..."Artificial intelligence won't do anything on its own; it's just a sequence of zeros and ones," says Rivera. "The real danger is our laziness."

She criticizes what she calls the "numbing of human will": we accept being watched, monitored, and influenced because it's easier to let technology think for us. "We prefer to receive orders. It's an old fear of freedom."

The writer cites the philosopher Erich Fromm, author of Fear of Freedom, who argued as early as the 20th century that human beings fear deciding for themselves. "Today, we have simply replaced the boss or the State with the algorithm," Rivera summarizes.

When the computer decides for you...The danger of blindly trusting automated systems is not theoretical. Studies show that people tend to believe more in an answer given by a computer than in their own intuition — even when the result is absurd.

Rivera warns of the risk of delegating critical decisions to algorithms, especially in areas such as health, public safety and justice. "When we let an AI decide, we may be handing over life-or-death issues to a system that only understands statistics."

Whistleblowers and resistance within big tech companies...The journalist recalls emblematic cases of professionals who confronted tech giants to denounce abuses. Among them:

-Edward Snowden, who revealed the mass surveillance scheme of US agencies;

-Sophie Zhang, a former Facebook employee, who warned about the use of fake accounts by governments to manipulate public opinion [https://en.wikipedia.org/wiki/Sophie_Zhang_(whistleblower)];

-Timnit Gebru, fired from Google after denouncing racial and gender discrimination in algorithms;

-Guillaume Chaslot, a former YouTube employee, who showed how the recommendation system pushed users towards radical content and conspiracy theories [www.linkedin.com/in/guillaume-chaslot-6774b982].

These cases, says Rivera, show that the problem is not in the machines, but in the people who control them — and in the lack of ethics of companies that prioritize engagement and profit above all else.

How to resist algorithmic manipulation...For Rivera, it's not possible to completely disconnect—but it is possible to make the platforms' work more difficult. She suggests some simple measures:

-Use browsers that block tracking (see the sketch after this list);

-Reject cookies whenever possible;

-Control the time spent on social networks;

-And, above all, understand the business model behind each application.
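On the first point, the mechanism behind tracker-blocking browsers and extensions is simple: the hostname of each outgoing request is compared against a blocklist of known tracking domains, and matching requests are dropped. The Python sketch below only illustrates that matching step; the domain names are invented placeholders, not a real blocklist.

from urllib.parse import urlparse

# Illustrative placeholder domains; real blockers ship curated, regularly updated lists.
BLOCKLIST = {
    "tracker.example.com",
    "ads.example.net",
    "analytics.example.org",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's hostname, or any parent domain of it, is blocklisted."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # "cdn.tracker.example.com" should also match the entry "tracker.example.com".
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLOCKLIST)

for url in ("https://ads.example.net/pixel.gif", "https://news.example.com/article"):
    print(url, "-> blocked" if is_blocked(url) else "-> allowed")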

"When you understand the game, you're no longer a pawn," she says. "Knowledge is the only form of resistance."

Creativity, empathy, and solidarity: what AI will never have...Despite the criticism, Rivera is not against technology—but she reminds us that AI will never be able to create something genuinely new or compassionate.

"A computer program cannot invent what does not exist in the data. It lacks creativity, empathy, or solidarity. These are human qualities, and they are exactly what we need to preserve."

Thinking is the new act of rebellion...Laura G. de Rivera doesn't want the reader to flee from technology—she wants them to reclaim the power to decide. For her, resisting the algorithm begins with a simple, almost banal, but revolutionary gesture: thinking before sliding a finger across the screen.

After all, as she concludes, "artificial intelligence may be powerful, but nothing is more dangerous than human stupidity when it ceases to think for itself."

mundophone

Tuesday, November 11, 2025


DIGITAL LIFE


Why is AI making so many people anxious at work?

Amid predictions of machines surpassing the human brain and warnings about a possible financial bubble, the future of artificial intelligence seems surrounded by uncertainty. Meanwhile, on the other side of the screen, workers in various sectors are dealing with a more immediate symptom: anxiety about AI.

Fear itself is not new. With each new technological wave that alters the way we produce and work, the fear of replacement reappears. But the speed of change, the feeling of lack of control, and the promises that oscillate between the fantastic and the catastrophic can make this moment particularly distressing.

In the US, the discussion about anxiety generated by AI has escalated in the face of the recent wave of layoffs sweeping large companies. According to a report by the consulting firm Challenger, Gray & Christmas, October saw the highest number of job cuts announced by American companies for that month in more than two decades. Cost reduction and the adoption of AI appear among the main official reasons.

Some companies have publicly mentioned artificial intelligence when justifying the layoffs. Amazon, for example, claimed that the reduction of 14,000 jobs was part of an effort to make the company "leaner" and thus better take advantage of the opportunities brought by technology.

Some question whether these cuts are actually related to AI or whether the technology has become a good justification for reducing costs. Be that as it may, the uncertainty itself is already a trigger for anxiety associated with artificial intelligence — and not just that linked to the risk of losing one's job. But is it possible to deal with this unease?

1. Where does AI anxiety come from?...Beyond the fear of being replaced, discomfort with change, pressure to adapt to something new, and concerns about the future are some of the forces that shape the anxiety generated by AI.

In a recently published article in the Harvard Business Review, workplace mental health consultant Morra Aarons-Mele points out that AI represents an unprecedented threat to white-collar jobs. However, there are doubts about how transformative the technology will be. At the top of companies, leaders claim they are under pressure to adopt it in some way.

The consultant lists some factors that help explain where this anxiety-inducing environment comes from. The first is the lack of control over change. While a few companies decide the course of humanity's future with AI, regulation and the rest of society seem to be playing catch-up. The feeling is one of a certain powerlessness.

Then, there is a loss of meaning. If decisions and creative tasks begin to be outsourced, values such as autonomy and the construction of meaning in work tend to become empty. Meanwhile, there is the risk of creating a relationship of dependence on these tools.

2. The pressure to adapt...Another way to look at the anxiety generated by AI is through pressure: the spread of a discourse that adopting it is urgent, that it should have happened yesterday, and that lagging behind is a risk. It seems that even those who do not see the point of these tools feel an obligation to find a way to use them.

This idea of the inevitability of AI is reinforced by the very companies that sell these systems, with speeches from their leaders about how the world will soon no longer be the same. This pressure affects not only professionals but also businesses that have sought to adopt artificial intelligence in recent years. The results, however, do not always follow.

In August, a report published by researchers at the Massachusetts Institute of Technology (MIT) pointed out that 95% of companies with AI pilot projects still did not have a return on their investments. The study, among other points, showed that more than half of corporate investments in AI are concentrated in the areas of sales and marketing. The technology, however, has actually generated a return on investment (ROI) in "less glamorous" sectors such as finance, operations and legal, which operate behind the scenes.

For Michelle Schneider, partner at the consulting firm Signal & Cipher and author of "The Professional of the Future," the lack of clarity about what really changes with AI (and when) is part of what generates anxiety.

"There's a huge demand from boards and leadership that we need to work with AI, but little clarity on how to do it. That's why I think the profound change in work won't be so quick," says Schneider, who believes that the transformation generated by AI will be radical, but also gradual. "Some experts say that artificial general intelligence (which approaches human capabilities) is knocking on the door, and others that this will only happen around the middle of the century. The truth is that nobody is sure."

3. Is it possible to reduce anxiety?...If the anxiety generated by AI is triggered by work-related uncertainties, the suggestion from the partner at Signal & Cipher is to regain a sense of autonomy. Even if the broad implications of artificial intelligence escape some kind of individual control, she reminds us that it is possible to think about how to plan and direct one's own career. Doing so, she says, helps reduce fear.

"Have you asked ChatGPT how AI will impact your career?" she suggests, as a step towards reflecting on one's own trajectory. "What I mean is that it's possible to understand some signs for the coming years in order to think about career planning based on them. Projecting something into the more distant future, however difficult it may seem, helps us make better decisions in the present."

In the corporate environment, psychologist and consultant Milena Brentan believes that a significant portion of the anxiety generated by AI can be minimized when companies approach the adoption of the technology with transparency. This includes explaining the motivations, involving teams in testing, and listening to what is working and what is not.

— Companies that are handling this best are those that address the topic clearly, explain the reasons why, and invite people to experiment. When you make this move, you remove a good part of the uncertainty about what the company is planning, including the doubt about whether that position will continue or not — explains the specialist in leadership development and organizational culture.

By Juliana Causin, reporter for GLOBO in São Paulo

 

TECH


Europe may move to block Chinese 5G technology by making the toolbox mandatory

Huawei and ZTE will be targeted by new security measures from the European Commission, according to a new report that mentions Commissioner Henna Virkkunen's contacts with member states to implement the 5G toolbox.

The news comes from Bloomberg and says that the vice president and commissioner for technological sovereignty and security has asked member states to "stop using high-risk manufacturers in mobile networks."

Sources close to the matter say that Commissioner Henna Virkkunen wants to transform the recommendations of the 5G Toolbox, defined in 2020, into a legal requirement, limiting the use of Chinese technology from companies like Huawei or ZTE.

The proposal would force member states to align their strategies with the European definitions, which until now have been only recommendations. Those that do not adopt the new rules could face infringement proceedings and financial penalties. Some countries, as well as telecommunications operators, have already taken steps to remove Chinese-branded equipment from their networks.

According to the information shared, these measures may apply not only to mobile networks but also to fiber optic networks, being a condition for the use of funds in infrastructure development.

In a statement, the European Commission stressed that network security is crucial for the economy. "The Commission urges Member States that have not yet implemented the 5G toolkit to also adopt relevant measures to address risks effectively and quickly," the statement reads. "The lack of swift action exposes the EU as a whole to obvious risk," the same source adds.

The European executive is concerned about the uneven approach of European countries to the adoption of the toolbox. Sweden and the United Kingdom have already banned Huawei and ZTE, but others, such as Spain and Greece, have continued to use the technology. In Portugal, operators have been reducing the use of Chinese equipment in their networks.

In April, Agne Vaiciukeviciute, a researcher at the Consumer Choice Center Europe, published an article in The Brussels Times, where she harshly criticizes the lack of stronger action from the European Commission in this area, capable of standardizing the policies of each country on this issue and focusing strategies more on user security than on economic or political priorities, as is still the case today.

Commissioner Henna Virkkunen is in Portugal for the Web Summit and yesterday visited the Deucalion supercomputer, installed on the Azurém campus of the University of Minho (UMinho), in Guimarães. Today she will be on the main stage of the summit in Lisbon for the session "Europe's tech sovereignty: Powering startups and innovation", at 4:40 PM (Portugal time).

mundophone

Monday, November 10, 2025

 

AMAZON


Amazon's Early Black Friday Deals: Big Savings on Headphones...

The holiday shopping season starts now. Let this be your roadmap to savings with the best tech deals from Amazon, featuring top brands like Bose, Fitbit, and Google.

Google Nest Cam Indoor 2K Wired 3rd Gen Camera (Snow) $99.98

Ring Indoor Cam 2nd Gen 1080p Camera With Cover $79.99

Arlo Essential VMC3250 2nd Generation Wireless Cam $119.99

Ring Pan-Tilt Indoor Cam With 2nd Gen Indoor Cam (White) $99.99

Eufy Security Solo IndoorCam P24 2K Indoor Camera $34.99

Beats Studio Pro Wireless Noise Cancelling Headphones $249.95

Apple AirPods Max ANC Wireless Headphones (Midnight) $499.99

Bose QuietComfort Ultra Wireless Noise Cancelling Headphones $329.00

Sony WH-1000XM5 Wireless Noise Canceling Headphones $328.00

Sennheiser Momentum 4 Wireless ANC Over-Ear Headphones $249.95

Apple Watch Series 10 (GPS, 46mm, M/L Sports Band) $309.99

Samsung Galaxy Watch 8 (40mm, Bluetooth, Graphite Band) $349.99

Amazfit Balance GPS Smartwatch (Black) $139.99

Fitbit Charge 6 Fitness Tracker With 6-Months Membership $99.95

T-Mobile—iPhone 17 Pro on Us With Any Condition Trade-in and Qualifying Plan

Samsung Galaxy S25 Edge 512GB Unlocked AI Phone $729.99

Google Pixel 9 128GB Unlocked Phone (Obsidian) $544.98

Nothing Phone (3) 256GB Unlocked Phone (Black) $678.99

Motorola Edge 256GB Unlocked Phone (2024) $249.99

Apple AirPods Pro 2 ANC Earbuds With USB-C Charging Case $169.99

Samsung Galaxy Buds 3 Pro ANC Wireless Earbuds $179.99

Beats Powerbeats Pro Wireless Earbuds $169.95

Google Pixel Buds Pro 2 Wireless Earbuds (Hazel) $169.00

Nothing Ear Wireless Earbuds With ChatGPT $129.00



 

TECH


'Anti-woke': US big tech companies stop publishing diversity data

Three of the largest technology companies in the United States – Google, Microsoft, and Meta – have decided to stop publishing annual reports on diversity, equity, and inclusion (DEI), ending a decade-long practice that began with pressure from civil rights movements.

Google, which had been publishing statistics on the racial and gender composition of its workforce since 2014, informed employees that it does not intend to release new data in 2025; the other two giants have confirmed similar moves.

The suspension marks a significant loss of transparency in the sector and contrasts with decisions by peers such as Apple, Amazon, and Nvidia, which maintained the practice in 2025. Apple and Amazon continued to disclose data to the US government through EEO-1 reports, short for Equal Employment Opportunity, required of companies with more than 100 employees.

The decision comes months after Donald Trump's return to the US presidency in January 2025. The Republican issued an executive order directing federal agencies to combat "illegal private sector preferences" linked to DEI, which includes the possibility of lawsuits against companies that consider identity as a hiring criterion.

Diversity reports have become an important tool for lawsuits and public campaigns. In 2024, the Equal Employment Opportunity Commission (EEOC) released a study with data from more than a decade indicating that "discrimination likely contributes to the low representation of women, Black people, Hispanics, and older people in the technology sector."

The Future of Diversity: Regression or Reconfiguration?...The debate about representation and the presence of diverse groups in the arts and culture shows no clear signs of closure or total reversal. What is perceived in 2025 is a phase of adjustment and reassessment, with different sectors and audiences testing new limits and guiding strategies in the face of ongoing political and social pressures.

Market movements, coupled with research on audience behavior, show that diversity tends to consolidate in a more strategic way and less dependent on trends or current pressures, becoming an integral part of creative processes and commercial decisions.

Regardless of conservative waves or criticism of the so-called "woke" agenda, the economic and symbolic potential of representation continues to act as a driving force for innovation and the construction of new spaces for cultural expression. The current environment may be one of caution, but it does not point to the erasure of achievements, but rather to new dialogues about how and why to represent the multiplicity of contemporary society.

Umbilical alignment..."What is happening today is a true collusion between big tech companies and the Trump administration...What big tech companies want today is to have superpowers to be above any state in the world; recently, the European Union imposed severe fines on Google, but the Donald Trump administration has already intervened to shield it, and even now the fines have been suspended...This shows that this umbilical alignment of tech companies with Trump simply serves to give them even more power than they already have today and to make them immune to any kind of interference from any government in the world that could threaten their hegemonic position..." writes the site mundophone.

mundophone


DIGITAL LIFE


China removes major gay dating apps from digital stores in new offensive against LGBTQ+ platforms

The gay dating apps Blued and Finka have been removed from Apple's App Store and several Android app stores in China following a ruling by the country's Cyberspace Administration, China's main internet censorship and regulation body.

The removal was confirmed by Apple to Wired. "We respect the laws of the countries where we operate. Based on an order from the Cyberspace Administration of China, we have removed these two apps only from the Chinese store," a spokesperson said.

The person added that the apps had already been unavailable in other countries for some time: "Earlier this year, the developer of Finka chose to remove the app from stores outside of China, and Blued was only available in China."

Despite the removal, users who had already downloaded Blued and Finka can still access them - at least for now. The measure, however, reignites the debate about the growing siege imposed by the Chinese government on the LGBTQ+ community, which in recent years has seen the closure of specialized organizations and constant censorship of profiles on social media.

China decriminalized homosexuality in the 1990s, but the government does not recognize same-sex marriage.

Blued is controlled by BlueCity. In 2020, the company went public and reported that the gay dating app had over 49 million registered users and over 6 million monthly active users.

In the same year, as reported by Wired, BlueCity announced the acquisition of Finka, its main competitor in China. The company ceased trading on the stock exchange in 2022 and was acquired by the social media business Newborn Town, listed in Hong Kong.

A few years ago, BlueCity expanded its activities into the healthcare sector, launching a digital pharmacy service and a telemedicine clinic aimed at Chinese men. It also operates a non-profit organization dedicated to combating HIV/AIDS.

It is not yet known whether the removal of the apps in China is permanent. In previous cases, some services managed to return to the app stores after making changes required by censors.

Blued and Finka...Blued is China's leading gay dating app and at one point had 49 million registered users. Its parent company, BlueCity, bought Finka in 2020 for US$33 million and was acquired in 2022 by Newborn Town, a Hong Kong social media company.

In 2024, the international version of Blued was renamed HeeSay, popular in India, Pakistan, and the Philippines. The app remains available normally outside of China.


Homosexuality was decriminalized in China in the 1990s, but same-sex marriage remains without legal recognition. In recent years, LGBTQ groups have faced increased censorship and restrictions under the control of the Communist Party.

Reporter: Renata Turbiani, Brazil
