Friday, November 14, 2025


TECH


Samsung Electronics announces groundbreaking antioxidant index metric to be used in Galaxy Watch8

Samsung Electronics recently announced the Antioxidant Index as the central feature of its new Galaxy Watch8. This groundbreaking metric, according to the company, is capable of measuring carotenoid levels in the skin in five seconds, offering the user an objective indicator of their fruit and vegetable consumption.

This is the first time a wearable manufacturer has attempted to quantify a nutritional biomarker directly and non-invasively in a consumer device. Samsung positions the technology as a "portable health advisor," aiming to bridge the gap between monitoring physical activity and the real impact of diet.

Until now, the assessment of nutritional biomarkers, such as carotenoids, relied on complex and expensive laboratory tests, notably Raman spectroscopy, which uses bulky equipment.

Samsung claims to have solved this challenge after seven years of research. The “main breakthrough,” according to Jinyoung Park, Engineer on Samsung’s Digital Health Team, was the miniaturization of the technology. The new BioActive sensor performs reflectance spectroscopy, pairing multi-wavelength LEDs with a compact photodetector.

The measurement process requires the user to place their thumb on the sensor. In seconds, calibration algorithms interpret the light absorbed by the skin and estimate the level of carotenoids.
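
Samsung has not published its calibration method, but the general principle of multi-wavelength reflectance spectroscopy can be sketched in a few lines. Everything below (the wavelengths, the readings, the coefficients, and the linear model itself) is an illustrative assumption, not Samsung's algorithm:

```python
# Minimal sketch of multi-wavelength reflectance estimation.
# Wavelengths, readings, and coefficients are illustrative assumptions;
# Samsung has not published its calibration algorithm.
import numpy as np

# Hypothetical LED wavelengths (nm): carotenoids absorb strongly around
# 450-500 nm; longer wavelengths help control for blood and skin baseline.
WAVELENGTHS = [450, 480, 530, 590, 660]

def absorbance(reflectance: np.ndarray) -> np.ndarray:
    """Convert measured reflectance (0..1) into apparent absorbance."""
    return -np.log10(np.clip(reflectance, 1e-6, 1.0))

def estimate_carotenoid_score(reflectance: np.ndarray,
                              weights: np.ndarray,
                              bias: float) -> float:
    """Linear calibration model: score = w . A + b.

    In practice, the weights would be fitted against a laboratory
    reference method (such as Raman spectroscopy) on a diverse cohort.
    """
    return float(weights @ absorbance(reflectance) + bias)

# Toy example with fabricated values, for illustration only.
reading = np.array([0.42, 0.47, 0.61, 0.70, 0.78])   # reflectance per LED
weights = np.array([35.0, 30.0, -10.0, -5.0, -2.0])  # assumed calibration
for wl, r in zip(WAVELENGTHS, reading):
    print(f"{wl} nm: reflectance {r:.2f}")
print("estimated score:", round(estimate_carotenoid_score(reading, weights, bias=5.0), 1))
```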

Why measure carotenoids? Carotenoids are natural pigments (red, orange, yellow) that the human body does not produce, obtaining them exclusively through diet, especially fruits and vegetables. Their level in tissues therefore reflects the consumption of these foods.

These compounds function as antioxidants, essential for neutralizing reactive oxygen species which, in excess, contribute to aging and increase the risk of chronic diseases.

“Antioxidant management is essential to slow down aging,” says Dr. Hyojee Joung, a specialist at Seoul National University, who collaborated on the development. The Antioxidant Index translates this complex measurement into three simple categories: Very Low, Low, or Optimal (corresponding to 100% or more of the WHO recommendation of 400g/day).
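
On the software side, that final step is simple thresholding. The article only specifies that “Optimal” corresponds to 100% or more of the WHO's 400g/day recommendation; the boundary between “Very Low” and “Low” below is an assumed value for illustration:

```python
# Sketch of mapping estimated intake (as % of the WHO 400 g/day target)
# to the three categories described in the article. The 50% boundary is
# an assumption; only the 100% "Optimal" cutoff is stated by Samsung.
def antioxidant_index(percent_of_who_target: float) -> str:
    if percent_of_who_target >= 100.0:
        return "Optimal"
    if percent_of_who_target >= 50.0:  # assumed threshold
        return "Low"
    return "Very Low"

print(antioxidant_index(120.0))  # -> Optimal
print(antioxidant_index(60.0))   # -> Low
print(antioxidant_index(20.0))   # -> Very Low
```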

The challenge of inclusive accuracy...One of the biggest technical obstacles in optical measurement through the skin is the interference of melanin, which varies dramatically between different skin tones.

To ensure data reliability, Samsung engineers opted to perform the measurement using the fingertip. This area, according to the company, has a consistently low concentration of melanin in all individuals. Additionally, the system requires light finger pressure, which momentarily reduces blood flow and improves the accuracy of the optical signal.

Samsung states that the technology, incorporated into the Galaxy Watch8, was validated in clinical trials with hundreds of participants at the Samsung Medical Center to ensure its effectiveness in a diverse population.

Antioxidant Index: the impact on preventive health...The Antioxidant Index is not an immediate reflection of the last meal. “Carotenoids accumulate in tissues gradually,” explains Dr. Joung, noting that one to two weeks of dietary change are necessary for the index to reflect this change.

Samsung wants the metric to function as an indicator of overall well-being, also influenced by sleep, stress, and physical activity.

By transforming a complex biological data point into an actionable daily metric, the company is betting on the gamification of preventative health. “New sensors in wearable technologies can play a key role in promoting healthy eating habits,” concludes Professor Yoonho Choi of the Samsung Medical Center.

It remains to be seen how the market and the medical community will react to the accuracy of the functionality outside of a controlled environment, but, in any case, this Samsung innovation could signal a new era for wearables: that of biochemical monitoring.

mundophone



TECH




IDC: Europe leads global communications growth with a new bet on 5G and AI

Despite a persistent climate of economic uncertainty, the global telecommunications and pay-TV services sector is showing resilience. According to the latest report from consulting firm IDC, global revenues in this market are expected to reach US$1.53 trillion this year, representing year-on-year growth of 1.7%.

The most notable finding of the analysis is the role of Europe, the Middle East, and Africa (EMEA). This is the region of the world where communications revenue growth will be most significant, with a projected expansion of 3.2%, reaching a total value of US$477 billion. These figures reflect a slight upward revision of IDC's initial projections, highlighting the strength of the European market in this sector.

The sustainability of this growth does not come from traditional services, but from operators adapting to the digital age.

Mobile Services Dominate: Mobile services continue to be the main revenue generator. This growth is driven by two key phenomena: increased data consumption (for streaming, social media, and browsing) and the expansion of M2M (Machine-to-Machine) applications. These new revenue streams are effectively offsetting the continued decline in traditional voice and messaging revenues.

Broadband on the Rise: Fixed data services should also maintain a healthy growth trajectory, driven by rising demand for high-speed broadband and upgrades to ever-faster tiers.

Pay TV: The only area showing a slowdown in revenue growth is pay TV, which is feeling the strong competition from streaming platforms (Netflix, Amazon Prime, etc.).

The contrast of Asia-Pacific and the rise of India...The growth outlook is not universal. Projections for the Asia-Pacific region have been revised downwards. IDC points to economic uncertainty in crucial countries such as China, Japan, and Indonesia as the main cause of this slowdown.

However, India emerges as a bright spot, registering exceptional growth in average revenue per mobile user (ARPU). This suggests that, while the Asian market faces economic challenges, monetizing data services in markets like India remains very lucrative for operators.

Despite the market's resilience, IDC predicts that global growth will be capped at an annual rate of 1.5% over the next five years. The climate of economic uncertainty, geopolitical tensions (such as the war in Ukraine), and political instability continue to be restraining factors.
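
Compounding this year's US$1.53 trillion at that 1.5% cap gives a rough sense of the ceiling IDC is describing. A back-of-the-envelope sketch, not an IDC forecast table:

```python
# Project global revenue under IDC's ~1.5% annual growth cap, starting
# from the article's 2025 figure (US$1.53 trillion).
def project(revenue: float, rate: float, years: int) -> list[float]:
    values = [revenue]
    for _ in range(years):
        revenue *= 1 + rate
        values.append(revenue)
    return values

for year, value in enumerate(project(1.53, 0.015, 5), start=2025):
    print(f"{year}: US${value:.2f} trillion")
```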

Faced with these challenges, IDC expects telecom operators to increasingly focus on internal strategies for improving margins and operational efficiency. This is where Artificial Intelligence (AI) becomes crucial:

Cost Optimization: Large companies in the sector are investing in AI to optimize network operations (predictive maintenance), improve customer service (automated support systems), and prevent fraud.

Smart Monetization: Technology also has the potential to generate new revenue and improve margins through advanced fraud detection systems and offer personalization, with dynamic price adjustments assisted by AI.
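
To make the fraud-detection idea concrete: the simplest version of the principle flags usage that deviates sharply from an account's own baseline. Real operator systems rely on far richer features and learned models; this sketch only illustrates the underlying statistical idea:

```python
# Toy anomaly detector: flag accounts whose daily usage sits far outside
# their own historical baseline (a z-score test). Production fraud systems
# use learned models over many features; this shows only the principle.
import statistics

def is_anomalous(history: list[float], today: float, z_threshold: float = 3.0) -> bool:
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_threshold

# Example: minutes of international calls per day for one account.
baseline = [3.0, 5.0, 4.0, 6.0, 5.0, 4.0, 5.0]
print(is_anomalous(baseline, 4.0))    # False: an ordinary day
print(is_anomalous(baseline, 180.0))  # True: a possible fraud signal
```
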
In short, the growth of the communications sector is ensured by the transition to data and broadband. But the future of operators' profit margins will be increasingly determined by their ability to invest in AI to increase internal efficiency and create more personalized services.

IDC

Thursday, November 13, 2025

 

DIGITAL LIFE


Malicious codes already use AI to rewrite themselves and escape detection systems

According to Futurism, a new wave of malware is beginning to use artificial intelligence to modify its own code and bypass antivirus software and firewalls. The phenomenon represents a worrying shift in the race between cybercriminals and digital security companies: for the first time, there are records of programs that "think" and reconfigure themselves to avoid being detected.

The warning came from cybersecurity researchers who observed attacks in which the AI incorporated into the malware generates almost instantaneous variations of the original code, altering sections that would be recognized by defense systems. Instead of simply copying patterns, these versions can rewrite entire functions, rendering the traditional signatures used by protection software useless.

The nightmare of antivirus systems...According to Futurism, the technique combines generative models with reinforcement learning mechanisms, which allows the malware itself to test versions of itself until it finds one capable of escaping the imposed barriers. It's as if the digital virus has learned to evolve through trial and error, a behavior previously restricted to research simulations.

Security companies claim that the advancement of so-called "generative malware" may force the sector to rethink the entire logic of digital defense, which is built on fixed detection patterns. In specialized forums, analysts discuss the creation of countermeasures also based on AI, capable of identifying anomalous behavior instead of just searching for suspicious code snippets.

The concern is that, given this capacity for continuous mutation, the boundaries between attack and defense will become even more blurred — and that the future of cybersecurity will depend on a permanent dispute between rival artificial intelligences.

How it works:

-Self-Adapting Malware: The new generation of malware is not static. These programs use built-in large language models (LLMs) or embedded AI to modify their own code at runtime (during the execution of the attack).

-Advanced Polymorphic Evasion: While polymorphic malware has been around for some time, AI allows for much more sophisticated, real-time code variation, rendering traditional signature-based antivirus solutions (which look for specific code patterns) ineffective.

-Code Obfuscation: Cybercriminals exploit AI to obfuscate and rewrite parts of malicious code, altering its appearance without changing its functionality, making it difficult for security tools to detect.

-Just-in-Time AI: Google's Threat Intelligence Group (GTIG) has identified the active use of "just-in-time" AI in cyber operations, where malware adapts its behavior in real time to deceive defenders and harvest data.

Examples and Tools:

-Dark Web Tools: Platforms like "WormGPT" and "FraudGPT" are already accessible on the dark web. They allow cybercriminals to generate malicious code, create highly customized phishing campaigns, and bypass security, even without advanced coding skills.

-Real Campaigns: Research indicates the use of AI to generate and adapt malware in cyberattack campaigns, such as the "EvilAI" campaign, which disseminated malware through fake AI applications.

-Language Models: Cybercriminals exploit public LLMs, including Google's Gemini and open-source models on Hugging Face. They use these models to enhance every stage of their attacks, from reconnaissance to data theft.

The Experts' Warning...Researchers warn that current security software may struggle to keep up with these self-adapting malware. The cybersecurity industry needs to innovate rapidly to combat this growing threat, including using AI itself for defense.
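
One concrete building block of that behavior-over-signatures approach: heavily obfuscated or packed payloads tend to have very high byte entropy, so entropy is a common triage heuristic. A minimal sketch of this one signal (far from a complete defense):

```python
# Shannon entropy of a byte buffer as a triage signal: compressed,
# encrypted, or heavily obfuscated payloads approach 8 bits/byte,
# while ordinary text and code usually sit well below that.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.5) -> bool:
    """Flag buffers whose entropy suggests compression or encryption."""
    return shannon_entropy(data) > threshold

print(shannon_entropy(b"A" * 1024))    # 0.0: a uniform buffer
print(looks_packed(os.urandom(4096)))  # True: random bytes look "packed"
```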

mundophone

 

DIGITAL LIFE


United States and Europe strengthen restrictions on DeepSeek due to fears of Chinese state espionage

The advancement of DeepSeek, the Chinese artificial intelligence platform that has become popular for its performance and low cost, has generated a global wave of restrictions. Western governments fear that the service will allow the Chinese state to access confidential data of citizens and institutions, reviving geopolitical concerns and technological disputes.

The rapid growth of DeepSeek since its launch in early 2025 has attracted worldwide attention not only for its technical capabilities, but also for the associated political risks. The United States, Europe, and Asian countries have begun restricting the use of the tool in government networks, citing threats to national security and the confidentiality of information. At the heart of the controversy is Chinese legislation, which requires technology companies to share data with the government when requested.

Since January 2025, at least 17 US states have banned the use of DeepSeek in government systems. Among them are Texas, New York, Virginia, Tennessee, Iowa, and Georgia. The measure follows recommendations from a report published by the US House Select Committee on the Chinese Communist Party, which states that the application can divert data to servers in China and that its language model operates under mandatory censorship according to Chinese law.

The Pentagon, the Navy, and federal agencies such as NASA and the Department of Commerce have also blocked internal access to the platform. For military agencies, the risk is not limited to data leaks, but also to the possibility of manipulation of sensitive information.

Despite the restrictions, there is no prohibition on the personal use of the platform by citizens or private companies within the US. Still, experts indicate that distrust may discourage its commercial adoption.

Chinese data policy and legislation...The main source of concern is China's National Intelligence Law, which stipulates that companies in the country must cooperate with the government in cases of state interest. DeepSeek itself states that queries, messages, and files sent by users are stored on servers located in China.

Researchers and authorities fear that this information could be used for political espionage, economic monitoring, or social engineering, especially if accessed by intelligence agencies.

The concern about DeepSeek has gone beyond the US. South Korea, the Czech Republic, Taiwan, and Australia have blocked the chatbot on all official devices. Italy has completely banned the application due to privacy concerns. Germany has pressured Apple and Google to remove the app from their stores. Other European countries, such as France, Ireland, and Portugal, are considering similar measures.

On the other hand, the scenario is different in Africa and Latin America, where the platform has quickly become popular. For these countries, DeepSeek offers access to advanced AI at a lower cost, strengthening China's technological presence and expanding its geopolitical influence.

In the US Congress, lawmakers have proposed bills to completely ban the use of the platform on federal devices and restrict investments in Chinese AI technologies. So far, none of the initiatives have been approved, but the trend indicates that tensions in the sector are likely to increase.

The DeepSeek case reveals how AI has become a strategic field in international politics. The dispute is not limited to technological innovation: it involves data control, global influence, and digital sovereignty.

While governments try to balance privacy, competitiveness, and national security, the Chinese platform continues to grow in regions where the debate is less sensitive or where the need for technological access outweighs geopolitical concerns.

DeepSeek collects a vast range of users' personal data, raising serious concerns about privacy and the possibility of espionage, especially given China's national security laws that may require data sharing with authorities.

Data Collection Practices:

According to its own privacy policy, DeepSeek collects:

-Profile Information: Username, email, phone number, and date of birth.

-User-Provided Data: Everything the user types or uploads, including chat history, prompts, and audio inputs.

-Automated Data: IP address, device model, operating system, system language, and even keystroke patterns.

-Usage Data: Resources used and actions performed.

-Cookies and Trackers: Web beacons and other tracking technologies to monitor user behavior.

-Third-Party Data: Information from linked accounts (such as Google or Apple) and advertising partners.

Key Concerns and Accusations:

-Data Storage in China: All user data is stored on servers in China, where cybersecurity and national laws may compel the company to share information with the Chinese government if requested.

-Potential Government Espionage: Intelligence agencies, such as South Korea's, have expressed concerns that DeepSeek could give advertisers, and potentially Chinese authorities, unrestricted access to user data.

-Bans by Governments: Due to these risks, some countries and government entities (such as Italy, Taiwan, Australia, and parts of the US) have banned or restricted the use of DeepSeek in their official bodies. Microsoft has also banned its use internally.

-Data Transfer to Other Chinese Giants: Security analyses indicate that the app sends data to other Chinese companies, such as Baidu and ByteDance (owner of TikTok).

-Indefinite Retention Policy: DeepSeek's privacy policy does not specify a maximum data retention period, allowing the company to retain information for as long as it deems necessary, which increases the risk in the event of a security breach.

In short, although DeepSeek claims to have security and transparency measures, its extensive data collection practices and information storage in China raise serious privacy concerns and accusations of potential espionage.

by: Infobae

Wednesday, November 12, 2025


TECH


Tesla video shows how FSD technology can "survive" road accidents

Tesla's telemetry data reveals that vehicles with "Full Self-Driving" technology averaged 4.92 million miles between accidents over the past year, seven times better than the American national average.

Tesla published an impressive video on social media demonstrating the significant advances of its "Full Self-Driving" (FSD) autonomous driving system, accompanied by statistics that reveal a drastic reduction in the accident rate when compared to traditional human driving.

The video published by Tesla demonstrates various driving situations, both urban and otherwise, in which vehicles drive fully autonomously through complex scenarios: intersections with obstacles such as pedestrians and cyclists, a marathon, and other drivers' miscalculated overtaking maneuvers that left vehicles heading the wrong way, among other abnormal occurrences. These images seek to convey the company's confidence in the maturity of the technology, which has been continuously improved through over-the-air updates based on the billions of miles driven by the global fleet.

To reinforce this confidence, the company uses all the data collected by its connected vehicles to understand the different ways accidents occur and to develop features that help drivers mitigate or avoid them altogether. Since October 2018, Tesla has voluntarily released quarterly safety data to provide critical information to the public.

According to this telemetry data, between November 2024 and November 2025, Tesla vehicles with the FSD system active recorded an average of 4.92 million miles driven between accidents, a figure that contrasts with 2.23 million for vehicles equipped only with active safety systems, 1.01 million for vehicles without any active safety systems, and only 0.70 million miles for the national average in the United States.

These numbers mean that Teslas with FSD enabled are about seven times safer than the average vehicle on American roads. Even Teslas without active FSD, but with the brand's safety systems, prove to be significantly safer than the national average, registering more than three times the miles traveled between accidents.
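
The ratios behind these claims follow directly from the published figures; a quick check using the article's numbers:

```python
# Miles between accidents (millions), as reported in Tesla's figures.
miles_between_accidents = {
    "FSD active": 4.92,
    "Tesla, active safety only": 2.23,
    "Tesla, no active safety": 1.01,
    "US national average": 0.70,
}

baseline = miles_between_accidents["US national average"]
for label, miles in miles_between_accidents.items():
    print(f"{label}: {miles:.2f}M miles ({miles / baseline:.1f}x the US average)")
# FSD active works out to about 7.0x the national average.
```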

However, it is important to emphasize that, despite the technological advances and statistics presented by Tesla, European and Portuguese legislation still does not allow fully autonomous driving on public roads. Even in the most recent Tesla models sold in Europe, including those with the designation "Full Self-Driving Capability," the system operates in supervised mode, requiring the driver to keep their hands on the steering wheel and be attentive to the road at all times.

European regulations classify these systems as level 2 autonomy, which means they are considered advanced driver assistance systems, but not autonomous driving systems. The driver always retains full responsibility for controlling the vehicle and must be prepared to intervene at any time.

mundophone

 

DIGITAL LIFE


“The danger of AI is not the machine — it’s our laziness to think”

While the world discusses the advancement of artificial intelligence, Spanish journalist and writer Laura G. de Rivera issues a simple but powerful warning: the greatest risk lies not in the technology, but in human stupidity. Her book Slaves of the Algorithm is a manifesto against the blind dependence on automated systems — and an invitation to recover something that seems to be on the verge of extinction: critical thinking.

Imagine you decide to go out to dinner. Your partner may not know what you want to eat, but the AI does, because in the afternoon you watched a taco video on Instagram. This type of behavior, says Rivera, shows how much we cede control of our decisions to machines that only analyze data and patterns.

“If we don’t make decisions, others will make them for us,” she writes in the book. And who are these “others”? Digital platforms, technology companies, and systems that learn from our clicks, searches, and likes.

Research by psychologist Michal Kosinski of Stanford University has already shown that an algorithm can predict your preferences more accurately than your mother. It seems practical, but Rivera sees a high price for this convenience: "We lose freedom, the ability to be ourselves — and even imagination."

According to the author, we live in a reality where "we work for free for Instagram." Every photo posted, every like, and every second of scrolling feeds systems that transform our data into profit — but without us realizing it.

The problem, Rivera explains, is that we have become lazy. We no longer think in waiting rooms, we don't get bored, we don't stay alone with our ideas. "We pick up our cell phones all the time, and the moments that used to serve for reflection have been taken over by constant stimuli," she says.

Her proposal for resistance is almost ironic in its simplicity: thinking. It's not about abandoning technology, but about regaining awareness of what we do online. "Only critical thinking can defend individual freedom against algorithmic control."

Rivera believes the first step is understanding how the platforms work. “Many people don’t realize that by spending hours on TikTok, they are working for the company. Behavioral data has economic value — that’s why Google is one of the richest companies in the world, even without charging for its services.”

For her, the solution lies in digital education and transparency. Learning to read the “terms of use,” rejecting cookies when possible, and limiting the sharing of personal information are small gestures that help curb the abusive collection of data.

The real danger is not AI — it’s human complacency…“Artificial intelligence won’t do anything on its own; it’s just a sequence of zeros and ones,” says Rivera. “The real danger is our laziness.”

She criticizes what she calls the “numbing of the human will”: we accept being watched, monitored, and influenced because it’s easier to let technology think for us. “We prefer to receive orders. It’s an old fear of freedom.”

The writer quotes the philosopher Erich Fromm, author of Fear of Freedom, who already in the 20th century said that human beings fear deciding for themselves. "Today, we only replace the boss or the State with the algorithm," Rivera summarizes.

When the computer decides for you...The danger of blindly trusting automated systems is not theoretical. Studies show that people tend to believe more in an answer given by a computer than in their own intuition — even when the result is absurd.

Rivera warns of the risk of delegating critical decisions to algorithms, especially in areas such as health, public safety and justice. "When we let an AI decide, we may be handing over life-or-death issues to a system that only understands statistics."

Whistleblowers and resistance within big tech companies...The journalist recalls emblematic cases of professionals who confronted tech giants to denounce abuses. Among them:

-Edward Snowden, who revealed the mass surveillance scheme of US agencies;

-Sophie Zhang, former Facebook employee, who warned about the use of fake accounts by governments to manipulate public opinion [https://en.wikipedia.org/wiki/Sophie_Zhang_(whistleblower)];

-Timnit Gebru, fired from Google after denouncing racial and gender discrimination in algorithms;

-Guillaume Chaslot, former YouTube employee, who showed how the recommendation system pushed users towards radical content and conspiracy theories [www.linkedin.com/in/guillaume-chaslot-6774b982].

These cases, says Rivera, show that the problem is not in the machines, but in the people who control them — and in the lack of ethics of companies that prioritize engagement and profit above all else.

How to resist algorithmic manipulation...For Rivera, it's not possible to completely disconnect—but it is possible to make the platforms' work more difficult. She suggests some simple measures:

Use browsers that block tracking;

Reject cookies whenever possible;

Control the time spent on social networks;

And, above all, understand the business model behind each application.

"When you understand the game, you're no longer a pawn," she says. "Knowledge is the only form of resistance."

Creativity, empathy, and solidarity: what AI will never have...Despite the criticism, Rivera is not against technology—but she reminds us that AI will never be able to create something genuinely new or compassionate.

"A computer program cannot invent what does not exist in the data. It lacks creativity, empathy, or solidarity. These are human qualities, and they are exactly what we need to preserve."

Thinking is the new act of rebellion...Laura G. de Rivera doesn't want the reader to flee from technology—she wants them to reclaim the power to decide. For her, resisting the algorithm begins with a simple, almost banal, but revolutionary gesture: thinking before sliding a finger across the screen.

After all, as she concludes, "artificial intelligence may be powerful, but nothing is more dangerous than human stupidity when it ceases to think for itself."

mundophone

Tuesday, November 11, 2025


DIGITAL LIFE


Why is AI making so many people anxious at work?

Amid predictions of machines surpassing the human brain and warnings about a possible financial bubble, the future of artificial intelligence seems surrounded by uncertainty. Meanwhile, on the other side of the screen, workers in various sectors are dealing with a more immediate symptom: anxiety about AI.

Fear itself is not new. With each new technological wave that alters the way we produce and work, the fear of replacement reappears. But the speed of change, the feeling of lack of control, and the promises that oscillate between the fantastic and the catastrophic can make this moment particularly distressing.

In the US, the discussion about anxiety generated by AI has escalated in the face of the recent wave of layoffs that swept through large companies. According to a report by the consulting firm Challenger, Gray & Christmas, October saw the highest number of job cuts at American companies for that month in more than two decades. Cost reduction and the adoption of AI appear among the main official reasons.

Some companies have publicly mentioned artificial intelligence when justifying the layoffs. Amazon, for example, claimed that the reduction of 14,000 jobs was part of an effort to make the company "leaner" and thus better take advantage of the opportunities brought by technology.

Some question whether these cuts are actually related to AI or whether the technology has become a good justification for reducing costs. Be that as it may, the uncertainty itself is already a trigger for anxiety associated with artificial intelligence — and not just that linked to the risk of losing one's job. But is it possible to deal with this unease?

1. Where does AI anxiety come from?...Beyond the fear of being replaced, discomfort with change, pressure to adapt to something new, and concerns about the future are some of the forces that shape the anxiety generated by AI.

In a recently published article in the Harvard Business Review, workplace mental health consultant Morra Aarons-Mele points out that AI represents an unprecedented threat to white-collar jobs. However, there are doubts about how transformative the technology will be. At the top of companies, leaders claim they are under pressure to adopt it in some way.

Aarons-Mele lists some factors that help explain where this anxiety-inducing environment comes from. The first is the lack of control over change. While a few companies decide the course of humanity’s future with AI, regulation and the rest of society seem to be playing catch-up. The feeling is one of a certain powerlessness.

Then, there is a loss of meaning. If decisions and creative tasks begin to be outsourced, values such as autonomy and the construction of meaning in work tend to become empty. Meanwhile, there is the risk of creating a relationship of dependence with these tools.

2. The pressure to adapt...Another way to look at the anxiety generated by AI is through pressure: the spread of a discourse that adopting it is urgent and should have been done yesterday, at the risk of being left behind. It seems that even those who see no point in using these tools feel obliged to find a way to use them.

This idea of the inevitability of AI is reinforced by the very companies that sell these systems, with speeches from their leaders about how the world will soon no longer be the same. This is a pressure that affects not only professionals, but also businesses that have sought to adopt artificial intelligence in recent years. The results, however, do not always materialize.

In August, a report published by researchers at the Massachusetts Institute of Technology (MIT) pointed out that 95% of companies with AI pilot projects had yet to see a return on their investments. The study, among other points, showed that more than half of corporate investments in AI are concentrated in the areas of sales and marketing. The technology, however, has actually generated a return on investment (ROI) in "less glamorous" sectors such as finance, operations, and legal, which operate behind the scenes.

For Michelle Schneider, partner at the consulting firm Signal & Cipher and author of "The Professional of the Future," the lack of clarity about what really changes with AI (and when) is part of what generates anxiety.

"There's a huge demand from boards and leadership that we need to work with AI, but little clarity on how to do it. That's why I think the profound change in work won't be so quick," says Schneider, who believes that the transformation generated by AI will be radical, but also gradual. "Some experts say that artificial general intelligence (which approaches human capabilities) is knocking on the door, and others that this will only happen around the middle of the century. The truth is that nobody is sure."

3. Is it possible to reduce anxiety?...If the anxiety generated by AI is triggered by work-related uncertainties, the suggestion from the partner at Signal & Cipher is to regain a sense of autonomy. Even if the broad implications of artificial intelligence escape some kind of individual control, she reminds us that it is possible to think about how to plan and direct one's own career. Doing so, she says, helps reduce fear.

"Have you asked ChatGPT how AI will impact your career?" she suggests, as a step towards reflecting on one's own trajectory. "What I mean is that it's possible to understand some signs for the coming years in order to think about career planning based on them. Projecting something into the more distant future, however difficult it may seem, helps us make better decisions in the present."

In the corporate environment, psychologist and consultant Milena Brentan believes that a significant portion of the anxiety generated by AI can be minimized when companies approach the adoption of the technology with transparency. This includes explaining the motivations, involving teams in testing, and listening to what is working and what is not.

“Companies that are handling this best are those that address the topic clearly, explain the reasons why, and invite people to experiment. When you make this move, you remove a good part of the uncertainty about what the company is planning, including the doubt about whether a given position will continue or not,” explains the specialist in leadership development and organizational culture.

by: Juliana Causin, reporter for GLOBO in São Paulo
