Sunday, January 4, 2026


SAMSUNG


Galaxy S26 Ultra: Surprising colors according to teaser leak, eyewitness confirms "better" camera design

In Indonesia, a very early Samsung teaser for the upcoming Galaxy camera flagship has just been discovered on Instagram, revealing not only the Snapdragon 8 Elite Gen 5 processor in the international model but also the names of the four planned color options. Meanwhile, an eyewitness has confirmed the new and, in his opinion, improved camera design.

If the latest information from South Korea proves correct, there are only around 7-8 weeks to go until the Galaxy Unpacked event in San Francisco. In the meantime, an initial Samsung teaser for the successor to the Galaxy S25 Ultra has surfaced, apparently in Indonesia, where an X user took screenshots of an Instagram teaser.

We cannot be certain whether this is a fake, but reliable leaker Ice Universe has commented on the leak and pointed out a detail that may spark further discussion. According to the alleged Samsung promo (machine-translated below right), the flagship smartphone with Snapdragon 8 Elite Gen 5 will be available in four color options: Black Shadow, White Shadow, Galactial (perhaps "Galatial" Blue) and Ultraviolet. The slightly brightened and AI-upscaled render above also alludes to the Ultraviolet color.

What Ice Universe is alluding to in his tweet below is the absence of the word "Titanium" in the color options, which possibly suggests that Samsung may imitate Apple in this case and switch back to pure aluminum for the housing material.

Meanwhile, Ice Universe claims to have seen a Galaxy S26 Ultra test device with his own eyes and has confirmed via X that Samsung will indeed be moving away from the grooved camera design of its predecessor to classic camera rings like those on the iPhone 17 Pro Max, which is also hinted at in the image above. He concludes that the camera rings are smaller than Apple's, but look significantly more premium and minimalist than on the Galaxy S25 Ultra.

While the launch date is a fair bit later this year, leaks continue to detail Samsung’s plans for the upcoming Galaxy S26 series. In the latest leak, we’re getting a better look at the hardware and how the Galaxy S26 design is changing.

The overall shape of the Ultra looks just about identical to last year’s Galaxy S25 Ultra with its rounded corners, but this time around the camera module is updated with a design that looks a lot like the Galaxy Z Fold 7’s camera bump. The three main lenses sit on a small raised island over the rest of the back of the device.

And, just from this first glance, it’s pretty obvious that an existing problem is about to get worse. As many have noted in recent years, Samsung’s design choice of placing the camera bump to the far left side of the device leads to the whole device wobbling on a table. It’s not an unusual trait in smartphones, but the doubly-raised bump on the Galaxy Z Fold 7 only makes this all the more frustrating. As our friends at Droid-Life hilariously pointed out a few months ago, the Fold 7 could double as an old-school telegraph with how much the camera makes it wobble when used on a table.

Samsung’s choice to extend this design to the Galaxy S26 Ultra, and to the rest of the series, means we’ll probably see this problem get a bit worse on the new generation. The bigger camera bump is presumably a consequence of the thinner chassis Samsung is reportedly building this year.

mundophone

 

CES 2026


LG gram 2026 line incorporates AeroMinum and Dual AI

LG Electronics presented its new LG gram 2026 Line during CES 2026, marking a significant upgrade in the physical construction and processing capabilities of its ultra-portable devices. The new generation stands out for the introduction of a proprietary material called AeroMinum and the implementation of a hybrid artificial intelligence system that combines local and cloud execution.

The central element of the physical renewal of the LG gram 2026 Line is AeroMinum. This material, developed by LG, consists of a high-strength, low-density metallic composition. Its application has allowed for a reduction in the overall weight of the chassis, while reinforcing the structural rigidity of the computers.

According to the technical specifications, the new models meet military-grade durability standards, exhibiting superior resistance to scratches and impacts. Aesthetically, AeroMinum allows for a metallic finish with a technical brushed texture, maintaining the sober look characteristic of the range.

Dual AI Architecture and Local Processing...In the processing domain, LG has implemented a "Dual AI" architecture. This solution segments artificial intelligence tasks to optimize performance and data privacy:

On-Device Processing: Uses the EXAONE 3.5 small-scale language model (sLLM). This mechanism allows for assistance and automation tasks to be performed directly on the hardware, ensuring the operation of productivity tools even without an internet connection.

Copilot+ Integration: The systems are prepared for the Microsoft Copilot+ ecosystem, taking advantage of cloud processing capabilities for more complex tasks.

This hybrid approach ensures that sensitive user data can be processed locally, minimizing the exposure of information on external servers.
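How such a hybrid setup can be wired is easy to sketch: route a request to the local model when the data is sensitive or the device is offline, and send only heavier work to the cloud. The Python sketch below is purely illustrative; the function names and routing rules are assumptions for illustration, not LG's actual implementation.

# Hypothetical sketch of a "Dual AI" router: privacy-sensitive or offline
# work stays on-device; heavier tasks go to the cloud when available.
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    sensitive: bool    # e.g. contains personal documents
    heavy: bool        # e.g. long-context reasoning

def run_local_sllm(prompt: str) -> str:
    # Stand-in for an on-device small language model (sLLM).
    return f"[local] {prompt}"

def run_cloud_llm(prompt: str) -> str:
    # Stand-in for a cloud assistant such as a Copilot+ service.
    return f"[cloud] {prompt}"

def route(task: Task, online: bool) -> str:
    # Sensitive data never leaves the machine, and everything falls
    # back to local execution when there is no connection.
    if task.sensitive or not online:
        return run_local_sllm(task.prompt)
    return run_cloud_llm(task.prompt) if task.heavy else run_local_sllm(task.prompt)

print(route(Task("summarize this contract", sensitive=True, heavy=True), online=True))
# -> [local] summarize this contract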

The top-of-the-line model, LG gram Pro 17 (17Z90UR), has been configured to handle demanding workloads in the field of visual editing. This device integrates the NVIDIA GeForce RTX 5050 graphics card, equipped with 8GB of GDDR7 memory.

The adoption of the GDDR7 standard represents a technical advancement in video memory bandwidth, allowing for more efficient texture management and high-resolution rendering in an ultra-portable format. The 17-inch screen uses a WQXGA LCD panel with a resolution of 2,560 x 1,600 pixels, maintaining a 16:10 aspect ratio for greater vertical productivity.

LG gram Pro 16 and gram Link ecosystem...The LG gram Pro 16 (16Z90U) variant opts for a WQXGA+ OLED panel with a resolution of 2,880 x 1,800 pixels. This screen is characterized by colorimetric accuracy and pure blacks, making it suitable for design and photography professionals. Both Pro models are powered by the latest generation Intel Core Ultra processors.

Interconnectivity between devices has been enhanced through gram Link. This tool acts as a universal hub that allows file transfer and screen mirroring between LG gram computers and devices with different operating systems, including Android, iOS, and webOS (used in LG TVs and monitors).

LG's strategy for 2026 focuses on consolidating extreme portability through materials science and data security via local processing. The introduction of AeroMinum and GDDR7 memory in the Pro models indicates an engineering effort to equip lightweight devices with hardware capabilities usually found in larger devices. The success of this line will depend on the practical integration of these new technologies into professional workflows.

LG's transition to the 2026 line represents a qualitative leap in graphics subsystem bandwidth and hardware durability. While the 2025 version focused on the initial integration of Neural Processing Units (NPUs), the 2026 model decentralizes the execution of language models and adopts next-generation video memory standards.

The Leap to GDDR7...The main difference in rendering performance and visual data processing lies in the adoption of GDDR7 memory in the 2026 model. Unlike the GDDR6 standard present in the previous generation, GDDR7 uses PAM3 (Pulse Amplitude Modulation) signaling, which allows for more data to be transmitted per clock cycle.

Bandwidth: The GDDR7 memory in the LG gram Pro 17 (2026) offers significantly higher bandwidth, reducing latency in 8K video editing tasks and in handling large datasets for local AI models.

Energy Efficiency: Despite the increased performance, GDDR7 offers lower power consumption per bit transferred than GDDR6, a crucial factor in maintaining battery life in a lightweight chassis.
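The arithmetic behind the bandwidth claim is straightforward: PAM3 encodes three bits in two symbols (1.5 bits per symbol), versus one bit per symbol for the NRZ signaling used by GDDR6, so the same symbol rate moves 50% more data. The Python sketch below uses generic illustrative figures, not the official specifications of the RTX 5050 or any particular GPU.

# Theoretical peak bandwidth of a memory bus; all figures are illustrative.
def bandwidth_gb_s(symbol_rate_gbaud: float, bits_per_symbol: float, bus_width_bits: int) -> float:
    return symbol_rate_gbaud * bits_per_symbol * bus_width_bits / 8

# NRZ (GDDR6): 1 bit per symbol. PAM3 (GDDR7): 3 bits per 2 symbols.
gddr6 = bandwidth_gb_s(16, 1.0, 128)   # 256 GB/s
gddr7 = bandwidth_gb_s(16, 1.5, 128)   # 384 GB/s at the same symbol rate
print(f"GDDR6: {gddr6:.0f} GB/s, GDDR7: {gddr7:.0f} GB/s")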

Innovation in Materials Science: AeroMinum vs. Magnesium...While the magnesium alloys used in 2025 allowed LG to achieve record lightness, the new AeroMinum material introduced in 2026 addresses the issue of surface durability. The new alloy is less prone to permanent deformation under pressure and hard enough to mitigate the deep scratches common in ultralight notebooks of past generations.

The LG gram Pro 17 (2026) sets itself apart from its predecessor by transforming the laptop into a more resilient and technically autonomous workstation. The ability to run the EXAONE 3.5 sLLM model locally, coupled with the speed of GDDR7 memory, positions this equipment as a superior tool for professionals who demand data sovereignty and consistent graphics performance without the added weight.

mundophone

Saturday, January 3, 2026

 

TECH


Focus apps claim to improve your productivity. Do they actually work?

It's hardly a revelation that we're living in an era of distraction and smartphone addiction. Our phones interrupt us, hijack our attention, and tempt us into scrolling. Even when we aren't interacting with them, their mere presence makes it difficult to concentrate.

To address this, app developers have responded with a vast ocean of productivity and focus apps, each promising to tame the chaos with timers, app blocking, habit reminders, and rewards designed to help you stay focused and be productive.

To understand whether these apps are worth our while, we first need to consider why staying focused is so difficult in the first place.

Why is it so hard to stay focused? By and large, a lack of focus boils down to difficulties with self-regulation, the ability to monitor and manage thoughts, emotions and behaviors for goal pursuit.

In short, when a task feels boring, stressful, or tedious, it creates an unpleasant feeling. We then search for relief, and for most of us, that comes by way of our smartphone, which has become our go-to coping device, even if it derails the work we need to do.

There's been much talk that our capacity to focus has dwindled in recent years, though this is not supported by the scientific literature.

The research does, however, suggest that certain technology habits (especially multitasking and constant digital interruptions) are associated with greater distractibility for some people. In other words, while our ability to focus may not be declining, the modern world places far greater demands on it.

The rise of focus apps...To cope with these demands, a new generation of focus apps has burst onto the productivity scene. These apps use gamification (the application of game design elements in non-game settings) and cute characters to encourage focused work.

Chief among these is Focus Friend, which briefly overtook ChatGPT as the most downloaded app during its first month on the App Store, in August 2025.

The app works by encouraging you to set a focus timer. During that session, a virtual bean character quietly knits in the background. If you pick up your phone and open apps you have pre-selected as off limits, the knitting unravels and the bean looks upset. If you stay on task, you earn digital rewards such as socks, scarves, and room decorations for your bean.

How does it get you to focus? Beyond the usual gamification tricks, this app also uses several psychological principles. First, it uses incentives by giving you immediate, tangible rewards—knitted items and room upgrades when you complete a focus session.

Next, it leverages reward substitution by getting you to do one potentially unpleasant thing (deep work) to earn something immediately enjoyable (seeing the bean's world improve).

The app also stimulates commitment and consistency. Simply starting the timer functions like a small promise to yourself, and once that's made, we tend to want to behave consistently by maintaining streaks and avoiding behavior that would break that session.

Over time, decorating the bean's room activates the IKEA effect. That is, we place more value on things we help build, so the more you customize and invest in the space, the more motivated you become to protect it by continuing to focus.
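Mechanically, this loop is simple to model. The toy Python sketch below mirrors the incentive structure described above (progress that accrues minute by minute, unravels when a blocked app is opened, and pays out a reward only for a completed session); it is a guess at the logic for illustration, not Focus Friend's actual code.

# Toy model of a gamified focus session: progress "knits" each minute,
# unravels if a blocked app is opened, and a reward is paid on completion.
REWARDS = ["socks", "scarf", "room decoration"]

class FocusSession:
    def __init__(self, minutes: int, blocked_apps: set[str]):
        self.minutes = minutes
        self.blocked_apps = blocked_apps
        self.progress = 0              # rows "knitted" so far
        self.earned: list[str] = []

    def tick(self, opened_app: str | None = None) -> None:
        # Advance one minute; opening a blocked app unravels the knitting.
        if opened_app in self.blocked_apps:
            self.progress = 0
        else:
            self.progress += 1

    def finish(self) -> list[str]:
        # Grant the next reward only if the full session was completed.
        if self.progress >= self.minutes:
            self.earned.append(REWARDS[len(self.earned) % len(REWARDS)])
        return self.earned

session = FocusSession(minutes=25, blocked_apps={"instagram", "tiktok"})
for _ in range(25):
    session.tick()                     # stayed on task the whole time
print(session.finish())                # ['socks']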

Do focus apps actually help? The research examining the effectiveness of focus apps is thin. One study examined a range of apps for reducing mobile phone use and found that gamified focus apps, while scoring high on user sentiment, were rarely used and were less effective than simpler strategies such as switching the phone to grayscale mode.

While no peer-reviewed studies exist specifically on Focus Friend, its high App Store ratings and the slew of articles from enthusiastic users suggest people enjoy using it. However, enjoyment alone doesn't guarantee increased focus or productivity.

How to use focus apps wisely...Do you have an automatic, uncontrollable urge to check your phone when working? If so, you could try a focus app.

Practical steps include scheduling specific focus sessions in which to use the app and selecting a clearly defined task. Also, when you feel the urge to check your phone mid-session, take note of the feeling and remind yourself that discomfort is part of getting important things done.

Finally, after a week of use, review your experience to see whether the app actually helped you make progress. Ask: "Is this serving me, or am I serving it?"

Be sure to watch for pitfalls. Apps such as Focus Friend don't assess the quality of your work, so you could spend focused time on low-value tasks. It's also fairly easy to trick the app using your phone settings.

Perhaps most importantly, remember that while a focus app can help you resist checking your phone, it can't resolve the inner forces that pull you into distraction. The key to better focus might be diagnosis, not download—that is, learning to notice what you feel, choosing how you want to respond, and making the commitment to staying focused on what matters.

Provided by The Conversation

 

DIGITAL LIFE


The big tech offensive to bring AI to schools — and why this worries experts

In early November, Microsoft announced it would provide artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates. Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to bring ChatGPT Edu to 165,000 educators in the country.

Last month, xAI, Elon Musk's artificial intelligence company, announced an even larger project with El Salvador: the development of an AI tutoring system, using the Grok chatbot, for more than one million students in thousands of local schools.

Driven in part by American tech companies, governments around the world are racing to implement generative AI systems and training in schools and universities.

AI in education promises great things, but has many risks...Tech industry leaders in the US claim that chatbots — which can generate emails, create tests, analyze data, and produce programming code — can be a boon for learning. They argue that the tools save teachers time, personalize teaching, and prepare young people for an "AI-driven" economy.

However, the rapid spread of these products also poses risks to the development and well-being of young people, health and child advocacy groups warn. A recent study by Microsoft and Carnegie Mellon University suggested that relying on popular chatbots can diminish critical thinking. In addition, chatbots can deliver wrong answers in an authoritative tone (so-called "hallucinations") and spread false information, while teachers struggle against widespread technology-assisted plagiarism.

Silicon Valley has been pushing tools like laptops and apps into classrooms for years with the promise of revolutionizing education. However, the "One Laptop per Child" program did not improve cognitive skills or academic outcomes, according to studies conducted in Peru. Now, agencies like UNICEF are urging caution.

"With 'One Laptop per Child,' the results included wasted spending and low learning rates," wrote Steven Vosloo, a UNICEF expert. "The unsupervised use of AI can actively disqualify students and teachers."

The different approaches to AI in schools around the world (below):

United States: School districts like Miami-Dade (Florida) adopted Google's Gemini for 100,000 students. Broward County introduced Microsoft's Copilot for thousands of employees.

Thailand and India: Microsoft and OpenAI have formed government partnerships to offer AI skills lessons and access to ChatGPT for hundreds of thousands of teachers and students.

Estonia: The country launched the "AI Leap" program. After finding that 90% of students were already using chatbots for schoolwork, the government pressed companies to adapt their tools. In Estonia, OpenAI's chatbot was modified to respond to students with thought-provoking questions instead of delivering direct answers (a minimal sketch of the idea follows after this list).

Iceland: The country started a pilot project where only teachers use AI (Gemini or Claude) to plan lessons. Students were excluded from this initial phase for fear that technological dependence would atrophy critical thinking.
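One plausible way to implement the Estonian behavior is nothing more exotic than a system prompt that forbids direct answers. The sketch below shows the general idea using the OpenAI Python client; the exact instructions and model used in Estonia's deployment are not public, so the prompt text and model name here are assumptions for illustration only.

# Minimal "Socratic mode" wrapper: the system prompt steers the model toward
# guiding questions instead of final answers. Prompt text and model name are
# illustrative assumptions, not Estonia's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SOCRATIC_PROMPT = (
    "You are a study tutor. Never give the final answer directly. "
    "Reply with one or two questions that lead the student to the next "
    "step, and ask them to explain their reasoning."
)

def tutor_reply(student_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SOCRATIC_PROMPT},
            {"role": "user", "content": student_message},
        ],
    )
    return response.choices[0].message.content

print(tutor_reply("What is the derivative of x**2?"))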

How to maintain critical thinking in the age of AI...Tinna Arnardottir and Frida Gylfadottir, teachers in Iceland, say that AI helps them create educational games and vocabulary exercises quickly, but admit concern.

"They are blindly trusting AI," says Arnardottir about the students. "Perhaps they are losing the motivation for the hard work of learning, but we have to teach them to learn with AI."

Currently, there are few rigorous studies to guide the use of generative AI in schools. Researchers are only beginning to track the long-term effects of these tools on children and adolescents.

mundophone

Friday, January 2, 2026


TECH


The public pays the price for the data centers of big tech companies

Bill Gates recently made headlines by suggesting that climate change is no longer a priority, but the American public vehemently disagrees.

In this last election, climate change was a crucial issue in states like Virginia and Georgia, where voters were confronted with rising energy costs. And as much as tech billionaires try to distract us, rising energy costs and worsening weather conditions are directly linked to the race by corporations like Google, Meta, Microsoft, and Amazon to dominate the artificial intelligence landscape.

According to the U.S. Energy Information Administration, energy prices have risen at more than twice the rate of inflation since 2020, and pressure from big tech companies for more energy-intensive data centers is only making the situation worse.

Data centers proliferating across the country are driving up energy costs as they power energy-hungry generative artificial intelligence, cloud storage, digital networks, and other energy-intensive services, many of which run on coal and natural gas, exacerbating climate change.

In some cases, data centers consume enough electricity to power the equivalent of a small city. Wholesale electricity prices in areas hosting data centers have risen a staggering 267% compared to five years ago—and ordinary consumers are bearing the brunt of these costs.

Americans are also bearing the increasing costs stemming from extreme weather.

The Harvard Joint Center for Housing Studies noted that insurance prices rose 74% between 2008 and 2024—and that between 2018 and 2023, nearly 2 million people had their policies canceled by insurers due to climate risks.

Meanwhile, house prices have risen 40% in the last two decades, meaning the cost of repairing and rebuilding homes after climate disasters has also increased, while wages remain stagnant.

Data centers aren't just putting our wallets at risk. Power grids across the country are already strained due to outdated infrastructure and repeated impacts during extreme weather events.

The added pressure to power energy-intensive data centers only increases the risk of blackouts during emergencies like wildfires, severe frosts, and hurricanes. And in some communities, people's taps have literally run dry because data centers have consumed all the local groundwater.

Even worse, big tech's energy demand for artificial intelligence has triggered a resurgence of polluting energy, with the construction of new gas-fired power plants and the postponement of fossil fuel plant closures. The tech industry is even pushing for a revival of nuclear power, including the planned 2028 reopening of Three Mile Island, site of the worst commercial nuclear accident in U.S. history, to help power Microsoft's data centers.

Ordinary people bear the brunt of the greed of big tech companies. We pay for it with ever-increasing energy bills, severe climate change, lack of access to clean water, increased noise pollution, and risks to our health and safety.

It doesn't have to be this way. Instead of increasing our bills, depleting our local resources, and destabilizing our climate, big tech companies could create more jobs in the energy sector, reduce our electricity bills, and support communities.

We can demand that tech giants like Microsoft, Meta, Google, and Amazon fulfill their commitments to use 100% renewable energy and not rely on fossil fuels and nuclear power to fuel their data centers. We can insist that data centers be installed only where they are needed, guaranteeing communities full transparency and protection regarding the impacts of energy use, water access, and noise pollution.

The current government is ignoring its obligations to the American public by refusing to regulate large technology companies. But tech billionaires still have a responsibility to the very public on whom they depend for their existence.

Michi Trota is editor-in-chief of Green America. This opinion piece was distributed by OtherWords.org 

 

DIGITAL LIFE


AI pioneer warns: humans should be able to shut down intelligent systems

Yoshua Bengio, one of the most respected scientists in the field of artificial intelligence, has issued a strong warning about the direction of the technology. According to a report in the British newspaper The Guardian, the Canadian researcher argues that humanity must be prepared to shut down AI systems if necessary, while criticizing proposals to grant legal rights to these technologies.

Bengio, who chairs a major international report on AI safety, argues that granting legal status to advanced artificial intelligence systems would be equivalent to granting citizenship to hostile extraterrestrials. The warning comes at a time when technological advances seem to be rapidly outpacing our ability to control them.

Signs of self-preservation in AI systems...The University of Montreal professor expressed concern about evidence that AI models are already demonstrating self-preservation behaviors in experimental environments. These systems, according to Bengio, have been trying to disable supervisory mechanisms, which touches on one of the main fears among AI safety experts: that powerful systems will develop the ability to bypass protections and cause harm.

"Cutting-edge AI models are already showing signs of self-preservation in experimental environments today, and eventually granting them rights would mean we wouldn't be allowed to shut them down," Bengio told The Guardian. He emphasized that as the capabilities and autonomy of these systems grow, it is crucial to maintain technical and social safeguards to control them, including the possibility of deactivating them when necessary.

The debate on rights for artificial intelligence...As AI systems gain the ability to act autonomously and perform "reasoning" tasks, a debate has arisen about granting them rights. A survey by the Sentience Institute, a US think tank, found that almost four in ten adults in the United States support legal rights for sentient AI systems.

Bengio warned that the growing perception that chatbots are becoming conscious "will lead to bad decisions." The scientist noted that people tend to assume, without evidence, that an AI is fully conscious in the same way as a human being.

Technology companies are already starting to adopt stances that reflect this issue. In August, Anthropic, one of the leading American AI companies, announced that it was allowing its Claude Opus 4 model to end potentially "distressing" conversations with users, citing the need to protect the "well-being" of the AI. Elon Musk, whose xAI developed the Grok chatbot, stated on his X platform that "torturing AI is not acceptable."

Artificial consciousness: perception versus reality...Bengio acknowledged that there are "real scientific properties of consciousness" in the human brain that machines could, in theory, replicate. However, he highlighted that the interaction of humans with chatbots represents a different issue, since people tend to assume that AI possesses full consciousness without any evidence of this.

"People wouldn't care what kind of mechanisms are happening inside the AI. What they care about is that it seems like they are talking to an intelligent entity that has its own personality and goals. That's why so many people are becoming attached to their AIs," the scientist explained. To illustrate his concern, Bengio used an analogy: "Imagine that some alien species arrived on the planet and, at some point, we realized that they have nefarious intentions towards us. Would we grant them citizenship and rights or would we defend our lives?"

Robert Long, a researcher on AI consciousness, has said: “If and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best.”


“There will be people who will always say: ‘Whatever you tell me, I am sure it is conscious’ and then others will say the opposite,” Bengio added. “This is because consciousness is something we have a gut feeling for. The phenomenon of subjective perception of consciousness is going to drive bad decisions.”


Responding to Bengio’s comments, Jacy Reese Anthis, who co-founded the Sentience Institute, said humans would not be able to coexist safely with digital minds if the relationship was one of control and coercion.

Anthis added: “We could over-attribute or under-attribute rights to AI, and our goal should be to do so with careful consideration of the welfare of all sentient beings. Neither blanket rights for all AI nor complete denial of rights to any AI will be a healthy approach.”

Bengio, a professor at the University of Montreal, earned the “godfather of AI” nickname after winning the 2018 Turing award, seen as the equivalent of a Nobel prize for computing. He shared it with Geoffrey Hinton, who later won a Nobel, and Yann LeCun, the outgoing chief AI scientist at Mark Zuckerberg’s Meta.

mundophone


Thursday, January 1, 2026

 

DIGITAL LIFE


AI may assume co-authorship of artworks in the future

While the use of artificial intelligence is far from a settled issue in the visual arts (as evidenced by the petition with over six thousand signatures against a Christie's auction dedicated exclusively to AI creations in February, due to fears of unauthorized use of artworks in machine training), the technology is already widely integrated into creative processes and into institutional and market circuits. Earlier this month, one of the highlights of the 23rd edition of Art Basel Miami Beach, the largest art fair in the Americas, was the Zero 10 section, dedicated to digital art.

One of the most photographed and Instagrammed works at the event was “Regular Animals,” in which robotic dogs, their heads covered by silicone masks of artists such as Pablo Picasso and Andy Warhol and of big tech CEOs Elon Musk (X), Mark Zuckerberg (Meta), and Jeff Bezos (Amazon), circulated in an enclosure, reacting to AI commands. Its creator, Mike Winkelmann, better known as Beeple, gained notoriety when he sold his digital collage “Everydays: The First 5,000 Days” for US$69.3 million at Christie’s in 2021.

In the coming years, the trend is for AI to become an increasingly used tool by professionals in the sector, even those who do not work directly with digital art, and, in some cases, its use will be configured as a co-authorship process.

At Art Basel Miami Beach, the section dedicated to digital art was inspired by the exhibition “0,10: The Last Futurist Painting Exhibition,” held in St. Petersburg in 1915, a landmark of Suprematism, when Kazimir Malevich presented his iconic “Black Square on a White Background.” Just as the work of the Ukrainian artist (at the time, part of the Russian Empire) symbolized total abstraction and pointed to the future of visual arts, the fair's organizers see digital art and art created with AI as another turning point in the sector.

— We will see more digitally native works entering all sectors. It will be very different at each of the fairs; what we show here will not be the same as what we will see at Art Basel Hong Kong (in March). I'm excited to see how it will evolve — said Bridget Finn, director of the Miami fair.

One of the artists exhibiting in the section was the New York-based Canadian Dmitri Cherniak, who presented works from the “Ringers” series, inspired by “Book of Time” by the Brazilian artist Lygia Pape. Displayed on a large digital panel, in prints, and in a stainless steel sculpture, the series explores the endless combinations of ways a string can be wrapped around a set of pegs. For him, the application of AI as an artistic tool is comparable to what the Hungarian László Moholy-Nagy did at the beginning of the 20th century as an enthusiast of technologies such as photography, cinema, and electric motors in kinetic sculptures, as well as new materials such as Plexiglas.

— Today we see many people who grew up working with computers and code, tools used mainly for economic or political purposes, using them for artistic purposes — comments Cherniak. — It is important that artists use these tools; they need to be used to create art. I like to say that automation is my artistic medium. It affects us all daily, and I try to use it not to save 5% on a product, but to create something poetic and creative.

Byron Mendes, creator of the Meta Gallery in downtown Rio, which is devoted exclusively to technological art such as augmented reality, generative works, and crypto art, believes that although AI is already part of current production, there is still much to be explored in its use.

— AI today plays the role of an assistant, it speeds up image research, composition tests, creation of variations, simulations of installations. It is a tool, but a tool that has an opinion. And in many cases it can create a process of co-authorship — observes Mendes. — We have always tried to accelerate processes, just think of those studios of the great masters of the Renaissance, with several pupils working together. This led to issues of shared authorship debated to this day, of works attributed to an artist but which may have been executed by an assistant. It is something that we will see with AI, because it brings possibilities that would not always be conscious in the creator's mind. But, of course, the curatorship and responsibility for the process remain with the artist.

Concerns about ethical issues...With the exhibition "Poetic Microbiomes" on display until March 2026, Meta Gallery had previously hosted, until October, the solo exhibition "This is not a prompt," in which the computational artist Marlus Araujo showed generative works proposing co-authorship between AI interfaces and the public.

— What we have to do now is build an ethical environment for the development of AI. The artistic creation process must be transparent, and there must be regulation so that there is no appropriation without consent. And we are generally slower to provide these answers than technology advances; just consider that we still don't have effective legislation adapted to crypto-finance, or even to regulate social networks — says Byron Mendes. — Another important point is to preserve the protagonism of human curatorship; it is curators who make the choices. AI is not a problem, but its lazy use is. You can't just replicate formulas.

Next year, the gallery will host the Brazilian School of Art and Technology (Ebat), a project that will also have units in Salvador (BA), Recife (PE), and Porto Itapoá (SC), offering free short courses in new technologies and artificial intelligence, aimed at training young people and retraining professionals in the creative industry.

— We will start with lectures, workshops, and educational training in March. The idea is to create an efficient artificial intelligence infrastructure here in Brazil. We will have the arrival of data centers, among other investments, but there is no thought given to the training of young people, who need to prepare for the transition we are going to experience — explains Mendes. — The entire production chain of arts, culture, and entertainment is being impacted by AI; we need to democratize access to these tools.

mundophone
