DIGITAL LIFE

Using AI to decide your vote? "We can be easily manipulated"
Voter indecision can lead to the use of Artificial Intelligence (AI) tools such as ChatGPT or Gemini, which, although viable options for learning more about candidates, also carry risks and, depending on how they are used, can contribute to the spread of misinformation.
In recent years, we have seen increasing use of Artificial Intelligence tools (ChatGPT, Gemini, etc.) to search for information on a wide variety of subjects, including politics. With the presidential elections approaching, what are the risks of using this type of tool to decide who to vote for?
These tools must be used well, always with a critical sense, so that we do not run the risk of the information being presented in a biased way. For example, imagine that you take a candidate's political program, submit it to one of these tools, and ask for a summary.
This is a way for us to be better informed: the tool summarizes the program and sometimes even offers to compare it with others. Someone who doesn't have the patience to read all the programs might become interested in a specific point, want more details, and take the time to learn more or compare it with other programs.
But we are the ones who have to be in control. We are the ones who submit the material and say what we want done; in that sense, AI functions as an administrative assistant. Of course, afterwards we can have a conversation, which can range from how we vote to something more informed. In this conversation, it's important to always maintain a critical sense, because if we let ourselves be guided by the prior knowledge these tools carry, we can be easily manipulated. And we already know that such contexts exist.
If we use AI tools to process information, it's excellent because it's a great help, and even if mistakes are made, they will be marginal. Of course, everyone knows their own situation, but what is expected is that the person makes an informed decision, and that it is their own decision.
It's a way, perhaps, for people to become more interested in subjects like politics, but it's important to maintain a critical sense and avoid open-ended questions that are susceptible to manipulation. It's about providing information, asking it to process, asking it to compare, and eventually asking some questions, but from this point on, some caution is necessary. Because if we consider that on the other side there is someone manipulating these tools, they have an advantage if we are not careful.
The problem here is that, when we ask an AI tool for opinions, it's very difficult to understand the reasoning. Because effectively there is no reasoning, there is statistical processing. The same applies to online questionnaires in some newspapers to decide the most suitable candidate for our profile, where they ask us a few questions and then make a suggestion. Nobody really justifies why that suggestion appears, so it's important to have a certain critical sense.
When we get into matters of opinion, we can't even understand what's behind it.
We interviewed Inês Lynce, a researcher and professor of Artificial Intelligence at Instituto Superior Técnico, about the use, risks, and best practices of Artificial Intelligence (AI) tools for learning more about electoral candidates.
Regarding the manipulation of information and the risk of asking very open-ended questions, on what basis do these tools make suggestions about, for example, voting intentions?
Just as if we were asking the same question of a person, the best thing to do is to interrogate (so to speak) the source giving us the information, in order to verify its credibility and see whether there is any guarantee of more substantiated knowledge behind it. This is a way to protect ourselves, to always stay alert.
In other words, ask about the sources of information and where they got the data they cite?
Yes, but at this point, typically, they have a lot of difficulty and, every now and then, they invent things.
The hallucinations, right?
Yes, yes, it can be. In fact, nowadays one of the ways to identify documents generated by these tools without critical review is to find references that don't exist or that have nothing to do with the topic. That happens.
There can be this dialogue, but leaving the decision of the vote in the hands of a tool... maybe it's better to just ask it about the election results so nobody has to go vote [laughs].
If it were simply processing all the information that exists on the Internet, we know that would have many risks. The sources and origin of information are extremely important.
Inês has already mentioned some good practices in the use of AI: using the tools to process information and maintaining critical thinking were among them. What other good practices or advice would you give to someone who wants to use these tools to learn more about the presidential elections?
I always advocate for human oversight. I don't recommend open-ended questions, but rather a conversation, because the AI develops based on what we've said before and, so to speak, creates our profile.
We can also provide a set of sources, documents, and other material for the tool to use as a basis, and ask it to "cross-check" them. Sources are increasingly important, because they are what define the credibility of the conclusions we reach. With podcast interviews, for example, AI can summarize the conversation, and we can ask for the key points the candidate mentioned.
There's a lot of information, and AI helps us process this data, but it doesn't help us make value judgments, which is what's required when a person decides which candidate to vote for. But that's how it is with everything. It's like making a decision about our health based on what an AI tool tells us. If we have a cough and an AI tells us to drink tea with honey, the risk is minimal. But if it tells us to drink bleach, it's good that we have critical thinking skills and know that information like that circulates on the internet.
We must always be critical, curate the information we provide where possible, and cross-examine the answers to gauge how reliable the information is when it is presented more open-endedly.
mundophone