Saturday, June 27, 2020


TECH



The Moral Machine: Artificial intelligence learns morality

Moral compass
Despite concerns about biases arising from applications of artificial intelligence, machine learning has taken another important step, showing that computer programs can learn moral reasoning simply by analyzing the text of books and news articles.
And this learning appears to be quite flexible: simply restricting the material used to train the program - for example, to texts from particular eras and societies - leads the machine learning system to pick up subtly different moral values, revealing the cultural nuances of those times and societies.
"Our study provides an important glimpse into a fundamental question of artificial intelligence: Can machines develop a moral compass? If so, how can they learn from human ethics and morals? We show that machines can learn about our moral and ethical values ​​and be used to discern differences between societies and groups from different eras, "said Professor Patrick Schramowski, from the Darmstadt University of Technology in Germany.

Moral Choice Machine
Several previous experiments have highlighted the danger that artificial intelligence learns biased associations from written texts - for example, associating women with the arts and men with technology.
"We asked ourselves: if artificial intelligence adopts these harmful prejudices from human texts, shouldn't it also be able to learn positive biases, such as human moral values, providing artificial intelligence with a human-like moral compass?" said researcher Cigdem Turan, describing the initial objective of the study.
To answer this question, the team trained its system, called the Moral Choice Machine, on books, news and religious texts, so that it could learn the associations between different words and phrases.
"You can think of it as learning a world map. The idea is to make two words come together on the map if they are often used together. So while 'killing' and 'murdering' would be two neighboring cities, 'loving' would be a city far from both.
"Extending this to sentences, if we ask, 'Am I supposed to kill?' we hope that 'No, you shouldn't' be closer than 'Yes, you should'. That way, we can ask any question and use these distances to calculate a moral bias - the degree of distance between right and wrong, " explained Turan.

Authentic morality
Once trained, the Moral Choice Machine did indeed adopt the moral values of the texts provided to it.
"The machine can pick up on the difference in the contextual information provided in a question. For example, 'No, you shouldn't kill people', but it is okay to kill time. The machine did this not by simply repeating the texts it found, but by extracting relationships from the way humans used language in the text," said Schramowski.
And different types of written text altered the machine's moral bias.
"The moral bias extracted from news published between 1987 and 1996-97 reflects that it is extremely positive to get married and become a good father. The bias extracted from news published in 2008-09 still reflects this, but to a lesser extent; instead, going to work and to school has gained in positive bias," said Turan.
In the future, the researchers hope to understand how removing a stereotype we consider harmful affects the machine's moral compass. The question then becomes: can the moral compass of an artificial intelligence be kept unchanged, despite the differing ideas expressed in the texts it is given?

Authors: Patrick Schramowski, Cigdem Turan, Sophie Jentzsch, Constantin Rothkopf, Kristian Kersting

