TECH
Prophet of the Facebook scandal warns about facial recognition
Michal Kosinski (photo above) knows what causes controversy. A psychologist and assistant professor at Stanford University, the 37-year-old Pole described, years before it came to light, the mechanism behind the biggest crisis in Facebook's history. In a 2013 article, he drew attention to the use of likes and quizzes to decipher a person's personality. The strategy was used by political consultancy Cambridge Analytica (CA) to influence public opinion in episodes such as the 2016 US elections. From a prophet, Kosinski came to be considered an accomplice in the maneuver. Now he is issuing a new warning: abuses in the use of artificial intelligence (AI) for recognizing human faces.
"Businesses and governments are using technology to identify not only people, but also characteristics and psychological states, causing serious risks to privacy," says the researcher in an exclusive interview with the state. In recent years, he has dedicated his studies to show how facial recognition technology can be used to discriminate against people. To exemplify, he chose a thorny approach: to show that an algorithm can look at pictures of people on social networks and guess its sexual orientation with greater precision than humans would do. Face to face
In the study, the Pole used a facial recognition algorithm freely available on the internet, VGG Face. The system was fed 35,000 photos of faces found on social networks. The machine faced a challenge: presented with a pair of photos, one of a heterosexual person and one of a homosexual person, it had to point out which of the two was more likely to be gay.
The machine's hit rate was 81% for men and 71% for women; human judges scored 61% and 54%, respectively. When analyzing five photos of the same person, the machine's accuracy rose to 91% (men) and 83% (women).
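In broad strokes, this kind of study follows a standard machine learning recipe: extract a feature vector (embedding) for each face with a pretrained network, train a simple classifier on those vectors, and, for a pair of photos, pick the one with the higher predicted score. The sketch below is a minimal illustration of that recipe under stated assumptions, not the study's actual code: embed_face() is a hypothetical placeholder for a pretrained model such as VGG Face, and the logistic regression and score averaging are generic choices.

```python
# Minimal sketch of a pairwise face-classification pipeline.
# Assumption: embed_face() is a hypothetical stand-in for a pretrained
# face-recognition network (e.g., VGG Face) that maps a photo to a vector.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_face(photo):
    # Placeholder: a real system would run the photo through a pretrained
    # network and return its embedding; the details are not in the article.
    raise NotImplementedError

def train_classifier(photos, labels):
    # One embedding per photo; labels are 0/1 class indicators.
    X = np.stack([embed_face(p) for p in photos])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def pick_from_pair(clf, photo_a, photo_b):
    # The pairwise "hit rate" reported above corresponds to how often
    # the higher-scored photo of the pair is the correct one.
    X = np.stack([embed_face(photo_a), embed_face(photo_b)])
    scores = clf.predict_proba(X)[:, 1]
    return "a" if scores[0] > scores[1] else "b"

def score_person(clf, photos):
    # Averaging scores over several photos of the same person is one way
    # accuracy can rise with more images, as the article describes.
    X = np.stack([embed_face(p) for p in photos])
    return float(clf.predict_proba(X)[:, 1].mean())
```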
"The study does not try to understand what causes differences between gay men and hetero, but to show that there are mechanisms that work for it, such as the fact that psychological and social characteristics affect our appearance," explains Kosinski. "For humans, it's hard to detect, but the algorithms are very sensitive and can make accurate predictions."
Reported in 2017 by The Economist, the results of the research provoked controversy. Two US LGBTQ+ groups, the Human Rights Campaign and GLAAD, considered the study flawed and dangerous, while researchers questioned its method, language, and purpose.
For Kosinski, there are parallels between people's reactions to his Facebook study and to his facial recognition work. "When I warned about the monitoring of likes, people laughed at my results. When they found out about Cambridge Analytica, they started to take me seriously," he says. "Suddenly, they started blaming me for warning about the problem, even though I was not the author of that technology." But it's hard to ignore his connection to the case.
The researcher's shift in focus follows internet trends, such as the growth of image-based interaction. An example is Instagram, the largest photo and video social network in the world. Between 2013 and 2018, the platform, which also belongs to Facebook, grew more than tenfold and today has more than 1 billion users sharing selfies and photos of friends around the world.

Losses and damages
The Pole is unfazed by the reactions and believes he has achieved his goal: to show how easy it is to build an algorithm that can draw conclusions about a person's sexual, religious, or political orientation.
Right or wrong, systems built for this purpose can harm society if deployed. "There are startups and companies offering basic predictions for free on the internet. It's an affordable technology, and it could end up at airports or immigration checkpoints," he says.
There are even places where this is already being put into practice: in China, there are reports that the government uses facial recognition to catalog and surveil the Uighurs, a Muslim ethnic minority. Chinese citizens at large, meanwhile, are monitored for the granting of personal credit.
"It is not possible for society to become a predictive society, in which I can not have a job because I have a 72% chance of having a behavior," says Sérgio Amadeu, a professor at the UFABC. "It takes away from people, as well as from society, the ability of free will." Ethic
For AI experts interviewed by Estado, the predictive power of Kosinski's research is not surprising. "These are preliminary results. It is even possible to improve the accuracy of the algorithm," says Alexandre Chiavegatto Filho, a professor at USP and a specialist in health technology. That is not in the Pole's plans: for him, what he did was enough to shine a light on the problem. The debate over the work, however, is far from over.
"Choosing to study patterns linked to intimate subjects such as sexual orientation, based on the faces of people seems rather frightening," says Walter Carnielli, director of Unicamp's Center for Logic, Epistemology and History of Science. For Chiavegatto, USP, the Polish shot may backfire. "It is a survey that shows to Iran and other totalitarian governments that it is possible and simple to apply this technology. You have to think about the consequences. "
Abroad, the issue is already being debated: foreseeing dystopian scenarios, the city of San Francisco (USA) banned government use of facial recognition in public places two weeks ago. Kosinski, however, does not see banning as a good path for the technology. In Brazil, the closest equivalents so far are public hearings on the use of facial recognition and artificial intelligence.
Meanwhile, Kosinski prepares to sound the trumpets of a new technological apocalypse. He is currently working on a new article on facial recognition, this time trying to demonstrate how the technology can be used to estimate political views. He expects to publish the work by the end of the year, before a new presidential race starts in the US. He knows it should spark controversy, again. "If people understand the risks, that's fine: my job will have been well done."

Bruno Romani and Nilton Fukuda, Brazil