Tuesday, February 13, 2018







TECH






Study shows that Artificial Intelligence can be "racist" when identifying people
Facial recognition technology has evolved to the point where some commercial products can identify a person's gender from a photograph. Cross-referencing information through artificial intelligence systems is increasingly common, and warnings about errors are mounting. A new study shows that accuracy is around 99% when the photo is of a white man, but that it drops dramatically the darker the skin tone.

Joy Buolamwini, an MIT Media Lab researcher, built a database of 1,270 face images using photos of politicians selected according to their countries' rankings for gender parity, that is, countries with a large number of women in public office. The researcher submitted the images to facial recognition systems from Microsoft, IBM and Megvii, a Chinese company, and the results showed inconsistencies in how the artificial intelligence technology identified gender.

According to Joy Buolamwini's report, the error rate was less than 1% for light-skinned men, 7% for light-skinned women, 12% for dark-skinned men, and up to 35% for dark-skinned women. In general, identification of men is more accurate than identification of women, and errors increase as skin tones become darker.

[Table: facial recognition error rates by gender and skin tone]
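To make the idea of disaggregated error rates concrete, here is a minimal sketch in Python of how a classifier's mistakes can be broken down by demographic group, in the spirit of Buolamwini's audit. The field names and sample records are hypothetical illustrations, not data or code from the study itself.

from collections import defaultdict

def error_rates_by_group(records):
    # Share of wrong gender predictions for each (skin tone, true gender) group.
    errors = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        group = (r["skin_tone"], r["true_gender"])
        totals[group] += 1
        if r["predicted_gender"] != r["true_gender"]:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical records; a real audit would use a labelled benchmark of face images.
sample = [
    {"skin_tone": "lighter", "true_gender": "male", "predicted_gender": "male"},
    {"skin_tone": "darker", "true_gender": "female", "predicted_gender": "male"},
    {"skin_tone": "darker", "true_gender": "female", "predicted_gender": "female"},
]

print(error_rates_by_group(sample))

Reporting a single overall accuracy number would hide exactly the kind of gap the study found; grouping the errors this way is what exposes it.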

In 2015, the Google Photos app mistakenly identified black people as gorillas. Despite the promise to fix the system, only the word "gorillas" was removed from the photo indexing, which demonstrates how difficult it is to adjust the identification algorithms.

The issue is all the more worrying as authorities increasingly rely on image recognition systems for investigations and for security in public spaces. Police in the United States, for example, have used facial recognition solutions in their investigations since the early 2000s, with error rates reaching 15%, and several organizations have argued that this lack of rigor increases the risk of innocent people being treated as crime suspects.

Joy Buolamwini warns that technology makers need to increase rigor and reduce the algorithms' margin of error across different demographic groups, introducing greater transparency into the identification systems and the ability to verify their results.



Sapo

