World’s First Psychopath AI “Norman” Shows What Bad Training Can Do To AI
Tech leaders and researchers alike are increasingly voicing concerns about the future repercussions of artificial intelligence, largely because the way AI systems are trained can introduce problems like bias.
Researchers at the MIT Media Lab have created what they call a psychopath AI, "Norman," named after the fictional character Norman Bates from Alfred Hitchcock's 1960 horror film Psycho. The AI is designed to show how the dataset used to train an algorithm can influence its behavior.
During its training, the image captioning AI Norman was exposed to the darkest corners of Reddit. The dataset included images from an unnamed subreddit that “was dedicated to documenting and observing the disturbing reality of death.”
After that, the AI was subjected to the Rorschach test (a psychological test that uses inkblots to analyze thought disorders), in which it had to suggest a caption for each inkblot. A standard image captioning neural network (trained on the MSCOCO dataset) was shown the same abstract inkblots, and the results were compared to reveal a chilling difference.
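Norman itself was never released, but the comparison procedure is straightforward to sketch. The snippet below is a rough illustration only, assuming two publicly available captioning models from Hugging Face as stand-ins (the model names and file names here are placeholders, not the MIT models or data): it captions the same images with both networks and prints the outputs side by side.

```python
# A minimal sketch of the comparison setup, not the MIT models themselves:
# Norman was never released, so two public captioning models stand in for
# the "differently trained" networks being compared on the same images.
from PIL import Image
from transformers import pipeline

# Hypothetical stand-ins: in the real experiment, one network was trained
# on images from a disturbing subreddit (Norman) and the other on MSCOCO.
model_a = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")
model_b = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# Placeholder file names for the inkblot images shown to both models.
inkblots = ["inkblot_01.png", "inkblot_02.png"]

for path in inkblots:
    image = Image.open(path).convert("RGB")
    caption_a = model_a(image)[0]["generated_text"]
    caption_b = model_b(image)[0]["generated_text"]
    print(f"{path}\n  model A: {caption_a}\n  model B: {caption_b}")
```

Since both stand-in models were trained on ordinary captioning data, their outputs will differ far less dramatically than Norman's did from the MSCOCO baseline; the point is only the side-by-side setup, where the same input yields different captions depending on what each model was trained on.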
According to the researchers, the conclusion to draw is that we shouldn't just blame the algorithms; the training dataset, which mostly consists of human-generated content, is equally important.
So, if we fear that an AI apocalypse will arrive one day, biased data created by humans may be one of the reasons.
Recently, Microsoft CEO Satya Nadella also called for ethics and principles in the development of artificial intelligence while trying to assure people that robots won't leave us jobless in the future.
Aditya Tiwari