Tuesday, March 6, 2018







TECH






Image manipulation algorithm can fool machines and humans

Computer scientists from Google Brain have developed a technique that fools neural networks into misidentifying images - and the trick also works on humans.
"Adversarial" images can be used to fool both humans and computers, as Evan Ackerman reports in IEEE Spectrum. The algorithm, developed by Google Brain researchers, modifies photos so that image recognition systems misinterpret them.
In tests, a convolutional neural network (CNN) - a tool used in machine learning to analyze and identify images - was misled into thinking that, for example, a picture of a cat was actually a dog.

What is most fascinating is that humans were similarly deceived, a finding that suggests scientists are getting close to developing systems that see the world the way we do. The problem, however, is that it also means we are getting better at deceiving people.
The study has not yet been published, but is already available on the arXiv preprint server.
CNNs are actually quite easy to fool. Machine-based approaches to computer vision do not analyze objects the way you and I do. Artificial intelligence looks for patterns by meticulously analyzing every pixel in a photo and noting where each point sits within the larger image.

Then the machine matches that overall pattern against an object it has already learned, such as an elephant. Humans, on the other hand, take a more holistic approach. To identify an elephant, we look for specific physical attributes: four legs, gray skin, big ears and a trunk.
We are good at handling ambiguity and at extrapolating what might exist beyond the edges of a photo. Artificial intelligence is not so lucky.
To give you an idea of how easy it is to fool an artificial neural network: a single out-of-place pixel deceived a machine into thinking that a turtle was actually a rifle, as Japanese researchers showed last year. A few months ago, Google Brain researchers misled an artificial intelligence into thinking that a banana was a toaster simply by placing a sticker in the image. Algorithms have also mistaken skiers for dogs and a baseball for an espresso.
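To make the pattern-matching description above concrete, here is a minimal sketch in PyTorch (the article does not specify any framework or model, so the library, the pretrained ResNet and the file name are all illustrative assumptions): a trained classifier maps an image to one score per learned category and simply reports the highest one.

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet-style preprocessing: resize, crop and normalize the
# pixels into the statistics the pretrained network expects.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # any pretrained ImageNet classifier would do
model.eval()

image = preprocess(Image.open("elephant.jpg")).unsqueeze(0)  # hypothetical input file
with torch.no_grad():
    scores = model(image)           # one score per learned category
print(scores.argmax(dim=1).item())  # index of the most likely category

There is no notion of "four legs and a trunk" anywhere in this code; the verdict comes entirely from pixel-level patterns the network has associated with each category during training.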

To fool an artificial intelligence, as these examples illustrate, you just have to introduce a "perturbation" into the image, be it a pixel in the wrong place or a noise pattern that, while invisible to humans, is enough to convince a bot that a panda is a gibbon.
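The article does not describe how such a perturbation is computed, but the classic fast gradient sign method (FGSM) from the adversarial examples literature captures the idea, and is shown below as an illustrative PyTorch sketch rather than the procedure used in the Google Brain paper: nudge every pixel a tiny step in the direction that increases the classifier's error.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return `image` plus a small adversarial perturbation.

    `image` is a (1, 3, H, W) tensor with pixel values in [0, 1] and
    `true_label` is a length-1 tensor holding the correct class index;
    the names and the epsilon value are illustrative.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by epsilon in the direction that increases the loss;
    # a small epsilon keeps the change nearly invisible to a human eye.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()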
But these examples tend to involve a single image classifier, each trained on its own data set. In the new study, the Google Brain researchers set out to develop an algorithm that produces adversarial images able to mislead several systems at once.
In addition, the researchers wanted to know whether adversarial images that deceive a whole range of image classifiers could fool humans as well. The answer is yes.
To do this, they had to create more "robust" perturbations, that is, manipulations capable of fooling a range of systems, including humans. This required adding "human-meaningful features," such as altering the edges of objects, enhancing edges by adjusting contrast, modifying texture, and exploiting dark regions of the image, which can amplify the effect of the perturbation.
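A rough sketch of the ensemble part of that idea, assuming a set of PyTorch classifiers that share the same input format and label space (the hand-crafted, human-meaningful changes to edges, contrast and texture mentioned above are not reproduced here): averaging the loss over every model keeps the perturbation from exploiting the quirks of any single classifier, which is what makes it transfer.

import torch
import torch.nn.functional as F

def ensemble_perturb(models, image, true_label, epsilon=0.03, alpha=0.005, steps=10):
    """Iteratively perturb `image` so it tends to fool every model in `models`.

    A sketch of the ensemble idea only, not the exact procedure from the
    paper; pixel values are assumed to lie in [0, 1].
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Average the loss over all classifiers so the perturbation has to
        # work against every model at once, not just one.
        loss = sum(F.cross_entropy(m(adv), true_label) for m in models) / len(models)
        loss.backward()
        with torch.no_grad():
            adv = adv + alpha * adv.grad.sign()
            # Keep the total change within epsilon of the original image
            # and keep the pixel values valid.
            adv = torch.max(torch.min(adv, image + epsilon), image - epsilon)
            adv = adv.clamp(0.0, 1.0)
        adv = adv.detach()
    return adv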




In the tests, the researchers succeeded in building an adversarial image generator capable of creating files that, in some cases, fooled 10 out of 10 CNN-based machine learning models. To test their effectiveness on humans, they ran experiments in which participants viewed an unmodified photo, an adversarial photo that fooled 100 percent of the CNNs, and a photo with the perturbation layer flipped.
The participants were given very little time to process the images visually, only somewhere between 60 and 70 milliseconds, after which they were asked to identify the object in the photo. In one example, a dog was modified to resemble a cat - an adversarial image that the machines identified as a cat 100 percent of the time.
Overall, humans had more difficulty distinguishing objects in the adversarial images than in the unmodified photos. This means that the manipulations can affect human perception, too.

Making a human think that a dog is a cat by literally making the dog look like a cat does not seem especially profound. What it reveals, however, is that scientists are getting closer to building image recognition systems that process images in a way similar to how we do. In the end, this should result in more accurate image recognition systems, which is good.

The threat, however, lies in the production of fake or modified images, audio or video. The Google Brain researchers worry that adversarial images could one day be used to generate fake news and to subtly manipulate people.

"For example, an ensemble of deep models might be trained on human ratings of face trustworthiness," the authors write. "It might then be possible to generate adversarial perturbations that enhance or reduce human impressions of trustworthiness, and those perturbed images could be used in news reports or political campaigns."
A politician running for office could use this technology to tweak his face in a TV ad, making it look more trustworthy to viewers. It is like subliminal advertising, but one that targets vulnerabilities in the unconscious workings of the human brain.
The researchers also point out some more exciting possibilities, such as using these systems to turn boring images, like air traffic data or radiology scans, into something more appealing. Of course, artificial intelligence would probably make those jobs obsolete anyway.
As Ackerman points out, he is far more concerned about this kind of intrusion into how his brain perceives whether or not people are trustworthy.






Gizmodo

