The Intuition behind Adversarial Attacks on Neural Networks
ML Review
MARCH 31, 2019
By 2014, ConvNets had become powerful enough to start surpassing human accuracy on a number of visual recognition tasks. Yet that same year, a group of researchers at Google and NYU found that it was far too easy to fool ConvNets with an imperceptible but carefully constructed nudge to the input. What are adversarial attacks?
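To make the idea of a "carefully constructed nudge" concrete, here is a minimal sketch of one classic attack, the fast gradient sign method, applied to a toy logistic-regression classifier. The weights, input, and epsilon below are illustrative assumptions, not values from the article; the same principle scales up to ConvNets, where the gradient is computed by backpropagation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "network": predicts class 1 when sigmoid(w.x + b) > 0.5.
# Weights and input are made up for illustration.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([0.3, 0.8, -0.2])   # a clean input, classified as class 0
y = 0.0                          # its true label

# For this model, the gradient of the cross-entropy loss with
# respect to the input x is (sigmoid(w.x + b) - y) * w.
grad_x = (sigmoid(w @ x + b) - y) * w

# Fast gradient sign method: nudge every input coordinate by
# epsilon in the direction that increases the loss.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", sigmoid(w @ x + b))        # below 0.5 -> class 0
print("adversarial prediction:", sigmoid(w @ x_adv + b))    # above 0.5 -> class 1
print("L-inf perturbation:    ", np.max(np.abs(x_adv - x))) # exactly epsilon
```

Even though no coordinate of the input moves by more than epsilon, the model's decision flips, which is exactly the surprising behavior the researchers observed.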