>This is a super overblown issue. It's been shown that essentially every machine learning algorithm is vulnerable to adversarial examples, linear models especially; neural networks are actually more resistant than linear models are. We don't know that humans aren't vulnerable to them - no one has ever opened up a human brain and backpropagated to the inputs. Adversarial examples are astronomically unlikely to occur by chance.
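To make the "backpropagate to the inputs" point concrete, here is a minimal sketch of the classic fast-gradient-sign attack on a linear (logistic) classifier. Everything here is invented for illustration: the weights, the input, the `eps` budget, and the name `fgsm_perturb`. It shows why linear models are especially fragile: a tiny per-coordinate nudge accumulates across dimensions into a large logit shift.

```python
# Hypothetical demo: gradient-sign perturbation of a linear model's input.
import numpy as np

def fgsm_perturb(w, b, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x.

    For a logistic model p(y=1|x) = sigmoid(w.x + b) with cross-entropy
    loss, the input gradient is (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w              # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.choice([-1.0, 1.0], size=200)  # made-up weights, 200 dimensions
b = 0.0
x = rng.normal(0.0, 0.1, size=200)     # an input near the decision boundary

# Each coordinate moves by only eps = 0.05, but the logit moves by
# eps * sum(|w_i|) = 0.05 * 200 = 10 -- an imperceptible perturbation
# with a huge effect, exactly the linear-model pathology.
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=0.05)
print(w @ x + b, w @ x_adv + b)
```

The same gradient-to-the-inputs recipe is what adversarial attacks on deep networks use; the linear case just makes the arithmetic transparent.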
Generative models are only vulnerable to adversarial examples that are actually unlikely in the data distribution. They do not have patterns or filters that can be added to ordinary images to cause wild misclassifications.
The brain, as far as we know, uses generative modeling.
"Generative models are only vulnerable to adversarial examples that are actually unlikely in the data distribution. They do not have patterns or filters that can be added to ordinary images to cause wild misclassifications."
^ Actually, you are wrong on this. See the recent paper "Universal Adversarial Perturbations" (Moosavi-Dezfooli et al., CVPR 2017).
That paper deals with non-stochastic deep neural networks, and its mathematical analysis deals with discriminative classification. It doesn't deal with generative models, which model the joint probability distribution p(x, y) over data instances and class labels, rather than just returning a maximum-a-posteriori estimate from the discriminative posterior p(y|x).
We don't know that the brain uses generative models. Generative models are pretty inefficient.
Also, the original adversarial examples paper (Szegedy et al., "Intriguing properties of neural networks") found that autoencoders are vulnerable too, if somewhat more resilient. I don't see why generative models wouldn't be vulnerable.
To repeat: generative models are only vulnerable to adversarial examples that are actually unlikely in the data distribution. They do not have patterns or filters that can be added to ordinary images to cause wild misclassifications. And the brain, as far as we know, uses generative modeling.
So yeah.