Abstract—Deep learning takes advantage of large datasets
and computationally efficient training algorithms to outperform
other approaches at various machine learning tasks. However,
imperfections in the training phase of deep neural networks
make them vulnerable to adversarial samples: inputs crafted by
adversaries with the intent of causing deep neural networks to
misclassify. In this work, we formalize the space of adversaries
against deep neural networks (DNNs) and introduce a novel class
of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
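To make the notion of an adversarial sample concrete, the following is a minimal, hypothetical sketch of gradient-guided crafting against a PyTorch classifier. The model, the saliency_perturb helper, and the eps/steps parameters are illustrative assumptions rather than the paper's own algorithm; the greedy single-feature update merely echoes the idea of exploiting the network's input-output mapping (its forward derivative).

```python
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for any trained DNN.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

def saliency_perturb(model, x, target, eps=0.1, steps=20):
    """Greedy, gradient-guided perturbation (illustrative only): at each
    step, nudge the single input feature whose gradient most increases
    the target-class logit, until the model predicts the target class."""
    x = x.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        logits = model(x)
        if logits.argmax(dim=1).item() == target:
            break  # misclassification achieved
        # Gradient of the target logit w.r.t. the input: one row of the
        # forward derivative (Jacobian) of the network's input-output map.
        grad = torch.autograd.grad(logits[0, target], x)[0]
        x = x.detach()
        i = grad.abs().argmax()                      # most salient feature
        x.view(-1)[i] += eps * grad.view(-1)[i].sign()
        x = x.clamp(0.0, 1.0)                        # keep a valid (e.g., pixel) input
    return x.detach()

x = torch.rand(1, 784)                  # stand-in input
x_adv = saliency_perturb(model, x, target=3)
```

The perturbation here is bounded by eps per step over at most steps iterations; attacks of this family trade off the size of the perturbation against the likelihood of misclassification.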