Interesting paper: Deceiving Deep Neural Networks


The authors (Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter) of the paper "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition" [1] describe a new class of attacks on face recognition systems: attacks that are (1) physically realizable and (2) inconspicuous at the same time.
The authors investigate two categories of attacks: (1) dodging attacks, in which the attacker seeks to have her face misidentified as any other, arbitrary face, and (2) impersonation attacks, in which the adversary seeks to have a face recognized as a specific other face. Their approach is based on the observation that deep neural networks can be misled by mildly perturbing their inputs [2]; a sketch of that idea follows below. More: https://www.cs.cmu.edu/~sbhagava/
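To make the perturbation idea from [2] concrete, here is a minimal sketch using the fast gradient sign method (FGSM) from the adversarial-examples literature. Note that this is not the paper's actual attack: Sharif et al. optimize a perturbation confined to the pixels of printable eyeglass frames, whereas FGSM perturbs the whole image. The model, inputs, and epsilon value below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03, target=None):
    """One-step adversarial perturbation (fast gradient sign method).

    With target=None this mimics dodging: ascend the loss so the input
    moves away from its true class. With a target label it mimics
    impersonation: descend the loss so the input moves toward that class.
    Assumes pixel values in [0, 1].
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    loss_fn = nn.CrossEntropyLoss()
    if target is None:
        loss = loss_fn(logits, label)   # dodging: increase loss on the true label
        direction = 1.0
    else:
        loss = loss_fn(logits, target)  # impersonation: decrease loss on the target
        direction = -1.0
    loss.backward()
    # A small, per-pixel sign step keeps the change visually mild.
    x_adv = x + direction * epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a pretrained face classifier `net`, an image
# batch `x` in [0, 1], and integer labels `y`:
#   x_dodge = fgsm_perturb(net, x, y, epsilon=0.03)
#   x_imp   = fgsm_perturb(net, x, y, target=torch.full_like(y, 7))
```

The dodging/impersonation distinction maps directly onto the sign of the gradient step: ascending the loss pushes the input out of its own class, while descending the loss on a chosen label pulls it toward that class.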

References

[1] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 23rd ACM Conference on Computer and Communications Security (CCS), 2016.

[2] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.