“It is not enough for the pioneers of AI and ML to share their code. The industry and the world needs a new open source model where AI and ML trained engines themselves are open sourced along with the data, features and real world performance details…” Read more: https://techcrunch.com/2017/01/28/ais-open-source-model-is-closed-inadequate-and-outdated/
The authors (Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter) of the paper “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition” describe a new class of attacks on face recognition systems: attacks that are (1) physically realizable and (2) at the same time inconspicuous.
The authors investigate two categories of attacks: (1) dodging attacks, in which the attacker seeks to have her face misidentified as any other arbitrary face, and (2) impersonation attacks, in which the adversary seeks to have her face recognized as a specific other face. Their approach is based on the observation that deep neural networks can be misled by mildly perturbing their inputs. More: https://www.cs.cmu.edu/~sbhagava/
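The perturbation idea can be illustrated with a minimal sketch. This is not the authors' method (which optimizes perturbations confined to printable eyeglass frames); it is a toy fast-gradient-sign-style dodging step against a hypothetical linear-softmax "recognizer", with made-up weights, showing how a small input perturbation in the direction of the loss gradient lowers the model's confidence in the recognized identity:

```python
import numpy as np

# Hypothetical toy "face recognizer": softmax over a linear model.
# Weights and input are random placeholders, not from the paper.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 2))   # 10 input "pixels", 2 identities
x = rng.normal(size=10)        # flattened input image

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(x):
    return softmax(x @ W)

# Dodging: perturb x so the currently recognized identity becomes less likely.
c = int(np.argmax(predict(x)))     # identity the model currently outputs
p = predict(x)
onehot = np.eye(2)[c]
grad_x = W @ (p - onehot)          # d(cross-entropy loss)/dx for this model

eps = 0.5                          # perturbation magnitude
x_adv = x + eps * np.sign(grad_x)  # step that increases the loss

print(predict(x)[c], predict(x_adv)[c])  # confidence in identity c drops
```

In the paper the same gradient signal is used, but the optimization is restricted to the pixels of an eyeglass-frame region and regularized for printability, which is what makes the attack physically realizable and inconspicuous.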
M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proc. 23rd ACM Conference on Computer and Communications Security (CCS 2016), 2016.
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, and R. Fergus. Intriguing properties of neural networks. In Proc. ICLR, 2014.