
Deflecting Adversarial Attacks with Pixel Deflection

The code in this repository demonstrates that Deflecting Adversarial Attacks with Pixel Deflection (Prakash et al. 2018) is ineffective in the white-box threat model.

With an L-infinity perturbation bounded by 4/255, we generate targeted adversarial examples with a 97% success rate and reduce the classifier's accuracy to 0%.
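
One standard way to mount a white-box attack on a non-differentiable, randomized preprocessing defense like pixel deflection is BPDA (Backward Pass Differentiable Approximation): run the real defense on the forward pass, but approximate it by the identity on the backward pass so gradients flow through. The sketch below illustrates the idea with targeted L-infinity PGD; it is not the notebook's actual attack code, and model (the defended classifier) and pixel_deflect (the defense transform) are hypothetical placeholders.

import torch
import torch.nn.functional as F

def bpda_targeted_pgd(model, pixel_deflect, x, target,
                      eps=4/255, alpha=1/255, steps=100):
    """Targeted L-infinity PGD with BPDA through a non-differentiable defense."""
    x_adv = x.clone()
    for _ in range(steps):
        x_in = x_adv.detach().requires_grad_(True)
        # Forward pass: run the real (non-differentiable) defense, then
        # re-attach its output so the backward pass treats it as the identity.
        deflected = pixel_deflect(x_in.detach())
        x_def = x_in + (deflected - x_in).detach()
        loss = F.cross_entropy(model(x_def), target)
        grad = torch.autograd.grad(loss, x_in)[0]
        # Targeted attack: step *down* the loss toward the target class.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

Because pixel deflection is randomized, a practical attack would also average gradients over several samples of the transform (expectation over transformation); that loop is omitted here for brevity.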

See our note for more context and details.

Pretty pictures

Obligatory picture of a sample of adversarial examples generated against this defense.

Citation

@unpublished{cvpr2018breaks,
  author = {},
  title = {},
  year = {2018},
  url = {https://arxiv.org/abs/TODO},
}
