- Attack-Related
- Privacy-Related
- Fairness & Bias
- Machine Learning Related
- DNN Application
- Mathematics
Table of contents generated with markdown-toc
- 2018 IEEE ACCESS Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- 2020 Engineering Adversarial Attacks and Defenses in Deep Learning
- 2021 arxiv Adversarial Example Detection for DNN Models: A Review and Experimental Comparison
- 2021 arxiv Advances in adversarial attacks and defenses in computer vision: A survey
- 2014 ICLR Intriguing properties of neural networks
- 2015 ICLR FGSM Explaining and harnessing adversarial examples
- 2016 EuroS&P JSMA The Limitations of Deep Learning in Adversarial Settings
- 2016 CVPR DeepFool: a simple and accurate method to fool deep neural networks
- 2017 ICLR targeted FGSM Adversarial Machine Learning at Scale
- 2017 ICLR BIM&ILCM Adversarial examples in the physical world
- 2017 S&P C&W Towards Evaluating the Robustness of Neural Networks
- 2017 CVPR Universal adversarial perturbations
- 2018 ICLR PGD Towards Deep Learning Models Resistant to Adversarial Attacks
- 2018 IEEE TEVC One-pixel attack One pixel attack for fooling deep neural networks
- 2018 AAAI Adversarial Transformation Networks: Learning to Generate Adversarial Examples
- 2018 CVPR MI-FGSM Boosting Adversarial Attacks With Momentum
- 2018 ICML Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
- 2017 CCS Practical Black-Box Attacks against Machine Learning
- 2016 arXiv Delving into Transferable Adversarial Examples and Black-box Attacks
- 2018 ICDM Query-Efficient Black-Box Attack by Active Learning
- 2019 ICLR Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
- 2020 CVPR Boosting the Transferability of Adversarial Samples via Attention
- 2020 ICLR Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets
- 2020 NeurIPS Backpropagating Linearly Improves Transferability of Adversarial Examples
- 2021 ICCV Admix: Enhancing the Transferability of Adversarial Attacks
- 2021 CVPR Simulating Unknown Target Models for Query-Efficient Black-box Attacks
- 2022 CVPR Improving Adversarial Transferability via Neuron Attribution-Based Attacks
- 2022 IJCAI A Few Seconds Can Change Everything: Fast Decision-based Attacks against DNNs
- 2023 CVPR Improving the Transferability of Adversarial Samples by Path-Augmented Method
- 2023 ICCV Backpropagation Path Search On Adversarial Transferability
- 2023 CVPR Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks
- 2023 ICLR Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
- 2023 CVPR Towards Transferable Targeted Adversarial Examples
- 2023 CVPR StyLess: Boosting the Transferability of Adversarial Examples
- 2023 CVPR Transferable Adversarial Attacks on Vision Transformers with Token Gradient Regularization
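For orientation, a minimal PyTorch-style sketch of the gradient-based attacks that open this list (FGSM and PGD). This is an illustrative re-implementation under common defaults, not code from any listed paper; `model` is assumed to be a classifier returning logits on inputs in [0, 1].

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """Single-step FGSM: move x along the sign of the input gradient of the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Multi-step PGD: repeat signed-gradient steps and project back into the eps-ball."""
    x_orig = x.clone().detach()
    x_adv = (x_orig + torch.empty_like(x_orig).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x_orig - eps), x_orig + eps).clamp(0, 1)  # project
    return x_adv.detach()
```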
- 2015 arxiv Foveation-based Mechanisms Alleviate Adversarial Examples
- 2016 S&P Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
- 2017 ICCV Adversarial Examples for Semantic Segmentation and Object Detection
- 2017 arxiv DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
- 2017 arxiv Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
- 2018 AAAI Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
- 2019 NeurIPS Adversarial Examples are not Bugs, they are Features
- 2019 ICML Theoretically Principled Trade-off between Robustness and Accuracy
- 2020 NeurIPS Adversarial Self-Supervised Contrastive Learning
- 2020 ICML Adversarial Neural Pruning with Latent Vulnerability Suppression
- 2020 ICLR Improving adversarial robustness requires revisiting misclassified examples
- 2020 ICML Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization
- 2021 arxiv Meta Adversarial Training against Universal Patches
- 2020 CVPR Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
- 2021 ICCV Adversarial Attacks are Reversible with Natural Supervision
- 2021 ICLR Self-supervised adversarial robustness for the low-label, high-data regime
- 2021 NeurIPS When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?
- 2022 USENIX Transferring Adversarial Robustness Through Robust Representation Matching
- 2022 ICML Improving Adversarial Robustness via Mutual Information Estimation
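Many of the robust-training papers above build on PGD adversarial training: minimize the loss on worst-case perturbed inputs in a min-max loop. A schematic epoch, assuming an `attack(model, x, y, ...)` callable such as the PGD sketch earlier; hyperparameters are placeholders.

```python
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, attack, **attack_kwargs):
    """One epoch of adversarial training: craft adversarial examples on the fly
    (inner maximization) and update the model on them (outer minimization)."""
    model.train()
    for x, y in loader:
        x_adv = attack(model, x, y, **attack_kwargs)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```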
- 2017 ICLR Workshop Early Methods for Detecting Adversarial Images
- 2017 ICLR On Detecting Adversarial Perturbations
- 2017 arxiv Detecting Adversarial Samples from Artifacts
- 2017 ICCV SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
- 2017 CCS MagNet: a Two-Pronged Defense against Adversarial Examples
- 2018 ICLR Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
- 2018 arxiv Detecting Adversarial Perturbations with Saliency
- 2018 NDSS Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
- 2019 NDSS NIC: Detecting Adversarial Samples with Neural Network Invariant Checking
- 2019 arxiv Model-based Saliency for the Detection of Adversarial Examples
- 2020 IJCNN Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics
- 2020 arxiv Detection Defense Against Adversarial Attacks with Saliency Map
- 2020 arxiv RAID: Randomized Adversarial-Input Detection for Neural Networks
- 2020 AAAI ML-LOO: Detecting Adversarial Examples with Feature Attribution
- 2021 Springer Adversarial example detection based on saliency map features
- 2020 KDD Interpretability is a Kind of Safety: An Interpreter-based Ensemble for Adversary Defense
- 2020 SPAI Stateful Detection of Black-Box Adversarial Attacks
- 2021 arxiv ExAD: An Ensemble Approach for Explanation-based Adversarial Detection
- 2021 AAAI Detecting Adversarial Examples from Sensitivity Inconsistency of Spatial-Transform Domain
- 2021 ICML Workshop Detecting AutoAttack Perturbations in the Frequency Domain
- 2021 IJCNN SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain
- 2021 MM Workshop Frequency Centric Defense Mechanisms against Adversarial Examples
- 2022 USENIX Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks
- 2022 From Spatial to Spectral Domain, a New Perspective for Detecting Adversarial Examples
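Several of the detectors above (Feature Squeezing in particular) flag inputs whose prediction shifts too much under a simple input "squeezer" such as bit-depth reduction. A minimal sketch; the detection threshold would in practice be calibrated on clean validation data.

```python
import torch
import torch.nn.functional as F

def reduce_bit_depth(x, bits=4):
    """Squeeze pixel precision: keep only `bits` bits per channel (x in [0, 1])."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

@torch.no_grad()
def flag_adversarial(model, x, threshold=0.5, bits=4):
    """Flag inputs whose softmax output moves by more than `threshold` (L1) after squeezing."""
    p_raw = F.softmax(model(x), dim=1)
    p_squeezed = F.softmax(model(reduce_bit_depth(x, bits)), dim=1)
    return (p_raw - p_squeezed).abs().sum(dim=1) > threshold
```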
- 2016 CVPR A study of the effect of JPG compression on adversarial images
- 2017 arxiv Mitigating adversarial effects through randomization
- 2018 CVPR Defense against Universal Adversarial Perturbations
- 2018 CVPR Deflecting Adversarial Attacks with Pixel Deflection
- 2018 CVPR Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
- 2019 ICCV CIIDefence: Defeating Adversarial Attacks by Fusing Class-specific Image Inpainting and Image Denoising
- 2020 CVPR A Self-supervised Approach for Adversarial Robustness
- 2021 ICCV Removing Adversarial Noise in Class Activation Feature Space
- 2021 ICML Towards Defending against Adversarial Examples via Attack-Invariant Features
- 2022 CVPR Workshop SymDNN: Simple & Effective Adversarial Robustness for Embedded Systems
- 2022 ICLR Reverse Engineering of Imperceptible Adversarial Image Perturbations
- 2022 ICML Modeling Adversarial Noise for Adversarial Training
- 2018 NDSS TextBugger: Generating Adversarial Text Against Real-world Applications
- 2018 ACL HotFlip: White-Box Adversarial Examples for Text Classification
- 2019 S&P Intriguing Properties of Adversarial ML Attacks in the Problem Space
- 2019 IJCNN Adversarial Attacks on Deep Neural Networks for Time Series Classification
- 2020 AAAI Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
- 2019 ICML Certified Adversarial Robustness via Randomized Smoothing
- 2019 NeurIPS Certified Adversarial Robustness with Additive Noise
- 2020 NIPS Denoised Smoothing: A Provable Defense for Pretrained Classifiers
- 2022 ICLR How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
- 2022 ICLR On the Certified Robustness for Ensemble Models and Beyond
- 2022 ICML Intriguing Properties of Input-Dependent Randomized Smoothing
- 2023 ICLR (Certified!!) Adversarial Robustness for Free!
- 2023 ICLR DensePure: Understanding Diffusion Models towards Adversarial Robustness
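A simplified sketch of the prediction side of randomized smoothing (Cohen et al., listed above): classify many Gaussian-noised copies of an input and take the majority vote. The certification step (Clopper-Pearson bound on the top-class probability) is omitted, and sample counts are illustrative.

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n_samples=1000, batch=100, num_classes=10):
    """Majority-vote prediction of the smoothed classifier for a single image x of shape (C, H, W)."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    remaining = n_samples
    while remaining > 0:
        b = min(batch, remaining)
        noisy = x.unsqueeze(0).repeat(b, 1, 1, 1) + sigma * torch.randn(b, *x.shape)
        preds = model(noisy).argmax(dim=1)
        counts += torch.bincount(preds.cpu(), minlength=num_classes)
        remaining -= b
    return counts.argmax().item()
```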
- 2020 TROJANZOO: Everything you ever wanted to know about neural backdoors (but were afraid to ask)
- 2020 Backdoor Learning: A Survey
- 2017 Arxiv BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
- 2018 NDSS Trojaning Attack on Neural Networks
- 2018 CCS Model-Reuse Attacks on Deep Learning Systems
- 2019 CCS Latent Backdoor Attacks on Deep Neural Networks
- 2020 CCS A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
- 2020 AAAI Hidden Trigger Backdoor Attacks
- 2020 NeurIPS Input-Aware Dynamic Backdoor Attack
- 2020 arxiv Dynamic Backdoor Attacks Against Machine Learning Models
- 2020 arxiv Backdoor Attacks on the DNN Interpretation System
- 2021 USENIX Security Blind Backdoors in Deep Learning Models
- 2021 ICCV Invisible Backdoor Attack with Sample-Specific Triggers
- 2021 Infocom Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks
- 2021 ICLR WaNet -- Imperceptible Warping-based Backdoor Attack
- 2022 IJCAI Data-Efficient Backdoor Attacks
- 2022 AAAI Hibernated Backdoor: A Mutual Information Empowered Backdoor Attack to Deep Neural Networks
- 2022 CVPR Backdoor Attacks on Self-Supervised Learning
- 2022 CVPR BppAttack: Stealthy and Efficient Trojan Attacks against Deep Neural Networks via Image Quantization and Contrastive Adversarial Learning
- 2022 ICML Neurotoxin: Durable Backdoors in Federated Learning
- 2023 Infocom Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing
- 2018 Arxiv Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
- 2020 arxiv Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks
- 2020 CVPR TBT: Targeted Neural Network Attack with Bit Trojan
- 2020 CIKM Can Adversarial Weight Perturbations Inject Neural Backdoors?
- 2020 SIGKDD An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
- 2021 ICSE DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection
- 2021 ICLR WORKSHOP Subnet Replacement: Deployment-stage backdoor attack against deep neural networks in gray-box setting
- 2020 PMLR How To Backdoor Federated Learning
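A toy sketch of BadNets-style poisoning (the supply-chain attacks above): stamp a small trigger patch on a fraction of the training images and relabel them to the attacker's target class, so the trained model misclassifies any triggered input. Trigger shape, location, and poisoning rate are arbitrary illustrative choices.

```python
import torch

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, patch=3, trigger_value=1.0):
    """Return copies of (images, labels) with a white corner patch stamped on a random
    subset and those labels flipped to target_class. images: (N, C, H, W) in [0, 1]."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * len(images))
    idx = torch.randperm(len(images))[:n_poison]
    images[idx, :, -patch:, -patch:] = trigger_value  # stamp the trigger
    labels[idx] = target_class                        # attacker-chosen label
    return images, labels, idx
```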
- 2018 RAID Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
- 2018 NeurIPS Spectral Signatures in Backdoor Attacks
- 2019 S&P Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
- 2019 CCS ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation
- 2019 IJCAI DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks
- 2019 AAAI Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
- 2019 ACSAC STRIP: A Defence Against Trojan Attacks on Deep Neural Networks
- 2020 ECCV One-pixel Signature: Characterizing CNN Models for Backdoor Detection
- 2020 ICDM TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems
- 2020 ACSAC Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network
- 2021 ICLR Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks
- 2021 NeurIPS Anti-Backdoor Learning: Training Clean Models on Poisoned Data
- 2021 NeurIPS Adversarial Neuron Pruning Purifies Backdoored Deep Models
- 2022 USENIX Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
- 2017 ICML Understanding Black-box Predictions via Influence Functions
- 2018 NeurIPS Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
- 2021 NeurIPS Manipulating SGD with Data Ordering Attacks
- 2022 AAAI CLPA: Clean-Label Poisoning Availability Attacks Using Generative Adversarial Nets
- 2022 USENIX PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
- 2014 ISCA Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors
- 2019 USENIX Terminal brain damage: Exposing the graceless degradation in deep neural networks under hardware fault attacks
- 2020 USENIX DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
- 2015 CCS Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
- 2017 CCS Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
- 2019 NeurIPS Deep Leakage from Gradients
- 2020 Arxiv iDLG: Improved Deep Leakage from Gradients
- 2020 USENIX Security Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning
- 2020 CVPR Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion
- 2021 CVPR See through Gradients: Image Batch Recovery via GradInversion
- 2021 Journal of Computer Research and Development Privacy Risk Assessment of General-Purpose Deep Learning Language Models
- 2022 USENIX Security Theory-Oriented Deep Leakage from Gradients via Linear Equation Solver
- 2022 Arxiv Reconstructing Training Data from Trained Neural Networks
- 2022 ICLR Label Leakage and Protection in Two-party Split Learning
- 2022 ICLR Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models
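The gradient-leakage line of work above (Deep Leakage from Gradients and successors) reconstructs training data by optimizing a dummy input until its gradients match the gradients a victim shared. A compact sketch assuming a single sample with a known label and Adam instead of the paper's L-BFGS; shapes and step counts are placeholders.

```python
import torch
import torch.nn.functional as F

def dlg_reconstruct(model, shared_grads, label, input_shape, steps=300, lr=0.1):
    """Optimize a dummy image so its parameter gradients match the leaked `shared_grads`."""
    dummy = torch.randn(1, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), label.view(1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grad_diff = sum(((g - sg) ** 2).sum() for g, sg in zip(grads, shared_grads))
        grad_diff.backward()   # gradient of the matching loss w.r.t. the dummy image
        opt.step()
    return dummy.detach()
```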
- 2017 S&P Membership Inference Attacks against Machine Learning Models
- 2018 Arxiv Understanding Membership Inferences on Well-Generalized Learning Models
- 2018 CSF Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting
- 2018 CCS Machine Learning with Membership Privacy using Adversarial Regularization
- 2019 NDSS ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
- 2019 CCS Privacy Risks of Securing Machine Learning Models against Adversarial Examples
- 2019 CCS MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples
- 2019 S&P Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
- 2019 NeurIPS Workshop Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries
- 2019 ICML White-box vs Black-box: Bayes Optimal Strategies for Membership Inference
- 2020 USENIX Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference
- 2021 CCS When Machine Unlearning Jeopardizes Privacy
- 2021 ICML Label-Only Membership Inference Attacks
- 2021 ICCV Membership Inference Attacks are Easier on Difficult Problems
- 2021 ICML When Does Data Augmentation Help With Membership Inference Attacks?
- 2021 CCS Membership Leakage in Label-Only Exposures
- 2021 AAAI Membership Privacy for Machine Learning Models Through Knowledge Transfer
- 2021 USENIX Systematic Evaluation of Privacy Risks of Machine Learning Models
- 2021 Arxiv Source Inference Attacks in Federated Learning
- 2021 CVPR On the Difficulty of Membership Inference Attacks
- 2021 AIES On the Privacy Risks of Model Explanations
- 2021 AsiaCCS Membership Feature Disentanglement Network
- 2022 Arxiv Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
- 2022 ECCV Semi-Leak: Membership Inference Attacks Against Semi-supervised Learning
- 2022 TDSC Membership Inference Attacks against Machine Learning Models via Prediction Sensitivity
- 2022 USENIX Membership Inference Attacks and Defenses in Neural Network Pruning
- 2022 S&P Membership Inference Attacks From First Principles
- 2022 NeurIPS M4I: Multi-modal Models Membership Inference
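Most membership-inference entries above exploit the fact that models are more confident on training members than on unseen data. A minimal confidence-threshold baseline, with the threshold calibrated on known non-members; real attacks (shadow models, likelihood-ratio tests) are considerably more involved.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def true_label_confidence(model, x, y):
    """Per-sample probability the model assigns to the true label; higher suggests 'member'."""
    probs = F.softmax(model(x), dim=1)
    return probs[torch.arange(len(y)), y]

@torch.no_grad()
def infer_membership(model, x, y, nonmember_x, nonmember_y, fpr=0.05):
    """Flag samples whose true-label confidence exceeds the (1 - fpr) quantile of non-members."""
    threshold = torch.quantile(true_label_confidence(model, nonmember_x, nonmember_y), 1 - fpr)
    return true_label_confidence(model, x, y) > threshold
```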
- 2017 CCS Machine Learning Models that Remember Too Much
- 2022 ICML On the Difficulty of Defending Self-Supervised Learning against Model Extraction
- 2019 NAACL On Measuring Social Biases in Sentence Encoders
- 2020 AAAI On Measuring and Mitigating Biased Inferences of Word Embeddings
- 2020 EMNLP CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
- 2020 ACL Towards Debiasing Sentence Representations
- 2021 ACL StereoSet: Measuring stereotypical bias in pretrained language models
- 2021 ACL Probing Toxic Content in Large Pre-Trained Language Models
- 2021 ICML Towards Understanding and Mitigating Social Biases in Language Models
- 2021 TransACL Self-Diagnosis and Self-Debiasing: A Proposal for Reducing Corpus-Based Bias in NLP
- 2021 EACL Debiasing Pre-trained Contextualised Embeddings
- 2021 EMNLP Sustainable Modular Debiasing of Language Models
- 2021 EMNLP Mitigating Language-Dependent Ethnic Bias in BERT
- 2021 ICLR FairFil: Contrastive Neural Debiasing Method for Pretrained Text Encoders
- 2022 ACL Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts
- 2022 ACL An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models
- 2022 EMNLP MABEL: Attenuating Gender Bias using Textual Entailment Data
- 2022 EMNLP Debiasing Pretrained Text Encoders by Paying Attention to Paying Attention
- 2019 AsiaCCS IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary
- 2019 CVPR Sensitive-Sample Fingerprinting of Deep Neural Networks
- 2020 Computer Communications AFA: Adversarial fingerprinting authentication for deep neural networks
- 2020 ACSAC Secure and Verifiable Inference in Deep Neural Networks
- 2021 ESORICS TAFA: A Task-Agnostic Fingerprinting Algorithm for Neural Networks
- 2021 ICLR Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
- 2021 ISCAS Fingerprinting Deep Neural Networks - a DeepFool Approach
- 2021 ISSTA ModelDiff: Testing-Based DNN Similarity Comparison for Model Reuse Detection
- 2021 BMVC Intrinsic Examples: Robust Fingerprinting of Deep Neural Networks
- 2021 IJCAI Characteristic Examples: High-Robustness, Low-Transferability Fingerprinting of Neural Networks
- 2021 Journal of Computer Research and Development An Evasion Algorithm to Fool Fingerprint Detector for Deep Neural Networks
- 2022 KDD MetaV: A Meta-Verifier Approach to Task-Agnostic Model Fingerprinting
- 2022 S&P Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models
- 2022 USENIX Teacher Model Fingerprinting Attacks Against Transfer Learning
- 2022 CVPR Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
- 2022 IJCAI MetaFinger: Fingerprinting the Deep Neural Networks with Meta-training
- 2022 ICIP Neural network fragile watermarking with no model performance degradation
- 2022 TIFS A DNN Fingerprint for Non-Repudiable Model Ownership Identification and Piracy Detection
- 2022 IEEE TIFS Your Model Trains on My Data? Protecting Intellectual Property of Training Data via Membership Fingerprint Authentication
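The fingerprinting papers above typically verify ownership by querying a suspect model on a set of fingerprint inputs (e.g., conferrable adversarial examples) and checking how often it reproduces the owner model's predictions. A schematic matching-rate check; the decision threshold is a placeholder.

```python
import torch

@torch.no_grad()
def fingerprint_match_rate(suspect_model, fingerprint_inputs, expected_labels):
    """Fraction of fingerprint inputs on which the suspect model agrees with the owner's predictions."""
    preds = suspect_model(fingerprint_inputs).argmax(dim=1)
    return (preds == expected_labels).float().mean().item()

def verify_ownership(suspect_model, fingerprint_inputs, expected_labels, threshold=0.9):
    """Claim a stolen or derived model if the match rate exceeds a calibrated threshold."""
    return fingerprint_match_rate(suspect_model, fingerprint_inputs, expected_labels) >= threshold
```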
- 2017 ICMR Embedding Watermarks into Deep Neural Networks
- 2018 USENIX Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring
- 2019 ASPLOS DeepSigns: An End-to-End Watermarking Framework for Ownership Protection of Deep Neural Networks
- 2019 ICASSP Attacks on Digital Watermarks for Deep Neural Networks
- 2020 AsiaCCS Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
- 2021 USENIX Entangled Watermarks as a Defense against Model Extraction
- 2021 WWW RIGA: Covert and Robust White-Box Watermarking of Deep Neural Networks
- 2021 KSEM Fragile Neural Network Watermarking with Trigger Image Set
- 2022 arxiv AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
- 2022 AAAI DeepAuth: A DNN Authentication Framework by Model-Unique and Fragile Signature Embedding
- 2022 Oakland S&P SoK: How Robust is Image Classification Deep Neural Network Watermarking?
- 2022 Oakland S&P Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models
- 2023 Arxiv The Stable Signature: Rooting Watermarks in Latent Diffusion Models
- 2020 NeurIPS Workshop Open-sourced Dataset Protection via Backdoor Watermarking
- 2020 ICML Radioactive data: tracing through training
- 2019 NeurIPS Making AI Forget You: Data Deletion in Machine Learning
- 2020 MICCAI Have you forgotten? A method to Assess If Machine Learning Models Have Forgotten Data
- 2020 CVPR Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks
- 2020 ECCV Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations
- 2021 S&P Oakland Machine Unlearning
- 2021 AAAI Amnesiac Machine Learning
- 2021 MICCAI EMA: Auditing Data Removal from Trained Model
- 2020 ICML Certified Data Removal from Machine Learning Models
- 2022 TDSC Learn to Forget: Machine Unlearning via Neuron Masking
- 2022 AAAI PUMA: Performance Unchanged Model Augmentation for Training Data Removal
- 2022 AAAI Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
- 2022 Arxiv A Survey of Machine Unlearning
- 2022 Arxiv Zero-shot machine unlearning
- 2022 ECCV Learning with Recoverable Forgetting
- 2022 USENIX On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning
- 2014 NIPS On the Number of Linear Regions of Deep Neural Networks
- 2020 PNAS Overparameterized neural networks implement associative memory
- 2019 NIPS Superposition of many models into one
- 2020 ICLR Once-for-All: Train One Network and Specialize it for Efficient Deployment
- 2020 arxiv On Hiding Neural Networks Inside Neural Networks
- 2021 arxiv Recurrent Parameter Generators
- 2021 AAAI TransTailor: Pruning the Pre-trained Model for Improved Transfer Learning
- 2020 NeurIPS Supervised Contrastive Learning
- 2017 ICLR Loss-aware Binarization of Deep Networks
- 2018 NIPS Scalable Methods for 8-bit Training of Neural Networks
- 2018 CVPR Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
- 2018 Google White Paper Quantizing deep convolutional networks for efficient inference: A whitepaper
- 2018 ICLR Loss-aware Weight Quantization of Deep Networks
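The quantization entries above share the same core primitive, uniform affine quantization: map float values onto an integer grid with a scale and zero-point, then dequantize for reference. A minimal post-training sketch for a single tensor (per-tensor asymmetric quantization); purely illustrative.

```python
import torch

def quantize_affine(x, num_bits=8):
    """Asymmetric uniform quantization of a float tensor to `num_bits` integer levels."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = (qmin - x.min() / scale).round().clamp(qmin, qmax)
    q = (x / scale + zero_point).round().clamp(qmin, qmax)
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Map integer codes back to (approximate) float values."""
    return (q - zero_point) * scale
```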
- 2017 CVPR Spatially Adaptive Computation Time for Residual Networks
- 2018 ICLR Multi-Scale Dense Networks for Resource Efficient Image Classification
- 2018 ECCV SkipNet: Learning Dynamic Routing in Convolutional Networks
- 2019 ICML Shallow-Deep Networks: Understanding and Mitigating Network Overthinking
- 2020 NeurIPS BERT Loses Patience: Fast and Robust Inference with Early Exit
- 2020 ICML PoWER-BERT: Accelerating BERT Inference via Progressive Word-vector Elimination
- 2021 PAMI Dynamic Neural Networks: A Survey
- 2021 IEEE TETC Fully Dynamic Inference with Deep Neural Networks
- 2022 AAAI ReX: an Efficient Approach to Reducing Memory Cost in Image Classification
- 2022 KDD Learned Token Pruning for Transformers
- 2022 ACL Transkimmer: Transformer Learns to Layer-wise Skim
- 2023 EACL A Survey on Dynamic Neural Networks for Natural Language Processing
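Early-exit (multi-exit) models such as Shallow-Deep Networks and patience-based BERT above attach intermediate classifiers and stop as soon as one is confident enough. A schematic confidence-thresholded forward pass for a batch of one; `blocks` and `exit_heads` are illustrative module lists.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def early_exit_forward(blocks, exit_heads, x, threshold=0.9):
    """Run blocks sequentially; return (prediction, exit_index) at the first confident exit."""
    h = x
    for depth, (block, head) in enumerate(zip(blocks, exit_heads)):
        h = block(h)
        probs = F.softmax(head(h), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:      # assumes batch size 1
            return pred.item(), depth
    return pred.item(), depth             # fell through to the deepest exit
```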
- 2020 ICLR Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
- 2020 CVPR ILFO: Adversarial Attack on Adaptive Neural Networks
- 2021 ICLR A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference
- 2021 IEEE IOTJ DefQ: Defensive Quantization against Inference Slow-down Attack for Edge Computing
- 2021 Arxiv Fingerprinting Multi-exit Deep Neural Network Models via Inference Time
- 2022 CCS Auditing Membership Leakages of Multi-Exit Networks
- 2022 ICSE EREBA: Black-box Energy Testing of Adaptive Neural Networks
- 2022 FSE NMTSloth: Understanding and Testing Efficiency Degradation of Neural Machine Translation Systems
- 2022 CVPR NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models
- 2014 ECCV Visualizing and Understanding Convolutional Networks
- 2014 ICLR Workshop Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
- 2016 CVPR Learning Deep Features for Discriminative Localization
- 2017 ICCV Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
- 2018 BMVC RISE: Randomized Input Sampling for Explanation of Black-box Models
- 2019 NeurIPS Full-Gradient Representation for Neural Network Visualization
- 2020 IJCNN Black-Box Saliency Map Generation Using Bayesian Optimisation
- 2021 CVPR Black-box Explanation of Object Detectors via Saliency Maps
- 2021 Arxiv Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability
- 2022 ECCV SESS: Saliency Enhancing with Scaling and Sliding
- 2022 AAAI Interpretable Generative Adversarial Networks
- 2017 Arxiv The (Un)reliability of saliency methods
- 2018 NeurIPS Sanity Checks for Saliency Maps
- 2019 NeurIPS On the (In)fidelity and Sensitivity for Explanations
- 2022 ICLR Workshop Saliency Maps Contain Network "Fingerprints"
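A compact Grad-CAM sketch in the spirit of the CAM/Grad-CAM entries above: weight the target convolutional layer's activations by the spatially pooled gradients of the class score. Hook placement and the choice of `target_layer` are up to the caller; this is illustrative, not any paper's reference code.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, x, class_idx):
    """Return an (H, W) relevance map for `class_idx`; target_layer is a conv module inside model."""
    acts, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        model.zero_grad()
        model(x)[0, class_idx].backward()
        weights = grads[0].mean(dim=(2, 3), keepdim=True)          # pooled gradients, (1, C, 1, 1)
        cam = F.relu((weights * acts[0]).sum(dim=1)).squeeze(0)    # weighted channel sum, (H, W)
        cam = cam / (cam.max() + 1e-8)
    finally:
        h1.remove(); h2.remove()
    return cam.detach()
```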
- 2019 AAAI Interpretation of Neural Networks Is Fragile
- 2019 ICCV Fooling Network Interpretation in Image Classification
- 2019 NeurIPS Fooling Neural Network Interpretations via Adversarial Model Manipulation
- 2019 NeurIPS Explanations can be manipulated and geometry is to blame
- 2020 USENIX Interpretable Deep Learning under Fire
- 2020 arxiv A simple defense against adversarial attacks on heatmap explanations
- 2022 arxiv Defense Against Explanation Manipulation
- 2017 NeurIPS Attention Is All You Need
- 2018 ICML Image Transformer
- 2020 ECCV End-to-End Object Detection with Transformers
- 2020 ICLR An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
- 2021 Arxiv TransReID: Transformer-based Object Re-Identification
- 2017 ICLR Workshop Adversarial Attacks on Neural Network Policies
- 2017 MLDM Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
- 2017 IJCAI Tactics of Adversarial Attack on Deep Reinforcement Learning Agents
- 2018 CSCS Sequential Attacks on Agents for Long-Term Adversarial Goals
- 2020 S&P Workshop On the Robustness of Cooperative Multi-Agent Reinforcement Learning
- 2020 ICML Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
- 2019 CVPR workshop Bag of Tricks and A Strong Baseline for Deep Person Re-identification
- 2015 ICCV Scalable Person Re-identification: A Benchmark
- 2016 ECCV workshop Performance Measures and a Data Set for Multi-Target, Multi-Camera Tracking
- 2016 arxiv PersonNet: Person Re-identification with Deep Convolutional Neural Networks
- 2018 ECCV Adversarial Open-World Person Re-Identification
- 2019 PAMI Adversarial Metric Attack and Defense for Person Re-identification
- 2019 ICCV advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns
- 2020 CVPR Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
- 2020 IEEE ACCESS An Effective Adversarial Attack on Person Re-Identification in Video Surveillance via Dispersion Reduction
- 2020 ECCV Adversarial T-shirt! Evading Person Detectors in A Physical World
- 2021 Arxiv SoK: Anti-Facial Recognition Technology
- 2016 CCS Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
- 2019 ISVC DeepPrivacy: A Generative Adversarial Network for Face Anonymization
- 2019 CVPR Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
- 2019 IMWUT VLA: A Practical Visible Light-based Attack on Face Recognition Systems in Physical World
- 2020 CVPR Workshop Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
- 2020 USENIX Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
- 2021 ICCV Towards Face Encryption by Generating Adversarial Identity Masks
- 2021 ICLR LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
- 2021 ICLR Unlearnable Examples: Making Personal Data Unexploitable
- 1999 SIGIR Probabilistic Latent Semantic Indexing
- 2007 NeurIPS Probabilistic Matrix Factorization
- 2017 AAAI SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient
- 2017 ICML Toward Controlled Generation of Text
- 2014 Convex Optimization: Algorithms and Complexity
- 2019 AAAI AutoZOOM: Autoencoder-based Zeroth Order Optimization Method for Attacking Black-box Neural Networks
- 2014 NeurIPS Learning, Regularization and Ill-Posed Inverse Problems
- 2015 ICML Workshop Norm-Based Capacity Control in Neural Networks
- 2020 CVPR Counterfactual Samples Synthesizing for Robust Visual Question Answering
- 2020 CVPR Towards Causal VQA: Revealing and Reducing Spurious Correlations by Invariant and Covariant Semantic Editing
- 2020 CVPR Visual Commonsense R-CNN
- 2021 CVPR Counterfactual VQA: A Cause-Effect Look at Language Bias
- 2020 NeurIPS Investigating Gender Bias in Language Models Using Causal Mediation Analysis
- 2021 NeurIPS A Causal Lens for Controllable Text Generation
- 2022 ACL Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View
- 2022 EMNLP Mitigating Spurious Correlation in Natural Language Understanding with Counterfactual Inference
- 1998 Proceedings of the IEEE Blind signal separation: statistical principles
- 2018 ICML MINE: Mutual Information Neural Estimation
- 2019 ICLR Learning deep representations by mutual information estimation and maximization
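The last two entries estimate mutual information with a neural critic. A bare-bones sketch of the Donsker-Varadhan bound used in MINE, omitting the paper's bias-corrected moving-average gradient; dimensions and network size are placeholders.

```python
import torch
import torch.nn as nn

class MINECritic(nn.Module):
    """Critic for the Donsker-Varadhan lower bound on the mutual information I(X; Z)."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim + z_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, z):
        joint = self.net(torch.cat([x, z], dim=1)).mean()            # paired samples ~ p(x, z)
        z_perm = z[torch.randperm(len(z))]                           # shuffled z approximates p(x)p(z)
        marginal = torch.logsumexp(self.net(torch.cat([x, z_perm], dim=1)), dim=0).squeeze() \
                   - torch.log(torch.tensor(float(len(x))))
        return joint - marginal   # maximize this bound w.r.t. the critic parameters
```

Maximizing the returned bound by gradient ascent on the critic parameters yields the MI estimate; Deep InfoMax reuses a similar objective to train encoders.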