Improving Robustness of Image Classifiers by Selectively Pruning Input Features

Project Statement

The goal of this project is to improve the robustness of image classifiers to both naturally occurring and adversarial corruptions of the input. To do so, we are exploring a direction that involves selectively pruning/dropping features in the input space (i.e., image pixels). A naïve method for selecting which pixels to prune is to select them at random. However, such a strategy optimizes neither for robustness nor for maintaining the classifier's performance on non-corrupted images. We are therefore analyzing alternative selection strategies and the effect they have on the classifier's performance on corrupted as well as non-corrupted images. So far we have considered several such strategies: pruning pixels based on the semantic region they belong to (e.g., foreground or background), and using saliency maps or PCA to score the importance of individual pixels and pruning based on that score. A sketch of the saliency-based variant follows.
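
As an illustration, the saliency-based strategy can be sketched as follows in PyTorch. This is a minimal sketch, not our exact implementation: the gradient-magnitude scoring, the prune_fraction value, and the function names are placeholder choices.

    import torch
    import torch.nn.functional as F

    def saliency_scores(model, x, y):
        # Per-pixel importance via input-gradient magnitude
        # (one simple saliency choice; SmoothGrad is a smoother alternative).
        x = x.clone().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return x.grad.abs().sum(dim=1)  # (B, H, W) importance map

    def prune_pixels(x, scores, prune_fraction=0.3):
        # Zero out the fraction of pixels with the lowest importance scores.
        b, _, h, w = x.shape
        flat = scores.view(b, -1)
        k = int(prune_fraction * flat.shape[1])
        idx = flat.topk(k, largest=False).indices   # least important pixels
        mask = torch.ones_like(flat)
        mask.scatter_(1, idx, 0.0)
        return x * mask.view(b, 1, h, w)            # drop them in all channels

Random pruning corresponds to replacing scores with a uniformly random tensor, which makes comparisons between selection strategies straightforward.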

Data Used

CIFAR-10, MS-COCO, GTSRB

Approach

Algorithms - Deep Neural Networks, Principal Component Analysis, and saliency map generation algorithms such as SmoothGrad (a sketch follows).
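
For reference, a minimal SmoothGrad-style saliency map averages input gradients over noisy copies of the image. The sample count and noise level below are illustrative defaults, not tuned values.

    import torch
    import torch.nn.functional as F

    def smoothgrad(model, x, y, n_samples=25, sigma=0.1):
        # Average gradient magnitudes over Gaussian-perturbed copies of x,
        # which suppresses the noise typical of raw gradient saliency maps.
        total = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            loss = F.cross_entropy(model(noisy), y)
            total += torch.autograd.grad(loss, noisy)[0].abs()
        return (total / n_samples).sum(dim=1)  # (B, H, W) sensitivity map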

 

Methodology - Measure the performance and robustness of image classifiers when input features are pruned using different selection strategies, on both corrupted and non-corrupted images (see the evaluation sketch below).
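
Concretely, the measurement reduces to an evaluation loop like the one below, where prune_fn encapsulates a selection strategy (random, saliency-based, PCA-based, or semantic masking). The helper name and signature are ours, for illustration only.

    import torch

    def evaluate(model, loader, prune_fn=None):
        # Top-1 accuracy on a (clean or corrupted) loader, with inputs
        # optionally pruned by the chosen selection strategy first.
        model.eval()
        correct = total = 0
        for x, y in loader:
            if prune_fn is not None:
                x = prune_fn(x)  # strategy decides which pixels to drop
            with torch.no_grad():
                correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
        return correct / total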

 

Tools - PyTorch, OpenCV

Results

Refer to the table below, which compares the adversarial robustness of models trained on natural images with models trained on foreground-masked images. For both datasets, we observe increased robustness against a PGD adversary when the model and the adversary have access to foreground features only (a sketch of this setting follows the table).

[Table: adversarial robustness of models trained on natural vs. foreground-masked images]
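
The foreground-only setting can be evaluated with a PGD attack whose perturbation is confined to the foreground mask, as in the sketch below. The budget, step size, and step count (8/255, 2/255, 10) are common illustrative values rather than our exact experimental settings, and mask is assumed to be a binary foreground mask (e.g., derived from MS-COCO segmentation annotations).

    import torch
    import torch.nn.functional as F

    def pgd_foreground(model, x, y, mask, eps=8/255, alpha=2/255, steps=10):
        # L_inf PGD where both the model and the adversary see only
        # foreground pixels (mask == 1 on foreground, 0 elsewhere).
        x_adv = x.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv * mask), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign() * mask  # foreground only
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
        return x_adv * mask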

Zeblok AI-Platform resources used:

 

  • Hybrid Base 2 Notebook

  • 1 RTX6000 GPU

  • 1 vCPU

  • 16GB RAM

  • 50GB Storage