comp-imaging-sci/Self-interpretable-classifier-for-medical-image-classification-task

A Test Statistic Estimation-Based Approach for Establishing Self-Interpretable CNN-Based Binary Classifiers

This repository contains the codebase for the paper "A Test Statistic Estimation-Based Approach for Establishing Self-Interpretable CNN-Based Binary Classifiers" (Sengupta and Anastasio, IEEE TMI 2024).

Abstract

Interpretability is highly desired for deep neural network-based classifiers, especially when addressing high-stakes decisions in medical imaging. Commonly used post-hoc interpretability methods can produce plausible but different interpretations of a given model, leading to ambiguity about which one to choose. To address this problem, we investigate a novel decision-theory-inspired approach to establish a self-interpretable model, given a pre-trained deep binary black-box medical image classifier.

This approach involves utilizing a self-interpretable encoder-decoder model in conjunction with a single-layer fully connected network with unity weights. The model is trained to estimate the test statistic of the given trained black-box deep binary classifier to maintain similar accuracy. The decoder output image, referred to as an equivalency map, represents a transformed version of the to-be-classified image that, when processed by the fixed fully connected layer, produces the same test statistic value as the original classifier. The equivalency map provides a visualization of the transformed image features that directly contribute to the test statistic value and permits quantification of their relative contributions. Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative. Detailed quantitative and qualitative analyses have been performed with three different medical image binary classification tasks.
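A hedged illustration of the key idea (variable names and shapes here are ours, not from the repository): because the final fully connected layer is fixed with unity weights, the test statistic reduces to a sum over the equivalency-map pixels, which is what makes each pixel's relative contribution directly quantifiable.

```python
import numpy as np

# Sketch only: a stand-in equivalency map (decoder output) of arbitrary size.
rng = np.random.default_rng(0)
emap = rng.normal(size=(64, 64))

# A single-layer fully connected network with unity weights is equivalent
# to a global sum over the map, yielding the estimated test statistic.
test_statistic = emap.sum()

# Each pixel's relative contribution to the decision is then directly readable.
contribution = emap / test_statistic
assert np.isclose(contribution.sum(), 1.0)
```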

Overview

This repository contains the following components:

  1. Black-box CNN classifier training (classification_blackbox.py)
  2. Self-interpretable classifier training (self-interpretable.py)
  3. Performance testing and visualization (test.ipynb): a Jupyter notebook that provides a test bed for evaluating the classifier and visualizing the equivalency maps.

Data Storage

The data should be stored as NumPy files in channel-last format. Training data: sp_train.npy (class 1) and sa_train.npy (class 0). Validation data: sp_val.npy (class 1) and sa_val.npy (class 0).
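A minimal sketch of preparing the expected files. The shapes below are placeholders for illustration ("sp"/"sa" presumably abbreviate signal-present/signal-absent, but the repository does not say so):

```python
import numpy as np

# Placeholder arrays: (N images, height, width, channels), i.e. channel-last.
sp_train = np.zeros((100, 64, 64, 1), dtype=np.float32)  # class 1
sa_train = np.zeros((100, 64, 64, 1), dtype=np.float32)  # class 0

np.save("sp_train.npy", sp_train)
np.save("sa_train.npy", sa_train)
# Validation arrays follow the same convention:
# np.save("sp_val.npy", sp_val); np.save("sa_val.npy", sa_val)
```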

Running the Python Scripts

  1. Black-box Classifier Training:

    • Execute the following command to run the black-box classifier training script:
    python classification_blackbox.py --n 4 --data_path /path/to/your/data_directory --best_blackbox_ckpt /path/to/your/best_checkpoint_file_directory
    
  2. Self-interpretable Classifier Training:

    • Execute the following command to run the self-interpretable encoder-decoder network:
    python self-interpretable.py --total 4 --randomrestart 1 --data_path /path/to/your/data_directory --best_blackbox_ckpt /path/to/your/best_checkpoint_file_directory --best_interpretable_ckpt /path/to/save/best_checkpoint_file_directory
    

Running the Jupyter Notebook

  1. Start Jupyter Notebook:

    jupyter notebook
  2. Open test.ipynb:

    • Navigate to the test.ipynb file in the Jupyter Notebook interface and open it.
    • Run the cells sequentially to see the self-interpretable classifier's performance and the equivalency map (E-map).

Important Libraries

  • keras==2.2.4
  • tensorflow==1.15
  • numpy
  • matplotlib
  • jupyter

Contact

If you have any questions or feedback, feel free to reach out.

Citation

If you use this code in your research, please cite the following paper:

@article{sengupta2024test,
  title={A Test Statistic Estimation-based Approach for Establishing Self-interpretable CNN-based Binary Classifiers},
  author={Sengupta, Sourya and Anastasio, Mark A},
  journal={IEEE Transactions on Medical Imaging},
  year={2024},
  publisher={IEEE}
}
@article{sengupta2023revisiting,
  title={Revisiting model self-interpretability in a decision-theoretic way for binary medical image classification},
  author={Sengupta, Sourya and Anastasio, Mark A},
  journal={arXiv preprint arXiv:2303.06876},
  year={2023}
}
