Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems
[NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons.
Replication package for the KNOSYS paper titled "An Objective Metric for Explainable AI: How and Why to Estimate the Degree of Explainability".
Open and extensible benchmark for XAI methods
Semantic Meaningfulness: Evaluating counterfactual approaches for real world plausibility
IBO: Inpainting-Based Occlusion to Enhance Explainable Artificial Intelligence Evaluation in Histopathology
Code for evaluating saliency maps with classification metrics.
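For context, one common way to apply classification metrics to a saliency map is to threshold the heatmap and score each pixel as a binary prediction against a ground-truth object mask. The sketch below is a minimal, hypothetical illustration of that idea (function names and the random data are placeholders, not this repository's API):

```python
# Hypothetical sketch: score a saliency heatmap against a ground-truth mask
# with standard classification metrics by thresholding the map and treating
# each pixel as a binary prediction.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def score_saliency(saliency: np.ndarray, mask: np.ndarray, threshold: float = 0.5):
    """saliency: heatmap in [0, 1]; mask: binary ground truth of the same shape."""
    pred = (saliency >= threshold).astype(int).ravel()
    true = mask.astype(int).ravel()
    precision, recall, f1, _ = precision_recall_fscore_support(
        true, pred, average="binary", zero_division=0
    )
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy example with random arrays standing in for a real heatmap and mask.
rng = np.random.default_rng(0)
heatmap = rng.random((224, 224))
gt_mask = (rng.random((224, 224)) > 0.8).astype(int)
print(score_saliency(heatmap, gt_mask))
```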
CNN architectures ResNet-50 and InceptionV3 are used to detect whether CT scan images are COVID-affected, and the predictions are validated using the explainable AI frameworks LIME and Grad-CAM.
ConsisXAI is an implementation of a technique to evaluate global machine learning explainability (XAI) methods based on feature subset consistency
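As a rough illustration of feature-subset consistency (not ConsisXAI's actual implementation), one can compare the top-k feature sets produced by two global explanation methods, or two runs of the same method, using the Jaccard index:

```python
# Hypothetical consistency score: overlap (Jaccard index) between the top-k
# feature subsets reported by two global explanation runs.
def top_k_consistency(importances_a: dict, importances_b: dict, k: int = 5) -> float:
    top_a = set(sorted(importances_a, key=importances_a.get, reverse=True)[:k])
    top_b = set(sorted(importances_b, key=importances_b.get, reverse=True)[:k])
    return len(top_a & top_b) / len(top_a | top_b)

# Toy example: the two runs agree on two of their three most important features.
run_1 = {"age": 0.9, "income": 0.7, "tenure": 0.4, "region": 0.1}
run_2 = {"age": 0.8, "income": 0.6, "region": 0.5, "tenure": 0.2}
print(top_k_consistency(run_1, run_2, k=3))  # 0.5
```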
Repository for the ReVel framework to measure local-linear explanations for black-box models
Research on AutoML and Explainability.
This repository is the code basis for the paper titled "Balancing Privacy and Explainability in Federated Learning"
This project proposes a new methodology for assessing and improving sequential concept bottleneck models (CBMs). The research builds upon the model proposed by Grange et al., of which I was a co-author.
Saliency Metrics is a Python package that implements various metrics for comparing saliency maps generated by explanation methods.
Classify applications using flow features with Random Forest and K-Nearest Neighbors classifiers. Explore augmentation techniques such as random oversampling, SMOTE, BorderlineSMOTE, and ADASYN to better handle underrepresented classes. Measure classifier effectiveness under each sampling technique using accuracy, precision, recall, and F1-score.
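A minimal sketch of this kind of pipeline, assuming scikit-learn and imbalanced-learn and using synthetic placeholder data rather than the repository's flow dataset, might look like this:

```python
# Hedged sketch: oversample minority classes with SMOTE inside the training
# pipeline, fit a Random Forest, and report accuracy/precision/recall/F1.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for flow features.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

clf = Pipeline([
    ("smote", SMOTE(random_state=42)),        # rebalance the training data only
    ("rf", RandomForestClassifier(random_state=42)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```

Placing SMOTE inside the pipeline keeps oversampling confined to the training split, so the reported test metrics are not inflated by synthetic samples.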
Scripts and trained models from our paper: M. Ntrougkas, V. Mezaris, I. Patras, "P-TAME: Explain Any Image Classifier with Trained Perturbations", IEEE Open Journal of Signal Processing, 2025. DOI:10.1109/OJSP.2025.3568756.
A course project on explainable AI
🌀 Writing, documenting, and sharing my PhD journey. I am interested in evaluation methods for XAI.
A CBR-based XAI method for generating example- and counterexample-based explanations using Visual Question Answering techniques