
# Delta-Audit 🔀🧭📊

A lightweight Δ-Attribution suite for auditing model updates (A→B) with behavioural linkage and robustness checks.

CI · License: MIT · Python 3.9+

## Overview

Delta-Audit provides a comprehensive suite of Δ-Attribution metrics to understand how model explanations change when models are updated. It implements behavioural alignment, conservation error, and stability measures to audit model updates across different algorithms and datasets.

*Overview figure*
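To make the idea concrete, here is a minimal, self-contained sketch of a Δ-Attribution comparison between two models. This is not the `delta_audit` API; permutation importance merely stands in for whatever attribution method you use, and the two summary scores are illustrative:

```python
# Minimal sketch of the core idea (not the delta_audit API): compare
# attributions from model A and model B on the same data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

model_a = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
model_b = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)  # the "update"

phi_a = permutation_importance(model_a, X, y, n_repeats=10, random_state=0).importances_mean
phi_b = permutation_importance(model_b, X, y, n_repeats=10, random_state=0).importances_mean

delta = phi_b - phi_a                    # per-feature attribution shift
l1_shift = np.abs(delta).sum()           # overall magnitude of change
cosine = phi_a @ phi_b / (np.linalg.norm(phi_a) * np.linalg.norm(phi_b))
print(f"L1 attribution shift: {l1_shift:.3f}, cosine similarity: {cosine:.3f}")
```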

## Quickstart

```bash
# Install in a virtual environment
python3 -m venv .venv && source .venv/bin/activate
pip install -e . && pip install -r requirements.txt

# Run a quick demonstration (5 minutes)
delta-audit quickstart

# Run the full benchmark (45 experiments)
delta-audit run --config configs/full_benchmark.yaml

# Generate figures from results
delta-audit figures --summary delta_attr_run/results/_summary --out figures/

# Run sanity checks
delta-audit check
```
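The `--config` files referenced above drive the runs. The real schema lives in `configs/` and is defined by the package; the sketch below only suggests the kind of fields such a YAML file might hold, and every key shown is hypothetical:

```yaml
# Hypothetical sketch -- see configs/quickstart.yaml for the real schema.
experiment:
  name: quickstart-demo              # illustrative key, not the actual one
  output_dir: delta_attr_run/results
models:
  algorithm: random_forest           # model family for A and B
  update: retrain                    # how model B is derived from model A
metrics:                             # which Δ-Attribution metrics to compute
  - behavioural_alignment
  - conservation_error
  - stability
```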

## Repository Structure

```
delta-audit/
├── src/delta_audit/          # Main package
│   ├── metrics.py            # Δ-Attribution metrics implementation
│   ├── explainers.py         # Attribution computation methods
│   ├── runners.py            # Training and evaluation pipelines
│   ├── plotting.py           # Figure generation utilities
│   ├── io.py                 # Data loading and saving
│   └── cli.py                # Command-line interface
├── configs/                  # Configuration files
│   ├── quickstart.yaml       # Quick demonstration config
│   └── full_benchmark.yaml   # Full benchmark config
├── delta_attr_run/           # Original experiment structure
│   ├── code/                 # Original scripts (for reproducibility)
│   └── results/              # Results and figures
├── paper/                    # Research paper
│   └── ICCKE_delta.pdf       # Not yet available
├── docs/                     # Documentation website
└── .github/                  # GitHub workflows and templates
```

## Reproducing the Paper

To reproduce all results and figures from the paper:

```bash
# 1. Install dependencies
pip install -e . && pip install -r requirements.txt

# 2. Run the full benchmark (reproduces all 45 experiments)
delta-audit run --config configs/full_benchmark.yaml

# 3. Generate all figures
delta-audit figures --summary delta_attr_run/results/_summary --out delta_attr_run/results/figures/

# 4. Check results
delta-audit check
```

The results will be saved in `delta_attr_run/results/` with the same structure as in the paper.
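If you want to inspect the summary programmatically, a few lines of pandas usually suffice. The snippet below assumes the `_summary` directory contains CSV files, which is an assumption about the on-disk layout rather than a documented guarantee:

```python
# Assumes CSV output in the summary directory -- adjust if the actual
# run writes a different format (e.g., JSON).
from pathlib import Path
import pandas as pd

summary_dir = Path("delta_attr_run/results/_summary")
frames = [pd.read_csv(p) for p in sorted(summary_dir.glob("*.csv"))]
if frames:
    summary = pd.concat(frames, ignore_index=True)
    print(summary.head())
else:
    print(f"No CSV files found in {summary_dir}")
```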

## Δ-Attribution Metrics

Delta-Audit implements the Δ-Attribution metrics described in the paper, including behavioural alignment, conservation error, and stability measures; one illustrative formulation is sketched below.
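As one concrete example, conservation error can be read as a completeness-style check on the update: the summed change in attributions should track the actual change in the model's output. The formulation below is an illustrative sketch and may differ from the exact definition used in the paper:

```python
import numpy as np

def conservation_error(phi_a, phi_b, f_a_x, f_b_x):
    """Gap between the total attribution shift and the actual prediction
    shift at a single input x (illustrative; may differ from the paper)."""
    delta_phi = float(np.sum(phi_b - phi_a))  # summed attribution change
    delta_f = f_b_x - f_a_x                   # change in model output at x
    return abs(delta_phi - delta_f)

# Attributions that nearly conserve the prediction change give ~0 error.
print(conservation_error(np.array([0.20, 0.30]),
                         np.array([0.25, 0.45]), 0.50, 0.70))
```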

## Supported Algorithms

## Supported Datasets

## Documentation

See the documentation website for detailed guides.

## Citation

If you use Delta-Audit in your research, please cite the following (full citation details will be available soon):

```bibtex
@article{hemmat2025delta,
  title={Delta-Audit: Explaining What Changes When Models Change},
  author={Hemmat, Arshia},
  journal={arXiv preprint},
  year={2025}
}
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments

Thanks to the open-source community for the excellent tools that made this project possible, particularly scikit-learn, matplotlib, and pandas.