
TreeCompress: top-down compression of decision tree ensembles using L1 regularization

This Python package takes a pre-trained tree ensemble model and compresses it in a top-down fashion. Starting from the root and moving down level by level, it prunes away subtrees by fitting L1-regularized coefficients to the nodes at each level.

It uses the tree representation of Veritas.
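
To build intuition, here is a minimal conceptual sketch of the level-wise idea. It is not the package's internals, and all names in it are illustrative: node memberships at one tree level are treated as binary features, an L1-regularized linear model is fit on them, and subtrees whose coefficients shrink to zero become candidates for pruning.

# Conceptual sketch only, NOT tree_compress's actual implementation:
# membership of each example in each node at one level is a 0/1 feature;
# an L1-penalized model zeroes out coefficients of uninformative nodes,
# and the corresponding subtrees can be pruned.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
node_membership = rng.integers(0, 2, size=(200, 8))  # 200 examples, 8 nodes at this level
y = rng.integers(0, 2, size=200)                     # binary labels

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
lasso.fit(node_membership, y)

prunable = np.flatnonzero(lasso.coef_[0] == 0.0)  # zero weight => prunable subtree
print("prunable subtrees at this level:", prunable)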

Installation

Currently, we only provide a Linux binary for LOP on Python 3.10. Support for macOS, Windows, and more recent Python versions will follow soon.

From source:

git clone https://github.com/ML-KULeuven/lop_compress
cd lop_compress
pip install .
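
A quick sanity check that the installation worked (just imports, nothing package-specific is assumed):

# Both tree_compress and its Veritas dependency should import cleanly
import tree_compress
import veritas
print(tree_compress.__name__, veritas.__name__)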

Example

import tree_compress
import veritas
import numpy as np
from sklearn.datasets import make_moons
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

noise = 0.05
xtrain, ytrain = make_moons(200, noise=noise, random_state=1)
xtest, ytest = make_moons(200, noise=noise, random_state=2)
xvalid, yvalid = make_moons(200, noise=noise, random_state=3)

data = tree_compress.Data(xtrain, ytrain, xtest, ytest, xvalid, yvalid)

clf = RandomForestClassifier(
        max_depth=5,
        random_state=2,
        n_estimators=50)
clf.fit(data.xtrain, data.ytrain)

# Convert the sklearn ensemble to Veritas' AddTree representation
at_orig = veritas.get_addtree(clf, silent=True)

compr = tree_compress.Compress(
    data,
    at_orig,
    score=balanced_accuracy_score,
    isworse=lambda v, ref: ref - v > 0.005,
    silent=True,
)
at_pruned = compr.compress(max_rounds=2, timeout=7200)
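
To gauge the effect of compression, one possible follow-up is comparing the size of the two ensembles. The snippet below assumes Veritas' AddTree exposes num_nodes(); the exact method name may differ between Veritas versions, so verify it against your installation.

# Compare ensemble size before and after compression
# (num_nodes() is assumed to exist on Veritas' AddTree; check your version)
print("nodes before:", at_orig.num_nodes())
print("nodes after: ", at_pruned.num_nodes())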

Experiments

The experiments from our ICML paper can be reproduced with the files in experiment/.

Each experiment is launched via the commands in the experiment/settings/ folder. Figures and tables from the paper can be regenerated with the notebook experiment/icml.ipynb.

Notes:

  • Running the experiments requires a working installation of PyTorch (LRL1 depends on it).
  • A Gurobi license is required to run the verification experiments. A free academic license can be requested from Gurobi.

Reference

Devos, L., Martens, T., Oruç, D.C., Meert, W., Blockeel, H., Davis, J.: Compressing tree ensembles through level-wise optimization and pruning. In: Proceedings of the 42nd International Conference on Machine Learning (2025)
