distillML: Model Distillation and Interpretability Methods for Machine
Learning Models
Provides several methods for model distillation and interpretability for general black-box machine learning models, as well as treatment effect estimation methods. For details on the algorithms implemented, see <https://forestry-labs.github.io/distillML/index.html>.
Brian Cho, Theo F. Saarinen, Jasjeet S. Sekhon, Simon Walter.
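Since the package is published on CRAN (and its development sources are hosted on GitHub, per the URL field below), it can be installed in the usual way. The `remotes::install_github` line is a standard pattern for installing a development version and assumes the `remotes` package is available:

```r
# Install the released version from CRAN
install.packages("distillML")

# Or install the development version from GitHub
# (assumes the remotes package is installed)
remotes::install_github("forestry-labs/distillML")
```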
Version: 0.1.0.13
Imports: ggplot2, glmnet, Rforestry, dplyr, R6 (≥ 2.0), checkmate, purrr, tidyr, data.table, mltools, gridExtra
Suggests: testthat, knitr, rmarkdown, mvtnorm
Published: 2023-03-25
DOI: 10.32614/CRAN.package.distillML
Author: Brian Cho [aut], Theo Saarinen [aut, cre], Jasjeet Sekhon [aut], Simon Walter [aut]
Maintainer: Theo Saarinen <theo_s at berkeley.edu>
BugReports: https://github.com/forestry-labs/distillML/issues
License: GPL (≥ 3)
URL: https://github.com/forestry-labs/distillML
NeedsCompilation: no
Materials: README
CRAN checks: distillML results
Please use the canonical form https://CRAN.R-project.org/package=distillML to link to this page.