Implementation of Relative Feature Importance (RFI) using a modular sampling approach.
References
König G, Molnar C, Bischl B, Grosse-Wentrup M (2021). “Relative Feature Importance.” In 2020 25th International Conference on Pattern Recognition (ICPR), 9318–9325. doi:10.1109/ICPR48806.2021.9413090.
Super classes
xplainfi::FeatureImportanceMethod
-> xplainfi::PerturbationImportance
-> RFI
Methods
Method new()
Creates a new instance of the RFI class
Usage
RFI$new(
task,
learner,
measure,
resampling = NULL,
features = NULL,
conditioning_set = NULL,
relation = "difference",
iters_perm = 1L,
sampler = NULL
)
Arguments
task, learner, measure, resampling, features
Passed to PerturbationImportance
conditioning_set
(character()) Set of features to condition on. Can be overridden in $compute(). The default (character(0)) is equivalent to PFI. In CFI, this would be set to all features except that of interest.
relation
(character(1)) How to relate perturbed scores to originals. Can be overridden in $compute().
iters_perm
(integer(1)) Number of permutation iterations. Can be overridden in $compute().
sampler
(ConditionalSampler) Optional custom sampler. Defaults to ARFSampler.
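A minimal construction sketch, based only on the signature and defaults documented above (feature names refer to the friedman1 task used in the examples below; regr.ranger is provided by mlr3learners):

library(mlr3)
library(mlr3learners)  # provides the regr.ranger learner used below

task = tgen("friedman1")$generate(n = 200)
learner = lrn("regr.ranger", num.trees = 50)
measure = msr("regr.mse")

# Empty conditioning set (the default): equivalent to PFI-style marginal perturbation
rfi_marginal = RFI$new(
  task = task,
  learner = learner,
  measure = measure,
  conditioning_set = character(0)
)

# Conditioning on a fixed set of other features; since no sampler is supplied,
# conditional samples are drawn by the default ARFSampler
rfi_conditional = RFI$new(
  task = task,
  learner = learner,
  measure = measure,
  conditioning_set = c("important1", "important2")
)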
Method compute()
Compute RFI scores
Usage
RFI$compute(
relation = NULL,
conditioning_set = NULL,
iters_perm = NULL,
store_backends = TRUE
)
Arguments
relation
(character(1)) How to relate perturbed scores to originals. If NULL, uses the stored value.
conditioning_set
(character()) Set of features to condition on. If NULL, uses the stored parameter value.
iters_perm
(integer(1)) Number of permutation iterations. If NULL, uses the stored value.
store_backends
(logical(1)) Whether to store backends.
Examples
library(mlr3)
task = tgen("friedman1")$generate(n = 200)
rfi = RFI$new(
task = task,
learner = lrn("regr.ranger", num.trees = 50),
measure = msr("regr.mse"),
conditioning_set = c("important1")
)
#> ℹ No <ConditionalSampler> provided, using <ARFSampler> with default settings.
#> ℹ No <Resampling> provided, using holdout resampling with default ratio.
rfi$compute()
#> Key: <feature>
#> feature importance
#> <char> <num>
#> 1: important1 0.00000000
#> 2: important2 4.56934410
#> 3: important3 1.03210376
#> 4: important4 9.62226686
#> 5: important5 2.45917494
#> 6: unimportant1 0.84629886
#> 7: unimportant2 0.16659014
#> 8: unimportant3 -0.23757522
#> 9: unimportant4 0.06427863
#> 10: unimportant5 -0.55808231
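As a further sketch (not part of the shipped example), the stored parameters can be overridden at compute time via the arguments documented for $compute(); the feature names come from the friedman1 task above and the returned importances will vary across runs:

# Override the conditioning set and average over 5 permutation iterations;
# relation and store_backends keep their stored/default values
rfi$compute(
  conditioning_set = c("important1", "important2"),
  iters_perm = 5L
)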