These data generating processes (DGPs) are designed to illustrate specific strengths and weaknesses of different feature importance methods such as permutation feature importance (PFI), conditional feature importance (CFI), and relative feature importance (RFI). Each DGP focuses on one primary challenge to make the differences between methods clear.
Usage
sim_dgp_correlated(n = 500L, r = 0.9)
sim_dgp_mediated(n = 500L)
sim_dgp_confounded(n = 500L, hidden = TRUE)
sim_dgp_interactions(n = 500L)
sim_dgp_independent(n = 500L)
Arguments
- n (integer(1): 500L) Number of observations to generate.
- r (numeric(1): 0.9) Correlation between x1 and x2. Must be between -1 and 1.
- hidden (logical(1): TRUE) Whether to hide the confounder from the returned task. If FALSE, the confounder is included as a feature, allowing direct adjustment. If TRUE (default), only the proxy is available, simulating unmeasured confounding.
Value
A regression task (mlr3::TaskRegr) with a data.table backend.
Details
Correlated Features DGP: This DGP creates highly correlated predictors. Because a fitted model can spread its reliance across the redundant pair, PFI will show artificially low importance for the causal feature x1, while CFI will correctly identify each feature's conditional contribution.
Mathematical Model: $$(X_1, X_2)^T \sim \text{MVN}(0, \Sigma)$$ where \(\Sigma\) is a 2×2 covariance matrix with 1 on the diagonal and correlation \(r\) on the off-diagonal. $$X_3 \sim N(0,1), \quad X_4 \sim N(0,1)$$ $$Y = 2 \cdot X_1 + X_3 + \varepsilon$$ where \(\varepsilon \sim N(0, 0.2^2)\).
Feature Properties:
- x1: Standard normal from the MVN pair, direct causal effect on y (β = 2.0)
- x2: Correlated with x1 (correlation = r), NO causal effect on y (β = 0)
- x3: Independent standard normal, direct causal effect on y (β = 1.0)
- x4: Independent standard normal, no effect on y (β = 0)
Expected Behavior:
- Marginal methods (PFI, Marginal SAGE): Will falsely assign importance to x2 due to its correlation with x1
- Conditional methods (CFI, Conditional SAGE): Should correctly assign near-zero importance to x2
- Key insight: x2 is a "spurious predictor", correlated with the causal feature x1 but not causal itself
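As a minimal sketch, the data-generating equations above can be reproduced in a few lines of R (this is an illustration, not necessarily the package's exact implementation; the task id "correlated_sketch" is made up here):
library(MASS)        # mvrnorm() for the correlated pair
library(data.table)
library(mlr3)
n <- 500; r <- 0.9
Sigma <- matrix(c(1, r, r, 1), nrow = 2)        # unit variances, correlation r
x12 <- mvrnorm(n, mu = c(0, 0), Sigma = Sigma)
x1 <- x12[, 1]; x2 <- x12[, 2]
x3 <- rnorm(n); x4 <- rnorm(n)
# Only x1 and x3 enter the outcome; x2 is merely correlated with x1
y <- 2 * x1 + x3 + rnorm(n, sd = 0.2)
task <- as_task_regr(data.table(y, x1, x2, x3, x4), target = "y", id = "correlated_sketch")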
Mediated Effects DGP: This DGP demonstrates the difference between total and direct causal effects. Some features affect the outcome only through mediators.
Mathematical Model: $$\text{exposure} \sim N(0,1), \quad \text{direct} \sim N(0,1)$$ $$\text{mediator} = 0.8 \cdot \text{exposure} + 0.6 \cdot \text{direct} + \varepsilon_m$$ $$Y = 1.5 \cdot \text{mediator} + 0.5 \cdot \text{direct} + \varepsilon$$ where \(\varepsilon_m \sim N(0, 0.3^2)\) and \(\varepsilon \sim N(0, 0.2^2)\).
Feature Properties:
- exposure: No direct effect on y; affects y only through the mediator (total effect = 0.8 × 1.5 = 1.2)
- mediator: Mediates the effect of exposure on y
- direct: Has both a direct effect on y and an effect on the mediator
- noise: No causal relationship to y
Causal Structure: exposure → mediator → y, with direct → mediator and direct → y.
Expected Behavior:
- PFI: Shows total effects (exposure appears important)
- CFI: Shows direct effects (exposure appears less important when conditioning on mediator)
- RFI with mediator: Should show direct effects similar to CFI
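A rough sketch of the mediation model above, written directly from the stated equations (illustrative only):
n <- 500
exposure <- rnorm(n)
direct   <- rnorm(n)
noise    <- rnorm(n)                                     # unrelated to y
mediator <- 0.8 * exposure + 0.6 * direct + rnorm(n, sd = 0.3)
y        <- 1.5 * mediator + 0.5 * direct + rnorm(n, sd = 0.2)
# exposure reaches y only via mediator: total effect = 0.8 * 1.5 = 1.2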
Confounding DGP: This DGP includes a confounder that affects both features and the outcome. Uses simple coefficients for easy interpretation.
Mathematical Model: $$H \sim N(0,1)$$ $$X_1 = H + \varepsilon_1, \quad X_2 = H + \varepsilon_2$$ $$\text{proxy} = H + \varepsilon_p, \quad \text{independent} \sim N(0,1)$$ $$Y = H + 0.5 \cdot X_1 + 0.5 \cdot X_2 + \text{independent} + \varepsilon$$ where all \(\varepsilon \sim N(0, 0.5^2)\) independently.
Model Structure:
- Confounder: H ~ N(0,1), potentially unobserved
- x1 = H + noise, x2 = H + noise (both affected by the confounder)
- proxy = H + noise (noisy measurement of the confounder)
- independent ~ N(0,1) (truly independent)
- y = H + 0.5·x1 + 0.5·x2 + independent + noise
Expected Behavior:
- PFI: Will show inflated importance for x1 and x2 due to confounding
- CFI: Should partially account for confounding through conditional sampling
- RFI conditioning on the confounder or proxy: Should reduce confounding bias
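A hedged sketch of the confounding model, written from the equations above (the hidden argument in the package controls whether the confounder column is exposed as a feature; here it is generated explicitly):
n <- 500
confounder  <- rnorm(n)                                  # H, possibly unobserved
x1          <- confounder + rnorm(n, sd = 0.5)
x2          <- confounder + rnorm(n, sd = 0.5)
proxy       <- confounder + rnorm(n, sd = 0.5)           # noisy measurement of H
independent <- rnorm(n)
y <- confounder + 0.5 * x1 + 0.5 * x2 + independent + rnorm(n, sd = 0.5)
# With hidden = TRUE, only x1, x2, proxy, and independent would be features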
Interaction Effects DGP: This DGP demonstrates a pure interaction effect where features have no main effects.
Mathematical Model: $$Y = 2 \cdot X_1 \cdot X_2 + X_3 + \varepsilon$$ where \(X_j \sim N(0,1)\) independently and \(\varepsilon \sim N(0, 0.5^2)\).
Feature Properties:
- x1, x2: Independent features with ONLY an interaction effect (no main effects)
- x3: Independent feature with a main effect only
- noise1, noise2: No causal effects
Expected Behavior:
- PFI: Should assign near-zero importance to x1 and x2 (no marginal effect)
- CFI: Should capture the interaction and assign high importance to x1 and x2
- Ground truth: x1 and x2 are important ONLY through their interaction
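A short sketch of the pure-interaction model above (illustrative, not the package source):
n <- 500
x1 <- rnorm(n); x2 <- rnorm(n); x3 <- rnorm(n)
noise1 <- rnorm(n); noise2 <- rnorm(n)                   # no causal effect on y
# x1 and x2 contribute only through their product; x3 has a main effect
y <- 2 * x1 * x2 + x3 + rnorm(n, sd = 0.5)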
Independent Features DGP: This is a baseline scenario where all features are independent and their effects are additive. All importance methods should give similar results.
Mathematical Model: $$Y = 2.0 \cdot X_1 + 1.0 \cdot X_2 + 0.5 \cdot X_3 + \varepsilon$$ where \(X_j \sim N(0,1)\) independently and \(\varepsilon \sim N(0, 0.2^2)\).
Feature Properties:
- important1, important2, important3: Independent features with different effect sizes (β = 2.0, 1.0, 0.5)
- unimportant1, unimportant2: Independent noise features with no effect
Expected Behavior:
- All methods: Should rank features consistently by their true effect sizes
- Ground truth: important1 > important2 > important3 > unimportant1, unimportant2 ≈ 0
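The additive baseline is the simplest to write down; a sketch following the model above (illustrative only):
n <- 500
important1 <- rnorm(n); important2 <- rnorm(n); important3 <- rnorm(n)
unimportant1 <- rnorm(n); unimportant2 <- rnorm(n)       # pure noise features
# Additive effects with decreasing magnitude, matching the expected ranking
y <- 2.0 * important1 + 1.0 * important2 + 0.5 * important3 + rnorm(n, sd = 0.2)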
Functions
- sim_dgp_correlated(): Correlated features demonstrating PFI's limitations
- sim_dgp_mediated(): Mediated effects showing direct vs. total importance
- sim_dgp_confounded(): Confounding scenario for conditional sampling
- sim_dgp_interactions(): Interaction effects between features
- sim_dgp_independent(): Independent features baseline scenario
References
Ewald, Fiona Katharina, Bothmann, Ludwig, Wright, Marvin N., Bischl, Bernd, Casalicchio, Giuseppe, König, Gunnar (2024). “A Guide to Feature Importance Methods for Scientific Inference.” In Longo, Luca, Lapuschkin, Sebastian, Seifert, Christin (eds.), Explainable Artificial Intelligence, 440–464. ISBN 978-3-031-63797-1, doi:10.1007/978-3-031-63797-1_22.
Examples
task = sim_dgp_correlated(200)
task$data()
#> y x1 x2 x3 x4
#> <num> <num> <num> <num> <num>
#> 1: -1.0982610 -0.4428071 -0.23934452 0.01403434 -1.92432778
#> 2: 2.3117855 0.5767212 0.78106741 1.11973147 -0.06758976
#> 3: 0.7147643 0.4998377 0.51543786 -0.12480321 -1.34860127
#> 4: -2.6010494 -0.5858336 -1.23438347 -1.37973287 -1.39904346
#> 5: 3.3845077 1.1618951 1.58321415 1.10585657 -1.27340330
#> ---
#> 196: 0.3773938 0.5926598 0.70238351 -0.46089361 -1.50147442
#> 197: -2.4319543 -0.7763542 -0.39619007 -0.51775056 -0.34607112
#> 198: 0.4942087 0.2736232 0.51007668 0.14771309 0.06188192
#> 199: 0.6381516 0.1781219 -0.68733730 0.31457869 -0.09699775
#> 200: -2.0318049 -0.9000224 -0.08374335 0.03561983 -0.31835398
# With different correlation
task_high_cor = sim_dgp_correlated(200, r = 0.95)
cor(task_high_cor$data()$x1, task_high_cor$data()$x2)
#> [1] 0.9483875
task = sim_dgp_mediated(200)
task$data()
#> y direct exposure mediator noise
#> <num> <num> <num> <num> <num>
#> 1: 1.22287708 0.2334889 0.9020427 0.9566474 -0.85654679
#> 2: -1.35314469 -1.1153974 0.1637073 -0.4203579 0.19734928
#> 3: 0.87550582 0.7944294 0.7566281 0.5392914 -0.10721580
#> 4: 2.54838097 0.4682312 1.3346002 1.5982313 0.23961697
#> 5: 0.06452206 -1.2731247 1.5873714 0.3108156 0.36435867
#> ---
#> 196: 2.97049692 2.0279691 -0.0788349 1.1664691 -0.81099497
#> 197: -0.96483131 -0.5983913 -0.4993008 -0.5785399 0.10878012
#> 198: -1.24951414 -0.6559367 -1.0351772 -0.5762641 -0.28405395
#> 199: 2.57260012 1.2164183 0.4794212 1.3319747 0.07308749
#> 200: 0.40101768 -0.7484051 0.8699730 0.3602471 0.26331659
# Hidden confounder scenario (traditional)
task_hidden = sim_dgp_confounded(200, hidden = TRUE)
task_hidden$feature_names # proxy available but not confounder
#> [1] "independent" "proxy" "x1" "x2"
# Observable confounder scenario
task_observed = sim_dgp_confounded(200, hidden = FALSE)
task_observed$feature_names # both confounder and proxy available
#> [1] "confounder" "independent" "proxy" "x1" "x2"
task = sim_dgp_interactions(200)
task$data()
#> y noise1 noise2 x1 x2 x3
#> <num> <num> <num> <num> <num> <num>
#> 1: -2.34441760 0.3809744 1.000097015 -0.8301725 1.600118402 0.11802796
#> 2: 0.09205272 0.6306800 -0.501855463 -0.2044470 -0.847032420 0.17952533
#> 3: 1.94991390 0.4343245 -1.799742979 -0.1985319 -0.946624676 1.37721893
#> 4: -0.12621186 -0.6064331 0.122429288 0.8434210 0.443729999 -0.86563430
#> 5: -1.14535286 1.6021822 0.004124262 0.8959997 -0.562132215 -0.02117223
#> ---
#> 196: -0.02215617 -0.6696190 -0.687780279 0.5041544 -0.191231472 -0.21650299
#> 197: -1.20269668 -1.4214582 -1.823299662 1.4929161 0.002221343 -0.74791788
#> 198: 3.63454445 0.9606509 0.016812687 0.9097053 1.250037931 1.20002150
#> 199: -1.98155581 -0.6333708 0.095992757 1.6096458 -0.637841051 0.10583840
#> 200: 0.26154008 0.6757633 -1.026208950 -0.1379192 0.650368813 0.07514873
task = sim_dgp_independent(200)
task$data()
#> y important1 important2 important3 unimportant1 unimportant2
#> <num> <num> <num> <num> <num> <num>
#> 1: -1.5859268 -1.25680880 0.6802313 0.7670612 -1.1016874 -0.7109923
#> 2: 0.8570409 1.33258644 -0.8954708 -1.1141991 0.1349543 0.1382692
#> 3: 2.8373362 1.74140377 -1.1639628 1.2505838 -0.8021798 -0.8286302
#> 4: 0.5813651 0.14446853 0.4194224 -0.4141155 -1.6935823 -0.4137860
#> 5: -0.7328898 -1.09042611 1.5401146 0.4138058 1.1547045 -0.2675034
#> ---
#> 196: 1.6822291 1.57359319 -1.3516758 0.2246197 2.1591702 -2.0814301
#> 197: -1.8574093 -0.01768991 -1.2648662 -0.7914148 0.3847277 -1.0074660
#> 198: 3.8384382 2.06874695 0.1575735 -0.3963316 1.3714221 -0.7201743
#> 199: -2.3929911 -0.15628985 -1.6930708 -0.8685832 -1.7598634 0.9546445
#> 200: -2.1081653 -0.48938595 -0.6110942 -0.5727449 -0.2107130 2.0563720