Question about reading back-normalized SHAP values after normalizing data #16151
@MoonCapture There is no parameter to do that, so you would have to implement it yourself. I would approach this by calculating a linear approximation, similarly to what is done in Generalized DeepSHAP.

You will need to get the SHAP predictions as follows. First, you have to ensure that the SHAP values are in the same space as the predictions (i.e., if the model uses a link function, you might have to apply the inverse link function to the SHAP values); this is what the parameter `output_space` does. Then you will need the contribution to the change of prediction against every single point from the `background_frame`; this is what `output_per_reference` does. Relevant part of the docstring:

```
:param output_space: If True, linearly scale the contributions so that they sum up to the prediction.
                     NOTE: This will result only in approximate SHAP values even if the model supports exact SHAP calculation.
                     NOTE: This will not have any effect if the estimator doesn't use a link function.
:param output_per_reference: If True, return baseline SHAP, i.e., contribution for each data point for each reference from the background_frame.
                             If False, return TreeSHAP if no background_frame is provided, or marginal SHAP if background frame is provided.
```
Can be used only with `background_frame`.

Next, you denormalize the SHAP values. This depends on how you normalized the data: if you can invert the normalization just by multiplication, it's simple, just multiply all values. If you need addition as well, apply it only to the Bias, after the multiplication. If the normalization procedure you use is more complicated, use eq. 3 from *Explaining a series of models by propagating Shapley values* (or you can check my implementation of simplified G-DeepSHAP in our StackedEnsembles; simplified because it is applied to only two layers (base models -> metalearner)).

Next, you should check that the Bias equals the denormalized prediction on the corresponding background-frame point:

```
abs(denormalize(best_model_01.predict(background_frame[i, :])) - denorm_shap_pred[denorm_shap_pred["BackgroundRowIdx"]==i, "Bias"]) < 1e-6
```

Then you can also check that the row sums of the denormalized contributions (including the Bias) match the denormalized predictions.

Next, if you're confident that those values are close enough (depending on the model, the epsilon can range from 1e-6 up to 1e-3; XGBoost uses floats in our implementation for prediction and doubles for contributions, so there the epsilon will be closer to 1e-3), you take the average contribution across the background frame. Something like:

```
denorm_shap_pred.drop("BackgroundRowIdx").groupby("RowIdx").mean()
```

And that should be the result you are looking for. It's not an exact SHAP value, since G-DeepSHAP gives only an approximation when there is some non-linearity, but at least you can compute it in reasonable time.
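The "multiply all values, add the shift only to the Bias" step above can be sketched with plain numpy. This is a minimal illustration, not H2O code; it assumes a standard-score target normalization `y_norm = (y - mu) / sigma`, and the toy numbers (`mu`, `sigma`, `shap_norm`, `bias_norm`) are made up for the example:

```python
import numpy as np

# Assumed normalization: y_norm = (y - mu) / sigma, so y = sigma * y_norm + mu
mu, sigma = 50.0, 10.0

# Toy per-reference SHAP values in normalized space; with the bias they
# sum up to the normalized prediction (the additivity property)
shap_norm = np.array([[0.2, -0.1, 0.3]])  # feature contributions
bias_norm = np.array([0.4])               # prediction on the reference row
pred_norm = shap_norm.sum(axis=1) + bias_norm

# Denormalize: multiply everything by sigma, add mu ONLY to the Bias
shap_denorm = shap_norm * sigma
bias_denorm = bias_norm * sigma + mu

# Additivity must survive denormalization:
# contributions + bias == denormalized prediction
pred_denorm = shap_denorm.sum(axis=1) + bias_denorm
assert np.allclose(pred_denorm, pred_norm * sigma + mu)
```

Adding `mu` to every contribution instead of only to the Bias would break additivity, which is why the shift goes to the Bias alone.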
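For a non-affine denormalization, the linear-approximation idea mentioned above (in the spirit of the propagation rule of *Explaining a series of models by propagating Shapley values*) amounts to rescaling each contribution by the ratio of the output change in denormalized vs. normalized space. The function `g` and all numbers below are hypothetical, this is only a sketch of the rescaling, not the paper's exact implementation:

```python
import numpy as np

def g(y_norm):
    # Hypothetical nonlinear denormalization, e.g. the target was log-scaled
    return np.exp(y_norm)

phi = np.array([0.2, -0.1, 0.3])  # SHAP values of the normalized model f
f_x, f_b = 0.8, 0.4               # f(x) and f(background point b); sum(phi) == f_x - f_b

# Redistribute the change of g between b and x proportionally to phi
scale = (g(f_x) - g(f_b)) / (f_x - f_b)
phi_denorm = phi * scale

# Additivity holds in the denormalized space (approximately in general,
# exactly here because we rescale a single scalar output)
assert np.isclose(phi_denorm.sum(), g(f_x) - g(f_b))
```

With a per-reference (baseline SHAP) decomposition this rescaling is done per background row, using that row's `f_b`.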
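The Bias check and the final averaging can be illustrated on a toy frame in pandas (note that H2O's `drop("BackgroundRowIdx")` corresponds to `drop(columns="BackgroundRowIdx")` in pandas). The column names follow the answer above; the values and the stand-in background predictions are invented for the example:

```python
import pandas as pd

# Hypothetical denormalized per-reference SHAP frame: one row per
# (explained row, background row) pair, as with output_per_reference=True
denorm_shap_pred = pd.DataFrame({
    "RowIdx":           [0, 0, 1, 1],
    "BackgroundRowIdx": [0, 1, 0, 1],
    "x1":   [1.0, 2.0, -1.0, 0.0],
    "x2":   [0.5, -0.5, 1.5, 2.5],
    "Bias": [55.0, 52.0, 55.0, 52.0],
})

# Check: Bias equals the denormalized prediction on each background row
# (stand-in for denormalize(best_model_01.predict(background_frame[i, :])))
denorm_bg_pred = {0: 55.0, 1: 52.0}
for i, bias in zip(denorm_shap_pred["BackgroundRowIdx"], denorm_shap_pred["Bias"]):
    assert abs(denorm_bg_pred[i] - bias) < 1e-6

# Final step: average the contributions across the background frame,
# one averaged explanation per explained row
avg_shap = denorm_shap_pred.drop(columns="BackgroundRowIdx").groupby("RowIdx").mean()
```

Each row of `avg_shap` is then the (approximate) denormalized SHAP explanation for one row of the explained frame.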
This is my code: when I am training my H2O AutoML model, I first normalize the data. How do I get the denormalized (inverse-normalized) SHAP values when I run a SHAP interpretation of the model? (Win11, Python, H2O 3.46.0.1)

Thanks!