Parallelization of SHAP fails on certain models #357
Labels
Blocked
Issue is blocked by some bigger issue
internal-mle
Priority: Medium
Type: Bug
Something isn't working
Type: HPC
High performance extensions (distributed, out-of-core computing etc.)
I have trained an Autoencoder classifier and am trying to run SHAP computations on multiple CPUs. However, I get an exception as follows -
Here is the full, reproducible sample code for reference -
This code works for other models but fails for the AutoEncoder. After some research, I found that the error originates in the Ray library (which Alibi uses for parallelization), and ultimately in the way TensorFlow models are serialized.
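To illustrate the failure mode in isolation: Ray ships the predictor to worker processes by serializing it, and a live TensorFlow/Keras model (like many objects holding locks or other OS resources) cannot be pickled. A common workaround pattern is a wrapper that serializes only a path to the saved model and reloads it lazily inside each worker. The sketch below is a minimal stand-in, not Alibi's or Ray's actual API: `UnpicklableModel`, `LazyPredictor`, and `load_model` are hypothetical names, with a `threading.Lock` standing in for the unpicklable TF internals.

```python
import pickle
import threading

class UnpicklableModel:
    """Stand-in for a TF/Keras model: holds a lock, so pickle fails on it."""
    def __init__(self):
        self._lock = threading.Lock()  # thread locks cannot be pickled

    def predict(self, x):
        return [v * 2 for v in x]

def load_model(path):
    """Hypothetical loader (in real code: e.g. reload the saved model from disk)."""
    return UnpicklableModel()

class LazyPredictor:
    """Picklable predictor: serializes only the model path, loads lazily per process."""
    def __init__(self, path):
        self.path = path
        self._model = None  # never serialized; rebuilt in each worker

    def __call__(self, x):
        if self._model is None:
            self._model = load_model(self.path)
        return self._model.predict(x)

    def __getstate__(self):
        return {"path": self.path}  # drop the live model before pickling

    def __setstate__(self, state):
        self.path = state["path"]
        self._model = None

# The live model itself is not picklable...
try:
    pickle.dumps(UnpicklableModel())
except TypeError:
    print("raw model: not picklable")

# ...but the lazy wrapper round-trips fine and still predicts after restore.
restored = pickle.loads(pickle.dumps(LazyPredictor("model.h5")))
print(restored([1, 2, 3]))
```

The same idea applies whether the workers come from Ray or `multiprocessing`: keep the unserializable object out of whatever closure or instance crosses the process boundary, and reconstruct it on the worker side instead.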
Any ideas on how to solve this issue would be appreciated.
Thanks