For a model that needs to explain itself, which methods could be appropriately used? (Choose Three)
A. LIME (Local Interpretable Model-agnostic Explanations)
B. SHAP (SHapley Additive exPlanations)
C. Embedding model parameters directly into the user interface
D. Feature importance scores
E. Randomizing input features to test output variation

Answer: ABD

Explanation: LIME, SHAP, and feature importance scores are established techniques for explaining what drives a model's predictions. Embedding model parameters directly into the user interface exposes raw weights without interpreting them, and simply randomizing input features is an ad-hoc sensitivity check rather than a recognized explanation method.
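For reference, a minimal sketch of how options B and D might look in practice, assuming scikit-learn and the shap package are installed; the toy dataset and model below are illustrative and not part of the question.

import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a simple tree-based model on an illustrative dataset.
data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# Option D: global feature importance scores built into the model.
top = sorted(zip(data.feature_names, model.feature_importances_),
             key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")

# Option B: SHAP values attribute each individual prediction to its input features.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:10])  # local explanations for 10 samples

LIME (option A) works at the same per-prediction level via the lime package's LimeTabularExplainer, which fits a simple local surrogate model around each instance to approximate the model's behavior.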
