You are a data scientist working for a bank and have used Azure ML to train and register a machine learning model that predicts whether a customer is likely to repay a loan.
You want to understand how your model makes its predictions and must ensure that it does not violate government regulations, such as denying loans based on where an applicant lives.
You need to determine the extent to which each feature in the customer data is influencing predictions.
What should you do?
A. Enable data drift monitoring for the model and its training dataset.
B. Score the model against some test data with known label values and use the results to calculate a confusion matrix.
C. Use the Hyperdrive library to test the model with multiple hyperparameter values.
D. Use the interpretability package to generate an explainer for the model.
E. Add tags to the model registration indicating the names of the features in the training dataset.
Answer: D
Explanation:
The Azure Machine Learning interpretability package (azureml-interpret) can generate an explainer for a trained model. An explainer computes global feature importance (the extent to which each feature influences predictions across the dataset) and local feature importance (the influence of each feature on an individual prediction). Global feature importance is exactly what is needed here: it reveals whether a feature such as an applicant's location is driving the model's loan decisions. The explainer is not limited to the training data; you can also compute explanations against separate test data.
Reference: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl
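The following is a minimal sketch of option D using the TabularExplainer from the interpret-community package (installed as part of azureml-interpret). The synthetic dataset, model, feature names, and class labels are illustrative placeholders, not part of the original question; a real workflow would use the registered model and the bank's customer data.

```python
# Sketch: generating an explainer to measure per-feature influence on predictions.
# All data and names below are hypothetical stand-ins for the loan scenario.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from interpret.ext.blackbox import TabularExplainer

# Stand-in applicant features; "postal_code" represents the location feature
# the regulations are concerned with.
feature_names = ["income", "credit_score", "loan_amount", "postal_code"]
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# TabularExplainer selects an appropriate SHAP-based explainer for the model type.
explainer = TabularExplainer(
    model,
    X_train,
    features=feature_names,
    classes=["will_not_repay", "will_repay"],
)

# Global explanation: how much each feature influences predictions overall.
global_explanation = explainer.explain_global(X_test)
print(global_explanation.get_feature_importance_dict())
```

If the printed importance dictionary showed "postal_code" with a large importance value, that would flag a potential regulatory problem worth investigating before deployment.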