Which of the following metrics should a Machine Learning Specialist generally use to compare/evaluate machine learning classification models against each other?
A. Recall
B. Misclassification rate
C. Mean absolute percentage error (MAPE)
D. Area Under the ROC Curve (AUC)
Answer: D
Explanation:
Area Under the ROC Curve (AUC) measures the performance of a binary classifier across all possible classification thresholds. It is equivalent to the probability that the classifier ranks a randomly chosen positive example higher than a randomly chosen negative example. AUC is a good metric for comparing classification models because it does not depend on a single decision threshold and is relatively insensitive to class imbalance, and it captures both the sensitivity (true positive rate) and the specificity (true negative rate) of the model. By contrast, recall and misclassification rate are measured at one fixed threshold, and MAPE is a regression metric rather than a classification metric.
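As a minimal sketch of how two classifiers could be compared by AUC, the snippet below uses scikit-learn's roc_auc_score; the labels and predicted scores are illustrative placeholders, not data from the question.

# Comparing two binary classifiers by AUC (illustrative values only).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0]                            # ground-truth class labels
scores_model_a = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3]   # predicted probabilities from model A
scores_model_b = [0.5, 0.6, 0.2, 0.7, 0.4, 0.6, 0.8, 0.5]    # predicted probabilities from model B

auc_a = roc_auc_score(y_true, scores_model_a)
auc_b = roc_auc_score(y_true, scores_model_b)

print(f"Model A AUC: {auc_a:.3f}")
print(f"Model B AUC: {auc_b:.3f}")
# The model with the higher AUC ranks positive examples above negative ones
# more often, regardless of which decision threshold is later chosen.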