You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error.
What should you do?
A. Use batch prediction mode instead of online mode.
B. Send the request again with a smaller batch of instances.
C. Use base64 to encode your data before using it for prediction.
D. Apply for a quota increase for the number of prediction requests.
Answer: B

Explanation: An "Out of Memory" error on an online prediction request usually means the request payload, i.e., the batch of instances sent in a single call, is too large for the serving container's memory. Retrying with fewer instances per request reduces memory pressure and is the direct fix. Switching to batch prediction (A) changes the serving mode rather than solving the real-time use case; base64 encoding (C) actually inflates the payload by roughly 33% and is only needed for binary inputs; a quota increase (D) governs request rates, not the memory available to a single request.
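As a minimal sketch of option B, the client-side fix is to split a large instance list into smaller requests. This assumes the google-cloud-aiplatform Python SDK; the project ID, endpoint ID, and chunk size below are hypothetical placeholders, not values from the question.

```python
# Sketch: chunked online prediction to avoid out-of-memory errors.
# Assumes the google-cloud-aiplatform SDK; project/endpoint IDs are hypothetical.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical IDs

# Hypothetical deployed endpoint.
endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"
)

def predict_in_chunks(instances, chunk_size=32):
    """Send instances in smaller batches instead of one oversized request."""
    predictions = []
    for start in range(0, len(instances), chunk_size):
        response = endpoint.predict(instances=instances[start:start + chunk_size])
        predictions.extend(response.predictions)
    return predictions
```

If a given chunk size still triggers the error, lowering it further trades a few extra round trips for staying within the serving container's memory.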