A company has developed an ML model for image classification. The company wants to deploy the model to production so that a web application can use the model.
The company needs to implement a solution to host the model and serve predictions without managing any of the underlying infrastructure.
Which solution will meet these requirements?
A. Use Amazon SageMaker Serverless Inference to deploy the model.
B. Use Amazon CloudFront to deploy the model.
C. Use Amazon API Gateway to host the model and serve predictions.
D. Use AWS Batch to host the model and serve predictions.
Answer: A
Explanation:
Amazon SageMaker Serverless Inference is a fully managed option for deploying machine learning models without managing the underlying infrastructure. It automatically provisions compute capacity, scales with request traffic, and serves predictions on demand, which makes it well suited for hosting the model and serving predictions to a web application with minimal operational overhead.
Why not the other options?
B. Amazon CloudFront is a content delivery network (CDN) for caching and distributing web content; it cannot host or run an ML model.
C. Amazon API Gateway creates and manages APIs that front other services; it does not host models or perform inference itself.
D. AWS Batch is designed for batch processing and job scheduling, not for hosting ML models or serving real-time predictions to web applications.
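As a rough illustration of the correct answer, the sketch below deploys an already-trained model artifact to a SageMaker Serverless Inference endpoint with boto3. The model name, container image URI, S3 path, and IAM role ARN are placeholders, and the memory/concurrency values are arbitrary assumptions; this is not an exam artifact, just one possible way to wire it up.

```python
# Minimal sketch: host a model on SageMaker Serverless Inference with boto3.
# All names, ARNs, and URIs below are placeholders.
import boto3

sm = boto3.client("sagemaker")

# Register the model artifact and its inference container.
sm.create_model(
    ModelName="image-classifier",
    PrimaryContainer={
        "Image": "<inference-container-image-uri>",
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
    },
    ExecutionRoleArn="<sagemaker-execution-role-arn>",
)

# Serverless endpoint config: SageMaker provisions and scales the compute
# automatically; you only pick memory size and max concurrency.
sm.create_endpoint_config(
    EndpointConfigName="image-classifier-serverless",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "image-classifier",
            "ServerlessConfig": {
                "MemorySizeInMB": 2048,
                "MaxConcurrency": 10,
            },
        }
    ],
)

sm.create_endpoint(
    EndpointName="image-classifier-endpoint",
    EndpointConfigName="image-classifier-serverless",
)

# The web application can then request real-time predictions.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="image-classifier-endpoint",
    ContentType="application/x-image",
    Body=open("cat.jpg", "rb").read(),
)
print(response["Body"].read())
```

Because the endpoint is serverless, there are no instances to size or patch; the company pays only for the compute used to serve each request.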