You are a data scientist at a financial services company tasked with deploying a lightweight machine learning model that predicts creditworthiness based on a customer’s transaction history. The model needs to provide real-time predictions with minimal latency, and the traffic pattern is unpredictable, with occasional spikes during business hours. The company is cost-conscious and prefers a serverless architecture to minimize infrastructure management overhead.

Which approach is the MOST SUITABLE for deploying this solution, and why?
A. Deploy the model directly within AWS Lambda as a function, and expose it through an API Gateway endpoint, allowing the function to scale automatically with traffic and provide real-time predictions
B. Deploy the model as a SageMaker endpoint for real-time inference, and configure AWS Lambda to preprocess incoming requests before sending them to the SageMaker endpoint for prediction
C. Deploy the model using Amazon ECS (Elastic Container Service) and configure an AWS Lambda function to trigger the ECS service on-demand, ensuring that the model is only running during peak traffic periods
D. Use an Amazon EC2 instance to host the model, with AWS Lambda functions handling the communication between the API Gateway and the EC2 instance for prediction requests

Answer: A

Explanation:

Correct option:

Deploy the model directly within AWS Lambda as a function, and expose it through an API Gateway endpoint, allowing the function to scale automatically with traffic and provide real-time predictions

Deploying the model within AWS Lambda and exposing it through an API Gateway endpoint is ideal for lightweight, serverless, real-time inference. Lambda's automatic scaling and pay-per-use pricing align well with the unpredictable traffic pattern and the need for cost efficiency, with no infrastructure to manage.
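For illustration only, here is a minimal sketch of option A, assuming a scikit-learn model artifact named model.joblib bundled in the deployment package (or a Lambda layer) and invoked through an API Gateway proxy integration; the feature names and scoring logic are hypothetical:

```python
import json

import joblib  # scikit-learn and joblib must be bundled with the function or a layer

# Hypothetical artifact packaged with the function; loading at module scope
# lets warm invocations reuse the model instead of reloading it on each request.
MODEL = joblib.load("model.joblib")

def lambda_handler(event, context):
    """Handle an API Gateway proxy request and return a real-time prediction."""
    body = json.loads(event.get("body") or "{}")
    # Hypothetical features derived from the customer's transaction history
    features = [[
        body["avg_monthly_spend"],
        body["num_transactions"],
        body["days_since_last_payment"],
    ]]
    score = MODEL.predict_proba(features)[0][1]  # probability of "creditworthy"
    return {
        "statusCode": 200,
        "body": json.dumps({"creditworthiness_score": float(score)}),
    }
```

Because Lambda bills per invocation and scales function instances automatically, this layout fits the spiky, cost-sensitive requirement without any servers to provision.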

Incorrect options:

Deploy the model as a SageMaker endpoint for real-time inference, and configure AWS Lambda to preprocess incoming requests before sending them to the SageMaker endpoint for prediction – While deploying the model as a SageMaker endpoint is suitable for more complex models requiring managed infrastructure, this approach might be overkill for a lightweight model, especially if you want to minimize costs and management overhead. Lambda functions can serve the model directly in a more cost-effective manner.
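For contrast, a minimal sketch of the option B pattern, where Lambda only preprocesses the request and delegates inference to a SageMaker real-time endpoint; the endpoint name, environment variable, and CSV payload format are all assumptions:

```python
import json
import os

import boto3

runtime = boto3.client("sagemaker-runtime")
# Hypothetical endpoint name, injected via the function's configuration
ENDPOINT_NAME = os.environ.get("SM_ENDPOINT_NAME", "creditworthiness-endpoint")

def lambda_handler(event, context):
    """Preprocess the request, then call the SageMaker endpoint for the prediction."""
    body = json.loads(event.get("body") or "{}")
    # Assumed preprocessing: serialize the features as the CSV row the endpoint expects
    payload = ",".join(
        str(body[k]) for k in ("avg_monthly_spend", "num_transactions")
    )
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="text/csv",
        Body=payload,
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```

Note the extra moving parts relative to option A: an always-on, separately billed SageMaker endpoint plus the Lambda in front of it, which is exactly the cost and management overhead the explanation warns about.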

Use an Amazon EC2 instance to host the model, with AWS Lambda functions handling the communication between the API Gateway and the EC2 instance for prediction requests – Hosting the model on an Amazon EC2 instance with Lambda managing communication adds unnecessary complexity and overhead. EC2-based deployments require ongoing management (patching, scaling, capacity planning) and bill for idle time, so they are not cost-effective for a spiky, serverless-oriented, real-time use case.

Deploy the model using Amazon ECS (Elastic Container Service) and configure an AWS Lambda function to trigger the ECS service on-demand, ensuring that the model is only running during peak traffic periods – Using Amazon ECS triggered by AWS Lambda adds complexity and may not provide the same level of real-time responsiveness as deploying the model directly in Lambda, since containers must be started before they can serve requests.

Reference: https://aws.amazon.com/blogs/compute/deploying-machine-learning-models-with-serverless-templates/
