You are developing models to classify customer support emails. You created models with TensorFlow Estimators using small datasets on your on-premises system, but you now need to train the models using large datasets to ensure high performance. You will port your models to Google Cloud and want to minimize code refactoring and infrastructure overhead for easier migration from on-prem to cloud.
What should you do?
A. Use Vertex AI for distributed training
B. Create a cluster on Dataproc for training
C. Create a Managed Instance Group with autoscaling
D. Use Kubeflow Pipelines to train on a Google Kubernetes Engine cluster
Answer: A
Explanation:
Vertex AI is a fully managed service: you submit a training job and Google Cloud provisions the compute, so there is no infrastructure to set up or maintain, and existing TensorFlow Estimator code can be ported with minimal refactoring. Option D requires you to create and manage a GKE cluster yourself, which is exactly the infrastructure overhead the question asks you to minimize. Option B (Dataproc) targets Spark/Hadoop workloads rather than TensorFlow training, and option C (a Managed Instance Group) would leave you to build the training setup by hand.
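As an illustration of how little infrastructure work option A involves, an existing TensorFlow training script can be submitted to Vertex AI with a single `gcloud` command; no cluster is provisioned beforehand. This is a sketch: the project ID, region, display name, script name, and prebuilt container image below are placeholder assumptions, not values from the question.

```shell
# Submit an existing local TensorFlow script (train.py) as a Vertex AI
# custom training job. Vertex AI provisions the training VMs and tears
# them down when the job finishes -- no cluster for you to manage.
# NOTE: project, region, image URI, and script path are placeholders.
gcloud ai custom-jobs create \
  --project=my-project \
  --region=us-central1 \
  --display-name=email-classifier-training \
  --worker-pool-spec=machine-type=n1-standard-8,replica-count=1,executor-image-uri=us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest,local-package-path=.,script=train.py
```

Scaling up later (more replicas, GPUs, larger machine types) is a matter of changing the `--worker-pool-spec` flags, not the training code.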