You manage a team of data scientists who use a cloud-based backend system to submit training jobs.
This system has become very difficult to administer, and you want to use a managed service instead.
The data scientists you work with use many different frameworks, including Keras, PyTorch, Theano, scikit-learn, and custom libraries.
What should you do?
A. Use the AI Platform custom containers feature to receive training jobs using any framework.
B. Configure Kubeflow to run on Google Kubernetes Engine and receive training jobs through TFJob.
C. Create a library of VM images on Compute Engine, and publish these images to a centralized repository.
D. Set up the Slurm workload manager to receive jobs that can be scheduled to run on your cloud infrastructure.
Answer: A
Explanation:
AI Platform supports all of the frameworks mentioned, whereas Kubeflow is not a managed service on GCP. https://cloud.google.com/ai-platform/training/docs/getting-started-pytorch
https://cloud.google.com/ai-platform/training/docs/containers-overview#advantages_of_custom_containers
Use the ML framework of your choice. If you can’t find an AI Platform Training runtime version that supports the ML framework you want to use, then you can build a custom container that installs your chosen framework and use it to run jobs on AI Platform Training.
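For illustration, the custom-container workflow is roughly: package the training code and its framework into a Docker image, push it to Container Registry, and submit the job with that image URI. The following is a minimal sketch assuming a PyTorch training script at trainer/task.py; the project ID, image name, job name, and region are placeholders.

    # Dockerfile (hypothetical): bake the chosen framework and training code into an image
    FROM pytorch/pytorch:latest
    WORKDIR /root
    COPY trainer/ /root/trainer/
    ENTRYPOINT ["python", "-m", "trainer.task"]

    # Build and push the image, then submit the training job to AI Platform Training
    docker build -t gcr.io/my-project/pytorch-trainer:v1 .
    docker push gcr.io/my-project/pytorch-trainer:v1
    gcloud ai-platform jobs submit training pytorch_job_1 \
        --region us-central1 \
        --master-image-uri gcr.io/my-project/pytorch-trainer:v1

Because the framework lives inside the container image, the same submission path works for Keras, Theano, scikit-learn, or custom libraries.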