Google Professional Machine Learning Engineer Online Training
The questions for Professional Machine Learning Engineer were last updated on Feb 20, 2025.
- Exam Code: Professional Machine Learning Engineer
- Exam Name: Google Professional Machine Learning Engineer
- Certification Provider: Google
- Latest update: Feb 20, 2025
You work as an ML engineer at a social media company, and you are developing a visual filter for users’ profile photos. This requires you to train an ML model to detect bounding boxes around human faces. You want to use this filter in your company’s iOS-based mobile phone application. You want to minimize code development and want the model to be optimized for inference on mobile phones.
What should you do?
- A . Train a model using AutoML Vision and use the “export for Core ML” option.
- B . Train a model using AutoML Vision and use the “export for Coral” option.
- C . Train a model using AutoML Vision and use the “export for TensorFlow.js” option.
- D . Train a custom TensorFlow model and convert it to TensorFlow Lite (TFLite).
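Option D describes the conversion path from a custom TensorFlow model to TensorFlow Lite. A minimal sketch of that conversion is shown below; the tiny module stands in for a real face-detection network, and all shapes and file names are illustrative assumptions.

```python
import tensorflow as tf

class TinyDetector(tf.Module):
    """Stand-in for a trained face-detection model (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.random.normal([4, 4]))

    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return tf.matmul(x, self.w)

model = TinyDetector()
concrete_fn = model.__call__.get_concrete_function()

# Convert the model to a TFLite flatbuffer optimized for on-device inference.
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn], model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

# The resulting flatbuffer can be bundled into the iOS app and run with the
# TensorFlow Lite interpreter.
with open("detector.tflite", "wb") as f:
    f.write(tflite_model)
```

Note that AutoML Vision's "export for Core ML" option (A) avoids even this conversion code, which is why it fits the "minimize code development" requirement better than option D.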
You have been asked to build a model using a dataset that is stored in a medium-sized (~10 GB) BigQuery table. You need to quickly determine whether this data is suitable for model development. You want to create a one-time report that includes both informative visualizations of data distributions and more sophisticated statistical analyses to share with other ML engineers on your team. You require maximum flexibility to create your report.
What should you do?
- A . Use Vertex AI Workbench user-managed notebooks to generate the report.
- B . Use Google Data Studio to create the report.
- C . Use the output from TensorFlow Data Validation on Dataflow to generate the report.
- D . Use Dataprep to create the report.
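The flexibility of option A comes from mixing arbitrary pandas/statistics code with inline plots in one notebook. The sketch below uses a synthetic frame in place of the real BigQuery result; the table and column names are placeholders.

```python
import numpy as np
import pandas as pd

# In a Vertex AI Workbench notebook you would typically load the table with:
#   df = bigquery.Client().query("SELECT * FROM `project.dataset.table`").to_dataframe()
# A synthetic frame stands in for that result here (columns are made up).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature_a": rng.normal(50, 10, 1_000),
    "feature_b": rng.exponential(2.0, 1_000),
    "label": rng.integers(0, 2, 1_000),
})

summary = df.describe()       # per-column distribution statistics
correlations = df.corr()      # pairwise Pearson correlations
missing = df.isna().mean()    # fraction of missing values per column

print(summary)
print(correlations)

# In the notebook, visualizations sit alongside the statistics, e.g.:
#   df.hist(bins=30)          # requires matplotlib
```

Because the notebook is free-form, you can go beyond this to any statistical test or custom plot your team needs, which is the flexibility the question asks for.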
You work on an operations team at an international company that manages a large fleet of on-premises servers located in a few data centers around the world. Your team collects monitoring data from the servers, including CPU and memory consumption. When an incident occurs on a server, your team is responsible for fixing it. Incident data has not been properly labeled yet. Your management team wants you to build a predictive maintenance solution that uses monitoring data from the servers to detect potential failures and then alerts the service desk team.
What should you do first?
- A . Train a time-series model to predict the machines’ performance values. Configure an alert if a machine’s actual performance values significantly differ from the predicted performance values.
- B . Implement a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Train a model to predict anomalies based on this labeled dataset.
- C . Develop a simple heuristic (e.g., based on z-score) to label the machines’ historical performance data. Test this heuristic in a production environment.
- D . Hire a team of qualified analysts to review and label the machines’ historical performance data. Train a model based on this manually labeled dataset.
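Option B's z-score heuristic is a standard way to bootstrap labels from unlabeled metrics. A minimal sketch, with thresholds, column names, and the synthetic data all being illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Synthetic server metrics: mostly normal readings plus a few injected spikes
# standing in for real historical monitoring data.
rng = np.random.default_rng(42)
metrics = pd.DataFrame({
    "cpu_pct": np.concatenate([rng.normal(40, 5, 990), rng.normal(95, 2, 10)]),
    "mem_pct": np.concatenate([rng.normal(55, 8, 990), rng.normal(97, 1, 10)]),
})

# z-score each metric against its own mean and standard deviation.
z = (metrics - metrics.mean()) / metrics.std()

# Heuristic label: anomalous if any metric is more than 3 standard
# deviations from its mean (threshold is an assumption to tune).
metrics["anomaly"] = (z.abs() > 3).any(axis=1).astype(int)

print(metrics["anomaly"].sum(), "records labeled anomalous")
```

The weak labels produced this way then serve as training data for a supervised anomaly model, which is the second half of option B.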
You are developing an ML model that uses sliced frames from a video feed and creates bounding boxes around specific objects. You want to automate the following steps in your training pipeline: ingestion and preprocessing of data in Cloud Storage, followed by training and hyperparameter tuning of the object detection model using Vertex AI jobs, and finally deploying the model to an endpoint. You want to orchestrate the entire pipeline with minimal cluster management.
What approach should you use?
- A . Use Kubeflow Pipelines on Google Kubernetes Engine.
- B . Use Vertex AI Pipelines with TensorFlow Extended (TFX) SDK.
- C . Use Vertex AI Pipelines with Kubeflow Pipelines SDK.
- D . Use Cloud Composer for the orchestration.
You are training an object detection machine learning model on a dataset that consists of three million X-ray images, each roughly 2 GB in size. You are using Vertex AI Training to run a custom training application on a Compute Engine instance with 32 cores, 128 GB of RAM, and one NVIDIA P100 GPU. You notice that model training is taking a very long time. You want to decrease training time without sacrificing model performance.
What should you do?
- A . Increase the instance memory to 512 GB and increase the batch size.
- B . Replace the NVIDIA P100 GPU with a v3-32 TPU in the training job.
- C . Enable early stopping in your Vertex AI Training job.
- D . Use the tf.distribute.Strategy API and run a distributed training job.
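Option D is implemented by wrapping model construction in a `tf.distribute.Strategy` scope so gradients are synchronized across all available GPUs. A minimal sketch (the model architecture and batch size are arbitrary stand-ins; on a CPU-only machine the strategy falls back to a single replica):

```python
import tensorflow as tf

# MirroredStrategy replicates the model across all local GPUs.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Scale the global batch size with the replica count so each device
# keeps the same per-device batch size.
global_batch = 64 * strategy.num_replicas_in_sync
```

With multiple workers, `MultiWorkerMirroredStrategy` extends the same pattern across machines, which Vertex AI Training supports through its multi-replica job configuration.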
You are a data scientist at an industrial equipment manufacturing company. You are developing a regression model to estimate the power consumption in the company’s manufacturing plants based on sensor data collected from all of the plants. The sensors collect tens of millions of records every day. You need to schedule daily training runs for your model that use all the data collected up to the current date. You want your model to scale smoothly and require minimal development work.
What should you do?
- A . Develop a custom TensorFlow regression model, and optimize it using Vertex AI Training.
- B . Develop a regression model using BigQuery ML.
- C . Develop a custom scikit-learn regression model, and optimize it using Vertex AI Training.
- D . Develop a custom PyTorch regression model, and optimize it using Vertex AI Training.
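Option B keeps training where the data already lives: BigQuery ML trains the regression with a single SQL statement, so daily retraining is just a scheduled query. A hedged sketch follows; the project, dataset, and column names are placeholders.

```python
# BigQuery ML CREATE MODEL statement for a linear regression over all
# sensor readings up to the current date (identifiers are placeholders).
create_model_sql = """
CREATE OR REPLACE MODEL `my_project.sensors.power_model`
OPTIONS (model_type = 'LINEAR_REG', input_label_cols = ['power_kwh']) AS
SELECT
  sensor_reading_1,
  sensor_reading_2,
  plant_id,
  power_kwh
FROM `my_project.sensors.readings`
WHERE reading_date <= CURRENT_DATE()
"""

# To execute it (requires the google-cloud-bigquery client and credentials):
#   from google.cloud import bigquery
#   bigquery.Client().query(create_model_sql).result()
print(create_model_sql.strip().splitlines()[0])
```

Because BigQuery handles storage and compute scaling, this meets both requirements of the question: smooth scaling over tens of millions of daily records and minimal development work compared with the custom-model options.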