Google Professional Machine Learning Engineer Online Training
The questions for the Professional Machine Learning Engineer exam were last updated on Feb 19, 2025.
- Exam Code: Professional Machine Learning Engineer
- Exam Name: Google Professional Machine Learning Engineer
- Certification Provider: Google
- Latest update: Feb 19, 2025
Your data science team is training a PyTorch model for image classification based on a pre-trained ResNet model. You need to perform hyperparameter tuning to optimize for several parameters.
What should you do?
- A . Convert the model to a Keras model, and run a Keras Tuner job.
- B . Run a hyperparameter tuning job on AI Platform using custom containers.
- C . Create a Kubeflow Pipelines instance, and run a hyperparameter tuning job on Katib.
- D . Convert the model to a TensorFlow model, and run a hyperparameter tuning job on AI Platform.
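If you went with option B, the training code inside the custom container would need to report its objective metric back to the tuning service. A minimal sketch, assuming the cloudml-hypertune helper package and illustrative flag names such as `--lr` and `--momentum`:

```python
# Sketch of the metric-reporting side of a custom-container hyperparameter
# tuning job. Flag names and the metric tag are illustrative placeholders.
import argparse

import hypertune  # provided by the cloudml-hypertune package
import torch
import torchvision


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.01)
    parser.add_argument("--momentum", type=float, default=0.9)
    args = parser.parse_args()

    # Fine-tune a pre-trained ResNet with the hyperparameters that the tuning
    # service passes in as command-line flags.
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    optimizer = torch.optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)

    # ... training loop producing a validation accuracy ...
    val_accuracy = 0.0  # placeholder for the real evaluation result

    # Report the objective metric back to the tuning service.
    hpt = hypertune.HyperTune()
    hpt.report_hyperparameter_tuning_metric(
        hyperparameter_metric_tag="accuracy",
        metric_value=val_accuracy,
        global_step=1,
    )


if __name__ == "__main__":
    main()
```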
You have a large corpus of written support cases that can be classified into 3 separate categories: Technical Support, Billing Support, or Other Issues. You need to quickly build, test, and deploy a service that will automatically classify future written requests into one of the categories.
How should you configure the pipeline?
- A . Use the Cloud Natural Language API to obtain metadata to classify the incoming cases.
- B . Use AutoML Natural Language to build and test a classifier. Deploy the model as a REST API.
- C . Use BigQuery ML to build and test a logistic regression model to classify incoming requests. Use BigQuery ML to perform inference.
- D . Create a TensorFlow model using Google’s BERT pre-trained model. Build and test a classifier, and deploy the model using Vertex AI.
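If you chose option B, a deployed AutoML Natural Language classifier can be called as a REST API, for example through the client library. A hedged sketch with placeholder project and model IDs:

```python
# Sketch of calling a deployed AutoML Natural Language text classifier.
# The project ID, region, and model ID below are placeholders.
from google.cloud import automl

project_id = "my-project"     # placeholder
model_id = "TCN1234567890"    # placeholder AutoML model ID

client = automl.PredictionServiceClient()
model_path = automl.AutoMlClient.model_path(project_id, "us-central1", model_id)

# Wrap the incoming support case text in the request payload.
text_snippet = automl.TextSnippet(
    content="I was double-billed last month.", mime_type="text/plain"
)
response = client.predict(
    name=model_path, payload=automl.ExamplePayload(text_snippet=text_snippet)
)

# Each result maps a category (e.g. Billing Support) to a confidence score.
for result in response.payload:
    print(result.display_name, result.classification.score)
```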
You need to quickly build and train a model to predict the sentiment of customer reviews with custom categories without writing code. You do not have enough data to train a model from scratch. The resulting model should have high predictive performance.
Which service should you use?
- A . AutoML Natural Language
- B . Cloud Natural Language API
- C . AI Hub pre-made Jupyter Notebooks
- D . AI Platform Training built-in algorithms
You need to build an ML model for a social media application to predict whether a user’s submitted profile photo meets the requirements. The application will inform the user if the picture meets the requirements.
How should you build a model to ensure that the application does not falsely accept a non-compliant picture?
- A . Use AutoML to optimize the model’s recall in order to minimize false negatives.
- B . Use AutoML to optimize the model’s F1 score in order to balance the accuracy of false positives and false negatives.
- C . Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that meet the profile photo requirements.
- D . Use Vertex AI Workbench user-managed notebooks to build a custom model that has three times as many examples of pictures that do not meet the profile photo requirements.
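A falsely accepted non-compliant photo is a false positive for the “meets requirements” class, so the choice of optimization metric matters. A small illustration using scikit-learn (not part of the question) of how precision and recall respond to that error, assuming the label 1 means the photo is compliant:

```python
# Label convention assumed here: 1 = photo meets the requirements.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 1, 0, 0, 0, 1]  # ground truth
y_pred = [1, 1, 1, 0, 0, 0]  # model decisions; the third item is a false accept

# Precision on the "meets requirements" class penalizes false accepts
# (non-compliant photos predicted as compliant).
print("precision:", precision_score(y_true, y_pred))
# Recall penalizes false rejects (compliant photos predicted as non-compliant).
print("recall:", recall_score(y_true, y_pred))
```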
You lead a data science team at a large international corporation. Most of the models your team trains are large-scale models using high-level TensorFlow APIs on AI Platform with GPUs. Your team usually takes a few weeks or months to iterate on a new version of a model. You were recently asked to review your team’s spending.
How should you reduce your Google Cloud compute costs without impacting the model’s performance?
- A . Use AI Platform to run distributed training jobs with checkpoints.
- B . Use AI Platform to run distributed training jobs without checkpoints.
- C . Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs with checkpoints.
- D . Migrate to training with Kubeflow on Google Kubernetes Engine, and use preemptible VMs without checkpoints.
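Options C and D differ only in checkpointing; with preemptible VMs, checkpoints are what let an interrupted job resume instead of restarting from scratch. A minimal Keras sketch, with placeholder GCS paths and stand-in data:

```python
# Sketch of checkpointing so that training on preemptible VMs can resume
# after a preemption. Bucket paths and the model itself are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    # Writes training state to GCS and restores it when the job restarts.
    tf.keras.callbacks.BackupAndRestore(backup_dir="gs://my-bucket/backup"),
    # Keeps per-epoch model checkpoints as well.
    tf.keras.callbacks.ModelCheckpoint(filepath="gs://my-bucket/ckpt-{epoch}"),
]

# x and y stand in for the real training data pipeline.
x, y = np.random.rand(100, 10), np.random.rand(100, 1)
model.fit(x, y, epochs=5, callbacks=callbacks)
```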
You have deployed a model on Vertex AI for real-time inference. During an online prediction request, you get an “Out of Memory” error.
What should you do?
- A . Use batch prediction mode instead of online mode.
- B . Send the request again with a smaller batch of instances.
- C . Use base64 to encode your data before using it for prediction.
- D . Apply for a quota increase for the number of prediction requests.
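If you went with option B, the same workload can be split into smaller requests against the deployed endpoint. A hedged sketch using the Vertex AI SDK, with a placeholder endpoint ID, payload, and chunk size:

```python
# Sketch of splitting a large online prediction request into smaller chunks.
from google.cloud import aiplatform

endpoint = aiplatform.Endpoint(
    "projects/my-project/locations/us-central1/endpoints/1234567890"  # placeholder
)

instances = [{"feature": i} for i in range(1000)]  # stand-in payload
chunk_size = 50  # tune downward until requests fit in memory

predictions = []
for start in range(0, len(instances), chunk_size):
    response = endpoint.predict(instances=instances[start:start + chunk_size])
    predictions.extend(response.predictions)
```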
You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex Explainable AI.
What should you do?
- A . Train local surrogate models to explain individual predictions.
- B . Configure sampled Shapley explanations on Vertex Explainable AI.
- C . Configure integrated gradients explanations on Vertex Explainable AI.
- D . Measure the effect of each feature as the weight of the feature multiplied by the feature value.
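If option B were chosen, sampled Shapley attributions are configured when the model is uploaded to Vertex AI. A hedged sketch; the resource names, container image, path count, and metadata fields below are placeholders, not a definitive configuration:

```python
# Sketch of configuring sampled Shapley explanations for an uploaded model.
from google.cloud import aiplatform
from google.cloud.aiplatform.explain import ExplanationMetadata, ExplanationParameters

# Sampled Shapley works for non-differentiable models such as tree ensembles.
parameters = ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 10}}  # placeholder path count
)
metadata = ExplanationMetadata(
    inputs={"features": {}},              # map each model input to its metadata
    outputs={"churn_probability": {}},    # placeholder output name
)

model = aiplatform.Model.upload(
    display_name="churn-ensemble",
    artifact_uri="gs://my-bucket/model/",  # placeholder
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-8:latest",
    explanation_parameters=parameters,
    explanation_metadata=metadata,
)
```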
You need to execute a batch prediction on 100 million records in a BigQuery table with a custom TensorFlow DNN regressor model, and then store the predicted results in a BigQuery table. You want to minimize the effort required to build this inference pipeline.
What should you do?
- A . Import the TensorFlow model with BigQuery ML, and run the ml.predict function.
- B . Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
- C . Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery.
- D . Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.
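If you went with option A, the SavedModel is registered in BigQuery ML and ML.PREDICT does the batch scoring entirely inside BigQuery. A sketch issued through the Python client, with placeholder dataset, table, and GCS paths:

```python
# Sketch of importing a TensorFlow SavedModel into BigQuery ML and running
# batch prediction with ML.PREDICT. All names and paths are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Register the SavedModel as a BigQuery ML model.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.dnn_regressor`
    OPTIONS (MODEL_TYPE = 'TENSORFLOW',
             MODEL_PATH = 'gs://my-bucket/saved_model/*')
""").result()

# Score the input table and write the predictions to a new table.
client.query("""
    CREATE OR REPLACE TABLE `my_dataset.predictions` AS
    SELECT *
    FROM ML.PREDICT(MODEL `my_dataset.dnn_regressor`,
                    TABLE `my_dataset.input_records`)
""").result()
```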
You are creating a deep neural network classification model using a dataset with categorical input values. Certain columns have a cardinality greater than 10,000 unique values.
How should you encode these categorical values as input into the model?
- A . Convert each categorical value into an integer value.
- B . Convert the categorical string data to one-hot hash buckets.
- C . Map the categorical variables into a vector of boolean values.
- D . Convert each categorical value into a run-length encoded string.
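If option B were chosen, hashing maps the 10,000+ raw categories into a fixed number of buckets, which are then one-hot encoded. A minimal Keras preprocessing sketch; the bucket count and example values are placeholders:

```python
# Sketch of one-hot hash-bucket encoding for a high-cardinality string column.
import tensorflow as tf

num_buckets = 5000  # placeholder; far fewer than the raw category count

encoder = tf.keras.Sequential([
    # Hash each string into one of num_buckets integer bins.
    tf.keras.layers.Hashing(num_bins=num_buckets),
    # One-hot encode the bucket index into a fixed-width vector.
    tf.keras.layers.CategoryEncoding(num_tokens=num_buckets, output_mode="one_hot"),
])

example = tf.constant(["product_id_948218", "product_id_112"])
print(encoder(example).shape)  # (2, 5000) one-hot hash-bucket vectors
```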
You need to train a natural language model to perform text classification on product descriptions that contain millions of examples and 100,000 unique words. You want to preprocess the words individually so that they can be fed into a recurrent neural network.
What should you do?
- A . Create a one-hot encoding of words, and feed the encodings into your model.
- B . Identify word embeddings from a pre-trained model, and use the embeddings in your model.
- C . Sort the words by frequency of occurrence, and use the frequencies as the encodings in your model.
- D . Assign a numerical value to each word from 1 to 100,000 and feed the values as inputs in your model.
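If you went with option B, a pre-trained embedding matrix can initialize a frozen Embedding layer that feeds the recurrent network. A hedged sketch; `pretrained_vectors` and the output size stand in for values that would come from the real pre-trained model and label set:

```python
# Sketch of reusing pre-trained word embeddings as input to an RNN classifier.
import numpy as np
import tensorflow as tf

vocab_size = 100_000
embedding_dim = 300
# Placeholder: in practice these weights would be loaded from a pre-trained
# model rather than generated randomly.
pretrained_vectors = np.random.rand(vocab_size, embedding_dim)

model = tf.keras.Sequential([
    # Frozen embedding layer initialized from the pre-trained vectors.
    tf.keras.layers.Embedding(
        input_dim=vocab_size,
        output_dim=embedding_dim,
        embeddings_initializer=tf.keras.initializers.Constant(pretrained_vectors),
        trainable=False,
    ),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(3, activation="softmax"),  # placeholder class count
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```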