Google Professional Machine Learning Engineer Online Training
The questions for Professional Machine Learning Engineer were last updated on Feb 17, 2025.
- Exam Code: Professional Machine Learning Engineer
- Exam Name: Google Professional Machine Learning Engineer
- Certification Provider: Google
- Latest update: Feb 17, 2025
You have trained a text classification model in TensorFlow using AI Platform. You want to use the trained model for batch predictions on text data stored in BigQuery while minimizing computational overhead.
What should you do?
- A . Export the model to BigQuery ML.
- B . Deploy and version the model on AI Platform.
- C . Use Dataflow with the SavedModel to read the data from BigQuery.
- D . Submit a batch prediction job on AI Platform that points to the model location in Cloud Storage.
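The intent of option D is that a batch prediction job reads input directly from Cloud Storage and writes results back without keeping a serving deployment running. A minimal sketch, assuming the legacy AI Platform Training and Prediction API (`ml` v1); the project, bucket, and model paths are placeholders, and the BigQuery text data is assumed to have been exported to Cloud Storage first:

```python
from googleapiclient import discovery

# Hypothetical project, bucket, and model paths for illustration.
project_id = "my-project"
job = {
    "jobId": "text_clf_batch_predict",
    "predictionInput": {
        "dataFormat": "TEXT",                          # newline-delimited text instances
        "inputPaths": ["gs://my-bucket/bq_export/*"],  # text data exported from BigQuery
        "outputPath": "gs://my-bucket/predictions/",
        "region": "us-central1",
        "uri": "gs://my-bucket/saved_model/",          # SavedModel location in Cloud Storage
        "runtimeVersion": "2.11",
    },
}

# Submit the batch prediction job; it runs to completion, then releases resources.
ml = discovery.build("ml", "v1")
ml.projects().jobs().create(parent=f"projects/{project_id}", body=job).execute()
```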
You need to build classification workflows over several structured datasets currently stored in BigQuery. Because you will be performing the classification several times, you want to complete the following steps without writing code: exploratory data analysis, feature selection, model building, training, hyperparameter tuning, and serving.
What should you do?
- A . Configure AutoML Tables to perform the classification task.
- B . Run a BigQuery ML task to perform logistic regression for the classification.
- C . Use AI Platform Notebooks to run the classification model with the pandas library.
- D . Use AI Platform to run the classification model job configured for hyperparameter tuning.
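For contrast with option A's fully codeless workflow, option B would still require writing SQL for every dataset. A minimal sketch of what that SQL looks like, assuming the google-cloud-bigquery client; the dataset and table names are hypothetical:

```python
from google.cloud import bigquery

# Option B still means writing and maintaining SQL per dataset, which is why
# AutoML Tables (option A) better satisfies the "without writing code" requirement.
client = bigquery.Client()
client.query(
    """
    CREATE OR REPLACE MODEL `my_dataset.clf_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['label']) AS
    SELECT * FROM `my_dataset.training_table`
    """
).result()  # wait for model training to finish
```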
You recently joined an enterprise-scale company that has thousands of datasets. You know that there are accurate descriptions for each table in BigQuery, and you are searching for the proper BigQuery table to use for a model you are building on AI Platform.
How should you find the data that you need?
- A . Use Data Catalog to search the BigQuery datasets by using keywords in the table description.
- B . Tag each of your model and version resources on AI Platform with the name of the BigQuery table that was used for training.
- C . Maintain a lookup table in BigQuery that maps the table descriptions to the table ID. Query the lookup table to find the correct table ID for the data that you need.
- D . Execute a query in BigQuery to retrieve all the existing table names in your project using the INFORMATION_SCHEMA metadata tables that are native to BigQuery. Use the result to find the table that you need.
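A minimal sketch of option A, assuming the google-cloud-datacatalog client library; the project ID and search keyword are placeholders, and the `description:` predicate is assumed to restrict matching to the table description field:

```python
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()
# Limit the search to the projects whose BigQuery datasets you can use.
scope = datacatalog_v1.SearchCatalogRequest.Scope(include_project_ids=["my-project"])

# "type=table" narrows results to tables; "description:" matches the keyword
# against the table descriptions your company maintains.
for result in client.search_catalog(scope=scope, query="type=table description:churn"):
    print(result.relative_resource_name)
```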
You are working on a classification problem with time series data and achieved an area under the receiver operating characteristic curve (AUC ROC) value of 99% for training data after just a few experiments. You haven’t explored using any sophisticated algorithms or spent any time on hyperparameter tuning.
What should your next step be to identify and fix the problem?
- A . Address the model overfitting by using a less complex algorithm.
- B . Address data leakage by applying nested cross-validation during model training.
- C . Address data leakage by removing features highly correlated with the target value.
- D . Address the model overfitting by tuning the hyperparameters to reduce the AUC ROC value.
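A minimal sketch of option B on synthetic data, using scikit-learn's TimeSeriesSplit so that validation folds never precede their training data; with a leaky setup, an honest nested-CV score far below the 99% training AUC would confirm the problem:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit, cross_val_score

# Synthetic stand-in for the time series features and binary labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 10)), rng.integers(0, 2, size=500)

inner_cv = TimeSeriesSplit(n_splits=3)  # hyperparameter selection
outer_cv = TimeSeriesSplit(n_splits=5)  # unbiased performance estimate

# Nested CV: the inner loop tunes C, the outer loop scores the tuned model
# on folds that are strictly later in time than their training data.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0]},
    cv=inner_cv,
    scoring="roc_auc",
)
scores = cross_val_score(search, X, y, cv=outer_cv, scoring="roc_auc")
print(scores.mean())
```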
You work for an online travel agency that also sells advertising placements on its website to other companies.
You have been asked to predict the most relevant web banner that a user should see next. Security is important to your company. The model latency requirements are 300ms@p99, the inventory is thousands of web banners, and your exploratory analysis has shown that navigation context is a good predictor. You want to implement the simplest solution.
How should you configure the prediction pipeline?
- A . Embed the client on the website, and then deploy the model on AI Platform Prediction.
- B . Embed the client on the website, deploy the gateway on App Engine, and then deploy the model on AI Platform Prediction.
- C . Embed the client on the website, deploy the gateway on App Engine, deploy the database on Cloud Bigtable for writing and for reading the user’s navigation context, and then deploy the model on AI Platform Prediction.
- D . Embed the client on the website, deploy the gateway on App Engine, deploy the database on Memorystore for writing and for reading the user’s navigation context, and then deploy the model on Google Kubernetes Engine.
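A minimal sketch of the gateway's call in option B, assuming the legacy AI Platform Prediction REST API; the project and model names are placeholders:

```python
from googleapiclient import discovery

# Build a client for the legacy AI Platform Prediction service.
ml = discovery.build("ml", "v1")
name = "projects/my-project/models/banner_ranker"

def predict_banner(navigation_context: dict) -> dict:
    # Online prediction for a single small instance, keeping the gateway's
    # round trip comfortably inside the 300ms@p99 budget.
    response = ml.projects().predict(
        name=name, body={"instances": [navigation_context]}
    ).execute()
    return response["predictions"][0]
```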
Your team is building a convolutional neural network (CNN)-based architecture from scratch. The preliminary experiments running on your on-premises CPU-only infrastructure were encouraging, but convergence was slow. You have been asked to speed up model training to reduce time-to-market. You want to experiment with virtual machines (VMs) on Google Cloud to leverage more powerful hardware. Your code does not include any manual device placement and has not been wrapped in the Estimator model-level abstraction.
Which environment should you train your model on?
- A . A VM on Compute Engine and 1 TPU with all dependencies installed manually.
- B . A VM on Compute Engine and 8 GPUs with all dependencies installed manually.
- C . A Deep Learning VM with an n1-standard-2 machine and 1 GPU with all libraries pre-installed.
- D . A Deep Learning VM with a more powerful e2-highcpu-16 CPU machine with all libraries pre-installed.
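A minimal sketch of provisioning option C, assuming the google-cloud-compute client library; the project, zone, and GPU type are placeholders. The Deep Learning VM image family is assumed to ship with TensorFlow, CUDA, and supporting libraries pre-installed:

```python
from google.cloud import compute_v1

# Hypothetical project and zone for illustration.
project, zone = "my-project", "us-central1-a"

instance = compute_v1.Instance(
    name="dlvm-training",
    machine_type=f"zones/{zone}/machineTypes/n1-standard-2",
    disks=[
        compute_v1.AttachedDisk(
            boot=True,
            auto_delete=True,
            initialize_params=compute_v1.AttachedDiskInitializeParams(
                # Deep Learning VM image family with TensorFlow pre-installed.
                source_image=(
                    "projects/deeplearning-platform-release/"
                    "global/images/family/tf2-latest-gpu"
                ),
            ),
        )
    ],
    guest_accelerators=[
        compute_v1.AcceleratorConfig(
            accelerator_count=1,
            accelerator_type=f"zones/{zone}/acceleratorTypes/nvidia-tesla-t4",
        )
    ],
    # GPU instances cannot live-migrate, so they must stop for host maintenance.
    scheduling=compute_v1.Scheduling(on_host_maintenance="TERMINATE"),
    network_interfaces=[
        compute_v1.NetworkInterface(network="global/networks/default")
    ],
    metadata=compute_v1.Metadata(
        items=[compute_v1.Items(key="install-nvidia-driver", value="True")]
    ),
)

operation = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
operation.result()  # block until the VM is created
```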