You need to run batch prediction on 100 million records in a BigQuery table using a custom TensorFlow DNN regressor model, and then store the predictions in a BigQuery table. You want to minimize the effort required to build this inference pipeline.

What should you do?
A. Import the TensorFlow model with BigQuery ML, and run the ML.PREDICT function.
B. Use the TensorFlow BigQuery reader to load the data, and use the BigQuery API to write the results to BigQuery.
C. Create a Dataflow pipeline to convert the data in BigQuery to TFRecords. Run a batch inference on Vertex AI Prediction, and write the results to BigQuery.
D. Load the TensorFlow SavedModel in a Dataflow pipeline. Use the BigQuery I/O connector with a custom function to perform the inference within the pipeline, and write the results to BigQuery.

Answer: A

Explanation: BigQuery ML can import a TensorFlow SavedModel directly and score it with the ML.PREDICT function, so the data never leaves BigQuery and no separate pipeline is needed. Options B, C, and D all require building and operating additional infrastructure (custom reader/writer code, a Dataflow pipeline plus Vertex AI batch prediction, or an in-pipeline inference step), which contradicts the goal of minimizing effort.
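As a rough illustration of option A, the sketch below uses the google-cloud-bigquery Python client to register a TensorFlow SavedModel with BigQuery ML and materialize predictions into a table. The project, dataset, table, and Cloud Storage path names are placeholders, not values from the question.

```python
# Hypothetical sketch: import a TensorFlow SavedModel into BigQuery ML and
# score a table in place with ML.PREDICT. All resource names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Register the exported SavedModel as a BigQuery ML model.
create_model = """
CREATE OR REPLACE MODEL `my-project.my_dataset.dnn_regressor`
OPTIONS (MODEL_TYPE = 'TENSORFLOW',
         MODEL_PATH = 'gs://my-bucket/dnn_regressor/saved_model/*')
"""
client.query(create_model).result()

# Score the source table and write the predictions to a new BigQuery table,
# so the 100M rows never leave BigQuery.
predict = """
CREATE OR REPLACE TABLE `my-project.my_dataset.predictions` AS
SELECT *
FROM ML.PREDICT(MODEL `my-project.my_dataset.dnn_regressor`,
                (SELECT * FROM `my-project.my_dataset.input_records`))
"""
client.query(predict).result()
```

The same two statements could be run directly in the BigQuery console; the Python wrapper is only one convenient way to execute them.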
