You are training a TensorFlow model on a structured dataset with 100 billion records stored in several CSV files. You need to improve the input/output (I/O) execution performance.
What should you do?
A. Load the data into BigQuery and read the data from BigQuery.
B. Load the data into Cloud Bigtable, and read the data from Bigtable.
C. Convert the CSV files into shards of TFRecords, and store the data in Cloud Storage.
D. Convert the CSV files into shards of TFRecords, and store the data in the Hadoop Distributed File System (HDFS).
Answer: C
Explanation:
Sharded TFRecord files in Cloud Storage are the recommended input format for high-throughput TensorFlow training. TFRecord is an efficient binary serialization format, and splitting the data into many shards lets the tf.data pipeline read multiple files in parallel, which gives far better I/O throughput than parsing the CSV files directly. HDFS adds unnecessary cluster management overhead, and BigQuery/Bigtable are not optimized as direct training-input sources for this use case.
Reference: https://cloud.google.com/dataflow/docs/guides/templates/provided-batch
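Below is a minimal sketch of the idea behind option C: converting CSV rows into sharded TFRecord files and reading them back with a parallel tf.data pipeline. The column names (feature1, feature2, label), bucket path, and shard count are hypothetical placeholders; at 100 billion records the conversion itself would realistically be done with a distributed tool such as Dataflow rather than a single-process script.

```python
import csv
import tensorflow as tf

CSV_PATH = "data.csv"                      # one of the source CSV files (placeholder)
SHARD_PATTERN = "gs://my-bucket/tfrecords/train-{:05d}-of-{:05d}.tfrecord"
NUM_SHARDS = 100                           # spread records across many shards

def row_to_example(row):
    """Serialize one CSV row (a dict) into a tf.train.Example."""
    return tf.train.Example(features=tf.train.Features(feature={
        "feature1": tf.train.Feature(
            float_list=tf.train.FloatList(value=[float(row["feature1"])])),
        "feature2": tf.train.Feature(
            float_list=tf.train.FloatList(value=[float(row["feature2"])])),
        "label": tf.train.Feature(
            int64_list=tf.train.Int64List(value=[int(row["label"])])),
    }))

# Write rows round-robin into NUM_SHARDS TFRecord files on Cloud Storage.
writers = [tf.io.TFRecordWriter(SHARD_PATTERN.format(i, NUM_SHARDS))
           for i in range(NUM_SHARDS)]
with open(CSV_PATH, newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        writers[i % NUM_SHARDS].write(row_to_example(row).SerializeToString())
for w in writers:
    w.close()

# Read the shards back with a parallel tf.data input pipeline.
feature_spec = {
    "feature1": tf.io.FixedLenFeature([], tf.float32),
    "feature2": tf.io.FixedLenFeature([], tf.float32),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

def parse(serialized):
    return tf.io.parse_single_example(serialized, feature_spec)

dataset = (
    tf.data.Dataset.list_files("gs://my-bucket/tfrecords/train-*")
    .interleave(tf.data.TFRecordDataset,
                num_parallel_calls=tf.data.AUTOTUNE)   # read shards in parallel
    .map(parse, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(1024)
    .prefetch(tf.data.AUTOTUNE)
)
```

The key point is the many-shards layout: interleave plus prefetch can overlap reads from Cloud Storage with training, which is what delivers the I/O improvement the question asks about.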