You are developing a Kubeflow pipeline on Google Kubernetes Engine. The first step in the pipeline is to issue a query against BigQuery. You plan to use the results of that query as the input to the next step in your pipeline. You want to achieve this in the easiest way possible.
What should you do?
A. Use the BigQuery console to execute your query, and then save the query results into a new BigQuery table.
B. Write a Python script that uses the BigQuery API to execute queries against BigQuery. Execute this script as the first step in your Kubeflow pipeline.
C. Use the Kubeflow Pipelines domain-specific language to create a custom component that uses the Python BigQuery client library to execute queries.
D. Locate the Kubeflow Pipelines repository on GitHub, find the BigQuery Query component, copy that component's URL, and use it to load the component into your pipeline. Use the component to execute queries against BigQuery.
Answer: D
Explanation:
Kubeflow Pipelines already ships a reusable BigQuery Query component, so the easiest approach is to load that component from its GitHub URL with the SDK rather than writing and maintaining custom query code yourself (options B and C) or running the query manually outside the pipeline (option A).
https://linuxtut.com/en/f4771efee37658c083cc/
https://github.com/kubeflow/pipelines/blob/master/components/gcp/bigquery/query/sample.ipynb
https://v0-5.kubeflow.org/docs/pipelines/reusable-components/
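As an illustration of option D, here is a minimal sketch using the KFP v1 SDK. The raw component.yaml URL and the parameter names (query, project_id, output_gcs_path) follow the linked sample notebook and should be verified against the component version you actually load; the project, dataset, and bucket names are placeholders.

```python
import kfp
from kfp import components, dsl

# Load the reusable BigQuery Query component directly from the Kubeflow
# Pipelines repository on GitHub. The URL below is illustrative; pin to a
# release tag in practice rather than the master branch.
bigquery_query_op = components.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/master/'
    'components/gcp/bigquery/query/component.yaml'
)


@dsl.pipeline(
    name='bq-first-step',
    description='Runs a BigQuery query as the first step of the pipeline.'
)
def bq_pipeline(
    query: str = 'SELECT * FROM `my-project.my_dataset.my_table` LIMIT 100',
    project_id: str = 'my-project',
    output_gcs_path: str = 'gs://my-bucket/query_results.csv',
):
    # The component executes the query and writes the result set to the
    # given GCS path; downstream steps can consume query_task.outputs.
    query_task = bigquery_query_op(
        query=query,
        project_id=project_id,
        output_gcs_path=output_gcs_path,
    )


if __name__ == '__main__':
    # Compile to a package that can be uploaded to the Kubeflow Pipelines UI.
    kfp.compiler.Compiler().compile(bq_pipeline, 'bq_pipeline.yaml')
```

Loading the component from a URL keeps the pipeline declarative: the query step runs as a pre-built container, and its output path can be passed to the next step without any custom query code.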