You notice that a job cluster is taking 6 to 8 minutes to start, which is delaying your job from finishing on time. What steps can you take to reduce the cluster startup time?
A. Set up a second job ahead of the first job to start the cluster, so...
What steps need to be taken to set up a Delta Live Tables pipeline as a job using the workspace UI?
A. Delta Live Tables do not support job clusters
B. Select the Workflows UI and the Delta Live Tables tab; under task type, select Delta Live Tables pipeline and select the notebook...
What is the underlying technology that makes the Auto Loader work?
A. Loader
B. Delta Live Tables
C. Structured Streaming
D. DataFrames
E. Live DataFrames

Answer: C
You are currently working with the application team to set up a SQL endpoint. Once the team started consuming the SQL endpoint, you noticed that during peak hours, as the number of concurrent users increases, query performance degrades and the same queries take longer to run. Which of the following steps can be taken to resolve the issue?
What is the purpose of the gold layer in a multi-hop (medallion) architecture?
A. Optimizes ETL throughput and analytic query performance
B. Eliminates duplicate records
C. Preserves the grain of the original data, without any aggregations
D. Data quality checks and schema enforcement
E. Powers ML applications, reporting, dashboards, and ad hoc reports

Answer: E
Identify the statement below that can query a Delta table using the PySpark DataFrame API.
A. Spark.read.mode("delta").table("table_name")
B. Spark.read.table.delta("table_name")
C. Spark.read.table("table_name")
D. Spark.read.format("delta").LoadTableAs("table_name")
E. Spark.read.format("delta").TableAs("table_name")

Answer: C
Which of the following is true when building a Databricks SQL dashboard?
A. A dashboard can only use results from one query
B. Only one visualization can be developed with one query result
C. A dashboard can only connect to one schema/database
D. More than one visualization can be developed...
Which of the following locations hosts the driver and worker nodes of a Databricks-managed cluster?
A. Data plane
B. Control plane
C. Databricks Filesystem
D. JDBC data source
E. Databricks web application

Answer: A

Explanation: The answer is Data plane, which is where compute resources (all-purpose clusters, job clusters, DLT clusters) run; this...