Databricks Databricks Certified Data Engineer Associate Databricks Certified Data Engineer Associate Exam Online Training

Question #1

A data organization leader is upset about the data analysis team’s reports being different from the data engineering team’s reports. The leader believes the siloed nature of their organization’s data engineering and data analysis architectures is to blame.

Which of the following describes how a data lakehouse could alleviate this issue?

  • A . Both teams would autoscale their work as data size evolves
  • B . Both teams would use the same source of truth for their work
  • C . Both teams would reorganize to report to the same department
  • D . Both teams would be able to collaborate on projects in real-time
  • E . Both teams would respond more quickly to ad-hoc requests

Correct Answer: B

Explanation:

A data lakehouse is a data management architecture that combines the flexibility, cost-efficiency, and scale of data lakes with the data management and ACID transactions of data warehouses, enabling business intelligence (BI) and machine learning (ML) on all data12. By using a data lakehouse, both the data analysis and data engineering teams can access the same data sources and formats, ensuring data consistency and quality across their reports. A data lakehouse also supports schema enforcement and evolution, data validation, and time travel to old table versions, which can help resolve data conflicts and errors1.

Reference: 1: What is a Data Lakehouse? – Databricks 2: What is a data lakehouse? | IBM

Question #2

Which of the following describes a scenario in which a data team will want to utilize cluster pools?

  • A . An automated report needs to be refreshed as quickly as possible.
  • B . An automated report needs to be made reproducible.
  • C . An automated report needs to be tested to identify errors.
  • D . An automated report needs to be version-controlled across multiple collaborators.
  • E . An automated report needs to be runnable by all stakeholders.

Correct Answer: A

Explanation:

Databricks cluster pools are a set of idle, ready-to-use instances that can reduce cluster start and auto-scaling times. This is useful for scenarios where a data team needs to run an automated report as quickly as possible, without waiting for the cluster to launch or scale up. Cluster pools can also help save costs by reusing idle instances across different clusters and avoiding DBU charges for idle instances in the pool.

Reference: Best practices: pools | Databricks on AWS, Best practices: pools – Azure Databricks | Microsoft Learn, Best practices: pools | Databricks on Google Cloud

Question #3

Which of the following is hosted completely in the control plane of the classic Databricks architecture?

  • A . Worker node
  • B . JDBC data source
  • C . Databricks web application
  • D . Databricks Filesystem
  • E . Driver node

Correct Answer: C

Explanation:

The Databricks web application is the user interface that allows you to create and manage workspaces, clusters, notebooks, jobs, and other resources. It is hosted completely in the control plane of the classic Databricks architecture, which includes the backend services that Databricks manages in your Databricks account. The worker node, driver node, and Databricks Filesystem, by contrast, live in the compute plane, which runs in your own cloud account and network and is where your data is processed, while a JDBC data source is an external system outside Databricks altogether.

Reference: Databricks architecture overview, Security and Trust Center

Question #4

Which of the following benefits of using the Databricks Lakehouse Platform is provided by Delta Lake?

  • A . The ability to manipulate the same data using a variety of languages
  • B . The ability to collaborate in real time on a single notebook
  • C . The ability to set up alerts for query failures
  • D . The ability to support batch and streaming workloads
  • E . The ability to distribute complex data operations

Correct Answer: D

Explanation:

Delta Lake is the optimized storage layer that provides the foundation for storing data and tables in the Databricks lakehouse. Delta Lake is fully compatible with Apache Spark APIs, and was developed for tight integration with Structured Streaming, allowing you to easily use a single copy of data for both batch and streaming operations and providing incremental processing at scale1. Delta Lake supports upserts using the merge operation, which enables you to efficiently update existing data or insert new data into your Delta tables2. Delta Lake also provides time travel capabilities, which allow you to query previous versions of your data or roll back to a specific point in time3.

Reference:

1: What is Delta Lake? | Databricks on AWS

2: Upsert into a table using merge | Databricks on AWS

3: [Query an older snapshot of a table (time travel) | Databricks on AWS]

Question #5

Which of the following describes the storage organization of a Delta table?

  • A . Delta tables are stored in a single file that contains data, history, metadata, and other attributes.
  • B . Delta tables store their data in a single file and all metadata in a collection of files in a separate location.
  • C . Delta tables are stored in a collection of files that contain data, history, metadata, and other attributes.
  • D . Delta tables are stored in a collection of files that contain only the data stored within the table.
  • E . Delta tables are stored in a single file that contains only the data stored within the table.

Correct Answer: C

Explanation:

Delta Lake is the optimized storage layer that provides the foundation for storing data and tables in the Databricks lakehouse. Delta Lake is open source software that extends Parquet data files with a file-based transaction log for ACID transactions and scalable metadata handling1. Delta Lake stores its data and metadata in a collection of files in a directory on a cloud storage system, such as AWS S3 or Azure Data Lake Storage2. Each Delta table has a transaction log that records the history of operations performed on the table, such as insert, update, delete, merge, etc. The transaction log also stores the schema and partitioning information of the table2. The transaction log enables Delta Lake to provide ACID guarantees, time travel, schema enforcement, and other features1.

Reference: What is Delta Lake? | Databricks on AWS

Quickstart ― Delta Lake Documentation

Question #6

Which of the following code blocks will remove the rows where the value in column age is greater than 25 from the existing Delta table my_table and save the updated table?

  • A . SELECT * FROM my_table WHERE age > 25;
  • B . UPDATE my_table WHERE age > 25;
  • C . DELETE FROM my_table WHERE age > 25;
  • D . UPDATE my_table WHERE age <= 25;
  • E . DELETE FROM my_table WHERE age <= 25;

Correct Answer: C

Explanation:

The DELETE command in Delta Lake removes the rows that match a predicate from a Delta table. This command deletes all rows where the value in the column age is greater than 25 from the existing Delta table my_table and saves the updated table. The other options do not achieve the desired result. Option A only selects the rows that match the predicate; it does not delete them. Options B and D are not valid UPDATE statements (they lack a SET clause) and would not remove any rows in any case. Option E deletes the rows that do not match the predicate, which is the opposite of what is wanted.

Reference: Table deletes, updates, and merges ― Delta Lake Documentation
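
For illustration only, a minimal PySpark sketch of the same delete (my_table and age come from the question; the DeltaTable call assumes the delta-spark package is available):

# Issue the SQL statement from PySpark
spark.sql("DELETE FROM my_table WHERE age > 25")

# Equivalent call through the DeltaTable API
from delta.tables import DeltaTable
DeltaTable.forName(spark, "my_table").delete("age > 25")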

Question #7

A data engineer has realized that they made a mistake when making a daily update to a table. They need to use Delta time travel to restore the table to a version that is 3 days old. However, when the data engineer attempts to time travel to the older version, they are unable to restore the data because the data files have been deleted.

Which of the following explains why the data files are no longer present?

  • A . The VACUUM command was run on the table
  • B . The TIME TRAVEL command was run on the table
  • C . The DELETE HISTORY command was run on the table
  • D . The OPTIMIZE command was run on the table
  • E . The HISTORY command was run on the table

Correct Answer: A

Explanation:

The VACUUM command is used to remove files that are no longer referenced by a Delta table and are older than the retention threshold1. The default retention period is 7 days2, but it can be changed by setting the delta.logRetentionDuration and delta.deletedFileRetentionDuration configurations3. If the VACUUM command was run on the table with a retention period shorter than 3 days, then the data files that were needed to restore the table to a 3-day-old version would have been deleted. The other commands do not delete data files from the table. The TIME TRAVEL command is used to query a historical version of the table4. The DELETE HISTORY command is not a valid command in Delta Lake. The OPTIMIZE command is used to improve the performance of the table by compacting small files into larger ones5. The HISTORY command is used to retrieve information about the operations performed on the table.

Reference:

1: VACUUM | Databricks on AWS

2: Work with Delta Lake table history | Databricks on AWS

3: [Delta Lake configuration | Databricks on AWS]

4: Work with Delta Lake table history – Azure Databricks

5: [OPTIMIZE | Databricks on AWS], [HISTORY | Databricks on AWS]
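
For illustration, a hedged PySpark sketch of how an aggressive VACUUM removes the files needed for time travel (the table name my_table is an assumption; Databricks normally blocks retention periods under 7 days unless the safety check is disabled):

# Disable the retention safety check (not recommended in production)
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

# Remove all data files not referenced by the current table version
spark.sql("VACUUM my_table RETAIN 0 HOURS")

# Time travel to a version whose files were removed will now fail, e.g.:
# spark.sql("SELECT * FROM my_table VERSION AS OF 3")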

Question #8

Which of the following Git operations must be performed outside of Databricks Repos?

  • A . Commit
  • B . Pull
  • C . Push
  • D . Clone
  • E . Merge

Correct Answer: E

Explanation:

Databricks Repos is a visual Git client and API in Databricks that supports common Git operations such as commit, pull, push, branch management, and visual comparison of diffs when committing1. However, merge is not supported in the Git dialog2. You need to use the Repos UI or your Git provider to merge branches3. Merge is a way to combine the commit history from one branch into another branch1. During a merge, a merge conflict is encountered when Git cannot automatically combine code from one branch into another. Merge conflicts require manual resolution before a merge can be completed1.

Reference: Run Git operations on Databricks Repos, CI/CD techniques with Git and Databricks Repos, Collaborate in Repos, Databricks Repos – What it is and how we can use it.

Question #9

Which of the following data lakehouse features results in improved data quality over a traditional data lake?

  • A . A data lakehouse provides storage solutions for structured and unstructured data.
  • B . A data lakehouse supports ACID-compliant transactions.
  • C . A data lakehouse allows the use of SQL queries to examine data.
  • D . A data lakehouse stores data in open formats.
  • E . A data lakehouse enables machine learning and artificial Intelligence workloads.

Correct Answer: B

Explanation:

A data lakehouse is a data management architecture that combines the flexibility, cost-efficiency, and scale of data lakes with the data management and ACID transactions of data warehouses, enabling business intelligence (BI) and machine learning (ML) on all data12. One of the key features of a data lakehouse is that it supports ACID-compliant transactions, which means that it ensures data integrity, consistency, and isolation across concurrent read and write operations3. This feature results in improved data quality over a traditional data lake, which does not support transactions and may suffer from data corruption, duplication, or inconsistency due to concurrent or streaming data ingestion and processing.

Reference:

1: What is a Data Lakehouse? – Databricks

2: What is a Data Lakehouse? Definition, features & benefits. – Qlik

3: ACID Transactions – Databricks, [Data Lake vs Data Warehouse: Key Differences], [Data Lakehouse: The Future of Data Engineering]

Question #10

A data engineer needs to determine whether to use the built-in Databricks Notebooks versioning or version their project using Databricks Repos.

Which of the following is an advantage of using Databricks Repos over the Databricks Notebooks versioning?

  • A . Databricks Repos automatically saves development progress
  • B . Databricks Repos supports the use of multiple branches
  • C . Databricks Repos allows users to revert to previous versions of a notebook
  • D . Databricks Repos provides the ability to comment on specific changes
  • E . Databricks Repos is wholly housed within the Databricks Lakehouse Platform

Correct Answer: B

Explanation:

Databricks Repos is a visual Git client and API in Databricks that supports common Git operations such as cloning, committing, pushing, pulling, and branch management. Databricks Notebooks versioning is a legacy feature that allows users to link notebooks to GitHub repositories and perform basic Git operations. However, Databricks Notebooks versioning does not support the use of multiple branches for development work, which is an advantage of using Databricks Repos. With Databricks Repos, users can create and manage branches for different features, experiments, or bug fixes, and merge, rebase, or resolve conflicts between them. Databricks recommends developing on separate feature branches and following data science and engineering code development best practices using Git for version control, collaboration, and CI/CD.

Reference: Git integration with Databricks Repos – Azure Databricks | Microsoft Learn, Git version control for notebooks (legacy) | Databricks on AWS, Databricks Repos Is Now Generally Available – New ‘Files’ Feature in …, Databricks Repos – What it is and how we can use it | Adatis.

Question #11

A data engineer has left the organization. The data team needs to transfer ownership of the data engineer’s Delta tables to a new data engineer. The new data engineer is the lead engineer on the data team.

Assuming the original data engineer no longer has access, which of the following individuals must be the one to transfer ownership of the Delta tables in Data Explorer?

  • A . Databricks account representative
  • B . This transfer is not possible
  • C . Workspace administrator
  • D . New lead data engineer
  • E . Original data engineer

Correct Answer: C

Explanation:

The workspace administrator is the only individual who can transfer ownership of the Delta tables in Data Explorer, assuming the original data engineer no longer has access. The workspace administrator has the highest level of permissions in the workspace and can manage all resources, users, and groups. The other options are either not possible or not sufficient to perform the ownership transfer. The Databricks account representative is not involved in the workspace management. The transfer is possible and not dependent on the original data engineer. The new lead data engineer may not have the necessary permissions to access or modify the Delta tables, unless granted by the workspace administrator or the original data engineer before leaving.

Reference: Workspace access control, Manage Unity Catalog object ownership.

Question #12

A data analyst has created a Delta table sales that is used by the entire data analysis team. They want help from the data engineering team to implement a series of tests to ensure the data is clean. However, the data engineering team uses Python for its tests rather than SQL.

Which of the following commands could the data engineering team use to access sales in PySpark?

  • A . SELECT * FROM sales
  • B . There is no way to share data between PySpark and SQL.
  • C . spark.sql("sales")
  • D . spark.delta.table("sales")
  • E . spark.table("sales")

Correct Answer: E

Explanation:

The data engineering team can use the spark.table method to access the Delta table sales in PySpark. This method returns a DataFrame representation of the Delta table, which can be used for further processing or testing. The spark.table method works for any table that is registered in the Hive metastore or the Spark catalog, regardless of the file format1. Alternatively, the data engineering team can also use the DeltaTable.forPath method to load the Delta table from its path2.

Reference:

1: SparkSession | PySpark 3.2.0 documentation

2: Welcome to Delta Lake’s Python documentation page ― delta-spark 2.4.0 documentation
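
A minimal sketch of how the data engineering team might use this in a Python test (the column name price is an assumption, not part of the question):

# Load the shared Delta table as a DataFrame
df = spark.table("sales")

# Example data-quality check written in PySpark
assert df.filter("price < 0").count() == 0, "sales contains negative prices"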

Question #13

Which of the following commands will return the location of database customer360?

  • A . DESCRIBE LOCATION customer360;
  • B . DROP DATABASE customer360;
  • C . DESCRIBE DATABASE customer360;
  • D . ALTER DATABASE customer360 SET DBPROPERTIES ('location' = '/user');
  • E . USE DATABASE customer360;

Correct Answer: C

Explanation:

The command DESCRIBE DATABASE customer360; will return the location of the database customer360, along with its comment and properties. This command is an alias for DESCRIBE SCHEMA customer360;, which can also be used to get the same information. The other commands will either drop the database, alter its properties, or use it as the current database, but will not return its location12.

Reference: DESCRIBE DATABASE | Databricks on AWS

DESCRIBE DATABASE – Azure Databricks – Databricks SQL
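
For illustration, the same command issued from PySpark (a sketch, not part of the question):

# Returns the database name, comment, location, and owner
spark.sql("DESCRIBE DATABASE customer360").show(truncate=False)

# The EXTENDED variant also lists any database properties
spark.sql("DESCRIBE DATABASE EXTENDED customer360").show(truncate=False)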

Question #14

A data engineer wants to create a new table containing the names of customers that live in France.

They have written the following command:

A senior data engineer mentions that it is organization policy to include a table property indicating that the new table includes personally identifiable information (PII).

Which of the following lines of code fills in the above blank to successfully complete the task?

  • A . There is no way to indicate whether a table contains PII.
  • B . "COMMENT PII"
  • C . TBLPROPERTIES PII
  • D . COMMENT "Contains PII"
  • E . PII

Correct Answer: D

Explanation:

In Databricks, when creating a table, you can add a comment to columns or the entire table to provide more information about the data it contains. In this case, since it’s organization policy to indicate that the new table includes personally identifiable information (PII), option D is correct. The line of code would be added after defining the table structure and before closing with a semicolon.

Reference: Data Engineer Associate Exam Guide, CREATE TABLE USING (Databricks SQL)
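
Because the original command is not reproduced above, here is a hedged sketch of what the completed statement could look like; the table name customers_in_france and the source columns are assumptions:

spark.sql("""
    CREATE TABLE customers_in_france
    COMMENT "Contains PII"
    AS SELECT customer_name
       FROM customers
       WHERE country = 'France'
""")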

Question #15

Which of the following benefits is provided by the array functions from Spark SQL?

  • A . An ability to work with data in a variety of types at once
  • B . An ability to work with data within certain partitions and windows
  • C . An ability to work with time-related data in specified intervals
  • D . An ability to work with complex, nested data ingested from JSON files
  • E . An ability to work with an array of tables for procedural automation

Correct Answer: D

Explanation:

The array functions from Spark SQL are a subset of the collection functions that operate on array columns1. They provide an ability to work with complex, nested data ingested from JSON files or other sources2. For example, the explode function can be used to transform an array column into multiple rows, one for each element in the array3. The array_contains function can be used to check if a value is present in an array column4. The array_join function can be used to concatenate all elements of an array column with a delimiter. These functions can be useful for processing JSON data that may have nested arrays or objects.

Reference:

1: Spark SQL, Built-in Functions – Apache Spark

2: Spark SQL Array Functions Complete List – Spark By Examples

3: Spark SQL Array Functions – Syntax and Examples – DWgeek.com

4: Spark SQL, Built-in Functions – Apache Spark, [Working with Nested Data Using Higher Order Functions in SQL on Databricks – The Databricks Blog]
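
A small PySpark sketch of two of these functions on hypothetical nested data (the DataFrame below is invented for illustration):

from pyspark.sql import functions as F

df = spark.createDataFrame(
    [("order-1", ["book", "pen"]), ("order-2", ["laptop"])],
    ["order_id", "items"],
)

# explode: one output row per array element
df.select("order_id", F.explode("items").alias("item")).show()

# array_contains: test membership inside the array column
df.select("order_id", F.array_contains("items", "pen").alias("has_pen")).show()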

Question #16

Which of the following commands can be used to write data into a Delta table while avoiding the writing of duplicate records?

  • A . DROP
  • B . IGNORE
  • C . MERGE
  • D . APPEND
  • E . INSERT

Correct Answer: C

Explanation:

The MERGE command can be used to upsert data from a source table, view, or DataFrame into a target Delta table. It allows you to specify conditions for matching and updating existing records, and inserting new records when no match is found. This way, you can avoid writing duplicate records into a Delta table1. The other commands (DROP, IGNORE, APPEND, INSERT) do not have this functionality and may result in duplicate records or data loss234.

Reference: 1: Upsert into a Delta Lake table using merge | Databricks on AWS 2: SQL DELETE | Databricks on AWS 3: SQL INSERT INTO | Databricks on AWS 4: SQL UPDATE | Databricks on AWS
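
For illustration, a hedged sketch of the MERGE pattern from PySpark (the table names target and updates and the id column are assumptions):

spark.sql("""
    MERGE INTO target t
    USING updates u
    ON t.id = u.id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")

# Rows already present in target are updated rather than inserted again,
# so re-running the load does not create duplicate records.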

Question #17

A data engineer needs to apply custom logic to string column city in table stores for a specific use case. In order to apply this custom logic at scale, the data engineer wants to create a SQL user-defined function (UDF).

Which of the following code blocks creates this SQL UDF?

A)

B)

C)

D)

E)

  • A . Option A
  • B . Option B
  • C . Option C
  • D . Option D
  • E . Option E

Correct Answer: A

Explanation:

https://www.databricks.com/blog/2021/10/20/introducing-sql-user-defined-functions.html
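
Since the answer options are shown as images, here is a hedged sketch of the general SQL UDF pattern the blog post describes, issued from PySpark; the function name format_city and its logic are assumptions, not the exam's exact option:

spark.sql("""
    CREATE OR REPLACE FUNCTION format_city(city STRING)
    RETURNS STRING
    RETURN CONCAT(UPPER(SUBSTRING(city, 1, 1)), LOWER(SUBSTRING(city, 2)))
""")

# Apply the UDF at scale in an ordinary query
spark.sql("SELECT format_city(city) AS city FROM stores").show()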

Question #18

A data analyst has a series of queries in a SQL program. The data analyst wants this program to run every day. They only want the final query in the program to run on Sundays. They ask for help from the data engineering team to complete this task.

Which of the following approaches could be used by the data engineering team to complete this task?

  • A . They could submit a feature request with Databricks to add this functionality.
  • B . They could wrap the queries using PySpark and use Python’s control flow system to determine when to run the final query.
  • C . They could only run the entire program on Sundays.
  • D . They could automatically restrict access to the source table in the final query so that it is only accessible on Sundays.
  • E . They could redesign the data model to separate the data used in the final query into a new table.

Correct Answer: B

Explanation:

This approach would allow the data engineering team to use the existing SQL program and add some logic to control the execution of the final query based on the day of the week. They could use the datetime module in Python to get the current date and check if it is a Sunday. If so, they could run the final query, otherwise they could skip it. This way, they could schedule the program to run every day without changing the data model or the source table.

Reference: PySpark SQL Module, Python datetime Module, Databricks Jobs
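
A minimal sketch of this approach (the query text is a placeholder, not taken from the analyst's program):

import datetime

final_query = "SELECT * FROM weekly_summary"   # hypothetical final query

# weekday() returns 0 for Monday through 6 for Sunday
if datetime.date.today().weekday() == 6:
    spark.sql(final_query).show()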

Question #19

A data engineer runs a statement every day to copy the previous day’s sales into the table transactions. Each day’s sales are in their own file in the location "/transactions/raw".

Today, the data engineer runs the following command to complete this task:

After running the command today, the data engineer notices that the number of records in table transactions has not changed.

Which of the following describes why the statement might not have copied any new records into the table?

  • A . The format of the files to be copied were not included with the FORMAT_OPTIONS keyword.
  • B . The names of the files to be copied were not included with the FILES keyword.
  • C . The previous day’s file has already been copied into the table.
  • D . The PARQUET file format does not support COPY INTO.
  • E . The COPY INTO statement requires the table to be refreshed to view the copied rows.

Correct Answer: C

Explanation:

The COPY INTO statement is an idempotent operation, which means that it skips any files that have already been loaded into the target table1. This ensures that data is not duplicated or corrupted by repeated attempts to load the same file. Therefore, if the number of records in transactions did not change, the most likely cause is that the previous day's file had already been copied in an earlier run, so today's command found no new files to load. To make each run explicit about what it should ingest, the data engineer can use the FILES keyword or a glob pattern with the PATTERN keyword to select files by name or date2.

Reference:

1: COPY INTO | Databricks on AWS

2: Get started using COPY INTO to load data | Databricks on AWS
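
For illustration, a hedged sketch of the kind of statement the question describes, issued from PySpark on Databricks; the file format is an assumption:

spark.sql("""
    COPY INTO transactions
    FROM '/transactions/raw'
    FILEFORMAT = PARQUET
""")

# Files under /transactions/raw that were loaded by a previous run are skipped,
# so the row count only changes when a genuinely new file has arrived.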

Question #20

A data engineer needs to create a table in Databricks using data from their organization’s existing SQLite database.

They run the following command:

Which of the following lines of code fills in the above blank to successfully complete the task?

  • A . org.apache.spark.sql.jdbc
  • B . autoloader
  • C . DELTA
  • D . sqlite
  • E . org.apache.spark.sql.sqlite

Correct Answer: D

Explanation:

In the given command, a data engineer is trying to create a table in Databricks using data from an SQLite database. The correct option to fill in the blank is “sqlite” because it specifies the type of database being connected to in a JDBC connection string. The USING clause should be followed by the format of the data, and since we are connecting to an SQLite database, “sqlite” would be appropriate here.

Reference: Create a table using JDBC, JDBC connection string, SQLite JDBC driver

Question #21

A data engineering team has two tables. The first table march_transactions is a collection of all retail transactions in the month of March. The second table april_transactions is a collection of all retail transactions in the month of April. There are no duplicate records between the tables.

Which of the following commands should be run to create a new table all_transactions that contains all records from march_transactions and april_transactions without duplicate records?

  • A . CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    INNER JOIN SELECT * FROM april_transactions;
  • B . CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    UNION SELECT * FROM april_transactions;
  • C . CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    OUTER JOIN SELECT * FROM april_transactions;
  • D . CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    INTERSECT SELECT * from april_transactions;
  • E . CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    MERGE SELECT * FROM april_transactions;

Correct Answer: B

Explanation:

The correct command to create a new table that contains all records from two tables without duplicate records is to use the UNION operator. The UNION operator combines the results of two queries and removes any duplicate rows. The INNER JOIN, OUTER JOIN, and MERGE operators do not remove duplicate rows, and the INTERSECT operator only returns the rows that are common to both tables. Therefore, option B is the only correct answer.

Reference: Databricks SQL Reference – UNION, Databricks SQL Reference – JOIN, Databricks SQL Reference – MERGE, [Databricks SQL Reference – INTERSECT]
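
The same statement issued from PySpark, for illustration (UNION de-duplicates the combined result, whereas UNION ALL would keep duplicates):

spark.sql("""
    CREATE TABLE all_transactions AS
    SELECT * FROM march_transactions
    UNION
    SELECT * FROM april_transactions
""")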

Question #22

A data engineer only wants to execute the final block of a Python program if the Python variable day_of_week is equal to 1 and the Python variable review_period is True.

Which of the following control flow statements should the data engineer use to begin this conditionally executed code block?

  • A . if day_of_week = 1 and review_period:
  • B . if day_of_week = 1 and review_period = "True":
  • C . if day_of_week == 1 and review_period == "True":
  • D . if day_of_week == 1 and review_period:
  • E . if day_of_week = 1 & review_period: = "True":

Correct Answer: D

Explanation:

In Python, the == operator compares the values of two variables, while the = operator assigns a value to a variable. Therefore, options A and E are incorrect, as they use the = operator for comparison.

Options B and C are also incorrect, as they compare the review_period variable to the string value "True", which is different from the boolean value True.

Option D is the correct answer, as it uses the == operator to compare the day_of_week variable to the integer value 1, and the and operator to check if both conditions are true. If both conditions are true, then the final block of the Python program will be executed.

Reference: [Python Operators], [Python If … Else]
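
A minimal sketch contrasting the correct condition with the common mistake:

day_of_week = 1
review_period = True

# Correct: == compares values, and a boolean needs no comparison against "True"
if day_of_week == 1 and review_period:
    print("running final block")

# Incorrect: review_period == "True" compares a boolean to a string and is never True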

Question #23

A data engineer is attempting to drop a Spark SQL table my_table. The data engineer wants to delete all table metadata and data.

They run the following command:

DROP TABLE IF EXISTS my_table

While the object no longer appears when they run SHOW TABLES, the data files still exist.

Which of the following describes why the data files still exist and the metadata files were deleted?

  • A . The table’s data was larger than 10 GB
  • B . The table’s data was smaller than 10 GB
  • C . The table was external
  • D . The table did not have a location
  • E . The table was managed

Correct Answer: C

Explanation:

An external table is a table that is defined in the metastore and points to an existing location in the storage system. When you drop an external table, only the metadata is deleted from the metastore, but the data files are not deleted from the storage system. This is because external tables are meant to be shared by multiple applications and users, and dropping them should not affect the data availability. On the other hand, a managed table is a table that is defined in the metastore and also managed by the metastore. When you drop a managed table, both the metadata and the data files are deleted from the metastore and the storage system, respectively. This is because managed tables are meant to be exclusive to the application or user that created them, and dropping them should free up the storage space. Therefore, the correct answer is C, because the table was external and only the metadata was deleted when the table was dropped.

Reference: Databricks Documentation – Managed and External Tables, Databricks Documentation – Drop Table
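
A hedged sketch of the difference in PySpark (the storage path and schema are assumptions):

# External table: LOCATION points at files the metastore does not manage
spark.sql("""
    CREATE TABLE my_table (id INT, name STRING)
    USING DELTA
    LOCATION '/mnt/raw/my_table'
""")

# Dropping it removes only the metastore entry; the files at the path remain
spark.sql("DROP TABLE IF EXISTS my_table")

# A managed table (created without a LOCATION clause) would also have its
# underlying data files deleted by the DROP TABLE statement.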

Question #24

A data engineer wants to create a data entity from a couple of tables. The data entity must be used by other data engineers in other sessions. It also must be saved to a physical location.

Which of the following data entities should the data engineer create?

  • A . Database
  • B . Function
  • C . View
  • D . Temporary view
  • E . Table

Correct Answer: E

Explanation:

A table is a data entity that is stored in a physical location and can be accessed by other data engineers in other sessions. A table can be created from one or more tables using the CREATE TABLE or CREATE TABLE AS SELECT commands. A table can also be registered from an existing DataFrame using the spark.catalog.createTable method. A table can be queried using SQL or DataFrame APIs. A table can also be updated, deleted, or appended using the MERGE INTO command or the DeltaTable API.

Reference: Create a table, Create a table from a query result, Register a table from a DataFrame, [Query a table], [Update, delete, or merge into a table]

Question #25

A data engineer is maintaining a data pipeline. Upon data ingestion, the data engineer notices that the source data is starting to have a lower level of quality. The data engineer would like to automate the process of monitoring the quality level.

Which of the following tools can the data engineer use to solve this problem?

  • A . Unity Catalog
  • B . Data Explorer
  • C . Delta Lake
  • D . Delta Live Tables
  • E . Auto Loader

Correct Answer: D

Explanation:

Delta Live Tables is a tool that enables data engineers to build and manage reliable data pipelines with minimal code. One of the features of Delta Live Tables is data quality monitoring, which allows data engineers to define quality expectations for their data and automatically check them at every step of the pipeline. Data quality monitoring can help detect and resolve data quality issues, such as missing values, duplicates, outliers, or schema changes. Data quality monitoring can also generate alerts and reports on the quality level of the data, and enable data engineers to troubleshoot and fix problems quickly.

Reference: Delta Live Tables Overview, Data Quality Monitoring

Question #26

A Delta Live Table pipeline includes two datasets defined using STREAMING LIVE TABLE. Three datasets are defined against Delta Lake table sources using LIVE TABLE.

The table is configured to run in Production mode using the Continuous Pipeline Mode.

Assuming previously unprocessed data exists and all definitions are valid, what is the expected outcome after clicking Start to update the pipeline?

  • A . All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will persist to allow for additional testing.
  • B . All datasets will be updated once and the pipeline will persist without any processing. The compute resources will persist but go unused.
  • C . All datasets will be updated at set intervals until the pipeline is shut down. The compute resources will be deployed for the update and terminated when the pipeline is stopped.
  • D . All datasets will be updated once and the pipeline will shut down. The compute resources will be terminated.
  • E . All datasets will be updated once and the pipeline will shut down. The compute resources will persist to allow for additional testing.

Correct Answer: C

Explanation:

Because the pipeline is configured with Continuous Pipeline Mode, it runs continuously and updates the output tables whenever new data is available in the input sources. Because it runs in Production mode, the compute resources are deployed for the update and terminated when the pipeline is stopped, rather than being kept alive for additional testing. This combination is suitable for production workloads that require high availability and reliability.

Reference: Configure pipeline settings for Delta Live Tables, Tutorial: Run your first Delta Live Tables pipeline, Building Reliable Data Pipelines Using DataBricks’ Delta Live Tables

Question #27

In order for Structured Streaming to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing, which of the following two approaches is used by Spark to record the offset range of the data being processed in each trigger?

  • A . Checkpointing and Write-ahead Logs
  • B . Structured Streaming cannot record the offset range of the data being processed in each trigger.
  • C . Replayable Sources and Idempotent Sinks
  • D . Write-ahead Logs and Idempotent Sinks
  • E . Checkpointing and Idempotent Sinks

Correct Answer: A

Explanation:

Structured Streaming uses checkpointing and write-ahead logs to record the offset range of the data being processed in each trigger. This ensures that the engine can reliably track the exact progress of the processing and handle any kind of failure by restarting and/or reprocessing. Checkpointing is the mechanism of saving the state of a streaming query to fault-tolerant storage (such as HDFS) so that it can be recovered after a failure. Write-ahead logs are files that record the offset range of the data being processed in each trigger and are written to the checkpoint location before the processing starts. These logs are used to recover the query state and resume processing from the last processed offset range in case of a failure.

Reference: Structured Streaming Programming Guide, Fault Tolerance Semantics
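
For illustration, a hedged PySpark sketch showing where the checkpoint location is supplied (table names and the path are assumptions):

# The checkpoint directory stores the write-ahead log of offset ranges and
# the query state, which is what allows recovery after a failure
(spark.readStream
    .table("bronze_events")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/bronze_to_silver")
    .toTable("silver_events"))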

Question #28

Which of the following describes the relationship between Gold tables and Silver tables?

  • A . Gold tables are more likely to contain aggregations than Silver tables.
  • B . Gold tables are more likely to contain valuable data than Silver tables.
  • C . Gold tables are more likely to contain a less refined view of data than Silver tables.
  • D . Gold tables are more likely to contain more data than Silver tables.
  • E . Gold tables are more likely to contain truthful data than Silver tables.

Correct Answer: A

Explanation:

According to the medallion lakehouse architecture, gold tables are the final layer of data that powers analytics, machine learning, and production applications. They are often highly refined and aggregated, containing data that has been transformed into knowledge, rather than just information. Silver tables, on the other hand, are the intermediate layer of data that represents a validated, enriched version of the raw data from the bronze layer. They provide an enterprise view of all its key business entities, concepts and transactions, but they may not have all the aggregations and calculations that are required for specific use cases. Therefore, gold tables are more likely to contain aggregations than silver tables.

Reference: What is the medallion lakehouse architecture?

What is a Medallion Architecture?

Question #29

Which of the following describes the relationship between Bronze tables and raw data?

  • A . Bronze tables contain less data than raw data files.
  • B . Bronze tables contain more truthful data than raw data.
  • C . Bronze tables contain aggregates while raw data is unaggregated.
  • D . Bronze tables contain a less refined view of data than raw data.
  • E . Bronze tables contain raw data with a schema applied.

Correct Answer: E

Explanation:

Bronze tables are the first layer of a medallion architecture, which is a data design pattern used to organize data in a lakehouse. Bronze tables contain raw data ingested from various sources, such as RDBMS data, JSON files, IoT data, etc. The table structures in this layer correspond to the source system table structures “as-is”, along with any additional metadata columns that capture the load date/time, process ID, etc. The only transformation applied to the raw data in this layer is to apply a schema, which defines the column names and data types of the table. The schema can be inferred from the data source or specified explicitly. Applying a schema to the raw data enables the use of SQL and other structured query languages to access and analyze the data. Therefore, option E is the correct answer.

Reference: What is a Medallion Architecture? Raw Data Ingestion into Delta Lake Bronze tables using Azure Synapse Mapping Data Flow, Apache Spark + Delta Lake concepts, Delta Lake Architecture & Azure Databricks Workspace.

Question #30

Which of the following tools is used by Auto Loader to process data incrementally?

  • A . Checkpointing
  • B . Spark Structured Streaming
  • C . Data Explorer
  • D . Unity Catalog
  • E . Databricks SQL

Correct Answer: B

Explanation:

Auto Loader provides a Structured Streaming source called cloudFiles that can process new data files as they arrive in cloud storage without any additional setup. Auto Loader uses a scalable key-value store to track ingestion progress and ensure exactly-once semantics. Auto Loader can ingest various file formats and load them into Delta Lake tables. Auto Loader is recommended for incremental data ingestion with Delta Live Tables, which extends the functionality of Structured Streaming and allows you to write declarative Python or SQL code to deploy a production-quality data pipeline.

Reference: What is Auto Loader?, What is Auto Loader? | Databricks on AWS, Solved: How does Auto Loader ingest data? – Databricks – 5629

Question #31

A data engineer has configured a Structured Streaming job to read from a table, manipulate the data, and then perform a streaming write into a new table.

The code block used by the data engineer is below:

If the data engineer only wants the query to execute a micro-batch to process data every 5 seconds, which of the following lines of code should the data engineer use to fill in the blank?

  • A . trigger("5 seconds")
  • B . trigger()
  • C . trigger(once="5 seconds")
  • D . trigger(processingTime="5 seconds")
  • E . trigger(continuous="5 seconds")

Correct Answer: D

Explanation:

The processingTime option specifies a time-based trigger interval for fixed interval micro-batches. This means that the query will execute a micro-batch to process data every 5 seconds, regardless of how much data is available. This option is suitable for near-real time processing workloads that require low latency and consistent processing frequency. The other options are either invalid syntax (A, C), default behavior (B), or experimental feature (E).

Reference: Databricks Documentation – Configure Structured Streaming trigger intervals, Databricks Documentation – Trigger.
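
A hedged sketch of the full pattern (source and target table names and the checkpoint path are assumptions, since the question's code block is an image):

(spark.readStream
    .table("source_table")
    .writeStream
    .trigger(processingTime="5 seconds")   # run a micro-batch every 5 seconds
    .option("checkpointLocation", "/tmp/checkpoints/target_table")
    .toTable("target_table"))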

Question #32

A dataset has been defined using Delta Live Tables and includes an expectations clause:

CONSTRAINT valid_timestamp EXPECT (timestamp > ‘2020-01-01’) ON VIOLATION DROP ROW

What is the expected behavior when a batch of data containing records that violate these constraints is processed?

  • A . Records that violate the expectation are dropped from the target dataset and loaded into a quarantine table.
  • B . Records that violate the expectation are added to the target dataset and flagged as invalid in a field added to the target dataset.
  • C . Records that violate the expectation are dropped from the target dataset and recorded as invalid in the event log.
  • D . Records that violate the expectation are added to the target dataset and recorded as invalid in the event log.
  • E . Records that violate the expectation cause the job to fail.

Correct Answer: C

Explanation:

Delta Live Tables expectations are optional clauses that apply data quality checks on each record passing through a query. An expectation consists of a description, a boolean statement, and an action to take when a record fails the expectation. The ON VIOLATION clause specifies the action to take, which can be one of the following: warn, drop, or fail. The drop action means that invalid records are dropped from the target dataset before data is written to the target. The failure is reported as a metric for the dataset, which can be viewed by querying the Delta Live Tables event log. The event log contains information such as the number of records that violate an expectation, the number of records dropped, and the number of records written to the target dataset.

Reference: Manage data quality with Delta Live Tables

Monitor Delta Live Tables pipelines

Delta Live Tables SQL language reference
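
For comparison, a hedged sketch of the same expectation expressed in the Delta Live Tables Python API (the dataset names are assumptions; this code only runs inside a DLT pipeline):

import dlt

@dlt.table
@dlt.expect_or_drop("valid_timestamp", "timestamp > '2020-01-01'")
def cleaned_events():
    # Rows failing the expectation are dropped and counted in the event log
    return dlt.read_stream("raw_events")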

Question #33

Which of the following describes when to use the CREATE STREAMING LIVE TABLE (formerly CREATE INCREMENTAL LIVE TABLE) syntax over the CREATE LIVE TABLE syntax when creating Delta Live Tables (DLT) tables using SQL?

  • A . CREATE STREAMING LIVE TABLE should be used when the subsequent step in the DLT pipeline is static.
  • B . CREATE STREAMING LIVE TABLE should be used when data needs to be processed incrementally.
  • C . CREATE STREAMING LIVE TABLE is redundant for DLT and it does not need to be used.
  • D . CREATE STREAMING LIVE TABLE should be used when data needs to be processed through complicated aggregations.
  • E . CREATE STREAMING LIVE TABLE should be used when the previous step in the DLT pipeline is static.

Correct Answer: B

Explanation:

A streaming live table or view processes data that has been added only since the last pipeline update. Streaming tables and views are stateful; if the defining query changes, new data will be processed based on the new query and existing data is not recomputed. This is useful when data needs to be processed incrementally, such as when ingesting streaming data sources or performing incremental loads from batch data sources. A live table or view, on the other hand, may be entirely computed when possible to optimize computation resources and time. This is suitable when data needs to be processed in full, such as when performing complex transformations or aggregations that require scanning all the data.

Reference: Difference between LIVE TABLE and STREAMING LIVE TABLE, CREATE STREAMING TABLE, Load data using streaming tables in Databricks SQL.

Question #34

A data engineer is designing a data pipeline. The source system generates files in a shared directory that is also used by other processes. As a result, the files should be kept as is and will accumulate in the directory. The data engineer needs to identify which files are new since the previous run in the pipeline, and set up the pipeline to only ingest those new files with each run.

Which of the following tools can the data engineer use to solve this problem?

  • A . Unity Catalog
  • B . Delta Lake
  • C . Databricks SQL
  • D . Data Explorer
  • E . Auto Loader

Correct Answer: E

Explanation:

Auto Loader is a tool that can incrementally and efficiently process new data files as they arrive in cloud storage without any additional setup. Auto Loader provides a Structured Streaming source called cloudFiles, which automatically detects and processes new files in a given input directory path on the cloud file storage. Auto Loader also tracks the ingestion progress and ensures exactly-once semantics when writing data into Delta Lake. Auto Loader can ingest various file formats, such as JSON, CSV, XML, PARQUET, AVRO, ORC, TEXT, and BINARYFILE. Auto Loader has support for both Python and SQL in Delta Live Tables, which are a declarative way to build production-quality data pipelines with Databricks.

Reference: What is Auto Loader?, Get started with Databricks Auto Loader, Auto Loader in Delta Live Tables
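
A hedged sketch of an Auto Loader stream over the shared directory (paths, file format, and table name are assumptions; the cloudFiles source is available on Databricks):

(spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/tmp/schemas/landing")
    .load("/mnt/shared/landing")
    .writeStream
    .option("checkpointLocation", "/tmp/checkpoints/landing")
    .toTable("bronze_landing"))

# Auto Loader records which files it has ingested, so each run picks up only
# files that have arrived since the previous run, leaving the originals in place.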

Question #35

Which of the following Structured Streaming queries is performing a hop from a Silver table to a Gold table?

A)

B)

C)

D)

E)

  • A . Option A
  • B . Option B
  • C . Option C
  • D . Option D
  • E . Option E

Correct Answer: E

Explanation:

The best practice is to use "complete" as the output mode instead of "append" when writing aggregated tables. Since the Gold layer holds the final aggregated tables, the only option that writes with the complete output mode is option E.
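
For illustration, a hedged sketch of a Silver-to-Gold aggregation written with the complete output mode (table and column names are assumptions, since the options are images):

from pyspark.sql import functions as F

(spark.readStream
    .table("sales_silver")
    .groupBy("store_id")
    .agg(F.sum("amount").alias("total_amount"))
    .writeStream
    .outputMode("complete")   # rewrite the full aggregate on every micro-batch
    .option("checkpointLocation", "/tmp/checkpoints/sales_gold")
    .toTable("sales_gold"))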

Question #36

A data engineer has three tables in a Delta Live Tables (DLT) pipeline. They have configured the pipeline to drop invalid records at each table. They notice that some data is being dropped due to quality concerns at some point in the DLT pipeline. They would like to determine at which table in their pipeline the data is being dropped.

Which of the following approaches can the data engineer take to identify the table that is dropping the records?

  • A . They can set up separate expectations for each table when developing their DLT pipeline.
  • B . They cannot determine which table is dropping the records.
  • C . They can set up DLT to notify them via email when records are dropped.
  • D . They can navigate to the DLT pipeline page, click on each table, and view the data quality statistics.
  • E . They can navigate to the DLT pipeline page, click on the “Error” button, and review the present errors.

Correct Answer: D

Explanation:

One of the features of DLT is that it provides data quality metrics for each dataset in the pipeline, such as the number of records that pass or fail expectations, the number of records that are dropped, and the number of records that are written to the target. These metrics can be accessed from the DLT pipeline page, where the data engineer can click on each table and view the data quality statistics for the latest update or any previous update. This way, they can identify which table is dropping the records and why.

Reference: Monitor Delta Live Tables pipelines

Manage data quality with Delta Live Tables

Question #37

A data engineer has a single-task Job that runs each morning before they begin working. After identifying an upstream data issue, they need to set up another task to run a new notebook prior to the original task.

Which of the following approaches can the data engineer use to set up the new task?

  • A . They can clone the existing task in the existing Job and update it to run the new notebook.
  • B . They can create a new task in the existing Job and then add it as a dependency of the original task.
  • C . They can create a new task in the existing Job and then add the original task as a dependency of the new task.
  • D . They can create a new job from scratch and add both tasks to run concurrently.
  • E . They can clone the existing task to a new Job and then edit it to run the new notebook.

Correct Answer: B

Explanation:

To set up the new task to run a new notebook prior to the original task in a single-task Job, the data engineer can use the following approach: In the existing Job, create a new task that corresponds to the new notebook that needs to be run. Set up the new task with the appropriate configuration, specifying the notebook to be executed and any necessary parameters or dependencies. Once the new task is created, designate it as a dependency of the original task in the Job configuration. This ensures that the new task is executed before the original task.

Question #38

An engineering manager wants to monitor the performance of a recent project using a Databricks SQL query. For the first week following the project’s release, the manager wants the query results to be updated every minute. However, the manager is concerned that the compute resources used for the query will be left running and cost the organization a lot of money beyond the first week of the project’s release.

Which of the following approaches can the engineering team use to ensure the query does not cost the organization any money beyond the first week of the project’s release?

  • A . They can set a limit to the number of DBUs that are consumed by the SQL Endpoint.
  • B . They can set the query’s refresh schedule to end after a certain number of refreshes.
  • C . They cannot ensure the query does not cost the organization money beyond the first week of the project’s release.
  • D . They can set a limit to the number of individuals that are able to manage the query’s refresh schedule.
  • E . They can set the query’s refresh schedule to end on a certain date in the query scheduler.

Correct Answer: E

Explanation:

In Databricks SQL, you can use scheduled query executions to update your dashboards or enable routine alerts. By default, your queries do not have a schedule. To set the schedule, you can use the dropdown pickers to specify the frequency, period, starting time, and time zone. You can also choose to end the schedule on a certain date by selecting the End date checkbox and picking a date from the calendar. This way, you can ensure that the query does not run beyond the first week of the project’s release and does not incur any additional cost. Option A is incorrect, as setting a limit to the number of DBUs does not stop the query from running. Option B is incorrect, as there is no option to end the schedule after a certain number of refreshes. Option C is incorrect, as there is a way to ensure the query does not cost the organization money beyond the first week of the project’s release. Option D is incorrect, as setting a limit to the number of individuals who can manage the query’s refresh schedule does not affect the query’s execution or cost.

Reference: Schedule a query, Schedule a query – Azure Databricks – Databricks SQL

Question #39

A data analysis team has noticed that their Databricks SQL queries are running too slowly when connected to their always-on SQL endpoint. They claim that this issue is present when many members of the team are running small queries simultaneously. They ask the data engineering team for help. The data engineering team notices that each of the team’s queries uses the same SQL endpoint.

Which of the following approaches can the data engineering team use to improve the latency of the team’s queries?

  • A . They can increase the cluster size of the SQL endpoint.
  • B . They can increase the maximum bound of the SQL endpoint’s scaling range.
  • C . They can turn on the Auto Stop feature for the SQL endpoint.
  • D . They can turn on the Serverless feature for the SQL endpoint.
  • E . They can turn on the Serverless feature for the SQL endpoint and change the Spot Instance Policy to “Reliability Optimized.”

Correct Answer: B

Explanation:

https://community.databricks.com/t5/data-engineering/sequential-vs-concurrency-optimization-questions-from-query/td-p/36696

Question #40

A data engineer wants to schedule their Databricks SQL dashboard to refresh once per day, but they only want the associated SQL endpoint to be running when it is necessary.

Which of the following approaches can the data engineer use to minimize the total running time of the SQL endpoint used in the refresh schedule of their dashboard?

  • A . They can ensure the dashboard’s SQL endpoint matches each of the queries’ SQL endpoints.
  • B . They can set up the dashboard’s SQL endpoint to be serverless.
  • C . They can turn on the Auto Stop feature for the SQL endpoint.
  • D . They can reduce the cluster size of the SQL endpoint.
  • E . They can ensure the dashboard’s SQL endpoint is not one of the included query’s SQL endpoint.

Correct Answer: B

Explanation:

A serverless SQL endpoint is a compute resource that is automatically managed by Databricks and scales up or down based on the workload. A serverless SQL endpoint can be used to run queries and dashboards without requiring manual configuration or management. A serverless SQL endpoint is only active when it is needed and shuts down automatically when idle, minimizing the total running time and cost. A serverless SQL endpoint can be created and assigned to a dashboard using the Databricks SQL UI or the SQL Analytics API.

Reference: Create a serverless SQL endpoint

Assign a SQL endpoint to a dashboard

SQL Analytics API
