You have an Azure Databricks workspace and an Azure Data Lake Storage Gen2 account named storage1. New files are uploaded daily to storage1.
You need to recommend a solution that uses storage1 as a structured streaming source. The solution must meet the following requirements:
• Incrementally process new files as they are uploaded to storage1.
• Minimize implementation and maintenance effort.
• Minimize the cost of processing millions of files.
• Support schema inference and schema drift.
What should you include in the recommendation?
A. Auto Loader
B. Apache Spark FileStreamSource
C. COPY INTO
D. Azure Data Factory
Answer: A
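Explanation: Auto Loader incrementally processes new files as they land in cloud storage, scales cost-effectively to millions of files through incremental listing or file notifications, and supports schema inference and schema evolution (drift). Apache Spark FileStreamSource relists the directory each micro-batch, which becomes costly at millions of files; COPY INTO is a batch SQL command rather than a streaming source; and Azure Data Factory is not a Structured Streaming source at all.
A minimal PySpark sketch of an Auto Loader stream against storage1 follows. The container name, directory paths, input format, and target table name are hypothetical placeholders, not values from the question.

    # Minimal Auto Loader sketch; paths, format, and table name are assumed.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    base = "abfss://landing@storage1.dfs.core.windows.net"  # hypothetical container

    stream = (
        spark.readStream
        .format("cloudFiles")                                    # Auto Loader source
        .option("cloudFiles.format", "json")                     # incoming file format (assumed)
        .option("cloudFiles.schemaLocation", f"{base}/_schema")  # enables schema inference and drift handling
        .load(f"{base}/daily/")
    )

    (
        stream.writeStream
        .option("checkpointLocation", f"{base}/_checkpoint")  # tracks files already processed
        .trigger(availableNow=True)                           # process only new files, then stop
        .toTable("bronze_daily_files")                        # hypothetical Delta target table
    )

Because the checkpoint and schema locations persist state between runs, each daily run picks up only the newly uploaded files with no custom bookkeeping, which is what keeps implementation and maintenance effort low.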