A data analyst is using AWS Glue to organize, cleanse, validate, and format a 200 GB dataset. The data analyst triggered the job to run with the Standard worker type. After 3 hours, the AWS Glue job status is still RUNNING. Logs from the job run show no error codes. The data analyst wants to improve the job execution time without overprovisioning.
Which actions should the data analyst take?
A. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the executor-cores job parameter.
B. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the maximum capacity job parameter.
C. Enable job metrics in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the spark.yarn.executor.memoryOverhead job parameter.
D. Enable job bookmarks in AWS Glue to estimate the number of data processing units (DPUs). Based on the profiled metrics, increase the value of the num-executors job parameter.
Answer: B
Explanation:
Enabling job metrics publishes profiling data (such as executor load and memory usage) to Amazon CloudWatch, which the analyst can use to estimate how many DPUs the job actually needs. Increasing the maximum capacity parameter then allocates that number of DPUs to the job run, scaling it appropriately without overprovisioning. Job bookmarks (options A and D) only track previously processed data and do not profile capacity, and the Spark-level parameters in options A, C, and D are not how AWS Glue capacity is tuned.
Reference: https://docs.aws.amazon.com/glue/latest/dg/monitor-debug-capacity.html
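For context, here is a minimal boto3 sketch of how the two settings in option B might be applied when starting a job run. The job name and the DPU value are hypothetical; the right MaxCapacity should come from the profiled CloudWatch metrics.

```python
import boto3

glue = boto3.client("glue")

# Start a run with job metrics enabled and more DPUs allocated.
# "--enable-metrics" is Glue's special job parameter that publishes
# profiling metrics to CloudWatch; MaxCapacity is the "maximum
# capacity" parameter (measured in DPUs) that option B refers to.
response = glue.start_job_run(
    JobName="my-etl-job",                   # hypothetical job name
    Arguments={"--enable-metrics": "true"},  # the key alone is sufficient per Glue docs
    MaxCapacity=20.0,                        # raise based on the profiled metrics
)
print(response["JobRunId"])
```

Note that MaxCapacity is mutually exclusive with the WorkerType/NumberOfWorkers parameters, so it applies to jobs launched with the Standard worker type as described in the question.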