AliCloud ACA-BIGDATA1 ACA Big Data Certification Exam Online Training
- Exam Code: ACA-BIGDATA1
- Exam Name: ACA Big Data Certification Exam
- Certification Provider: AliCloud
- Latest update: Dec 24,2024
A business flow in DataWorks organizes different node task types by business type; this structure makes business code easier to develop and maintain.
Which of the following descriptions of node types is INCORRECT? Score 2
- A . A zero-load node is a control node that does not generate any data. The virtual node is generally used as the root node for planning the overall node workflow.
- B . An ODPS SQL task allows you to edit and maintain SQL code on the web, and to easily run, debug, and collaborate on the code.
- C . The PyODPS node in DataWorks can be integrated with MaxCompute Python SDK. You can edit the Python code to operate MaxCompute on a PyODPS node in DataWorks.
- D . The SHELL node supports standard SHELL syntax and interactive syntax. The SHELL task can run on the default resource group.
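As an illustration of option B, an ODPS SQL node runs standard MaxCompute SQL. A minimal sketch, assuming hypothetical table, column, and partition names:

```sql
-- Hypothetical ODPS SQL: create a partitioned table, then aggregate one partition.
CREATE TABLE IF NOT EXISTS ods_sales (shop_id STRING, amount BIGINT)
PARTITIONED BY (ds STRING);

SELECT   shop_id, SUM(amount) AS total_amount
FROM     ods_sales
WHERE    ds = '20240101'
GROUP BY shop_id;
```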
DataV is a powerful yet accessible data visualization tool featuring geographic information systems that allow rapid interpretation of data to reveal relationships, patterns, and trends. When a DataV screen is ready, the work can be embedded into the enterprise's existing portal through ______.
- A . URL after the release
- B . URL in the preview
- C . MD5 code obtained after the release
- D . Jar package imported after the release
DataWorks can be used to develop and configure data sync tasks.
Which of the following statements are correct? (Number of correct answers: 3) Score 2
- A . A data source must first be configured in Project Management before it can be added to a sync task
- B . Some of the columns in source tables can be extracted to create a mapping relationship between fields, but constants or variables cannot be added
- C . For the extraction of source data, "where" filtering clause can be referenced as the criteria of incremental synchronization
- D . Clean-up rules can be set to clear or preserve existing data before data write
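Option C describes the common incremental-synchronization pattern: the reader's data filter restricts extraction to the current business date. A sketch of such a filter, assuming a hypothetical partition column `ds` and the DataWorks scheduling parameter `${bizdate}`:

```sql
-- Appended to the reader's WHERE clause; each scheduled run extracts
-- only the partition for its own business date, i.e. one day's increment.
ds = '${bizdate}'
```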
You are working on a project where you need to chain together MapReduce, Hive jobs. You also need the ability to use forks, decision points, and path joins.
Which ecosystem project should you use to perform these actions? Score 2
- A . Spark
- B . HUE
- C . Zookeeper
- D . Oozie
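Oozie expresses exactly these constructs in its workflow definition. A minimal sketch of a workflow with a fork, a join, and a decision node; the action names are hypothetical and the MapReduce/Hive configuration bodies are elided:

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="split"/>
    <!-- Fork: run the MapReduce and Hive steps in parallel -->
    <fork name="split">
        <path start="mr-step"/>
        <path start="hive-step"/>
    </fork>
    <action name="mr-step">
        <map-reduce><!-- resource-manager, name-node, configuration elided --></map-reduce>
        <ok to="merge"/>
        <error to="fail"/>
    </action>
    <action name="hive-step">
        <hive xmlns="uri:oozie:hive-action:0.5"><!-- script elided --></hive>
        <ok to="merge"/>
        <error to="fail"/>
    </action>
    <!-- Join: wait for both forked paths before continuing -->
    <join name="merge" to="check"/>
    <!-- Decision: branch on a workflow EL expression -->
    <decision name="check">
        <switch>
            <case to="end">${wf:lastErrorNode() eq null}</case>
            <default to="fail"/>
        </switch>
    </decision>
    <kill name="fail"><message>Workflow failed</message></kill>
    <end name="end"/>
</workflow-app>
```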
MaxCompute supports two charging methods: Pay-As-You-Go and Subscription (CU cost). With Pay-As-You-Go, each task is billed according to the size of its input data. Under this charging method, the billing items do NOT include charges for ______. Score 2
- A . Data upload
- B . Data download
- C . Computing
- D . Storage
In MaxCompute, if an error occurs during Tunnel transmission due to the network or the Tunnel service, the user can resume the interrupted operation with the command tunnel resume; Score 1
- A . True
- B . False
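The behavior in this question can be sketched as a MaxCompute client (odpscmd) session; the file and table names below are hypothetical:

```
odps@ my_project> tunnel upload log.txt sale_detail;
...network error interrupts the upload...
odps@ my_project> tunnel resume;
```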
To ensure smooth processing of tasks in the DataWorks data development kit, you must create an AccessKey. An AccessKey is primarily used for access permission verification between various Alibaba Cloud products. An AccessKey consists of two parts: ____. (Number of correct answers: 2) Score 2
- A . Access Username
- B . Access Key ID
- C . Access Key Secret
- D . Access Password
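In practice, the AccessKey ID identifies the caller while the AccessKey Secret signs the request. The sketch below shows the general HMAC-SHA1 signing pattern used by Alibaba Cloud RPC-style APIs; the string-to-sign construction is simplified and the credential values are dummies:

```python
import base64
import hashlib
import hmac

def sign(access_key_secret: str, string_to_sign: str) -> str:
    """Sign a canonicalized request string with the AccessKey Secret.

    Alibaba Cloud RPC-style APIs append '&' to the secret and use the
    result as the HMAC-SHA1 key; the signature is the Base64 digest.
    """
    key = (access_key_secret + "&").encode("utf-8")
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha1).digest()
    return base64.b64encode(digest).decode("utf-8")

# Dummy credentials for illustration only -- never hard-code real keys.
access_key_id = "LTAI4ExampleId"
access_key_secret = "exampleSecret"

# The AccessKey ID travels with the request; only the signature proves
# possession of the secret.
signature = sign(access_key_secret, "GET&%2F&Action%3DDescribeRegions")
print(access_key_id, signature)
```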
Scenario: Jack is the administrator of project prj1. The project involves a large volume of sensitive data such as bank accounts and medical records, and Jack wants to protect the data properly.
Which of the following statements is necessary?
- A . set ProjectACL=true;
- B . add accountprovider ram;
- C . set ProjectProtection=true;
- D . use prj1;
Resources are a distinctive concept in MaxCompute. A resource is required if you want to use a user-defined function (UDF) or MapReduce. For example, after preparing a UDF, you must upload the compiled JAR package to MaxCompute as a resource.
Which of the following objects are MaxCompute resources? (Number of correct answers: 4) Score 2
- A . Files
- B . Tables: Tables in MaxCompute
- C . Jar: Compiled Java jar package
- D . Archive: Recognize the compression type according to the postfix in the resource name
- E . ACL Policy
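The resource types in options A through D correspond to the MaxCompute client's add commands. A sketch with hypothetical file and table names:

```
add file my_data.txt;
add jar my_udf.jar;
add archive my_ref.zip;
add table my_table partition (ds='20240101') as my_table_res;
```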