Exam4Training

Microsoft DP-200 Implementing an Azure Data Solution Online Training

Question #1

Topic 1, Proseware Inc

Background

Proseware, Inc. develops and manages a product named Poll Taker. The product is used for delivering public opinion polling and analysis.

Polling data comes from a variety of sources, including online surveys, house-to-house interviews, and booths at public events.

Polling data

Polling data is stored in one of two locations:

– An on-premises Microsoft SQL Server 2019 database named PollingData

– Azure Data Lake Gen 2

Data in Data Lake is queried by using PolyBase

Poll metadata

Each poll has associated metadata with information about the poll including the date and number of respondents. The data is stored as JSON.

Phone-based polling

Security

– Phone-based poll data must only be uploaded by authorized users from authorized devices

– Contractors must not have access to any polling data other than their own

– Access to polling data must be set on a per-Active Directory user basis

Data migration and loading

– All data migration processes must use Azure Data Factory

– All data migrations must run automatically during non-business hours

– Data migrations must be reliable and retry when needed

Performance

After six months, raw polling data should be moved to a storage account. The storage must be available in the event of a regional disaster. The solution must minimize costs.

Deployments

– All deployments must be performed by using Azure DevOps. Deployments must use templates that can be used in multiple environments

– No credentials or secrets should be used during deployments

Reliability

All services and processes must be resilient to a regional Azure outage.

Monitoring

All Azure services must be monitored by using Azure Monitor. On-premises SQL Server performance must be monitored.

You need to ensure that phone-based polling data can be analyzed in the PollingData database.

How should you configure Azure Data Factory?

  • A . Use a tumbling schedule trigger
  • B . Use an event-based trigger
  • C . Use a schedule trigger
  • D . Use manual execution

Correct Answer: C

Explanation:

When creating a schedule trigger, you specify a schedule (start date, recurrence, end date etc.) for the trigger, and associate with a Data Factory pipeline.

Scenario:

All data migration processes must use Azure Data Factory

All data migrations must run automatically during non-business hours

References: https://docs.microsoft.com/en-us/azure/data-factory/how-to-create-schedule-trigger
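The schedule trigger described above can be sketched as the JSON definition Data Factory expects. This is a minimal illustration only: the trigger and pipeline names are hypothetical, and a 02:00 daily run stands in for "non-business hours".

```python
import json

# Hypothetical sketch of an ADF schedule-trigger definition. Names
# (NightlyPollingDataTrigger, CopyPollingDataPipeline) are illustrative,
# not taken from a real factory.
trigger = {
    "name": "NightlyPollingDataTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2019-01-01T02:00:00Z",  # runs outside business hours
                "timeZone": "UTC",
                "schedule": {"hours": [2], "minutes": [0]},
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "type": "PipelineReference",
                    "referenceName": "CopyPollingDataPipeline",  # hypothetical
                }
            }
        ],
    },
}

print(json.dumps(trigger, indent=2))
```

The recurrence object (frequency, interval, startTime, schedule) is what distinguishes a schedule trigger from the event-based and tumbling-window alternatives in the answer choices.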

Question #2

DRAG DROP

You need to provision the polling data storage account.

How should you configure the storage account? To answer, drag the appropriate Configuration Value to the correct Setting. Each Configuration Value may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Account type: StorageV2

You must create new storage accounts as type StorageV2 (general-purpose V2) to take advantage of Data Lake Storage Gen2 features.

Scenario: Polling data is stored in one of the two locations:

✑ An on-premises Microsoft SQL Server 2019 database named PollingData

✑ Azure Data Lake Gen 2

Data in Data Lake is queried by using PolyBase

Replication type: RA-GRS

Scenario: All services and processes must be resilient to a regional Azure outage.

Geo-redundant storage (GRS) is designed to provide at least 99.99999999999999% (16 9’s) durability of objects over a given year by replicating your data to a secondary region that is hundreds of miles away from the primary region. If your storage account has GRS enabled, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn’t recoverable.

If you opt for GRS, you have two related options to choose from:

✑ GRS replicates your data to another data center in a secondary region, but that data is available to be read only if Microsoft initiates a failover from the primary to secondary region.

✑ Read-access geo-redundant storage (RA-GRS) is based on GRS. RA-GRS replicates your data to another data center in a secondary region, and also provides you with the option to read from the secondary region. With RA-GRS, you can read from the secondary region regardless of whether Microsoft initiates a failover from the primary to secondary region.

References: https://docs.microsoft.com/bs-cyrl-ba/azure/storage/blobs/data-lake-storage-quickstart-create-account

https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy-grs
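Put together, the two selections amount to the following account settings, shown here as the parameters you might feed to an ARM template or SDK call. This is a sketch under assumptions: the account name is hypothetical, and the key names are illustrative rather than any specific API's.

```python
# Hypothetical sketch of the polling-data storage account configuration.
polling_storage = {
    "name": "pollingdatastore",   # hypothetical account name
    "kind": "StorageV2",          # general-purpose v2, required for Data Lake Storage Gen2
    "sku": "Standard_RAGRS",      # read-access geo-redundant replication
    "is_hns_enabled": True,       # hierarchical namespace for Gen2
}

print("storage configuration:", polling_storage)
```

RA-GRS keeps a readable secondary copy in a paired region, which is what makes the account usable during a regional outage, per the reliability requirement.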


Question #3

DRAG DROP

You need to ensure that phone-based polling data can be analyzed in the PollingData database.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Correct Answer:

Explanation:

Scenario:

All deployments must be performed by using Azure DevOps. Deployments must use templates that can be used in multiple environments

No credentials or secrets should be used during deployments


Question #4

HOTSPOT

You need to ensure phone-based polling data upload reliability requirements are met.

How should you configure monitoring? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: FileCapacity

FileCapacity is the amount of storage used by the storage account’s File service in bytes.

Box 2: Avg

The aggregation type of the FileCapacity metric is Avg.

Scenario:

All services and processes must be resilient to a regional Azure outage.

All Azure services must be monitored by using Azure Monitor. On-premises SQL Server performance must be monitored.

References: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/metrics-supported
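The two selections can be summarized as the parameters of a metrics query against the storage account's File service. The structure below is illustrative shorthand, not a specific SDK's request shape.

```python
# Hypothetical sketch of the monitoring selection described above.
metric_query = {
    "resource_type": "Microsoft.Storage/storageAccounts/fileServices",
    "metric": "FileCapacity",   # bytes used by the storage account's File service
    "aggregation": "Average",   # the metric's supported aggregation type
}

print(metric_query)
```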


Question #5

HOTSPOT

You need to ensure that Azure Data Factory pipelines can be deployed.

How should you configure authentication and authorization for deployments? To answer, select the appropriate options in the answer choices. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

The way you control access to resources using RBAC is to create role assignments. This is a key concept to understand: it's how permissions are enforced. A role assignment consists of three elements: security principal, role definition, and scope.

Scenario:

No credentials or secrets should be used during deployments

Phone-based poll data must only be uploaded by authorized users from authorized devices

Contractors must not have access to any polling data other than their own

Access to polling data must be set on a per-Active Directory user basis

References: https://docs.microsoft.com/en-us/azure/role-based-access-control/overview
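The three elements of a role assignment named above can be sketched concretely. All values here are hypothetical examples: the service principal name and resource group are invented, though Data Factory Contributor is a real built-in role.

```python
# Illustrative sketch of an RBAC role assignment's three elements.
role_assignment = {
    "security_principal": "adf-deployment-sp",          # who: a service principal (hypothetical name)
    "role_definition": "Data Factory Contributor",      # what they can do
    "scope": "/subscriptions/<sub>/resourceGroups/polling-rg",  # where it applies (hypothetical)
}

print(role_assignment)
```

Using a service principal (or managed identity) as the security principal is what lets deployments run without embedding credentials or secrets, per the scenario requirement.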


Question #6

HOTSPOT

You need to ensure polling data security requirements are met.

Which security technologies should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: Azure Active Directory user

Scenario:

Access to polling data must be set on a per-Active Directory user basis

Box 2: Database Scoped Credential

SQL Server uses a database scoped credential to access non-public Azure blob storage or Kerberos-secured Hadoop clusters with PolyBase.

PolyBase cannot authenticate by using Azure AD authentication.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql
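A database scoped credential for PolyBase access to the data lake would look roughly like the T-SQL below, generated here as strings. All identifiers (ADLSCredential, PollingDataLake, the abfss URL and the key placeholder) are hypothetical, and the exact external data source options vary by SQL Server version.

```python
# Hedged sketch: T-SQL for PolyBase access to non-public storage,
# using a database scoped credential. Identifiers are illustrative.
create_credential = """
CREATE DATABASE SCOPED CREDENTIAL ADLSCredential
WITH IDENTITY = 'user', SECRET = '<storage-account-key>';
""".strip()

create_source = """
CREATE EXTERNAL DATA SOURCE PollingDataLake
WITH (TYPE = HADOOP,
      LOCATION = 'abfss://polling@<account>.dfs.core.windows.net',
      CREDENTIAL = ADLSCredential);
""".strip()

print(create_credential)
print(create_source)
```

The credential holds the storage key because PolyBase cannot use Azure AD authentication; Azure AD users govern access to the polling data itself.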


Question #12

Validate configuration results and deploy the solution

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

Use the Azure Key Vault, not the Windows Certificate Store, to store the master key.

Note: The Master Key Configuration page is where you set up your CMK (Column Master Key) and select the key store provider where the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault


Question #13

You need to set up Azure Data Factory pipelines to meet data movement requirements.

Which integration runtime should you use?

  • A . self-hosted integration runtime
  • B . Azure-SSIS Integration Runtime
  • C . .NET Common Language Runtime (CLR)
  • D . Azure integration runtime

Correct Answer: A

Explanation:

The following table describes the capabilities and network support for each of the integration runtime types:

Scenario: The solution must support migrating databases that support external and internal application to Azure SQL Database. The migrated databases will be supported by Azure Data Factory pipelines for the continued movement, migration and updating of data both in the cloud and from local core business systems and repositories.

References: https://docs.microsoft.com/en-us/azure/data-factory/concepts-integration-runtime
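The decision the answer relies on can be expressed as a small helper: on-premises sources behind a firewall require a self-hosted integration runtime, while cloud-to-cloud movement can use the Azure integration runtime. This function is purely illustrative and not part of any Azure SDK.

```python
# Illustrative helper mirroring the integration-runtime reasoning above.
def choose_integration_runtime(source_is_on_premises: bool, uses_ssis: bool = False) -> str:
    if uses_ssis:
        # Lifting and shifting SSIS packages needs the Azure-SSIS IR.
        return "Azure-SSIS Integration Runtime"
    if source_is_on_premises:
        # Data behind a corporate firewall needs a self-hosted IR.
        return "self-hosted integration runtime"
    return "Azure integration runtime"

print(choose_integration_runtime(source_is_on_premises=True))
```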


Question #14

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You need to set up monitoring for tiers 6 through 8.

What should you configure?

  • A . extended events for average storage percentage that emails data engineers
  • B . an alert rule to monitor CPU percentage in databases that emails data engineers
  • C . an alert rule to monitor CPU percentage in elastic pools that emails data engineers
  • D . an alert rule to monitor storage percentage in databases that emails data engineers
  • E . an alert rule to monitor storage percentage in elastic pools that emails data engineers

Correct Answer: E

Explanation:

Scenario:

Tiers 6 through 8 must have unexpected resource storage usage immediately reported to data engineers.

Tier 3 and Tier 6 through Tier 8 applications must use database density on the same server and Elastic pools in a cost-effective manner.
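
The chosen alert rule can be summarized as a few settings, shown here as an illustrative dictionary loosely shaped like an Azure Monitor metric alert. The threshold and action wording are hypothetical; only the target (elastic pools) and the storage metric come from the answer.

```python
# Hypothetical sketch of the alert-rule configuration described above.
alert_rule = {
    "target_resource_type": "Microsoft.Sql/servers/elasticPools",
    "metric": "storage_percent",     # storage usage at the elastic-pool level
    "operator": "GreaterThan",
    "threshold": 80,                 # illustrative threshold
    "action": "email data engineers",
}

print(alert_rule)
```

Monitoring the pool rather than individual databases matches the scenario: the tiers share database density on the same server through elastic pools.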

Question #20

Validate configuration results and deploy the solution

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: A

Explanation:

We use the Azure Key Vault, not the Windows Certificate Store, to store the master key.

Note: The Master Key Configuration page is where you set up your CMK (Column Master Key) and select the key store provider where the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault


Question #21

DRAG DROP

You need to set up access to Azure SQL Database for Tier 7 and Tier 8 partners.

Which three actions should you perform in sequence? To answer, move the appropriate three actions from the list of actions to the answer area and arrange them in the correct order.


Correct Answer:

Explanation:

Tier 7 and Tier 8 data access is constrained to single endpoints managed by partners.

Step 1: Set the Allow Azure Services to Access Server setting to Disabled

Set Allow access to Azure services to OFF for the most secure configuration.

By default, access through the SQL Database firewall is enabled for all Azure services, under Allow access to Azure services. Choose OFF to disable access for all Azure services.

Note: The firewall pane has an ON/OFF button that is labeled Allow access to Azure services. The ON setting allows communications from all Azure IP addresses and all Azure subnets. These Azure IPs or subnets might not be owned by you. This ON setting is probably more open than you want your SQL Database to be. The virtual network rule feature offers much finer granular control.

Step 2: In the Azure portal, create a server firewall rule

Set up SQL Database server firewall rules

Server-level IP firewall rules apply to all databases within the same SQL Database server.

To set up a server-level firewall rule:
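
A server-level rule pinned to a single partner endpoint can be sketched as the T-SQL below, generated as a string. The rule name is hypothetical and the IP address is from the TEST-NET-3 documentation range; sp_set_firewall_rule is run in the logical server's master database.

```python
# Hedged sketch: a server-level firewall rule for one partner endpoint.
ip = "203.0.113.10"  # example partner endpoint (documentation address range)

rule = (
    "EXECUTE sp_set_firewall_rule "
    "@name = N'Tier7PartnerEndpoint', "          # hypothetical rule name
    f"@start_ip_address = '{ip}', @end_ip_address = '{ip}';"
)

print(rule)
```

Setting the start and end address to the same IP limits the rule to exactly one endpoint, matching the single-endpoint constraint for Tier 7 and Tier 8 partners.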


Question #28

HOTSPOT

You need to set up the Azure Data Factory JSON definition for Tier 10 data.

What should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: Connection String

To use storage account key authentication, you use the connectionString property, which specifies the information needed to connect to Blob Storage.

Mark this field as a SecureString to store it securely in Data Factory. You can also put the account key in Azure Key Vault and pull the accountKey configuration out of the connection string.

Box 2: Azure Blob

Tier 10 reporting data must be stored in Azure Blobs

References: https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage
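A minimal Blob storage linked service definition using the connection string property might look like the following; the service name and account values are placeholders:

```json
{
  "name": "AzureBlobStorageLinkedService",
  "properties": {
    "type": "AzureBlobStorage",
    "typeProperties": {
      "connectionString": {
        "type": "SecureString",
        "value": "DefaultEndpointsProtocol=https;AccountName=<accountname>;AccountKey=<accountkey>"
      }
    }
  }
}
```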


Question #29

HOTSPOT

You need to mask tier 1 data.

Which functions should you use? To answer, select the appropriate option in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

A: Default

Full masking according to the data types of the designated fields.

For string data types, use XXXX or fewer Xs if the size of the field is less than 4 characters (char, nchar, varchar, nvarchar, text, ntext).

B: email

C: Custom text

Custom String: masking method that exposes the first and last letters and adds a custom padding string in the middle: prefix,[padding],suffix

Tier 1 Database must implement data masking using the following masking logic:

References: https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking
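As a sketch, the three functions could be applied in T-SQL as follows; the table and column names are hypothetical:

```sql
-- Default: full mask (zero for numeric types, XXXX for strings)
ALTER TABLE dbo.Respondents
ALTER COLUMN Salary ADD MASKED WITH (FUNCTION = 'default()');

-- Email: exposes the first letter and the .com suffix (aXXX@XXXX.com)
ALTER TABLE dbo.Respondents
ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');

-- Custom text: expose the first character, pad the middle, expose nothing at the end
ALTER TABLE dbo.Respondents
ALTER COLUMN Phone ADD MASKED WITH (FUNCTION = 'partial(1,"XXXX",0)');
```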



Question #35

Validate configuration results and deploy the solution

Does the solution meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

Use the Azure Key Vault, not the Windows Certificate Store, to store the master key.

Note: The Master Key Configuration page is where you set up your CMK (Column Master Key) and select the key store provider where the CMK will be stored. Currently, you can store a CMK in the Windows certificate store, Azure Key Vault, or a hardware security module (HSM).

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-always-encrypted-azure-key-vault


Question #36

You need to process and query ingested Tier 9 data.

Which two options should you use? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A . Azure Notification Hub
  • B . Transact-SQL statements
  • C . Azure Cache for Redis
  • D . Apache Kafka statements
  • E . Azure Event Grid
  • F . Azure Stream Analytics

Correct Answer: E,F

Explanation:

Event Hubs provides a Kafka endpoint that can be used by your existing Kafka based applications as an alternative to running your own Kafka cluster.

You can stream data into Kafka-enabled Event Hubs and process it with Azure Stream Analytics, in the following steps:

✑ Create a Kafka-enabled Event Hubs namespace.

✑ Create a Kafka client that sends messages to the event hub.

✑ Create a Stream Analytics job that copies data from the event hub into an Azure blob storage.

Scenario:

Tier 9 reporting must be moved to Event Hubs, queried, and persisted in the same Azure region as the company’s main office

References: https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-kafka-stream-analytics


Question #37

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You need to implement diagnostic logging for Data Warehouse monitoring.

Which log should you use?

  • A . RequestSteps
  • B . DmsWorkers
  • C . SqlRequests
  • D . ExecRequests

Correct Answer: C

Explanation:

Scenario:

The Azure SQL Data Warehouse cache must be monitored when the database is being used.

References: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-sql-requests-transact-sq


Question #38

Topic 3, Litware, inc

Overview

General Overview

Litware, Inc. is an international car racing and manufacturing company that has 1,000 employees. Most employees are located in Europe. The company supports racing teams that compete in a worldwide racing series.

Physical Locations

Litware has two main locations: a main office in London, England, and a manufacturing plant in Berlin, Germany.

During each race weekend, 100 engineers set up a remote portable office by using a VPN to connect to the datacenter in the London office. The portable office is set up and torn down in approximately 20 different countries each year.

Existing environment

Race Central

During race weekends, Litware uses a primary application named Race Central. Each car has several sensors that send real-time telemetry data to the London datacenter. The data is used for real-time tracking of the cars.

Race Central also sends batch updates to an application named Mechanical Workflow by using Microsoft SQL Server Integration Services (SSIS).

The telemetry data is sent to a MongoDB database. A custom application then moves the data to databases in SQL Server 2017. The telemetry data in MongoDB has more than 500 attributes. The application changes the attribute names when the data is moved to SQL Server 2017.

The database structure contains both OLAP and OLTP databases.

Mechanical Workflow

Mechanical Workflow is used to track changes and improvements made to the cars during their lifetime.

Currently, Mechanical Workflow runs on SQL Server 2017 as an OLAP system.

Mechanical Workflow has a table named Table1 that is 1 TB. Large aggregations are performed on a single column of Table1.

Requirements

Planned Changes

Litware is in the process of rearchitecting its data estate to be hosted in Azure. The company plans to decommission the London datacentre and move all its applications to an Azure datacenter.

Technical Requirements

Litware identifies the following technical requirements:

– Data collection for Race Central must be moved to Azure Cosmos DB and Azure SQL Database. The data must be written to the Azure datacenter closest to each race and must converge in the least amount of time.

– The query performance of Race Central must be stable, and the administrative time it takes to perform optimizations must be minimized.

– The database for Mechanical Workflow must be moved to Azure SQL Data Warehouse.

– Transparent data encryption (TDE) must be enabled on all data stores, whenever possible.

– An Azure Data Factory pipeline must be used to move data from Cosmos DB to SQL Database for Race Central. If the data load takes longer than 20 minutes, configuration changes must be made to Data Factory.

– The telemetry data must migrate toward a solution that is native to Azure.

– The telemetry data must be monitored for performance issues. You must adjust the Cosmos DB Request Units per second (RU/s) to maintain a performance SLA while minimizing the cost of the RU/s.

Data Masking Requirements

During race weekends, visitors will be able to enter the remote portable offices. Litware is concerned that some proprietary information might be exposed.

The company identifies the following data masking requirements for the Race Central data that will be stored in SQL Database:

– Only show the last four digits of the values in a column named SuspensionSprings.

– Only show a zero value for the values in a column named ShockOilWeight.

HOTSPOT

Which masking functions should you implement for each column to meet the data masking requirements? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: Default

Default uses a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).

Only Show a zero value for the values in a column named ShockOilWeight.

Box 2: Credit Card

The Credit Card Masking method exposes the last four digits of the designated fields and adds a constant string as a prefix in the form of a credit card. Example: XXXX-XXXX-XXXX-1234

Only show the last four digits of the values in a column named SuspensionSprings.

Scenario:

The company identifies the following data masking requirements for the Race Central data that will be stored in SQL Database:

Only Show a zero value for the values in a column named ShockOilWeight.

Only show the last four digits of the values in a column named SuspensionSprings.
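A hedged T-SQL sketch of these two masks, assuming the columns live in a hypothetical dbo.RaceCentral table:

```sql
-- ShockOilWeight: default() returns zero for numeric data types
ALTER TABLE dbo.RaceCentral
ALTER COLUMN ShockOilWeight ADD MASKED WITH (FUNCTION = 'default()');

-- SuspensionSprings: credit-card-style mask exposing only the last four digits
ALTER TABLE dbo.RaceCentral
ALTER COLUMN SuspensionSprings
    ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
```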


Question #39

What should you implement to optimize SQL Database for Race Central to meet the technical requirements?

  • A . the sp_update stored procedure
  • B . automatic tuning
  • C . Query Store
  • D . the dbcc checkdb command

Correct Answer: A

Explanation:

Scenario: The query performance of Race Central must be stable, and the administrative time it takes to perform optimizations must be minimized.

sp_updatestats updates query optimization statistics on a table or indexed view. By default, the query optimizer already updates statistics as necessary to improve the query plan; in some cases, you can improve query performance by using UPDATE STATISTICS or the stored procedure sp_updatestats to update statistics more frequently than the default updates.

Question #40

HOTSPOT

Which masking functions should you implement for each column to meet the data masking requirements? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: Custom text/string: a masking method that exposes the first and/or last characters and adds a custom padding string in the middle.

Only show the last four digits of the values in a column named SuspensionSprings.

Box 2: Default

Default uses a zero value for numeric data types (bigint, bit, decimal, int, money, numeric, smallint, smallmoney, tinyint, float, real).

Scenario: Only show a zero value for the values in a column named ShockOilWeight.

Scenario:

The company identifies the following data masking requirements for the Race Central data that will be stored in SQL Database:

✑ Only show a zero value for the values in a column named ShockOilWeight.

✑ Only show the last four digits of the values in a column named SuspensionSprings.


Question #41

HOTSPOT

You need to build a solution to collect the telemetry data for Race Control.

What should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

API: MongoDB

Consistency level: Strong

Use the strongest consistency level, Strong, to minimize convergence time.

Scenario: The data must be written to the Azure datacentre closest to each race and must converge in the least amount of time.

References: https://docs.microsoft.com/en-us/azure/cosmos-db/consistency-levels


Question #42

HOTSPOT

You are building the data store solution for Mechanical Workflow.

How should you configure Table1? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Table Type: Hash distributed.

Hash-distributed tables improve query performance on large fact tables.

Index type: Clustered columnstore

Scenario:

Mechanical Workflow has a table named Table1 that is 1 TB. Large aggregations are performed on a single column of Table1.

References: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-distribute
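In SQL Data Warehouse, both options are set in the WITH clause of the table definition. A sketch using CTAS; the distribution column and source table are assumptions (a good distribution column is high-cardinality and evenly distributed):

```sql
-- Rebuild Table1 as hash-distributed with a clustered columnstore index
CREATE TABLE dbo.Table1_new
WITH
(
    DISTRIBUTION = HASH(CarId),        -- hypothetical distribution column
    CLUSTERED COLUMNSTORE INDEX        -- the default; stated here for clarity
)
AS
SELECT * FROM dbo.Table1;
```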


Question #43

On which data store you configure TDE to meet the technical requirements?

  • A . Cosmos DB
  • B . SQL Data Warehouse
  • C . SQL Database

Correct Answer: B

Explanation:

Scenario: Transparent data encryption (TDE) must be enabled on all data stores, whenever possible.

The database for Mechanical Workflow must be moved to Azure SQL Data Warehouse.

Question #44

What should you include in the Data Factory pipeline for Race Central?

  • A . a copy activity that uses a stored procedure as a source
  • B . a copy activity that contains schema mappings
  • C . a delete activity that has logging enabled
  • D . a filter activity that has a condition

Correct Answer: B

Explanation:

Scenario:

An Azure Data Factory pipeline must be used to move data from Cosmos DB to SQL Database for Race Central. If the data load takes longer than 20 minutes, configuration changes must be made to Data Factory.

The telemetry data is sent to a MongoDB database. A custom application then moves the data to databases in SQL Server 2017. The telemetry data in MongoDB has more than 500 attributes. The application changes the attribute names when the data is moved to SQL Server 2017.

You can copy data to or from Azure Cosmos DB (SQL API) by using Azure Data Factory pipeline.

Column mapping applies when copying data from source to sink. By default, the copy activity maps source data to sink by column names. You can specify explicit mapping to customize the column mapping based on your need. More specifically, the copy activity:

✑ Reads the data from the source and determines the source schema.

✑ Uses default column mapping to map columns by name, or applies explicit column mapping if specified.

✑ Writes the data to the sink.

References: https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-schema-and-type-mapping
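An explicit mapping is declared in the copy activity's translator section. A sketch, with hypothetical telemetry attribute paths and sink column names:

```json
"translator": {
  "type": "TabularTranslator",
  "mappings": [
    {
      "source": { "path": "$.engine.oilTemp" },
      "sink": { "name": "EngineOilTemp" }
    },
    {
      "source": { "path": "$.suspension.travel" },
      "sink": { "name": "SuspensionTravel" }
    }
  ]
}
```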

Question #45

You are monitoring the Data Factory pipeline that runs from Cosmos DB to SQL Database for Race Central.

You discover that the job takes 45 minutes to run.

What should you do to improve the performance of the job?

  • A . Decrease parallelism for the copy activities.
  • B . Increase the data integration units.
  • C . Configure the copy activities to use staged copy.
  • D . Configure the copy activities to perform compression.

Correct Answer: B

Explanation:

Performance tuning tips and optimization features. In some cases, when you run a copy activity in Azure Data Factory, you see a "Performance tuning tips" message on top of the copy activity monitoring, as shown in the following example. The message tells you the bottleneck that was identified for the given copy run. It also guides you on what to change to boost copy throughput. The performance tuning tips currently provide suggestions like: Use PolyBase when you copy data into Azure SQL Data Warehouse.

Increase Azure Cosmos DB Request Units or Azure SQL Database DTUs (Database Throughput Units) when the resource on the data store side is the bottleneck.

Remove the unnecessary staged copy.

References: https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance

Question #46

Which two metrics should you use to identify the appropriate RU/s for the telemetry data? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A . Number of requests
  • B . Number of requests exceeded capacity
  • C . End to end observed read latency at the 99th percentile
  • D . Session consistency
  • E . Data + Index storage consumed
  • F . Avg Throughput/s

Correct Answer: A,E

Explanation:

Scenario: The telemetry data must be monitored for performance issues. You must adjust the Cosmos DB Request Units per second (RU/s) to maintain a performance SLA while minimizing the cost of the RU/s.

With Azure Cosmos DB, you pay for the throughput you provision and the storage you consume on an hourly basis.

While you estimate the number of RUs per second to provision, consider the following factors:

Item size: As the size of an item increases, the number of RUs consumed to read or write the item also increases.

Question #47

Topic 4, ADatum Corporation

Case study

Overview

ADatum Corporation is a retailer that sells products through two sales channels: retail stores and a website.

Existing Environment

ADatum has one database server that has Microsoft SQL Server 2016 installed. The server hosts three mission-critical databases named SALESDB, DOCDB, and REPORTINGDB.

SALESDB collects data from the stores and the website.

DOCDB stores documents that connect to the sales data in SALESDB. The documents are stored in two different JSON formats based on the sales channel.

REPORTINGDB stores reporting data and contains several columnstore indexes. A daily process creates reporting data in REPORTINGDB from the data in SALESDB. The process is implemented as a SQL Server Integration Services (SSIS) package that runs a stored procedure from SALESDB.

Requirements

Planned Changes

ADatum plans to move the current data infrastructure to Azure.

The new infrastructure has the following requirements:

– Migrate SALESDB and REPORTINGDB to an Azure SQL database.

– Migrate DOCDB to Azure Cosmos DB.

– The sales data, including the documents in JSON format, must be gathered as it arrives and analyzed online by using Azure Stream Analytics. The analytic process will perform aggregations that must be done continuously, without gaps, and without overlapping.

– As they arrive, all the sales documents in JSON format must be transformed into one consistent format.

– Azure Data Factory will replace the SSIS process of copying the data from SALESDB to REPORTINGDB.

Technical Requirements

The new Azure data infrastructure must meet the following technical requirements:

– Data in SALESDB must be encrypted by using Transparent Data Encryption (TDE). The encryption must use your own key.

– SALESDB must be restorable to any given minute within the past three weeks.

– Real-time processing must be monitored to ensure that workloads are sized properly based on actual usage patterns.

– Missing indexes must be created automatically for REPORTINGDB.

– Disk IO, CPU, and memory usage must be monitored for SALESDB.

You need to implement event processing by using Stream Analytics to produce consistent JSON documents.

Which three actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.

  • A . Define an output to Cosmos DB.
  • B . Define a query that contains a JavaScript user-defined aggregates (UDA) function.
  • C . Define a reference input.
  • D . Define a transformation query.
  • E . Define an output to Azure Data Lake Storage Gen2.
  • F . Define a stream input.

Correct Answer: D,E,F

Explanation:

DOCDB stores documents that connect to the sales data in SALESDB. The documents are stored in two different JSON formats based on the sales channel.

The sales data, including the documents in JSON format, must be gathered as it arrives and analyzed online by using Azure Stream Analytics. The analytic process will perform aggregations that must be done continuously, without gaps, and without overlapping.

As they arrive, all the sales documents in JSON format must be transformed into one consistent format.

Question #48

DRAG DROP

You need to replace the SSIS process by using Data Factory.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Correct Answer:

Explanation:

Scenario: A daily process creates reporting data in REPORTINGDB from the data in SALESDB. The process is implemented as a SQL Server Integration Services (SSIS) package that runs a stored procedure from SALESDB.

Step 1: Create a linked service to each database

Step 2: Create two datasets

You can create two datasets: InputDataset and OutputDataset. These datasets are of type AzureBlob. They refer to the Azure Storage linked service that you created in the previous section.

Step 3: Create a pipeline

You create and validate a pipeline with a copy activity that uses the input and output datasets.

Step 4: Add a copy activity

References: https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal


Question #49

DRAG DROP

You need to implement the encryption for SALESDB.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Correct Answer:

Explanation:

Data in SALESDB must be encrypted by using Transparent Data Encryption (TDE). The encryption must use your own key.

Step 1: Implement an Azure key vault

You must create an Azure Key Vault and Key to use for TDE

Step 2: Create a key

Step 3: From the settings of the Azure SQL database …

You turn transparent data encryption on and off on the database level.

References: https://docs.microsoft.com/en-us/azure/sql-database/transparent-data-encryption-byok-azure-sql-configure
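Once the Key Vault key is assigned as the TDE protector, encryption itself is toggled per database. A minimal T-SQL sketch:

```sql
-- Turn TDE on for the database (the BYOK protector is configured at the server level)
ALTER DATABASE SALESDB SET ENCRYPTION ON;

-- Verify: encryption_state = 3 means the database is encrypted
SELECT DB_NAME(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys;
```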


Question #50

Which windowing function should you use to perform the streaming aggregation of the sales data?

  • A . Tumbling
  • B . Hopping
  • C . Sliding
  • D . Session

Correct Answer: A

Explanation:

Scenario: The analytic process will perform aggregations that must be done continuously, without gaps, and without overlapping.

The key differentiators of a Tumbling window are that they repeat, do not overlap, and an event cannot belong to more than one tumbling window.
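A tumbling-window aggregation in the Stream Analytics query language might look like this; the input/output names, timestamp column, and five-minute window size are assumptions:

```sql
SELECT
    SalesChannel,
    COUNT(*) AS OrderCount
INTO
    [reporting-output]
FROM
    [sales-input] TIMESTAMP BY OrderTime
GROUP BY
    SalesChannel,
    TumblingWindow(minute, 5)   -- repeating, non-overlapping, gap-free windows
```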

Question #51

How should you monitor SALESDB to meet the technical requirements?

  • A . Query the sys.resource_stats dynamic management view.
  • B . Review the Query Performance Insights for SALESDB.
  • C . Query the sys.dm_os_wait_stats dynamic management view.
  • D . Review the auditing information of SALESDB.

Correct Answer: A

Explanation:

Scenario: Disk IO, CPU, and memory usage must be monitored for SALESDB

The sys.resource_stats view returns historical data for CPU, IO, and DTU consumption. There is one row every 5 minutes for a database on an Azure logical SQL server if there is a change in the metrics.
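A sketch of such a query, run against the master database of the logical server:

```sql
-- Most recent resource consumption rows for SALESDB
SELECT TOP (24)
    start_time,
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'SALESDB'
ORDER BY start_time DESC;
```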

Question #52

Which counter should you monitor for real-time processing to meet the technical requirements?

  • A . Concurrent users
  • B . SU% Utilization
  • C . Data Conversion Errors

Correct Answer: B

Explanation:

Scenario:

Real-time processing must be monitored to ensure that workloads are sized properly based on actual usage patterns.

The sales data, including the documents in JSON format, must be gathered as it arrives and analyzed online by using Azure Stream Analytics.

Streaming Units (SUs) represent the computing resources that are allocated to execute a Stream Analytics job. The higher the number of SUs, the more CPU and memory resources are allocated for your job. This capacity lets you focus on the query logic and abstracts the need to manage the hardware to run your Stream Analytics job in a timely manner.

References: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-streaming-unit-consumption

Question #53

You need to ensure that the missing indexes for REPORTINGDB are added.

What should you use?

  • A . SQL Database Advisor
  • B . extended events
  • C . Query Performance Insight
  • D . automatic tuning

Correct Answer: D

Explanation:

Automatic tuning options include create index, which identifies indexes that may improve the performance of your workload, creates indexes, and automatically verifies that the performance of queries has improved.

Scenario:

REPORTINGDB stores reporting data and contains several columnstore indexes.

Migrate SALESDB and REPORTINGDB to an Azure SQL database.

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-automatic-tuning

Question #54

You need to configure a disaster recovery solution for SALESDB to meet the technical requirements.

What should you configure in the backup policy?

  • A . weekly long-term retention backups that are retained for three weeks
  • B . failover groups
  • C . a point-in-time restore
  • D . geo-replication

Correct Answer: C

Explanation:

Scenario: SALESDB must be restorable to any given minute within the past three weeks.

The Azure SQL Database service protects all databases with an automated backup system. These backups are retained for 7 days for Basic, 35 days for Standard and 35 days for Premium. Point-in-time restore is a self-service capability, allowing customers to restore a Basic, Standard or Premium database from these backups to any point within the retention period.

References: https://azure.microsoft.com/en-us/blog/azure-sql-database-point-in-time-restore/

Question #55

Topic 5, Misc Questions

CORRECT TEXT

Use the following login credentials as needed:

Azure Username: xxxxx

Azure Password: xxxxx

The following information is for technical support purposes only:

Lab Instance: 10543936

You need to double the processing resources available to an Azure SQL data warehouse named datawarehouse.

To complete this task, sign in to the Azure portal. NOTE: This task might take several minutes to complete. You can perform other tasks while the task completes or end this section of the exam.


Correct Answer: SQL Data Warehouse compute resources can be scaled by increasing or decreasing data warehouse units.
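Besides the portal's Scale blade, the same change can be made with T-SQL against the master database. The service objective values here are examples that assume the warehouse currently runs at DW200c:

```sql
-- Doubling compute: DW200c -> DW400c (data warehouse units)
ALTER DATABASE datawarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW400c');
```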

Question #61

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a container named Sales in an Azure Cosmos DB database. Sales has 120 GB of data. Each entry in Sales has the following structure.

The partition key is set to the OrderId attribute.

Users report that when they perform queries that retrieve data by ProductName, the queries take longer than expected to complete.

You need to reduce the amount of time it takes to execute the problematic queries.

Solution: You create a lookup collection that uses ProductName as a partition key.

Does this meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

One option is to have a lookup collection “ProductName” for the mapping of “ProductName” to “OrderId”.

References: https://azure.microsoft.com/sv-se/blog/azure-cosmos-db-partitioning-design-patterns-part-1/

Question #62

You have an Azure Stream Analytics job that receives clickstream data from an Azure event hub.

You need to define a query in the Stream Analytics job.

The query must meet the following requirements:

✑ Count the number of clicks within each 10-second window based on the country of a visitor.

✑ Ensure that each click is NOT counted more than once.

How should you define the query?

  • A . SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, TumblingWindow(second, 10)
  • B . SELECT Country, Count(*) AS Count FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SessionWindow(second, 5, 10)
  • C . SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, SlidingWindow(second, 10)
  • D . SELECT Country, Avg(*) AS Average FROM ClickStream TIMESTAMP BY CreatedAt GROUP BY Country, HoppingWindow(second, 10, 2)

Correct Answer: A

Explanation:

Tumbling window functions are used to segment a data stream into distinct time segments and perform a function against them, such as the example below. The key differentiators of a Tumbling window are that they repeat, do not overlap, and an event cannot belong to more than one tumbling window.

Example:

Reference: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-window-functions
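The tumbling-window semantics described above can be sketched in Python: integer division maps each event's timestamp to exactly one non-overlapping 10-second window, so no click is ever counted twice. The click data below is hypothetical, purely for illustration.

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds=10):
    """Assign each (timestamp, country) click to exactly one
    non-overlapping window and count clicks per window and country."""
    counts = Counter()
    for timestamp, country in events:
        # Integer division maps every timestamp to a single window start,
        # so an event can never belong to two windows.
        window_start = (timestamp // window_seconds) * window_seconds
        counts[(window_start, country)] += 1
    return counts

# Hypothetical click events: (seconds since stream start, country).
clicks = [(1, "SE"), (4, "SE"), (9, "US"), (12, "SE"), (19, "US")]
result = tumbling_window_counts(clicks)
# Window [0, 10): SE=2, US=1; window [10, 20): SE=1, US=1.
```

This mirrors why TumblingWindow(second, 10) satisfies the "NOT counted more than once" requirement, while sliding or hopping windows can assign one event to several overlapping windows.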

Question #63

You have an Azure Cosmos DB database that uses the SQL API.

You need to delete stale data from the database automatically.

What should you use?

  • A . soft delete
  • B . Low Latency Analytical Processing (LLAP)
  • C . schema on read
  • D . Time to Live (TTL)

Correct Answer: D

Explanation:

With Time to Live or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these items after the time period, since the time they were last modified.

References: https://docs.microsoft.com/en-us/azure/cosmos-db/time-to-live
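The TTL behavior can be modeled in a few lines of Python. This is a simplified sketch of the semantics, not the service's implementation; it assumes a container-level default TTL has been set (in the real service, items expire only when the container's DefaultTimeToLive is configured), and the sample items are hypothetical.

```python
def is_expired(item, container_default_ttl, now):
    """Simplified model of Cosmos DB TTL: a per-item 'ttl' overrides the
    container default; ttl == -1 means the item never expires."""
    ttl = item.get("ttl", container_default_ttl)
    if ttl is None or ttl == -1:
        return False
    # _ts is the epoch second of the item's last modification.
    return now - item["_ts"] > ttl

items = [
    {"id": "a", "_ts": 0},             # falls back to the container default
    {"id": "b", "_ts": 0, "ttl": 5},   # per-item override: 5 seconds
    {"id": "c", "_ts": 0, "ttl": -1},  # never expires
]
live = [i["id"] for i in items
        if not is_expired(i, container_default_ttl=100, now=50)]
# Only "b" has passed its TTL at now=50.
```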

Question #64

You need to develop a pipeline for processing data.

The pipeline must meet the following requirements.

• Scale up and down resources for cost reduction.

• Use an in-memory data processing engine to speed up ETL and machine learning operations.

• Use streaming capabilities.

• Provide the ability to code in SQL, Python, Scala, and R.

• Integrate workspace collaboration with Git.

What should you use?

  • A . HDInsight Spark Cluster
  • B . Azure Stream Analytics
  • C . HDInsight Hadoop Cluster
  • D . Azure SQL Data Warehouse

Correct Answer: A

Explanation:

Apache Spark is an open-source, parallel-processing framework that supports in-memory processing to boost the performance of big-data analysis applications.

HDInsight is a managed Hadoop service. Use it to deploy and manage Hadoop clusters in Azure. For batch processing, you can use Spark, Hive, Hive LLAP, and MapReduce.

Languages: R, Python, Java, Scala, SQL

You can create an HDInsight Spark cluster using an Azure Resource Manager template.

The template can be found in GitHub.

References: https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing

Question #65

You develop data engineering solutions for a company. The company has on-premises Microsoft SQL Server databases at multiple locations.

The company must integrate data with Microsoft Power BI and Microsoft Azure Logic Apps.

The solution must avoid single points of failure during connection and transfer to the cloud.

The solution must also minimize latency.

You need to secure the transfer of data between on-premises databases and Microsoft Azure.

What should you do?

  • A . Install a standalone on-premises Azure data gateway at each location
  • B . Install an on-premises data gateway in personal mode at each location
  • C . Install an Azure on-premises data gateway at the primary location
  • D . Install an Azure on-premises data gateway as a cluster at each location

Correct Answer: D

Explanation:

You can create high availability clusters of On-premises data gateway installations, to ensure your organization can access on-premises data resources used in Power BI reports and dashboards. Such clusters allow gateway administrators to group gateways to avoid single points of failure in accessing on-premises data resources. The Power BI service always uses the primary gateway in the cluster, unless it’s not available. In that case, the service switches to the next gateway in the cluster, and so on. References: https://docs.microsoft.com/en-us/power-bi/service-gateway-high-availability-clusters

Question #66

DRAG DROP

You plan to create a new single database instance of Microsoft Azure SQL Database.

The database must only allow communication from the data engineer’s workstation. You must connect directly to the instance by using Microsoft SQL Server Management Studio.

You need to create and configure the Database.

Which three Azure PowerShell cmdlets should you use to develop the solution? To answer, move the appropriate cmdlets from the list of cmdlets to the answer area and arrange them in the correct order.


Correct Answer:

Explanation:

Step 1: New-AzureRmSqlServer

New-AzureRmSqlServer creates a SQL Database server.

Step 2: New-AzureRmSqlServerFirewallRule

New-AzureRmSqlServerFirewallRule creates a firewall rule for a SQL Database server.

Can be used to create a server firewall rule that allows access from the specified IP range.

Step 3: New-AzureRmSqlDatabase

Example: Create a database on a specified server

PS C:\> New-AzureRmSqlDatabase -ResourceGroupName "ResourceGroup01" -ServerName "Server01" -DatabaseName "Database01"

References: https://docs.microsoft.com/en-us/azure/sql-database/scripts/sql-database-create-and-configure-database-powershell?toc=%2fpowershell%2fmodule%2ftoc.json


Question #67

DRAG DROP

You need to create an Azure Cosmos DB account that will use encryption keys managed by your organization.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders you select.


Correct Answer:

Explanation:

Step 1: Create an Azure key vault and enable purge protection

Using customer-managed keys with Azure Cosmos DB requires you to set two properties on the Azure Key Vault instance that you plan to use to host your encryption keys: Soft Delete and Purge Protection.

Step 2: Create a new Azure Cosmos DB account, set Data Encryption to Customer-managed Key (Enter key URI), and enter the key URI

Data stored in your Azure Cosmos account is automatically and seamlessly encrypted with keys managed by Microsoft (service-managed keys). Optionally, you can choose to add a second layer of encryption with keys you manage (customer-managed keys).

Step 3: Add an Azure Key Vault access policy to grant permissions to the Azure Cosmos DB principal

Add an access policy to your Azure Key Vault instance

Step 4: Generate a new key in the Azure key vault

Generate a key in Azure Key Vault


Question #68

You have an Azure subscription that contains an Azure Data Factory version 2 (V2) data factory named df1. Df1 contains a linked service.

You have an Azure Key vault named vault1 that contains an encryption key named key1.

You need to encrypt df1 by using key1.

What should you do first?

  • A . Disable purge protection on vault1.
  • B . Create a self-hosted integration runtime.
  • C . Disable soft delete on vault1.
  • D . Remove the linked service from df1.

Correct Answer: D

Explanation:

Linked services are much like connection strings, which define the connection information needed for Data Factory to connect to external resources.

Reference:

https://docs.microsoft.com/en-us/azure/data-factory/enable-customer-managed-key

https://docs.microsoft.com/en-us/azure/data-factory/concepts-linked-services

https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime

Question #69

HOTSPOT

A company plans to analyze a continuous flow of data from a social media platform by using Microsoft Azure Stream Analytics. The incoming data is formatted as one record per row.

You need to create the input stream.

How should you complete the REST API segment? To answer, select the appropriate configuration in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: CSV

A comma-separated values (CSV) file is a delimited text file that uses a comma to separate values. A CSV file stores tabular data (numbers and text) in plain text. Each line of the file is a data record.

JSON and AVRO are not formatted as one record per row.

Box 2: "type":"Microsoft.ServiceBus/EventHub", Properties include "EventHubName"

References:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-inputs

https://en.wikipedia.org/wiki/Comma-separated_values
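The "one record per row" distinction can be seen with Python's standard csv module. The clickstream sample below is hypothetical, chosen only to contrast CSV's line-per-record layout with a multi-line JSON rendering.

```python
import csv
import io
import json

# Hypothetical social-media sample: CSV stores one record per line.
raw = "user,country\nu1,SE\nu2,US\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# The equivalent pretty-printed JSON spans multiple lines per record,
# which is why JSON does not match the "one record per row" requirement.
pretty = json.dumps(rows, indent=2)
```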


Question #70

You manage an enterprise data warehouse in Azure Synapse Analytics.

Users report slow performance when they run commonly used queries. Users do not report performance changes for infrequently used queries.

You need to monitor resource utilization to determine the source of the performance issues.

Which metric should you monitor?

  • A . Data Warehouse Units (DWU) used
  • B . DWU limit
  • C . Cache hit percentage
  • D . Data IO percentage

Correct Answer: C

Explanation:

The Azure Synapse Analytics storage architecture automatically tiers your most frequently queried columnstore segments in a cache residing on NVMe based SSDs designed for Gen2 data warehouses. Greater performance is realized when your queries retrieve segments that are residing in the cache. You can monitor and troubleshoot slow query performance by determining whether your workload is optimally leveraging the Gen2 cache.

Note: As of November 2019, Azure SQL Data Warehouse is now Azure Synapse Analytics.

Reference:

https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-how-to-monitor-cache

https://docs.microsoft.com/bs-latn-ba/azure/sql-data-warehouse/sql-data-warehouse-concept-resource-utilization-query-activity

Question #71

HOTSPOT

You are building an Azure Stream Analytics job that queries reference data from a product catalog file. The file is updated daily.

The reference data input details for the file are shown in the Input exhibit.

The storage account container view is shown in the Refdata exhibit.

You need to configure the Stream Analytics job to pick up the new reference data.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: {date}/product.csv

In the 2nd exhibit we see: Location: refdata / 2020-03-20

Note: Path Pattern: This is a required property that is used to locate your blobs within the specified container. Within the path, you may choose to specify one or more instances of the following 2 variables:

{date}, {time}

Example 1: products/{date}/{time}/product-list.csv

Example 2: products/{date}/product-list.csv

Example 3: product-list.csv

Box 2: YYYY-MM-DD

Note: Date Format [optional]: If you have used {date} within the Path Pattern that you specified, then you can select the date format in which your blobs are organized from the drop-down of supported formats.

Example: YYYY/MM/DD, MM/DD/YYYY, etc.
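The placeholder substitution behind these examples can be sketched in Python. This is a simplified illustration, not the actual Stream Analytics implementation; the helper name and time format are assumptions.

```python
from datetime import datetime

def resolve_path(pattern, when, date_format="%Y-%m-%d", time_format="%H-%M"):
    """Substitute the {date} and {time} placeholders supported in a
    Stream Analytics reference-data path pattern (simplified sketch)."""
    return (pattern
            .replace("{date}", when.strftime(date_format))
            .replace("{time}", when.strftime(time_format)))

# With path pattern {date}/product.csv and date format YYYY-MM-DD, the job
# resolves the folder that matches the refdata exhibit:
path = resolve_path("{date}/product.csv", datetime(2020, 3, 20))
# -> "2020-03-20/product.csv"
```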


Question #72

Your company uses several Azure HDInsight clusters.

The data engineering team reports several errors with some application using these clusters.

You need to recommend a solution to review the health of the clusters.

What should you include in your recommendation?

  • A . Azure Automation
  • B . Log Analytics
  • C . Application Insights

Correct Answer: B

Explanation:

Azure Monitor logs integration: Azure Monitor logs enables data generated by multiple resources, such as HDInsight clusters, to be collected and aggregated in one place to achieve a unified monitoring experience.

As a prerequisite, you will need a Log Analytics Workspace to store the collected data. If you have not already created one, you can follow the instructions for creating a Log Analytics Workspace.

You can then easily configure an HDInsight cluster to send many workload-specific metrics to Log Analytics.

References: https://azure.microsoft.com/sv-se/blog/monitoring-on-azure-hdinsight-part-2-cluster-health-and-availability/

Question #73

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some questions sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have an Azure subscription that contains an Azure Storage account.

You plan to implement changes to a data storage solution to meet regulatory and compliance standards.

Every day, Azure needs to identify and delete blobs that were NOT modified during the last 100 days.

Solution: You schedule an Azure Data Factory pipeline.

Does this meet the goal?

  • A . Yes
  • B . No

Correct Answer: B

Explanation:

Instead apply an Azure Blob storage lifecycle policy.

Reference: https://docs.microsoft.com/en-us/azure/storage/blobs/storage-lifecycle-management-concepts?tabs=azure-portal
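Such a lifecycle policy can be expressed as a management-policy rule. A minimal sketch (the rule name is illustrative): `daysAfterModificationGreaterThan: 100` deletes block blobs that have not been modified in the last 100 days, which Azure evaluates automatically.

```json
{
  "rules": [
    {
      "name": "delete-stale-blobs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ] },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 100 }
          }
        }
      }
    }
  ]
}
```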

Question #74

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You plan to create an Azure Databricks workspace that has a tiered structure.

The workspace will contain the following three workloads:

✑ A workload for data engineers who will use Python and SQL

✑ A workload for jobs that will run notebooks that use Python, Spark, Scala, and SQL

✑ A workload that data scientists will use to perform ad hoc analysis in Scala and R

The enterprise architecture team at your company identifies the following standards for Databricks environments:

✑ The data engineers must share a cluster.

✑ The job cluster will be managed by using a request process whereby data scientists and data engineers provide packaged notebooks for deployment to the cluster.

✑ All the data scientists must be assigned their own cluster that terminates automatically after 120 minutes of inactivity. Currently, there are three data scientists.

You need to create the Databrick clusters for the workloads.

Solution: You create a Standard cluster for each data scientist, a High Concurrency cluster for the data engineers, and a Standard cluster for the jobs.

Does this meet the goal?

  • A . Yes
  • B . No

Correct Answer: A

Explanation:

We need a High Concurrency cluster for the data engineers and the jobs.

Note:

Standard clusters are recommended for a single user. Standard can run workloads developed in any language: Python, R, Scala, and SQL.

A high concurrency cluster is a managed cloud resource. The key benefits of high concurrency clusters are that they provide Apache Spark-native fine-grained sharing for maximum resource utilization and minimum query latencies.

References: https://docs.azuredatabricks.net/clusters/configure.html

Question #75

HOTSPOT

You have an Azure data factory that has two pipelines named PipelineA and PipelineB.

PipelineA has four activities as shown in the following exhibit.

PipelineB has two activities as shown in the following exhibit.

You create an alert for the data factory that uses Failed pipeline runs metrics for both pipelines and all failure types.

The metric has the following settings:

✑ Operator: Greater than

✑ Aggregation type: Total

✑ Threshold value: 2

✑ Aggregation granularity (Period): 5 minutes

✑ Frequency of evaluation: Every 5 minutes

Data Factory monitoring records the failures shown in the following table.

For each of the following statements, select yes if the statement is true. Otherwise, select no. NOTE: Each correct answer selection is worth one point.


Correct Answer:

Explanation:

Box 1: No

Only one failure at this point.

Box 2: No

Only two failures within 5 minutes.

Box 3: Yes

More than two (three) failures in 5 minutes
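The alert evaluation can be sketched in Python: total the failures in the trailing 5-minute period and apply the "Greater than 2" condition. The failure minutes below are hypothetical values that echo the 1/2/3 pattern of the three boxes.

```python
def alert_fires(failure_minutes, eval_minute, period=5, threshold=2):
    """Total the failures inside the trailing evaluation period and
    apply the 'Greater than' threshold from the alert settings."""
    window_start = eval_minute - period
    total = sum(window_start < t <= eval_minute for t in failure_minutes)
    return total > threshold

# Hypothetical failure times (in minutes) mirroring the boxes above:
failures = [1, 7, 8, 11, 12, 13]
fired = [alert_fires(failures, m) for m in (5, 10, 15)]
# Evaluations at minutes 5, 10, and 15 see 1, 2, and 3 failures;
# only the last exceeds the threshold of 2.
```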


Question #76

DRAG DROP

You develop data engineering solutions for a company.

You need to deploy a Microsoft Azure Stream Analytics job for an IoT solution.

The solution must:

• Minimize latency.

• Minimize bandwidth usage between the job and IoT device.

Which four actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.


Correct Answer:


Question #77

You are designing an enterprise data warehouse in Azure Synapse Analytics. You plan to load millions of rows of data into the data warehouse each day.

You must ensure that staging tables are optimized for data loading.

You need to design the staging tables.

What type of tables should you recommend?

  • A . Round-robin distributed table
  • B . Hash-distributed table
  • C . Replicated table
  • D . External table


Correct Answer: A
Question #78

HOTSPOT

A company runs Microsoft Dynamics CRM with Microsoft SQL Server on-premises. SQL Server Integration Services (SSIS) packages extract data from Dynamics CRM APIs, and load the data into a SQL Server data warehouse.

The datacenter is running out of capacity. Because of the network configuration, you must extract on-premises data to the cloud over HTTPS. You cannot open any additional ports. The solution must require the least amount of effort.

You need to create the pipeline system.

Which component should you use? To answer, select the appropriate technology in the dialog box in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: Source

For Copy activity, it requires source and sink linked services to define the direction of data flow.

Copying between a cloud data source and a data source in private network: if either source or sink linked service points to a self-hosted IR, the copy activity is executed on that self-hosted Integration Runtime.

Box 2: Self-hosted integration runtime

A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network, and it can dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The self-hosted integration runtime must be installed on an on-premises machine or a virtual machine (VM) inside a private network.

References: https://docs.microsoft.com/en-us/azure/data-factory/create-self-hosted-integration-runtime


Question #79

A company is designing a hybrid solution to synchronize data from an on-premises Microsoft SQL Server database to Azure SQL Database.

You must perform an assessment of databases to determine whether data will move without compatibility issues.

You need to perform the assessment.

Which tool should you use?

  • A . Azure SQL Data Sync
  • B . SQL Vulnerability Assessment (VA)
  • C . SQL Server Migration Assistant (SSMA)
  • D . Microsoft Assessment and Planning Toolkit
  • E . Data Migration Assistant (DMA)

Correct Answer: E

Explanation:

The Data Migration Assistant (DMA) helps you upgrade to a modern data platform by detecting compatibility issues that can impact database functionality in your new version of SQL Server or Azure SQL Database. DMA recommends performance and reliability improvements for your target environment and allows you to move your schema, data, and uncontained objects from your source server to your target server.

References: https://docs.microsoft.com/en-us/sql/dma/dma-overview

Question #80

CORRECT TEXT

Use the following login credentials as needed:

Azure Username: xxxxx

Azure Password: xxxxx

The following information is for technical support purposes only:

Lab Instance: 10543936

You plan to enable Azure Multi-Factor Authentication (MFA).

You need to ensure that User1-10543936@ExamUsers.com can manage any databases hosted on an Azure SQL server named SQL10543936 by signing in using his Azure Active Directory (Azure AD) user account.

To complete this task, sign in to the Azure portal.


Correct Answer: Provision an Azure Active Directory administrator for your managed instance

Each Azure SQL server (which hosts a SQL Database or SQL Data Warehouse) starts with a single server administrator account that is the administrator of the entire Azure SQL server. A second SQL Server administrator must be created, that is an Azure AD account. This principal is created as a contained database user in the master database.


Question #87

HOTSPOT

You have an Azure SQL database that contains a table named Employee. Employee contains sensitive data in a decimal (10,2) column named Salary.

You need to ensure that nonprivileged users can view the table data, but Salary must display a number from 0 to 100.

What should you configure? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point.


Correct Answer:

Explanation:

Box 1: SELECT

Users with SELECT permission on a table can view the table data. Columns that are defined as masked, will display the masked data.

Incorrect:

Grant the UNMASK permission to a user to enable them to retrieve unmasked data from the columns for which masking is defined.

The CONTROL permission on the database includes both the ALTER ANY MASK and UNMASK permission.

Box 2: Random number

Random number: Masking method, which generates a random number according to the selected boundaries and actual data types. If the designated boundaries are equal, then the masking function is a constant number.
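The Random number masking behavior can be simulated in Python. This is a sketch of the behavior for the 0-to-100 boundaries in the question, not the SQL engine's implementation, and the function name is an assumption.

```python
import random

def random_number_mask(lower=0, upper=100):
    """Mimic the Random number masking function: return a value inside the
    configured boundaries; equal boundaries yield a constant."""
    if lower == upper:
        return lower
    return random.randint(lower, upper)

# A nonprivileged SELECT would see a masked Salary between 0 and 100,
# never the real decimal(10,2) value.
masked_salaries = [random_number_mask(0, 100) for _ in range(5)]
```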



Question #88

DRAG DROP

You manage security for a database that supports a line of business application.

Private and personal data stored in the database must be protected and encrypted.

You need to configure the database to use Transparent Data Encryption (TDE).

Which five actions should you perform in sequence? To answer, select the appropriate actions from the list of actions to the answer area and arrange them in the correct order.

Reveal Solution Hide Solution

Correct Answer:

Explanation:

Step 1: Create a master key

Step 2: Create or obtain a certificate protected by the master key

Step 3: Set the context to the company database

Step 4: Create a database encryption key and protect it by the certificate

Step 5: Set the database to use encryption

Example code:

USE master;

GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<UseStrongPasswordHere>';

GO

CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'My DEK Certificate';

GO

USE AdventureWorks2012;

GO

CREATE DATABASE ENCRYPTION KEY

WITH ALGORITHM = AES_128

ENCRYPTION BY SERVER CERTIFICATE MyServerCert;

GO

ALTER DATABASE AdventureWorks2012

SET ENCRYPTION ON;

GO

Reference: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption


Question #89

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this scenario, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You are developing a solution that will use Azure Stream Analytics. The solution will accept an Azure Blob storage file named Customers. The file will contain both in-store and online customer details. The online customers will provide a mailing address.

You have a file in Blob storage named LocationIncomes that contains median incomes based on location.

The file rarely changes.

You need to use an address to look up a median income based on location. You must output the data to Azure SQL Database for immediate use and to Azure Data Lake Storage Gen2 for long-term retention.

Solution: You implement a Stream Analytics job that has one streaming input, one reference input, one query, and two outputs.

Does this meet the goal?

  • A . Yes
  • B . No

Reveal Solution Hide Solution

Correct Answer: B

Explanation:

We need one reference data input for LocationIncomes, which rarely changes.

We need two queries: one for in-store customers and one for online customers.

Each query needs two outputs: one to Azure SQL Database and one to Azure Data Lake Storage Gen2.

Note: Stream Analytics also supports an input known as reference data. Reference data is either completely static or changes slowly.

References:

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-add-inputs#stream-and-reference-inputs

https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-define-outputs
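A sketch of what the correct job topology would look like in the Stream Analytics query language. The input, output, and column names are hypothetical; each SELECT ... INTO statement targets one output, and the reference input is joined without a windowing function:

```sql
-- Online customers: join the stream to the LocationIncomes reference
-- input, then write the enriched rows to both sinks.
SELECT c.CustomerId, c.MailingAddress, li.MedianIncome
INTO SqlDbOutput
FROM Customers c
JOIN LocationIncomes li ON c.PostalCode = li.PostalCode
WHERE c.CustomerType = 'Online'

SELECT c.CustomerId, c.MailingAddress, li.MedianIncome
INTO DataLakeOutput
FROM Customers c
JOIN LocationIncomes li ON c.PostalCode = li.PostalCode
WHERE c.CustomerType = 'Online'

-- In-store customers (no mailing address): a similar pair of
-- SELECT ... INTO statements targets the same two outputs.
```

This illustrates why the proposed single-query solution does not meet the goal: four SELECT ... INTO statements (two queries, two outputs each) are required.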

Question #90

CORRECT TEXT

Use the following login credentials as needed:

Azure Username: xxxxx

Azure Password: xxxxx

The following information is for technical support purposes only:

Lab Instance: 10277521

You need to generate an email notification to admin@contoso.com if the available storage in an Azure Cosmos DB database named cosmos10277521 is less than 100,000,000 bytes.

To complete this task, sign in to the Azure portal.

Reveal Solution Hide Solution

Correct Answer:
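The published answer is an image of portal steps; as a hedged command-line sketch under assumed names (the resource group, action group name, and metric name are assumptions, not taken from the lab), the same alert could be configured with the Azure CLI:

```shell
# Action group that emails admin@contoso.com (hypothetical names).
az monitor action-group create \
  --resource-group myRG --name cosmos-admins \
  --action email admin admin@contoso.com

# Metric alert on the Cosmos DB account's available storage.
az monitor metrics alert create \
  --resource-group myRG --name cosmos-low-storage \
  --scopes $(az cosmosdb show --resource-group myRG \
             --name cosmos10277521 --query id -o tsv) \
  --condition "min AvailableStorage < 100000000" \
  --action cosmos-admins
```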

Question #95

You have an Azure Storage account.

You need to configure the storage account to send an email when an administrative action is performed on the account.

What should you do?

  • A . Enable Azure Advanced Threat Protection (ATP).
  • B . Create an alert based on a metric.
  • C . Create an alert based on the activity log.
  • D . Create a custom role for the storage account.

Reveal Solution Hide Solution

Correct Answer: C

Explanation:

To determine if there is an activity log alert created for "Create/Update Storage Account" events in your Microsoft Azure cloud account, perform the following actions:

1. Sign in to the Azure Management Console.

2. Navigate to the Azure Monitor blade.

3. In the navigation panel, select Alerts to access all the alerts available in your cloud account.

4. On the Alerts page, click the Manage alert rules button from the dashboard top menu to access the alert rules management page.

5. On the Rules page, select the subscription that you want to examine from the Subscription filter box and the Enabled option from the Status dropdown list, to return all the active alert rules created in the selected account subscription.

6. Click the name of the alert rule that you want to examine.

7. On the selected alert rule configuration page, check the condition phrase available in the Condition section. If the phrase is different from Whenever the Activity Log has an event with Category='Administrative', Signal name='Create/Update Storage Account (Microsoft.Storage/storageAccounts)', the selected alert rule is not designed to fire whenever "Create Storage Account" or "Update Storage Account" events are triggered.

Reference: https://www.cloudconformity.com/knowledge-base/azure/ActivityLog/create-storage-account-alert.html
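A hedged CLI sketch of creating such an activity log alert; the resource group, alert name, action group, and scope placeholders are assumptions rather than values from the question:

```shell
# Fire on administrative actions against the storage account
# (hypothetical names; <sub-id> and <account> are placeholders).
az monitor activity-log alert create \
  --resource-group myRG --name storage-admin-actions \
  --scope /subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/<account> \
  --condition category=Administrative \
  --action-group storage-admins
```

The action group referenced here would be configured separately with an email receiver so the alert sends the notification.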

Question #96

You have an Azure subscription that contains the resources shown in the following table.

All the resources have the default encryption settings.

  • A . Enable Azure Storage encryption for storageaccount1.
  • B . Enable Transparent Data Encryption (TDE) for synapsedb1.
  • C . Enable encryption at rest for cosmosdb1.
  • D . Enable Azure Storage encryption for storageaccount2.

Reveal Solution Hide Solution

Correct Answer: D