Google Professional Cloud Database Engineer Google Cloud Certified – Professional Cloud Database Engineer Online Training

Question #1

You are developing a new application on a VM that is on your corporate network. The application will use Java Database Connectivity (JDBC) to connect to Cloud SQL for PostgreSQL. Your Cloud SQL instance is configured with IP address 192.168.3.48, and SSL is disabled. You want to ensure that your application can access your database instance without requiring configuration changes to your database.

What should you do?

  • A . Define a connection string using your Google username and password to point to the external (public) IP address of your Cloud SQL instance.
  • B . Define a connection string using a database username and password to point to the internal (private) IP address of your Cloud SQL instance.
  • C . Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the internal (private) IP address of your Cloud SQL instance.
  • D . Define a connection string using Cloud SQL Auth proxy configured with a service account to point to the external (public) IP address of your Cloud SQL instance.

Correct Answer: C

Explanation:

The Cloud SQL connectors are libraries that provide encryption and IAM-based authorization when connecting to a Cloud SQL instance. They can’t provide a network path to a Cloud SQL instance if one is not already present. Other ways to connect to a Cloud SQL instance include using a database client or the Cloud SQL Auth proxy.

https://cloud.google.com/sql/docs/postgres/connect-connectors

https://github.com/GoogleCloudPlatform/cloud-sql-jdbc-socket-factory/blob/main/docs/jdbc-postgres.md
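
As a rough sketch of what option C looks like in practice (project, instance connection name, database name, and service account key path are placeholders), the Cloud SQL Auth proxy v2 binary can be pointed at the instance's private IP, and the JDBC connection string then targets the local proxy listener, so nothing changes on the database side:

    # Start the Cloud SQL Auth proxy (v2), authenticating with a service account key
    # and connecting over the instance's private IP (placeholder connection name).
    ./cloud-sql-proxy \
      --credentials-file=/path/to/service-account.json \
      --private-ip \
      --port=5432 \
      my-project:us-central1:my-postgres-instance

    # The application then uses an ordinary JDBC URL against the local listener;
    # no SSL or database configuration changes are required:
    # jdbc:postgresql://127.0.0.1:5432/mydb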

Question #2

Your digital-native business runs its database workloads on Cloud SQL. Your website must be globally accessible 24/7. You need to prepare your Cloud SQL instance for high availability (HA). You want to follow Google-recommended practices.

What should you do? (Choose two.)

  • A . Set up manual backups.
  • B . Create a PostgreSQL database on-premises as the HA option.
  • C . Configure single zone availability for automated backups.
  • D . Enable point-in-time recovery.
  • E . Schedule automated backups.

Correct Answer: DE

Explanation:

D. Enable point-in-time recovery: this feature allows you to restore your database to a specific point in time. It helps protect against data loss and can be used in the event of data corruption or accidental data deletion.

E. Schedule automated backups: automated backups allow you to take regular backups of your database without manual intervention. You can use these backups to restore your database in the event of data loss or corruption.
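
A minimal gcloud sketch of options D and E, assuming a placeholder instance name and backup window (the PITR flag shown applies to PostgreSQL and SQL Server; for MySQL, point-in-time recovery is enabled through binary logging instead):

    # E: schedule automated daily backups (placeholder start time, UTC)
    gcloud sql instances patch my-instance --backup-start-time=23:00

    # D: enable point-in-time recovery (PostgreSQL / SQL Server)
    gcloud sql instances patch my-instance --enable-point-in-time-recovery

    # For Cloud SQL for MySQL, PITR relies on binary logging instead:
    # gcloud sql instances patch my-instance --enable-bin-log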

Question #3

Your company wants to move to Google Cloud. Your current data center is closing in six months. You are running a large, highly transactional Oracle application footprint on VMWare. You need to design a solution with minimal disruption to the current architecture and provide ease of migration to Google Cloud.

What should you do?

  • A . Migrate applications and Oracle databases to Google Cloud VMware Engine (VMware Engine).
  • B . Migrate applications and Oracle databases to Compute Engine.
  • C . Migrate applications to Cloud SQL.
  • D . Migrate applications and Oracle databases to Google Kubernetes Engine (GKE).

Correct Answer: A

Explanation:

https://cloud.google.com/blog/products/databases/migrate-databases-to-google-cloud-vmware-engine-gcve

Question #4

Your customer has a global chat application that uses a multi-regional Cloud Spanner instance. The application has recently experienced degraded performance after a new version of the application was launched. Your customer asked you for assistance. During initial troubleshooting, you observed high read latency.

What should you do?

  • A . Use query parameters to speed up frequently executed queries.
  • B . Change the Cloud Spanner configuration from multi-region to single region.
  • C . Use SQL statements to analyze SPANNER_SYS.READ_STATS* tables.
  • D . Use SQL statements to analyze SPANNER_SYS.QUERY_STATS* tables.

Correct Answer: C

Explanation:

To troubleshoot high read latency, you can use SQL statements to analyze the SPANNER_SYS.READ_STATS* tables. These tables contain statistics about read operations in Cloud Spanner, including the number of reads, read latency, and the number of read errors. By analyzing these tables, you can identify the cause of the high read latency and take appropriate action to resolve the issue. Other options, such as using query parameters to speed up frequently executed queries or changing the Cloud Spanner configuration from multi-region to single region, may not be directly related to the issue of high read latency. Similarly, analyzing the SPANNER_SYS.QUERY_STATS* tables, which contain statistics about query operations, may not be relevant to the issue of high read latency.

Question #5

Your company has PostgreSQL databases on-premises and on Amazon Web Services (AWS). You are planning multiple database migrations to Cloud SQL in an effort to reduce costs and downtime. You want to follow Google-recommended practices and use Google native data migration tools. You also want to closely monitor the migrations as part of the cutover strategy.

What should you do?

  • A . Use Database Migration Service to migrate all databases to Cloud SQL.
  • B . Use Database Migration Service for one-time migrations, and use third-party or partner tools for change data capture (CDC) style migrations.
  • C . Use data replication tools and CDC tools to enable migration.
  • D . Use a combination of Database Migration Service and partner tools to support the data migration strategy.

Correct Answer: A

Explanation:

https://cloud.google.com/blog/products/databases/tips-for-migrating-across-compatible-database-engines

Question #6

You are setting up a Bare Metal Solution environment. You need to update the operating system to the latest version. You need to connect the Bare Metal Solution environment to the internet so you can receive software updates.

What should you do?

  • A . Setup a static external IP address in your VPC network.
  • B . Set up bring your own IP (BYOIP) in your VPC.
  • C . Set up a Cloud NAT gateway on the Compute Engine VM.
  • D . Set up Cloud NAT service.

Correct Answer: C

Explanation:

https://cloud.google.com/bare-metal/docs/bms-setup?hl=en#bms-access-internet-vm-nat The documentation specifically says that "Setting up a NAT gateway on a Compute Engine VM" is the way to give the Bare Metal Solution environment internet access.
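
A hedged sketch of that setup (zone, subnet, and network interface names are placeholders): create a Compute Engine VM with IP forwarding enabled, then configure it as a NAT gateway that the Bare Metal Solution servers use as their next hop for internet-bound traffic.

    # Create the NAT gateway VM with IP forwarding enabled (placeholder names)
    gcloud compute instances create bms-nat-gateway \
      --zone=us-central1-a \
      --subnet=bms-subnet \
      --can-ip-forward

    # On the VM itself: enable kernel IP forwarding and masquerade outbound traffic
    # (interface name ens4 is an assumption; check with `ip addr`)
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o ens4 -j MASQUERADE

    # The Bare Metal Solution servers are then configured to use this VM's
    # internal IP as their route to the internet for software updates.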

Question #7

Your organization is running a MySQL workload in Cloud SQL. Suddenly you see a degradation in database performance. You need to identify the root cause of the performance degradation.

What should you do?

  • A . Use Logs Explorer to analyze log data.
  • B . Use Cloud Monitoring to monitor CPU, memory, and storage utilization metrics.
  • C . Use Error Reporting to count, analyze, and aggregate the data.
  • D . Use Cloud Debugger to inspect the state of an application.

Correct Answer: B

Explanation:

https://cloud.google.com/sql/docs/mysql/diagnose-issues#:~:text=If%20your%20instance%20stops%20responding%20to%20connections%20or%20performance%20is%20degraded%2C%20make%20sure%20it%20conforms%20to%20the%20Operational%20Guidelines

Question #8

You work for a large retail and ecommerce company that is starting to extend their business globally. Your company plans to migrate to Google Cloud. You want to use platforms that will scale easily, handle transactions with the least amount of latency, and provide a reliable customer experience. You need a storage layer for sales transactions and current inventory levels. You want to retain the same relational schema that your existing platform uses.

What should you do?

  • A . Store your data in Firestore in a multi-region location, and place your compute resources in one of the constituent regions.
  • B . Deploy Cloud Spanner using a multi-region instance, and place your compute resources close to the default leader region.
  • C . Build an in-memory cache in Memorystore, and deploy to the specific geographic regions where your application resides.
  • D . Deploy a Bigtable instance with a cluster in one region and a replica cluster in another geographic region.

Correct Answer: B

Question #9

You host an application in Google Cloud. The application is located in a single region and uses Cloud SQL for transactional data. Most of your users are located in the same time zone and expect the application to be available 7 days a week, from 6 AM to 10 PM. You want to ensure regular maintenance updates to your Cloud SQL instance without creating downtime for your users.

What should you do?

  • A . Configure a maintenance window during a period when no users will be on the system. Control the order of update by setting non-production instances to earlier and production instances to later.
  • B . Create your database with one primary node and one read replica in the region.
  • C . Enable maintenance notifications for users, and reschedule maintenance activities to a specific time after notifications have been sent.
  • D . Configure your Cloud SQL instance with high availability enabled.

Correct Answer: A

Explanation:

Configure a maintenance window during a period when no users will be on the system. Control the order of update by setting non-production instances to earlier and production instances to later.

Question #10

Your team recently released a new version of a highly consumed application to accommodate additional user traffic. Shortly after the release, you received an alert from your production monitoring team that there is consistently high replication lag between your primary instance and the read replicas of your Cloud SQL for MySQL instances. You need to resolve the replication lag.

What should you do?

  • A . Identify and optimize slow running queries, or set parallel replication flags.
  • B . Stop all running queries, and re-create the replicas.
  • C . Edit the primary instance to upgrade to a larger disk, and increase vCPU count.
  • D . Edit the primary instance to add additional memory.

Correct Answer: A

Explanation:

https://cloud.google.com/sql/docs/mysql/replication/replication-lag#optimize_queries_and_schema
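
As an illustrative sketch of the first half of option A (instance name and thresholds are placeholders), the MySQL slow query log can be turned on with database flags so that slow statements become visible for optimization:

    # Enable the slow query log on the primary (placeholder instance and thresholds).
    # Caution: --database-flags overwrites the full set of flags on the instance,
    # so include any flags that are already configured.
    gcloud sql instances patch prod-mysql \
      --database-flags=slow_query_log=on,long_query_time=2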

Question #11

Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical.

The operations team consists of:

Person A is a database administrator.

Person B is an analyst who generates metric reports.

Application C is responsible for automatic backups.

You need to assign roles to team members for Cloud Spanner.

Which roles should you assign?

  • A . roles/spanner.databaseAdmin for Person A
    roles/spanner.databaseReader for Person B
    roles/spanner.backupWriter for Application C
  • B . roles/spanner.databaseAdmin for Person A
    roles/spanner.databaseReader for Person B
    roles/spanner.backupAdmin for Application C
  • C . roles/spanner.databaseAdmin for Person A
    roles/spanner.databaseUser for Person B
    roles/spanner.databaseReader for Application C
  • D . roles/spanner.databaseAdmin for Person A
    roles/spanner.databaseUser for Person B
    roles/spanner.backupWriter for Application C

Correct Answer: A

Explanation:

https://cloud.google.com/spanner/docs/iam#spanner.backupWriter
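
A sketch of how these grants might be applied with gcloud (instance, database, and member identities are placeholders), granting the database-level roles to the two people and the backupWriter role to the backup application's service account at the instance level:

    # Person A: full database administration on the database
    gcloud spanner databases add-iam-policy-binding inventory-db \
      --instance=prod-instance \
      --member="user:person-a@example.com" \
      --role="roles/spanner.databaseAdmin"

    # Person B: read-only access for metric reports
    gcloud spanner databases add-iam-policy-binding inventory-db \
      --instance=prod-instance \
      --member="user:person-b@example.com" \
      --role="roles/spanner.databaseReader"

    # Application C: permission to create backups, granted on the instance
    gcloud spanner instances add-iam-policy-binding prod-instance \
      --member="serviceAccount:backup-app@my-project.iam.gserviceaccount.com" \
      --role="roles/spanner.backupWriter"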

Question #12

You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low.

What should you do?

  • A . Manually scale down the number of nodes after the peak period has passed.
  • B . Use interleaving to co-locate parent and child rows.
  • C . Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
  • D . Use granular instance sizing in Cloud Spanner and Autoscaler.

Correct Answer: D

Explanation:

Granular instance sizing is available in Public Preview. With this feature, you can run workloads on Spanner at as little as 1/10th the cost of regular instances. https://cloud.google.com/blog/products/databases/get-more-out-of-spanner-with-granular-instance-sizing
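
A minimal sketch of granular instance sizing (instance name, config, and sizing values are placeholders): processing units let an instance start well below one node (1,000 processing units equal one node), and capacity can be adjusted later, for example by the open-source Autoscaler tool.

    # Create a Spanner instance sized in processing units rather than whole nodes
    # (500 processing units = half a node; placeholder names and config)
    gcloud spanner instances create game-backend \
      --config=regional-us-central1 \
      --description="AR game state" \
      --processing-units=500

    # Capacity can later be adjusted (e.g., by the Autoscaler tool) as load changes
    gcloud spanner instances update game-backend --processing-units=1000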

Question #13

You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices.

What should you do?

  • A . Maintain a target of 23% CPU utilization by locating:
    cluster-a in zone us-central1-a
    cluster-b in zone europe-west1-d
    cluster-c in zone asia-east1-b
  • B . Maintain a target of 23% CPU utilization by locating:
    cluster-a in zone us-central1-a
    cluster-b in zone us-central1-b
    cluster-c in zone us-east1-a
  • C . Maintain a target of 35% CPU utilization by locating:
    cluster-a in zone us-central1-a
    cluster-b in zone australia-southeast1-a
    cluster-c in zone europe-west1-d
    cluster-d in zone asia-east1-b
  • D . Maintain a target of 35% CPU utilization by locating:
    cluster-a in zone us-central1-a
    cluster-b in zone us-central2-a
    cluster-c in zone asia-northeast1-b
    cluster-d in zone asia-east1-b

Correct Answer: D

Explanation:

https://cloud.google.com/bigtable/docs/replication-settings#regional-failover
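
A rough sketch of the expansion in option D (instance name, cluster IDs, and node counts are placeholders): additional clusters are added to the existing instance so that each serving region has a nearby failover target, keeping CPU per cluster at or below the multi-cluster routing target of 35%.

    # Add the APAC clusters to the existing Bigtable instance
    gcloud bigtable clusters create cluster-c \
      --instance=prod-instance --zone=asia-northeast1-b --num-nodes=6
    gcloud bigtable clusters create cluster-d \
      --instance=prod-instance --zone=asia-east1-b --num-nodes=6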

Question #14

Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second.

What should you do?

  • A . Write your data into Bigtable and use Dataproc and the Apache Hbase libraries for analysis.
  • B . Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
  • C . Use Memorystore to handle your low-latency requirements and for real-time analytics.
  • D . Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.

Correct Answer: A

Explanation:

Bigtable is designed for exactly this pattern: it can ingest clickstream events at very high write rates, store well beyond 8 TB, and scale to millions of reads and writes per second with consistently low latency by adding nodes. The data can then be analyzed with Dataproc using the Apache HBase client libraries, which is what option A describes.

Question #15

Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hot-spots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation.

What should you do? (Choose two.)

  • A . Use an auto-incrementing value as the primary key.
  • B . Normalize the data model.
  • C . Promote low-cardinality attributes in multi-attribute primary keys.
  • D . Promote high-cardinality attributes in multi-attribute primary keys.
  • E . Use bit-reverse sequential value as the primary key.

Correct Answer: DE

Explanation:

https://cloud.google.com/spanner/docs/schema-design

D is correct because high cardinality means the column has more unique values, which spreads writes across the key space and mitigates hot-spotting. E is correct because Spanner specifically offers bit-reversed sequential values to reduce hot-spotting by generating pseudo-random key values: https://cloud.google.com/spanner/docs/schema-design#bit_reverse_primary_key

D. Promote high-cardinality attributes in multi-attribute primary keys.

Promoting high-cardinality attributes in multi-attribute primary keys helps avoid hotspots in Cloud Spanner. High-cardinality attributes are those that have many distinct values, such as UUIDs, email addresses, or timestamps. By placing high-cardinality attributes first in the primary key, you ensure that rows are distributed more evenly across the key space and avoid sending too many requests to the same server.

E. Use a bit-reversed sequential value as the primary key.

Using a bit-reversed sequential value as the primary key also helps avoid hotspots. Bit reversal takes a monotonically increasing value, such as a timestamp or an auto-incrementing ID, and reverses its bits to produce a pseudo-random value that spreads writes across the key space instead of concentrating all inserts at the end of the table.

Question #16

You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries.

What should you do?

  • A . Use log messages produced by Cloud SQL.
  • B . Use Query Insights for Cloud SQL.
  • C . Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
  • D . Use Cloud SQL instance monitoring in the Google Cloud Console.

Correct Answer: B

Explanation:

https://cloud.google.com/sql/docs/mysql/using-query-insights#introduction
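
A minimal sketch of turning Query Insights on for an existing instance (instance name is a placeholder); once enabled, the Query Insights dashboard in the console surfaces long-running and resource-intensive queries per database and per application tag:

    # Enable Query Insights on the Cloud SQL for PostgreSQL instance
    gcloud sql instances patch prod-postgres \
      --insights-config-query-insights-enabled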

Question #17

You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse.

What should you do?

  • A . Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
  • B . Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences
  • C . Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
  • D . Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.

Correct Answer: C

Explanation:

Use Firestore in Datastore mode for new server projects. Firestore in Datastore mode allows you to use established Datastore server architectures while removing fundamental Datastore limitations. Datastore mode can automatically scale to millions of writes per second. Use Firestore in Native mode for new mobile and web apps. Firestore offers mobile and web client libraries with real-time and offline features. Native mode can automatically scale to millions of concurrent clients.

Question #18

Your application uses Cloud SQL for MySQL. Your users run reports on data that relies on near-real time; however, the additional analytics caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the lag in data replication without making changes to the current reports.

Which two approaches should you implement? (Choose two.)

  • A . Create secondary indexes on the replica.
  • B . Create additional read replicas, and partition your analytics users to use different read replicas.
  • C . Disable replication on the read replica, and set the flag for parallel replication on the read replica.
    Re-enable replication and optimize performance by setting flags on the primary instance.
  • D . Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.
  • E . Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.

Correct Answer: B, C

Explanation:

Replication lag and slow report performance. E is eliminated because using BigQuery would mean changes to the current reports. Report slowness could be the result of poor indexing or just too much read load (or both!). Since excessive load is mentioned in the question, creating additional read replicas and spreading the analytics workload around makes B correct and eliminates A as a way to speed up reporting.

That leaves the replication problem. Cloud SQL enables single-threaded replication by default, so it stands to reason that enabling parallel replication would help the lag. To do that, you disable replication on the replica (not the primary), set flags on the replica, and optionally set flags on the primary instance to optimize performance for parallel replication. That makes C correct and D incorrect.

https://cloud.google.com/sql/docs/mysql/replication/manage-replicas#configuring-parallel-replication
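
An illustrative sketch of the flag-setting step on the replica (instance name and flag values are placeholders, and patching database flags may restart the replica); the exact flag names depend on the MySQL version, so the linked procedure should be followed for a production change:

    # Set parallel replication flags on the read replica (MySQL 5.7-style names;
    # placeholder values). Patching flags can restart the replica.
    gcloud sql instances patch analytics-replica \
      --database-flags=slave_parallel_workers=4,slave_parallel_type=LOGICAL_CLOCK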

Question #19

You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide.

Your solution must support data residency requirements and include a strategy to:

– configure where data is stored

– control where the encryption keys are stored

– govern the access to data

What should you do?

  • A . Replicate Cloud SQL databases across different zones.
  • B . Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.
  • C . Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.
  • D . Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.

Correct Answer: D

Explanation:

https://cloud.google.com/blog/products/identity-security/meet-data-residency-requirements-with-google-cloud
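
As a hedged illustration of the CMEK piece of option D (project, region, key ring, key, and tier names are placeholders), a Cloud SQL instance can be created in a specific region with a customer-managed key from Cloud KMS, which addresses both where the data is stored and where the encryption keys are held; VPC Service Controls and IAM policies then govern access to it.

    # Create the instance in the required region with a customer-managed key
    gcloud sql instances create pg-residency \
      --database-version=POSTGRES_14 \
      --region=europe-west3 \
      --tier=db-custom-4-16384 \
      --disk-encryption-key=projects/my-project/locations/europe-west3/keyRings/my-ring/cryptoKeys/my-key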

Question #20

Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime.

What should you do?

  • A . Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.
  • B . Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.
  • C . Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.
  • D . Create an external replica, and use Cloud SQL to synchronize the data to the replica.

Correct Answer: D

Explanation:

https://cloud.google.com/sql/docs/mysql/replication/configure-replication-from-external

Question #21

Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices.

What should you do?

  • A . Use Firestore with Looker.
  • B . Use Cloud Spanner with Data Studio.
  • C . Use MongoDB Atlas with Charts.
  • D . Use Bigtable with Looker.

Correct Answer: C

Explanation:

This scenario has Bigtable written all over it: large amounts of data from many devices to be analyzed in real time. I would even argue it could qualify as a multi-cloud solution, given the links to HBase. But Bigtable does not support SQL queries and is therefore not compatible (on its own) with Looker. Firestore plus Looker has the same problem. Spanner plus Data Studio is at least a compatible pairing, but I agree with others that it does not fit this use case, not least because it is Google-native. By contrast, MongoDB Atlas is a managed solution (just not managed by Google) that is compatible with the proposed reporting tool (MongoDB's own Charts); it is specifically designed for this type of solution, and of course it can run on any cloud.

Question #22

Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring.

What should you do?

  • A . Use Cloud Spanner instead of Cloud SQL.
  • B . Increase the number of CPUs for your instance.
  • C . Increase the storage size for the instance.
  • D . Use many smaller Cloud SQL instances.

Correct Answer: D

Explanation:

https://cloud.google.com/sql/docs/mysql/best-practices#data-arch

Question #23

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance.

What should you do?

  • A . Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
  • B . Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
  • C . Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
  • D . Create a CSV file by running the SQL statement SELECT…INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.

Correct Answer: C

Explanation:

https://cloud.google.com/sql/docs/mysql/import-export#serverless
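
A rough sketch of the recommended flow (bucket, instance, and database names are placeholders): the --offload flag runs a serverless export from a temporary, Google-managed instance so the running source instance is not slowed down, and the dump is then imported into the new instance in us-east1.

    # Serverless export: the dump is produced by a temporary instance,
    # so the running source instance is not impacted
    gcloud sql export sql source-mysql gs://my-bucket/dump.sql.gz \
      --database=mydb --offload

    # Import the dump into the new instance in us-east1
    gcloud sql import sql target-mysql gs://my-bucket/dump.sql.gz --database=mydb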

Question #24

You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances.

What should you do?

  • A . Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.
  • B . Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.
  • C . Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.
  • D . Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.

Correct Answer: B

Explanation:

https://severalnines.com/blog/how-achieve-postgresql-high-availability-pgbouncer/

https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql

This answer is correct because PgBouncer is a lightweight connection pooler for PostgreSQL that can help you distribute read requests between the Cloud SQL primary and read replica instances. PgBouncer can also improve performance and scalability by reducing the overhead of creating new connections and reusing existing ones. You can install PgBouncer on a Compute Engine instance and configure it to connect to the Cloud SQL instances using private IP addresses or the Cloud SQL Auth proxy.

Question #25

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss.

What should you do?

  • A . Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
  • B . Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
  • C . Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
  • D . Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.

Correct Answer: C

Explanation:

Binary logging is enabled, so you can identify the point in time when the data was still good and recover to that point. https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#perform_the_point-in-time_recovery_using_binary_log_positions
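
A minimal sketch of the recovery (instance names and timestamp are placeholders): point-in-time recovery on Cloud SQL creates a clone of the instance at a moment just before the corruption, after which the application can be repointed at the recovered instance.

    # Clone the instance to its state shortly before the bad release
    # (RFC 3339 UTC timestamp; placeholder names and time)
    gcloud sql instances clone prod-mysql prod-mysql-recovered \
      --point-in-time='2024-05-01T09:30:00.000Z'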

Question #26

You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task.

What should you do? (Choose two.)

  • A . Drop or disable all users except database administration users.
  • B . Disable all foreign key constraints on the source PostgreSQL database.
  • C . Ensure that all PostgreSQL tables have a primary key.
  • D . Shut down the database before the Data Migration Service task is started.
  • E . Ensure that pglogical is installed on the source PostgreSQL database.

Correct Answer: C, E

Explanation:

https://cloud.google.com/database-migration/docs/postgres/faq

Question #27

You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation.

What should you do?

  • A . Use external server replication.
  • B . Use Data Migration Service.
  • C . Use Cloud SQL for MySQL external replica.
  • D . Use the mysqldump utility and binary logs.

Correct Answer: C

Explanation:

An external replica is a method that allows you to create a read-only copy of your Cloud SQL instance on an external server, such as a Compute Engine instance or an on-premises database server. An external replica can help you scale reads and offload management operations from your data center to Google Cloud. You can also use an external replica for disaster recovery, migration, or reporting purposes.

To create an external replica, you need to configure a Cloud SQL instance that replicates to one or more replicas external to Cloud SQL, and a source representation instance that represents the source database server in Cloud SQL. You also need to enable access on the Cloud SQL instance for the IP address of the external replica, create a replication user, and export and import the data from the source database server to the external replica.

Question #28

Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.

What should you do? (Choose two.)

  • A . Use an external read replica to migrate the databases to Cloud SQL.
  • B . Use a read replica to migrate the databases to Cloud SQL.
  • C . Use Database Migration Service to migrate the databases to Cloud SQL.
  • D . Use a cross-region read replica to migrate the databases to Cloud SQL.
  • E . Use replication from an external server to migrate the databases to Cloud SQL.

Correct Answer: C, E

Question #29

Your company is migrating the existing infrastructure for a highly transactional application to Google Cloud. You have several databases in a MySQL database instance and need to decide how to transfer the data to Cloud SQL. You need to minimize the downtime for the migration of your 500 GB instance.

What should you do?

  • A . Create a Cloud SQL for MySQL instance for your databases, and configure Datastream to stream your database changes to Cloud SQL.
    Select the Backfill historical data check box on your stream configuration to initiate Datastream to backfill any data that is out of sync between the source and destination.
    Delete your stream when all changes are moved to Cloud SQL for MySQL, and update your application to use the new instance.
  • B . Create migration job using Database Migration Service.
    Set the migration job type to Continuous, and allow the databases to complete the full dump phase and start sending data in change data capture (CDC) mode.
    Wait for the replication delay to minimize, initiate a promotion of the new Cloud SQL instance, and wait for the migration job to complete.
    Update your application connections to the new instance.
  • C . Create migration job using Database Migration Service.
    Set the migration job type to One-time, and perform this migration during a maintenance window. Stop all write workloads to the source database and initiate the dump. Wait for the dump to be loaded into the Cloud SQL destination database and the destination database to be promoted to the primary database.
    Update your application connections to the new instance.
  • D . Use the mysqldump utility to manually initiate a backup of MySQL during the application maintenance window.
    Move the files to Cloud Storage, and import each database into your Cloud SQL instance.
    Continue to dump each database until all the databases are migrated.
    Update your application connections to the new instance.

Correct Answer: B

Explanation:

https://cloud.google.com/datastream/docs/overview.

Question #30

Your company uses the Cloud SQL out-of-disk recommender to analyze the storage utilization trends of production databases over the last 30 days. Your database operations team uses these recommendations to proactively monitor storage utilization and implement corrective actions. You receive a recommendation that the instance is likely to run out of disk space.

What should you do to address this storage alert?

  • A . Normalize the database to the third normal form.
  • B . Compress the data using a different compression algorithm.
  • C . Manually or automatically increase the storage capacity.
  • D . Create another schema to load older data.

Correct Answer: C

Explanation:

https://cloud.google.com/sql/docs/mysql/instance-settings#storage-capacity-2ndgen
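
A short sketch of option C with placeholder instance name and size: storage can be increased manually, or automatic storage increases can be enabled so Cloud SQL grows the disk before it fills up (storage can only grow, never shrink).

    # Manually grow the disk (placeholder size)
    gcloud sql instances patch prod-instance --storage-size=500GB

    # Or let Cloud SQL grow it automatically as utilization approaches capacity
    gcloud sql instances patch prod-instance --storage-auto-increase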

Question #31

You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is

running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup.

What should you do?

  • A . Check the cloudsql.googleapis.com/postgres.log instance log.
  • B . Perform the gcloud sql operations list command.
  • C . Use Cloud Audit Logs to verify the status.
  • D . Use the Google Cloud Console.

Correct Answer: B

Explanation:

https://cloud.google.com/sql/docs/postgres/backup-recovery/backups#troubleshooting-backups Under Troubleshooting: Issue: "You can’t see the current operation’s status." The Google Cloud console reports only success or failure when the operation is done. It isn’t designed to show warnings or other updates. Run the gcloud sql operations list command to list all operations for the given Cloud SQL instance.
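
For example (instance name is a placeholder), the running on-demand backup shows up as an operation whose status can be inspected directly:

    # List recent operations, including the in-progress on-demand backup
    gcloud sql operations list --instance=prod-postgres --limit=5

    # Inspect a specific operation's status using its ID from the list output
    gcloud sql operations describe OPERATION_ID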

Question #32

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue.

What should you do first?

  • A . Increase the number of processing units.
  • B . Modify the database schema, and add additional indexes.
  • C . Shard data required by the application into multiple instances.
  • D . Decrease the number of processing units.

Correct Answer: A

Explanation:

In case of high CPU utilization like the one mentioned in the question, refer to: https://cloud.google.com/spanner/docs/identify-latency-point#:~:text=Check%20the%20CPU%20utilization%20of%20the%20instance.%20If%20the%20CPU%20utilization%20of%20the%20instance%20is%20above%20the%20recommended%20level%2C%20you%20should%20manually%20add%20more%20nodes%2C%20or%20set%20up%20auto%20scaling. "Check the CPU utilization of the instance. If the CPU utilization of the instance is above the recommended level, you should manually add more nodes, or set up auto scaling." Indexes and schema are reviewed after a slow-performing query has been identified.

Refer: https://cloud.google.com/spanner/docs/troubleshooting-performance-regressions#review-schema

Question #33

Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database.

What should you do?

  • A . Recommend solid-state drives (SSD).
  • B . Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.
  • C . Recommend hard disk drives (HDD).
  • D . Recommend mixed storage types.

Correct Answer: A

Explanation:

If you plan to store extensive historical data for a large number of remote-sensing devices and then use the data to generate daily reports, the cost savings for HDD storage might justify the performance tradeoff. On the other hand, if you plan to use the data to display a real-time dashboard, it probably would not make sense to use HDD storage: reads would be much more frequent in this case, and reads that are not scans are much slower with HDD storage.

Question #34

Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1.

What should you do?

  • A . Provision a Cloud SQL for MySQL instance in us-central1-a.
    Create a multiple-zone instance in us-west1-b.
    Create a read replica in us-east1-c.
  • B . Provision a Cloud SQL for MySQL instance in us-central1-a.
    Create a multiple-zone instance in us-central1-b.
    Create a read replica in us-east1-b.
  • C . Provision a Cloud SQL for MySQL instance in us-central1-a.
    Create a multiple-zone instance in us-east-b.
    Create a read replica in us-east1-c.
  • D . Provision a Cloud SQL for MySQL instance in us-central1-a.
    Create a multiple-zone instance in us-east1-b.
    Create a read replica in us-central1-b.

Correct Answer: B

Explanation:

https://cloud.google.com/sql/docs/sqlserver/intro-to-cloud-sql-disaster-recovery
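
A hedged sketch of option B with placeholder instance names and machine type: the primary is created as a regional (HA) instance in us-central1, which protects against a zonal failure, and a cross-region read replica in us-east1 covers a regional failure.

    # Regional (HA) primary in us-central1: standby in a second zone of the same region
    gcloud sql instances create prod-mysql \
      --database-version=MYSQL_8_0 \
      --region=us-central1 \
      --availability-type=REGIONAL \
      --tier=db-n1-standard-4

    # Cross-region read replica in us-east1 for regional disaster recovery
    gcloud sql instances create prod-mysql-dr \
      --master-instance-name=prod-mysql \
      --region=us-east1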

Question #35

You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available.

What should you do?

  • A . Use Firestore.
  • B . Use Cloud SQL with an external (public) IP address.
  • C . Use an in-app embedded database.
  • D . Use Cloud Spanner.

Correct Answer: A

Explanation:

https://firebase.google.com/docs/firestore

Question #36

You released a popular mobile game and are using a 50 TB Cloud Spanner instance to store game data in a PITR-enabled production environment. When you analyzed the game statistics, you realized that some players are exploiting a loophole to gather more points to get on the leaderboard. Another DBA accidentally ran an emergency bugfix script that corrupted some of the data in the production environment. You need to determine the extent of the data corruption and restore the production environment.

What should you do? (Choose two.)

  • A . If the corruption is significant, use backup and restore, and specify a recovery timestamp.
  • B . If the corruption is significant, perform a stale read and specify a recovery timestamp. Write the results back.
  • C . If the corruption is significant, use import and export.
  • D . If the corruption is insignificant, use backup and restore, and specify a recovery timestamp.
  • E . If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.

Correct Answer: A, E

Explanation:

https://cloud.google.com/spanner/docs/pitr#ways-to-recover

To recover the entire database, backup or export the database specifying a timestamp in the past and then restore or import it to a new database. This is typically used to recover from data corruption issues when you have to revert the entire database to a point-in-time before the corruption occurred.

This part describes significant corruption – A

To recover a portion of the database, perform a stale read specifying a query-condition and timestamp in the past, and then write the results back into the live database. This is typically used for surgical operations on a live database. For example, if you accidentally delete a particular row or incorrectly update a subset of data, you can recover it with this method.

This describes insignificant corruption case C E https://cloud.google.com/spanner/docs/pitr https://cloud.google.com/spanner/docs/backup/restore-backup

Question #37

You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out.

What should you do?

  • A . Close idle connections or restart the instance before beginning the import operation.
  • B . Increase the amount of memory allocated to your instance.
  • C . Ensure that the service account has the Storage Admin role.
  • D . Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.

Correct Answer: A

Explanation:

https://cloud.google.com/sql/docs/mysql/import-export#troubleshooting

Question #38

You are migrating your data center to Google Cloud. You plan to migrate your applications to Compute Engine and your Oracle databases to Bare Metal Solution for Oracle. You must ensure that the applications in different projects can communicate securely and efficiently with the Oracle databases.

What should you do?

  • A . Set up a Shared VPC, configure multiple service projects, and create firewall rules.
  • B . Set up Serverless VPC Access.
  • C . Set up Private Service Connect.
  • D . Set up Traffic Director.

Correct Answer: A

Explanation:

https://medium.com/google-cloud/shared-vpc-in-google-cloud-64527e0a409e#:~:text=Unlike%20VPC%20peering%2C%20Shared%20VPC%20connects%20projects%20within%20the%20same%20organization.&text=There%20are%20a%20lot%20of,between%20VPCs%20in%20different%20projects.
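
A brief sketch of the Shared VPC setup (project IDs, network name, CIDR range, and port are placeholders): the host project shares its VPC with the application service projects, and firewall rules control which sources may reach the Oracle listener.

    # Enable Shared VPC on the host project and attach a service project
    gcloud compute shared-vpc enable host-project-id
    gcloud compute shared-vpc associated-projects add app-project-id \
      --host-project=host-project-id

    # Allow application subnets to reach the Oracle listener (placeholder CIDR/port)
    gcloud compute firewall-rules create allow-oracle-1521 \
      --project=host-project-id \
      --network=shared-vpc \
      --allow=tcp:1521 \
      --source-ranges=10.10.0.0/16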

Question #39

You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices.

What should you do?

  • A . Use Cloud Functions to run the export in Avro format.
  • B . Use Cloud Functions to run the export in text format.
  • C . Use Dataflow to run the export in Avro format.
  • D . Use Dataflow to run the export in text format.

Correct Answer: C

Explanation:

https://cloud.google.com/spanner/docs/import-export-overview#file-format

Question #40

You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency.

What should you do?

  • A . Configure Firestore, and modify your application to offload queries.
  • B . Configure Bigtable, and modify your application to offload queries.
  • C . Configure Cloud SQL for PostgreSQL read replicas to offload queries.
  • D . Configure Memorystore, and modify your application to offload queries.

Correct Answer: D

Explanation:

"sub-millisecond latency" always involves Memorystore. Furthermore, as we are talking about a relational DB (Cloud SQL), BigTable is not a solution to be considered.

Question #41

You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance.

What should you do?

  • A . Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
  • B . Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
  • C . Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
  • D . Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.

Correct Answer: C

Explanation:

Given that Google SSD performance scales with the size of the disk at roughly 30 IOPS per GB, it would require at least 833 GB to handle 25,000 IOPS; the only answer that exceeds this value is C. https://cloud.google.com/compute/docs/disks/performance

Question #42

You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance.

What should you do?

  • A . Take no backups, and turn off transaction log retention.
  • B . Take one manual backup per day, and turn off transaction log retention.
  • C . Turn on automated backup, and turn off transaction log retention.
  • D . Turn on automated backup, and turn on transaction log retention.

Correct Answer: C

Explanation:

https://cloud.google.com/sql/docs/mysql/backup-recovery/backups

Question #43

You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future.

What should you do?

  • A . Set your production instance’s maintenance window to non-business hours.
  • B . Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
  • C . Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
  • D . Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.

Correct Answer: A

Question #44

You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency.

What should you do?

  • A . Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.
  • B . Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.
  • C . Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas
    in us-central1, us-east1, and us-west1.
  • D . Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1 and us-west1.

Correct Answer: A

Explanation:

https://cloud.google.com/sql/docs/mysql/quotas#table_limit

Question #45

You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices.

What should you do?

  • A . Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.
  • B . Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
  • C . Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.
  • D . Use Cloud Composer to execute a select * from table(s) query and export results.

Correct Answer: B

Explanation:

https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
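
A sketch of the scheduling piece of option B (job name, topic, schedule, location, and message body are placeholders): Cloud Scheduler publishes a message to a Pub/Sub topic on a daily cron schedule, and a Cloud Function subscribed to that topic calls the cloudsql.instances.export API to write the CSV extract to Cloud Storage.

    # Topic that triggers the export function
    gcloud pubsub topics create sql-export-trigger

    # Daily 2 AM job that publishes the export request
    gcloud scheduler jobs create pubsub daily-sql-export \
      --location=us-central1 \
      --schedule="0 2 * * *" \
      --topic=sql-export-trigger \
      --message-body='{"instance":"prod-mysql","db":"routes","bucket":"gs://ml-extracts"}'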

Question #46

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history.

What should you do?

  • A . Use Cloud SQL with read replicas for throughput.
  • B . Use Firestore, and rely on automatic serverless scaling.
  • C . Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.
  • D . Use Bigtable, and add nodes as necessary to achieve the required throughput.

Correct Answer: D

Explanation:

Bigtable supports millions of reads and writes per second at sub-10 ms latency, scales linearly as you add nodes, and comfortably stores 100 TB of history, which matches the IoT ingestion requirements. https://cloud.google.com/bigtable/docs/overview

Question #47

You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key.

What should you do?

  • A . Use Cloud Spanner with a customer-managed encryption key (CMEK).
  • B . Use Cloud Spanner with default encryption.
  • C . Use Cloud SQL with a customer-managed encryption key (CMEK).
  • D . Use Bigtable with default encryption.

Correct Answer: A

Explanation:

Default encryption also uses AES-256, but the question states that you need to control where the encryption keys are stored, and that can be achieved only with a customer-managed encryption key (CMEK).

Question #48

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working.

What should you do?

  • A . Use the Google Cloud Console or gcloud CLI to manually create a new clone database.
  • B . Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.
  • C . Verify that the new replica is created automatically.
  • D . Start the original primary instance and resume replication.

Correct Answer: C

Explanation:

Recovery process: once Zone B becomes available again, Cloud SQL initiates the recovery process for the impacted read replica.

The recovery involves the following steps:

1. Synchronization: Cloud SQL compares the data in the recovered read replica with the primary instance in Zone A. If there is any data divergence due to the unavailability period, Cloud SQL synchronizes the read replica with the primary instance to ensure data consistency.

2. Catch-up replication: the recovered read replica starts catching up on the changes that occurred on the primary instance during its unavailability. It applies the necessary updates from the primary instance's binary logs (binlogs) to bring the replica up to date.

3. Resuming read traffic: once the synchronization and catch-up replication processes are complete, the read replica in Zone B resumes its normal operation. It can serve read traffic and stays updated with subsequent changes from the primary instance.

Question #49

You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company’s disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service.

What should you do to maximize uptime for your application?

  • A . Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
  • B . Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
  • C . Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
  • D . Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.

Correct Answer: C

Explanation:

The best answer is to deploy an HA configuration and keep a cross-region read replica that you can promote to primary during a disaster recovery event.
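
A minimal sketch of the disaster recovery step in option C (replica name is a placeholder): during a regional outage, the cross-region read replica is promoted to a standalone primary and the application is repointed to it.

    # Promote the cross-region read replica to a standalone (writable) instance
    gcloud sql instances promote-replica prod-pg-replica-us-east1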

Question #50

Your team is running a Cloud SQL for MySQL instance with a 5 TB database that must be available 24/7. You need to save database backups on object storage with minimal operational overhead or risk to your production workloads.

What should you do?

  • A . Use Cloud SQL serverless exports.
  • B . Create a read replica, and then use the mysqldump utility to export each table.
  • C . Clone the Cloud SQL instance, and then use the mysqldump utlity to export the data.
  • D . Use the mysqldump utility on the primary database instance to export the backup.

Correct Answer: A

Explanation:

https://cloud.google.com/blog/products/databases/introducing-cloud-sql-serverless-exports
