Google Cloud Certified – Professional Cloud Database Engineer Online Training
The questions for Professional Cloud Database Engineer were last updated on Nov 23, 2024.
- Exam Code: Professional Cloud Database Engineer
- Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
- Certification Provider: Google
- Latest update: Nov 23, 2024
You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is
running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup.
What should you do?
- A . Check the cloudsql.googleapis.com/postgres.log instance log.
- B . Perform the gcloud sql operations list command.
- C . Use Cloud Audit Logs to verify the status.
- D . Use the Google Cloud Console.
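Option B corresponds to a single gcloud invocation: `gcloud sql operations list` reports each operation's type (e.g. BACKUP_VOLUME) and status (PENDING, RUNNING, or DONE). A minimal sketch that builds the command; the instance name is a placeholder, not from the question:

```shell
# Sketch: check backup status via the Cloud SQL operations list (option B).
# The STATUS column of the real command's output shows PENDING, RUNNING, or DONE.
# "my-postgres-instance" is a placeholder instance name.
INSTANCE="my-postgres-instance"
CMD="gcloud sql operations list --instance=${INSTANCE} --limit=10"
echo "$CMD"
```

Running the printed command against a live instance requires an authenticated Google Cloud SDK.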
You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue.
What should you do first?
- A . Increase the number of processing units.
- B . Modify the database schema, and add additional indexes.
- C . Shard data required by the application into multiple instances.
- D . Decrease the number of processing units.
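Scaling compute (option A) is a one-line change on an existing Spanner instance; 1,000 processing units correspond to one node. A hedged sketch with placeholder instance name and target size:

```shell
# Sketch: scale up a Cloud Spanner instance by adding processing units (option A).
# Instance name and target size are placeholders, not from the question.
INSTANCE="prod-spanner"
CMD="gcloud spanner instances update ${INSTANCE} --processing-units=2000"
echo "$CMD"
```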
Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database.
What should you do?
- A . Recommend solid-state drives (SSD).
- B . Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.
- C . Recommend hard disk drives (HDD).
- D . Recommend mixed storage types.
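SSD storage (option A) is selected at instance creation time. A hedged sketch of the create command; all names and zones are placeholders, and the exact flag combination should be verified against your gcloud SDK version:

```shell
# Sketch: create a Bigtable instance on SSD storage (option A). SSD is the
# storage type suited to latency-sensitive, read-heavy workloads.
# All names and zones below are placeholders; verify flags against your SDK.
CMD="gcloud bigtable instances create dash-bt --display-name=dashboard --cluster-config=id=dash-bt-c1,zone=us-central1-b,nodes=3 --cluster-storage-type=SSD"
echo "$CMD"
```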
Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1.
What should you do?
- A . Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-west1-b. Create a read replica in us-east1-c.
- B . Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-central1-b. Create a read replica in us-east1-b.
- C . Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-east1-b. Create a read replica in us-east1-c.
- D . Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-east1-b. Create a read replica in us-central1-b.
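The pattern in option B maps to two commands: a regional (multi-zone) primary in us-central1 to survive a zonal failure, plus a cross-region read replica in us-east1 to survive a regional failure. A hedged sketch; instance names and the machine tier are placeholders:

```shell
# Sketch of option B. Instance names are placeholders, not from the question.
# 1) Regional (multi-zone) primary in us-central1 survives a zonal failure.
# 2) Cross-region read replica in us-east1 survives a regional failure.
PRIMARY="gcloud sql instances create prod-mysql --database-version=MYSQL_8_0 --region=us-central1 --availability-type=REGIONAL"
REPLICA="gcloud sql instances create prod-mysql-dr --master-instance-name=prod-mysql --region=us-east1"
printf '%s\n%s\n' "$PRIMARY" "$REPLICA"
```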
You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available.
What should you do?
- A . Use Firestore.
- B . Use Cloud SQL with an external (public) IP address.
- C . Use an in-app embedded database.
- D . Use Cloud Spanner.
You released a popular mobile game and are using a 50 TB Cloud Spanner instance to store game data in a PITR-enabled production environment. When you analyzed the game statistics, you realized that some players are exploiting a loophole to gather more points to get on the leaderboard. Another DBA accidentally ran an emergency bugfix script that corrupted some of the data in the production environment. You need to determine the extent of the data corruption and restore the production environment.
What should you do? (Choose two.)
- A . If the corruption is significant, use backup and restore, and specify a recovery timestamp.
- B . If the corruption is significant, perform a stale read and specify a recovery timestamp. Write the results back.
- C . If the corruption is significant, use import and export.
- D . If the corruption is insignificant, use backup and restore, and specify a recovery timestamp.
- E . If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.
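Both PITR techniques named in the answers have direct gcloud forms: a stale read at a recovery timestamp inspects pre-corruption data, and a backup taken with a version time captures the database as of that point for restore. A hedged sketch; database, instance names, and timestamps are placeholders, and flag availability should be checked against your SDK version:

```shell
# Sketch: Cloud Spanner PITR tooling referenced by the answers.
# Names and timestamps below are placeholders, not from the question.
# Stale read at a recovery timestamp to gauge the extent of the corruption:
STALE_READ="gcloud spanner databases execute-sql game-db --instance=game-spanner --read-timestamp=2024-11-20T00:00:00Z --sql='SELECT COUNT(*) FROM players'"
# Backup pinned to the same pre-corruption version time, for a full restore:
BACKUP="gcloud spanner backups create pre-corruption --instance=game-spanner --database=game-db --version-time=2024-11-20T00:00:00Z --retention-period=7d"
printf '%s\n%s\n' "$STALE_READ" "$BACKUP"
```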
You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out.
What should you do?
- A . Close idle connections or restart the instance before beginning the import operation.
- B . Increase the amount of memory allocated to your instance.
- C . Ensure that the service account has the Storage Admin role.
- D . Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
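For reference, the import itself (started after closing idle connections or restarting, per option A) is a single command. A hedged sketch; the bucket, instance, database, and table names are placeholders:

```shell
# Sketch: the CSV import into Cloud SQL for MySQL, run once idle connections
# are closed (option A). All names below are placeholders.
CMD="gcloud sql import csv my-mysql-instance gs://my-bucket/data.csv --database=inventory --table=items"
echo "$CMD"
```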
You are migrating your data center to Google Cloud. You plan to migrate your applications to Compute Engine and your Oracle databases to Bare Metal Solution for Oracle. You must ensure that the applications in different projects can communicate securely and efficiently with the Oracle databases.
What should you do?
- A . Set up a Shared VPC, configure multiple service projects, and create firewall rules.
- B . Set up Serverless VPC Access.
- C . Set up Private Service Connect.
- D . Set up Traffic Director.
You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices.
What should you do?
- A . Use Cloud Functions to run the export in Avro format.
- B . Use Cloud Functions to run the export in text format.
- C . Use Dataflow to run the export in Avro format.
- D . Use Dataflow to run the export in text format.
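Option C corresponds to running the Google-provided Dataflow template that exports a Cloud Spanner database to Avro files in Cloud Storage. A hedged sketch; the job name, instance, database, bucket, and region are placeholders:

```shell
# Sketch of option C: run the Cloud Spanner-to-Avro Dataflow template.
# Job, instance, database, bucket, and region names are placeholders.
CMD="gcloud dataflow jobs run spanner-export --gcs-location=gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Avro --region=us-central1 --parameters=instanceId=prod-spanner,databaseId=ecommerce-db,outputDir=gs://my-bucket/spanner-export"
echo "$CMD"
```

The exported Avro files can then be imported into the new test environment's Cloud Spanner instance.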
You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency.
What should you do?
- A . Configure Firestore, and modify your application to offload queries.
- B . Configure Bigtable, and modify your application to offload queries.
- C . Configure Cloud SQL for PostgreSQL read replicas to offload queries.
- D . Configure Memorystore, and modify your application to offload queries.
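Option D means provisioning a Memorystore for Redis instance and serving hot reads from it. A hedged sketch of the provisioning command; the name, size, region, and Redis version are placeholders:

```shell
# Sketch of option D: provision a Memorystore for Redis instance to serve
# cached reads at sub-millisecond latency. Name, size (GB), region, and
# version below are placeholders, not from the question.
CMD="gcloud redis instances create app-cache --size=1 --region=us-central1 --redis-version=redis_6_x"
echo "$CMD"
```

The application then follows a cache-aside pattern: read from Redis first, fall back to Cloud SQL for PostgreSQL on a miss, and write the result back into the cache.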