Google Cloud Certified – Professional Cloud Database Engineer Online Training
- Exam Code: Professional Cloud Database Engineer
- Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
- Certification Provider: Google
- Latest update: Nov 22, 2024
Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices.
What should you do?
- A . Use Firestore with Looker.
- B . Use Cloud Spanner with Data Studio.
- C . Use MongoD8 Atlas with Charts.
- D . Use Bigtable with Looker.
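For context on the options that pair a wide-column store with a BI tool, here is a minimal sketch of writing one device-maintenance reading with the google-cloud-bigtable Python client. The project, instance, table, column family, and row-key layout are all made-up placeholders, not anything the question specifies.

```python
# Minimal sketch: write one device-maintenance reading to Bigtable.
# Project, instance, table, and column family names are hypothetical.
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=False)
instance = client.instance("iot-instance")
table = instance.table("device-maintenance")

# Row keys that lead with the device ID and a timestamp keep a device's
# recent readings adjacent, a common Bigtable time-series pattern.
row = table.direct_row(b"device-0042#20241122T101500")
row.set_cell("telemetry", "pressure_psi", b"1450")
row.set_cell("telemetry", "temperature_c", b"87")
row.commit()
```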
Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring.
What should you do?
- A . Use Cloud Spanner instead of Cloud SQL.
- B . Increase the number of CPUs for your instance.
- C . Increase the storage size for the instance.
- D . Use many smaller Cloud SQL instances.
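As a rough illustration of the per-service direction in option D, the sketch below creates one smaller, service-scoped Cloud SQL instance with the gcloud CLI (invoked from Python here only for consistency with the other examples). The instance name, tier, and region are placeholders, and it assumes gcloud is installed and authenticated.

```python
# Hypothetical sketch: carve out a smaller per-service Cloud SQL instance.
import subprocess

subprocess.run(
    [
        "gcloud", "sql", "instances", "create", "orders-service-db",
        "--database-version=MYSQL_8_0",
        "--tier=db-custom-2-8192",   # 2 vCPUs / 8 GB, sized for one service
        "--region=us-central1",
    ],
    check=True,
)
```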
You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance.
What should you do?
- A . Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
- B . Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
- C . Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
- D . Create a CSV file by running the SQL statement SELECT…INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.
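For reference, a dump-based move in the shape of option C might look like the sketch below: export a SQL dump from a temporary instance to Cloud Storage, then import it into the new us-east1 instance. Instance names, the bucket path, and the database name are placeholders.

```python
# Hedged sketch of a dump-based cross-region move via Cloud Storage.
import subprocess

# Export from the temporary instance so the running primary is untouched.
subprocess.run(
    ["gcloud", "sql", "export", "sql", "temp-replica-instance",
     "gs://my-migration-bucket/dump.sql.gz", "--database=appdb"],
    check=True,
)

# Import the dump into the new instance in us-east1.
subprocess.run(
    ["gcloud", "sql", "import", "sql", "new-us-east1-instance",
     "gs://my-migration-bucket/dump.sql.gz", "--database=appdb"],
    check=True,
)
```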
You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances.
What should you do?
- A . Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.
- B . Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.
- C . Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.
- D . Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.
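To make the read/write split concrete, here is an illustrative application-layer sketch with psycopg2: writes go to the primary, reads to the replica. Hostnames, database name, and credentials are placeholders; in option B's setup a pooler such as PgBouncer would front each endpoint rather than the application holding raw connections.

```python
# Illustrative read/write split: writes -> primary, reads -> read replica.
import psycopg2

primary = psycopg2.connect(host="10.0.0.5", dbname="appdb",
                           user="app", password="secret")
replica = psycopg2.connect(host="10.0.0.6", dbname="appdb",
                           user="app", password="secret")

with primary, primary.cursor() as cur:   # writes go to the primary
    cur.execute("INSERT INTO events (payload) VALUES (%s)", ("ping",))

with replica, replica.cursor() as cur:   # reads go to the replica
    cur.execute("SELECT count(*) FROM events")
    print(cur.fetchone()[0])
```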
Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss.
What should you do?
- A . Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
- B . Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
- C . Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
- D . Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
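A point-in-time recovery in the shape of option C amounts to cloning the instance to a timestamp just before the corruption, which relies on the binary logging the scenario says is enabled. In the hedged sketch below, the instance names and timestamp are placeholders.

```python
# Hedged sketch: clone the instance to a moment before the corruption.
import subprocess

subprocess.run(
    ["gcloud", "sql", "instances", "clone",
     "prod-mysql", "prod-mysql-recovered",
     "--point-in-time", "2024-11-22T09:59:00Z"],
    check=True,
)
```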
You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task.
What should you do? (Choose two.)
- A . Drop or disable all users except database administration users.
- B . Disable all foreign key constraints on the source PostgreSQL database.
- C . Ensure that all PostgreSQL tables have a primary key.
- D . Shut down the database before the Data Migration Service task is started.
- E . Ensure that pglogical is installed on the source PostgreSQL database.
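The two prerequisites most often checked up front can be verified with a small pre-flight script like the sketch below: confirm pglogical is installed on the source and list any base tables lacking a primary key. Connection details are placeholders.

```python
# Pre-flight check on the source PostgreSQL instance (placeholder creds).
import psycopg2

conn = psycopg2.connect(host="onprem-pg", dbname="appdb",
                        user="admin", password="secret")
with conn, conn.cursor() as cur:
    # Is the pglogical extension installed?
    cur.execute("SELECT 1 FROM pg_extension WHERE extname = 'pglogical'")
    print("pglogical installed:", cur.fetchone() is not None)

    # List base tables that have no primary key.
    cur.execute("""
        SELECT t.table_schema, t.table_name
        FROM information_schema.tables t
        LEFT JOIN information_schema.table_constraints c
          ON c.table_schema = t.table_schema
         AND c.table_name = t.table_name
         AND c.constraint_type = 'PRIMARY KEY'
        WHERE t.table_type = 'BASE TABLE'
          AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
          AND c.constraint_name IS NULL
    """)
    for schema, table in cur.fetchall():
        print(f"missing primary key: {schema}.{table}")
```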
You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation.
What should you do?
- A . Use external server replication.
- B . Use Data Migration Service.
- C . Use Cloud SQL for MySQL external replica.
- D . Use the mysqldump utility and binary logs.
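A very rough outline of the external-replica direction in option C follows: a source representation for the on-premises MySQL primary, then a Cloud SQL replica pointed at it and seeded from a dump. The flag names are recalled from the legacy external server replication setup and should be treated as assumptions; all host addresses, bucket paths, and instance names are placeholders.

```python
# Hedged outline of Cloud SQL external replication (flag names assumed).
import subprocess

# Representation of the external (on-premises) MySQL primary.
subprocess.run(
    ["gcloud", "sql", "instances", "create", "onprem-source-repr",
     "--source-ip-address=203.0.113.10", "--source-port=3306",
     "--region=us-central1"],
    check=True,
)

# Cloud SQL replica seeded from a dump of the external primary.
subprocess.run(
    ["gcloud", "sql", "instances", "create", "cloud-replica",
     "--master-instance-name=onprem-source-repr",
     "--master-username=repl", "--prompt-for-master-password",
     "--master-dump-file-path=gs://my-bucket/onprem-dump.sql.gz",
     "--region=us-central1"],
    check=True,
)
```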
Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.
What should you do? (Choose two.)
- A . Use an external read replica to migrate the databases to Cloud SQL.
- B . Use a read replica to migrate the databases to Cloud SQL.
- C . Use Database Migration Service to migrate the databases to Cloud SQL.
- D . Use a cross-region read replica to migrate the databases to Cloud SQL.
- E . Use replication from an external server to migrate the databases to Cloud SQL.
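As a sketch of one prerequisite for a Database Migration Service move, the snippet below creates a connection profile for the on-premises MySQL source. The gcloud database-migration command group is assumed to be available in the installed SDK; the host, credentials, region, and profile name are placeholders.

```python
# Hedged sketch: a DMS connection profile for the on-premises MySQL source.
import subprocess

subprocess.run(
    ["gcloud", "database-migration", "connection-profiles", "create", "mysql",
     "onprem-mysql-profile",
     "--region=us-central1",
     "--host=203.0.113.20", "--port=3306",
     "--username=migration", "--password=secret"],
    check=True,
)
```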
Your company is migrating the existing infrastructure for a highly transactional application to Google Cloud. You have several databases in a MySQL database instance and need to decide how to transfer the data to Cloud SQL. You need to minimize the downtime for the migration of your 500 GB instance.
What should you do?
- A . Create a Cloud SQL for MySQL instance for your databases, and configure Datastream to stream your database changes to Cloud SQL. Select the Backfill historical data check box on your stream configuration so that Datastream backfills any data that is out of sync between the source and destination. Delete your stream when all changes are moved to Cloud SQL for MySQL, and update your application to use the new instance.
- B . Create a migration job using Database Migration Service. Set the migration job type to Continuous, and allow the databases to complete the full dump phase and start sending data in change data capture (CDC) mode. Wait for the replication delay to minimize, initiate a promotion of the new Cloud SQL instance, and wait for the migration job to complete. Update your application connections to the new instance.
- C . Create a migration job using Database Migration Service. Set the migration job type to One-time, and perform this migration during a maintenance window. Stop all write workloads to the source database and initiate the dump. Wait for the dump to be loaded into the Cloud SQL destination database and for the destination database to be promoted to the primary database. Update your application connections to the new instance.
- D . Use the mysqldump utility to manually initiate a backup of MySQL during the application maintenance window. Move the files to Cloud Storage, and import each database into your Cloud SQL instance. Continue to dump each database until all the databases are migrated. Update your application connections to the new instance.
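The continuous flow described in option B could be driven as in the hedged sketch below: create a continuous migration job that runs the full dump and then switches to CDC, and promote it once replication lag has drained. The job and connection-profile names are placeholders, and the exact flag spellings are assumptions about the gcloud database-migration command group.

```python
# Hedged sketch: continuous DMS migration job, promoted after CDC catches up.
import subprocess

subprocess.run(
    ["gcloud", "database-migration", "migration-jobs", "create", "mysql-move",
     "--region=us-central1", "--type=CONTINUOUS",
     "--source=onprem-mysql-profile", "--destination=cloudsql-profile"],
    check=True,
)

# Later, once the replication delay is minimal:
subprocess.run(
    ["gcloud", "database-migration", "migration-jobs", "promote", "mysql-move",
     "--region=us-central1"],
    check=True,
)
```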
Your company uses the Cloud SQL out-of-disk recommender to analyze the storage utilization trends of production databases over the last 30 days. Your database operations team uses these recommendations to proactively monitor storage utilization and implement corrective actions. You receive a recommendation that the instance is likely to run out of disk space.
What should you do to address this storage alert?
- A . Normalize the database to the third normal form.
- B . Compress the data using a different compression algorithm.
- C . Manually or automatically increase the storage capacity.
- D . Create another schema to load older data.
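The corrective actions named in option C map to two patch operations, sketched below with a placeholder instance name and size: a one-off manual increase, and enabling automatic storage growth so the alert does not recur. Note that Cloud SQL storage can be grown but never shrunk.

```python
# Sketch: manual storage bump plus automatic storage growth.
import subprocess

# One-off manual increase of the allocated disk.
subprocess.run(
    ["gcloud", "sql", "instances", "patch", "prod-instance",
     "--storage-size=200GB"],
    check=True,
)

# Let Cloud SQL grow the disk automatically as utilization rises.
subprocess.run(
    ["gcloud", "sql", "instances", "patch", "prod-instance",
     "--storage-auto-increase"],
    check=True,
)
```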