Google Cloud Certified – Professional Cloud Database Engineer Online Training
The questions for the Professional Cloud Database Engineer exam were last updated on November 23, 2024.
- Exam Code: Professional Cloud Database Engineer
- Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
- Certification Provider: Google
- Latest update: November 23, 2024
You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance.
What should you do?
- A . Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
- B . Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
- C . Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
- D . Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.
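For context on the sizing math: SSD persistent disk read IOPS scale with provisioned capacity at roughly 30 IOPS per GB (subject to per-instance caps tied to vCPU count), so an 800 GB volume tops out near 24,000 read IOPS, while a 4 TB volume comfortably clears the 25,000 target. A minimal gcloud sketch, assuming a hypothetical instance name, region, and password placeholder:

```bash
# Sketch only: name, region, and password are placeholders.
# Read IOPS on SSD scale with disk size (~30 IOPS/GB), so the 4 TB
# volume, not just the 16 vCPUs, is what delivers 25,000+ read IOPS.
gcloud sql instances create sqlserver-migrated \
    --database-version=SQLSERVER_2019_STANDARD \
    --region=us-central1 \
    --cpu=16 \
    --memory=104GB \
    --storage-type=SSD \
    --storage-size=4096GB \
    --root-password=CHANGE_ME
```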
You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance.
What should you do?
- A . Take no backups, and turn off transaction log retention.
- B . Take one manual backup per day, and turn off transaction log retention.
- C . Turn on automated backup, and turn off transaction log retention.
- D . Turn on automated backup, and turn on transaction log retention.
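For reference, a hedged sketch (hypothetical instance name) of patching an instance to take one automated backup per day while turning off transaction log retention, the combination that keeps restore capability but avoids log storage costs:

```bash
# Sketch only: instance name is a placeholder.
# Enables daily automated backups (at 02:00 UTC) and disables
# point-in-time recovery, so transaction logs are not retained.
gcloud sql instances patch dev-test-sql \
    --backup-start-time=02:00 \
    --no-enable-point-in-time-recovery   # MySQL: --no-enable-bin-log
```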
You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future.
What should you do?
- A . Set your production instance’s maintenance window to non-business hours.
- B . Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
- C . Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
- D . Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.
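For reference, a minimal sketch (hypothetical instance name) of pinning the maintenance window to a quiet period; instances without a configured window can receive updates at any time:

```bash
# Sketch only: instance name is a placeholder.
# Restricts maintenance to Sundays at 03:00 UTC, outside business hours.
gcloud sql instances patch booking-app-prod \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=3
```

The deny-maintenance-period flags (such as --deny-maintenance-period-start-date) can additionally block maintenance around critical launches.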
You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency.
What should you do?
- A . Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.
- B . Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.
- C . Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-central1, us-east1, and us-west1.
- D . Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1, and us-west1.
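For reference, a hedged sketch (hypothetical names, version, and regions) of the two building blocks every option above combines differently: an HA-enabled primary and a cross-region read replica.

```bash
# Sketch only: names, version, and regions are placeholders.
# REGIONAL availability gives a synchronous standby in another zone.
gcloud sql instances create pg-primary \
    --database-version=POSTGRES_14 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --cpu=8 \
    --memory=32GB

# A cross-region replica serves low-latency reads to users in the east.
gcloud sql instances create pg-replica-east \
    --master-instance-name=pg-primary \
    --region=us-east1
```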
You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices.
What should you do?
- A . Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.
- B . Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
- C . Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.
- D . Use Cloud Composer to execute a select * from table(s) query and export results.
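For context, the gcloud equivalent of the cloudsql.instances.export call that a scheduled Cloud Function in this pipeline would issue through the sqladmin API; the instance, bucket, database, and query below are hypothetical:

```bash
# Sketch only: instance, bucket, database, and query are placeholders.
# Exports query results as CSV directly to Cloud Storage via the
# Cloud SQL Admin API's export operation.
gcloud sql export csv logistics-mysql \
    gs://logistics-ml-exports/deliveries-$(date +%F).csv \
    --database=routes \
    --query="SELECT * FROM deliveries"
```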
You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10 ms latency and store up to 100 TB of historical data.
What should you do?
- A . Use Cloud SQL with read replicas for throughput.
- B . Use Firestore, and rely on automatic serverless scaling.
- C . Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.
- D . Use Bigtable, and add nodes as necessary to achieve the required throughput.
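For context, Bigtable throughput scales roughly linearly with cluster node count, so capacity for millions of requests per second is added by resizing the cluster. A minimal sketch with hypothetical instance and cluster names:

```bash
# Sketch only: instance name, cluster name, and node count are placeholders.
# Each added node contributes additional read/write throughput.
gcloud bigtable clusters update iot-cluster \
    --instance=iot-sensors \
    --num-nodes=30
```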
You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key.
What should you do?
- A . Use Cloud Spanner with a customer-managed encryption key (CMEK).
- B . Use Cloud Spanner with default encryption.
- C . Use Cloud SQL with a customer-managed encryption key (CMEK).
- D . Use Bigtable with default encryption.
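For reference, a hedged sketch of creating a CMEK-protected Spanner database; the project, names, and location are hypothetical, and the KMS key's location must match the Spanner instance configuration:

```bash
# Sketch only: all names, the project ID, and the location are placeholders.
# Create a Cloud KMS key, then reference it at database creation so you
# control the AES-256 encryption key yourself.
gcloud kms keyrings create payments-ring --location=us-central1
gcloud kms keys create payments-key \
    --keyring=payments-ring \
    --location=us-central1 \
    --purpose=encryption

gcloud spanner databases create payments-db \
    --instance=payments-instance \
    --kms-key=projects/PROJECT_ID/locations/us-central1/keyRings/payments-ring/cryptoKeys/payments-key
```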
You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working.
What should you do?
- A . Use the Google Cloud Console or gcloud CLI to manually create a new clone database.
- B . Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.
- C . Verify that the new replica is created automatically.
- D . Start the original primary instance and resume replication.
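For reference, a minimal sketch (hypothetical replica name) of verifying a replica's health after a zonal disruption rather than rebuilding it by hand:

```bash
# Sketch only: replica name is a placeholder.
# Confirm the replica is RUNNABLE and review its recent operations.
gcloud sql instances describe mysql-replica-b --format="value(state)"
gcloud sql operations list --instance=mysql-replica-b --limit=5
```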
You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company’s disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service.
What should you do to maximize uptime for your application?
- A . Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
- B . Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
- C . Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
- D . Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.
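For reference, a hedged sketch (hypothetical names and regions) of the cross-region replica pattern: the replica stays warm in a second region and is promoted to a standalone primary during a disaster, an operation that typically fits within a 30-minute RTO/RPO budget:

```bash
# Sketch only: names and regions are placeholders.
# Standing cross-region replica maintained for disaster recovery.
gcloud sql instances create pg-dr-replica \
    --master-instance-name=pg-prod-primary \
    --region=us-east1

# During a regional failure, promote the replica to a writable primary.
gcloud sql instances promote-replica pg-dr-replica
```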
Your team is running a Cloud SQL for MySQL instance with a 5 TB database that must be available 24/7. You need to save database backups on object storage with minimal operational overhead or risk to your production workloads.
What should you do?
- A . Use Cloud SQL serverless exports.
- B . Create a read replica, and then use the mysqldump utility to export each table.
- C . Clone the Cloud SQL instance, and then use the mysqldump utility to export the data.
- D . Use the mysqldump utility on the primary database instance to export the backup.
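For context, Cloud SQL serverless exports run the export from a temporary instance, so a multi-terabyte dump does not steal cycles from a 24/7 primary. A minimal sketch with hypothetical instance, bucket, and database names:

```bash
# Sketch only: instance, bucket, and database are placeholders.
# --offload runs the export on a temporary instance (serverless export),
# writing the dump to object storage without loading the primary.
gcloud sql export sql prod-mysql \
    gs://prod-db-backups/appdb-$(date +%F).sql.gz \
    --database=appdb \
    --offload
```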