Google Cloud Certified – Professional Cloud Database Engineer Online Training
The questions for Professional Cloud Database Engineer were last updated on Nov 22, 2024.
- Exam Code: Professional Cloud Database Engineer
- Exam Name: Google Cloud Certified - Professional Cloud Database Engineer
- Certification Provider: Google
- Latest update: Nov 22, 2024
Your organization operates in a highly regulated industry. Separation of concerns (SoC) and security principle of least privilege (PoLP) are critical.
The operations team consists of:
- Person A, a database administrator
- Person B, an analyst who generates metric reports
- Application C, which is responsible for automatic backups
You need to assign roles to team members for Cloud Spanner.
Which roles should you assign?
- A . roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupWriter for Application C
- B . roles/spanner.databaseAdmin for Person A
roles/spanner.databaseReader for Person B
roles/spanner.backupAdmin for Application C
- C . roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.databaseReader for Application C
- D . roles/spanner.databaseAdmin for Person A
roles/spanner.databaseUser for Person B
roles/spanner.backupWriter for Application C
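For context, Spanner IAM roles like these are attached with a read-modify-write of the resource's IAM policy. A minimal sketch with the Python admin client, assuming database-level bindings and illustrative project, instance, database, and member names (some of these roles can instead be granted at the instance or project level):

```python
from google.cloud import spanner_admin_database_v1

# Illustrative resource and member names.
DATABASE = "projects/my-project/instances/prod-instance/databases/inventory"
BINDINGS = [
    ("roles/spanner.databaseAdmin", "user:person-a@example.com"),
    ("roles/spanner.databaseReader", "user:person-b@example.com"),
    ("roles/spanner.backupWriter",
     "serviceAccount:backup-app@my-project.iam.gserviceaccount.com"),
]

client = spanner_admin_database_v1.DatabaseAdminClient()

# Read-modify-write: fetch the current policy, append bindings, write it back.
policy = client.get_iam_policy(request={"resource": DATABASE})
for role, member in BINDINGS:
    policy.bindings.add(role=role, members=[member])
client.set_iam_policy(request={"resource": DATABASE, "policy": policy})
```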
You are designing an augmented reality game for iOS and Android devices. You plan to use Cloud Spanner as the primary backend database for game state storage and player authentication. You want to track in-game rewards that players unlock at every stage of the game. During the testing phase, you discovered that costs are much higher than anticipated, but the query response times are within the SLA. You want to follow Google-recommended practices. You need the database to be performant and highly available while you keep costs low.
What should you do?
- A . Manually scale down the number of nodes after the peak period has passed.
- B . Use interleaving to co-locate parent and child rows.
- C . Use the Cloud Spanner query optimizer to determine the most efficient way to execute the SQL query.
- D . Use granular instance sizing in Cloud Spanner and Autoscaler.
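For background on option D, granular instance sizing lets you provision Cloud Spanner capacity in processing units rather than whole nodes (1,000 processing units equal one node), and an Autoscaler can adjust that number with load. A minimal creation sketch; the project, config, and instance names are illustrative:

```python
from google.cloud import spanner_admin_instance_v1

client = spanner_admin_instance_v1.InstanceAdminClient()

# 500 processing units is half a node; an Autoscaler would later patch
# this value up and down as load changes.
operation = client.create_instance(
    parent="projects/my-project",
    instance_id="game-instance",
    instance=spanner_admin_instance_v1.Instance(
        config="projects/my-project/instanceConfigs/regional-us-central1",
        display_name="game-instance",
        processing_units=500,
    ),
)
operation.result(300)  # wait for the long-running operation
```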
You recently launched a new product to the US market. You currently have two Bigtable clusters in one US region to serve all the traffic. Your marketing team is planning an immediate expansion to APAC. You need to roll out the regional expansion while implementing high availability according to Google-recommended practices.
What should you do?
- A . Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone europe-west1-d
cluster-c in zone asia-east1-b
- B . Maintain a target of 23% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central1-b
cluster-c in zone us-east1-a
- C . Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone australia-southeast1-a
cluster-c in zone europe-west1-d
cluster-d in zone asia-east1-b
- D . Maintain a target of 35% CPU utilization by locating:
cluster-a in zone us-central1-a
cluster-b in zone us-central2-a
cluster-c in zone asia-northeast1-b
cluster-d in zone asia-east1-b
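For background, Bigtable replication across regions is configured by placing multiple clusters of one instance in different zones. A minimal sketch with the Python admin client; the project, instance, cluster IDs, zones, and node counts are illustrative:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
instance = client.instance("prod-instance")

# One cluster per serving region; Bigtable replicates data between them.
clusters = [
    instance.cluster("cluster-us", location_id="us-central1-a", serve_nodes=3),
    instance.cluster("cluster-apac", location_id="asia-east1-b", serve_nodes=3),
]
operation = instance.create(clusters=clusters)
operation.result(300)  # wait for the long-running operation
```

With more than one cluster, an app profile that uses multi-cluster routing lets Bigtable fail traffic over between regions automatically.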
Your ecommerce website captures user clickstream data to analyze customer traffic patterns in real time and support personalization features on your website. You plan to analyze this data using big data tools. You need a low-latency solution that can store 8 TB of data and can scale to millions of read and write requests per second.
What should you do?
- A . Write your data into Bigtable and use Dataproc and the Apache HBase libraries for analysis.
- B . Deploy a Cloud SQL environment with read replicas for improved performance. Use Datastream to export data to Cloud Storage and analyze with Dataproc and the Cloud Storage connector.
- C . Use Memorystore to handle your low-latency requirements and for real-time analytics.
- D . Stream your data into BigQuery and use Dataproc and the BigQuery Storage API to analyze large volumes of data.
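For background on option A, Bigtable stores rows under a caller-designed row key, and clickstream data is commonly keyed by user plus event time so one user's events stay contiguous. A minimal write sketch; the instance, table, and column family are illustrative and must already exist:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("clickstream").table("events")

# The row key groups a user's events and keeps them time-ordered.
row = table.direct_row(b"user123#2024-01-01T12:00:00Z")
row.set_cell("clicks", b"page", b"/checkout")
row.commit()
```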
Your company uses Cloud Spanner for a mission-critical inventory management system that is globally available. You recently loaded stock keeping unit (SKU) and product catalog data from a company acquisition and observed hotspots in the Cloud Spanner database. You want to follow Google-recommended schema design practices to avoid performance degradation.
What should you do? (Choose two.)
- A . Use an auto-incrementing value as the primary key.
- B . Normalize the data model.
- C . Promote low-cardinality attributes in multi-attribute primary keys.
- D . Promote high-cardinality attributes in multi-attribute primary keys.
- E . Use a bit-reversed sequential value as the primary key.
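For background on option E, Cloud Spanner's GoogleSQL dialect offers bit-reversed sequences, which scatter otherwise monotonically increasing IDs across the keyspace. A DDL sketch with illustrative table, column, and sequence names, assuming the sequence feature is available in your dialect:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # illustrative project
database = client.instance("inv-instance").database("inventory")

# A bit-reversed sequence spreads sequential IDs across splits,
# so a bulk load does not concentrate writes on one key range.
operation = database.update_ddl([
    "CREATE SEQUENCE SkuIdSeq OPTIONS (sequence_kind = 'bit_reversed_positive')",
    """CREATE TABLE Skus (
         SkuId INT64 NOT NULL DEFAULT (GET_NEXT_SEQUENCE_VALUE(SEQUENCE SkuIdSeq)),
         SkuCode STRING(64) NOT NULL
       ) PRIMARY KEY (SkuId)""",
])
operation.result(300)  # wait for the schema change
```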
You are managing multiple applications connecting to a database on Cloud SQL for PostgreSQL. You need to be able to monitor database performance to easily identify applications with long-running and resource-intensive queries.
What should you do?
- A . Use log messages produced by Cloud SQL.
- B . Use Query Insights for Cloud SQL.
- C . Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
- D . Use Cloud SQL instance monitoring in the Google Cloud Console.
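For background on option B, Query Insights is switched on per instance in the instance settings. A hedged sketch using the Cloud SQL Admin API discovery client; the project and instance names are illustrative, and default application credentials are assumed:

```python
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")

# Patch only the insightsConfig block; other settings stay unchanged.
body = {
    "settings": {
        "insightsConfig": {
            "queryInsightsEnabled": True,
            "recordApplicationTags": True,  # attribute queries to applications
            "recordClientAddress": True,    # attribute queries to client hosts
        }
    }
}
service.instances().patch(
    project="my-project", instance="my-instance", body=body
).execute()
```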
You are building an application that allows users to customize their website and mobile experiences. The application will capture user information and preferences. User profiles have a dynamic schema, and users can add or delete information from their profile. You need to ensure that user changes automatically trigger updates to your downstream BigQuery data warehouse.
What should you do?
- A . Store your data in Bigtable, and use the user identifier as the key. Use one column family to store user profile data, and use another column family to store user preferences.
- B . Use Cloud SQL, and create different tables for user profile data and user preferences from your recommendations model. Use SQL to join the user profile data and preferences.
- C . Use Firestore in Native mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
- D . Use Firestore in Datastore mode, and store user profile data as a document. Update the user profile with preferences specific to that user and use the user identifier to query.
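For background on options C and D, Firestore documents are schemaless, so fields can be added and removed per user; forwarding those changes to BigQuery is typically wired up separately (for example, with an event-driven function or the Firestore-to-BigQuery export extension). A minimal sketch with illustrative collection and field names:

```python
from google.cloud import firestore

db = firestore.Client(project="my-project")
profile = db.collection("users").document("user123")

# merge=True adds or updates fields without overwriting the document.
profile.set({"theme": "dark", "preferences": {"push": True}}, merge=True)

# Dropping a field later needs no schema change.
profile.update({"preferences.push": firestore.DELETE_FIELD})
```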
Your application uses Cloud SQL for MySQL. Your users run reports that rely on near-real-time data; however, the additional analytics workload caused excessive load on the primary database. You created a read replica for the analytics workloads, but now your users are complaining about the lag in data changes and that their reports are still slow. You need to improve the report performance and shorten the replication lag without making changes to the current reports.
Which two approaches should you implement? (Choose two.)
- A . Create secondary indexes on the replica.
- B . Create additional read replicas, and partition your analytics users to use different read replicas.
- C . Disable replication on the read replica, and set the flag for parallel replication on the read replica. Re-enable replication and optimize performance by setting flags on the primary instance.
- D . Disable replication on the primary instance, and set the flag for parallel replication on the primary instance. Re-enable replication and optimize performance by setting flags on the read replica.
- E . Move your analytics workloads to BigQuery, and set up a streaming pipeline to move data and update BigQuery.
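For background on option B, an additional read replica is just another Cloud SQL instance that names the primary as its master. A hedged sketch using the Cloud SQL Admin API discovery client; all names and the machine tier are illustrative:

```python
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")

# A second replica; analytics users can then be partitioned across replicas.
replica = {
    "name": "analytics-replica-2",
    "masterInstanceName": "mysql-primary",
    "region": "us-central1",
    "settings": {"tier": "db-n1-standard-4"},
}
service.instances().insert(project="my-project", body=replica).execute()
```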
You are evaluating Cloud SQL for PostgreSQL as a possible destination for your on-premises PostgreSQL instances. Geography is becoming increasingly relevant to customer privacy worldwide.
Your solution must support data residency requirements and include a strategy to:
– configure where data is stored
– control where the encryption keys are stored
– govern the access to data
What should you do?
- A . Replicate Cloud SQL databases across different zones.
- B . Create a Cloud SQL for PostgreSQL instance on Google Cloud for the data that does not need to adhere to data residency requirements. Keep the data that must adhere to data residency requirements on-premises. Make application changes to support both databases.
- C . Allow application access to data only if the users are in the same region as the Google Cloud region for the Cloud SQL for PostgreSQL database.
- D . Use features like customer-managed encryption keys (CMEK), VPC Service Controls, and Identity and Access Management (IAM) policies.
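For background on option D, the instance region controls where data is stored, a customer-managed key in the same region controls where encryption keys live, and IAM policies govern access. A hedged creation sketch via the Cloud SQL Admin API; every resource name below is illustrative:

```python
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")

# The region pins where data lives; the CMEK key pins where keys live.
instance = {
    "name": "pg-eu",
    "region": "europe-west3",
    "databaseVersion": "POSTGRES_15",
    "diskEncryptionConfiguration": {
        "kmsKeyName": (
            "projects/my-project/locations/europe-west3/"
            "keyRings/sql-ring/cryptoKeys/sql-key"
        )
    },
    "settings": {"tier": "db-custom-2-8192"},
}
service.instances().insert(project="my-project", body=instance).execute()
```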
Your customer is running a MySQL database on-premises with read replicas. The nightly incremental backups are expensive and add maintenance overhead. You want to follow Google-recommended practices to migrate the database to Google Cloud, and you need to ensure minimal downtime.
What should you do?
- A . Create a Google Kubernetes Engine (GKE) cluster, install MySQL on the cluster, and then import the dump file.
- B . Use the mysqldump utility to take a backup of the existing on-premises database, and then import it into Cloud SQL.
- C . Create a Compute Engine VM, install MySQL on the VM, and then import the dump file.
- D . Create an external replica, and use Cloud SQL to synchronize the data to the replica.
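For background on option D, Cloud SQL external server replication pairs a source representation instance (a description of the on-premises primary) with a Cloud SQL replica that syncs from it. A heavily hedged sketch of the two Admin API calls; addresses, versions, and names are illustrative, and a real setup also needs replication credentials, networking, and an initial dump:

```python
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1")

# 1. Represent the on-premises primary as a source representation instance.
source = {
    "name": "on-prem-source",
    "region": "us-central1",
    "databaseVersion": "MYSQL_8_0",
    "onPremisesConfiguration": {"hostPort": "203.0.113.10:3306"},
}
service.instances().insert(project="my-project", body=source).execute()

# 2. Create the Cloud SQL replica that continuously syncs from that source,
#    then promote it during the cutover window for minimal downtime.
replica = {
    "name": "cloudsql-replica",
    "masterInstanceName": "on-prem-source",
    "region": "us-central1",
    "settings": {"tier": "db-n1-standard-4"},
}
service.instances().insert(project="my-project", body=replica).execute()
```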