Google Professional Cloud Architect Online Training
The questions for Professional Cloud Architect were last updated on Apr 25, 2025.
- Exam Code: Professional Cloud Architect
- Exam Name: Google Certified Professional – Cloud Architect (GCP)
- Certification Provider: Google
- Latest update: Apr 25, 2025
For this question, refer to the TerramEarth case study.
You analyzed TerramEarth’s business requirement to reduce downtime and found that the majority of the time savings can be achieved by reducing customers’ wait time for parts. You decided to focus on reducing the 3-week aggregate reporting time.
Which modifications to the company’s processes should you recommend?
- A . Migrate from CSV to binary format, migrate from FTP to SFTP transport, and develop machine learning analysis of metrics.
- B . Migrate from FTP to streaming transport, migrate from CSV to binary format, and develop machine learning analysis of metrics.
- C . Increase fleet cellular connectivity to 80%, migrate from FTP to streaming transport, and develop machine learning analysis of metrics.
- D . Migrate from FTP to SFTP transport, develop machine learning analysis of metrics, and increase dealer local inventory by a fixed factor.
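To make the “migrate from FTP to streaming transport” option concrete, below is a minimal sketch of a vehicle publishing telemetry as it is produced instead of batching it into a daily CSV for FTP upload. The project ID, topic name, and payload fields are illustrative assumptions, not part of the case study.

```python
# Hypothetical sketch: replacing batched FTP uploads with streaming
# ingestion through Cloud Pub/Sub. Names and payload shape are assumed.
import json
import time

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("terramearth-project", "vehicle-telemetry")

def publish_reading(vehicle_id: str, oil_pressure: float) -> None:
    """Publish one telemetry reading as soon as it is produced,
    instead of accumulating it into a daily file for FTP."""
    payload = json.dumps({
        "vehicle_id": vehicle_id,
        "oil_pressure": oil_pressure,
        "ts": time.time(),
    }).encode("utf-8")
    future = publisher.publish(topic_path, data=payload)
    future.result()  # block until the message is acknowledged

publish_reading("veh-0001", 42.7)
```

Because each reading reaches the pipeline within seconds of publication, the reporting lag caused by daily FTP batches largely disappears.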
For this question, refer to the TerramEarth case study.
TerramEarth’s 20 million vehicles are scattered around the world. Based on each vehicle’s location, its telemetry data is stored in a Google Cloud Storage (GCS) regional bucket (US, Europe, or Asia). The CTO has asked you to run a report on the raw telemetry data to determine why vehicles are breaking down after 100K miles. You want to run this job on all the data.
What is the most cost-effective way to run this job?
- A . Move all the data into 1 zone, then launch a Cloud Dataproc cluster to run the job.
- B . Move all the data into 1 region, then launch a Google Cloud Dataproc cluster to run the job.
- C . Launch a cluster in each region to preprocess and compress the raw data, then move the data into a multi-regional bucket and use a Cloud Dataproc cluster to finish the job.
- D . Launch a cluster in each region to preprocess and compress the raw data, then move the data into a regional bucket and use a Cloud Dataproc cluster to finish the job.
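The consolidation step described in options C and D can be sketched with the Cloud Storage client: after per-region preprocessing and compression, do server-side copies into the single bucket that the final Cloud Dataproc cluster will read. Bucket names and the object prefix are illustrative assumptions.

```python
# Hypothetical sketch of the consolidation step: copy already-compressed
# objects from each regional bucket into one destination bucket.
from google.cloud import storage

client = storage.Client(project="terramearth-project")

SOURCE_BUCKETS = ["telemetry-us", "telemetry-eu", "telemetry-asia"]
dest_bucket = client.bucket("telemetry-consolidated-us-central1")

for name in SOURCE_BUCKETS:
    src_bucket = client.bucket(name)
    for blob in client.list_blobs(name, prefix="compressed/"):
        # Server-side copy: the data never flows through this machine,
        # and only the compressed output crosses region boundaries.
        src_bucket.copy_blob(blob, dest_bucket, blob.name)
```

Compressing before the cross-region move is what keeps egress costs down relative to shipping the raw telemetry.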
For this question, refer to the TerramEarth case study.
Operational parameters such as oil pressure are adjustable on each of TerramEarth’s vehicles to increase their efficiency, depending on their environmental conditions. Your primary goal is to increase the operating efficiency of all 20 million cellular and unconnected vehicles in the field. How can you accomplish this goal?
- A . Have your engineers inspect the data for patterns, and then create an algorithm with rules that make operational adjustments automatically.
- B . Capture all operating data, train machine learning models that identify ideal operations, and run locally to make operational adjustments automatically.
- C . Implement a Google Cloud Dataflow streaming job with a sliding window, and use Google Cloud Messaging (GCM) to make operational adjustments automatically.
- D . Capture all operating data, train machine learning models that identify ideal operations, and host in Google Cloud Machine Learning (ML) Platform to make operational adjustments automatically.
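The “train centrally, run locally” pattern in option B can be sketched as follows: fit a model on fleet-wide operating data in the cloud, export it, and ship it to vehicles so adjustments work even without connectivity. The features, model choice, and file name below are illustrative assumptions.

```python
# Hypothetical sketch: train in the cloud, run inference on the vehicle.
import joblib
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# --- In the cloud: train on aggregated fleet telemetry ---
X = np.random.rand(10_000, 3)          # e.g. ambient_temp, load, altitude
y = 40 + 10 * X[:, 0] - 5 * X[:, 1]    # stand-in for "ideal oil pressure"
model = RandomForestRegressor(n_estimators=50).fit(X, y)
joblib.dump(model, "ideal_ops_model.joblib")  # ship with a firmware update

# --- On the vehicle: run locally, no network required ---
local_model = joblib.load("ideal_ops_model.joblib")
target_pressure = local_model.predict([[0.3, 0.7, 0.1]])[0]
print(f"adjust oil pressure toward {target_pressure:.1f} psi")
```

Running the model on the vehicle is what lets the unconnected portion of the 20 million-vehicle fleet benefit as well.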
For this question, refer to the TerramEarth case study.
To speed up data retrieval, more vehicles will be upgraded to cellular connections and will be able to transmit data to the ETL process. The current FTP process is error-prone and restarts the data transfer from the start of the file when connections fail, which happens often. You want to improve the reliability of the solution and minimize data transfer time on the cellular connections.
What should you do?
- A . Use one Google Container Engine cluster of FTP servers. Save the data to a Multi-Regional bucket. Run the ETL process using data in the bucket.
- B . Use multiple Google Container Engine clusters running FTP servers located in different regions. Save the data to Multi-Regional buckets in us, eu, and asia. Run the ETL process using the data in the bucket.
- C . Directly transfer the files to different Google Cloud Multi-Regional Storage bucket locations in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process using the data in the bucket.
- D . Directly transfer the files to a different Google Cloud Regional Storage bucket location in us, eu, and asia using Google APIs over HTTP(S). Run the ETL process to retrieve the data from each Regional bucket.
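What makes the direct HTTP(S) transfer in options C and D more reliable than FTP is resumable upload: a dropped cellular connection resumes from the last committed chunk rather than restarting the whole file. A minimal sketch with the Cloud Storage Python client, assuming an illustrative bucket and object name:

```python
# Hypothetical sketch: a vehicle uploads directly to the nearest regional
# bucket over HTTPS using a chunked, resumable upload.
from google.cloud import storage

client = storage.Client(project="terramearth-project")
bucket = client.bucket("telemetry-us")  # illustrative bucket name

blob = bucket.blob("raw/veh-0001/2025-04-25.csv.gz")
# Setting a chunk_size makes the client use a chunked, resumable upload;
# after a connection failure only the uncommitted chunks are re-sent.
blob.chunk_size = 5 * 1024 * 1024  # 5 MB; must be a multiple of 256 KB
blob.upload_from_filename("2025-04-25.csv.gz")
```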
Your agricultural division is experimenting with fully autonomous vehicles.
You want your architecture to promote strong security during vehicle operation.
Which two architectures should you consider? Choose 2 answers:
- A . Treat every microservice call between modules on the vehicle as untrusted.
- B . Require IPv6 for connectivity to ensure a secure address space.
- C . Use a trusted platform module (TPM) and verify firmware and binaries on boot.
- D . Use a functional programming language to isolate code execution cycles.
- E . Use multiple connectivity subsystems for redundancy.
- F . Enclose the vehicle’s drive electronics in a Faraday cage to isolate chips.
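To illustrate the verified-boot idea behind option C, here is a minimal sketch of the signature check a boot loader would perform before handing control to firmware. On real hardware the root of trust and key storage live in the TPM, so this shows only the verification logic, with hypothetical inputs.

```python
# Hypothetical sketch: refuse to run firmware whose signature does not
# match the vendor's public key (root of trust held in the TPM).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_firmware(image: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Return True only if the image matches the vendor signature."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Boot flow (pseudocode): halt rather than execute unverified code.
# if not verify_firmware(firmware, sig, vendor_pubkey):
#     halt_boot()
```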
For this question, refer to the TerramEarth case study.
The TerramEarth development team wants to create an API to meet the company’s business requirements. You want the development team to focus their development effort on business value versus creating a custom framework.
Which method should they use?
- A . Use Google App Engine with Google Cloud Endpoints. Focus on an API for dealers and partners.
- B . Use Google App Engine with a JAX-RS Jersey Java-based framework. Focus on an API for the public.
- C . Use Google App Engine with the Swagger (Open API Specification) framework. Focus on an API for the public.
- D . Use Google Container Engine with a Django Python container. Focus on an API for the public.
- E . Use Google Container Engine with a Tomcat container with the Swagger (Open API Specification) framework. Focus on an API for dealers and partners.
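As a rough sketch of option A, the App Engine service behind a Cloud Endpoints API can stay focused on business logic because Endpoints (configured separately with an OpenAPI spec) handles authentication and quotas in front of it. The route and response shape below are illustrative assumptions.

```python
# Hypothetical sketch of the App Engine service fronted by Cloud
# Endpoints; only the Flask handler is shown here.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/v1/vehicles/<vehicle_id>/events")
def vehicle_events(vehicle_id: str):
    """Return recent events for one vehicle to a dealer/partner client.
    Auth and quota enforcement happen in Endpoints, in front of the app,
    so the handler carries only business logic."""
    return jsonify({
        "vehicle_id": vehicle_id,
        "events": [{"type": "low_oil_pressure", "ts": "2025-04-25T00:00:00Z"}],
    })

if __name__ == "__main__":
    app.run(port=8080)
```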
For this question, refer to the TerramEarth case study.
Your development team has created a structured API to retrieve vehicle data. They want to allow third parties to develop tools for dealerships that use this vehicle event data. You want to support delegated authorization against this data.
What should you do?
- A . Build or leverage an OAuth-compatible access control system.
- B . Build SAML 2.0 SSO compatibility into your authentication system.
- C . Restrict data access based on the source IP address of the partner systems.
- D . Create secondary credentials for each dealer that can be given to the trusted third party.
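Delegated authorization (option A) means the third-party tool presents an OAuth bearer token issued on the dealer’s behalf, and the API verifies that token instead of ever seeing the dealer’s primary credentials. One possible verification sketch, assuming Google-issued tokens and an illustrative audience:

```python
# Hypothetical sketch: verify an OAuth bearer token presented by a
# third-party dealership tool before serving vehicle events.
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

API_AUDIENCE = "https://vehicle-api.terramearth.example.com"  # assumed

def authorize(bearer_token: str) -> dict:
    """Verify the token's signature, expiry, and audience; raise on failure."""
    claims = id_token.verify_oauth2_token(
        bearer_token, google_requests.Request(), audience=API_AUDIENCE
    )
    # The dealer never shares primary credentials, and access can be
    # revoked per token without resetting anyone's password.
    return claims
```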
Topic 3, JencoMart Case Study
Company Overview
JencoMart is a global retailer with over 10,000 stores in 16 countries. The stores carry a range of goods, such as groceries, tires, and jewelry. One of the company’s core values is excellent customer service. In addition, they recently introduced an environmental policy to reduce their carbon output by 50% over the next 5 years.
Company Background
JencoMart started as a general store in 1931, and has grown into one of the world’s leading brands, known for great value and customer service. Over time, the company transitioned from physical stores only to a hybrid model of stores and online sales, with 25% of sales online. Currently, JencoMart has little presence in Asia, but considers that market key for future growth.
Solution Concept
JencoMart wants to migrate several critical applications to the cloud but has not completed a technical review to determine their suitability for the cloud and the engineering required for migration. They currently host all of these applications on infrastructure that is at its end of life and is no longer supported.
Existing Technical Environment
JencoMart hosts all of its applications in 4 data centers: 3 in North America and 1 in Europe; most applications are dual-homed.
JencoMart understands the dependencies and resource usage metrics of their on-premises architecture.
Application: Customer loyalty portal
LAMP (Linux, Apache, MySQL and PHP) application served from the two JencoMart-owned U.S. data centers.
Database
– Oracle Database stores user profiles
– 20 TB
– Complex table structure
– Well maintained, clean data
– Strong backup strategy
– PostgreSQL database stores user credentials
– Single-homed in US West
– No redundancy
– Backed up every 12 hours
– 100% uptime service level agreement (SLA)
– Authenticates all users
Compute
– 30 machines in US West Coast, each machine has:
– Twin, dual-core CPUs
– 32 GB of RAM
– Twin 250 GB HDD (RAID 1)
– 20 machines in US East Coast, each machine has:
– Single, dual-core CPU
– 24 GB of RAM
– Twin 250 GB HDD (RAID 1)
Storage
– Access to shared 100 TB SAN in each location
– Tape backup every week
Business Requirements
– Optimize for capacity during peak periods and value during off-peak periods
– Guarantee service availability and support
– Reduce on-premises footprint and associated financial and environmental impact
– Move to outsourcing model to avoid large upfront costs associated with infrastructure purchase
– Expand services into Asia
Technical Requirements
– Assess key application for cloud suitability
– Modify applications for the cloud
– Move applications to a new infrastructure
– Leverage managed services wherever feasible
– Sunset 20% of capacity in existing data centers
– Decrease latency in Asia
CEO Statement
JencoMart will continue to develop personal relationships with our customers as more people access the web. The future of our retail business is in the global market and the connection between online and in-store experiences. As a large, global company, we also have a responsibility to the environment through “green” initiatives and policies.
CTO Statement
The challenges of operating data centers prevent focus on key technologies critical to our long-term success. Migrating our data services to a public cloud infrastructure will allow us to focus on big data and machine learning to improve our service to customers.
CFO Statement
Since its founding, JencoMart has invested heavily in our data services infrastructure. However, because of changing market trends, we need to outsource our infrastructure to ensure our long-term success. This model will allow us to respond to increasing customer demand during peak periods and reduce costs.
For this question, refer to the JencoMart case study.
JencoMart has built a version of their application on Google Cloud Platform that serves traffic to Asia. You want to measure success against their business and technical goals.
Which metrics should you track?
- A . Error rates for requests from Asia
- B . Latency difference between US and Asia
- C . Total visits, error rates, and latency from Asia
- D . Total visits and average latency for users in Asia
- E . The number of character sets present in the database
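The metrics in option D tie directly to the case study’s goals: total visits measure the business goal of expanding into Asia, and average latency measures the technical goal of decreasing latency in Asia. A toy sketch of the roll-up, with an assumed log-record shape:

```python
# Hypothetical sketch: roll request logs from Asia up into total visits
# and average latency (with error rate shown for comparison).
from statistics import mean

requests_from_asia = [
    {"status": 200, "latency_ms": 120},
    {"status": 200, "latency_ms": 95},
    {"status": 500, "latency_ms": 410},
]

total_visits = len(requests_from_asia)
avg_latency_ms = mean(r["latency_ms"] for r in requests_from_asia)
error_rate = sum(r["status"] >= 500 for r in requests_from_asia) / total_visits

print(f"visits={total_visits} avg_latency={avg_latency_ms:.0f}ms "
      f"errors={error_rate:.1%}")
```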
For this question, refer to the JencoMart case study.
A few days after JencoMart migrates the user credentials database to Google Cloud Platform and shuts down the old server, the new database server stops responding to SSH connections. It is still serving database requests to the application servers correctly.
What three steps should you take to diagnose the problem? Choose 3 answers:
- A . Delete the virtual machine (VM) and disks and create a new one.
- B . Delete the instance, attach the disk to a new VM, and investigate.
- C . Take a snapshot of the disk and connect to a new machine to investigate.
- D . Check inbound firewall rules for the network the machine is connected to.
- E . Connect the machine to another network with very simple firewall rules and investigate.
- F . Print the Serial Console output for the instance for troubleshooting, activate the interactive console, and investigate.
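As an example of option F’s first step, the serial console output can be fetched through the Compute Engine API without touching the unresponsive VM at all. Project, zone, and instance names below are illustrative assumptions.

```python
# Hypothetical sketch: read the serial console of an instance that no
# longer accepts SSH, to look for clues (OOM kills, sshd crashes, etc.).
from googleapiclient import discovery

compute = discovery.build("compute", "v1")
output = compute.instances().getSerialPortOutput(
    project="jencomart-prod",
    zone="us-west1-a",
    instance="credentials-db-1",
).execute()
print(output.get("contents", ""))
```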
For this question, refer to the JencoMart case study.
JencoMart has decided to migrate user profile storage to Google Cloud Datastore and the application servers to Google Compute Engine (GCE). During the migration, the existing infrastructure will need access to Datastore to upload the data.
What service account key-management strategy should you recommend?
- A . Provision service account keys for the on-premises infrastructure and for the GCE virtual machines (VMs).
- B . Authenticate the on-premises infrastructure with a user account and provision service account keys for the VMs.
- C . Provision service account keys for the on-premises infrastructure and use Google Cloud Platform (GCP) managed keys for the VMs.
- D . Deploy a custom authentication service on GCE/Google Container Engine (GKE) for the on-premises infrastructure and use GCP managed keys for the VMs.
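The split in option C can be sketched as follows: the on-premises loader authenticates with a provisioned key file, while code on the GCE VMs uses Application Default Credentials, whose underlying keys Google manages and rotates automatically. Paths and the project ID are illustrative assumptions.

```python
# Hypothetical sketch: explicit key file on-premises, managed keys on GCE.
from google.cloud import datastore
from google.oauth2 import service_account

# On-premises: a service account key file provisioned for the migration.
onprem_creds = service_account.Credentials.from_service_account_file(
    "/etc/migration/jencomart-loader-key.json"
)
onprem_client = datastore.Client(project="jencomart-prod",
                                 credentials=onprem_creds)

# On a GCE VM: no key file at all; the client library obtains
# short-lived tokens from the metadata server automatically.
vm_client = datastore.Client(project="jencomart-prod")
```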