Google Cloud Certified – Associate Cloud Engineer Online Training
The questions for Associate Cloud Engineer were last updated on Nov 23, 2024.
- Exam Code: Associate Cloud Engineer
- Exam Name: Google Cloud Certified – Associate Cloud Engineer
- Certification Provider: Google
- Latest update: Nov 23, 2024
You need to provide a cost estimate for a Kubernetes cluster using the GCP pricing calculator. Your workload requires high IOPS, and you will also be using disk snapshots. You start by entering the number of nodes, average hours, and average days.
What should you do next?
- A . Fill in local SSD. Fill in persistent disk storage and snapshot storage.
- B . Fill in local SSD. Add estimated cost for cluster management.
- C . Select Add GPUs. Fill in persistent disk storage and snapshot storage.
- D . Select Add GPUs. Add estimated cost for cluster management.
You are using Google Kubernetes Engine with autoscaling enabled to host a new application. You want to expose this new application to the public, using HTTPS on a public IP address.
What should you do?
- A . Create a Kubernetes Service of type NodePort for your application, and a Kubernetes Ingress to expose this Service via a Cloud Load Balancer.
- B . Create a Kubernetes Service of type ClusterIP for your application. Configure the public DNS name of your application using the IP of this Service.
- C . Create a Kubernetes Service of type NodePort to expose the application on port 443 of each node of the Kubernetes cluster. Configure the public DNS name of your application with the IP of every node of the cluster to achieve load-balancing.
- D . Create a HAProxy pod in the cluster to load-balance the traffic to all the pods of the application. Forward the public traffic to HAProxy with an iptable rule. Configure the DNS name of your application using the public IP of the node HAProxy is running on.
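A minimal sketch of option A, assuming the application is already deployed as a Deployment labeled `app: my-app` listening on port 8080, and that a TLS certificate has been uploaded as the secret `my-app-tls` (all names here are illustrative, not from the question):

```shell
# Expose the Deployment via a NodePort Service, then front it with an
# Ingress; on GKE the Ingress provisions a Cloud Load Balancer with a
# public IP, and the tls block enables HTTPS termination.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 443
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  tls:
  - secretName: my-app-tls   # placeholder TLS secret
  defaultBackend:
    service:
      name: my-app
      port:
        number: 443
EOF
```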
You need to enable traffic between multiple groups of Compute Engine instances that are currently running in two different GCP projects. Each group of Compute Engine instances is running in its own VPC.
What should you do?
- A . Verify that both projects are in a GCP Organization. Create a new VPC and add all instances.
- B . Verify that both projects are in a GCP Organization. Share the VPC from one project and request that the Compute Engine instances in the other project use this shared VPC.
- C . Verify that you are the Project Administrator of both projects. Create two new VPCs and add all instances.
- D . Verify that you are the Project Administrator of both projects. Create a new VPC and add all instances.
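If Shared VPC (option B) is the intended approach, the setup could look like the following sketch. The project IDs are placeholders, and this assumes you hold the Shared VPC Admin role at the organization level:

```shell
# Designate one project as the Shared VPC host...
gcloud compute shared-vpc enable host-project

# ...then attach the other project as a service project, so its
# Compute Engine instances can use subnets of the shared network.
gcloud compute shared-vpc associated-projects add service-project \
    --host-project=host-project
```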
You want to add a new auditor to a Google Cloud Platform project. The auditor should be allowed to read, but not modify, all project items.
How should you configure the auditor’s permissions?
- A . Create a custom role with view-only project permissions. Add the user’s account to the custom role.
- B . Create a custom role with view-only service permissions. Add the user’s account to the custom role.
- C . Select the built-in IAM project Viewer role. Add the user’s account to this role.
- D . Select the built-in IAM service Viewer role. Add the user’s account to this role.
You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost.
What should you do?
- A . Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.
- B . Recreate all the nodes of the GKE cluster to enable GPUs on all of them.
- C . Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.
- D . Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
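Option D can be sketched with gcloud and a pod manifest. Cluster, zone, and image names below are placeholders; note that GKE also requires installing the NVIDIA drivers (via Google's driver DaemonSet) before GPU pods can schedule:

```shell
# Add a GPU-enabled node pool to the existing cluster.
gcloud container node-pools create gpu-pool \
    --cluster=my-cluster --zone=us-central1-a \
    --accelerator=type=nvidia-tesla-p100,count=1 \
    --num-nodes=1

# The ML team then targets those nodes with the nodeSelector from option D.
cat <<'EOF' > gpu-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ml-training
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-p100
  containers:
  - name: trainer
    image: gcr.io/my-project/trainer   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1
EOF
```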
Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes.
What should you do?
- A . Use gcloud to expand the IP range of the current subnet.
- B . Delete the subnet, and recreate it using a wider range of IP addresses.
- C . Create a new project. Use Shared VPC to share the current network with the new project.
- D . Create a new subnet with the same starting IP but a wider range to overwrite the current subnet.
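The arithmetic behind this question can be checked with Python's standard ipaddress module (the 10.0.0.0 range is illustrative, not from the question):

```python
import ipaddress

# A 255.255.255.240 mask is a /28: 16 addresses total. GCP reserves four
# addresses in every subnet (network, gateway, second-to-last, broadcast),
# leaving 12 usable -- already full here, with 10 more needed.
current = ipaddress.ip_network("10.0.0.0/28")
print(current.num_addresses)        # 16
print(current.num_addresses - 4)    # 12 usable in GCP

# Widening the same subnet in place to a /27 doubles the range to 32
# addresses (28 usable), covering the existing VMs plus the 10 new ones
# without re-addressing or extra routes.
expanded = ipaddress.ip_network("10.0.0.0/27")
print(expanded.num_addresses - 4)   # 28 usable

# The in-place expansion itself (option A) would be done with:
#   gcloud compute networks subnets expand-ip-range SUBNET_NAME \
#       --region=REGION --prefix-length=27
```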
Your organization uses G Suite for communication and collaboration. All users in your organization have a G Suite account. You want to grant some G Suite users access to your Cloud Platform project.
What should you do?
- A . Enable Cloud Identity in the GCP Console for your domain.
- B . Grant them the required IAM roles using their G Suite email address.
- C . Create a CSV sheet with all users’ email addresses. Use the gcloud command line tool to convert them into Google Cloud Platform accounts.
- D . In the G Suite console, add the users to a special group called [email protected]. Rely on the default behavior of the Cloud Platform to grant users access if they are members of this group.
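Granting a role directly to a G Suite identity, as option B describes, is a single command; the project ID, email address, and role below are placeholders:

```shell
# G Suite accounts are already valid Google identities, so they can be
# bound to IAM roles directly -- no separate GCP account creation needed.
gcloud projects add-iam-policy-binding my-project \
    --member="user:some.user@example.com" \
    --role="roles/viewer"
```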
You have a Google Cloud Platform account with access to both production and development projects. You need to create an automated process to list all compute instances in development and production projects on a daily basis.
What should you do?
- A . Create two configurations using gcloud config. Write a script that sets configurations as active, individually. For each configuration, use gcloud compute instances list to get a list of compute resources.
- B . Create two configurations using gsutil config. Write a script that sets configurations as active, individually. For each configuration, use gsutil compute instances list to get a list of compute resources.
- C . Go to Cloud Shell and export this information to Cloud Storage on a daily basis.
- D . Go to GCP Console and export this information to Cloud SQL on a daily basis.
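Option A can be sketched as a short script; configuration and project names are placeholders:

```shell
# One-time setup: a named gcloud configuration per project.
gcloud config configurations create dev
gcloud config set project my-dev-project
gcloud config configurations create prod
gcloud config set project my-prod-project

# Daily script: activate each configuration in turn and list its instances.
for cfg in dev prod; do
  gcloud config configurations activate "$cfg"
  gcloud compute instances list
done
```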
You have a large 5-TB AVRO file stored in a Cloud Storage bucket. Your analysts are proficient only in SQL and need access to the data stored in this file. You want to find a cost-effective way to complete their request as soon as possible.
What should you do?
- A . Load data in Cloud Datastore and run a SQL query against it.
- B . Create a BigQuery table and load data in BigQuery. Run a SQL query on this table and drop this table after you complete your request.
- C . Create external tables in BigQuery that point to Cloud Storage buckets and run a SQL query on these external tables to complete your request.
- D . Create a Hadoop cluster and copy the AVRO file to HDFS by compressing it. Load the file into a Hive table and provide access to your analysts so that they can run SQL queries.
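Querying the Avro file in place, as option C describes, can be sketched with the bq CLI; bucket, dataset, and table names are placeholders:

```shell
# Generate an external table definition pointing at the file in
# Cloud Storage, then create a BigQuery external table from it.
bq mkdef --source_format=AVRO "gs://my-bucket/data.avro" > /tmp/avro_def.json
bq mk --external_table_definition=/tmp/avro_def.json mydataset.avro_external

# Analysts can now run plain SQL without loading the 5 TB into BigQuery.
bq query --use_legacy_sql=false \
  'SELECT COUNT(*) FROM mydataset.avro_external'
```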
You need to verify that a Google Cloud Platform service account was created at a particular time.
What should you do?
- A . Filter the Activity log to view the Configuration category. Filter the Resource type to Service Account.
- B . Filter the Activity log to view the Configuration category. Filter the Resource type to Google Project.
- C . Filter the Activity log to view the Data Access category. Filter the Resource type to Service Account.
- D . Filter the Activity log to view the Data Access category. Filter the Resource type to Google Project.
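From the command line, the same check could be sketched with gcloud logging read; the filter below assumes the Admin Activity audit log's method name for service-account creation:

```shell
# List recent service-account creation events with their timestamps.
gcloud logging read \
  'resource.type="service_account" AND protoPayload.methodName="google.iam.admin.v1.CreateServiceAccount"' \
  --limit=5 --format="value(timestamp)"
```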