You need to restrict access to your Google Cloud load-balanced application so that only specific IP addresses can connect.
What should you do?
- A . Create a secure perimeter using the Access Context Manager feature of VPC Service Controls and restrict access to the source IP range of the allowed clients and Google health check IP ranges.
- B . Create a secure perimeter using VPC Service Controls, and mark the load balancer as a service restricted to the source IP range of the allowed clients and Google health check IP ranges.
- C . Tag the backend instances "application," and create a firewall rule with target tag "application" and the source IP range of the allowed clients and Google health check IP ranges.
- D . Label the backend instances "application," and create a firewall rule with the target label "application" and the source IP range of the allowed clients and Google health check IP ranges.
C
Explanation:
https://cloud.google.com/load-balancing/docs/https/setting-up-https#sendtraffic
Your end users are located in close proximity to us-east1 and europe-west1. Their workloads need to communicate with each other. You want to minimize cost and increase network efficiency.
How should you design this topology?
- A . Create 2 VPCs, each with their own regions and individual subnets. Create 2 VPN gateways to establish connectivity between these regions.
- B . Create 2 VPCs, each with their own region and individual subnets. Use external IP addresses on the instances to establish connectivity between these regions.
- C . Create 1 VPC with 2 regional subnets. Create a global load balancer to establish connectivity between the regions.
- D . Create 1 VPC with 2 regional subnets. Deploy workloads in these subnets and have them communicate using private RFC1918 IP addresses.
D
Explanation:
https://cloud.google.com/vpc/docs/using-vpc#create-auto-network
Creating one VPC network in auto mode automatically creates one subnet in each Google Cloud region. Regions us-east1 and europe-west1 are therefore in the same network, so workloads can communicate using their internal IP addresses even though they are in different regions, taking advantage of Google's global fiber network.
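As a minimal sketch (the network name is illustrative), creating the single auto mode VPC looks like this:

```shell
# Auto mode creates one subnet per region automatically, all drawn
# from 10.128.0.0/9, so us-east1 and europe-west1 workloads can talk
# over private RFC1918 addresses with no extra configuration.
gcloud compute networks create example-vpc --subnet-mode=auto
```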
Your organization is deploying a single project for 3 separate departments. Two of these departments require network connectivity between each other, but the third department should remain in isolation. Your design should create separate network administrative domains between these departments. You want to minimize operational overhead.
How should you design the topology?
- A . Create a Shared VPC Host Project and the respective Service Projects for each of the 3 separate departments.
- B . Create 3 separate VPCs, and use Cloud VPN to establish connectivity between the two appropriate VPCs.
- C . Create 3 separate VPCs, and use VPC peering to establish connectivity between the two appropriate VPCs.
- D . Create a single project, and deploy specific firewall rules. Use network tags to isolate access between the departments.
C
Explanation:
https://cloud.google.com/vpc/docs/vpc-peering
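A sketch of the peering setup for the two connected departments (VPC and peering names are illustrative; peering must be created from both sides):

```shell
# Peer department A's VPC to department B's VPC within the same project.
gcloud compute networks peerings create dept-a-to-dept-b \
    --network=dept-a-vpc \
    --peer-network=dept-b-vpc

# Repeat from the other side so the peering becomes ACTIVE.
gcloud compute networks peerings create dept-b-to-dept-a \
    --network=dept-b-vpc \
    --peer-network=dept-a-vpc
```

The third department's VPC simply has no peering, so it stays isolated while each VPC remains its own administrative domain.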
You are migrating to Cloud DNS and want to import your BIND zone file.
Which command should you use?
- A . gcloud dns record-sets import ZONE_FILE --zone MANAGED_ZONE
- B . gcloud dns record-sets import ZONE_FILE --replace-origin-ns --zone MANAGED_ZONE
- C . gcloud dns record-sets import ZONE_FILE --zone-file-format --zone MANAGED_ZONE
- D . gcloud dns record-sets import ZONE_FILE --delete-all-existing --zone MANAGED_ZONE
C
Explanation:
https://cloud.google.com/sdk/gcloud/reference/dns/record-sets/import
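A concrete invocation (file and zone names are illustrative); the `--zone-file-format` flag is what tells gcloud the input is a BIND zone file rather than a YAML record-sets export:

```shell
# Import records from a BIND-format zone file into an existing managed zone.
gcloud dns record-sets import example.com.zone \
    --zone-file-format \
    --zone=my-managed-zone
```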
You created a VPC network named Retail in auto mode. You want to create a VPC network named Distribution and peer it with the Retail VPC.
How should you configure the Distribution VPC?
- A . Create the Distribution VPC in auto mode. Peer both the VPCs via network peering.
- B . Create the Distribution VPC in custom mode. Use the CIDR range 10.0.0.0/9. Create the necessary subnets, and then peer them via network peering.
- C . Create the Distribution VPC in custom mode. Use the CIDR range 10.128.0.0/9. Create the necessary subnets, and then peer them via network peering.
- D . Rename the default VPC as "Distribution" and peer it via network peering.
B
Explanation:
https://cloud.google.com/vpc/docs/vpc#ip-ranges
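A sketch of the setup, assuming both networks are in the same project (names and the subnet range are illustrative). Because the auto mode Retail network consumes 10.128.0.0/9, the custom mode Distribution network must draw its subnets from the non-overlapping 10.0.0.0/9 half:

```shell
# Custom mode network with a subnet carved from 10.0.0.0/9.
gcloud compute networks create distribution --subnet-mode=custom
gcloud compute networks subnets create distribution-us-east1 \
    --network=distribution --region=us-east1 --range=10.0.0.0/20

# Peer it with the existing auto mode Retail network.
gcloud compute networks peerings create distribution-to-retail \
    --network=distribution --peer-network=retail
```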
You are using a third-party next-generation firewall to inspect traffic. You created a custom route of 0.0.0.0/0 to route egress traffic to the firewall. You want to allow your VPC instances without public IP addresses to access the BigQuery and Cloud Pub/Sub APIs, without sending the traffic through the firewall.
Which two actions should you take? (Choose two.)
- A . Turn on Private Google Access at the subnet level.
- B . Turn on Private Google Access at the VPC level.
- C . Turn on Private Services Access at the VPC level.
- D . Create a set of custom static routes to send traffic to the external IP addresses of Google APIs and services via the default internet gateway.
- E . Create a set of custom static routes to send traffic to the internal IP addresses of Google APIs and services via the default internet gateway.
AD
Explanation:
https://cloud.google.com/vpc/docs/private-access-options#pga
Private Google Access: VM instances that have only internal IP addresses (no external IP addresses) can use Private Google Access to reach the external IP addresses of Google APIs and services.
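A sketch of the two actions (subnet, region, and network names are illustrative; 199.36.153.8/30 is assumed here to be the private.googleapis.com range, so verify the ranges that apply to your setup):

```shell
# Action A: enable Private Google Access on the subnet.
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-private-ip-google-access

# Action D: a route more specific than the 0.0.0.0/0 appliance route
# sends Google API traffic straight out the default internet gateway,
# bypassing the third-party firewall.
gcloud compute routes create private-googleapis \
    --network=my-vpc \
    --destination-range=199.36.153.8/30 \
    --next-hop-gateway=default-internet-gateway
```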
All the instances in your project are configured with the custom metadata enable-oslogin value set to FALSE and to block project-wide SSH keys. None of the instances are set with any SSH key, and no project-wide SSH keys have been configured. Firewall rules are set up to allow SSH sessions from any IP address range. You want to SSH into one instance.
What should you do?
- A . Open the Cloud Shell SSH into the instance using gcloud compute ssh.
- B . Set the custom metadata enable-oslogin to TRUE, and SSH into the instance using a third-party tool like putty or ssh.
- C . Generate a new SSH key pair. Verify the format of the private key and add it to the instance. SSH into the instance using a third-party tool like putty or ssh.
- D . Generate a new SSH key pair. Verify the format of the public key and add it to the project. SSH into the instance using a third-party tool like putty or ssh.
You work for a university that is migrating to GCP.
These are the cloud requirements:
• On-premises connectivity with 10 Gbps
• Lowest latency access to the cloud
• Centralized Networking Administration Team
New departments are asking for on-premises connectivity to their projects. You want to deploy the most cost-efficient interconnect solution for connecting the campus to Google Cloud.
What should you do?
- A . Use Shared VPC, and deploy the VLAN attachments and Interconnect in the host project.
- B . Use Shared VPC, and deploy the VLAN attachments in the service projects. Connect the VLAN attachment to the Shared VPC’s host project.
- C . Use standalone projects, and deploy the VLAN attachments in the individual projects. Connect the VLAN attachment to the standalone projects’ Interconnects.
- D . Use standalone projects and deploy the VLAN attachments and Interconnects in each of the individual projects.
A
Explanation:
https://cloud.google.com/interconnect/docs/how-to/dedicated/using-interconnects-other-projects
Using Cloud Interconnect with Shared VPC: you can use Shared VPC to share your VLAN attachment in a project with other VPC networks. Choosing Shared VPC is preferable if you need to create many projects and would like to prevent individual project owners from managing their connectivity back to your on-premises network.
In this scenario, the host project contains a common Shared VPC network usable by VMs in service projects. Because VMs in the service projects use this network, Service Project Admins don't need to create other VLAN attachments or Cloud Routers in the service projects. You must create VLAN attachments and Cloud Routers for a Cloud Interconnect connection only in the Shared VPC host project. The combination of a VLAN attachment and its associated Cloud Router is unique to a given Shared VPC network.
https://cloud.google.com/network-connectivity/docs/interconnect/how-to/enabling-multiple-networks-access-same-attachment#using_with
https://cloud.google.com/vpc/docs/shared-vpc
You have deployed a new internal application that provides HTTP and TFTP services to on-premises hosts. You want to be able to distribute traffic across multiple Compute Engine instances, but need to ensure that clients are sticky to a particular instance across both services.
Which session affinity should you choose?
- A . None
- B . Client IP
- C . Client IP and protocol
- D . Client IP, port and protocol
You created a new VPC network named Dev with a single subnet. You added a firewall rule for the network Dev to allow HTTP traffic only and enabled logging. When you try to log in to an instance in the subnet via Remote Desktop Protocol, the login fails. You look for the Firewall rules logs in Stackdriver Logging, but you do not see any entries for blocked traffic. You want to see the logs for blocked traffic.
What should you do?
- A . Check the VPC flow logs for the instance.
- B . Try connecting to the instance via SSH, and check the logs.
- C . Create a new firewall rule to allow traffic from port 22, and enable logs.
- D . Create a new firewall rule with priority 65500 to deny all traffic, and enable logs.
D
Explanation:
Ingress packets in VPC Flow Logs are sampled after ingress firewall rules. If an ingress firewall rule denies inbound packets, those packets are not sampled by VPC Flow Logs. We want to see the logs for blocked traffic so we have to look for them in firewall logs. https://cloud.google.com/vpc/docs/flow-logs#key_properties
You are trying to update firewall rules in a shared VPC for which you have been assigned only Network Admin permissions. You cannot modify the firewall rules. Your organization requires using the least privilege necessary.
Which level of permissions should you request?
- A . Security Admin privileges from the Shared VPC Admin.
- B . Service Project Admin privileges from the Shared VPC Admin.
- C . Shared VPC Admin privileges from the Organization Admin.
- D . Organization Admin privileges from the Organization Admin.
A
Explanation:
A Shared VPC Admin can define a Security Admin by granting an IAM member the Security Admin (compute.securityAdmin) role to the host project. Security Admins manage firewall rules and SSL certificates.
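A sketch of the grant the Shared VPC Admin would make (host project ID and member are illustrative):

```shell
# Grant the Compute Security Admin role on the host project, which
# allows the member to manage firewall rules and SSL certificates there.
gcloud projects add-iam-policy-binding host-project-id \
    --member=user:network-engineer@example.com \
    --role=roles/compute.securityAdmin
```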
You want to create a service in GCP using IPv6.
What should you do?
- A . Create the instance with the designated IPv6 address.
- B . Configure a TCP Proxy with the designated IPv6 address.
- C . Configure a global load balancer with the designated IPv6 address.
- D . Configure an internal load balancer with the designated IPv6 address.
C
Explanation:
https://cloud.google.com/load-balancing/docs/load-balancing-overview mentions to use global load balancer for IPv6 termination.
You want to deploy a VPN Gateway to connect your on-premises network to GCP. You are using a non BGP-capable on-premises VPN device. You want to minimize downtime and operational overhead when your network grows. The device supports only IKEv2, and you want to follow Google-recommended practices.
What should you do?
- A . • Create a Cloud VPN instance.
• Create a policy-based VPN tunnel per subnet.
• Configure the appropriate local and remote traffic selectors to match your local and remote networks.
• Create the appropriate static routes.
- B . • Create a Cloud VPN instance.
• Create a policy-based VPN tunnel.
• Configure the appropriate local and remote traffic selectors to match your local and remote networks.
• Configure the appropriate static routes.
- C . • Create a Cloud VPN instance.
• Create a route-based VPN tunnel.
• Configure the appropriate local and remote traffic selectors to match your local and remote networks.
• Configure the appropriate static routes.
- D . • Create a Cloud VPN instance.
• Create a route-based VPN tunnel.
• Configure the appropriate local and remote traffic selectors to 0.0.0.0/0.
• Configure the appropriate static routes.
B
Explanation:
https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-static-vpns#creating_a_gateway_and_tunnel
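A sketch of the policy-based Classic VPN setup (gateway name, region, peer address, and CIDR ranges are illustrative; the ESP/UDP 500/UDP 4500 forwarding rules are omitted for brevity):

```shell
# Classic (target) VPN gateway for the non-BGP peer.
gcloud compute target-vpn-gateways create my-vpn-gw \
    --network=my-vpc --region=us-central1

# Policy-based tunnel: traffic selectors match the local and remote networks.
gcloud compute vpn-tunnels create tunnel-1 \
    --region=us-central1 \
    --target-vpn-gateway=my-vpn-gw \
    --peer-address=203.0.113.10 \
    --ike-version=2 \
    --shared-secret=SHARED_SECRET \
    --local-traffic-selector=10.128.0.0/20 \
    --remote-traffic-selector=192.168.0.0/24

# Static route sending on-premises-bound traffic into the tunnel.
gcloud compute routes create route-to-onprem \
    --network=my-vpc \
    --destination-range=192.168.0.0/24 \
    --next-hop-vpn-tunnel=tunnel-1 \
    --next-hop-vpn-tunnel-region=us-central1
```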
Your company just completed the acquisition of Altostrat (a current GCP customer). Each company has a separate organization in GCP and has implemented a custom DNS solution. Each organization will retain its current domain and host names until after a full transition and architectural review is done in one year.
These are the assumptions for both GCP environments.
• Each organization has enabled full connectivity between all of its projects by using Shared VPC.
• Both organizations strictly use the 10.0.0.0/8 address space for their instances, except for bastion hosts (for accessing the instances) and load balancers for serving web traffic.
• There are no prefix overlaps between the two organizations.
• Both organizations already have firewall rules that allow all inbound and outbound traffic from the 10.0.0.0/8 address space.
• Neither organization has Interconnects to their on-premises environment.
You want to integrate networking and DNS infrastructure of both organizations as quickly as possible and with minimal downtime.
Which two steps should you take? (Choose two.)
- A . Provision Cloud Interconnect to connect both organizations together.
- B . Set up some variant of DNS forwarding and zone transfers in each organization.
- C . Connect VPCs in both organizations using Cloud VPN together with Cloud Router.
- D . Use Cloud DNS to create A records of all VMs and resources across all projects in both organizations.
- E . Create a third organization with a new host project, and attach all projects from your company and Altostrat to it using shared VPC.
BC
Explanation:
https://cloud.google.com/dns/docs/best-practices
Your on-premises data center has 2 routers connected to your Google Cloud environment through a VPN on each router. All applications are working correctly; however, all of the traffic is passing across a single VPN instead of being load-balanced across the 2 connections as desired.
During troubleshooting you find:
• Each on-premises router is configured with a unique ASN.
• Each on-premises router is configured with the same routes and priorities.
• Both on-premises routers are configured with a VPN connected to a single Cloud Router.
• BGP sessions are established between both on-premises routers and the Cloud Router.
• Only 1 of the on-premises router’s routes are being added to the routing table.
What is the most likely cause of this problem?
- A . The on-premises routers are configured with the same routes.
- B . A firewall is blocking the traffic across the second VPN connection.
- C . You do not have a load balancer to load-balance the network traffic.
- D . The ASNs being used on the on-premises routers are different.
D
Explanation:
https://cloud.google.com/network-connectivity/docs/router/support/troubleshooting#ecmp
You have ordered Dedicated Interconnect in the GCP Console and need to give the Letter of Authorization/Connecting Facility Assignment (LOA-CFA) to your cross-connect provider to complete the physical connection.
Which two actions can accomplish this? (Choose two.)
- A . Open a Cloud Support ticket under the Cloud Interconnect category.
- B . Download the LOA-CFA from the Hybrid Connectivity section of the GCP Console.
- C . Run gcloud compute interconnects describe <interconnect>.
- D . Check the email for the account of the NOC contact that you specified during the ordering process.
- E . Contact your cross-connect provider and inform them that Google automatically sent the LOA/CFA to them via email, and to complete the connection.
DE
Explanation:
https://cloud.google.com/network-connectivity/docs/interconnect/how-to/dedicated/retrieving-loas
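For reference, the interconnect's provisioning details and state can be inspected from the CLI (the interconnect name is illustrative):

```shell
# Shows the interconnect's operational status and provisioning details.
gcloud compute interconnects describe my-interconnect
```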
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You believe you have identified a potential malicious actor, but aren’t certain you have the correct client IP address. You want to identify this actor while minimizing disruption to your legitimate users.
What should you do?
- A . Create a Cloud Armor Policy rule that denies traffic and review necessary logs.
- B . Create a Cloud Armor Policy rule that denies traffic, enable preview mode, and review necessary logs.
- C . Create a VPC Firewall rule that denies traffic, enable logging and set enforcement to disabled, and review necessary logs.
- D . Create a VPC Firewall rule that denies traffic, enable logging and set enforcement to enabled, and review necessary logs.
B
Explanation:
https://cloud.google.com/armor/docs/security-policy-concepts#preview_mode
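A sketch of the preview-mode rule (policy name, rule priority, and the suspect IP are illustrative). In preview mode the rule's effect is logged but not enforced, so legitimate users are not disrupted while you confirm the client IP:

```shell
# Deny rule in preview mode: matches are logged as "previewed" only.
gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --src-ip-ranges=203.0.113.7/32 \
    --action=deny-404 \
    --preview
```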
Your company’s web server administrator is migrating on-premises backend servers for an application to GCP. Libraries and configurations differ significantly across these backend servers. The migration to GCP will be lift-and-shift, and all requests to the servers will be served by a single network load balancer frontend. You want to use a GCP-native solution when possible.
How should you deploy this service in GCP?
- A . Create a managed instance group from one of the images of the on-premises servers, and link this instance group to a target pool behind your load balancer.
- B . Create a target pool, add all backend instances to this target pool, and deploy the target pool behind your load balancer.
- C . Deploy a third-party virtual appliance as frontend to these servers that will accommodate the significant differences between these backend servers.
- D . Use GCP’s ECMP capability to load-balance traffic to the backend servers by installing multiple equal-priority static routes to the backend servers.
You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT.
What is the most likely cause of this problem?
- A . The instance has been configured with multiple interfaces.
- B . An external IP address has been configured on the instance.
- C . You have created static routes that use RFC1918 ranges.
- D . The instance is accessible by a load balancer external IP address.
You want to set up two Cloud Routers so that one has an active Border Gateway Protocol (BGP) session, and the other one acts as a standby.
Which BGP attribute should you use on your on-premises router?
- A . AS-Path
- B . Community
- C . Local Preference
- D . Multi-exit Discriminator
You are increasing your usage of Cloud VPN between on-premises and GCP, and you want to support more traffic than a single tunnel can handle. You want to increase the available bandwidth using Cloud VPN.
What should you do?
- A . Double the MTU on your on-premises VPN gateway from 1460 bytes to 2920 bytes.
- B . Create two VPN tunnels on the same Cloud VPN gateway that point to the same destination VPN gateway IP address.
- C . Add a second on-premises VPN gateway with a different public IP address. Create a second tunnel on the existing Cloud VPN gateway that forwards the same IP range, but points at the new on-premises gateway IP.
- D . Add a second Cloud VPN gateway in a different region than the existing VPN gateway. Create a new tunnel on the second Cloud VPN gateway that forwards the same IP range, but points to the existing on-premises VPN gateway IP address.
C
Explanation:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/classic-topologies#redundancy-options
You are disabling DNSSEC for one of your Cloud DNS-managed zones. You removed the DS records from your zone file, waited for them to expire from the cache, and disabled DNSSEC for the zone. You receive reports that DNSSEC-validating resolvers are unable to resolve names in your zone.
What should you do?
- A . Update the TTL for the zone.
- B . Set the zone to the TRANSFER state.
- C . Disable DNSSEC at your domain registrar.
- D . Transfer ownership of the domain to a new registrar.
C
Explanation:
Before disabling DNSSEC for a managed zone, you must first deactivate DNSSEC at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone.
You have an application hosted on a Compute Engine virtual machine instance that cannot communicate with a resource outside of its subnet. When you review the flow and firewall logs, you do not see any denied traffic listed.
During troubleshooting you find:
• Flow logs are enabled for the VPC subnet, and all firewall rules are set to log.
• The subnetwork logs are not excluded from Stackdriver.
• The instance that is hosting the application can communicate outside the subnet.
• Other instances within the subnet can communicate outside the subnet.
• The external resource initiates communication.
What is the most likely cause of the missing log lines?
- A . The traffic is matching the expected ingress rule.
- B . The traffic is matching the expected egress rule.
- C . The traffic is not matching the expected ingress rule.
- D . The traffic is not matching the expected egress rule.
You have configured Cloud CDN using HTTP(S) load balancing as the origin for cacheable content. Compression is configured on the web servers, but responses served by Cloud CDN are not compressed.
What is the most likely cause of the problem?
- A . You have not configured compression in Cloud CDN.
- B . You have configured the web servers and Cloud CDN with different compression types.
- C . The web servers behind the load balancer are configured with different compression types.
- D . You have to configure the web servers to compress responses even if the request has a Via header.
D
Explanation:
If responses served by Cloud CDN are not compressed but should be, check that the web server software running on your instances is configured to compress responses. By default, some web server software will automatically disable compression for requests that include a Via header. The presence of a Via header indicates the request was forwarded by a proxy. HTTP proxies such as HTTP(S) load balancing add a Via header to each request as required by the HTTP specification. To enable compression, you may have to override your web server’s default configuration to tell it to compress responses even if the request had a Via header.
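A quick client-side check (the origin URL is illustrative): send the request the way the load balancer would, with a Via header, and see whether the origin still returns a compressed response. If the Content-Encoding header disappears only when Via is present, the web server's proxy-compression setting is the culprit (for nginx, for example, this is governed by `gzip_proxied`).

```shell
# With the Via header absent vs. present, compare whether the origin
# still sets Content-Encoding: gzip on the response.
curl -sI -H 'Accept-Encoding: gzip' \
    https://origin.example.com/index.html | grep -i content-encoding

curl -sI -H 'Accept-Encoding: gzip' -H 'Via: 1.1 google' \
    https://origin.example.com/index.html | grep -i content-encoding
```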
You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You’ve configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency.
What should you do?
- A . Configure a policy-based route rule to prioritize the traffic.
- B . Configure an HTTP load balancer, and direct the traffic to it.
- C . Configure Dynamic Routing for the subnet hosting the application.
- D . Configure the TTL for the DNS zone to decrease the time between updates.
You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses.
Which two methods can you use to accomplish this? (Choose two.)
- A . Enable Private Google Access on all the subnets.
- B . Enable Private Google Access on the VPC.
- C . Enable Private Services Access on the VPC.
- D . Create network peering between your VPC and BigQuery.
- E . Create a Cloud NAT, and route the application traffic via NAT gateway.
AE
Explanation:
https://cloud.google.com/nat/docs/overview#interaction-pga
https://cloud.google.com/vpc/docs/configure-private-google-access#specifications
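A sketch of the Cloud NAT option (router and NAT names, network, and region are illustrative); combined with Private Google Access on the subnets, this lets instances without external IPs reach Google APIs and the internet for outbound traffic:

```shell
# Cloud NAT is configured on a Cloud Router in the instances' region.
gcloud compute routers create nat-router \
    --network=my-vpc --region=us-central1

gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```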
You are designing a shared VPC architecture. Your network and security team has strict controls over which routes are exposed between departments. Your Production and Staging departments can communicate with each other, but only via specific networks. You want to follow Google-recommended practices.
How should you design this topology?
- A . Create 2 shared VPCs within the shared VPC Host Project, and enable VPC peering between them. Use firewall rules to filter access between the specific networks.
- B . Create 2 shared VPCs within the shared VPC Host Project, and create a Cloud VPN/Cloud Router between them. Use Flexible Route Advertisement (FRA) to filter access between the specific networks.
- C . Create 2 shared VPCs within the shared VPC Service Project, and create a Cloud VPN/Cloud Router between them. Use Flexible Route Advertisement (FRA) to filter access between the specific networks.
- D . Create 1 VPC within the shared VPC Host Project, and share individual subnets with the Service Projects to filter access between the specific networks.
You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible.
What should you do?
- A . Grant the compute.instanceAdmin to your user account.
- B . Grant the iam.serviceAccountUser to your user account.
- C . Grant the read-only privilege to the service account for the Cloud Storage bucket.
- D . Grant the cloud-platform privilege to the service account for the Cloud Storage bucket.
You converted an auto mode VPC network to custom mode. Since the conversion, some of your Cloud Deployment Manager templates are no longer working. You want to resolve the problem.
What should you do?
- A . Apply an additional IAM role to the Google API’s service account to allow custom mode networks.
- B . Update the VPC firewall to allow the Cloud Deployment Manager to access the custom mode networks.
- C . Explicitly reference the custom mode networks in the Cloud Armor whitelist.
- D . Explicitly reference the custom mode networks in the Deployment Manager templates.
You have recently been put in charge of managing identity and access management for your organization. You have several projects and want to use scripting and automation wherever possible. You want to grant the editor role to a project member.
Which two methods can you use to accomplish this? (Choose two.)
- A . GetIamPolicy() via REST API
- B . setIamPolicy() via REST API
- C . gcloud pubsub add-iam-policy-binding $projectname --member user:$username --role roles/editor
- D . gcloud projects add-iam-policy-binding $projectname --member user:$username --role roles/editor
- E . Enter an email address in the Add members field, and select the desired role from the drop-down menu in the GCP Console.
You are using a 10-Gbps direct peering connection to Google together with the gsutil tool to upload files to Cloud Storage buckets from on-premises servers. The on-premises servers are 100 milliseconds away from the Google peering point. You notice that your uploads are not using the full 10-Gbps bandwidth available to you. You want to optimize the bandwidth utilization of the connection.
What should you do on your on-premises servers?
- A . Tune TCP parameters on the on-premises servers.
- B . Compress files using utilities like tar to reduce the size of data being sent.
- C . Remove the -m flag from the gsutil command to enable single-threaded transfers.
- D . Use the perfdiag parameter in your gsutil command to enable faster performance: gsutil perfdiag gs://[BUCKET NAME].
A
Explanation:
https://cloud.google.com/solutions/tcp-optimization-for-network-performance-in-gcp-and-hybrid
https://cloud.google.com/blog/products/gcp/5-steps-to-better-gcp-network-performance?hl=ml
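At 10 Gbps with 100 ms of round-trip latency, the bandwidth-delay product is roughly 125 MB, far beyond default Linux TCP buffer limits, so a single flow cannot fill the pipe. A sketch of the kind of tuning involved on the on-premises servers (the exact values are illustrative and should be sized for your path):

```shell
# Raise socket buffer ceilings toward the ~125 MB bandwidth-delay product
# so TCP windows can grow large enough to keep the 10 Gbps link full.
sysctl -w net.core.rmem_max=134217728
sysctl -w net.core.wmem_max=134217728
sysctl -w net.ipv4.tcp_rmem='4096 87380 134217728'
sysctl -w net.ipv4.tcp_wmem='4096 65536 134217728'
```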
You work for a multinational enterprise that is moving to GCP.
These are the cloud requirements:
• An on-premises data center located in the United States in Oregon and New York with Dedicated Interconnects connected to Cloud regions us-west1 (primary HQ) and us-east4 (backup)
• Multiple regional offices in Europe and APAC
• Regional data processing is required in europe-west1 and australia-southeast1
• Centralized Network Administration Team
Your security and compliance team requires a virtual inline security appliance to perform L7 inspection for URL filtering. You want to deploy the appliance in us-west1.
What should you do?
- A . • Create 2 VPCs in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Host Project.
• Attach NIC0 in VPC #1 us-west1 subnet of the Host Project.
• Attach NIC1 in VPC #2 us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
- B . • Create 2 VPCs in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Service Project.
• Attach NIC0 in VPC #1 us-west1 subnet of the Host Project.
• Attach NIC1 in VPC #2 us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
- C . • Create 1 VPC in a Shared VPC Host Project.
• Configure a 2-NIC instance in zone us-west1-a in the Host Project.
• Attach NIC0 in us-west1 subnet of the Host Project.
• Attach NIC1 in us-west1 subnet of the Host Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
- D . • Create 1 VPC in a Shared VPC Service Project.
• Configure a 2-NIC instance in zone us-west1-a in the Service Project.
• Attach NIC0 in us-west1 subnet of the Service Project.
• Attach NIC1 in us-west1 subnet of the Service Project.
• Deploy the instance.
• Configure the necessary routes and firewall rules to pass traffic through the instance.
B
Explanation:
https://cloud.google.com/vpc/docs/shared-vpc
You are designing a Google Kubernetes Engine (GKE) cluster for your organization. The current cluster size is expected to host 10 nodes, with 20 Pods per node and 150 services. Because of the migration of new services over the next 2 years, there is a planned growth for 100 nodes, 200 Pods per node, and 1500 services. You want to use VPC-native clusters with alias IP ranges, while minimizing address consumption.
How should you design this topology?
- A . Create a subnet of size /25 with 2 secondary ranges of /17 for Pods and /21 for Services. Create a VPC-native cluster and specify those ranges.
- B . Create a subnet of size /28 with 2 secondary ranges of /24 for Pods and /24 for Services. Create a VPC-native cluster and specify those ranges. When the services are ready to be deployed, resize the subnets.
- C . Use gcloud container clusters create [CLUSTER_NAME] --enable-ip-alias to create a VPC-native cluster.
- D . Use gcloud container clusters create [CLUSTER_NAME] to create a VPC-native cluster.
A
Explanation:
The Services secondary range is permanent and cannot be changed after cluster creation (see https://stackoverflow.com/questions/60957040/how-to-increase-the-service-address-range-of-a-gke-cluster), so the ranges must be sized for the planned growth up front. A /25 subnet provides enough node addresses for 100 nodes, a /17 Pod range provides 32,768 addresses for up to 100 x 200 = 20,000 Pods, and a /21 Services range provides 2,048 addresses for 1,500 services, which makes answer A correct.
https://docs.netgate.com/pfsense/en/latest/book/network/understanding-cidr-subnet-mask-notation.html
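A sketch of creating the cluster against pre-sized secondary ranges (subnet and range names are illustrative, and the secondary ranges are assumed to already exist on the subnet):

```shell
# VPC-native cluster using explicit, pre-created secondary ranges
# sized for the planned growth (/17 for Pods, /21 for Services).
gcloud container clusters create my-cluster \
    --region=us-central1 \
    --enable-ip-alias \
    --subnetwork=gke-subnet \
    --cluster-secondary-range-name=pods-range \
    --services-secondary-range-name=services-range
```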
Your company has recently expanded their EMEA-based operations into APAC. Globally distributed users report that their SMTP and IMAP services are slow. Your company requires end-to-end encryption, but you do not have access to the SSL certificates.
Which Google Cloud load balancer should you use?
- A . SSL proxy load balancer
- B . Network load balancer
- C . HTTPS load balancer
- D . TCP proxy load balancer
D
Explanation:
https://cloud.google.com/security/encryption-in-transit/
Automatic encryption between GFEs and backends: for the following load balancer types, Google automatically encrypts traffic between Google Front Ends (GFEs) and your backends that reside within Google Cloud VPC networks: HTTP(S) Load Balancing, TCP Proxy Load Balancing, and SSL Proxy Load Balancing.
Your company is working with a partner to provide a solution for a customer. Both your company and the partner organization are using GCP. There are applications in the partner’s network that need access to some resources in your company’s VPC. There is no CIDR overlap between the VPCs.
Which two solutions can you implement to achieve the desired results without compromising the security? (Choose two.)
- A . VPC peering
- B . Shared VPC
- C . Cloud VPN
- D . Dedicated Interconnect
- E . Cloud NAT
AC
Explanation:
Google Cloud VPC Network Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization.
You have a storage bucket that contains the following objects:
– folder-a/image-a-1.jpg
– folder-a/image-a-2.jpg
– folder-b/image-b-1.jpg
– folder-b/image-b-2.jpg
Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands.
What should you do?
- A . Add an appropriate lifecycle rule on the storage bucket.
- B . Issue a cache invalidation command with pattern /folder-a/*.
- C . Make sure that all the objects with prefix folder-a are not shared publicly.
- D . Disable Cloud CDN on the storage bucket. Wait 90 seconds. Re-enable Cloud CDN on the storage bucket.
B
Explanation:
https://cloud.google.com/cdn/docs/invalidating-cached-content
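The semantics of a single /folder-a/* invalidation can be sketched with fnmatch, which approximates Cloud CDN's prefix-plus-wildcard matching (the paths are the ones from the question):

```python
from fnmatch import fnmatch

cached = [
    "/folder-a/image-a-1.jpg",
    "/folder-a/image-a-2.jpg",
    "/folder-b/image-b-1.jpg",
    "/folder-b/image-b-2.jpg",
]

# One invalidation with pattern /folder-a/* covers both folder-a objects
# and leaves folder-b untouched.
invalidated = [p for p in cached if fnmatch(p, "/folder-a/*")]
print(invalidated)  # ['/folder-a/image-a-1.jpg', '/folder-a/image-a-2.jpg']
```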
Your company is running out of network capacity to run a critical application in the on-premises data center. You want to migrate the application to GCP. You also want to ensure that the Security team does not lose their ability to monitor traffic to and from Compute Engine instances.
Which two products should you incorporate into the solution? (Choose two.)
- A . VPC flow logs
- B . Firewall logs
- C . Cloud Audit logs
- D . Stackdriver Trace
- E . Compute Engine instance system logs
AB
Explanation:
A: VPC Flow Logs records a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. https://cloud.google.com/vpc/docs/using-flow-logs
B: Firewall Rules Logging allows you to audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule designed to deny traffic is functioning as intended, or how many connections are affected by a given firewall rule. You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log; it is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule. https://cloud.google.com/vpc/docs/firewall-rules-logging
You want to apply a new Cloud Armor policy to an application that is deployed in Google Kubernetes Engine (GKE). You want to find out which target to use for your Cloud Armor policy.
Which GKE resource should you use?
- A . GKE Node
- B . GKE Pod
- C . GKE Cluster
- D . GKE Ingress
D
Explanation:
Cloud Armor is applied at the load balancer, which for GKE means configuring Google Cloud Armor through Ingress. https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features Google Cloud Armor security policies have the following core features: you can optionally use the QUIC protocol with load balancers that use Google Cloud Armor; you can use Google Cloud Armor with external HTTP(S) load balancers that are in either Premium Tier or Standard Tier; and you can use security policies with GKE and the default Ingress controller.
You need to establish network connectivity between three Virtual Private Cloud networks, Sales, Marketing, and Finance, so that users can access resources in all three VPCs. You configure VPC peering between the Sales VPC and the Finance VPC. You also configure VPC peering between the Marketing VPC and the Finance VPC. After you complete the configuration, some users cannot connect to resources in the Sales VPC and the Marketing VPC. You want to resolve the problem.
What should you do?
- A . Configure VPC peering in a full mesh.
- B . Alter the routing table to resolve the asymmetric route.
- C . Create network tags to allow connectivity between all three VPCs.
- D . Delete the legacy network and recreate it to allow transitive peering.
A
Explanation:
https://cloud.google.com/vpc/docs/using-vpc-peering
You create multiple Compute Engine virtual machine instances to be used as TFTP servers.
Which type of load balancer should you use?
- A . HTTP(S) load balancer
- B . SSL proxy load balancer
- C . TCP proxy load balancer
- D . Network load balancer
D
Explanation:
"TFTP is a UDP-based protocol. Servers listen on port 69 for the initial client-to-server packet to establish the TFTP session, then use a port above 1023 for all further packets during that session. Clients use ports above 1023" https://docstore.mik.ua/orelly/networking_2ndEd/fire/ch17_02.htm Because the proxy load balancers only handle TCP, a UDP service needs Network Load Balancing. Google Cloud external TCP/UDP Network Load Balancing is a regional, non-proxied load balancer that distributes traffic among virtual machine (VM) instances in the same region in a Virtual Private Cloud (VPC) network.
You want to configure load balancing for an internet-facing, standard voice-over-IP (VOIP)
application.
Which type of load balancer should you use?
- A . HTTP(S) load balancer
- B . Network load balancer
- C . Internal TCP/UDP load balancer
- D . TCP/SSL proxy load balancer
You want to configure a NAT to perform address translation between your on-premises network blocks and GCP.
Which NAT solution should you use?
- A . Cloud NAT
- B . An instance with IP forwarding enabled
- C . An instance configured with iptables DNAT rules
- D . An instance configured with iptables SNAT rules
You need to ensure your personal SSH key works on every instance in your project. You want to accomplish this as efficiently as possible.
What should you do?
- A . Upload your public ssh key to the project Metadata.
- B . Upload your public ssh key to each instance Metadata.
- C . Create a custom Google Compute Engine image with your public ssh key embedded.
- D . Use gcloud compute ssh to automatically copy your public ssh key to the instance.
A
Explanation:
Overview By creating and managing SSH keys, you can let users access a Linux instance through third-party tools. An SSH key consists of the following files: A public SSH key file that is applied to instance-level metadata or project-wide metadata. A private SSH key file that the user stores on their local devices. If a user presents their private SSH key, they can use a third-party tool to connect to any instance that is configured with the matching public SSH key file, even if they aren’t a member of your Google Cloud project. Therefore, you can control which instances a user can access by changing the public SSH key metadata for one or more instances. https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#addkey
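Each line of the project-wide ssh-keys metadata value has the form USERNAME:PUBLIC_KEY. A minimal sketch of building one entry (the username and key material below are made up for illustration):

```python
def ssh_keys_metadata_entry(username: str, public_key: str) -> str:
    """Format one line of the project-wide 'ssh-keys' metadata value.

    Compute Engine expects each line as USERNAME:PUBLIC_KEY, where the
    key is the full contents of the user's public key file.
    """
    return f"{username}:{public_key}"

# Hypothetical user and key, for illustration only.
entry = ssh_keys_metadata_entry("alice", "ssh-ed25519 AAAAC3NzExampleKey alice@example.com")
print(entry)
```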
In order to provide subnet level isolation, you want to force instance-A in one subnet to route through a security appliance, called instance-B, in another subnet.
What should you do?
- A . Create a more specific route than the system-generated subnet route, pointing the next hop to instance-B with no tag.
- B . Create a more specific route than the system-generated subnet route, pointing the next hop to instance-B with a tag applied to instance-A.
- C . Delete the system-generated subnet route and create a specific route to instance-B with a tag applied to instance-A.
- D . Move instance-B to another VPC and, using multi-NIC, connect instance-B’s interface to instance-A’s network. Configure the appropriate routes to force traffic through to instance-A.
You create a Google Kubernetes Engine private cluster and want to use kubectl to get the status of the pods. In one of your instances you notice the master is not responding, even though the cluster is up and running.
What should you do to solve the problem?
- A . Assign a public IP address to the instance.
- B . Create a route to reach the Master, pointing to the default internet gateway.
- C . Create the appropriate firewall policy in the VPC to allow traffic from Master node IP address to the instance.
- D . Create the appropriate master authorized network entries to allow the instance to communicate to the master.
D
Explanation:
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cant_reach_cluster
https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks
Your company has a security team that manages firewalls and SSL certificates. It also has a networking team that manages the networking resources. The networking team needs to be able to read firewall rules, but should not be able to create, modify, or delete them.
How should you set up permissions for the networking team?
- A . Assign members of the networking team the compute.networkUser role.
- B . Assign members of the networking team the compute.networkAdmin role.
- C . Assign members of the networking team a custom role with only the compute.networks.* and the compute.firewalls.list permissions.
- D . Assign members of the networking team the compute.networkViewer role, and add the compute.networks.use permission.
You have created an HTTP(S) load balanced service. You need to verify that your backend instances are responding properly.
How should you configure the health check?
- A . Set request-path to a specific URL used for health checking, and set proxy-header to PROXY_V1.
- B . Set request-path to a specific URL used for health checking, and set host to include a custom host header that identifies the health check.
- C . Set request-path to a specific URL used for health checking, and set response to a string that the backend service will always return in the response body.
- D . Set proxy-header to the default value, and set host to include a custom host header that identifies the health check.
C
Explanation:
https://cloud.google.com/load-balancing/docs/health-check-concepts#content-based_health_checks
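A content-based health check (option C) marks a backend healthy only when the response carries the configured string. The decision logic can be sketched as follows (a simplification: the real check searches for the expected string within the first 1,024 bytes of the body):

```python
def health_check_passes(status_code: int, body: str, expected: str) -> bool:
    """Content-based check: require HTTP 200 and the expected string in
    the early part of the response body (simplified to a prefix search
    over the first 1024 characters here)."""
    return status_code == 200 and expected in body[:1024]

assert health_check_passes(200, "OK - service ready", "OK") is True
assert health_check_passes(200, "maintenance mode", "OK") is False
assert health_check_passes(500, "OK", "OK") is False
```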
You need to give each member of your network operations team least-privilege access to create, modify, and delete Cloud Interconnect VLAN attachments.
What should you do?
- A . Assign each user the editor role.
- B . Assign each user the compute.networkAdmin role.
- C . Give each user the following permissions only: compute.interconnectAttachments.create, compute.interconnectAttachments.get.
- D . Give each user the following permissions only: compute.interconnectAttachments.create, compute.interconnectAttachments.get, compute.routers.create, compute.routers.get, compute.routers.update.
D
Explanation:
https://cloud.google.com/interconnect/docs/how-to/dedicated/creating-vlan-attachments
You have an application that is running in a managed instance group. Your development team has released an updated instance template which contains a new feature which was not heavily tested. You want to minimize impact to users if there is a bug in the new template.
How should you update your instances?
- A . Manually patch some of the instances, and then perform a rolling restart on the instance group.
- B . Using the new instance template, perform a rolling update across all instances in the instance group. Verify the new feature once the rollout completes.
- C . Deploy a new instance group and canary the updated template in that group. Verify the new feature in the new canary instance group, and then update the original instance group.
- D . Perform a canary update by starting a rolling update and specifying a target size for your instances to receive the new template. Verify the new feature on the canary instances, and then roll forward to the rest of the instances.
D
Explanation:
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#starting_a_canary_update
https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
You have deployed a proof-of-concept application by manually placing instances in a single Compute Engine zone. You are now moving the application to production, so you need to increase your application availability and ensure it can autoscale.
How should you provision your instances?
- A . Create a single managed instance group, specify the desired region, and select Multiple zones for the location.
- B . Create a managed instance group for each region, select Single zone for the location, and manually distribute instances across the zones in that region.
- C . Create an unmanaged instance group in a single zone, and then create an HTTP load balancer for the instance group.
- D . Create an unmanaged instance group for each zone, and manually distribute the instances across the desired zones.
A
Explanation:
https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances
You have a storage bucket that contains two objects. Cloud CDN is enabled on the bucket, and both
objects have been successfully cached. Now you want to make sure that one of the two objects will not be cached anymore, and will always be served to the internet directly from the origin.
What should you do?
- A . Ensure that the object you don’t want to be cached anymore is not shared publicly.
- B . Create a new storage bucket, and move the object you don’t want to be cached anymore inside it. Then edit the bucket setting and enable the private attribute.
- C . Add an appropriate lifecycle rule on the storage bucket containing the two objects.
- D . Add a Cache-Control entry with value private to the metadata of the object you don’t want to be cached anymore. Invalidate all the previously cached copies.
D
Explanation:
https://cloud.google.com/cdn/docs/invalidating-cached-content
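Whether a shared cache such as Cloud CDN may store a response hinges on its Cache-Control directives; the key distinction for option D can be sketched with a simplified decision function (illustrative only — the real caching rules consider more headers and directives):

```python
def cdn_may_cache(headers: dict) -> bool:
    """Return False when Cache-Control forbids a shared cache from
    storing the response, as both 'private' and 'no-store' do."""
    cache_control = headers.get("Cache-Control", "").lower()
    directives = {d.strip() for d in cache_control.split(",")}
    return not ({"private", "no-store"} & directives)

# The object that must always come from the origin:
assert cdn_may_cache({"Cache-Control": "private"}) is False
# The other object remains cacheable:
assert cdn_may_cache({"Cache-Control": "public, max-age=3600"}) is True
```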
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You have recently engaged a traffic-scrubbing service and want to restrict your origin to allow connections only from the traffic-scrubbing service.
What should you do?
- A . Create a Cloud Armor Security Policy that blocks all traffic except for the traffic-scrubbing service.
- B . Create a VPC Firewall rule that blocks all traffic except for the traffic-scrubbing service.
- C . Create a VPC Service Control Perimeter that blocks all traffic except for the traffic-scrubbing service.
- D . Create IP Tables firewall rules that block all traffic except for the traffic-scrubbing service.
A
Explanation:
The global load balancer proxies the connection, so VPC firewall rules on the backends do not see the original client IP. Use a Cloud Armor security policy at the load balancer to allow only the traffic-scrubbing service's source IP ranges.
https://cloud.google.com/load-balancing/docs/https
Your software team is developing an on-premises web application that requires direct connectivity to Compute Engine Instances in GCP using the RFC 1918 address space.
You want to choose a connectivity solution from your on-premises environment to GCP, given these specifications:
Your ISP is a Google Partner Interconnect provider.
Your on-premises VPN device’s internet uplink and downlink speeds are 10 Gbps.
A test VPN connection between your on-premises gateway and GCP is performing at a maximum speed of 500 Mbps due to packet losses.
Most of the data transfer will be from GCP to the on-premises environment.
The application can burst up to 1.5 Gbps during peak transfers over the Interconnect.
Cost and the complexity of the solution should be minimal.
How should you provision the connectivity solution?
- A . Provision a Partner Interconnect through your ISP.
- B . Provision a Dedicated Interconnect instead of a VPN.
- C . Create multiple VPN tunnels to account for the packet losses, and increase bandwidth using ECMP.
- D . Use network compression over your VPN to increase the amount of data you can send over your VPN.
A
Explanation:
A Dedicated Interconnect would be expensive and overprovisioned for a 1.5 Gbps peak, and managing multiple VPN tunnels to work around packet loss adds complexity. Since the ISP is already a Partner Interconnect provider, a Partner Interconnect delivers the required bandwidth at minimal cost and complexity.
Your company has just launched a new critical revenue-generating web application. You deployed the application for scalability using managed instance groups, autoscaling, and a network load balancer as frontend. One day, you notice severe bursty traffic that caused autoscaling to reach the maximum number of instances, and users of your application cannot complete transactions. After an investigation, you suspect a DDoS attack. You want to quickly restore user access to your application and allow successful transactions while minimizing cost.
Which two steps should you take? (Choose two.)
- A . Use Cloud Armor to blacklist the attacker’s IP addresses.
- B . Increase the maximum autoscaling backend to accommodate the severe bursty traffic.
- C . Create a global HTTP(s) load balancer and move your application backend to this load balancer.
- D . Shut down the entire application in GCP for a few hours. The attack will stop when the application is offline.
- E . SSH into the backend compute engine instances, and view the auth logs and syslogs to further understand the nature of the attack.
You are creating a new application and require access to Cloud SQL from VPC instances without public IP addresses.
Which two actions should you take? (Choose two.)
- A . Activate the Service Networking API in your project.
- B . Activate the Cloud Datastore API in your project.
- C . Create a private connection to a service producer.
- D . Create a custom static route to allow the traffic to reach the Cloud SQL API.
- E . Enable Private Google Access.
CE
Explanation:
https://cloud.google.com/sql/docs/mysql/configure-private-services-access#console_1
C: If you are using private IP for any of your Cloud SQL instances, you only need to configure private services access one time for every Google Cloud project that has or needs to connect to a Cloud SQL instance. If your Google Cloud project has a Cloud SQL instance, you can either configure private services access yourself or let Cloud SQL do it for you when certain conditions are met. https://cloud.google.com/sql/docs/postgres/configure-private-services-access#before_you_begin
E: You can enable Private Google access on a subnet level and any VMs on that subnet can access Google APIs by using their internal IP address. https://cloud.google.com/vpc/docs/configure-private-google-access
You want to use Cloud Interconnect to connect your on-premises network to a GCP VPC. You cannot meet Google at one of its point-of-presence (POP) locations, and your on-premises router cannot run a Border Gateway Protocol (BGP) configuration.
Which connectivity model should you use?
- A . Direct Peering
- B . Dedicated Interconnect
- C . Partner Interconnect with a layer 2 partner
- D . Partner Interconnect with a layer 3 partner
D
Explanation:
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
For Layer 3 connections, your service provider establishes a BGP session between your Cloud Routers and their edge routers for each VLAN attachment. You don’t need to configure BGP on your on-premises router. Google and your service provider automatically set the correct configurations.
https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview#connectivity-type
You have configured a Compute Engine virtual machine instance as a NAT gateway.
You execute the following command:
gcloud compute routes create no-ip-internet-route \
--network custom-network1 \
--destination-range 0.0.0.0/0 \
--next-hop-instance nat-gateway \
--next-hop-instance-zone us-central1-a \
--tags no-ip --priority 800
You want existing instances to use the new NAT gateway.
Which command should you execute?
- A . sudo sysctl -w net.ipv4.ip_forward=1
- B . gcloud compute instances add-tags [existing-instance] --tags no-ip
- C . gcloud builds submit --config=cloudbuild.yaml --substitutions=TAG_NAME=no-ip
- D . gcloud compute instances create example-instance --network custom-network1 \
--subnet subnet-us-central \
--no-address \
--zone us-central1-a \
--image-family debian-9 \
--image-project debian-cloud --tags no-ip
B
Explanation:
https://cloud.google.com/sdk/gcloud/reference/compute/routes/create
The route was created with --tags no-ip, so it applies only to instances that carry that tag; adding the tag to the existing instances binds the route to them.
Reference: https://cloud.google.com/vpc/docs/special-configurations
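The effect of the tag can be sketched with a simplified model of VPC route selection: a route applies to an instance only if the route has no tags or shares a tag with the instance, and among applicable routes with the same destination, the lowest priority value wins.

```python
def pick_route(routes, instance_tags):
    """Pick the applicable route with the lowest priority value.

    Each route is (name, destination, priority, tags); a route with an
    empty tag list applies to every instance.
    """
    applicable = [
        r for r in routes
        if not r[3] or set(r[3]) & set(instance_tags)
    ]
    return min(applicable, key=lambda r: r[2])[0]

routes = [
    ("default-internet-gateway", "0.0.0.0/0", 1000, []),
    ("no-ip-internet-route", "0.0.0.0/0", 800, ["no-ip"]),
]

# An untagged instance keeps the default route; adding the no-ip tag
# switches its default route to the NAT gateway route.
print(pick_route(routes, []))         # default-internet-gateway
print(pick_route(routes, ["no-ip"]))  # no-ip-internet-route
```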
You need to configure a static route to an on-premises resource behind a Cloud VPN gateway that is configured for policy-based routing using the gcloud command.
Which next hop should you choose?
- A . The default internet gateway
- B . The IP address of the Cloud VPN gateway
- C . The name and region of the Cloud VPN tunnel
- D . The IP address of the instance on the remote side of the VPN tunnel
C
Explanation:
When you create a route-based tunnel using the Cloud Console, Classic VPN performs both of the following tasks: it sets the tunnel’s local and remote traffic selectors to any IP address (0.0.0.0/0), and for each range in Remote network IP ranges, Google Cloud creates a custom static route whose destination (prefix) is the range’s CIDR and whose next hop is the tunnel. For a policy-based tunnel, you create that static route yourself with gcloud, specifying the name and region of the Cloud VPN tunnel as the next hop.
https://cloud.google.com/network-connectivity/docs/vpn/how-to/creating-static-vpns
You need to enable Cloud CDN for all the objects inside a storage bucket. You want to ensure that all the objects in the storage bucket can be served by the CDN.
What should you do in the GCP Console?
- A . Create a new cloud storage bucket, and then enable Cloud CDN on it.
- B . Create a new TCP load balancer, select the storage bucket as a backend, and then enable Cloud CDN on the backend.
- C . Create a new SSL proxy load balancer, select the storage bucket as a backend, and then enable Cloud CDN on the backend.
- D . Create a new HTTP load balancer, select the storage bucket as a backend, enable Cloud CDN on the backend, and make sure each object inside the storage bucket is shared publicly.
D
Explanation:
https://cloud.google.com/load-balancing/docs/https/adding-backend-buckets-to-load-balancers#using_cloud_cdn_with_cloud_storage_buckets
Cloud CDN needs HTTP(S) Load Balancers and Cloud Storage bucket has to be shared publicly. https://cloud.google.com/cdn/docs/setting-up-cdn-with-bucket
Your company’s Google Cloud-deployed, streaming application supports multiple languages. The application development team has asked you how they should support splitting audio and video traffic to different backend Google Cloud storage buckets. They want to use URL maps and minimize operational overhead.
They are currently using the following directory structure:
/fr/video
/en/video
/es/video
/../video
/fr/audio
/en/audio
/es/audio
/../audio
Which solution should you recommend?
- A . Rearrange the directory structure, create a URL map and leverage a path rule such as /video/* and /audio/*.
- B . Rearrange the directory structure, create DNS hostname entries for video and audio and leverage a path rule such as /video/* and /audio/*.
- C . Leave the directory structure as-is, create a URL map and leverage a path rule such as /[a-z]{2}/video and /[a-z]{2}/audio.
- D . Leave the directory structure as-is, create a URL map and leverage a path rule such as /*/video and /*/audio.
A
Explanation:
https://cloud.google.com/load-balancing/docs/url-map#configuring_url_maps
Path matcher constraints: a path rule can only include a wildcard character (*) after a forward slash character (/). For example, /videos/* and /videos/hd/* are valid path rules, but /videos* and /videos/hd* are not. Path rules do not use regular expression or substring matching; for example, path rules for /videos/hd or /videos/hd/* do not apply to a URL with the path /video/hd-abcd, but a path rule for /video/* does apply to that path. https://cloud.google.com/load-balancing/docs/url-map-concepts#pm-constraints
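The path-rule constraint above (a wildcard is legal only as a trailing /* and there is no regex or substring matching) can be sketched as a small validator plus prefix matcher. This is an illustrative simplification of the real URL map matcher:

```python
def is_valid_path_rule(rule: str) -> bool:
    """A wildcard is only legal as a trailing '/*'."""
    if "*" not in rule:
        return True
    return rule.endswith("/*") and rule.count("*") == 1

def path_rule_matches(rule: str, path: str) -> bool:
    """A '/*' rule matches any path under that prefix; otherwise the
    match must be exact (no substring or regex matching)."""
    if rule.endswith("/*"):
        return path.startswith(rule[:-1])
    return path == rule

assert is_valid_path_rule("/video/*") is True
assert is_valid_path_rule("/videos*") is False       # wildcard not after '/'
assert path_rule_matches("/video/*", "/video/hd") is True
assert path_rule_matches("/video/*", "/videos/hd") is False
```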