Nutanix NCP-US Nutanix Certified Professional – Unified Storage (NCP-US) v6 exam Online Training
Nutanix NCP-US Online Training
The questions for NCP-US were last updated at Dec 31, 2024.
- Exam Code: NCP-US
- Exam Name: Nutanix Certified Professional – Unified Storage (NCP-US) v6 exam
- Certification Provider: Nutanix
- Latest update: Dec 31, 2024
An administrator successfully installed Objects and was able to create a bucket.
When using the reference URL to access this Objects store, the administrator is unable to write data in the bucket when using an Active Directory account.
Which action should the administrator take to resolve this issue?
- A . Verify sharing policies at buckets level.
- B . Verify Access Keys for the user.
- C . Replace SSL Certificates at Objects store level.
- D . Reset Active Directory user's password
B
Explanation:
If the administrator is unable to write data in the bucket using an Active Directory account, the issue is likely related to incorrect or missing access keys for the user. Access keys are required to authenticate and authorize access to the Objects store. The administrator should verify the access keys associated with the Active Directory account and ensure they are correctly configured.
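As a rough illustration of why access keys matter here, the sketch below writes to an Objects bucket over its S3-compatible API using a user's access key/secret key pair; the endpoint URL, bucket name, and key values are placeholders, not values from the question.

```python
# Minimal sketch: writing to a Nutanix Objects bucket through its S3-compatible API.
# The endpoint, bucket name, and credentials are placeholders; the access key /
# secret key pair must be the one generated for the (AD) user in Objects.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",   # placeholder Objects endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
    verify=False,  # or the path to a CA bundle if the endpoint uses a custom cert
)

# Without a valid key pair for the user, this write is rejected.
s3.put_object(Bucket="backups", Key="test.txt", Body=b"hello")
```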
An existing Objects bucket was created for backups with these parameters:
A) WORM policy of three years
B) Versioning policy of two years
C) Lifecycle policy of two years
The customer reports that the cluster is nearly full due to backups created during a recent crypto locker attack. The customer would like to automatically delete backups older than one year to free up space in the cluster.
How should the administrator change settings within Objects?
- A . Modify the existing bucket lifecycle policy from two years to one year.
- B . Create a new bucket with the lifecycle policy of one year.
- C . Create a new bucket with the WORM policy of two years.
- D . Modify the existing bucket WORM policy from three years to one year.
A
Explanation:
According to Nutanix documentation on unified storage (NCP-US) v6, to automatically delete backups older than one year, an administrator should modify the existing bucket lifecycle policy to set the expiration period to one year. Lifecycle policies enable administrators to automate the transition of objects to different storage classes and to expire them altogether. In this scenario, modifying the existing bucket lifecycle policy to shorten the expiration period to one year will ensure that backups older than one year are automatically deleted to free up space in the cluster. https://portal.nutanix.com/page/documents/details?targetId=Objects-v3_1:v31-lifecycle-policies-rule-c.html
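For illustration only, here is a minimal sketch of a one-year expiration rule applied through the S3-compatible lifecycle API; the endpoint, credentials, and bucket name are placeholders rather than values from the scenario.

```python
# Sketch: set a lifecycle rule that expires (deletes) objects older than one year.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",    # placeholder Objects endpoint
    aws_access_key_id="PLACEHOLDER_ACCESS_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET_KEY",
)

s3.put_bucket_lifecycle_configuration(
    Bucket="backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-one-year",
                "Filter": {"Prefix": ""},     # apply to every object in the bucket
                "Status": "Enabled",
                "Expiration": {"Days": 365},  # roughly one year
            }
        ]
    },
)
```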
An administrator has mapped a Volume Group named VG1 to a VM. Due to changes in the application running inside the VM, two additional volumes are required. At the time of creation, the VM was assigned to a Protection Domain named PD1.
Which two steps should the administrator take to add the additional volumes to the VM while maintaining data protection? (Choose two.)
- A . Add the new vDisks to the same Consistency Group.
- B . Add two additional vDisks to VG1.
- C . Map the new vDisks as volumes to the VM.
- D . Manually update PD1 to include the newly created vDisks.
B,C
Explanation:
To add additional volumes to the VM while maintaining data protection, the administrator should add two new vDisks to the Volume Group (VG1) and map the new vDisks as volumes to the VM. This will allow the VM to access the additional storage while keeping the data protected in the Protection Domain (PD1).
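A rough sketch of the equivalent steps from a CVM shell, wrapped in Python's subprocess purely for illustration: VG1 comes from the question, while the container name, disk sizes, and VM name are hypothetical, and the exact acli parameters can vary between AOS versions.

```python
# Sketch: add two vDisks to the existing Volume Group and make sure the VM sees them.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Add two additional vDisks to VG1 (container name and sizes are hypothetical).
run(["acli", "vg.disk_create", "VG1", "container=default-container", "create_size=100G"])
run(["acli", "vg.disk_create", "VG1", "container=default-container", "create_size=100G"])

# Attach VG1 to the VM so the new volumes are presented (skip if already attached).
run(["acli", "vg.attach_to_vm", "VG1", "app-vm01"])
```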
63 TiB storage
The total target workload requests are:
5 GHz CPU
10 GB RAM
40 TiB storage
What should be changed to have a good capacity runway for at least the next 6 months?
- A . Review storage capacity requested by the workload.
- B . Increase the storage capacity for physical nodes.
- C . Add new physical node to the configuration.
- D . Change the date of physical node availability.
A
Explanation:
According to the Nutanix Support & Insights website, capacity runway is an estimate of how long your cluster can support its current workload before running out of resources. The exhibit shows that your cluster has a storage runway of only 1 month, which means you will run out of storage space soon. However, your CPU and memory runways are much longer (more than 2 years), which means you have plenty of CPU and memory resources available. Therefore, it might be possible to reduce the storage capacity requested by your workload or optimize your storage utilization to extend your storage runway. Alternatively, you could also increase the storage capacity for physical nodes or add new physical nodes to the configuration, but these options might be more costly and complex.
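The runway reasoning comes down to simple arithmetic; the sketch below uses the 63 TiB capacity and 40 TiB request from the question, while the monthly growth figure is purely hypothetical.

```python
# Back-of-the-envelope storage runway check against a 6-month target.
capacity_tib = 63.0          # physical storage from the question
requested_tib = 40.0         # target workload request
monthly_growth_tib = 5.0     # hypothetical growth rate from a capacity report

free_tib = capacity_tib - requested_tib
runway_months = free_tib / monthly_growth_tib

print(f"Free: {free_tib:.1f} TiB, runway: {runway_months:.1f} months")
print("Target met" if runway_months >= 6 else "Review the storage capacity requested by the workload")
```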
When completing the Linux Client iSCSI discovery process of the Nutanix cluster Volumes target, which action should an administrator complete first?
- A . Ensure the iSCSI service is started.
- B . Restart iSCSI service on CVM.
- C . Discover the Volumes target.
- D . Establish connection to the Volumes target.
A
Explanation:
To use Nutanix Volumes with Linux clients, you must install and configure an iSCSI initiator on each client. Therefore, the administrator should ensure that the iSCSI service is started on the Linux client before discovering or connecting to the Volumes target.
https://next.nutanix.com/installation-configuration-23/data-services-ip-iscsi-33804
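For context, here is a minimal sketch of the client-side order of operations using the standard open-iscsi tools, wrapped in Python's subprocess for illustration; the data services IP and target IQN are placeholders.

```python
# Sketch: Linux client steps for connecting to a Nutanix Volumes target.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Ensure the iSCSI initiator service is started before anything else.
run(["systemctl", "start", "iscsid"])

# 2. Discover the Volumes target through the cluster's data services IP (placeholder).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", "10.0.0.50"])

# 3. Log in (establish the connection) to the discovered target (placeholder IQN).
run(["iscsiadm", "-m", "node", "-T", "iqn.2010-06.com.nutanix:example-target",
     "-p", "10.0.0.50", "--login"])
```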
An administrator is building a new application server and has decided to use post-process compression for the file server and inline compression for the database components.
The current environment has these characteristics:
Two Volume Groups named VG1 and VG2.
A Storage Container named SC1 with inline compression.
A Storage Container named SC2 with post-process compression.
Which action should the administrator take to most efficiently satisfy this requirement?
- A . Within VG1, create one vDisk in SC1 and one vDisk in SC2.
- B . Within SC1, create one vDisk in VG1 and within SC2, create one vDisk in VG2.
- C . Within SC1, create one vDisk in VG1 and one vDisk in VG2.
- D . Within VG1, create one vDisk in SC1 and within VG2, create one vDisk in SC2.
D
Explanation:
Volume Groups (VGs) are collections of vDisks that can be attached to VMs. vDisks are virtual disks that reside on Storage Containers. Storage Containers are logical entities that provide storage services with different compression options.
To use post-process compression for the file server and inline compression for the database components, you need to create two vDisks on different Storage Containers with different compression options. Then you need to attach those vDisks to different VGs.
https://next.nutanix.com/volumes-block-storage-171/configuration-for-volumes-vdisks-40537
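A rough sketch of option D from a CVM shell, again via Python's subprocess for illustration; the disk sizes are hypothetical and the acli parameters may differ by AOS version.

```python
# Sketch: one vDisk per Volume Group, each on the container with the desired compression.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Database volume: vDisk on SC1 (inline compression), in VG1.
run(["acli", "vg.disk_create", "VG1", "container=SC1", "create_size=200G"])

# File server volume: vDisk on SC2 (post-process compression), in VG2.
run(["acli", "vg.disk_create", "VG2", "container=SC2", "create_size=500G"])
```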
Which Nutanix interface is used to deploy a new Files instance?
- A . Prism Element
- B . Prism Central
- C . Files Manager
- D . Life Cycle Manager
B
Explanation:
According to Nutanix Support & Insights, Nutanix Files is a scale-out file storage solution that provides SMB and NFS file services to clients. Nutanix Files instances are composed of a set of VMs (called FSVMs) that run on Nutanix clusters.
According to Your Complete Guide to Nutanix Files Training Resources, Prism Central is the interface used to deploy a new Files instance. Prism Central is a centralized management console that provides visibility and control across multiple Nutanix clusters and services.
An administrator has received multiple trouble tickets from users who are unable to access a particular Distributed share.
While troubleshooting, the administrator observes that data located on FSVM 3 can be accessed, but the data located on FSVMs 1 and 2 is inaccessible. The administrator receives this message when attempting to access data on FSVMs 1 and 2:
Network object not found
Both FSVM 1 and 2 nodes successfully reply to pings, and the administrator is able to access data via the node IP.
What must the administrator check as a next step?
- A . Check if all DNS records are created on the DNS server.
- B . Check if SMBv1 is enabled on FSVM nodes 1 and 2.
- C . Check if kerberos_time_skew is logged on the client.
- D . Check if user permissions are configured correctly.
A
Explanation:
According to Nutanix Support & Insights, a distributed share is a type of SMB share that distributes the hosting of top-level directories across FSVMs. To access a distributed share, you need to use a DNS name that resolves to all FSVMs in the cluster. If some DNS records are missing or incorrect, then some FSVMs may not be reachable by their DNS name, resulting in the "network object not found" error. This could explain why data located on FSVM 3 can be accessed, but data located on FSVMs 1 and 2 is inaccessible.
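A quick way to test this reasoning is to resolve the file server name and compare the answers against the FSVM external IPs; the FQDN and IP addresses below are placeholders for this environment.

```python
# Sketch: check that the file server FQDN resolves to every FSVM external IP.
import socket

fqdn = "files.example.com"                                   # placeholder Files FQDN
expected_fsvm_ips = {"10.0.0.11", "10.0.0.12", "10.0.0.13"}  # placeholder FSVM IPs

_, _, resolved = socket.gethostbyname_ex(fqdn)
missing = expected_fsvm_ips - set(resolved)

print("Resolved:", sorted(resolved))
if missing:
    print("Missing DNS records for FSVM IPs:", sorted(missing))
else:
    print("All expected FSVM IPs are present in DNS.")
```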
During a recent audit, the auditors discovered several shares that were unencrypted. To remediate the audit item, the administrator enabled Encrypt SMB3 Messages on the accounting, finance, and facilities shares. After encryption was enabled, several users have reported that they are no longer able to access the shares.
What is causing this issue?
- A . The users are accessing the shares from Windows 8 desktops.
- B . Advanced Encryption Standard 128 & 256 are disabled in Windows 7.
- C . Advanced Encryption Standard 128 & 256 are disabled in Linux or Mac OS.
- D . The users are accessing the shares from Linux desktops.
C
Explanation:
The users are likely unable to access the shares after Encrypt SMB3 Messages was enabled because Advanced Encryption Standard 128 & 256 are disabled in Linux or Mac OS. These operating systems may not have the required encryption standards enabled by default, leading to compatibility issues with the encrypted shares. Enabling the appropriate encryption standards on Linux or Mac OS should resolve the issue.
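As an illustration from the Linux side, mounting an encrypted share requires an SMB 3.x dialect with encryption ("seal") requested; the server name, share, mount point, and credentials file below are placeholders.

```python
# Sketch: mount an Encrypt-SMB3-Messages share from a Linux client with encryption requested.
import subprocess

subprocess.run(
    [
        "mount", "-t", "cifs",
        "//files.example.com/accounting", "/mnt/accounting",   # placeholder server/share
        "-o", "vers=3.1.1,seal,credentials=/root/.smbcred",    # SMB 3.1.1 + encryption
    ],
    check=True,
)
```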