Splunk SPLK-2002 Splunk Enterprise Certified Architect Exam Online Training

Question #1

Which of the following will cause the greatest reduction in disk size requirements for a cluster of N indexers running Splunk Enterprise Security?

  • A . Setting the cluster search factor to N-1.
  • B . Increasing the number of buckets per index.
  • C . Decreasing the data model acceleration range.
  • D . Setting the cluster replication factor to N-1.

Correct Answer: C

Explanation:

Decreasing the data model acceleration range will reduce the disk size requirements for a cluster of indexers running Splunk Enterprise Security. Data model acceleration creates tsidx files that consume disk space on the indexers. Reducing the acceleration range will limit the amount of data that is accelerated and thus save disk space. Setting the cluster search factor or replication factor to N-1 will not reduce the disk size requirements, but rather increase the risk of data loss. Increasing the number of buckets per index will also increase the disk size requirements, as each bucket has a minimum size. For more information, see Data model acceleration and Bucket size in the Splunk documentation.

Question #2

Stakeholders have identified high availability for searchable data as their top priority.

Which of the following best addresses this requirement?

  • A . Increasing the search factor in the cluster.
  • B . Increasing the replication factor in the cluster.
  • C . Increasing the number of search heads in the cluster.
  • D . Increasing the number of CPUs on the indexers in the cluster.

Correct Answer: A

Explanation:

Increasing the search factor in the cluster will best address the requirement of high availability for searchable data. The search factor determines how many copies of searchable data are maintained by the cluster. A higher search factor means that more indexers can serve the data in case of a failure or a maintenance event. Increasing the replication factor will improve the availability of raw data, but not searchable data. Increasing the number of search heads or CPUs on the indexers will improve the search performance, but not the availability of searchable data. For more information, see Replication factor and search factor in the Splunk documentation.
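For concreteness, a minimal sketch of how these two factors are set in server.conf on the cluster master (the values shown are illustrative, not mandated by the question):

    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2

Raising search_factor toward replication_factor keeps more copies searchable, at the cost of additional tsidx storage on the peers.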

Question #3

Search dashboards in the Monitoring Console indicate that the distributed deployment is approaching its capacity.

Which of the following options will provide the most search performance improvement?

  • A . Replace the indexer storage to solid state drives (SSD).
  • B . Add more search heads and redistribute users based on the search type.
  • C . Look for slow searches and reschedule them to run during an off-peak time.
  • D . Add more search peers and make sure forwarders distribute data evenly across all indexers.

Correct Answer: D

Explanation:

Adding more search peers and making sure forwarders distribute data evenly across all indexers will provide the most search performance improvement when the distributed deployment is approaching its capacity. Adding more search peers will increase the search concurrency and reduce the load on each indexer. Distributing data evenly across all indexers will ensure that the search workload is balanced and no indexer becomes a bottleneck. Replacing the indexer storage with SSDs would improve search performance, but it is a costly and time-consuming option. Adding more search heads will not improve the search performance if the indexers are the bottleneck. Rescheduling slow searches to run during an off-peak time will reduce search contention, but it will not improve the performance of each individual search. For more information, see [Scale your indexer cluster] and [Distribute data across your indexers] in the Splunk documentation.

Question #4

A Splunk architect has inherited the Splunk deployment at Buttercup Games and end users are complaining that the events are inconsistently formatted for a web source. Further investigation reveals that not all weblogs flow through the same infrastructure: some of the data goes through heavy forwarders and some of the forwarders are managed by another department.

Which of the following items might be the cause of this issue?

  • A . The search head may have different configurations than the indexers.
  • B . The data inputs are not properly configured across all the forwarders.
  • C . The indexers may have different configurations than the heavy forwarders.
  • D . The forwarders managed by the other department are an older version than the rest.

Correct Answer: C

Explanation:

The indexers may have different configurations than the heavy forwarders, which might cause the issue of inconsistently formatted events for a web source. The heavy forwarders parse the data before sending it to the indexers. If the indexers have different configurations than the heavy forwarders, such as different props.conf or transforms.conf settings, the data may be parsed or indexed differently on the indexers, resulting in inconsistent events. The search head configurations do not affect the event formatting, as the search head does not parse or index the data. The data inputs configurations on the forwarders do not affect the event formatting, as the data inputs only determine what data to collect and how to monitor it. The forwarder version does not affect the event formatting, as long as the forwarder is compatible with the indexer. For more information, see [Heavy forwarder versus indexer] and [Configure event processing] in the Splunk documentation.

Question #5

A customer has installed a 500GB Enterprise license. They also purchased and installed a 300GB, no enforcement license on the same license master.

How much data can the customer ingest before the search is locked out?

  • A . 300GB. After this limit, the search is locked out.
  • B . 500GB. After this limit, the search is locked out.
  • C . 800GB. After this limit, the search is locked out.
  • D . Search is not locked out. Violations are still recorded.

Correct Answer: D

Explanation:

Search is not locked out when a customer has installed a 500GB Enterprise license and a 300GB, no enforcement license on the same license master. The no enforcement license allows the customer to exceed the license quota without locking search, but violations are still recorded. The customer can ingest up to 800GB of data per day without violating the license, but if they ingest more than that, they will incur a violation. However, the violation will not lock search, as the no enforcement license overrides the enforcement policy of the Enterprise license. For more information, see [No enforcement licenses] and [License violations] in the Splunk documentation.

Question #6

What does the deployer do in a Search Head Cluster (SHC)? (Select all that apply.)

  • A . Distributes apps to SHC members.
  • B . Bootstraps a clean Splunk install for a SHC.
  • C . Distributes non-search-related and manual configuration file changes.
  • D . Distributes runtime knowledge object changes made by users across the SHC.

Correct Answer: A, C

Explanation:

The deployer distributes apps and non-search-related, manual configuration file changes to the search head cluster members. The deployer does not bootstrap a clean Splunk install for a search head cluster, and it does not distribute runtime knowledge object changes made by users; those changes are replicated automatically among the members by the search head cluster's own replication mechanism. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.
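As an illustration of the deployer workflow, apps are staged under $SPLUNK_HOME/etc/shcluster/apps on the deployer and pushed with a command along these lines (the target URI and credentials are placeholders):

    splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme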

Question #7

When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to what?

  • A . Auto
  • B . None
  • C . True
  • D . False

Correct Answer: D

Explanation:

When using the props.conf LINE_BREAKER attribute to delimit multi-line events, the SHOULD_LINEMERGE attribute should be set to false. This tells Splunk not to re-merge the events that have already been broken by the LINE_BREAKER. Setting SHOULD_LINEMERGE to true causes Splunk to merge the broken lines back into events based on other criteria, such as BREAK_ONLY_BEFORE_DATE, which can undo the intended line breaking; auto and none are not valid values for this attribute. For more information, see Configure event line breaking in the Splunk documentation.
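A minimal props.conf sketch, assuming a hypothetical sourcetype whose events each begin with an ISO-style date; the first capture group in LINE_BREAKER is consumed as the event delimiter:

    [my:multiline:sourcetype]
    LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
    SHOULD_LINEMERGE = false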

Question #8

Which of the following should be included in a deployment plan?

  • A . Business continuity and disaster recovery plans.
  • B . Current logging details and data source inventory.
  • C . Current and future topology diagrams of the IT environment.
  • D . A comprehensive list of stakeholders, either direct or indirect.

Correct Answer: A, B, C

Explanation:

A deployment plan should include business continuity and disaster recovery plans, current logging details and data source inventory, and current and future topology diagrams of the IT environment. These elements are essential for planning, designing, and implementing a Splunk deployment that meets the business and technical requirements. A comprehensive list of stakeholders, either direct or indirect, is not part of the deployment plan, but rather part of the project charter. For more information, see Deployment planning in the Splunk documentation.

Question #9

A multi-site indexer cluster can be configured using which of the following? (Select all that apply.)

  • A . Via Splunk Web.
  • B . Directly edit SPLUNK_HOME/etc/system/local/server.conf
  • C . Run a Splunk edit cluster-config command from the CLI.
  • D . Directly edit SPLUNK_HOME/etc/system/default/server.conf

Correct Answer: B, C

Explanation:

A multi-site indexer cluster can be configured by directly editing SPLUNK_HOME/etc/system/local/server.conf or running a splunk edit cluster-config command from the CLI. These methods allow the administrator to specify the site attribute for each indexer node and the site_replication_factor and site_search_factor for the cluster. Configuring a multi-site indexer cluster via Splunk Web or directly editing SPLUNK_HOME/etc/system/default/server.conf are not supported methods. For more information, see Configure the indexer cluster with server.conf in the Splunk documentation.
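For example, a peer node could be joined to site1 of a multisite cluster with a CLI call along these lines (the master URI, port, and secret are placeholders):

    splunk edit cluster-config -mode slave -site site1 -master_uri https://cm.example.com:8089 -replication_port 9887 -secret my_cluster_key
    splunk restart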

Question #10

Which index-time props.conf attributes impact indexing performance? (Select all that apply.)

  • A . REPORT
  • B . LINE_BREAKER
  • C . ANNOTATE_PUNCT
  • D . SHOULD_LINEMERGE

Correct Answer: B, D

Explanation:

The index-time props.conf attributes that impact indexing performance are LINE_BREAKER and SHOULD_LINEMERGE. These attributes determine how Splunk breaks the incoming data into events and whether it merges multiple events into one. These operations can affect the indexing speed and the disk space consumption. The REPORT attribute does not impact indexing performance, as it is used to apply transforms at search time. The ANNOTATE_PUNCT attribute, which controls whether the punct field is generated for events, has comparatively little impact on indexing performance. For more information, see [About props.conf and transforms.conf] in the Splunk documentation.

Question #11

Which of the following are client filters available in serverclass.conf? (Select all that apply.)

  • A . DNS name.
  • B . IP address.
  • C . Splunk server role.
  • D . Platform (machine type).

Correct Answer: A, B, D

Explanation:

The client filters available in serverclass.conf are DNS name, IP address, and platform (machine type). These filters allow the administrator to specify which forwarders belong to a server class and receive the apps and configurations from the deployment server. The Splunk server role is not a valid client filter in serverclass.conf, as it is not a property of the forwarder. For more information, see [Use forwarder management filters] in the Splunk documentation.
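A serverclass.conf sketch combining the three supported filter types; the class name, patterns, and app are hypothetical:

    [serverClass:linux_web]
    whitelist.0 = 10.1.2.*
    whitelist.1 = web*.example.com
    machineTypesFilter = linux-x86_64

    [serverClass:linux_web:app:web_inputs]
    restartSplunkd = true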

Question #12

What log file would you search to verify if you suspect there is a problem interpreting a regular expression in a monitor stanza?

  • A . btool.log
  • B . metrics.log
  • C . splunkd.log
  • D . tailing_processor.log

Correct Answer: D

Explanation:

The tailing_processor.log file would be the best place to search if you suspect there is a problem interpreting a regular expression in a monitor stanza. This log file contains information about how Splunk monitors files and directories, including any errors or warnings related to parsing the monitor stanza. The splunkd.log file contains general information about the Splunk daemon, but it may not have the specific details about the monitor stanza. The btool.log file contains information about the configuration files, but it does not log the runtime behavior of the monitor stanza. The metrics.log file contains information about the performance metrics of Splunk, but it does not log the event breaking issues. For more information, see About Splunk Enterprise logging in the Splunk documentation.

Question #13

Which Splunk tool offers a health check for administrators to evaluate the health of their Splunk deployment?

  • A . btool
  • B . DiagGen
  • C . SPL Clinic
  • D . Monitoring Console

Correct Answer: D

Explanation:

The Monitoring Console is the Splunk tool that offers a health check for administrators to evaluate the health of their Splunk deployment. The Monitoring Console provides dashboards and alerts that show the status and performance of various Splunk components, such as indexers, search heads, forwarders, license usage, and search activity. The Monitoring Console can also run health checks on the deployment and identify any issues or recommendations. The btool is a command-line tool that shows the effective settings of the configuration files, but it does not offer a health check. The DiagGen is a tool that generates diagnostic snapshots of the Splunk environment, but it does not offer a health check. The SPL Clinic is a tool that analyzes and optimizes SPL queries, but it does not offer a health check. For more information, see About the Monitoring Console in the Splunk documentation.

Question #14

In a four site indexer cluster, which configuration stores two searchable copies at the origin site, one searchable copy at site2, and a total of four searchable copies?

  • A . site_search_factor = origin:2, site1:2, total:4
  • B . site_search_factor = origin:2, site2:1, total:4
  • C . site_replication_factor = origin:2, site1:2, total:4
  • D . site_replication_factor = origin:2, site2:1, total:4

Correct Answer: B

Explanation:

In a four site indexer cluster, the configuration that stores two searchable copies at the origin site, one searchable copy at site2, and a total of four searchable copies is site_search_factor = origin:2, site2:1, total:4. This configuration tells the cluster to maintain two copies of searchable data at the site where the data originates, one copy of searchable data at site2, and a total of four copies of searchable data across all sites. The site_search_factor determines how many copies of searchable data are maintained by the cluster for each site. The site_replication_factor determines how many copies of raw data are maintained by the cluster for each site. For more information, see Configure multisite indexer clusters with server.conf in the Splunk documentation.
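On the master node, the surrounding configuration for this answer might look like the following sketch (the site list and the site_replication_factor values are assumptions for illustration):

    [general]
    site = site1

    [clustering]
    mode = master
    multisite = true
    available_sites = site1,site2,site3,site4
    site_replication_factor = origin:2,total:4
    site_search_factor = origin:2,site2:1,total:4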

Question #15

Which of the following is true regarding Splunk Enterprise’s performance? (Select all that apply.)

  • A . Adding search peers increases the maximum size of search results.
  • B . Adding RAM to existing search heads provides additional search capacity.
  • C . Adding search peers increases the search throughput as the search load increases.
  • D . Adding search heads provides additional CPU cores to run more concurrent searches.

Correct Answer: C, D

Explanation:

The following statements are true regarding Splunk Enterprise performance:

Adding search peers increases the search throughput as search load increases. This is because adding more search peers distributes the search workload across more indexers, which reduces the load on each indexer and improves the search speed and concurrency.

Adding search heads provides additional CPU cores to run more concurrent searches. This is because adding more search heads increases the number of search processes that can run in parallel, which improves the search performance and scalability.

The following statements are false regarding Splunk Enterprise performance:

Adding search peers does not increase the maximum size of search results. The maximum size of search results is determined by the maxresultrows setting in the limits.conf file, which is independent of the number of search peers.

Adding RAM to an existing search head does not provide additional search capacity. The search capacity of a search head is determined by the number of CPU cores, not the amount of RAM. Adding RAM to a search head may improve the search performance, but not the search capacity. For more information, see Splunk Enterprise performance in the Splunk documentation.

Question #16

Which Splunk Enterprise offering has its own license?

  • A . Splunk Cloud Forwarder
  • B . Splunk Heavy Forwarder
  • C . Splunk Universal Forwarder
  • D . Splunk Forwarder Management

Correct Answer: C

Explanation:

The Splunk Universal Forwarder is the only Splunk Enterprise offering that has its own license. The Splunk Universal Forwarder license allows the forwarder to send data to any Splunk Enterprise or Splunk Cloud instance without consuming any license quota. The Splunk Heavy Forwarder does not have its own license, but rather consumes the license quota of the Splunk Enterprise or Splunk Cloud instance that it sends data to. The Splunk Cloud Forwarder and the Splunk Forwarder Management are not separate Splunk Enterprise offerings, but rather features of the Splunk Cloud service. For more information, see [About forwarder licensing] in the Splunk documentation.

Question #17

Which component in the splunkd.log will log information related to bad event breaking?

  • A . Audittrail
  • B . EventBreaking
  • C . IndexingPipeline
  • D . AggregatorMiningProcessor

Correct Answer: D

Explanation:

The AggregatorMiningProcessor component in the splunkd.log file will log information related to bad event breaking. The AggregatorMiningProcessor is responsible for breaking the incoming data into events and applying the props.conf settings. If there is a problem with the event breaking, such as incorrect timestamps, missing events, or merged events, the AggregatorMiningProcessor will log the error or warning messages in the splunkd.log file. The Audittrail component logs information about the audit events, such as user actions, configuration changes, and search activity. The EventBreaking component logs information about the event breaking rules, such as the LINE_BREAKER and SHOULD_LINEMERGE settings. The IndexingPipeline component logs information about the indexing pipeline, such as the parsing, routing, and indexing phases. For more information, see About Splunk Enterprise logging and [Configure event line breaking] in the Splunk documentation.
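To surface these messages, a search along these lines can be used (filtering on log_level is an optional narrowing assumption):

    index=_internal sourcetype=splunkd component=AggregatorMiningProcessor (log_level=WARN OR log_level=ERROR)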

Question #18

Which Splunk server role regulates the functioning of indexer cluster?

  • A . Indexer
  • B . Deployer
  • C . Master Node
  • D . Monitoring Console

Correct Answer: C

Explanation:

The master node is the Splunk server role that regulates the functioning of the indexer cluster. The master node coordinates the activities of the peer nodes, such as data replication, data searchability, and data recovery. The master node also manages the cluster configuration bundle and distributes it to the peer nodes. The indexer is the Splunk server role that indexes the incoming data and makes it searchable. The deployer is the Splunk server role that distributes apps and configuration updates to the search head cluster members. The monitoring console is the Splunk server role that monitors the health and performance of the Splunk deployment. For more information, see About indexer clusters and index replication in the Splunk documentation.

Question #19

When adding or rejoining a member to a search head cluster, the following error is displayed: Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.

What corrective action should be taken?

  • A . Restart the search head.
  • B . Run the splunk apply shcluster-bundle command from the deployer.
  • C . Run the clean raft command on all members of the search head cluster.
  • D . Run the splunk resync shcluster-replicated-config command on this member.

Correct Answer: D

Explanation:

When adding or rejoining a member to a search head cluster, and the following error is displayed: Error pulling configurations from the search head cluster captain; consider performing a destructive configuration resync on this search head cluster member.

The corrective action that should be taken is to run the splunk resync shcluster-replicated-config command on this member. This command will delete the existing configuration files on this member and replace them with the latest configuration files from the captain. This will ensure that the member has the same configuration as the rest of the cluster. Restarting the search head, running the splunk apply shcluster-bundle command from the deployer, or running the clean raft command on all members of the search head cluster are not the correct actions to take in this scenario. For more information, see Resolve configuration inconsistencies across cluster members in the Splunk documentation.
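The command is run locally on the affected member, for example:

    splunk resync shcluster-replicated-config

Note that this is a destructive operation: the member's local copy of the replicated configuration is discarded in favor of the captain's.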

Question #20

Which of the following commands is used to clear the KV store?

  • A . splunk clean kvstore
  • B . splunk clear kvstore
  • C . splunk delete kvstore
  • D . splunk reinitialize kvstore

Correct Answer: A

Explanation:

The splunk clean kvstore command is used to clear the KV store. This command will delete all the collections and documents in the KV store and reset it to an empty state. This command can be useful for troubleshooting KV store issues or resetting the KV store data. The splunk clear kvstore, splunk delete kvstore, and splunk reinitialize kvstore commands are not valid Splunk commands. For more information, see Use the CLI to manage the KV store in the Splunk documentation.
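A typical sequence, assuming the instance can be stopped briefly (the -local flag restricts the operation to the local instance):

    splunk stop
    splunk clean kvstore -local
    splunk start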

Question #21

Indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. There is ample CPU and memory available on the indexers.

Which of the following is most likely to improve indexing performance?

  • A . Increase the maximum number of hot buckets in indexes.conf
  • B . Increase the number of parallel ingestion pipelines in server.conf
  • C . Decrease the maximum size of the search pipelines in limits.conf
  • D . Decrease the maximum concurrent scheduled searches in limits.conf

Correct Answer: B

Explanation:

Increasing the number of parallel ingestion pipelines in server.conf is most likely to improve indexing performance when indexing is slow and real-time search results are delayed in a Splunk environment with two indexers and one search head. The parallel ingestion pipelines allow Splunk to process multiple data streams simultaneously, which increases the indexing throughput and reduces the indexing latency. Increasing the maximum number of hot buckets in indexes.conf will not improve indexing performance, but rather increase the disk space consumption and the bucket rolling time. Decreasing the maximum size of the search pipelines in limits.conf will not improve indexing performance, but rather reduce the search performance and the search concurrency. Decreasing the maximum concurrent scheduled searches in limits.conf will not improve indexing performance, but rather reduce the search capacity and the search availability. For more information, see Configure parallel ingestion pipelines in the Splunk documentation.
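The setting lives in the [general] stanza of server.conf on each indexer; the value below is illustrative and assumes spare CPU capacity, since each additional pipeline consumes additional cores:

    [general]
    parallelIngestionPipelines = 2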

Question #22

The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size.

How does this divide between files in the index?

  • A . rawdata is: 10%, tsidx is: 40%
  • B . rawdata is: 15%, tsidx is: 35%
  • C . rawdata is: 35%, tsidx is: 15%
  • D . rawdata is: 40%, tsidx is: 10%

Correct Answer: B

Explanation:

The guidance Splunk gives for estimating size on disk for syslog data is 50% of the original data size. This divides between files in the index as follows: rawdata is 15%, tsidx is 35%. The rawdata is the compressed version of the original data, which typically takes about 15% of the original data size. The tsidx is the index file that contains the time-series metadata and the inverted index, which typically takes about 35% of the original data size. The total size of the rawdata and the tsidx is about 50% of the original data size. For more information, see [Estimate your storage requirements] in the Splunk documentation.
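As a quick worked example: ingesting 100 GB/day of raw syslog would, under this guidance, consume roughly 15 GB/day for rawdata plus 35 GB/day for tsidx, or about 50 GB/day of index disk in total.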

Question #23

In an existing Splunk environment, the new index buckets that are created each day are about half the size of the incoming data. Within each bucket, about 30% of the space is used for raw data and about 70% for index files.

What additional information is needed to calculate the daily disk consumption, per indexer, if indexer clustering is implemented?

  • A . Total daily indexing volume, number of peer nodes, and number of accelerated searches.
  • B . Total daily indexing volume, number of peer nodes, replication factor, and search factor.
  • C . Total daily indexing volume, replication factor, search factor, and number of search heads.
  • D . Replication factor, search factor, number of accelerated searches, and total disk size across cluster.

Correct Answer: B

Explanation:

The additional information needed to calculate the daily disk consumption, per indexer, if indexer clustering is implemented, is the total daily indexing volume, the number of peer nodes, the replication factor, and the search factor. This information is required to estimate how much data is ingested, how many copies of raw data and searchable data are maintained, and how many indexers share the load in the cluster. The number of accelerated searches, the number of search heads, and the total disk size across the cluster are not relevant for calculating the daily disk consumption per indexer. For more information, see [Estimate your storage requirements] in the Splunk documentation.
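Putting the pieces together with assumed numbers for illustration: each ingested GB produces 0.5 GB of bucket, of which 0.15 GB is raw data (stored replication-factor times) and 0.35 GB is index files (stored search-factor times). With 100 GB/day, a replication factor of 3, a search factor of 2, and 4 peers, the cluster consumes about 100 x (0.15 x 3 + 0.35 x 2) = 115 GB/day, or roughly 29 GB/day per indexer.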

Question #24

A three-node search head cluster is skipping a large number of searches across time.

What should be done to increase scheduled search capacity on the search head cluster?

  • A . Create a job server on the cluster.
  • B . Add another search head to the cluster.
  • C . server.conf captain_is_adhoc_searchhead = true.
  • D . Change limits.conf value for max_searches_per_cpu to a higher value.

Correct Answer: D

Explanation:

Changing the limits.conf value for max_searches_per_cpu to a higher value is the best option to increase scheduled search capacity on the search head cluster when a large number of searches are skipped across time. This value determines how many concurrent searches can run on each CPU core of the search head, which in turn raises the base from which the scheduled search limit is derived. Increasing this value will allow more scheduled searches to run at the same time, which will reduce the number of skipped searches. Creating a job server on the cluster, setting captain_is_adhoc_searchhead = true in server.conf, or adding another search head to the cluster are not the best options to increase scheduled search capacity on the search head cluster. For more information, see [Configure limits.conf] in the Splunk documentation.
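A sketch of the change, made in the [search] stanza of limits.conf and distributed to all members via the deployer (the value is illustrative and assumes the members have CPU headroom):

    [search]
    max_searches_per_cpu = 2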

Question #25

The frequency in which a deployment client contacts the deployment server is controlled by what?

  • A . polling_interval attribute in outputs.conf
  • B . phoneHomeIntervalInSecs attribute in outputs.conf
  • C . polling_interval attribute in deploymentclient.conf
  • D . phoneHomeIntervalInSecs attribute in deploymentclient.conf

Correct Answer: D

Explanation:

The frequency in which a deployment client contacts the deployment server is controlled by the phoneHomeIntervalInSecs attribute in deploymentclient.conf. This attribute specifies how often the deployment client checks in with the deployment server to get updates on the apps and configurations that it should receive. The polling_interval attribute does not exist in either outputs.conf or deploymentclient.conf, and phoneHomeIntervalInSecs is not a valid outputs.conf attribute. For more information, see Configure deployment clients in the Splunk documentation.
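A minimal deploymentclient.conf sketch, with a placeholder deployment server URI and a 60-second check-in interval:

    [deployment-client]
    phoneHomeIntervalInSecs = 60

    [target-broker:deploymentServer]
    targetUri = ds.example.com:8089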

Question #26

To activate replication for an index in an indexer cluster, what attribute must be configured in indexes.conf on all peer nodes?

  • A . repFactor = 0
  • B . replicate = 0
  • C . repFactor = auto
  • D . replicate = auto

Correct Answer: C

Explanation:

To activate replication for an index in an indexer cluster, the repFactor attribute must be configured in indexes.conf on all peer nodes. This attribute specifies whether the index participates in replication. Setting the repFactor attribute to auto enables replication for the index, while the default value of 0 leaves the index unreplicated. The replicate attribute is not a valid indexes.conf setting. For more information, see Configure indexes for indexer clusters in the Splunk documentation.
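An indexes.conf sketch for a hypothetical replicated index, distributed to all peers through the master's configuration bundle:

    [my_index]
    homePath = $SPLUNK_DB/my_index/db
    coldPath = $SPLUNK_DB/my_index/colddb
    thawedPath = $SPLUNK_DB/my_index/thaweddb
    repFactor = auto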

Question #27

Which of the following clarification steps should be taken if apps are not appearing on a deployment client? (Select all that apply.)

  • A . Check serverclass.conf of the deployment server.
  • B . Check deploymentclient.conf of the deployment client.
  • C . Check the content of SPLUNK_HOME/etc/apps of the deployment server.
  • D . Search for relevant events in splunkd.log of the deployment server.

Correct Answer: A, B, D

Explanation:

The following clarification steps should be taken if apps are not appearing on a deployment client:

Check serverclass.conf of the deployment server. This file defines the server classes and the apps and configurations that they should receive from the deployment server. Make sure that the deployment client belongs to the correct server class and that the server class has the desired apps and configurations.

Check deploymentclient.conf of the deployment client. This file specifies the deployment server that the deployment client contacts and the client name that it uses. Make sure that the deployment client is pointing to the correct deployment server and that the client name matches the server class criteria.

Search for relevant events in splunkd.log of the deployment server. This file contains information about the deployment server activities, such as sending apps and configurations to the deployment clients, detecting client check-ins, and logging any errors or warnings. Look for any events that indicate a problem with the deployment server or the deployment client.

Checking the content of SPLUNK_HOME/etc/apps of the deployment server is not a necessary clarification step, as this directory does not contain the apps and configurations that are distributed to the deployment clients. The apps and configurations for the deployment server are stored in SPLUNK_HOME/etc/deployment-apps. For more information, see Configure deployment server and clients in the Splunk documentation.
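Two CLI checks on the deployment server can also speed up this investigation: one lists the clients that have phoned home, the other forces the server to re-read serverclass.conf after changes:

    splunk list deploy-clients
    splunk reload deploy-server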

Question #28

What is the minimum reference server specification for a Splunk indexer?

  • A . 12 CPU cores, 12GB RAM, 800 IOPS
  • B . 16 CPU cores, 16GB RAM, 800 IOPS
  • C . 24 CPU cores, 16GB RAM, 1200 IOPS
  • D . 28 CPU cores, 32GB RAM, 1200 IOPS

Correct Answer: A

Explanation:

The minimum reference server specification for a Splunk indexer is 12 CPU cores, 12GB RAM, and 800 IOPS. This specification is based on the assumption that the indexer will handle an average indexing volume of 100GB per day, with a peak of 300GB per day, and a typical search load of 1 concurrent search per 1GB of indexing volume. The other specifications are either higher or lower than the minimum requirement. For more information, see [Reference hardware] in the Splunk documentation.

Question #29

Which of the following security options must be explicitly configured (i.e. which options are not enabled by default)?

  • A . Data encryption between Splunk Web and splunkd.
  • B . Certificate authentication between forwarders and indexers.
  • C . Certificate authentication between Splunk Web and search head.
  • D . Data encryption for distributed search between search heads and indexers.

Correct Answer: B

Explanation:

The following security option must be explicitly configured, as it is not enabled by default: certificate authentication between forwarders and indexers. This option allows the forwarders and indexers to verify each other's identity using SSL certificates, which prevents unauthorized data transmission or spoofing attacks. This option is not enabled by default, as it requires the administrator to generate and distribute the certificates for the forwarders and indexers. For more information, see [Secure the communication between forwarders and indexers] in the Splunk documentation.

The following security options are enabled by default:

Data encryption between Splunk Web and splunkd. This option encrypts the communication between the Splunk Web interface and the splunkd daemon using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.

Certificate authentication between Splunk Web and search head. This option allows the Splunk Web interface and the search head to verify each other's identity using SSL certificates, which prevents unauthorized access or spoofing attacks. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [About securing Splunk Enterprise with SSL] in the Splunk documentation.

Data encryption for distributed search between search heads and indexers. This option encrypts the communication between the search heads and the indexers using SSL, which prevents data interception or tampering. This option is enabled by default, as Splunk provides a self-signed certificate for this purpose. For more information, see [Secure your distributed search environment] in the Splunk documentation.
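A minimal sketch of explicit certificate authentication between a forwarder and an indexer; the certificate paths and server name are placeholders, the certificates are assumed to exist already, and attribute names can vary somewhat across Splunk versions:

    # inputs.conf on the indexer
    [splunktcp-ssl:9997]

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/mycerts/indexer.pem
    requireClientCert = true

    # outputs.conf on the forwarder
    [tcpout:ssl_group]
    server = idx1.example.com:9997
    clientCert = $SPLUNK_HOME/etc/auth/mycerts/forwarder.pem
    sslVerifyServerCert = true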

Question #30

Which of the following artifacts are included in a Splunk diag file? (Select all that apply.)

  • A . OS settings.
  • B . Internal logs.
  • C . Customer data.
  • D . Configuration files.

Correct Answer: B, D

Explanation:

The following artifacts are included in a Splunk diag file:

Internal logs. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.

Configuration files. These are the files that Splunk uses to configure various aspects of its operation, such as server.conf, indexes.conf, props.conf, transforms.conf, and others. These files can help understand Splunk settings and behavior.

The following artifacts are not included in a Splunk diag file:

OS settings. These are the settings of the operating system that Splunk runs on, such as the kernel version, the memory size, the disk space, and others. These settings are not part of the Splunk diag file, but they can be collected separately using the diag --os option.

Customer data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.
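A diag is generated from the CLI; the --exclude flag takes a glob and can keep sensitive files out of the archive (the pattern below is just an example):

    splunk diag --exclude "*/passwd"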

Question #31

Which command will permanently decommission a peer node operating in an indexer cluster?

  • A . splunk stop -f
  • B . splunk offline -f
  • C . splunk offline --enforce-counts
  • D . splunk decommission --enforce counts

Correct Answer: C

Explanation:

The splunk offline --enforce-counts command will permanently decommission a peer node operating in an indexer cluster. This command takes the peer down permanently; the master waits until the replication factor and search factor are met on the remaining peers before the decommission completes. This command should be used when the peer node is no longer needed or is being replaced by another node. The splunk stop -f command will stop the Splunk service on the peer node, but it will not decommission it from the cluster. The splunk offline -f command will take the peer node offline temporarily, without enforcing the replication and search factors. The splunk decommission --enforce counts command is not a valid Splunk command. For more information, see Remove a peer node from an indexer cluster in the Splunk documentation.
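The command is run on the peer being retired, for example:

    splunk offline --enforce-counts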

Question #32

Which CLI command converts a Splunk instance to a license slave?

  • A . splunk add licenses
  • B . splunk list licenser-slaves
  • C . splunk edit licenser-localslave
  • D . splunk list licenser-localslave

Correct Answer: C

Explanation:

The splunk edit licenser-localslave command is used to convert a Splunk instance to a license slave. This command will configure the Splunk instance to contact a license master and receive a license from it. This command should be used when the Splunk instance is part of a distributed deployment and needs to share a license pool with other instances. The splunk add licenses command is used to add a license to a Splunk instance, not to convert it to a license slave. The splunk list licenser-slaves command is used to list the license slaves that are connected to a license master, not to convert a Splunk instance to a license slave. The splunk list licenser-localslave command is used to list the license master that a license slave is connected to, not to convert a Splunk instance to a license slave. For more information, see Configure license slaves in the Splunk documentation.
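For example, pointing an instance at a license master (the URI is a placeholder):

    splunk edit licenser-localslave -master_uri https://lm.example.com:8089
    splunk restart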

Question #33

Splunk Enterprise platform instrumentation refers to data that the Splunk Enterprise deployment logs in the _introspection index.

Which of the following logs are included in this index? (Select all that apply.)

  • A . audit.log
  • B . metrics.log
  • C . disk_objects.log
  • D . resource_usage.log

Correct Answer: C, D

Explanation:

The following logs are included in the _introspection index, which contains data that the Splunk Enterprise deployment logs for platform instrumentation:

disk_objects.log. This log contains information about the disk objects that Splunk creates and manages, such as buckets, indexes, and files. This log can help monitor the disk space usage and the bucket lifecycle.

resource_usage.log. This log contains information about the resource usage of Splunk processes, such as CPU, memory, disk, and network. This log can help monitor the Splunk performance and identify any resource bottlenecks.

The following logs are not included in the _introspection index, but rather in the _internal index, which contains data that Splunk generates for internal logging:

audit.log. This log contains information about the audit events that Splunk records, such as user actions, configuration changes, and search activity. This log can help audit the Splunk operations and security.

metrics.log. This log contains information about the performance metrics that Splunk collects, such as data throughput, data latency, search concurrency, and search duration. This log can help measure the Splunk performance and efficiency. For more information, see About Splunk Enterprise logging and [About the _introspection index] in the Splunk documentation.
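These platform instrumentation events can be inspected directly; filtering on the source is one convenient way to isolate the resource usage data:

    index=_introspection source=*resource_usage.log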

Question #34

Which of the following can a Splunk diag contain?

  • A . Search history, Splunk users and their roles, running processes, indexed data
  • B . Server specs, current open connections, internal Splunk log files, index listings
  • C . KV store listings, internal Splunk log files, search peer bundles listings, indexed data
  • D . Splunk platform configuration details, Splunk users and their roles, current open connections, index listings

Correct Answer: B

Explanation:

The following artifacts are included in a Splunk diag file:

Server specs. These are the specifications of the server that Splunk runs on, such as the CPU model, the memory size, the disk space, and the network interface. These specs can help understand the Splunk hardware requirements and performance.

Current open connections. These are the connections that Splunk has established with other Splunk instances or external sources, such as forwarders, indexers, search heads, license masters, deployment servers, and data inputs. These connections can help understand the Splunk network topology and communication.

Internal Splunk log files. These are the log files that Splunk generates to record its own activities, such as splunkd.log, metrics.log, audit.log, and others. These logs can help troubleshoot Splunk issues and monitor Splunk performance.

Index listings. These are the listings of the indexes that Splunk has created and configured, such as the index name, the index location, the index size, and the index attributes. These listings can help understand the Splunk data management and retention.

The following artifacts are not included in a Splunk diag file:

Search history. This is the history of the searches that Splunk has executed, such as the search query, the search time, the search results, and the search user. This history is not part of the Splunk diag file, but it can be accessed from the Splunk Web interface or the audit.log file.

Splunk users and their roles. These are the users that Splunk has created and assigned roles to, such as the user name, the user password, the user role, and the user capabilities. These users and roles are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the authentication.conf and authorize.conf files.

KV store listings. These are the listings of the KV store collections and documents that Splunk has created and stored, such as the collection name, the collection schema, the document ID, and the document fields. These listings are not part of the Splunk diag file, but they can be accessed from the Splunk Web interface or the mongod.log file.

Indexed data. These are the data that Splunk indexes and makes searchable, such as the rawdata and the tsidx files. These data are not part of the Splunk diag file, as they may contain sensitive or confidential information. For more information, see Generate a diagnostic snapshot of your Splunk Enterprise deployment in the Splunk documentation.

Question #35

Which of the following are true statements about Splunk indexer clustering?

  • A . All peer nodes must run exactly the same Splunk version.
  • B . The master node must run the same or a later Splunk version than search heads.
  • C . The peer nodes must run the same or a later Splunk version than the master node.
  • D . The search head must run the same or a later Splunk version than the peer nodes.

Correct Answer: A, D

Explanation:

The following statements are true about Splunk indexer clustering:

All peer nodes must run exactly the same Splunk version. This is a requirement for indexer clustering, as different Splunk versions may have different data formats or features that are incompatible with each other. All peer nodes must run the same Splunk version as the master node and the search heads that connect to the cluster.

The search head must run the same or a later Splunk version than the peer nodes. This is a recommendation for indexer clustering, as a newer Splunk version may have new features or bug fixes that improve the search functionality or performance. The search head should not run an older Splunk version than the peer nodes, as this may cause search errors or failures.

The following statements are false about Splunk indexer clustering:

The master node must run the same or a later Splunk version than the search heads. This is not a requirement or a recommendation for indexer clustering, as the master node does not participate in the search process. The master node should run the same Splunk version as the peer nodes, as this ensures the cluster compatibility and functionality.

The peer nodes must run the same or a later Splunk version than the master node. This is not a requirement or a recommendation for indexer clustering, as the peer nodes do not coordinate the cluster activities. The peer nodes should run the same Splunk version as the master node, as this ensures the cluster compatibility and functionality. For more information, see [About indexer clusters and index replication] and [Upgrade an indexer cluster] in the Splunk documentation.

Question #36

A customer plans to ingest 600 GB of data per day into Splunk. They will have six concurrent users, and they also want high data availability and high search performance. The customer is concerned about cost and wants to spend the minimum amount on the hardware for Splunk.

How many indexers are recommended for this deployment?

  • A . Two indexers not in a cluster, assuming users run many long searches.
  • B . Three indexers not in a cluster, assuming a long data retention period.
  • C . Two indexers clustered, assuming high availability is the greatest priority.
  • D . Two indexers clustered, assuming a high volume of saved/scheduled searches.

Correct Answer: C

Explanation:

Two indexers clustered is the recommended deployment for a customer who plans to ingest 600 GB of data per day into Splunk, has six concurrent users, and wants high data availability and high search performance. This deployment will provide enough indexing capacity and search concurrency for the customer’s needs, while also ensuring data replication and searchability across the cluster. The customer can also save on the hardware cost by using only two indexers. Two indexers not in a cluster will not provide high data availability, as there is no data replication or failover. Three indexers not in a cluster will provide more indexing capacity and search concurrency, but also more hardware cost and no data availability. The customer’s data retention period, number of long searches, or volume of saved/scheduled searches are not relevant for determining the number of indexers. For more information, see [Reference hardware] and [About indexer clusters and index replication] in the Splunk documentation.

Question #37

To reduce the captain’s work load in a search head cluster, what setting will prevent scheduled searches from running on the captain?

  • A . adhoc_searchhead = true (on all members)
  • B . adhoc_searchhead = true (on the current captain)
  • C . captain_is_adhoc_searchhead = true (on all members)
  • D . captain_is_adhoc_searchhead = true (on the current captain)

Correct Answer: D

Explanation:

To reduce the captain's work load in a search head cluster, the setting that will prevent scheduled searches from running on the captain is captain_is_adhoc_searchhead = true (on the current captain). This setting will designate the current captain as an ad hoc search head, which means that it will not run any scheduled searches, but only ad hoc searches initiated by users. This will reduce the captain's work load and improve the search head cluster performance. The adhoc_searchhead = true (on all members) setting will designate all search head cluster members as ad hoc search heads, which means that none of them will run any scheduled searches, which is not desirable. The adhoc_searchhead = true (on the current captain) setting will have no effect, as this setting is ignored by the captain. The captain_is_adhoc_searchhead = true (on all members) setting will have no effect, as this setting is only applied to the current captain. For more information, see Configure the captain as an ad hoc search head in the Splunk documentation.
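The setting is made in server.conf (a sketch):

    [shclustering]
    captain_is_adhoc_searchhead = true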

Question #38

At which default interval does metrics.log generate a periodic report regarding license utilization?

  • A . 10 seconds
  • B . 30 seconds
  • C . 60 seconds
  • D . 300 seconds

Correct Answer: C

Explanation:

The default interval at which metrics.log generates a periodic report regarding license utilization is 60 seconds. This report contains information about the license usage and quota for each Splunk instance, as well as the license pool and stack. The report is generated every 60 seconds by default, although the interval is configurable. The other intervals (10 seconds, 30 seconds, and 300 seconds) are not the default values. For more information, see About metrics.log in the Splunk documentation.

Question #39

Which of the following is a good practice for a search head cluster deployer?

  • A . The deployer only distributes configurations to search head cluster members when they “phone home”.
  • B . The deployer must be used to distribute non-replicable configurations to search head cluster members.
  • C . The deployer must distribute configurations to search head cluster members to be valid configurations.
  • D . The deployer only distributes configurations to search head cluster members with splunk apply shcluster-bundle.

Correct Answer: B

Explanation:

The following is a good practice for a search head cluster deployer: The deployer must be used to distribute non-replicable configurations to search head cluster members. Non-replicable configurations are the configurations that are not replicated by the search factor, such as the apps and the server.conf settings. The deployer is the Splunk server role that distributes these configurations to the search head cluster members, ensuring that they have the same configuration. The deployer does not only distribute configurations to search head cluster members when they “phone home”, as this would cause configuration inconsistencies and delays. The deployer does not distribute configurations to search head cluster members to be valid configurations, as this implies that the configurations are invalid without the deployer. The deployer does not only distribute configurations to search head cluster members with splunk apply shcluster-bundle, as this would require manual intervention by the administrator. For more information, see Use the deployer to distribute apps and configuration updates in the Splunk documentation.

Question #40

A new Splunk customer is using syslog to collect data from their network devices on port 514.

What is the best practice for ingesting this data into Splunk?

  • A . Configure syslog to send the data to multiple Splunk indexers.
  • B . Use a Splunk indexer to collect a network input on port 514 directly.
  • C . Use a Splunk forwarder to collect the input on port 514 and forward the data.
  • D . Configure syslog to write logs and use a Splunk forwarder to collect the logs.

Correct Answer: D

Explanation:

The best practice for ingesting syslog data from network devices on port 514 into Splunk is to configure syslog to write logs and use a Splunk forwarder to collect the logs. This practice will ensure that the data is reliably collected and forwarded to Splunk, without losing any data or overloading the Splunk indexer. Configuring syslog to send the data to multiple Splunk indexers will not guarantee data reliability, as syslog is a UDP protocol that does not provide acknowledgment or delivery confirmation. Using a Splunk indexer to collect a network input on port 514 directly will not provide data reliability or load balancing, as the indexer may not be able to handle the incoming data volume or distribute it to other indexers. Using a Splunk forwarder to collect the input on port 514 and forward the data will not provide data reliability, as the forwarder may not be able to receive the data from syslog or buffer it in case of network issues. For more information, see [Get data from TCP and UDP ports] and [Best practices for syslog data] in the Splunk documentation.
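A sketch of this pattern, assuming a syslog daemon such as rsyslog writes per-device files under /var/log/remote and a universal forwarder monitors them; the directory and index name are hypothetical:

    # inputs.conf on the universal forwarder
    [monitor:///var/log/remote/*.log]
    sourcetype = syslog
    index = network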

Question #41

Which Splunk internal index contains license-related events?

  • A . _audit
  • B . _license
  • C . _internal
  • D . _introspection

Correct Answer: C

Explanation:

The _internal index contains license-related events, such as the license usage, the license quota, the license pool, the license stack, and the license violations. These events are logged by the license manager in the license_usage.log file, which is part of the _internal index. The _audit index contains audit events, such as user actions, configuration changes, and search activity. These events are logged by the audit trail in the audit.log file, which is part of the _audit index. The _license index does not exist in Splunk, as the license-related events are stored in the _internal index. The _introspection index contains platform instrumentation data, such as the resource usage, the disk objects, the search activity, and the data ingestion. These data are logged by the introspection generator in various log files, such as resource_usage.log, disk_objects.log, search_activity.log, and data_ingestion.log, which are part of the _introspection index. For more information, see About Splunk Enterprise logging and [About the _internal index] in the Splunk documentation.
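For example, daily license consumption by pool can be charted with a search like the following; type=Usage selects the usage events, and the b field holds the bytes consumed:

    index=_internal source=*license_usage.log type=Usage
    | timechart span=1d sum(b) AS bytes_used BY pool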

Question #42

Which of the following statements describe a Search Head Cluster (SHC) captain? (Select all that apply.)

  • A . Is the job scheduler for the entire SHC.
  • B . Manages alert action suppressions (throttling).
  • C . Synchronizes the member list with the KV store primary.
  • D . Replicates the SHC’s knowledge bundle to the search peers.

Correct Answer: A, D

Explanation:

The following statements describe a search head cluster captain:

Is the job scheduler for the entire search head cluster. The captain is responsible for scheduling and dispatching the searches that run on the search head cluster, as well as coordinating the search results from the search peers. The captain also ensures that the scheduled searches are balanced across the search head cluster members and that the search concurrency limits are enforced.

Replicates the search head cluster's knowledge bundle to the search peers. The captain is responsible for creating and distributing the knowledge bundle to the search peers, which contains the knowledge objects that are required for the searches. The captain also ensures that the knowledge bundle is consistent and up-to-date across the search head cluster and the search peers.

The following statements do not describe a search head cluster captain:

Manages alert action suppressions (throttling). Alert action suppressions are the settings that prevent an alert from triggering too frequently or too many times. These settings are managed by the search head that runs the alert, not by the captain. The captain does not have any special role in managing alert action suppressions.

Synchronizes the member list with the KV store primary. The member list is the list of search head cluster members that are active and available. The KV store primary is the search head cluster member that is responsible for replicating the KV store data to the other members. Neither role is managed by the captain: the KV store elects its own primary through its internal replication mechanism, separately from the SHC captain election. For more information, see [About the captain and the captain election] and [About KV store and search head clusters] in the Splunk documentation.
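As a practical aside, the member currently holding captaincy can be checked from any cluster member with the splunk show shcluster-status CLI command; the credentials below are placeholders:

splunk show shcluster-status -auth admin:changeme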

Question #43

Before users can use a KV store, an admin must create a collection. Where is a collection defined?

  • A . kvstore.conf
  • B . collection.conf
  • C . collections.conf
  • D . kvcollections.conf

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

A collection is defined in the collections.conf file, which specifies the name, schema, and permissions of the collection. KV store server settings, such as the port, SSL options, and replication settings, are configured in the [kvstore] stanza of server.conf; kvstore.conf is not a standard Splunk configuration file, and neither are the other two options.
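A minimal sketch of a collection definition, assuming a hypothetical app and field names:

# collections.conf in an app, e.g. SPLUNK_HOME/etc/apps/myapp/local/
[mycollection]
field.first_name = string
field.last_name = string
field.age = number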

Question #44

Which search will show all deployment client messages from the client (UF)?

  • A . index=_audit component=DC* host=<ds> | stats count by message
  • B . index=_audit component=DC* host=<uf> | stats count by message
  • C . index=_internal component=DC* host=<uf> | stats count by message
  • D . index=_internal component=DS* host=<ds> | stats count by message

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

The index=_internal component=DC* host=<uf> search will show all deployment client messages from the universal forwarder. The component field indicates the type of Splunk component that generated the message, and the host field indicates the host name of the machine that sent the message. The index=_audit component=DC* host=<uf> search will not return any results, because deployment client messages are not stored in the _audit index. The index=_internal component=DS* host=<ds> search will show deployment server messages from the deployment server, not the client. The index=_audit component=DC* host=<ds> search will also not return any results, for the same reason as above.

Question #45

To optimize the distribution of primary buckets, when does primary rebalancing automatically occur? (Select all that apply.)

  • A . Rolling restart completes.
  • B . Master node rejoins the cluster.
  • C . Captain joins or rejoins cluster.
  • D . A peer node joins or rejoins the cluster.

Reveal Solution Hide Solution

Correct Answer: A, B, D
A, B, D

Explanation:

Primary rebalancing automatically occurs when a rolling restart completes, a master node rejoins the cluster, or a peer node joins or rejoins the cluster. These events can cause the distribution of primary buckets to become unbalanced, so the master node will initiate a rebalancing process to ensure that each peer node has roughly the same number of primary buckets. Primary rebalancing does not occur when a captain joins or rejoins the cluster, because the captain is a search head cluster component, not an indexer cluster component. The captain is responsible for search head clustering, not indexer clustering.

Question #46

Which search head cluster component is responsible for pushing knowledge bundles to search peers, replicating configuration changes to search head cluster members, and scheduling jobs across the search head cluster?

  • A . Master
  • B . Captain
  • C . Deployer
  • D . Deployment server

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

The captain is the search head cluster component that is responsible for pushing knowledge bundles to search peers, replicating configuration changes to search head cluster members, and scheduling jobs across the search head cluster. The captain is elected from among the search head cluster members and performs these tasks in addition to serving search requests. The master is the indexer cluster component that is responsible for managing the replication and availability of data across the peer nodes. The deployer is the standalone instance that is responsible for distributing apps and other configurations to the search head cluster members. The deployment server is the instance that is responsible for distributing apps and other configurations to the deployment clients, such as forwarders.

Question #47

Configurations from the deployer are merged into which location on the search head cluster member?

  • A . SPLUNK_HOME/etc/system/local
  • B . SPLUNK_HOME/etc/apps/APP_HOME/local
  • C . SPLUNK_HOME/etc/apps/search/default
  • D . SPLUNK_HOME/etc/apps/APP_HOME/default

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Configurations from the deployer are merged into the SPLUNK_HOME/etc/apps/APP_HOME/local directory on the search head cluster member. The deployer distributes apps and other configurations to the search head cluster members in the form of a configuration bundle. The configuration bundle contains the contents of the SPLUNK_HOME/etc/shcluster/apps directory on the deployer. When a search head cluster member receives the configuration bundle, it merges the contents of the bundle into its own SPLUNK_HOME/etc/apps directory. The configurations in the local directory take precedence over the configurations in the default directory. The SPLUNK_HOME/etc/system/local directory is used for system-level configurations, not app-level configurations. The SPLUNK_HOME/etc/apps/search/default directory is used for the default configurations of the search app, not the configurations from the deployer.
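The merge happens when the bundle is pushed from the deployer with the apply shcluster-bundle command; in this sketch the target URI (which can be any cluster member) and the credentials are placeholders:

splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme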

Question #48

When Splunk indexes data in a non-clustered environment, what kind of files does it create by default?

  • A . Index and .tsidx files.
  • B . Rawdata and index files.
  • C . Compressed and .tsidx files.
  • D . Compressed and meta data files.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

When Splunk indexes data in a non-clustered environment, it creates index and .tsidx files by default. The index files contain the raw data that Splunk has ingested, stored in compressed form in the rawdata journal. The .tsidx files contain the time-series index that maps indexed terms to the locations of the corresponding events in the raw data. The rawdata and index files option is not the correct terminology for the files that Splunk creates. The compressed and .tsidx files option is partially correct, but compressed is not the proper name for the index files. The compressed and meta data files option is also partially correct, but meta data is not the proper name for the .tsidx files.
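As a rough illustration (the bucket and file names below are invented examples, and exact contents vary by Splunk version), listing a warm bucket directory shows both kinds of files:

$ ls SPLUNK_HOME/var/lib/splunk/defaultdb/db/db_1568142251_1568142245_3/
1568142251-1568142245-421.tsidx   <- time-series index file
Hosts.data  Sources.data  SourceTypes.data
bloomfilter
rawdata/                          <- compressed journal holding the raw events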

Question #49

How does IT Service Intelligence (ITSI) impact the planning of a Splunk deployment?

  • A . ITSI requires a dedicated deployment server.
  • B . The amount of users using ITSI will not impact performance.
  • C . ITSI in a Splunk deployment does not require additional hardware resources.
  • D . Depending on the Key Performance Indicators that are being tracked, additional infrastructure may be needed.

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

ITSI can impact the planning of a Splunk deployment depending on the Key Performance Indicators (KPIs) that are being tracked. KPIs are metrics that measure the health and performance of IT services and business processes. ITSI collects, analyzes, and displays KPI data from various data sources in Splunk. Depending on the number, frequency, and complexity of the KPIs, additional infrastructure may be needed to support the data ingestion, processing, and visualization. ITSI does not require a dedicated deployment server, and the number of users does affect performance. ITSI in a Splunk deployment does require additional hardware resources, such as CPU, memory, and disk space, to run the ITSI components and apps.

Question #50

In the deployment planning process, when should a person identify who gets to see network data?

  • A . Deployment schedule
  • B . Topology diagramming
  • C . Data source inventory
  • D . Data policy definition

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

In the deployment planning process, a person should identify who gets to see network data in the data policy definition step. This step involves defining the data access policies and permissions for different users and roles in Splunk. The deployment schedule step involves defining the timeline and milestones for the deployment project. The topology diagramming step involves creating a visual representation of the Splunk architecture and components. The data source inventory step involves identifying and documenting the data sources and types that will be ingested by Splunk.

Question #51

The KV store forms its own cluster within a SHC.

What is the maximum number of SHC members the KV store will form?

  • A . 25
  • B . 50
  • C . 100
  • D . Unlimited

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

The KV store forms its own cluster within a SHC, with a maximum of 50 members. The KV store cluster is the subset of SHC members that replicate and store the KV store data. Because the KV store is backed by MongoDB, the 50-member ceiling corresponds to the maximum size of a MongoDB replica set. Neither 25 nor 100 is a documented limit, and the membership is not unlimited.
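The current KV store cluster membership and each member's replication status can be inspected from a member with the splunk show kvstore-status CLI command; the credentials below are placeholders:

splunk show kvstore-status -auth admin:changeme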

Question #52

In search head clustering, which of the following methods can you use to transfer captaincy to a different member? (Select all that apply.)

  • A . Use the Monitoring Console.
  • B . Use the Search Head Clustering settings menu from Splunk Web on any member.
  • C . Run the splunk transfer shcluster-captain command from the current captain.
  • D . Run the splunk transfer shcluster-captain command from the member you would like to become the captain.

Reveal Solution Hide Solution

Correct Answer: B, D
B, D

Explanation:

In search head clustering, there are two methods to transfer captaincy to a different member. One method is to use the Search Head Clustering settings menu from Splunk Web on any member. This method allows the user to select a specific member to become the new captain, or to let Splunk choose the best candidate. The other method is to run the splunk transfer shcluster-captain command from the member that the user wants to become the new captain. This method requires the user to know the name of the target member and to have access to the CLI of that member. Using the Monitoring Console is not a method to transfer captaincy, because the Monitoring Console does not have an option to change the captain. Running the splunk transfer shcluster-captain command from the current captain is not a method to transfer captaincy, because this command will fail with an error message.
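A sketch of the CLI method, with a placeholder URI and credentials; the exact semantics of the -mgmt_uri argument should be verified against the documentation for your Splunk version:

splunk transfer shcluster-captain -mgmt_uri https://sh2.example.com:8089 -auth admin:changeme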

Question #53

Which command is used for thawing the archive bucket?

  • A . Splunk collect
  • B . Splunk convert
  • C . Splunk rebuild
  • D . Splunk dbinspect

Reveal Solution Hide Solution

Correct Answer: C
C

Explanation:

The splunk rebuild command is used for thawing the archive bucket. Thawing is the process of restoring frozen data back to Splunk for searching. Frozen data is data that has been archived or deleted from Splunk after reaching the end of its retention period. To thaw a bucket, the user needs to copy the bucket from the archive location to the thaweddb directory under SPLUNK_HOME/var/lib/splunk and run the splunk rebuild command to rebuild the .tsidx files for the bucket. The splunk collect command is used for collecting diagnostic data from a Splunk instance. The splunk convert command is used for converting configuration files from one format to another. The splunk dbinspect command is used for inspecting the status and properties of the buckets in an index.
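A sketch of the thawing procedure, assuming the default index and an invented bucket name:

# 1. Copy the frozen bucket into the index's thaweddb directory (paths are examples)
cp -r /archive/frozendb/db_1568142251_1568142245_3 SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/
# 2. Rebuild the .tsidx and associated index files for the thawed bucket
splunk rebuild SPLUNK_HOME/var/lib/splunk/defaultdb/thaweddb/db_1568142251_1568142245_3
# 3. Restart Splunk so the thawed bucket becomes searchable
splunk restart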

Question #54

A Splunk instance has the following settings in SPLUNK_HOME/etc/system/local/server.conf:

[clustering]

mode = master

replication_factor = 2

pass4SymmKey = password123

Which of the following statements describe this Splunk instance? (Select all that apply.)

  • A . This is a multi-site cluster.
  • B . This cluster’s search factor is 2.
  • C . This Splunk instance needs to be restarted.
  • D . This instance is missing the master_uri attribute.

Reveal Solution Hide Solution

Correct Answer: C, D
C, D

Explanation:

The Splunk instance with the given settings in SPLUNK_HOME/etc/system/local/server.conf is missing the master_uri attribute and needs to be restarted. The master_uri attribute specifies the host name and port number of the master node, and it is what the cluster members and search heads use to communicate with the master. Without this attribute, the cluster cannot function properly. The Splunk instance also needs to be restarted for the changes in the server.conf file to take effect. The replication_factor setting determines how many copies of each bucket are maintained across the peer nodes. The search factor is a separate setting that determines how many searchable copies of each bucket are maintained across the peer nodes. The search factor is not specified in the given settings, so it takes its default value of 2. This is not a multi-site cluster, because the site attribute is not specified in the clustering stanza. A multi-site cluster is a cluster that spans multiple geographic locations, or sites, and has separate replication and search factors for each site.
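For contrast, a peer node's stanza does include master_uri; a minimal sketch with placeholder values:

# server.conf on a peer node
[clustering]
mode = slave
master_uri = https://master.example.com:8089
pass4SymmKey = password123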

Question #55

Which of the following describe migration from single-site to multisite index replication?

  • A . A master node is required at each site.
  • B . Multisite policies apply to new data only.
  • C . Single-site buckets instantly receive the multisite policies.
  • D . Multisite total values should not exceed any single-site factors.

Reveal Solution Hide Solution

Correct Answer: B
B

Explanation:

Migration from single-site to multisite index replication only affects new data, not existing data. Multisite policies apply to new data only, meaning that data ingested after the migration will follow the multisite replication and search factors. Existing data, or data that was ingested before the migration, retains the single-site policies unless the buckets are manually converted to multisite buckets. Single-site buckets do not instantly receive the multisite policies, nor do they automatically convert to multisite buckets. Multisite total values can exceed any single-site factors, as long as they do not exceed the number of peer nodes in the cluster. A master node is not required at each site; only one master node is needed for the entire cluster.
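A minimal sketch of the master node settings after such a migration; the site names and factor values are illustrative assumptions:

# server.conf on the master node
[general]
site = site1

[clustering]
mode = master
multisite = true
available_sites = site1,site2
site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2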

Question #56

What does setting site=site0 on all Search Head Cluster members do in a multi-site indexer cluster?

  • A . Disables search site affinity.
  • B . Sets all members to dynamic captaincy.
  • C . Enables multisite search artifact replication.
  • D . Enables automatic search site affinity discovery.

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

Setting site=site0 on all Search Head Cluster members disables search site affinity. Search site affinity is a feature that allows search heads to preferentially search the peer nodes that are in the same site as the search head, to reduce network latency and bandwidth consumption. By setting site=site0, which is a special value that indicates no site, the search heads will search all peer nodes regardless of their site. Setting site=site0 does not set all members to dynamic captaincy, enable multisite search artifact replication, or enable automatic search site affinity discovery. Dynamic captaincy is a feature that allows any member to become the captain, and it is enabled by default. Multisite search artifact replication is a feature that allows search artifacts to be replicated across sites, and it is enabled by setting site_replication_factor to a value greater than 1. Automatic search site affinity discovery is a feature that allows search heads to automatically determine their site based on the network latency to the peer nodes, and it is enabled by setting site=auto.
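A minimal sketch of the relevant settings on each SHC member; the master URI and key are placeholders:

# server.conf on each Search Head Cluster member
[general]
site = site0

[clustering]
mode = searchhead
master_uri = https://master.example.com:8089
multisite = true
pass4SymmKey = password123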

Question #57

Which of the following is a way to exclude search artifacts when creating a diag?

  • A . SPLUNK_HOME/bin/splunk diag --exclude
  • B . SPLUNK_HOME/bin/splunk diag --debug --refresh
  • C . SPLUNK_HOME/bin/splunk diag --disable=dispatch
  • D . SPLUNK_HOME/bin/splunk diag --filter-searchstrings

Reveal Solution Hide Solution

Correct Answer: A
A

Explanation:

The splunk diag --exclude command is a way to exclude search artifacts when creating a diag. A diag is a diagnostic snapshot of a Splunk instance that contains various logs, configurations, and other information. Search artifacts are temporary files that are generated by search jobs and stored in the dispatch directory. Search artifacts can be excluded from the diag by using the --exclude option and specifying the dispatch directory. The splunk diag --debug --refresh command is a way to create a diag with debug logging enabled and refresh the diag if it already exists. The splunk diag --disable=dispatch command is not a valid command, because the --disable option does not exist. The splunk diag --filter-searchstrings command is a way to filter out sensitive information from the search strings in the diag.
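For example, excluding dispatch (search artifact) directories from the diag might look like the following; the glob pattern is illustrative and should be adapted to the actual directory layout:

splunk diag --exclude "*/dispatch/*"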

Question #58

Which of the following statements describe licensing in a clustered Splunk deployment? (Select all that apply.)

  • A . Free licenses do not support clustering.
  • B . Replicated data does not count against licensing.
  • C . Each cluster member requires its own clustering license.
  • D . Cluster members must share the same license pool and license master.

Reveal Solution Hide Solution

Correct Answer: A, B
A, B

Explanation:

The following statements describe licensing in a clustered Splunk deployment: free licenses do not support clustering, and replicated data does not count against licensing. Free licenses are limited to 500 MB of daily indexing volume and do not allow distributed searching or clustering. To enable clustering, a license with a higher volume limit and distributed features is required. Replicated data is data that is copied from one peer node to another for the purpose of high availability and load balancing. Replicated data does not count against licensing, because it is not new data that is ingested by Splunk. Only the original data that is indexed by the peer nodes counts against licensing. Each cluster member does not require its own clustering license; license volume is shared among the cluster members. Cluster members must share the same license pool and license master, because the license master is responsible for distributing license volume to the cluster members and enforcing the license limits.

Question #59

When planning a search head cluster, which of the following is true?

  • A . All search heads must use the same operating system.
  • B . All search heads must be members of the cluster (no standalone search heads).
  • C . The search head captain must be assigned to the largest search head in the cluster.
  • D . All indexers must belong to the underlying indexer cluster (no standalone indexers).

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

When planning a search head cluster, the following statement is true: All indexers must belong to the underlying indexer cluster (no standalone indexers). A search head cluster is a group of search heads that share configurations, apps, and search jobs. A search head cluster requires an indexer cluster as its data source, meaning that all indexers that provide data to the search head cluster must be members of the same indexer cluster. Standalone indexers, or indexers that are not part of an indexer cluster, cannot be used as data sources for a search head cluster. All search heads do not have to use the same operating system, as long as they are compatible with the Splunk version and the indexer cluster. All search heads do not have to be members of the cluster, as standalone search heads can also search the indexer cluster, but they will not have the benefits of configuration replication and load balancing. The search head captain does not have to be assigned to the largest search head in the cluster, as the captain is dynamically elected from among the cluster members based on various criteria, such as CPU load, network latency, and search load.

Question #60

In which phase of the Splunk Enterprise data pipeline are indexed extraction configurations processed?

  • A . Input
  • B . Search
  • C . Parsing
  • D . Indexing

Reveal Solution Hide Solution

Correct Answer: D
D

Explanation:

Indexed extraction configurations are processed in the indexing phase of the Splunk Enterprise data pipeline. The data pipeline is the process that Splunk uses to ingest, parse, index, and search data. Indexed extraction configurations are settings that determine how Splunk extracts fields from data at index time, rather than at search time. Indexed extraction can improve search performance, but it also increases the size of the index. Indexed extraction configurations are applied in the indexing phase, which is the phase where Splunk writes the data and the .tsidx files to the index. The input phase is the phase where Splunk receives data from various sources and formats. The parsing phase is the phase where Splunk breaks the data stream into events and assigns timestamps and host values. The search phase is the phase where Splunk executes search commands and returns results.
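A minimal sketch of an indexed extraction configuration, using the built-in JSON extraction mode; the sourcetype name is a hypothetical example:

# props.conf
[my_json_data]
INDEXED_EXTRACTIONS = json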
