Confluent CCDAK Online Training: Confluent Certified Developer for Apache Kafka Certification Examination
- Exam Code: CCDAK
- Exam Name: Confluent Certified Developer for Apache Kafka Certification Examination
- Certification Provider: Confluent
- Latest update: Feb 13, 2025
You want to sink data from a Kafka topic to S3 using Kafka Connect. There are 10 brokers in the cluster, and the topic has 2 partitions with a replication factor of 3.
How many tasks will you configure for the S3 connector?
- A . 10
- B . 6
- C . 3
- D . 2
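For the Connect sink question above, a minimal config sketch (expressed as a Java map purely for illustration; the topic and bucket names are assumptions) shows why the task count is bounded: a sink connector runs at most one task per topic partition, so with 2 partitions at most 2 tasks do useful work, regardless of how many brokers or Connect workers exist.

import java.util.Map;

public class S3SinkConfigSketch {
    public static void main(String[] args) {
        // Illustrative S3 sink config; "my-topic" and "my-bucket" are assumptions.
        // tasks.max is an upper bound: a sink connector can run at most one
        // task per topic partition, so 2 partitions means at most 2 tasks.
        Map<String, String> config = Map.of(
                "connector.class", "io.confluent.connect.s3.S3SinkConnector",
                "topics", "my-topic",
                "tasks.max", "2",
                "s3.bucket.name", "my-bucket");
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}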
To enhance compression, you can increase the chances of batching by using:
- A . acks=all
- B . linger.ms=20
- C . batch.size=65536
- D . max.message.size=10MB
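A minimal producer sketch illustrating the batching knobs; the broker address and topic name are assumptions. Because the producer compresses one batch at a time, a small linger.ms delay lets batches fill up, which also improves the compression ratio.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("batch.size", "65536");        // ceiling per batch, in bytes
        props.put("linger.ms", "20");            // wait up to 20 ms so batches fill
        props.put("compression.type", "snappy"); // compression is applied per batch

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}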
How can you make a Kafka consumer stop polling data immediately and shut down gracefully?
- A . Call consumer.wakeup() and catch a WakeupException
- B . Call consumer.poll() in another thread
- C . Kill the consumer thread
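A minimal shutdown sketch for the question above, assuming a local broker and a topic named my-topic. wakeup() is the one KafkaConsumer method that is safe to call from another thread; it makes a blocked poll() throw a WakeupException that the polling thread catches to exit its loop cleanly.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class GracefulShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        props.put("group.id", "demo-group");              // assumed
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        Thread mainThread = Thread.currentThread();

        // wakeup() from another thread interrupts a blocked poll().
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try { mainThread.join(); } catch (InterruptedException ignored) { }
        }));

        try {
            consumer.subscribe(List.of("my-topic")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                records.forEach(r -> System.out.println(r.value()));
            }
        } catch (WakeupException e) {
            // Expected on shutdown; fall through to close().
        } finally {
            consumer.close(); // commits offsets and leaves the group cleanly
        }
    }
}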
StreamsBuilder builder = new StreamsBuilder();
KStream<String, String> textLines = builder.stream("word-count-input");
KTable<String, Long> wordCounts = textLines
.mapValues(textLine -> textLine.toLowerCase())
.flatMapValues(textLine -> Arrays.asList(textLine.split("\\W+")))
.selectKey((key, word) -> word)
.groupByKey()
.count(Materialized.as("Counts"));
wordCounts.toStream().to("word-count-output", Produced.with(Serdes.String(), Serdes.Long()));
builder.build();
What is an adequate topic configuration for the topic word-count-output?
- A . max.message.bytes=10000000
- B . cleanup.policy=delete
- C . compression.type=lz4
- D . cleanup.policy=compact
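Since word-count-output carries an ever-updating count per word (the key), log compaction keeps only the latest count for each key. A minimal sketch of creating such a topic via the AdminClient, assuming a local broker; the partition count and replication factor here are illustrative only.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

public class OutputTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed
        try (Admin admin = Admin.create(props)) {
            // Compaction retains the most recent value per key, which suits
            // a topic whose records are per-key count updates.
            NewTopic topic = new NewTopic("word-count-output", 3, (short) 1)
                    .configs(Map.of("cleanup.policy", "compact"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}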
Where are the ACLs stored in a Kafka cluster by default?
- A . Inside the broker’s data directory
- B . Under Zookeeper node /kafka-acl/
- C . In Kafka topic __kafka_acls
- D . Inside the Zookeeper’s data directory
What kind of delivery guarantee does this consumer offer?
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    try {
        consumer.commitSync();
    } catch (CommitFailedException e) {
        log.error("commit failed", e);
    }
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("topic = %s, partition = %s, offset = %d, customer = %s, country = %s%n",
                record.topic(), record.partition(), record.offset(), record.key(), record.value());
    }
}
- A . Exactly-once
- B . At-least-once
- C . At-most-once
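Because commitSync() runs before the records are processed, a crash after the commit loses that batch's work: at-most-once behavior. For contrast, a minimal sketch of the at-least-once variant, committing only after processing; it assumes a consumer configured with enable.auto.commit=false, and the println stands in for real processing.

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceSketch {
    // Assumes a consumer configured with enable.auto.commit=false.
    static void pollLoop(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value()); // stand-in for real processing
            }
            // Committing after processing means a crash replays the batch
            // (at-least-once); committing first, as in the question, can
            // skip unprocessed records (at-most-once).
            consumer.commitSync();
        }
    }
}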
The exactly-once guarantee in Kafka Streams applies to which flow of data?
- A . Kafka => Kafka
- B . Kafka => External
- C . External => Kafka
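A minimal sketch of enabling the guarantee in a Streams app; the application id and broker address are assumptions, and EXACTLY_ONCE_V2 assumes Kafka 2.8+ clients (older versions use EXACTLY_ONCE). The guarantee relies on Kafka transactions that atomically cover the consumed offsets and the produced records, which is why it only holds for Kafka-to-Kafka flows.

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");             // assumed
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed
        // Transactions span input offsets and output records, so the
        // guarantee cannot extend to external systems on either side.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                StreamsConfig.EXACTLY_ONCE_V2);
    }
}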
You are using the JDBC source connector to copy data from a table to a Kafka topic. One connector is created with tasks.max set to 2, deployed on a cluster of 3 workers.
How many tasks are launched?
- A . 3
- B . 2
- C . 1
- D . 6
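A minimal JDBC source config sketch (again as a Java map for illustration; the connection URL and table name are assumptions). The JDBC source creates at most one task per table, so with a single table only one task launches even though tasks.max allows two.

import java.util.Map;

public class JdbcSourceConfigSketch {
    public static void main(String[] args) {
        // Illustrative JDBC source config; URL and table name are assumptions.
        // One table => one task, regardless of tasks.max or worker count.
        Map<String, String> config = Map.of(
                "connector.class", "io.confluent.connect.jdbc.JdbcSourceConnector",
                "connection.url", "jdbc:postgresql://db:5432/demo",
                "table.whitelist", "orders",
                "mode", "bulk",
                "tasks.max", "2");
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}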
You want to perform table lookups against a KTable every time a new record is received from the KStream.
What is the output of KStream-KTable join?
- A . KTable
- B . GlobalKTable
- C . You choose between KStream or KTable
- D . KStream
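A minimal join sketch, assuming topics named orders and customers with String keys and values. Each arriving stream record triggers one table lookup and emits at most one output record, so the result of a KStream-KTable join is itself a KStream.

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

public class StreamTableJoinSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders");     // assumed topic
        KTable<String, String> customers = builder.table("customers"); // assumed topic
        // One output record per matching stream record: the join result
        // is a KStream, not a KTable.
        KStream<String, String> enriched =
                orders.join(customers, (order, customer) -> order + " by " + customer);
        enriched.to("orders-enriched");
    }
}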
You are doing complex calculations with a machine learning framework on records fetched from a Kafka topic. It takes about 6 minutes to process a record batch, and the consumer enters rebalance even though it is still running.
How can you improve this scenario?
- A . Increase max.poll.interval.ms to 600000
- B . Increase heartbeat.interval.ms to 600000
- C . Increase session.timeout.ms to 600000
- D . Add consumers to the consumer group and kill them right away
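A minimal consumer-config sketch for the scenario above; the broker address and group id are assumptions. max.poll.interval.ms bounds the allowed time between poll() calls before the group coordinator evicts the consumer, so raising it to 600000 ms (10 minutes) accommodates the roughly 6-minute batches. The heartbeat settings are unaffected because heartbeats run on a background thread even while processing is busy.

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class SlowProcessingConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ml-scorers");              // assumed
        // Allow up to 10 minutes between poll() calls so long-running
        // batch processing does not trigger a rebalance.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
    }
}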