Which of the following is the function of a 'Sink Connector' in Kafka Connect?
It retrieves data from a Kafka topic and writes it to an external system.
It aggregates data from multiple Kafka topics into a single topic.
It filters messages in a Kafka topic based on predefined criteria.
It replicates data between different Kafka clusters.
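For context: a sink connector's tasks receive records that the Connect framework has already read from a Kafka topic and write them to an external system. Below is a minimal sketch of such a task using Kafka Connect's Java SinkTask API; the class name ExampleSinkTask is hypothetical, and the System.out call stands in for a real external system (a full connector would also need a SinkConnector class and configuration).

```java
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

import java.util.Collection;
import java.util.Map;

// Hypothetical sink task: receives records read from the assigned Kafka topic
// partitions and writes them to an external system.
public class ExampleSinkTask extends SinkTask {

    @Override
    public void start(Map<String, String> props) {
        // Open a connection to the external system here (details omitted).
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        // Kafka Connect hands us records it has consumed from the topic;
        // we write each one out to the external system (stdout here).
        for (SinkRecord record : records) {
            System.out.printf("topic=%s partition=%d offset=%d value=%s%n",
                    record.topic(), record.kafkaPartition(), record.kafkaOffset(), record.value());
        }
    }

    @Override
    public void stop() {
        // Close the connection to the external system here.
    }

    @Override
    public String version() {
        return "0.0.1";
    }
}
```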
How are Kafka Connect connectors typically run in a production environment?
As standalone Java processes.
Both standalone processes and Docker containers orchestrated by Kubernetes are common deployment methods.
As Docker containers orchestrated by Kubernetes.
Within the Kafka broker processes.
What is the primary purpose of monitoring Kafka metrics?
To debug application code that interacts with Kafka
To track the number of messages consumed by each consumer group
To understand and optimize Kafka cluster performance and health
To identify and troubleshoot security vulnerabilities in Kafka
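As an illustration of metric collection: every Kafka client exposes its metrics programmatically (they are also published over JMX), which is one way to feed a monitoring system that tracks performance and health. A small sketch, assuming a broker at localhost:9092 and the standard Java producer client; the class name ProducerMetricsDump is a placeholder.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Map;
import java.util.Properties;

public class ProducerMetricsDump {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Dump the client-side metrics; a real deployment would scrape these
            // (or the equivalent JMX beans) into a monitoring system.
            Map<MetricName, ? extends Metric> metrics = producer.metrics();
            metrics.forEach((name, metric) ->
                    System.out.printf("%s / %s = %s%n", name.group(), name.name(), metric.metricValue()));
        }
    }
}
```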
What is the primary benefit of using Kafka's idempotent producer feature?
Increased throughput by reducing the need for message acknowledgments.
Elimination of duplicate messages on the broker due to producer retries.
Improved message ordering guarantees within a partition.
Automatic data balancing across multiple Kafka brokers.
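A minimal sketch of enabling the idempotent producer in the Java client, assuming a broker at localhost:9092; the topic name "orders" is a placeholder. With enable.idempotence=true the broker detects and discards duplicate writes that would otherwise result from internal producer retries.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // With idempotence enabled, the broker deduplicates writes caused by
        // internal producer retries, so a retried batch is not appended twice.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "key-1", "value-1")); // placeholder topic
        }
    }
}
```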
Which component in Kafka is responsible for managing the state of tasks and ensuring fault tolerance within a Kafka Streams application?
ZooKeeper
Kafka Producer
Kafka Connect
Kafka Streams API
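For context: Kafka Streams itself manages the local state stores behind stateful operations and keeps them fault tolerant by backing them with internal changelog topics, from which state is restored after a failure. A sketch of a topology with a named state store, assuming a broker at localhost:9092; the application id, topic name, and store name are placeholders.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

import java.util.Properties;

public class WordCountStateExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "count-example");     // placeholder app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        StreamsBuilder builder = new StreamsBuilder();
        // count() is backed by the local state store "counts-store"; Kafka Streams
        // logs its updates to an internal changelog topic so the state can be
        // rebuilt on another instance if this one fails.
        builder.stream("words", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```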
What happens to a consumer's offset when it encounters an error while processing a message?
The offset is automatically reset to the beginning of the partition.
The consumer is removed from the consumer group.
The offset is not updated until the message is successfully processed.
The message is discarded and the offset is advanced.
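Offset handling on failure depends on how commits are done. Below is a sketch of the common at-least-once pattern with the Java consumer, where auto-commit is disabled and the offset is committed only after processing succeeds; the broker address, group id, topic name, and process() helper are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class CommitAfterProcessing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");           // placeholder group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // commit only after successful processing

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // If process() throws, commitSync() below is never reached, so the
                    // committed offset stays at the last successfully processed position.
                    process(record);
                }
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value()); // placeholder for real processing
    }
}
```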
How does Kafka ensure message ordering within a partition?
By employing a priority queue mechanism
By assigning sequential timestamps to messages
By using message keys for sorting
By appending messages sequentially to the log
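To see the per-partition ordering guarantee from the producer side: records that share a key are routed to the same partition, and the broker appends them to that partition's log in arrival order. A sketch assuming a broker at localhost:9092; the topic name and key are placeholders.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KeyedOrderingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Both records share the key "account-42", so they hash to the same
            // partition; the broker appends them to that partition's log in the
            // order they arrive, which is the per-partition ordering guarantee.
            producer.send(new ProducerRecord<>("payments", "account-42", "debit 10")); // placeholder topic
            producer.send(new ProducerRecord<>("payments", "account-42", "credit 5"));
        }
    }
}
```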
Which method in the Kafka Consumer API is used to retrieve a batch of records from a topic?
fetch()
consume()
poll()
receive()
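For reference, a minimal poll loop with the Java consumer: poll() returns the next batch of records as a ConsumerRecords object. The broker address, group id, and topic name are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PollLoopExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "poll-demo");                // placeholder group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // placeholder topic
            while (true) {
                // poll() fetches the next batch of records from the subscribed topic.
                ConsumerRecords<String, String> batch = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : batch) {
                    System.out.println(record.value());
                }
            }
        }
    }
}
```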
What does the term 'offset' represent in Kafka?
The position of a message within a partition.
The physical location of a message on disk.
The timestamp associated with a message.
The unique identifier assigned to each message.
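To make the offset concept concrete: an offset is simply a record's position in a partition's log, and a consumer can be repositioned to any offset. A sketch using assign() and seek(), assuming a broker at localhost:9092; the topic, partition number, and offset 42 are arbitrary placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class SeekToOffsetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("group.id", "offset-demo");              // placeholder group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("orders", 0); // placeholder topic/partition
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 42L); // an offset is a position within the partition's log

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // record.offset() reports each record's position within its partition.
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```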
What happens to the data on a broker that is permanently removed from a Kafka cluster without proper decommissioning?
It is migrated to the ZooKeeper ensemble.
It becomes inaccessible until the broker is added back.
It is permanently lost.
It is automatically replicated to other brokers.
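Whether data survives the loss of a broker comes down to the topic's replication factor. Below is a sketch that creates a topic with three replicas via the Java AdminClient, assuming a broker at localhost:9092; the topic name and the partition/replica counts are placeholders.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class ReplicatedTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (AdminClient admin = AdminClient.create(props)) {
            // With a replication factor of 3, every partition has copies on three
            // brokers, so losing one broker does not lose the partition's data.
            // A topic with replication factor 1 has no such protection.
            NewTopic topic = new NewTopic("orders", 3, (short) 3); // placeholder topic name
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```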