What is the significance of the 'unclean.leader.election.enable' configuration parameter during broker failures?
It controls the replication factor for partitions.
It defines the time a broker is considered dead before triggering a leader election.
It ensures no data loss during leader election but might increase unavailability.
It allows for faster leader election but might lead to data loss.
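
For context, unclean.leader.election.enable can be set in the broker configuration (server.properties) or overridden per topic. A minimal Java Admin client sketch, assuming a local broker and a hypothetical topic named "orders", that enables it for one topic:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.common.config.ConfigResource;

    public class UncleanElectionConfig {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Allow an out-of-sync replica to become leader for this topic:
                // the partition stays available, but unreplicated records may be lost.
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "orders");
                AlterConfigOp op = new AlterConfigOp(
                        new ConfigEntry("unclean.leader.election.enable", "true"),
                        AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(Map.of(topic, List.of(op))).all().get();
            }
        }
    }
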
When adding a new broker to an existing Kafka cluster, what process ensures that the partitions are evenly distributed across all available brokers?
Broker Synchronization
Load Balancing
Rebalancing
Data Replication
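
For context, Kafka does not spread existing partitions onto a new broker automatically; an operator triggers a partition reassignment, typically with the kafka-reassign-partitions.sh tool or the Admin API. A sketch using the Admin API, assuming hypothetical broker ids 1 and 2 plus a newly added broker 4, and a placeholder topic named "orders":

    import java.util.*;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.common.TopicPartition;

    public class MovePartition {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Reassign partition 0 of "orders" so one replica lands on the new broker 4.
                Map<TopicPartition, Optional<NewPartitionReassignment>> moves = Map.of(
                        new TopicPartition("orders", 0),
                        Optional.of(new NewPartitionReassignment(List.of(1, 2, 4))));
                admin.alterPartitionReassignments(moves).all().get();
            }
        }
    }
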
Which component in Kafka is responsible for managing the state of tasks and ensuring fault tolerance within a Kafka Streams application?
Kafka Connect
Kafka Producer
ZooKeeper
Kafka Streams API
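
For context, a sketch of a Kafka Streams application whose aggregation state lives in a local state store; the Streams API backs that store with a changelog topic so task state can be rebuilt after a failure. Topic names, the application id, and the broker address are placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.*;
    import org.apache.kafka.streams.kstream.*;

    public class StatefulCountApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-counter");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("events");
            events.groupByKey()
                  // count() keeps its running totals in a local state store that is
                  // replicated to a changelog topic for fault tolerance.
                  .count()
                  .toStream()
                  .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));

            new KafkaStreams(builder.build(), props).start();
        }
    }
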
What is the role of a Kafka Controller in a cluster?
Monitoring and managing the health of brokers
Handling data replication between brokers
Performing load balancing of messages
Managing message consumption rates
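
For context, the controller is an ordinary broker that takes on extra cluster-management duties such as tracking broker liveness and electing partition leaders. A small Admin client sketch, assuming a local broker, that reports which broker currently holds the role:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.*;
    import org.apache.kafka.common.Node;

    public class WhoIsController {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // Exactly one broker acts as the controller at any given time.
                Node controller = admin.describeCluster().controller().get();
                System.out.println("Controller is broker " + controller.id());
            }
        }
    }
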
Which method in the Kafka Consumer API is used to retrieve a batch of records from a topic?
poll()
fetch()
consume()
receive()
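
For context, a minimal consumer loop built around poll(); the group id, topic name, and broker address are placeholders:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.*;

    public class PollLoop {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                    "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("events"));
                while (true) {
                    // poll() returns the next batch of records (possibly empty).
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%s -> %s%n", record.key(), record.value());
                    }
                }
            }
        }
    }
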
In Kafka Connect, what is the role of a 'Source Connector'?
It transforms data within a Kafka topic before sending it to a sink.
It routes messages between different topics within a Kafka cluster.
It consumes data from an external system and publishes it to a Kafka topic.
It writes data from a Kafka topic to an external system.
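
For context, a source connector is usually configured rather than coded. A hypothetical standalone-mode configuration for the FileStreamSource connector that ships with Kafka, which reads lines from a file (placeholder path) and publishes them to a topic (placeholder name):

    name=local-file-source
    connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
    tasks.max=1
    file=/tmp/input.txt
    topic=file-lines
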
What is the role of Prometheus in Kafka monitoring?
It analyzes Kafka logs to identify performance bottlenecks
It provides a user interface for visualizing Kafka metrics
It acts as a message broker between Kafka and monitoring tools
It collects and stores Kafka metrics as time-series data
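
For context, Prometheus typically scrapes Kafka's JMX metrics through an exporter (for example the Prometheus JMX Exporter agent running on each broker) and stores the samples as time-series data. A hypothetical scrape configuration; host names and the exporter port are placeholders:

    scrape_configs:
      - job_name: kafka-brokers
        static_configs:
          - targets: ['broker1:7071', 'broker2:7071']
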
Which of the following is a common format for specifying connector configurations in Kafka Connect?
YAML
JSON
Properties files
All of the above
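
For context, the same connector definition can be written as a properties file for standalone mode or as JSON submitted to the Connect REST API in distributed mode. A hypothetical JSON payload mirroring the file-source example above:

    {
      "name": "local-file-source",
      "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "/tmp/input.txt",
        "topic": "file-lines"
      }
    }
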
What is the primary purpose of log compaction in Kafka?
Optimizing message routing
Deleting old messages based on time
Retaining the latest value for each key
Improving message compression
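
For context, compaction is enabled per topic with cleanup.policy=compact. A sketch that creates such a topic with the Admin client; the topic name, partition count, and replication factor are placeholders:

    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.*;

    public class CompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (Admin admin = Admin.create(props)) {
                // cleanup.policy=compact retains at least the latest record for each key.
                NewTopic topic = new NewTopic("user-profiles", 3, (short) 3)
                        .configs(Map.of("cleanup.policy", "compact"));
                admin.createTopics(List.of(topic)).all().get();
            }
        }
    }
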
What is the purpose of using tumbling windows in Kafka Streams?
To process records in fixed-size, non-overlapping time intervals.
To divide data into sessions based on user activity.
To trigger aggregations only when a specific event occurs.
To overlap windows for smoothing out aggregated results.
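
For context, a tumbling window in Kafka Streams is declared with a fixed size and no overlap, so each record belongs to exactly one window. A sketch counting records per key in five-minute tumbling windows; topic names, the application id, and the broker address are placeholders:

    import java.time.Duration;
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.*;
    import org.apache.kafka.streams.kstream.*;

    public class TumblingWindowCount {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "tumbling-count");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> clicks = builder.stream("page-clicks");
            clicks.groupByKey()
                  // Fixed-size, non-overlapping five-minute windows: each record
                  // falls into exactly one window.
                  .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
                  .count();

            new KafkaStreams(builder.build(), props).start();
        }
    }
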