What is the role of Prometheus in Kafka monitoring?
It provides a user interface for visualizing Kafka metrics
It analyzes Kafka logs to identify performance bottlenecks
It acts as a message broker between Kafka and monitoring tools
It collects and stores Kafka metrics as time-series data
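For context, Prometheus is commonly pointed at Kafka through the JMX exporter agent running on each broker. A minimal scrape config sketch follows; the port 7071, host names, and job name are illustrative assumptions, not defaults:

    # prometheus.yml (sketch): scrape Kafka brokers running the JMX
    # exporter Java agent; port and hostnames are assumptions
    scrape_configs:
      - job_name: "kafka"
        scrape_interval: 15s
        static_configs:
          - targets: ["kafka-broker-1:7071", "kafka-broker-2:7071"]
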
In Kafka, which configuration setting controls how long committed consumer offsets are retained before being discarded?
group.max.session.timeout.ms
message.timeout.ms
replica.lag.time.max.ms
offsets.retention.minutes
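For reference, offsets.retention.minutes is a broker-side setting in server.properties; a sketch using the seven-day value that recent Kafka releases ship as the default:

    # server.properties (sketch): how long committed consumer offsets
    # are kept after a group becomes empty; 10080 minutes = 7 days
    offsets.retention.minutes=10080
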
In KSQL, what is a 'STREAM' analogous to in traditional database terminology?
A table
A view
A trigger
A stored procedure
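To make the comparison concrete, here is a minimal ksqlDB sketch that declares a STREAM over an existing Kafka topic; the topic name, columns, and format are illustrative assumptions:

    -- ksqlDB sketch: a STREAM is an unbounded, append-only sequence of
    -- events backed by a Kafka topic; names and types are assumptions
    CREATE STREAM pageviews (
        user_id VARCHAR,
        page_url VARCHAR,
        viewed_at BIGINT
    ) WITH (
        KAFKA_TOPIC = 'pageviews',
        VALUE_FORMAT = 'JSON'
    );
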
What happens to a consumer's offset when the consumer encounters an error while processing a message?
The message is discarded and the offset is advanced.
The offset is not updated until the message is successfully processed.
The offset is automatically reset to the beginning of the partition.
The consumer is removed from the consumer group.
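The behavior hinges on how offsets are committed. A minimal Java sketch with auto-commit disabled, so the offset only advances after processing succeeds; the topic name and processing logic are assumptions:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ManualCommitConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "orders-processor");
            props.put("enable.auto.commit", "false"); // commit only after success
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders"));
                while (true) {
                    ConsumerRecords<String, String> records =
                            consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record); // if this throws, no commit happens,
                                         // so the record is re-read on restart
                    }
                    consumer.commitSync(); // offset advances only on success
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.printf("key=%s value=%s%n", record.key(), record.value());
        }
    }
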
Which of the following is the function of a 'Sink Connector' in Kafka Connect?
It replicates data between different Kafka clusters.
It aggregates data from multiple Kafka topics into a single topic.
It retrieves data from a Kafka topic and writes it to an external system.
It filters messages in a Kafka topic based on predefined criteria.
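For reference, a sink connector is configured rather than coded. A sketch modeled on the FileStreamSink example that ships with Kafka; the connector name, file path, and topic are assumptions:

    # Sink connector config (sketch): reads from a Kafka topic and
    # writes each record to an external system, here a local file
    name=local-file-sink
    connector.class=FileStreamSink
    tasks.max=1
    file=/tmp/sink-output.txt
    topics=orders
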
Which scenario would benefit from using a synchronous Kafka producer?
Logging system where message loss is acceptable.
Real-time data streaming where latency is critical.
High-volume sensor data ingestion where throughput is a primary concern.
Financial transaction processing where guaranteed message delivery is paramount.
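A producer is made synchronous by blocking on the Future returned by send(). A minimal Java sketch, assuming a 'payments' topic and acks=all for the strongest delivery guarantee:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SyncProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all"); // wait for all in-sync replicas
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("payments", "tx-42", "amount=100.00");
                // .get() blocks until the broker acknowledges (or throws),
                // which is what makes this send synchronous
                RecordMetadata meta = producer.send(record).get();
                System.out.printf("written to %s-%d @ offset %d%n",
                        meta.topic(), meta.partition(), meta.offset());
            }
        }
    }
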
What is the purpose of using tumbling windows in Kafka Streams?
To overlap windows for smoothing out aggregated results.
To divide data into sessions based on user activity.
To process records in fixed-size, non-overlapping time intervals.
To trigger aggregations only when a specific event occurs.
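A minimal Kafka Streams sketch that counts events per key in tumbling windows; the topic name and the five-minute window size are illustrative assumptions:

    import java.time.Duration;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class TumblingWindowExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> clicks = builder.stream("clicks");

            clicks.groupByKey()
                  // tumbling: fixed-size, non-overlapping time intervals
                  .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
                  .count()
                  .toStream()
                  .foreach((windowedKey, count) ->
                          System.out.printf("%s -> %d%n", windowedKey, count));
            // builder.build() would then be passed to a KafkaStreams instance
        }
    }
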
What is the significance of 'Exactly Once Semantics' in Kafka Streams?
It guarantees that each record is processed at least once.
It prevents duplicate processing of records even in the event of failures.
It prioritizes speed over accuracy in data processing.
It ensures that records are processed in the exact order they were produced.
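In Kafka Streams this guarantee is switched on through configuration. A sketch assuming brokers new enough (2.5+) to support EXACTLY_ONCE_V2; the application id and bootstrap address are assumptions:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class ExactlyOnceConfig {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payments-pipeline");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Enables transactional writes plus atomic offset commits, so
            // each input record affects the output exactly once despite failures
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                      StreamsConfig.EXACTLY_ONCE_V2);
            return props;
        }
    }
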
What happens to the data on a broker that is permanently removed from a Kafka cluster without proper decommissioning?
It is permanently lost.
It is migrated to the ZooKeeper ensemble.
It is automatically replicated to other brokers.
It becomes inaccessible until the broker is added back.
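As a related operational check, the kafka-topics tool can list partitions left under-replicated after a broker disappears; the bootstrap address is an assumption:

    # List partitions whose replica count has fallen below the
    # configured replication factor after a broker was lost
    bin/kafka-topics.sh --describe \
      --under-replicated-partitions \
      --bootstrap-server localhost:9092
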
When adding a new broker to an existing Kafka cluster, what process redistributes existing partitions evenly across all available brokers?
Data Replication
Broker Synchronization
Load Balancing
Rebalancing
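In practice this is driven by the partition reassignment tool rather than happening automatically. A sketch of the two-step workflow after adding broker 3; file names and broker ids are assumptions:

    # 1. Generate a proposed assignment that includes the new broker
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --topics-to-move-json-file topics.json \
      --broker-list "1,2,3" --generate
    # 2. Save the proposed plan to reassignment.json, then apply it
    bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \
      --reassignment-json-file reassignment.json --execute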