In a hash table with open addressing using linear probing, suppose we perform a sequence of insertions where each key hashes to the same index. What is the time complexity of the nth insertion in the worst case?
O(n)
O(log n)
O(1)
O(n log n)
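A minimal sketch of the degenerate case this question describes, using a hypothetical insert helper and a hash function that sends every key to index 0; the nth insertion must probe past the n-1 keys already placed, so that single insertion costs O(n).

```python
# Sketch: linear probing where every key hashes to the same slot.
# The nth insertion steps past the n-1 previously placed keys.

TABLE_SIZE = 16

def degenerate_hash(key):
    return 0  # worst case: every key collides at index 0

def insert(table, key, hash_fn):
    idx = hash_fn(key) % len(table)
    probes = 0
    while table[idx] is not None:      # linear probing: step by 1
        idx = (idx + 1) % len(table)
        probes += 1
    table[idx] = key
    return probes

table = [None] * TABLE_SIZE
for n, key in enumerate(["a", "b", "c", "d", "e"], start=1):
    probes = insert(table, key, degenerate_hash)
    print(f"insertion {n}: {probes} probes")  # probes == n - 1
```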
Python dictionaries use open addressing for collision resolution. Which of the following techniques helps mitigate the performance degradation caused by clustering in open addressing?
Separate Chaining
Using a cryptographic hash function
Robin Hood Hashing
Linear Probing with a prime step size
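A rough sketch of Robin Hood hashing insertion (illustrative only; this is not CPython's actual probing scheme): on a collision, the entry farther from its home slot keeps the slot, which evens out probe lengths and limits the damage from clustering.

```python
# Sketch of Robin Hood hashing: "rob the rich" by evicting entries that
# sit close to their home slot in favor of entries that have probed far.

TABLE_SIZE = 8

def displacement(idx, home, size):
    # How far idx is from the entry's home slot, wrapping around.
    return (idx - home) % size

def rh_insert(table, key):
    home = hash(key) % len(table)
    entry = (key, home)                # store the home slot with the key
    idx = home
    while table[idx] is not None:
        resident = table[idx]
        # If the resident is closer to home than the incoming entry,
        # evict it and continue probing with the evicted entry instead.
        if displacement(idx, resident[1], len(table)) < \
           displacement(idx, entry[1], len(table)):
            table[idx], entry = entry, resident
        idx = (idx + 1) % len(table)
    table[idx] = entry

table = [None] * TABLE_SIZE
for k in ["ant", "bee", "cat", "dog"]:
    rh_insert(table, k)
print(table)
```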
How can a hash flooding attack impact the performance of a web server using a hashmap to store session data?
It has no impact on performance, as hash flooding attacks only target data integrity.
It can cause a denial-of-service by forcing the server to handle a large number of collisions.
It can improve the efficiency of the hashmap by distributing data more evenly.
It can lead to increased memory usage and faster response times.
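A small demonstration of the failure mode behind hash flooding, assuming a deliberately broken hypothetical WeakKey class whose hash is constant; inserting n such keys into a dict costs roughly O(n^2) total work, which is the denial-of-service vector.

```python
# Sketch of hash flooding: with a constant hash, every insertion must walk
# the entire collision chain, so total insertion cost grows quadratically.
import time

class WeakKey:
    """Hypothetical key whose hash ignores its value (worst-case stand-in)."""
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        return 42                      # attacker-controlled: everything collides
    def __eq__(self, other):
        return self.value == other.value

def time_inserts(n):
    start = time.perf_counter()
    d = {}
    for i in range(n):
        d[WeakKey(i)] = i              # each insert compares against all prior keys
    return time.perf_counter() - start

for n in (500, 1000, 2000):
    print(f"n={n}: {time_inserts(n):.3f}s")  # roughly 4x per doubling of n
```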
In a hash table using double hashing, the second hash function is used to:
Determine the initial index to store the key.
Generate a new key if a collision occurs.
Determine the step size for probing in case of a collision.
Calculate the size of the hash table.
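A minimal sketch of double hashing with hypothetical h1/h2 functions: h1 chooses the initial slot and h2 the probe step size, so keys that collide at the same slot still diverge on later probes.

```python
# Sketch of double hashing. A prime table size guarantees every nonzero
# step size is coprime with the table, so probing visits every slot.

TABLE_SIZE = 11  # prime

def h1(key):
    return hash(key) % TABLE_SIZE           # initial slot

def h2(key):
    return 1 + (hash(key) % (TABLE_SIZE - 1))   # step size, never zero

def probe_sequence(key, max_probes=5):
    start, step = h1(key), h2(key)
    return [(start + i * step) % TABLE_SIZE for i in range(max_probes)]

for k in ("alpha", "beta", "gamma"):
    print(k, probe_sequence(k))
```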
Why is it generally recommended to avoid using mutable objects as keys in hash tables?
Hash tables cannot store mutable objects as keys; only immutable objects are allowed.
Using mutable keys increases the memory overhead of the hash table.
Mutable keys make the implementation of the hash table significantly more complex.
Mutable keys can lead to inconsistent state if their values are modified after being inserted into the hash table.
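A short sketch of the hazard with a hypothetical Point class whose hash is derived from mutable fields: mutating the key after insertion strands the entry in a bucket that no longer matches its key, so lookups silently miss.

```python
# Sketch of the mutable-key hazard: the dict stores the entry under the
# hash computed at insertion time, but equality is checked against the
# key's current (mutated) state, so the lookup fails.

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __hash__(self):
        return hash((self.x, self.y))    # hash derived from mutable fields
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)

p = Point(1, 2)
table = {p: "label"}
p.x = 99                                 # mutate the key after insertion
print(Point(1, 2) in table)              # False: right bucket, equality fails
# The entry still exists (visible via iteration over table) but is
# unreachable by ordinary key lookup; lookups under the new value (99, 2)
# probe a different slot chain and typically miss as well.
```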
What is a common disadvantage of using a hashmap with a poorly chosen hash function?
Increased memory usage
Inability to handle duplicate keys
Frequent hash collisions
Slow key generation
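A quick illustration of a poorly chosen hash function, here a toy hash that reads only a string's first character: realistic key sets pile into a few buckets no matter how large the table is.

```python
# Sketch of a poor hash function causing frequent collisions: keys with
# shared prefixes funnel into a handful of buckets.
from collections import Counter

def poor_hash(key: str) -> int:
    return ord(key[0])                   # ignores everything after char 0

TABLE_SIZE = 64
keys = ["user_1", "user_2", "user_3", "session_a", "session_b", "token_x"]
buckets = Counter(poor_hash(k) % TABLE_SIZE for k in keys)
print(buckets)                           # 6 keys crowd into just 3 buckets
```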
You are implementing an LRU (Least Recently Used) cache with a fixed capacity. Which data structure combination would be most suitable for efficiently managing the cache?
Binary Search Tree + Heap
Hashmap + Doubly Linked List
Array + Queue
Hashmap + Stack
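A compact sketch of the hashmap-plus-doubly-linked-list pairing: the dict gives O(1) key lookup and the list gives O(1) move-to-front and eviction. (CPython's functools.lru_cache is built on a similar combination internally.)

```python
# Sketch of an LRU cache: dict maps keys to list nodes; the doubly linked
# list keeps entries in recency order, most recent at the head.

class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.map = {}                            # key -> Node
        self.head, self.tail = Node(), Node()    # sentinel nodes
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)               # move to front: most recently used
        self._push_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._unlink(self.map.pop(key))
        elif len(self.map) >= self.capacity:
            lru = self.tail.prev         # least recently used sits at the back
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._push_front(node)

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")                           # "a" becomes most recent
cache.put("c", 3)                        # evicts "b"
print(cache.get("b"))                    # None
```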
What is the primary reason for using a prime number as the size of a hash table in many implementations?
To make the implementation of the hash table simpler.
To minimize the memory usage of the hash table.
To distribute keys more evenly across the hash table, reducing collisions.
To increase the speed of hash function computation.
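A worked example of the prime-size rationale: keys that arrive with a common stride collapse into a few buckets when the table size shares a factor with that stride, but spread out when the size is prime.

```python
# Sketch comparing a power-of-two table size with a prime one for
# patterned keys (multiples of 8).
keys = [8 * i for i in range(16)]        # 0, 8, 16, ..., 120

for size in (16, 17):                    # power of two vs. prime
    buckets = sorted({k % size for k in keys})
    print(f"size={size}: {len(buckets)} distinct buckets -> {buckets}")
# size=16 uses only 2 buckets (0 and 8); size=17 spreads the 16 keys
# across 16 distinct buckets, since 8 and 17 are coprime.
```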
Which of the following is NOT a valid mitigation strategy against hash flooding attacks?
Implementing a random salt value in the hash function to make collisions unpredictable.
Employing a Bloom filter to quickly identify and discard potentially malicious input.
Switching to a different data structure like a tree-based map that offers consistent performance.
Using a fixed-size hashmap to limit the maximum number of collisions.
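A sketch of the random-salt idea using a toy salted FNV-1a hash (the FNV constants are standard; the salting scheme here is illustrative): seeding the hash per process makes precomputed collision sets useless across restarts, much like CPython's string hash randomization.

```python
# Sketch of salted hashing: fold a per-process random seed into the hash
# so an attacker cannot precompute colliding keys.
import os

_SALT = int.from_bytes(os.urandom(8), "big")     # fresh on every process start

def salted_hash(key: str) -> int:
    h = 14695981039346656037 ^ _SALT             # salt the FNV offset basis
    for byte in key.encode():
        h = ((h ^ byte) * 1099511628211) % 2**64     # FNV-1a mix step
    return h

# Keys that collide under one salt almost certainly won't under another,
# so a flooding payload built offline stops working after a restart.
print(salted_hash("session-123") % 64)
```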
Which collision resolution strategy, open addressing or separate chaining, generally performs better in terms of cache locality?
Separate Chaining
Both perform equally well
Cache locality is irrelevant to hash tables
Open Addressing