In cuckoo hashing, how many hash functions are typically used?
2
It depends on the size of the hash table.
3
1
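A minimal sketch of the scheme the question above refers to: classic cuckoo hashing keeps two tables and two hash functions, so every key has exactly two candidate slots and lookup probes at most two positions. The class and hash choices here are illustrative, not a production design.

```python
# Toy cuckoo hash: two tables, two hash functions, eviction ("kicking")
# on collision. Lookup checks only the two candidate slots.
class CuckooHash:
    def __init__(self, size=11, max_kicks=50):
        self.size = size
        self.max_kicks = max_kicks
        self.t1 = [None] * size  # slots addressed by h1
        self.t2 = [None] * size  # slots addressed by h2

    def _h1(self, key):
        return hash(key) % self.size

    def _h2(self, key):
        return (hash(key) // self.size) % self.size

    def lookup(self, key):
        return self.t1[self._h1(key)] == key or self.t2[self._h2(key)] == key

    def insert(self, key):
        if self.lookup(key):
            return True
        for _ in range(self.max_kicks):
            # Place key in table 1, evicting any occupant into table 2 next.
            i = self._h1(key)
            key, self.t1[i] = self.t1[i], key
            if key is None:
                return True
            j = self._h2(key)
            key, self.t2[j] = self.t2[j], key
            if key is None:
                return True
        return False  # eviction cycle: a real implementation would rehash
```

Variants with three or more hash functions exist and raise the achievable load factor, but two is the textbook formulation.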
In a hash table using double hashing, the second hash function is used to:
Generate a new key if a collision occurs.
Calculate the size of the hash table.
Determine the step size for probing in case of a collision.
Determine the initial index to store the key.
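A short sketch of the probing rule behind the question above: under double hashing the i-th probe lands at (h1(k) + i * h2(k)) mod m, so the second hash function supplies the step size and keys that collide at the same initial slot still follow different probe paths. The function names and table size here are illustrative.

```python
# Double-hashing probe sequence: h1 gives the initial index, h2 the step.
# h2 is kept nonzero so the probe sequence always advances.
def probe_sequence(key, m=7, probes=4):
    h1 = key % m              # initial index
    h2 = 1 + (key % (m - 1))  # step size for collision resolution
    return [(h1 + i * h2) % m for i in range(probes)]
```

For example, keys 3 and 10 both start at index 3 in a table of size 7, but their probe sequences diverge immediately because their step sizes differ, which is exactly the clustering advantage over linear probing.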
In a hash table with open addressing using linear probing, suppose we perform a sequence of insertions where each key hashes to the same index. What is the time complexity of the nth insertion in the worst case?
O(log n)
O(n log n)
O(1)
O(n)
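The worst case in the question above can be demonstrated directly: if every key hashes to the same slot under linear probing, the nth insertion must scan past the n-1 keys already clustered there, so its probe count grows linearly. A small instrumented sketch:

```python
# Count probes per insertion when all keys hash to slot 0 under linear
# probing: the cluster grows by one each time, so costs rise 1, 2, 3, ...
def insert_count_probes(table, key, index):
    probes = 1
    while table[index] is not None:
        index = (index + 1) % len(table)
        probes += 1
    table[index] = key
    return probes

table = [None] * 16
costs = [insert_count_probes(table, k, 0) for k in range(8)]
```

The resulting cost list is [1, 2, 3, 4, 5, 6, 7, 8]: the nth insertion takes n probes, i.e. O(n).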
What is the primary reason for using a prime number as the size of a hash table in many implementations?
To increase the speed of hash function computation.
To minimize the memory usage of the hash table.
To make the implementation of the hash table simpler.
To ensure an even distribution of keys across the hash table, reducing collisions.
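The distribution effect named in the answer above is easy to observe. Real-world keys often share a stride (e.g. pointer-aligned addresses); when the table size shares a factor with that stride, only a fraction of the buckets are ever used, while a prime size spreads the same keys across all buckets. The numbers below are illustrative:

```python
# Count how many distinct buckets a set of stride-4 keys occupies under a
# composite table size (shared factor) versus a prime one.
def bucket_spread(keys, m):
    return len({k % m for k in keys})

keys = [4 * i for i in range(100)]   # stride-4 keys, e.g. aligned addresses
composite = bucket_spread(keys, 8)   # gcd(4, 8) = 4: only 2 buckets used
prime = bucket_spread(keys, 7)       # 7 is prime: all 7 buckets used
```

With size 8 the keys pile into 2 buckets; with prime size 7 they cover all 7, which is why many implementations pick prime table sizes.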
What is the primary advantage of using a universal hash function?
It ensures constant-time performance for all operations.
It eliminates the possibility of collisions entirely.
It makes the hash table resistant to attacks that exploit patterns in the hash function.
It provides better performance than any single, fixed hash function.
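A sketch of one standard universal family (the Carter-Wegman construction) illustrating the answer above: the hash function h(x) = ((a*x + b) mod p) mod m is drawn at random per table, so an adversary who knows the family but not (a, b) cannot precompute a set of keys that all collide. The prime and parameter names here are illustrative.

```python
import random

# Carter-Wegman universal family: a, b are chosen at random when the
# table is created, defeating attacks that target one fixed hash function.
P = 2**31 - 1  # a Mersenne prime larger than any key we hash

def make_universal_hash(m, rng=None):
    rng = rng or random.Random()
    a = rng.randrange(1, P)
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m
```

Each call to make_universal_hash yields an independent function; collisions between any fixed pair of keys then occur with probability about 1/m over the random draw.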
What mechanism does Java's ConcurrentHashMap employ to allow for concurrent reads and updates while maintaining thread safety?
Read-write locks separating readers and writers
Lock-free data structures using atomic operations
Fine-grained locking at the bucket level
A single global lock for all operations
What is a common disadvantage of using a hashmap with a poorly chosen hash function?
Frequent hash collisions
Slow key generation
Inability to handle duplicate keys
Increased memory usage
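The collision effect named above can be made concrete: a degenerate hash function sends every key to one bucket, so a chained table collapses into a single list and lookups become linear. A small comparison sketch (the hash functions here are deliberately extreme examples):

```python
# Compare worst-case chain length under a constant hash vs. a spreading one.
def build_chained_table(keys, hash_fn, m=8):
    buckets = [[] for _ in range(m)]
    for k in keys:
        buckets[hash_fn(k) % m].append(k)
    return buckets

keys = list(range(32))
bad = build_chained_table(keys, lambda k: 7)   # constant hash: all collide
good = build_chained_table(keys, lambda k: k)  # identity hash: even spread
worst_bad = max(len(b) for b in bad)           # one chain holds all 32 keys
worst_good = max(len(b) for b in good)         # chains stay at length 4
```

With the constant hash, every one of the 32 keys lands in the same bucket; with the spreading hash, no chain exceeds 4 entries.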
In a hashmap implementation using open addressing with linear probing, what is the worst-case time complexity for searching for a key if the hash table is nearly full?
O(1)
O(log n)
O(n)
O(n log n)
In a web server implemented using a hashmap to store cached web pages, which collision resolution strategy is generally preferred for its performance in handling a high volume of concurrent requests?
Separate Chaining with linked lists
Separate Chaining with balanced binary search trees
Open Addressing with linear probing
Double Hashing
You are implementing an LRU (Least Recently Used) cache with a fixed capacity. Which data structure combination would be most suitable for efficiently managing the cache?
Hashmap + Doubly Linked List
Array + Queue
Hashmap + Stack
Binary Search Tree + Heap
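A sketch of the hashmap + doubly linked list combination named above: the hashmap gives O(1) lookup of a key's node, and the doubly linked list orders entries by recency so the least recently used entry can be evicted from the tail in O(1). This is an illustrative toy, not a hardened implementation.

```python
# LRU cache: dict maps keys to list nodes; the doubly linked list keeps
# most-recent entries at the head and evicts from the tail.
class _Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                      # key -> _Node
        self.head, self.tail = _Node(), _Node()  # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._unlink(node)
        self._push_front(node)             # mark as most recently used
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
        else:
            if len(self.map) == self.capacity:
                lru = self.tail.prev       # evict least recently used
                self._unlink(lru)
                del self.map[lru.key]
            node = _Node(key, value)
            self.map[key] = node
        self._push_front(node)
```

Both get and put run in O(1); Java's LinkedHashMap in access order and Python's OrderedDict package the same idea.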