What is the time complexity of inserting an element at the beginning of a dynamically sized array that needs to resize by doubling its capacity when full?
O(log n)
O(n)
O(n log n)
O(1)
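A minimal sketch (hypothetical helper, using a Python list as the backing store) of why front insertion is O(n): every existing element must shift one slot to the right, regardless of the amortized-O(1) growth at the tail.

```python
def insert_front(arr, value):
    """Insert value at index 0 of a dynamic array, shifting everything right."""
    arr.append(None)            # grow by one slot (amortized O(1) at the tail)
    for i in range(len(arr) - 1, 0, -1):
        arr[i] = arr[i - 1]     # shift each existing element right: n moves
    arr[0] = value
    return arr
```

The resize itself is amortized away by doubling, but the shifting cost is paid on every front insertion, so the operation is O(n).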
You need to design an algorithm to solve a variation of the knapsack problem with a capacity C and N items. However, you can take multiple copies of the same item. Which algorithmic paradigm best addresses this scenario and provides an efficient solution?
Greedy Algorithm
Divide and Conquer
Dynamic Programming with a 2D table
Depth First Search with Backtracking
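A sketch of the dynamic programming approach for this unbounded variant (names are illustrative). Because items may be reused, the 2D table collapses to a single dimension: iterating capacities upward lets each item be picked repeatedly.

```python
def unbounded_knapsack(capacity, weights, values):
    """Max value with capacity C when each item may be taken any number of times."""
    # dp[c] = best value achievable with remaining capacity c
    dp = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                # dp[c - w] already allows copies of this item: unbounded reuse
                dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

Runtime is O(N * C), the same asymptotic bound as the 2D formulation, with O(C) space.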
You are given a sorted array of n integers. You want to determine if there exists a pair of elements in the array that sum up to a specific target value. Which algorithm provides the most efficient time complexity?
Nested loops to check all pairs
Using a hash table to store seen elements
Sorting the array and then using binary search
Using two pointers, one at the beginning and one at the end of the array
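The two-pointer technique exploits the sorted order to reach O(n) time with O(1) extra space; a sketch with illustrative names:

```python
def has_pair_with_sum(sorted_arr, target):
    """Return True iff two elements of the sorted array sum to target."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo < hi:
        s = sorted_arr[lo] + sorted_arr[hi]
        if s == target:
            return True
        if s < target:
            lo += 1     # sum too small: only a larger left element can help
        else:
            hi -= 1     # sum too large: only a smaller right element can help
    return False
```

A hash table also achieves O(n) time but uses O(n) extra space; the two-pointer scan needs none because the array is already sorted.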
You need to find the smallest k elements in an unsorted array of size n. What is the most efficient time complexity achievable in the average case?
O(k^2)
O(n^2)
O(n log n)
O(n + k log n)
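One way to realize the O(n + k log n) bound is to heapify the whole array in O(n) and then pop k times at O(log n) each; a sketch using the standard library heap:

```python
import heapq

def smallest_k(arr, k):
    """Return the k smallest elements in ascending order."""
    heap = list(arr)
    heapq.heapify(heap)                 # O(n) bottom-up heap construction
    return [heapq.heappop(heap) for _ in range(k)]   # k pops, O(log n) each
```

Quickselect can match or beat this on average by partitioning around the k-th smallest element, at the cost of a worse worst case.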
When optimizing code for better time complexity, which approach will NOT always lead to improved real-world performance?
Using a more complex data structure with better time complexity for certain operations.
Replacing an O(n^2) algorithm with an O(n log n) algorithm.
Reducing the number of operations within loops.
Caching frequently accessed data to avoid recomputation.
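Caching is the one option above with a near-guaranteed payoff when recomputation dominates; a classic illustration is memoized Fibonacci:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Caching turns the naive O(2^n) recursion into O(n):
    # each fib(i) is computed once, then served from the cache.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

By contrast, a data structure with better asymptotic bounds can still lose in practice to a simpler one with smaller constants and better cache locality, which is why improved time complexity does not always mean improved real-world performance.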
Which of the following notations provides the tightest possible bound on the growth rate of an algorithm's running time?
Big-Theta (Θ)
Big-O (O)
Big-Omega (Ω)
Little-o (o)
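For reference, Big-Theta bounds the growth rate from both sides, which is what makes it the tightest of these notations; the standard definition:

```latex
f(n) \in \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ n_0 :
  \forall n \ge n_0,\quad c_1\, g(n) \le f(n) \le c_2\, g(n)
```

Big-O gives only the upper half of this sandwich and Big-Omega only the lower half, so neither alone pins down the growth rate.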
You have two algorithms, A and B, for the same problem. A has a time complexity of O(n log n) and Ω(n), while B has a time complexity of O(n^2) and Ω(log n). Which of the following statements is always true?
Algorithm A is faster than Algorithm B for sufficiently large input sizes.
We cannot definitively compare the performance of the two algorithms based on the given information.
Algorithm B is faster than Algorithm A for all input sizes.
Algorithm A is faster than Algorithm B for all input sizes.
A hash table using open addressing has a load factor of 0.8. What is the expected average-case time complexity for a successful search operation?
O(1)
O(log n)
O(n)
It depends on the hash function
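Under the usual uniform-hashing assumptions, the expected probe count depends only on the load factor, not on n, so the search is O(1). A sketch using Knuth's classic estimate for linear probing (one common open-addressing scheme):

```python
def expected_probes_successful(alpha):
    """Knuth's estimate of average probes for a successful search
    under linear probing with load factor alpha (0 <= alpha < 1)."""
    return 0.5 * (1 + 1 / (1 - alpha))
```

At alpha = 0.8 this gives 3 probes on average: a constant for any fixed load factor, which is why the expected cost is O(1) even though a pathological hash function could degrade it.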
Which of the following statements about best-case, worst-case, and average-case analysis is FALSE?
Worst-case analysis always provides an upper bound on the algorithm's runtime for any input.
Average-case analysis attempts to determine the average runtime over all possible inputs.
Best-case analysis considers the input that results in the algorithm's fastest possible execution.
Worst-case analysis is typically the most important to consider when designing for performance-critical applications.
Consider a dynamic array implementation where resizing to double the capacity takes O(n) time. If we perform n insertions sequentially, what is the amortized time complexity per insertion?
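The doubling argument can be checked directly by counting element copies over n appends (hypothetical simulation): the resize copies form a geometric series that sums to less than n, so the total work stays under 3n and the amortized cost per insertion is O(1).

```python
def total_copy_cost(n):
    """Count every element write/copy across n appends with capacity doubling."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size      # resizing copies all existing elements
            capacity *= 2
        size += 1
        copies += 1             # writing the new element itself
    return copies
```

For n = 1024 the resizes contribute 1 + 2 + 4 + ... + 512 = 1023 copies on top of the 1024 writes, well under 3n.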