An algorithm performs the following operations: it iterates through an array of size n, and for each element, it performs a binary search on a sorted array of size m. What is the overall time complexity of the algorithm?
O(n log m)
O(m log n)
O(n + log m)
O(n * m)
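For reference, a minimal Python sketch of the pattern the question above describes; the membership-counting task and the names used here are illustrative assumptions, not part of the question.

    from bisect import bisect_left

    def count_matches(items, sorted_values):
        # len(items) == n, len(sorted_values) == m
        hits = 0
        for x in items:                                    # n iterations
            i = bisect_left(sorted_values, x)              # O(log m) binary search
            if i < len(sorted_values) and sorted_values[i] == x:
                hits += 1
        return hits                                        # total work: O(n log m)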
What is the time complexity of inserting an element at the beginning of a dynamic array that doubles its capacity when full?
O(log n)
O(n)
O(1)
O(n log n)
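A rough Python sketch of the operation the question above describes, assuming a fixed-capacity backing list padded with None; the shift of the existing elements dominates the cost, and the occasional doubling resize adds only amortized O(1) on top of it.

    def insert_front(backing, capacity, size, value):
        if size == capacity:                       # full: double the capacity (O(n) copy)
            capacity = max(1, capacity * 2)
            backing = backing + [None] * (capacity - len(backing))
        for i in range(size, 0, -1):               # shift every existing element right: O(n)
            backing[i] = backing[i - 1]
        backing[0] = value
        return backing, capacity, size + 1         # overall cost per call: O(n)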
You need to sort a large dataset. Algorithm A has a time complexity of O(n log n) and a space complexity of O(1). Algorithm B has a time complexity of O(n) but requires O(n) extra space. Which algorithm is preferable if memory usage is severely constrained?
Both are equally preferable
Insufficient information to determine
Algorithm B
Algorithm A
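As a concrete illustration of the trade-off in the question above (the specific algorithms are assumptions, not named in the question): a counting-style sort achieves O(n) time for bounded integer keys but allocates O(n) auxiliary storage, whereas an in-place O(n log n) sort such as heapsort works within the input array itself.

    def counting_sort(values, max_value):
        counts = [0] * (max_value + 1)   # auxiliary storage proportional to the key range
        output = [0] * len(values)       # O(n) auxiliary storage
        for v in values:
            counts[v] += 1
        pos = 0
        for v, c in enumerate(counts):
            for _ in range(c):
                output[pos] = v
                pos += 1
        return output                    # O(n + max_value) time, O(n + max_value) extra space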
You need to design an algorithm to solve a variation of the knapsack problem with a capacity C and N items. However, you can take multiple copies of the same item. Which algorithmic paradigm best addresses this scenario and provides an efficient solution?
Dynamic Programming with a 2D table
Depth First Search with Backtracking
Divide and Conquer
Greedy Algorithm
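A minimal Python sketch of the dynamic-programming approach the question above points toward (the unbounded knapsack): because each item may be reused, dp[c] can draw on dp[c - weight] from the same table, giving an O(N * C) solution. A 1D table is shown here; a 2D table over items and capacities expresses the same recurrence.

    def unbounded_knapsack(capacity, items):
        # items: list of (weight, value) pairs; each item may be taken any number of times
        dp = [0] * (capacity + 1)        # dp[c] = best value achievable with capacity c
        for c in range(1, capacity + 1):
            for weight, value in items:
                if weight <= c:
                    dp[c] = max(dp[c], dp[c - weight] + value)
        return dp[capacity]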
You have two algorithms, A and B, for the same problem. Algorithm A's running time is O(n log n) and Ω(n), while Algorithm B's running time is O(n^2) and Ω(log n). Which of the following statements is always true?
Algorithm A is faster than Algorithm B for sufficiently large input sizes.
Algorithm A is faster than Algorithm B for all input sizes.
We cannot definitively compare the performance of the two algorithms based on the given information.
Algorithm B is faster than Algorithm A for all input sizes.
You need to choose an algorithm for a real-time system with strict timing constraints. Algorithm X has a time complexity of O(n log n) in the worst case and Θ(1) in the best case. Algorithm Y has a time complexity of Θ(n) in all cases. Which algorithm is a safer choice and why?
Algorithm Y, because its time complexity is consistent and predictable.
Algorithm X, because its best-case complexity is excellent.
Neither algorithm is suitable for a real-time system.
Both algorithms are equally suitable for a real-time system.
You are tasked with optimizing a critical function in a real-time trading system. Your theoretical analysis suggests Algorithm A (O(log n)) is superior to Algorithm B (O(n)). However, benchmarking reveals Algorithm B performs better for your typical data size. What is the MOST likely reason for this discrepancy?
Theoretical analysis is unreliable and cannot predict real-world performance.
Algorithm A's hidden constant factors are significantly larger than Algorithm B's, making it slower for smaller input sizes.
Benchmarking is flawed and does not accurately represent the real-world scenario.
The trading system's hardware is optimized for linear-time algorithms like Algorithm B.
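A small illustrative sketch of the hidden-constant-factor effect behind the question above; the per-step costs below are hypothetical numbers chosen for illustration, not measurements from any real system.

    import math

    COST_A_PER_STEP = 500.0    # assumed large hidden constant for Algorithm A, O(log n)
    COST_B_PER_STEP = 2.0      # assumed small hidden constant for Algorithm B, O(n)

    def estimated_cost_a(n):
        return COST_A_PER_STEP * math.log2(n)

    def estimated_cost_b(n):
        return COST_B_PER_STEP * n

    for n in (100, 1_000, 10_000, 100_000):
        faster = "A" if estimated_cost_a(n) < estimated_cost_b(n) else "B"
        print(f"n = {n}: cheaper algorithm under these assumed constants is {faster}")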
Which of the following statements about best-case, worst-case, and average-case analysis is FALSE?
Best-case analysis considers the input that results in the algorithm's fastest possible execution.
Worst-case analysis always provides an upper bound on the algorithm's runtime for any input.
Worst-case analysis is typically the most important to consider when designing for performance-critical applications.
Average-case analysis attempts to determine the average runtime over all possible inputs.
Which notation is most appropriate to describe the best-case time complexity of an algorithm where the input size doesn't affect the running time?
Big-Theta (Θ)
Big-Omega (Ω)
Little-omega (ω)
Big-O (O)
Consider a dynamic array implementation where resizing to double the capacity takes O(n) time. If we perform n insertions sequentially, what is the amortized time complexity per insertion?