You have two algorithms, A and B, for the same problem. A has a time complexity of O(n log n) and Ω(n), while B has a time complexity of O(n^2) and Ω(log n). Which of the following statements is always true?
We cannot definitively compare the performance of the two algorithms based on the given information.
Algorithm B is faster than Algorithm A for all input sizes.
Algorithm A is faster than Algorithm B for sufficiently large input sizes.
Algorithm A is faster than Algorithm B for all input sizes.
What is the time complexity of the following code snippet?
def mystery(n):
    if n <= 1:
        return 1
    return mystery(n-1) + mystery(n-2)
O(2^n)
O(n)
O(log n)
O(n log n)
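The snippet above is the classic naive Fibonacci-style recursion. One way to see the exponential blow-up is to instrument it to count recursive calls (the counter is an illustration added here, not part of the original snippet):

```python
def mystery(n, counter=None):
    # Instrumented version of the quiz snippet: counts recursive calls.
    if counter is None:
        counter = {"calls": 0}
    counter["calls"] += 1
    if n <= 1:
        return 1, counter["calls"]
    a, _ = mystery(n - 1, counter)
    b, _ = mystery(n - 2, counter)
    return a + b, counter["calls"]

# The call count roughly doubles as n grows by one,
# which is the signature of O(2^n) growth.
for n in (10, 15, 20):
    _, calls = mystery(n)
    print(n, calls)
```

Each call spawns two more until the base case, so the recursion tree has exponentially many nodes; memoizing results would collapse it to O(n).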
A randomized algorithm has a worst-case running time of O(n^2), but its expected running time is O(n log n). What does this imply about the algorithm's performance?
The algorithm is unsuitable for practical use due to its unpredictable running time.
On average, the algorithm performs better than its worst-case bound.
The algorithm's running time is independent of the input.
The algorithm always runs in O(n log n) time.
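Randomized quicksort is the textbook instance of this gap between worst-case and expected bounds; the question does not name an algorithm, so this concrete choice is an illustrative assumption:

```python
import random

def randomized_quicksort(arr):
    # Expected O(n log n): a uniformly random pivot makes a long run
    # of unbalanced splits unlikely. Worst case remains O(n^2).
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 2, 9, 1, 5, 6]))
```

The expectation is taken over the algorithm's own coin flips, not over the input distribution, so the O(n log n) expected bound holds for every input.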
Which notation is most appropriate to describe the best-case time complexity of an algorithm where the input size doesn't affect the running time?
Big-O (O)
Big-Theta (Θ)
Little-omega (ω)
Big-Omega (Ω)
You need to design an algorithm to solve a variation of the knapsack problem with a capacity C and N items. However, you can take multiple copies of the same item. Which algorithmic paradigm best addresses this scenario and provides an efficient solution?
Depth First Search with Backtracking
Divide and Conquer
Dynamic Programming with a 2D table
Greedy Algorithm
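The variant described, with unlimited copies of each item, is the unbounded knapsack problem, which dynamic programming solves in O(N·C) time. A minimal sketch, assuming weights and values are given as parallel lists (the 2D table from the answer choice compresses to one dimension because items are reusable):

```python
def unbounded_knapsack(capacity, weights, values):
    # best[c] = maximum total value achievable with total weight <= c.
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for w, v in zip(weights, values):
            if w <= c:
                # best[c - w] may already include this same item,
                # which is exactly what allows repeated copies.
                best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(unbounded_knapsack(10, [2, 3, 5], [3, 5, 9]))
```

Greedy by value-per-weight fails here for the same reason it fails on 0/1 knapsack; the DP table is what guarantees optimality.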
Consider an algorithm that iterates through a sorted array of size n. In the best-case scenario, the algorithm finds the desired element in the first comparison. What is the time complexity of this algorithm in the best case?
O(1)
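Linear search illustrates this best case: when the target sits at index 0, a single comparison suffices, regardless of n (the function name here is illustrative):

```python
def linear_search(arr, target):
    # Best case O(1): target is the first element examined.
    # Worst case O(n): target is last or absent.
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

print(linear_search([3, 7, 11, 15], 3))  # found on the first comparison
```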
Which statement about benchmarking is TRUE?
Benchmarking is only useful for large-scale applications and not for smaller projects.
Benchmarking is primarily used for comparing the theoretical time complexity of different algorithms.
Benchmarking focuses on measuring the execution time of an algorithm in a controlled environment.
Benchmarking results are always generalizable across different hardware and software environments.
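A controlled-environment measurement in Python typically goes through the standard `timeit` module, which runs a snippet many times on fixed input and reports elapsed time; the workload below is an illustrative assumption:

```python
import timeit

# Benchmark two ways of squaring a sequence under identical,
# controlled conditions (same machine, same data, same repeat count).
setup = "data = range(1000)"
t_comp = timeit.timeit("[x * x for x in data]", setup=setup, number=1000)
t_map = timeit.timeit("list(map(lambda x: x * x, data))", setup=setup, number=1000)
print(f"comprehension: {t_comp:.4f}s  map+lambda: {t_map:.4f}s")
```

The absolute numbers depend on the machine, interpreter, and load, which is why benchmark results do not transfer across environments.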
Which of the following is the LEAST reliable indicator of an algorithm's real-world performance?
Worst-case time complexity
Benchmarking results on different hardware configurations
Average-case time complexity
Profiling results on a representative dataset
Why is time complexity generally considered more important than space complexity in algorithm analysis?
Space complexity cannot be expressed using Big-O notation.
Modern computers have abundant memory, while efficient time utilization often remains a bottleneck.
Time complexity is always more difficult to analyze.
Space complexity is irrelevant in modern computing.
Consider a nested loop structure where the outer loop runs with i = 1 to n and the inner loop runs from 1 to i on each outer iteration. What is the total number of inner-loop iterations across the whole run?
n(n+1)/2
log n
n^2
n
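The inner-loop count is the triangular sum 1 + 2 + ... + n = n(n+1)/2, which a direct simulation confirms (a minimal sketch):

```python
def inner_iterations(n):
    # Outer loop: i = 1..n; inner loop: j = 1..i.
    count = 0
    for i in range(1, n + 1):
        for j in range(1, i + 1):
            count += 1
    return count

for n in (4, 10, 100):
    assert inner_iterations(n) == n * (n + 1) // 2
print(inner_iterations(10))  # 55, i.e. 10 * 11 / 2
```

Since n(n+1)/2 grows as n^2/2, the structure is Θ(n^2) overall, but the exact iteration count asked for is n(n+1)/2.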