Which of the following statements about best-case, worst-case, and average-case analysis is FALSE?
Worst-case analysis is typically the most important to consider when designing for performance-critical applications.
Best-case analysis considers the input that results in the algorithm's fastest possible execution.
Worst-case analysis always provides an upper bound on the algorithm's runtime for any input.
Average-case analysis attempts to determine the average runtime over all possible inputs.
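For context, linear search is a standard example where the three analyses give different answers. A minimal sketch (illustrative only, not part of the quiz):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent.

    Best case: target sits at index 0, so a single comparison suffices.
    Worst case: target is last or absent, so all n elements are inspected.
    Average case (assuming the target is equally likely at any position):
    roughly n/2 comparisons.
    """
    for index, value in enumerate(arr):
        if value == target:
            return index
    return -1
```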
You have two algorithms, A and B, for the same problem. A has a time complexity of O(n log n) and Ω(n), while B has a time complexity of O(n^2) and Ω(log n). Which of the following statements is always true?
Algorithm A is faster than Algorithm B for all input sizes.
Algorithm A is faster than Algorithm B for sufficiently large input sizes.
We cannot definitively compare the performance of the two algorithms based on the given information.
Algorithm B is faster than Algorithm A for all input sizes.
Which notation is most appropriate to describe the best-case time complexity of an algorithm where the input size doesn't affect the running time?
Big-Omega (Ω)
Little-omega (ω)
Big-O (O)
Big-Theta (Θ)
You need to choose an algorithm for a real-time system with strict timing constraints. Algorithm X has a time complexity of O(n log n) in the worst case and Θ(1) in the best case. Algorithm Y has a time complexity of Θ(n) in all cases. Which algorithm is a safer choice and why?
Both algorithms are equally suitable for a real-time system.
Algorithm Y, because its time complexity is consistent and predictable.
Algorithm X, because its best-case complexity is excellent.
Neither algorithm is suitable for a real-time system.
Consider a social network graph with V users and E friendships. You want to find the shortest path between two users (A and B) in terms of friendship connections. What is the time complexity of the most efficient algorithm for this scenario in the worst-case?
O(E log V) using Dijkstra's Algorithm with a binary heap
O(V + E) using Breadth-First Search
O(V!) using brute-force Depth-First Search
O(V^3) using the Floyd-Warshall Algorithm
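Because the friendship graph is unweighted, the number of edges on a path is its length, which is why plain BFS is the efficient choice here. A minimal sketch, assuming the graph is given as an adjacency list (a dict mapping each user to a list of friends); the names graph, start, and goal are illustrative:

```python
from collections import deque

def shortest_hops(graph, start, goal):
    """BFS over an unweighted graph given as {user: [friends, ...]}.

    Returns the number of friendship edges on a shortest path from start
    to goal, or -1 if goal is unreachable. Worst-case time is O(V + E).
    """
    if start == goal:
        return 0
    dist = {start: 0}
    queue = deque([start])
    while queue:
        user = queue.popleft()
        for friend in graph.get(user, []):
            if friend not in dist:
                dist[friend] = dist[user] + 1
                if friend == goal:
                    return dist[friend]
                queue.append(friend)
    return -1
```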
Imagine you're optimizing a QuickSort algorithm for an array of size N. In the worst-case scenario, how does choosing a pivot element randomly impact the time complexity compared to always selecting the first element as the pivot?
Random pivot selection changes the worst-case time complexity to O(N log N)
Random pivot selection makes the worst-case time complexity unpredictable
Random pivot selection reduces the worst-case time complexity to O(N)
Random pivot selection has no impact on the worst-case time complexity; it remains O(N^2)
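For reference, a hedged sketch of the randomized variant discussed above (an out-of-place version, kept short for clarity). Choosing the pivot uniformly at random makes the expected running time O(N log N) on every input, but an unlucky sequence of pivot choices can still produce the O(N^2) worst case:

```python
import random

def quicksort(arr):
    """Randomized QuickSort sketch: the pivot is chosen uniformly at random.

    No fixed adversarial input can force the worst case, so the expected
    running time is O(N log N); the worst case itself remains O(N^2).
    """
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```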
You are designing a data structure that supports two operations: 'insert' and 'find median'. 'Insert' adds an element in O(log n) time. Which of the following allows the 'find median' operation to also be achieved in O(log n) time?
A standard binary search tree
A hash table
A sorted array
A self-balancing binary search tree augmented with subtree sizes (an order-statistic tree)
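The augmented-tree option relies on the order-statistic idea: store the size of each subtree so the k-th smallest key, and hence the median, can be found by walking a single root-to-leaf path. The sketch below is an illustration only; it omits rebalancing, and it is the self-balancing variant (e.g. an AVL or red-black tree maintaining the same size field) that guarantees O(log n) height and therefore O(log n) insert and find-median:

```python
class Node:
    """BST node augmented with the size of its subtree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None
        self.size = 1  # nodes in this subtree, including self

def _size(node):
    return node.size if node else 0

def insert(root, key):
    """Plain BST insert that maintains subtree sizes (no rebalancing here)."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    root.size = 1 + _size(root.left) + _size(root.right)
    return root

def kth_smallest(root, k):
    """Select the k-th smallest key (1-based) using the size fields."""
    left = _size(root.left)
    if k == left + 1:
        return root.key
    if k <= left:
        return kth_smallest(root.left, k)
    return kth_smallest(root.right, k - left - 1)

def median(root):
    """Lower median = element of rank ceil(n/2); one rank query, O(height) time."""
    n = _size(root)
    return kth_smallest(root, (n + 1) // 2)
```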
You need to design an algorithm to solve a variation of the knapsack problem with a capacity C and N items. However, you can take multiple copies of the same item. Which algorithmic paradigm best addresses this scenario and provides an efficient solution?
Divide and Conquer
Dynamic Programming with a 2D table
Depth First Search with Backtracking
Greedy Algorithm
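For the dynamic-programming answer above, here is a hedged sketch of the unbounded-knapsack recurrence (multiple copies of each item allowed). A 1D table indexed by capacity is enough because items can be reused; the 2D item-by-capacity formulation named in the option computes the same values with an extra dimension. The (weight, value) item format is an assumption made for illustration:

```python
def unbounded_knapsack(capacity, items):
    """Unbounded knapsack via dynamic programming in O(N * C) time.

    items is a list of (weight, value) pairs; each item may be taken any
    number of times. best[c] holds the maximum value achievable with total
    weight at most c.
    """
    best = [0] * (capacity + 1)
    for c in range(1, capacity + 1):
        for weight, value in items:
            if weight <= c:
                best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```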
A randomized algorithm has a worst-case running time of O(n^2), but its expected running time is O(n log n). What does this imply about the algorithm's performance?
The algorithm's running time is independent of the input.
On average, the algorithm performs better than its worst-case bound.
The algorithm always runs in O(n log n) time.
The algorithm is unsuitable for practical use due to its unpredictable running time.
Consider a nested loop structure where the outer loop runs for i = 1 to n and, on each outer iteration, the inner loop runs from 1 to i (inclusive). What is the total number of inner-loop iterations?
n
n^2
n(n+1)/2
log n
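To check the closed form, the brute-force count below (a small illustrative sketch, assuming both loop bounds are inclusive as stated) matches n(n+1)/2, i.e. 1 + 2 + ... + n:

```python
def inner_iterations(n):
    """Count inner-loop iterations for the nested structure in the question."""
    count = 0
    for i in range(1, n + 1):      # outer loop: i = 1 .. n
        for _ in range(1, i + 1):  # inner loop: 1 .. i, i.e. i iterations
            count += 1
    return count

# Sanity check against the closed form n(n+1)/2.
assert all(inner_iterations(n) == n * (n + 1) // 2 for n in range(50))
```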