Which of the following notations provides the tightest possible bound on the growth rate of an algorithm's running time?
Big-Omega (Ω)
Big-O (O)
Little-o (o)
Big-Theta (Θ)
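(For reference: O(g) bounds a function from above only, i.e. f(n) <= c*g(n) for all sufficiently large n; Ω(g) bounds it from below only; little-o is a strict upper bound; and Θ(g) bounds it from both sides, i.e. c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.)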
An algorithm performs the following operations: it iterates through an array of size n, and for each element, it performs a binary search on a sorted array of size m. What is the overall time complexity of the algorithm?
O(n + log m)
O(m log n)
O(n log m)
O(n * m)
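A minimal Python sketch of the pattern described in this question; the function and variable names are illustrative only:

    import bisect

    def count_matches(items, sorted_ref):
        # For each of the n items, do one O(log m) binary search in sorted_ref.
        hits = 0
        for x in items:                               # n iterations
            i = bisect.bisect_left(sorted_ref, x)     # O(log m) per search
            if i < len(sorted_ref) and sorted_ref[i] == x:
                hits += 1
        return hits                                   # total work: O(n log m)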
Consider a dynamic array implementation where resizing to double the capacity takes O(n) time. If we perform n insertions sequentially into an initially empty array, what is the amortized time complexity per insertion?
O(n)
O(n log n)
O(1)
O(log n)
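A rough Python sketch of the doubling strategy, counting element copies to illustrate the amortized cost; the class and attribute names are illustrative only:

    class DynArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity
            self.copies = 0                      # total elements moved by resizes

        def append(self, value):
            if self.size == self.capacity:       # array full: double capacity, O(n) copy
                new_data = [None] * (2 * self.capacity)
                for i in range(self.size):
                    new_data[i] = self.data[i]
                    self.copies += 1
                self.data = new_data
                self.capacity *= 2
            self.data[self.size] = value
            self.size += 1

    # After n appends the total number of copies is less than 2n,
    # so the average (amortized) cost per append is constant.
    a = DynArray()
    for k in range(1000):
        a.append(k)
    print(a.copies)   # 1023, which is < 2 * 1000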
Consider an algorithm that iterates through a sorted array of size n. In the best-case scenario, the algorithm finds the desired element in the first comparison. What is the time complexity of this algorithm in the best case?
O(n)
O(log n)
O(1)
O(n^2)
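A minimal sketch of such a scan; when the match sits at index 0, the best case costs a single comparison:

    def linear_scan(arr, target):
        for i, x in enumerate(arr):   # best case: the first comparison succeeds
            if x == target:
                return i
        return -1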
You have two algorithms, A and B, for the same problem. A has a time complexity of O(n log n) and Ω(n), while B has a time complexity of O(n^2) and Ω(log n). Which of the following statements is always true?
Algorithm A is faster than Algorithm B for all input sizes.
Algorithm B is faster than Algorithm A for all input sizes.
We cannot definitively compare the performance of the two algorithms based on the given information.
Algorithm A is faster than Algorithm B for sufficiently large input sizes.
Consider a nested loop structure where the outer loop runs n times and the inner loop runs from 1 to i, where 'i' is the current iteration of the outer loop. What is the total number of iterations of the inner loop?
log n
n(n+1)/2
n
n^2
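A short sketch that counts the inner-loop iterations for the structure described above and checks the count against the closed form n(n+1)/2:

    def inner_iterations(n):
        count = 0
        for i in range(1, n + 1):        # outer loop: i = 1..n
            for _ in range(1, i + 1):    # inner loop runs i times
                count += 1
        return count

    n = 10
    print(inner_iterations(n), n * (n + 1) // 2)   # both print 55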
You need to sort a large dataset. Algorithm A has a time complexity of O(n log n) and a space complexity of O(1). Algorithm B has a time complexity of O(n) but requires O(n) extra space. Which algorithm is preferable if memory usage is severely constrained?
Algorithm B
Insufficient information to determine
Both are equally preferable
Algorithm A
You are comparing two sorting algorithms: Algorithm X with O(n log n) average-case complexity and Algorithm Y with O(n^2) worst-case complexity. In your benchmarks, Algorithm Y consistently outperforms Algorithm X. What is a PLAUSIBLE explanation for this observation?
The input data used in the benchmark always triggers the worst-case scenario for Algorithm X.
Algorithm Y is inherently more efficient than Algorithm X for all input sizes and distributions.
The benchmark is flawed and does not accurately measure the execution time of the algorithms.
The input data, despite being randomly generated, consistently represents a special case where Algorithm Y excels.
You are tasked with optimizing a critical function in a real-time trading system. Your theoretical analysis suggests Algorithm A (O(log n)) is superior to Algorithm B (O(n)). However, benchmarking reveals Algorithm B performs better for your typical data size. What is the MOST likely reason for this discrepancy?
The trading system's hardware is optimized for linear-time algorithms like Algorithm B.
Theoretical analysis is unreliable and cannot predict real-world performance.
Algorithm A's hidden constant factors are significantly larger than Algorithm B's, making it slower for smaller input sizes.
Benchmarking is flawed and does not accurately represent the real-world scenario.
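One way to see the effect of constant factors: with hypothetical per-operation costs (the 100 and 2 below are made up purely for illustration), the O(log n) algorithm can lose until n passes a crossover point:

    import math

    def cost_a(n):
        return 100 * math.log2(n)    # hypothetical: large constant, logarithmic growth

    def cost_b(n):
        return 2 * n                 # hypothetical: small constant, linear growth

    for n in (8, 64, 512, 4096):
        print(n, cost_a(n), cost_b(n))
    # For small n, cost_a exceeds cost_b; the ordering flips once n is large enough
    # (here, somewhere between n = 64 and n = 512).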
You are given a sorted array of n integers. You want to determine if there exists a pair of elements in the array that sum up to a specific target value. Which algorithm provides the most efficient time complexity?
Nested loops to check all pairs
Using a hash table to store seen elements
Using two pointers, one at the beginning and one at the end of the array
Sorting the array and then using binary search
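For reference, a Python sketch of the two-pointer approach on a sorted array; it runs in O(n) time with O(1) extra space, and the names are illustrative only:

    def has_pair_with_sum(sorted_arr, target):
        lo, hi = 0, len(sorted_arr) - 1
        while lo < hi:
            s = sorted_arr[lo] + sorted_arr[hi]
            if s == target:
                return True
            elif s < target:      # need a larger sum: move the left pointer right
                lo += 1
            else:                 # need a smaller sum: move the right pointer left
                hi -= 1
        return False

    print(has_pair_with_sum([1, 3, 5, 8, 11], 13))   # True (5 + 8)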