Which of the following is the LEAST reliable indicator of an algorithm's real-world performance?
Profiling results on a representative dataset
Benchmarking results on different hardware configurations
Worst-case time complexity
Average-case time complexity
What is the time complexity of the following code snippet?
def mystery(n):
    if n <= 1:
        return 1
    return mystery(n-1) + mystery(n-2)
O(n log n)
O(log n)
O(2^n)
O(n)
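For intuition about the O(2^n) answer, here is a hedged, instrumented variant of the snippet above (the call counter is added purely for illustration); the number of calls grows roughly like phi^n (~1.6^n), which the O(2^n) bound covers.

call_count = 0

def mystery_counted(n):
    # Same recurrence as the snippet above: T(n) = T(n-1) + T(n-2) + O(1)
    global call_count
    call_count += 1
    if n <= 1:
        return 1
    return mystery_counted(n - 1) + mystery_counted(n - 2)

for n in (10, 20, 30):
    call_count = 0
    mystery_counted(n)
    print(n, call_count)   # call counts grow exponentially with n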
You are given a sorted array of n integers. You want to determine whether there exists a pair of elements in the array that sums to a specific target value. Which approach achieves the most efficient time complexity?
Sorting the array and then using binary search
Using a hash table to store seen elements
Nested loops to check all pairs
Using two pointers, one at the beginning and one at the end of the array
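A minimal sketch of the two-pointer approach (the intended answer), assuming the input list is already sorted; the function name and sample values are illustrative.

def has_pair_with_sum(sorted_nums, target):
    # O(n) time, O(1) extra space: move the two pointers inward
    lo, hi = 0, len(sorted_nums) - 1
    while lo < hi:
        s = sorted_nums[lo] + sorted_nums[hi]
        if s == target:
            return True
        if s < target:
            lo += 1      # need a larger sum
        else:
            hi -= 1      # need a smaller sum
    return False

print(has_pair_with_sum([1, 3, 4, 7, 11], 14))  # True (3 + 11)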
A hash table using open addressing has a load factor of 0.8. What is the expected average-case time complexity for a successful search operation?
It depends on the hash function
O(1)
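For the load-factor question, the classic textbook estimate (assuming linear probing and a reasonably uniform hash function) puts the expected probes for a successful search at (1/2)(1 + 1/(1 - alpha)); the key point is that the result depends only on alpha, not on n.

alpha = 0.8
expected_probes = 0.5 * (1 + 1 / (1 - alpha))   # classic linear-probing estimate
print(expected_probes)   # 3.0 probes on average -- a constant, i.e. O(1)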
Consider a nested loop structure where the outer loop runs n times and, on the i-th iteration of the outer loop, the inner loop runs from 1 to i. What is the total number of inner-loop iterations?
n
n^2
log n
n(n+1)/2
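As a sanity check on the arithmetic-series answer, a small sketch that literally counts the inner-loop iterations and compares the total with n(n+1)/2:

def count_inner_iterations(n):
    total = 0
    for i in range(1, n + 1):        # outer loop runs n times
        for _ in range(1, i + 1):    # inner loop runs i times
            total += 1
    return total

n = 100
print(count_inner_iterations(n), n * (n + 1) // 2)   # both print 5050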
You need to sort a large dataset. Algorithm A has a time complexity of O(n log n) and a space complexity of O(1). Algorithm B has a time complexity of O(n) but requires O(n) extra space. Which algorithm is preferable if memory usage is severely constrained?
Algorithm A
Both are equally preferable
Insufficient information to determine
Algorithm B
Imagine you're optimizing a QuickSort algorithm for an array of size N. In the worst-case scenario, how does choosing a pivot element randomly impact the time complexity compared to always selecting the first element as the pivot?
Random pivot selection changes the worst-case time complexity to O(N log N)
Random pivot selection makes the worst-case time complexity unpredictable
Random pivot selection reduces the worst-case time complexity to O(N)
Random pivot selection has no impact on the worst-case time complexity; it remains O(N^2)
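A hedged sketch of randomized pivot selection: choosing the pivot uniformly at random makes the O(N^2) worst case extremely unlikely on any fixed input, but it does not eliminate it, which is the point of the question. This simple list-based version is for illustration only.

import random

def quicksort(a):
    # Randomized-pivot quicksort: expected O(N log N), worst case still O(N^2)
    if len(a) <= 1:
        return a
    pivot = random.choice(a)          # random pivot instead of a[0]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]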
Consider an algorithm that iterates through a sorted array of size n. In the best-case scenario, the algorithm finds the desired element in the first comparison. What is the time complexity of this algorithm in the best case?
Which statement about benchmarking is TRUE?
Benchmarking is primarily used for comparing the theoretical time complexity of different algorithms.
Benchmarking is only useful for large-scale applications and not for smaller projects.
Benchmarking results are always generalizable across different hardware and software environments.
Benchmarking focuses on measuring the execution time of an algorithm in a controlled environment.
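A minimal benchmarking sketch in the spirit of the correct option: measuring execution time in a reasonably controlled way with Python's timeit module. The function and input data here are placeholders, not a prescribed benchmark.

import timeit

def algorithm_under_test(data):
    # placeholder workload -- substitute the real algorithm here
    return sorted(data)

data = list(range(10_000, 0, -1))
# Repeat several times and take the best run to reduce environmental noise.
best = min(timeit.repeat(lambda: algorithm_under_test(data), number=10, repeat=5))
print(f"best of 5 runs: {best:.4f} s for 10 executions")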
What is the time complexity of inserting an element at the beginning of a dynamically sized array that needs to resize by doubling its capacity when full?
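For the last question, a hedged sketch of the reasoning: even though appending with capacity doubling is amortized O(1), inserting at the beginning must shift every existing element, so the operation is O(n). Python's list already grows geometrically, so a plain list illustrates the shift cost; the helper name is illustrative.

def insert_at_front(arr, value):
    # Appending is amortized O(1) thanks to capacity doubling,
    # but every existing element still has to shift right by one slot: O(n).
    arr.append(None)                  # may trigger a resize (amortized O(1))
    for i in range(len(arr) - 1, 0, -1):
        arr[i] = arr[i - 1]           # shift the n existing elements
    arr[0] = value

nums = [2, 3, 5]
insert_at_front(nums, 1)
print(nums)   # [1, 2, 3, 5]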