You need to design an algorithm that operates on a very large dataset. Which time complexity should you aim for to ensure reasonable performance?
O(n^3)
O(log n)
O(n!)
O(2^n)
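As a rough feel for why O(log n) is the target here, a minimal Python sketch (the value n = 20 is chosen purely for illustration) compares each candidate's growth:

```python
import math

# Evaluate each candidate complexity at a modest input size n = 20.
n = 20
print(math.log2(n))       # logarithmic: ~4.3 "steps"
print(n ** 3)             # cubic: 8000
print(2 ** n)             # exponential: 1048576
print(math.factorial(n))  # factorial: about 2.4e18
```

Even at n = 20, the factorial and exponential counts are already astronomically larger than the logarithmic one, and the gap widens rapidly as n grows.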
Which asymptotic notation provides an upper bound on an algorithm's runtime, but the bound may not be tight?
Big Omega (Ω)
Big Theta (Θ)
Little-o (o)
Big-O (O)
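For reference, the standard textbook definitions of these notations can be sketched as follows (c and n_0 denote positive constants):

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \quad \text{(upper bound, possibly loose)}
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \quad \text{(lower bound)}
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n)) \quad \text{(tight bound)}
f(n) = o(g(n))      \iff \lim_{n \to \infty} f(n)/g(n) = 0 \quad \text{(strict upper bound, never tight)}
```

Only Big-O asserts an upper bound without requiring it to be tight; little-o is strictly non-tight, and Theta forces the bound to hold in both directions.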
Which of the following scenarios typically leads to an algorithm with linear time complexity, O(n)?
Finding the largest element in a max-heap.
Calculating the Fibonacci sequence recursively.
Searching for a value in a balanced binary search tree.
Performing a linear search on a sorted array.
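Linear search is O(n) because, in the worst case, every element is inspected exactly once. A minimal Python sketch (the function name is illustrative):

```python
def linear_search(arr, target):
    """Scan each element once; worst case inspects all n elements: O(n)."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # target absent: every element was examined

# Note: sortedness does not help a *linear* search; it only enables
# the O(log n) binary search as an alternative.
```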
An algorithm processes an input of size 'n' by dividing it in half repeatedly until the size becomes 1. What is the most likely time complexity of this algorithm?
O(n log n)
O(n)
O(n^2)
O(log n)
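Repeatedly halving n until it reaches 1 takes about log2(n) iterations, which is where the O(log n) bound comes from. A minimal Python sketch (function name is illustrative):

```python
def halving_steps(n):
    """Count how many times n can be halved before reaching 1: O(log n) steps."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# 1024 = 2**10, so it halves to 1 in exactly 10 steps.
```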
What is the Big-O time complexity of an algorithm that uses two nested loops, each iterating over an array of size 'n'?
O(1)
O(n)
O(n log n)
O(n^2)
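With two nested loops over the same array, the inner body runs n times for each of the n outer iterations, giving n * n = n^2 operations. A minimal Python sketch (function name is illustrative):

```python
def pair_operations(arr):
    """Two nested loops over an n-element array execute n * n times: O(n^2)."""
    count = 0
    for _ in arr:       # outer loop: n iterations
        for _ in arr:   # inner loop: n iterations per outer iteration
            count += 1
    return count

# For a 5-element array the body runs 5 * 5 = 25 times.
```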
Which of the following is NOT a common reason why real-world performance might differ from theoretical time complexity analysis?
The programming language and compiler optimizations
Caching and data locality
The specific hardware being used
The choice of algorithm used
An algorithm has a time complexity of O(n log n). Which of the following statements is NOT necessarily true?
The algorithm will always be slower than an algorithm with O(n) complexity.
For large enough input sizes, the runtime is bounded above by some constant multiple of n log n.
The algorithm's best-case runtime could be O(log n).
The algorithm's runtime grows at most as fast as n log n.
Which of the following sorting algorithms has the best average-case time complexity?
Insertion Sort
Bubble Sort
Merge Sort
Selection Sort
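Merge sort achieves O(n log n) in the average (and worst) case by halving the input, sorting each half recursively, and merging in linear time; the three comparison sorts listed alongside it average O(n^2). A minimal Python sketch of the idea (not a production implementation):

```python
def merge_sort(arr):
    """Divide in half, sort each half recursively, merge: O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```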
You have an algorithm with a time complexity of O(2^n). If you double the input size, how would you expect the execution time to be affected?
It is roughly squared, increasing by a further factor of 2^n.
It doubles.
It increases by a factor of n.
It remains roughly the same.
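The effect of doubling n on a 2^n-step runtime follows directly from exponent rules: 2^(2n) = (2^n)^2, so the runtime is multiplied by another factor of 2^n, i.e. squared. A quick Python check (function name is illustrative):

```python
def runtime_ratio(n):
    """Ratio of a 2**n-step runtime at input size 2n versus n."""
    # 2**(2*n) / 2**n == 2**n, so doubling the input squares the runtime.
    return (2 ** (2 * n)) // (2 ** n)

# At n = 10, doubling the input multiplies the work by 2**10 = 1024,
# far more than merely doubling it.
```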
What does it mean for an algorithm to have a time complexity of Θ(1)?
The algorithm is guaranteed to be the fastest possible solution for the problem.
The algorithm's runtime is unpredictable and varies greatly with different inputs.
The runtime is constant and independent of the input size.
The runtime grows linearly with the input size.
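A classic Θ(1) operation is indexing into an array: the cost is the same whether the list holds ten elements or ten million. A minimal Python sketch (function name is illustrative):

```python
def first_element(arr):
    """List indexing in Python is constant time: Θ(1), independent of len(arr)."""
    return arr[0]

# Note: Θ(1) means constant cost, not *fast* or *optimal* --
# a constant-time step can still be expensive in absolute terms.
```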