What is the time complexity of calculating the nth Fibonacci number using the dynamic programming approach?
O(2^n)
O(n^2)
O(n)
O(n log n)
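The dynamic-programming approach from this question can be sketched as follows (a minimal illustration; the function name `fib_dp` is my own). The single loop of n-1 constant-time iterations is what makes it O(n):

```python
def fib_dp(n):
    """Compute the nth Fibonacci number bottom-up in O(n) time and O(1) space."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        # Each of the n-1 iterations does constant work -> O(n) overall.
        prev, curr = curr, prev + curr
    return curr
```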
Which asymptotic notation provides an upper bound on an algorithm's runtime, where the bound is not necessarily tight?
Big-O (O)
Little-o (o)
Big Omega (Ω)
Big Theta (Θ)
You need to design an algorithm that operates on a very large dataset. Which time complexity should you aim for to ensure reasonable performance?
O(n^3)
O(n!)
O(2^n)
O(log n)
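Binary search is the classic O(log n) technique the question points toward: each comparison halves the remaining search space (a hedged sketch; the function name `binary_search` is illustrative):

```python
def binary_search(sorted_arr, target):
    """Return the index of target in sorted_arr, or -1 if absent.
    The search range halves on every iteration -> O(log n) comparisons."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1  # target can only be in the right half
        else:
            hi = mid - 1  # target can only be in the left half
    return -1
```

Note that this requires the input to be sorted; on unsorted data you are back to linear scanning.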
The Floyd-Warshall algorithm is used to find the shortest paths between:
All pairs of vertices in a graph
None of the above
A vertex and all other vertices in a graph
Two specified vertices in a graph
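A compact sketch of Floyd-Warshall's all-pairs behavior (illustrative only; the input convention of an adjacency matrix with `float('inf')` for missing edges is my assumption):

```python
def floyd_warshall(dist):
    """All-pairs shortest paths.
    dist: n x n matrix where dist[i][j] is the edge weight (float('inf') if
    no edge) and dist[i][i] == 0. Returns a new matrix of shortest distances.
    Three nested loops over n vertices -> O(n^3) time."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```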
Which of the following scenarios typically leads to an algorithm with linear time complexity, O(n)?
Finding the largest element in a max-heap.
Performing a linear search on a sorted array.
Calculating the Fibonacci sequence recursively.
Searching for a value in a balanced binary search tree.
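For contrast with the linear-time scenario, the naive recursive Fibonacci mentioned in the options is exponential, not linear (a sketch; `fib_recursive` is my own name):

```python
def fib_recursive(n):
    """Naive recursion: each call spawns two more calls, so the call tree
    grows exponentially -- roughly O(2^n) (more precisely O(phi^n))."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)
```

The same subproblems are recomputed over and over, which is exactly the redundancy the dynamic-programming version from the first question eliminates.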
If an algorithm has a time complexity of Ω(n log n), what can you conclude about its best-case runtime?
It will always be slower than an algorithm with O(n^2) time complexity.
Its best-case runtime grows at least as fast as n log n, up to a constant factor.
It will always be faster than an algorithm with O(n) time complexity.
It will have a constant runtime regardless of input size.
A linear search algorithm iterates through an unsorted array to find a target element. What is its average-case time complexity?
O(1)
O(log n)
O(n)
O(n^2)
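A minimal linear-search sketch for this question (function name is illustrative). If the target is equally likely to be at any position, the scan examines about n/2 elements on average, which is still O(n):

```python
def linear_search(arr, target):
    """Scan left to right until the target is found.
    Average case ~n/2 comparisons, worst case n -> both O(n)."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1
```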
What is the time complexity of searching for an element in an unsorted array of size 'n' in the worst-case scenario?
O(1)
O(log n)
O(n)
O(n log n)
Which notation signifies that a function 'f(n)' grows strictly slower than another function 'g(n)' as 'n' approaches infinity?
Little-o (o)
Big-O (O)
Big Omega (Ω)
Big Theta (Θ)
Which of the following is NOT a common reason why real-world performance might differ from theoretical time complexity analysis?
The choice of algorithm used
The programming language and compiler optimizations
Caching and data locality
The specific hardware being used
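To illustrate the caching and data-locality point: the two functions below perform the same O(n^2) number of additions, yet the memory-order (row-by-row) traversal is typically faster in practice on large matrices because it accesses contiguous memory (a sketch; the function names are mine, and the speed difference is a general expectation, not a measured result):

```python
def sum_row_major(matrix):
    """Traverse in memory order, row by row: cache-friendly access pattern."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total

def sum_col_major(matrix):
    """Traverse column by column: identical operation count, but the strided
    access pattern can cause many more cache misses on large matrices."""
    total = 0
    for col in range(len(matrix[0])):
        for row in matrix:
            total += row[col]
    return total
```

Both return the same answer; only the constant factors hidden by the asymptotic notation differ.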