You have an algorithm with a time complexity of O(2^n). If you double the input size, how would you expect the execution time to be affected?
It increases exponentially.
It increases by a factor of n.
It doubles.
It remains roughly the same.
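As a quick sanity check outside the quiz itself (a minimal sketch; the function name `exponential_steps` is illustrative): for a running time proportional to 2^n, doubling the input does not merely double the time, it squares it, since 2^(2n) = (2^n)^2.

```python
def exponential_steps(n: int) -> int:
    # Work proportional to 2^n, e.g. enumerating all subsets of n items.
    return 2 ** n

n = 10
t1 = exponential_steps(n)       # steps for input size n
t2 = exponential_steps(2 * n)   # steps after doubling the input size
# Doubling the input squares the step count: 2^(2n) == (2^n)^2.
assert t2 == t1 ** 2
```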
Consider an algorithm with a best-case time complexity of O(1) and a worst-case time complexity of O(n). Which of the following statements is ALWAYS true?
The algorithm's time complexity cannot be determined solely from the best- and worst-case scenarios.
The average-case time complexity is also O(n).
The algorithm's performance is independent of the input data.
The algorithm will always have a time complexity of O(1) or O(n).
What is the Big-O time complexity of an algorithm that uses two nested loops, each iterating over an array of size 'n'?
O(n^2)
O(1)
O(log n)
O(n)
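To see where O(n^2) comes from, here is a minimal sketch (the helper name `count_pair_visits` is illustrative): the inner loop runs n times for each of the outer loop's n iterations, giving n * n operations in total.

```python
def count_pair_visits(arr):
    """Count how many times the body of two nested loops executes."""
    ops = 0
    for _ in arr:        # outer loop: n iterations
        for _ in arr:    # inner loop: n iterations per outer iteration
            ops += 1
    return ops           # n * n total

assert count_pair_visits(list(range(7))) == 49
```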
Consider a binary search algorithm on a sorted array. In the worst-case scenario, how many comparisons are required to find a target element?
n
n/2
log₂(n) + 1
1
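The bound above can be checked directly. Below is a standard iterative binary search instrumented to count comparisons (a sketch; the function name is illustrative): each iteration halves the search range, so even a failed search makes at most ⌊log₂(n)⌋ + 1 comparisons.

```python
import math

def binary_search_comparisons(arr, target):
    """Search a sorted list; return (index or None, comparison count)."""
    lo, hi = 0, len(arr) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1          # one probe of the array per iteration
        if arr[mid] == target:
            return mid, comparisons
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, comparisons

arr = list(range(1000))
_, comps = binary_search_comparisons(arr, -1)  # absent element: worst case
assert comps <= math.floor(math.log2(len(arr))) + 1
```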
What is the time complexity of searching for an element in an unsorted array of size 'n' in the worst-case scenario?
O(n)
O(1)
O(log n)
O(n^2)
You need to design an algorithm that operates on a very large dataset. Which time complexity should you aim for to ensure reasonable performance?
O(n log n)
O(n^3)
O(n!)
O(2^n)
Which asymptotic notation provides an upper bound on an algorithm's runtime, but the bound may not be tight?
Big-O (O)
Little-o (o)
Big Omega (Ω)
Big Theta (Θ)
Which notation signifies that a function 'f(n)' grows strictly slower than another function 'g(n)' as 'n' approaches infinity?
Little-o (o)
Big-O (O)
Big Omega (Ω)
Big Theta (Θ)
What is the time complexity of Dijkstra's algorithm for finding the shortest path in a graph with V vertices and E edges using an adjacency list and a priority queue?
O(E log V)
O(V^2)
O(V + E)
O(V log E)
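For reference, a minimal sketch of the binary-heap variant the question assumes (the graph and function name are illustrative): each edge relaxation may push one entry onto the heap, and each heap operation costs O(log V), giving O(E log V) overall (often written O((V + E) log V)).

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source.

    adj maps each node to a list of (neighbor, weight) pairs; weights
    must be non-negative. Uses a binary heap as the priority queue.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
assert dijkstra(adj, "a")["c"] == 3  # a -> b -> c beats the direct edge
```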
Which of the following statements about the relationship between theoretical time complexity and real-world performance is FALSE?
Constant factors and lower-order terms ignored in Big O notation can significantly impact real-world performance.
Real-world data often exhibits patterns that can be exploited to improve performance beyond theoretical predictions.
An algorithm with better theoretical complexity will always outperform an algorithm with worse complexity in practice.
Hardware characteristics like CPU speed and cache size can influence the actual runtime of an algorithm.