What is the time complexity of searching for an element in an unsorted array of size 'n' in the worst-case scenario?
O(n)
O(1)
O(log n)
O(n^2)
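A minimal sketch of the linear search this question refers to (function and variable names are illustrative): in the worst case the target is absent or sits in the last position, so all n elements are inspected, giving O(n).

```python
def linear_search(arr, target):
    """Return the index of target in an unsorted list, or -1 if absent."""
    for i, value in enumerate(arr):   # worst case: checks all n elements
        if value == target:
            return i
    return -1                         # not found after n comparisons

# Worst case: the target is missing, so every element is examined.
print(linear_search([7, 3, 9, 1], 5))  # -1
```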
Dynamic programming aims to reduce the time complexity of an algorithm by:
Using a greedy approach to always make the locally optimal choice
Storing the results of overlapping subproblems to avoid redundant computations
Dividing the problem into smaller subproblems and solving them recursively
Employing a divide-and-conquer strategy to break down the problem into smaller instances
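A small illustrative sketch (not part of the question) of the idea behind storing overlapping subproblems: memoizing the naive Fibonacci recursion replaces its exponential blow-up with one computation per subproblem.

```python
from functools import lru_cache

@lru_cache(maxsize=None)              # cache results of overlapping subproblems
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)    # each subproblem is computed only once

print(fib(40))   # O(n) distinct calls instead of O(2^n)
```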
Consider a binary search algorithm on a sorted array. In the worst-case scenario, how many comparisons are required to find a target element?
1
n
n/2
log₂(n) + 1
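A hedged sketch of binary search with a comparison counter (the counter and names are my own): on a sorted array of size n the remaining range is halved each step, so at most ⌊log₂(n)⌋ + 1 probes are needed in the worst case.

```python
def binary_search(arr, target):
    """Return (index, probes); arr must be sorted."""
    lo, hi, probes = 0, len(arr) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1                   # one probe of the array per iteration
        if arr[mid] == target:
            return mid, probes
        if arr[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1, probes                 # at most floor(log2(n)) + 1 probes

print(binary_search(list(range(1024)), 1023))   # 11 probes for n = 1024
```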
What does it mean for an algorithm to have a time complexity of Θ(1)?
The runtime is constant and independent of the input size.
The runtime grows linearly with the input size.
The algorithm is guaranteed to be the fastest possible solution for the problem.
The algorithm's runtime is unpredictable and varies greatly with different inputs.
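An illustrative Θ(1) operation: indexing into a Python list does the same amount of work regardless of the list's length (ignoring, as Big Theta does, constant factors).

```python
def get_first(items):
    return items[0]        # one indexing operation, independent of len(items)

# Same amount of work whether the list has ten or ten million elements.
print(get_first(list(range(10))))
print(get_first(list(range(10_000_000))))
```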
What is the Big-O notation of an algorithm that uses two nested loops, each iterating over an array of size 'n'?
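A minimal sketch of the pattern this question describes (names are illustrative): two loops over the same array, one nested inside the other, perform n * n iterations in total, i.e. O(n^2).

```python
def count_pairs(arr):
    """Count ordered pairs (x, y); the nested loops run n * n times."""
    count = 0
    for x in arr:          # outer loop: n iterations
        for y in arr:      # inner loop: n iterations per outer iteration
            count += 1
    return count           # n * n iterations -> O(n^2)

print(count_pairs([1, 2, 3, 4]))   # 16
```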
Which of the following scenarios typically leads to an algorithm with linear time complexity, O(n)?
Finding the smallest element in a max-heap.
Performing a linear search on a sorted array.
Traversing a binary tree.
Calculating the Fibonacci sequence recursively.
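For reference, a sketch of one O(n) scenario among the options: an in-order traversal of a binary tree visits each of its n nodes exactly once (the node class and names here are illustrative).

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node, out):
    """Visit each of the n nodes exactly once -> O(n)."""
    if node is None:
        return out
    inorder(node.left, out)
    out.append(node.value)
    inorder(node.right, out)
    return out

tree = Node(2, Node(1), Node(3))
print(inorder(tree, []))   # [1, 2, 3]
```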
An algorithm processes an input of size 'n' by dividing it in half repeatedly until the size becomes 1. What is the most likely time complexity of this algorithm?
O(n log n)
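A hedged sketch of the halving pattern in the question: the loop below cuts the size in half until it reaches 1, which takes about log₂(n) steps; whether the overall algorithm is O(log n) or O(n log n) depends on how much work is done at each step (for example, merge sort does O(n) work per halving level, giving O(n log n)).

```python
def halvings(n):
    """Count how many times n can be halved before reaching 1."""
    steps = 0
    while n > 1:
        n //= 2          # problem size is cut in half each iteration
        steps += 1
    return steps         # about log2(n) iterations

print(halvings(1024))    # 10
```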
Which of the following statements about the relationship between theoretical time complexity and real-world performance is FALSE?
Hardware characteristics like CPU speed and cache size can influence the actual runtime of an algorithm.
An algorithm with better theoretical complexity will always outperform an algorithm with worse complexity in practice.
Constant factors and lower-order terms ignored in Big O notation can significantly impact real-world performance.
Real-world data often exhibits patterns that can be exploited to improve performance beyond theoretical predictions.
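A small, hedged timing sketch related to the FALSE option: an O(n^2) insertion sort can be competitive with, or faster than, an O(n log n) sort on very small inputs because of constant factors. The crossover point depends on the machine and the runtime, so no specific timings are claimed here.

```python
import random
import timeit

def insertion_sort(arr):
    """O(n^2) in general, but with small constant factors per step."""
    a = list(arr)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

data = [random.random() for _ in range(16)]   # tiny input
print(timeit.timeit(lambda: insertion_sort(data), number=10_000))
print(timeit.timeit(lambda: sorted(data), number=10_000))
# On small inputs the quadratic sort may hold its own; measure on your
# own hardware rather than relying on Big O alone.
```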
Which asymptotic notation provides an upper bound on an algorithm's runtime, but the bound may not be tight?
Little-o (o)
Big Theta (Θ)
Big-O (O)
Big Omega (Ω)
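For reference, the standard textbook definitions behind these options: Big-O gives an upper bound that need not be tight, Big Omega a lower bound, Big Theta a tight two-sided bound, and little-o a strictly non-tight upper bound.

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c\,g(n) \text{ for all } n \ge n_0 \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \text{ for all } n \ge n_0 \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n)) \\
f(n) = o(g(n))      \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0
```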
Which of the following is NOT a common reason why real-world performance might differ from theoretical time complexity analysis?
Caching and data locality
The choice of algorithm used
The programming language and compiler optimizations
The specific hardware being used
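A hedged illustration of the "caching and data locality" item: summing a 2D array row by row touches memory in order, while summing the same data column by column jumps between rows on every access. Both are O(n^2) on paper, yet the row-wise version is often faster in practice; the actual difference depends on the hardware and runtime, so no numbers are asserted here.

```python
import timeit

N = 1000
grid = [[1] * N for _ in range(N)]

def row_major():
    total = 0
    for i in range(N):          # walk each row contiguously (cache friendly)
        for j in range(N):
            total += grid[i][j]
    return total

def column_major():
    total = 0
    for j in range(N):          # jump to a different row on every access
        for i in range(N):
            total += grid[i][j]
    return total

print(timeit.timeit(row_major, number=5))
print(timeit.timeit(column_major, number=5))
# Identical Big O, potentially different wall-clock times due to locality.
```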