Dynamic programming aims to reduce the time complexity of an algorithm by:
Using a greedy approach to always make the locally optimal choice
Employing a divide-and-conquer strategy to break down the problem into smaller instances
Dividing the problem into smaller subproblems and solving them recursively
Storing the results of overlapping subproblems to avoid redundant computations
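For reference, a minimal Python sketch of the memoization idea behind dynamic programming (the function name `fib` and the dictionary-based cache are illustrative choices, not a prescribed implementation):

```python
# Memoization: store results of overlapping subproblems so each one is
# computed only once. Naive recursive Fibonacci is exponential; with a
# memo table it becomes linear in n.
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # reuse a previously stored result
    if n < 2:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(40))  # 102334155, computed with ~40 recursive calls, not ~2^40
```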
You have two algorithms for a task: Algorithm A with Θ(n) complexity and Algorithm B with Θ(n^2) complexity. Which statement is ALWAYS true?
It's impossible to compare the efficiency of the algorithms without knowing the exact implementation details.
Algorithm A will be faster than Algorithm B for all input sizes.
Algorithm A and Algorithm B have the same efficiency in terms of time complexity.
Algorithm B might be faster for very small input sizes, but Algorithm A will eventually be faster as input grows.
Which of the following statements about the relationship between theoretical time complexity and real-world performance is FALSE?
Constant factors and lower-order terms ignored in Big O notation can significantly impact real-world performance.
Real-world data often exhibits patterns that can be exploited to improve performance beyond theoretical predictions.
An algorithm with better theoretical complexity will always outperform an algorithm with worse complexity in practice.
Hardware characteristics like CPU speed and cache size can influence the actual runtime of an algorithm.
Which asymptotic notation provides an upper bound on an algorithm's runtime, but the bound may not be tight?
Big Omega (Ω)
Big O (O)
Big Theta (Θ)
Little o (o)
Which of the following sorting algorithms has the best average-case time complexity?
Bubble Sort
Insertion Sort
Selection Sort
Merge Sort
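A compact merge sort sketch, for contrast with the quadratic-on-average comparison sorts listed above (this top-down, list-copying variant trades memory for clarity; it is one of several standard formulations):

```python
# Merge sort: split in half, sort each half recursively, merge the
# sorted halves. The halving gives log n levels and each level does
# O(n) merge work, so average and worst case are both Theta(n log n).
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # at most one of these two
    merged.extend(right[j:])  # extends is non-empty
    return merged

print(merge_sort([5, 2, 9, 1, 5]))  # [1, 2, 5, 5, 9]
```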
An algorithm processes an input of size 'n' by repeatedly halving it until the size becomes 1, doing a constant amount of work at each step. What is the most likely time complexity of this algorithm?
O(n^2)
O(log n)
O(n log n)
O(n)
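The halving pattern can be made concrete by counting the steps directly (the helper name `halving_steps` is illustrative):

```python
# Repeatedly halving n until it reaches 1 takes floor(log2(n)) steps,
# which is why constant work per halving gives O(log n) overall.
def halving_steps(n):
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

print(halving_steps(1024))  # 10, i.e. log2(1024)
```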
You have an algorithm with a time complexity of O(2^n). If you double the input size, how would you expect the execution time to be affected?
It increases by a factor of n.
It doubles.
It increases exponentially.
It remains roughly the same.
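A quick way to see the blow-up: doubling n takes the work from 2^n to 2^(2n) = (2^n)^2, so the running time is squared. A small sketch of that arithmetic (the helper name is illustrative):

```python
# Factor by which 2^n work grows when the input size n is doubled:
# 2^(2n) / 2^n = 2^n, i.e. the growth factor is itself exponential in n.
def growth_factor(n):
    return 2 ** (2 * n) // 2 ** n  # equals 2 ** n

print(growth_factor(10))  # 1024: doubling n=10 multiplies the work by 2^10
```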
An algorithm has a time complexity of O(n log n). Which of the following statements is NOT necessarily true?
For large enough input sizes, the runtime is bounded above by some constant multiple of n log n.
The algorithm's runtime grows at most as fast as n log n.
The algorithm's best-case runtime could be O(log n).
The algorithm will always be slower than an algorithm with O(n) complexity.
Which of the following scenarios typically leads to an algorithm with linear time complexity, O(n)?
Calculating the Fibonacci sequence recursively.
Finding the largest element in a max-heap.
Searching for a value in a balanced binary search tree.
Performing a linear search on a sorted array.
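A minimal linear search sketch; note that plain linear search scans element by element and stays O(n) in the worst case even on sorted input (sortedness would enable binary search at O(log n), but linear search does not exploit it):

```python
# Linear search: examine each element once until the target is found,
# so the worst case touches all n elements.
def linear_search(arr, target):
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1  # target absent

print(linear_search([1, 3, 5, 7, 9], 9))  # 4: found at the last index
```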
Which of the following has a time complexity of O(n^2) in its worst-case scenario?
Heap Sort
QuickSort