What is the worst-case time complexity of Merge Sort?
O(n^2)
O(log n)
O(n)
O(n log n)
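For reference, a minimal Python sketch of Merge Sort (my own illustration, not part of the question set) shows where the O(n log n) worst-case bound comes from: the array is halved O(log n) times, and each level does O(n) merging work.

```python
def merge_sort(a):
    """Merge sort: T(n) = 2T(n/2) + O(n), hence O(n log n) even in the worst case."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in linear time.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```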
Bucket Sort can be considered a stable sorting algorithm under which condition?
The input data is already sorted.
Bucket Sort is inherently stable regardless of the input or implementation.
The number of buckets is equal to the number of elements.
The underlying sorting algorithm used within each bucket is stable.
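An illustrative sketch (assuming float keys in [0, 1)) makes the stability condition concrete: Bucket Sort is stable exactly when the per-bucket sort is stable, since elements are appended to buckets in input order.

```python
def bucket_sort(values, num_buckets=10):
    """Bucket sort for floats in [0, 1). Stability hinges on the per-bucket sort:
    Python's sorted() is stable (Timsort), so equal keys keep their input order."""
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[int(v * num_buckets)].append(v)  # appended in input order
    result = []
    for b in buckets:
        result.extend(sorted(b))  # stable sort within each bucket
    return result
```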
Which of the following statements accurately describes the stability of Quick Sort?
Quick Sort is inherently stable
Quick Sort can be easily modified to be stable
The stability of Quick Sort depends on the input data
Quick Sort is inherently unstable
How does Kruskal's algorithm utilize sorting to find the minimum spanning tree of a graph?
It sorts the edges of the graph in increasing order of their weights and then iteratively adds edges to the growing minimum spanning tree while avoiding the formation of cycles.
Sorting is not used in Kruskal's algorithm; it's a greedy algorithm that makes locally optimal choices without the need for sorting.
It sorts the nodes of the graph based on their distances from a randomly chosen starting node.
It sorts the nodes of the graph in ascending order of their degrees (number of connected edges).
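A compact sketch of Kruskal's algorithm (my own illustration, using union-find with path halving) highlights the role of sorting: edges are processed in increasing weight order, and an edge is kept only if it joins two different components.

```python
def kruskal(num_nodes, edges):
    """edges: list of (weight, u, v). Sort edges by weight, then greedily add
    any edge that does not close a cycle, detected via union-find."""
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):  # the sorting step: increasing edge weight
        ru, rv = find(u), find(v)
        if ru != rv:               # different components: no cycle is formed
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```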
Which aspect of Radix Sort's implementation significantly impacts its overall performance, particularly for large datasets?
Number of passes required to sort all digits
Initial order of elements in the input array
Choice of sorting algorithm for individual digits
Data structure used to store and access buckets
How does the time complexity of Radix Sort compare to comparison-based sorting algorithms like Merge Sort and Quick Sort for integers with a wide range?
Radix Sort is always slower
Radix Sort can be faster under certain conditions
Radix Sort has the same time complexity
Radix Sort is consistently faster
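A least-significant-digit Radix Sort sketch (assuming non-negative integers) shows why the comparison depends on the value range: the running time is O(d · (n + base)), where d = log_base(max value) is the number of digit passes, so a wide range means more passes.

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers: O(d * (n + base)), where d is
    the digit count of the largest value. When the range is wide, d grows,
    which is when O(n log n) comparison sorts can become competitive."""
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:
        # Stable bucket pass on the current digit (a counting-sort variant).
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // exp) % base].append(n)
        nums = [n for b in buckets for n in b]
        exp *= base
    return nums
```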
In the context of Heap Sort, what is the name of the procedure that restores the heap property for the subtree rooted at a given node?
Heapify
Sift-up
Heap-balance
Sift-down
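A short sketch of the procedure (a max-heap version, written for illustration) shows how it works: starting at index i, repeatedly swap the node with its larger child until the subtree satisfies the heap property.

```python
def sift_down(heap, i, size):
    """Restore the max-heap property for the subtree rooted at index i,
    assuming both child subtrees are already valid heaps ('heapify')."""
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < size and heap[left] > heap[largest]:
            largest = left
        if right < size and heap[right] > heap[largest]:
            largest = right
        if largest == i:           # heap property holds: done
            return
        heap[i], heap[largest] = heap[largest], heap[i]
        i = largest                # continue one level down
```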
How does the choice of pivot affect the performance of Quick Sort?
The choice of pivot has no impact on the performance of Quick Sort
Using the first element as the pivot is generally the most efficient approach
Selecting a random pivot always guarantees the best performance
A poorly chosen pivot can lead to the worst-case time complexity of O(n^2)
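To see the pivot's effect, here is a deliberately naive Quick Sort sketch (my own illustration) that always takes the first element as pivot. On already-sorted input every partition is maximally unbalanced (sizes 0 and n-1), which yields the O(n^2) worst case.

```python
def quick_sort(a, lo=0, hi=None):
    """In-place quick sort with a first-element pivot (Lomuto-style partition).
    Sorted or reverse-sorted input triggers the O(n^2) worst case because each
    partition step peels off only one element."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[lo]                  # naive pivot choice
    i = lo
    for j in range(lo + 1, hi + 1):
        if a[j] < pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[lo], a[i] = a[i], a[lo]      # place pivot at its final position
    quick_sort(a, lo, i - 1)
    quick_sort(a, i + 1, hi)
    return a
```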
Which of the following real-world applications is well-suited for counting sort?
Sorting an array of timestamps representing events in chronological order.
Sorting a large dataset of student GPAs ranging from 0.0 to 4.0.
Sorting a list of words alphabetically.
Sorting a collection of images based on their file sizes.
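A minimal Counting Sort sketch (assuming small non-negative integer keys, e.g. exam scores 0-100) illustrates why the technique fits a small discrete key range but not continuous keys such as raw GPAs or timestamps.

```python
def counting_sort(values, max_value):
    """Counting sort for integer keys in [0, max_value]: O(n + k) time and
    O(k) extra space, where k = max_value + 1. Practical only when k is small
    relative to n; continuous keys must first be discretized."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1             # tally each key
    out = []
    for key, c in enumerate(counts):
        out.extend([key] * c)      # emit each key count times
    return out
```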
How does using the median-of-three partitioning strategy in Quick Sort help optimize its performance?
It has no impact on the performance of Quick Sort; it's simply an alternative partitioning approach.
It eliminates the need for recursive calls in the sorting process, making it significantly faster.
It guarantees the selection of the median element as the pivot, always leading to perfectly balanced partitions.
It reduces the likelihood of selecting a very small or very large element as the pivot, thereby decreasing the chances of worst-case scenarios.
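A median-of-three helper (written here as a sketch) makes the idea concrete: choosing the median of the first, middle, and last elements avoids the extreme pivots that a plain first- or last-element choice produces on sorted input, though it does not guarantee the true median or perfectly balanced partitions.

```python
def median_of_three(a, lo, hi):
    """Return the index of the median among a[lo], a[mid], a[hi], for use as
    the Quick Sort pivot. Reduces (but does not eliminate) the chance of a
    highly unbalanced partition."""
    mid = (lo + hi) // 2
    x, y, z = a[lo], a[mid], a[hi]
    if (x <= y <= z) or (z <= y <= x):
        return mid
    if (y <= x <= z) or (z <= x <= y):
        return lo
    return hi
```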