In which scenario is Bucket Sort likely to perform poorly?
Data is uniformly distributed within a known range
Data is heavily skewed towards one end of the range
Data is already sorted in reverse order
Data consists of a small number of unique elements
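To see why distribution matters here, consider a minimal Bucket Sort sketch (illustrative only, assuming values in [0, 1)); with heavily skewed data, most values pile into a single bucket, so the per-bucket sort degrades toward quadratic behavior:

```python
def bucket_sort(values, num_buckets=10):
    # Distribute values into buckets by range, then sort each bucket.
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        # Skewed input sends most values to the same bucket,
        # defeating the even-spread assumption Bucket Sort relies on.
        buckets[int(v * num_buckets)].append(v)
    result = []
    for b in buckets:
        result.extend(sorted(b))  # sorted() stands in for insertion sort
    return result

print(bucket_sort([0.91, 0.93, 0.95, 0.92, 0.94, 0.1]))
# -> [0.1, 0.91, 0.92, 0.93, 0.94, 0.95] (five of six values shared one bucket)
```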
What is the primary advantage of using counting sort over comparison-based sorting algorithms like Merge Sort or Quick Sort?
Counting sort is a stable sorting algorithm by default.
Counting sort is an in-place sorting algorithm.
Counting sort works efficiently even for large datasets with a wide range of values.
Counting sort can achieve a time complexity better than O(n log n) in certain scenarios.
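For reference, a minimal counting sort sketch for plain integer keys (my own simplified version, not a full stable implementation): it counts occurrences instead of comparing elements, giving O(n + k) time for keys in [0, max_value], which beats O(n log n) when k is small relative to n:

```python
def counting_sort(arr, max_value):
    # Tally how many times each key occurs: no comparisons needed.
    counts = [0] * (max_value + 1)
    for x in arr:
        counts[x] += 1
    # Emit each key as many times as it was seen, in key order.
    out = []
    for value, c in enumerate(counts):
        out.extend([value] * c)
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1], 8))  # -> [1, 2, 2, 3, 3, 4, 8]
```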
What is the space complexity of Quick Sort in the average and worst case scenarios?
O(n) in the average case and O(log n) in the worst case
O(1) in both average and worst cases
O(n) in both average and worst cases
O(log n) in the average case and O(n) in the worst case
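As a sanity check on the answer choices, a small instrumented sketch (names and structure are my own) that measures Quick Sort's recursion depth, which is the source of its stack-space cost: shallow on shuffled input, linear when a naive last-element pivot meets already-sorted input:

```python
import random

def quicksort_depth(a, lo=0, hi=None, depth=1):
    # In-place Quick Sort (Lomuto partition) that reports the deepest
    # recursion level reached. The recursion stack is Quick Sort's
    # space cost: O(log n) on typical input, O(n) in the worst case.
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return depth
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return max(quicksort_depth(a, lo, i - 1, depth + 1),
               quicksort_depth(a, i + 1, hi, depth + 1))

random.seed(0)
print(quicksort_depth(random.sample(range(200), 200)))  # shallow, near log2(n)
print(quicksort_depth(list(range(200))))                # -> 200, the O(n) worst case
```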
What is a key characteristic of in-place partitioning within the context of Quick Sort?
The algorithm sorts the array by recursively dividing it into smaller subarrays and then merging them back together.
The partitioning process is performed entirely within the original array, without requiring the allocation of substantial additional memory proportional to the input size.
In-place partitioning is only applicable when the input array is already sorted in reverse order.
The partitioning step always selects the first element of the subarray as the pivot.
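The in-place property can be seen in a short Lomuto-partition sketch (one common variant; other partition schemes exist): all rearranging happens through swaps inside the original array, using only O(1) extra memory:

```python
def partition(a, lo, hi):
    # Lomuto partition: rearrange a[lo..hi] around the pivot entirely
    # in place, swapping within the array itself.
    pivot, i = a[hi], lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i  # final pivot position

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

data = [5, 3, 8, 1, 9, 2]
quicksort(data)
print(data)  # -> [1, 2, 3, 5, 8, 9]
```

Note that this variant does pick the last element as pivot, but that is a convention of the sketch, not a requirement of in-place partitioning.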
Which of the following is a common use case for Merge Sort?
Finding the smallest element in an array
Sorting a nearly sorted array
Sorting a linked list
Sorting a small array with less than 10 elements
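Why linked lists come up here: Merge Sort's merge step can relink existing nodes rather than copy them into a scratch array, and it never needs random access. A minimal sketch (class and function names are my own):

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_sort_list(head):
    # Merge Sort on a singly linked list: merging relinks nodes
    # instead of copying them, so no auxiliary array is required.
    if head is None or head.next is None:
        return head
    # Split the list in half with slow/fast pointers.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = merge_sort_list(head), merge_sort_list(mid)
    # Merge the two sorted halves by relinking.
    dummy = tail = Node(None)
    while left and right:
        if left.val <= right.val:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next
```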
How does Kruskal's algorithm utilize sorting to find the minimum spanning tree of a graph?
It sorts the nodes of the graph based on their distances from a randomly chosen starting node.
It sorts the nodes of the graph in ascending order of their degrees (number of connected edges).
Sorting is not used in Kruskal's algorithm; it's a greedy algorithm that makes locally optimal choices without the need for sorting.
It sorts the edges of the graph in increasing order of their weights and then iteratively adds edges to the growing minimum spanning tree while avoiding the formation of cycles.
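The edge-sorting description can be made concrete with a short Kruskal sketch using union-find for cycle detection (function names and the example graph are mine, for illustration):

```python
def kruskal(num_nodes, edges):
    # edges: list of (weight, u, v) tuples.
    # Sort edges by weight, then greedily add each edge unless it
    # would close a cycle (checked with union-find).
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # accepting this edge forms no cycle
            parent[ru] = rv
            mst.append((u, v, w))
    return mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3)]
print(kruskal(4, edges))  # -> [(1, 2, 1), (2, 3, 2), (0, 2, 3)]
```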
What is the worst-case time complexity of Quick Sort and when does it occur?
O(n log n), when the pivot is always the median element
O(n^2), when the input array is already sorted or reverse sorted
O(n log n), when the input array is sorted or reverse sorted
O(n^2), when the pivot is always the median element
Which of the following real-world applications is well-suited for counting sort?
Sorting an array of timestamps representing events in chronological order.
Sorting a large dataset of student GPAs ranging from 0.0 to 4.0.
Sorting a collection of images based on their file sizes.
Sorting a list of words alphabetically.
Which aspect of Radix Sort's implementation significantly impacts its overall performance, particularly for large datasets?
Data structure used to store and access buckets
Number of passes required to sort all digits
Choice of sorting algorithm for individual digits
Initial order of elements in the input array
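The aspects named in the options (bucket storage, number of passes, the stable per-digit sort) all show up in a minimal LSD Radix Sort sketch for non-negative integers (an illustrative variant using list-of-lists buckets):

```python
def radix_sort(arr, base=10):
    # LSD Radix Sort: one stable bucketing pass per digit.
    # The number of passes (digits of the maximum key) and the bucket
    # data structure both drive the overall cost on large inputs.
    if not arr:
        return arr
    max_val, exp = max(arr), 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]     # per-digit buckets
        for x in arr:
            buckets[(x // exp) % base].append(x)
        arr = [x for b in buckets for x in b]   # stable flatten
        exp *= base
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# -> [2, 24, 45, 66, 75, 90, 170, 802]
```

Choosing a larger base trades fewer passes for more buckets per pass, which is one way the bucket structure affects performance in practice.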
Why is Quick Sort often preferred over Merge Sort in practice, despite having the same average-case time complexity?
Quick Sort is more memory-efficient due to its recursive nature
Quick Sort is an in-place sorting algorithm, while Merge Sort requires additional space for merging
Quick Sort is easier to parallelize and implement on multi-core processors
Quick Sort has a lower constant factor in its time complexity, making it faster for smaller datasets