Consider an algorithm that iterates through a sorted array of size N. In each iteration, it performs a binary search on the array. What is the overall time complexity of this algorithm?
O(log N)
O(N)
O(N^2)
O(N log N)
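The cost here is N iterations times O(log N) per binary search, i.e. O(N log N) overall. A minimal Python sketch of this nested shape (the function name and the membership count it returns are illustrative):

```python
import bisect

def count_members(arr):
    """N iterations, each performing an O(log N) binary search,
    gives O(N log N) overall."""
    hits = 0
    for x in arr:
        i = bisect.bisect_left(arr, x)  # O(log N) binary search
        if i < len(arr) and arr[i] == x:
            hits += 1
    return hits
```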
Given an array of integers, find the subarray with the maximum sum in linear time. The subarray must contain at least one element.
Kadane's Algorithm, which iterates through the array, keeping track of the maximum sum ending at each index.
Brute force approach by checking all possible subarrays.
Divide and conquer approach by finding the maximum subarray in the left half, right half, and the one crossing the midpoint.
Dynamic programming approach by building a table to store the maximum sum ending at each index.
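Of the options above, Kadane's algorithm is the one that meets the linear-time requirement. A minimal sketch:

```python
def max_subarray_sum(nums):
    """Kadane's algorithm: O(n) time, O(1) space.
    best_here is the maximum sum of a subarray ending at the current index."""
    best_here = best = nums[0]
    for x in nums[1:]:
        # Either extend the previous subarray or start fresh at x.
        best_here = max(x, best_here + x)
        best = max(best, best_here)
    return best
```

Seeding with `nums[0]` rather than 0 handles all-negative arrays, satisfying the "at least one element" requirement.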
What is the average-case time complexity of Bucket Sort, assuming uniformly distributed data and a number of buckets proportional to the input size?
O(n^2)
O(n log n)
O(n)
O(log n)
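A sketch of why the average case is linear, assuming floats drawn uniformly from [0, 1) and n buckets (the function name is illustrative):

```python
def bucket_sort(values):
    """Bucket sort for floats in [0, 1). With n buckets and uniform input,
    each bucket holds O(1) elements on average, so total time is O(n)."""
    n = len(values)
    buckets = [[] for _ in range(n)]
    for v in values:
        buckets[int(v * n)].append(v)  # uniform data spreads evenly
    result = []
    for b in buckets:
        result.extend(sorted(b))  # each bucket is tiny on average
    return result
```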
What is a significant disadvantage of using arrays for storing and processing extremely large datasets, particularly in the context of limited memory resources?
Arrays do not support dynamic resizing, making it challenging to handle growing datasets.
Arrays require contiguous blocks of memory, which can be difficult to allocate for massive datasets.
Arrays have slow access times for individual elements.
Arrays are not suitable for storing structured data, such as key-value pairs.
You need to sort a massive dataset of social media posts by timestamp. The timestamps are represented as long integers. Which sorting approach is likely the MOST efficient?
Bucket Sort with buckets representing time intervals (e.g., hours, days)
External Merge Sort, as timestamps are naturally comparable
Radix Sort using a base that aligns with the structure of the timestamps
Any of the above would be equally efficient for this scenario
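For fixed-width integer keys such as epoch timestamps, LSD Radix Sort avoids comparisons entirely. A sketch, assuming non-negative keys that fit in memory (the base and function name are illustrative):

```python
def radix_sort(keys, base=256):
    """LSD radix sort for non-negative integer keys (e.g. epoch timestamps).
    Runs in O(d * (n + base)), where d is the number of base-`base` digits."""
    if not keys:
        return keys
    max_key = max(keys)
    shift = 1
    while shift <= max_key:
        buckets = [[] for _ in range(base)]
        for k in keys:
            buckets[(k // shift) % base].append(k)
        keys = [k for b in buckets for k in b]  # stable per-digit pass
        shift *= base
    return keys
```

If the dataset exceeds memory, the same digit-at-a-time idea still applies, but the passes must stream through external storage.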
Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the missing number.
Sort the array and find the missing element.
Use the XOR operation to find the missing number.
Calculate the sum of all numbers from 0 to n and subtract the sum of the array elements.
Use a hash table to store the presence of each number.
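Both the arithmetic-sum and XOR approaches above run in O(n) time and O(1) extra space; sketches of each (function names are illustrative):

```python
def missing_number_sum(nums):
    """Gauss sum: expected sum of 0..n minus the actual sum."""
    n = len(nums)
    return n * (n + 1) // 2 - sum(nums)

def missing_number_xor(nums):
    """XOR every index and value against 0..n; pairs cancel,
    leaving only the missing number. Avoids large intermediate sums."""
    acc = len(nums)
    for i, x in enumerate(nums):
        acc ^= i ^ x
    return acc
```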
Given an array of integers, find the kth largest element in the array.
Sort the array and return the element at the kth position from the end.
Use quickselect, a selection algorithm with an average time complexity of O(n).
Use a max-heap to store all the elements and extract the kth largest element.
Use a min-heap of size k to store the k largest elements encountered so far.
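The min-heap-of-size-k option trades a little time (O(n log k) vs. quickselect's average O(n)) for simplicity and streaming-friendliness. A sketch:

```python
import heapq

def kth_largest(nums, k):
    """Keep a min-heap of the k largest elements seen so far:
    O(n log k) time, O(k) space. Once k elements have been processed,
    the heap root is the kth largest."""
    heap = []
    for x in nums:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:
            heapq.heapreplace(heap, x)  # pop the smallest, push x
    return heap[0]
```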
In a real-world application, you are using a dynamic array to store a constantly growing dataset. You notice that the performance degrades significantly during the array resizing operations. What strategy could you employ to mitigate this performance bottleneck?
Switch to a linked list data structure, sacrificing some element access speed for better insertion performance.
Optimize the algorithm that processes the data to reduce the overall number of insertions into the array.
Increase the frequency of resizing, reallocating the array with smaller size increments.
Implement a custom memory allocator that reserves larger chunks of contiguous memory in advance.
In the context of external sorting, what does the term 'run' typically refer to?
The number of disk I/O operations performed.
The process of merging two sorted subarrays.
A sequence of sorted elements that can be held in memory.
A single pass through the entire dataset.
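A compact sketch of both phases, using an in-memory list as a stand-in for a disk file (the function name and `memory_limit` parameter are illustrative):

```python
import heapq

def external_sort(data, memory_limit):
    """External merge sort sketch. Phase 1 produces sorted 'runs', each no
    larger than memory_limit; phase 2 k-way merges the runs
    (heapq.merge streams them lazily)."""
    runs = [sorted(data[i:i + memory_limit])          # each run fits in RAM
            for i in range(0, len(data), memory_limit)]
    return list(heapq.merge(*runs))
```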
You are given an array of integers and a target value. Find all unique quadruplets in the array that sum to the target.
Use a backtracking algorithm to explore all possible combinations of elements.
Use a hash table to store the sum of all pairs of elements.
Sort the array, fix two elements with nested loops, and use two pointers to find the remaining pair that completes the target sum.
Use four nested loops to iterate through all possible combinations of four elements.
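The sort-plus-two-pointers approach brings the cost down to O(n^3), versus O(n^4) for four nested loops. A sketch with duplicate skipping to keep quadruplets unique:

```python
def four_sum(nums, target):
    """Sort, fix two elements, then scan the rest with two pointers:
    O(n^3) time overall. Duplicate skipping keeps results unique."""
    nums = sorted(nums)
    n, out = len(nums), []
    for i in range(n - 3):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate first elements
        for j in range(i + 1, n - 2):
            if j > i + 1 and nums[j] == nums[j - 1]:
                continue  # skip duplicate second elements
            lo, hi = j + 1, n - 1
            while lo < hi:
                s = nums[i] + nums[j] + nums[lo] + nums[hi]
                if s < target:
                    lo += 1
                elif s > target:
                    hi -= 1
                else:
                    out.append([nums[i], nums[j], nums[lo], nums[hi]])
                    while lo < hi and nums[lo] == nums[lo + 1]:
                        lo += 1  # skip duplicate third elements
                    while lo < hi and nums[hi] == nums[hi - 1]:
                        hi -= 1  # skip duplicate fourth elements
                    lo += 1
                    hi -= 1
    return out
```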