Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the missing number.
Sort the array and find the missing element.
Calculate the sum of all numbers from 0 to n and subtract the sum of the array elements.
Use the XOR operation to find the missing number (sketched below).
Use a hash table to store the presence of each number.
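A minimal sketch of the XOR approach, assuming the input matches the statement above (n distinct values drawn from 0..n); the function name is illustrative:

```python
def missing_number(nums):
    """Find the missing value in 0..n using XOR.

    XOR-ing every index 0..n against every array element cancels all
    paired values, leaving only the missing number: O(n) time, O(1) space.
    """
    result = len(nums)  # seed with n, the one index the loop never visits
    for i, value in enumerate(nums):
        result ^= i ^ value
    return result

# missing_number([3, 0, 1]) -> 2
```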
You are given an array of integers and a target sum. Find all unique quadruplets in the array that sum to the target.
Sort the array, fix the first two elements with nested loops, and use two pointers to find the remaining pair (sketched below).
Use a backtracking algorithm to explore all possible combinations of elements.
Use a hash table to store the sum of all pairs of elements.
Use four nested loops to iterate through all possible combinations of four elements.
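A sketch of the sort-and-two-pointers approach: after sorting, two nested loops fix the first pair and two pointers sweep for the remaining pair, giving O(n^3) overall. The duplicate-skipping steps are one common way to keep the quadruplets unique:

```python
def four_sum(nums, target):
    """Return all unique quadruplets summing to target in O(n^3)."""
    nums = sorted(nums)
    n = len(nums)
    result = []
    for i in range(n - 3):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate first elements
        for j in range(i + 1, n - 2):
            if j > i + 1 and nums[j] == nums[j - 1]:
                continue  # skip duplicate second elements
            lo, hi = j + 1, n - 1
            while lo < hi:
                total = nums[i] + nums[j] + nums[lo] + nums[hi]
                if total < target:
                    lo += 1
                elif total > target:
                    hi -= 1
                else:
                    result.append([nums[i], nums[j], nums[lo], nums[hi]])
                    while lo < hi and nums[lo] == nums[lo + 1]:
                        lo += 1  # skip duplicate third elements
                    while lo < hi and nums[hi] == nums[hi - 1]:
                        hi -= 1  # skip duplicate fourth elements
                    lo += 1
                    hi -= 1
    return result
```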
Which sorting algorithm is generally considered IN-PLACE, meaning it requires minimal additional memory overhead?
Quick Sort
External Merge Sort
Radix Sort (using counting sort as a subroutine)
Bucket Sort
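For context on what in-place means here: Quick Sort rearranges elements within the input array itself, so its only extra memory is the recursion stack (typically O(log n)). A minimal sketch with Lomuto partitioning, for illustration rather than production use:

```python
def quick_sort(arr, lo=0, hi=None):
    """Sort arr in place; only the recursion stack uses extra memory."""
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[hi]               # Lomuto partition: last element as pivot
    i = lo
    for j in range(lo, hi):
        if arr[j] <= pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # place pivot in its final position
    quick_sort(arr, lo, i - 1)
    quick_sort(arr, i + 1, hi)
```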
When is Bucket Sort LEAST likely to be an efficient sorting algorithm?
The data is heavily skewed towards a few buckets.
The dataset is very large and sparse.
The elements are integers within a known range.
The data is uniformly distributed.
In the context of amortized analysis, what is the purpose of the potential function?
To optimize the performance of individual array operations.
To calculate the average runtime of a single operation over a sequence of operations.
To analyze the space complexity of an algorithm.
To determine the maximum possible runtime of a single operation in the worst-case scenario.
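As a concrete illustration (my own example, not part of the question): the textbook analysis of dynamic-array appends uses a potential such as Φ = 2·size − capacity to show that each append has amortized O(1) cost despite occasional O(n) reallocations. The sketch below checks that arithmetic empirically by counting element copies under a doubling policy:

```python
def append_cost_demo(total_appends):
    """Count element copies for a doubling dynamic array.

    The potential-function argument (phi = 2*size - capacity) predicts
    amortized O(1) per append; this checks the arithmetic empirically.
    """
    size, capacity, copies = 0, 1, 0
    for _ in range(total_appends):
        if size == capacity:
            copies += size          # reallocate: copy every existing element
            capacity *= 2
        size += 1                   # write the new element
    return (copies + total_appends) / total_appends  # average cost per append

# append_cost_demo(10**6) -> a constant below 3, independent of n
```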
What is the time complexity of Bucket Sort in the average case, assuming uniformly distributed data and a number of buckets proportional to n?
O(n^2)
O(log n)
O(n log n)
O(n)
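A sketch of the classic average-case setup: with values uniform on [0, 1) and roughly one bucket per element, each bucket holds O(1) items and the whole pass is O(n). The same code shows why the skewed case from the earlier question degrades: most elements land in one bucket, and the per-bucket sort dominates. Python's built-in sort stands in for the usual per-bucket insertion sort:

```python
def bucket_sort(values):
    """Bucket sort for floats in [0, 1); O(n) on average for uniform data."""
    n = len(values)
    if n == 0:
        return []
    buckets = [[] for _ in range(n)]   # roughly one bucket per element
    for v in values:
        buckets[int(v * n)].append(v)  # uniform data -> ~1 item per bucket
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))  # tiny buckets each sort in ~O(1)
    return result
```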
Which of the following array operations has an average-case time complexity that differs from its worst-case time complexity?
Accessing an element at a given index.
Searching for a specific value in a sorted array.
Deleting an element from the end of an array.
Inserting an element at the end of a dynamic array (implemented with occasional reallocation).
What is a significant disadvantage of using arrays for storing and processing extremely large datasets, particularly in the context of limited memory resources?
Arrays require contiguous blocks of memory, which can be difficult to allocate for massive datasets.
Arrays do not support dynamic resizing, making it challenging to handle growing datasets.
Arrays have slow access times for individual elements.
Arrays are not suitable for storing structured data, such as key-value pairs.
You need to sort a massive dataset of social media posts by timestamp. The timestamps are represented as long integers. Which sorting approach is likely the MOST efficient?
Radix Sort using a base that aligns with the structure of the timestamps
External Merge Sort, as timestamps are naturally comparable
Bucket Sort with buckets representing time intervals (e.g., hours, days)
Any of the above would be equally efficient for this scenario
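A sketch of the radix-sort option, assuming non-negative 64-bit integer timestamps; the 16-bit digit size is my own choice of "a base that aligns with the structure" of the data, giving exactly four stable linear passes:

```python
def radix_sort_timestamps(timestamps, bits=64, radix_bits=16):
    """LSD radix sort for non-negative integers up to `bits` wide.

    Each pass stably distributes by one 16-bit digit, so a 64-bit
    timestamp is fully sorted after four linear passes.
    """
    mask = (1 << radix_bits) - 1
    for shift in range(0, bits, radix_bits):
        buckets = [[] for _ in range(mask + 1)]
        for t in timestamps:
            buckets[(t >> shift) & mask].append(t)   # stable distribution
        timestamps = [t for bucket in buckets for t in bucket]
    return timestamps

# radix_sort_timestamps([1700000300, 1699999999, 1700000000])
# -> [1699999999, 1700000000, 1700000300]
```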
Given an array of n positive integers and a positive integer s, find the minimal length of a contiguous subarray whose sum is ≥ s. If there isn't one, return 0 instead.
Use two nested loops to iterate through all possible subarrays.
Use binary search to find the minimal length.
Use a sliding window approach to find the minimal-length subarray (sketched below).
Use dynamic programming to store the minimal length for all subarrays ending at each index.
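A minimal sketch of the sliding-window approach: extend the window to the right until its sum reaches s, then shrink from the left while the sum still qualifies. Each index enters and leaves the window at most once, so the scan is O(n):

```python
def min_subarray_len(s, nums):
    """Minimal length of a contiguous subarray with sum >= s; 0 if none."""
    best = float("inf")
    window_sum = 0
    left = 0
    for right, value in enumerate(nums):
        window_sum += value
        while window_sum >= s:          # shrink while the sum still qualifies
            best = min(best, right - left + 1)
            window_sum -= nums[left]
            left += 1
    return 0 if best == float("inf") else best

# min_subarray_len(7, [2, 3, 1, 2, 4, 3]) -> 2  (the subarray [4, 3])
```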