Consider an algorithm that iterates through a sorted array of size N. In each iteration, it performs a binary search on the array. What is the overall time complexity of this algorithm?
O(N^2)
O(N)
O(log N)
O(N log N)
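A minimal sketch of the pattern the question describes, assuming a helper name `pattern`: the loop runs N times and each `bisect` call is an O(log N) binary search, giving O(N log N) overall.

```python
import bisect

def pattern(arr):
    """N iterations, each doing an O(log N) binary search -> O(N log N)."""
    hits = 0
    for x in arr:                       # N iterations
        i = bisect.bisect_left(arr, x)  # O(log N) binary search per iteration
        if i < len(arr) and arr[i] == x:
            hits += 1
    return hits

print(pattern([1, 2, 3, 4]))  # 4
```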
In the context of external sorting, what does the term 'run' typically refer to?
The process of merging two sorted subarrays.
The number of disk I/O operations performed.
A sequence of sorted elements that can be held in memory.
A single pass through the entire dataset.
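A hedged sketch of run generation (function name and in-memory lists are illustrative): each memory-sized chunk is sorted and written out as one run, i.e. a sorted sequence that fit in memory.

```python
def make_runs(data, memory_size):
    # Each memory-sized chunk is sorted in RAM and emitted as one "run":
    # a sorted sequence that fit in memory. A real external sort would
    # write each run to disk and later merge them.
    runs = []
    for start in range(0, len(data), memory_size):
        chunk = sorted(data[start:start + memory_size])
        runs.append(chunk)
    return runs

print(make_runs([5, 3, 8, 1, 9, 2], 3))  # [[3, 5, 8], [1, 2, 9]]
```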
You need to perform matrix multiplication on two large matrices, A and B, where A is of size M x N and B is of size N x P. You have multiple machines available for distributed computing. Which approach would likely yield the BEST performance improvement?
Divide matrix B into column blocks and distribute each block with a copy of A to different machines.
Divide matrix A into row blocks and distribute each block with a copy of B to different machines.
Divide both matrices A and B into smaller, equally sized submatrices and distribute the computation of these submatrices across the machines.
Perform the matrix multiplication sequentially on a single machine without any distribution.
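A single-machine sketch of the submatrix (tile) partitioning idea behind the third option, with an assumed name `block_matmul`: each output tile `C[i0:i0+b, j0:j0+b]` depends only on a row block of A and a column block of B, so in a distributed setting each tile could be computed on a separate machine.

```python
def block_matmul(A, B, block):
    # Partition the work into (block x block) output tiles; each (i0, j0)
    # tile is an independent task that a distributed scheduler could
    # assign to its own machine.
    m, n, p = len(A), len(A[0]), len(B[0])
    C = [[0] * p for _ in range(m)]
    for i0 in range(0, m, block):
        for j0 in range(0, p, block):
            for k0 in range(0, n, block):
                for i in range(i0, min(i0 + block, m)):
                    for j in range(j0, min(j0 + block, p)):
                        for k in range(k0, min(k0 + block, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(block_matmul(A, B, 1))  # [[19, 22], [43, 50]]
```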
When is Bucket Sort LEAST likely to be an efficient sorting algorithm?
The data is heavily skewed towards a few buckets.
The dataset is very large and sparse.
The elements are integers within a known range.
The data is uniformly distributed.
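A minimal bucket sort sketch (bucket count and helper name are illustrative): uniformly distributed input spreads evenly across buckets, while heavily skewed input piles into a few buckets and degrades toward the cost of sorting one large bucket.

```python
def bucket_sort(values, num_buckets=10):
    # Uniform data spreads evenly across buckets (near O(n) overall);
    # skewed data concentrates in a few buckets, so those per-bucket
    # sorts dominate the running time.
    lo, hi = min(values), max(values)
    width = (hi - lo) / num_buckets or 1   # avoid zero width when all equal
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        idx = min(int((v - lo) / width), num_buckets - 1)
        buckets[idx].append(v)
    return [v for b in buckets for v in sorted(b)]

print(bucket_sort([0.42, 0.13, 0.87, 0.55]))  # [0.13, 0.42, 0.55, 0.87]
```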
Given an array containing n distinct numbers taken from 0, 1, 2, ..., n, find the missing number.
Calculate the sum of all numbers from 0 to n and subtract the sum of the array elements.
Use the XOR operation to find the missing number.
Sort the array and find the missing element.
Use a hash table to store the presence of each number.
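Sketches of the two O(n)-time approaches from the options above (function names are illustrative). The sum variant subtracts the array total from n(n+1)/2; the XOR variant avoids any overflow concern in fixed-width languages because every present number cancels itself.

```python
def missing_by_sum(nums):
    # Sum of 0..n minus the actual sum leaves the missing value.
    n = len(nums)
    return n * (n + 1) // 2 - sum(nums)

def missing_by_xor(nums):
    # XOR 0..n against every element; all present numbers cancel,
    # leaving only the missing one.
    acc = len(nums)
    for i, v in enumerate(nums):
        acc ^= i ^ v
    return acc

print(missing_by_sum([3, 0, 1]))  # 2
print(missing_by_xor([3, 0, 1]))  # 2
```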
In a real-world application, you are using a dynamic array to store a constantly growing dataset. You notice that the performance degrades significantly during the array resizing operations. What strategy could you employ to mitigate this performance bottleneck?
Increase the frequency of resizing, reallocating the array with smaller size increments.
Optimize the algorithm that processes the data to reduce the overall number of insertions into the array.
Implement a custom memory allocator that reserves larger chunks of contiguous memory in advance.
Switch to a linked list data structure, sacrificing some element access speed for better insertion performance.
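A sketch of the reserve-in-advance idea from the third option (class name, growth factor, and default capacity are all illustrative assumptions): by allocating a large buffer up front and only doubling when it is exhausted, reallocations become rare instead of frequent.

```python
class ReservedArray:
    # Illustrative wrapper: reserve a large capacity up front so appends
    # rarely trigger a reallocation-and-copy.
    def __init__(self, reserve=1024):
        self._buf = [None] * reserve
        self._len = 0

    def append(self, value):
        if self._len == len(self._buf):                 # buffer exhausted:
            self._buf.extend([None] * len(self._buf))   # double capacity once
        self._buf[self._len] = value
        self._len += 1

    def __getitem__(self, i):
        if not 0 <= i < self._len:
            raise IndexError(i)
        return self._buf[i]

a = ReservedArray(reserve=4)
for x in range(10):
    a.append(x)
print(a[9])  # 9
```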
Given an array of n integers, find three elements in the array such that the sum is closest to a given target number. Return the sum of the three integers.
Sort the array and use two pointers to find pairs of elements with a sum close to the target minus the current element.
Use three nested loops to iterate through all possible triplets.
Use a hash table to store the sum of all pairs of elements.
Use dynamic programming to store the closest sum for all subarrays of size three.
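A sketch of the sort-plus-two-pointers approach from the first option (assumed name `three_sum_closest`): fix one element, then move `lo` and `hi` inward while tracking the sum with the smallest distance to the target, for O(n^2) total work.

```python
def three_sum_closest(nums, target):
    # Sort once, then for each fixed element scan the rest with two
    # pointers, keeping the sum closest to the target seen so far.
    nums = sorted(nums)
    best = nums[0] + nums[1] + nums[2]
    for i in range(len(nums) - 2):
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if abs(s - target) < abs(best - target):
                best = s
            if s < target:
                lo += 1
            elif s > target:
                hi -= 1
            else:
                return s        # exact match cannot be beaten
    return best

print(three_sum_closest([-1, 2, 1, -4], 1))  # 2
```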
Given an unsorted array of integers, find the length of the longest consecutive sequence.
Sort the array and iterate through it to find the longest consecutive sequence.
Use a hash table to store the elements of the array and their visited status.
Use a sliding window approach to find the longest consecutive sequence.
Use dynamic programming to store the length of the longest consecutive sequence ending at each index.
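A sketch of the hash-set approach (assumed name `longest_consecutive`): counting only starts at sequence starts (an `x` with no `x - 1` present), so every element is visited a constant number of times and the whole scan is O(n).

```python
def longest_consecutive(nums):
    # Only begin counting from a sequence start (x with no x-1 in the
    # set); each element is then touched O(1) times overall.
    s = set(nums)
    best = 0
    for x in s:
        if x - 1 not in s:            # x starts a consecutive run
            length = 1
            while x + length in s:
                length += 1
            best = max(best, length)
    return best

print(longest_consecutive([100, 4, 200, 1, 3, 2]))  # 4
```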
Which sorting algorithm is generally considered IN-PLACE, meaning it requires minimal additional memory overhead?
External Merge Sort
Radix Sort (using counting sort as a subroutine)
Bucket Sort
Quick Sort
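A sketch of why Quick Sort counts as in-place (Lomuto partition shown; scheme choice is an assumption): partitioning rearranges elements by swapping within the array itself, so the only extra memory is the O(log n) recursion stack.

```python
def quicksort(arr, lo=0, hi=None):
    # In-place: all rearrangement happens via swaps inside arr; no
    # auxiliary arrays are allocated (unlike merge sort's merge buffer).
    if hi is None:
        hi = len(arr) - 1
    if lo >= hi:
        return
    pivot = arr[hi]
    i = lo
    for j in range(lo, hi):           # Lomuto partition by swapping in place
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]
    quicksort(arr, lo, i - 1)
    quicksort(arr, i + 1, hi)

data = [5, 2, 9, 1, 5]
quicksort(data)
print(data)  # [1, 2, 5, 5, 9]
```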
In the context of amortized analysis, what is the purpose of the potential function?
To analyze the space complexity of an algorithm.
To calculate the average runtime of a single operation over a sequence of operations.
To determine the maximum possible runtime of a single operation in the worst-case scenario.
To optimize the performance of individual array operations.
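A small worked example (potential function and names are illustrative, not the only valid choice): for a doubling dynamic array, the potential Phi = 2*size - capacity makes every append's amortized cost (actual cost plus the change in Phi) at most a constant, even though an individual append can cost O(n) when it triggers a copy.

```python
def append_costs(n):
    # Doubling dynamic array. Actual cost of an append is 1, plus the
    # number of elements copied when the buffer doubles. With the
    # potential Phi = 2*size - capacity, amortized = actual + delta(Phi)
    # never exceeds 3, which is how the potential function smooths the
    # occasional expensive resize over the cheap appends.
    size, cap = 0, 1
    phi = lambda s, c: 2 * s - c
    for _ in range(n):
        before = phi(size, cap)
        actual = 1
        if size == cap:
            actual += size            # copy all existing elements
            cap *= 2
        size += 1
        amortized = actual + phi(size, cap) - before
        assert amortized <= 3         # constant amortized cost per append
    return size, cap

print(append_costs(100))  # (100, 128)
```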