Stage 4: Advanced Techniques·Binary Search on Answer
Hard#4108 min readBinary SearchGreedyDynamic ProgrammingLeetCode

Split Array Largest Sum

Binary search on answer

Binary search on max sum. Check: can we split into ≤k parts each ≤ mid?

Published Apr 16, 2026

Problem

Given an integer array nums and an integer k, split nums into k non-empty contiguous subarrays. Minimize the largest sum among those k subarrays and return that minimum.

The subarrays must cover every element, in order. You choose only the cut points.

Input
nums = [7, 2, 5, 10, 8], k = 2
Output
18
Explanation
Split as [7, 2, 5] and [10, 8]. Sums are 14 and 18; the largest is 18. No other split is better.
Input
nums = [1, 2, 3, 4, 5], k = 2
Output
9
Explanation
[1, 2, 3] and [4, 5] give sums 6 and 9. [1, 2, 3, 4] and [5] give 10 and 5. Best max = 9.
Input
nums = [1, 4, 4], k = 3
Output
4
Explanation
Each element stands alone; the largest piece is 4.

Intuition

The cut points form a discrete combinatorial space — C(n-1, k-1) possibilities — so enumerating them directly is out.

Here is the reframe. Forget the cuts. Pretend someone hands you a candidate answer T and asks: "can you split nums into at most k pieces, each with sum ≤ T?" That question has a simple greedy answer: walk left to right, accumulate into the current piece, and whenever adding the next element would exceed T, start a new piece. The minimum number of pieces needed is determined by T alone.

Now notice the monotonicity:

  • if T works, every T' > T also works (bigger caps are more permissive);
  • if T fails, every T' < T also fails.

That is lower-bound binary search over the answer space [max(nums), sum(nums)]. The lower bound is max(nums) because no piece can be smaller than the largest single element; the upper bound is sum(nums) because a single piece always works.

This "binary search on the answer with a feasibility predicate" is the dominant shape for minimize-the-maximum / maximize-the-minimum problems.

Approach

Interval DP

TimeO(n² · k)
SpaceO(n · k)

The canonical DP. Let dp[i][j] = minimum possible largest sum when splitting the first i elements into j pieces. The transition fixes the last piece as nums[p..i]:

dp[i][j] = min over p of max(dp[p][j-1], sum[p..i])

Correct, straightforward, and too slow for the constraints (n 1000, k 50). Included here because it makes the optimization contrast vivid.

impl Solution {
    pub fn split_array(nums: Vec<i32>, k: i32) -> i32 {
        let n = nums.len();
        let k = k as usize;
        let mut prefix = vec![0i64; n + 1];
        for i in 0..n {
            prefix[i + 1] = prefix[i] + nums[i] as i64;
        }

        const INF: i64 = i64::MAX / 2;
        let mut dp = vec![vec![INF; k + 1]; n + 1];
        dp[0][0] = 0;

        for i in 1..=n {
            for j in 1..=k.min(i) {
                for p in (j - 1)..i {
                    let piece = prefix[i] - prefix[p];
                    let worst = dp[p][j - 1].max(piece);
                    if worst < dp[i][j] {
                        dp[i][j]= worst;
                    }
                }
            }
        }

        dp[n][k] as i32
    }
}
Rust · runnable

Binary Search on Answer

TimeO(n log S)
SpaceO(1)
Recommended

Search the minimum feasible cap T inside [max(nums), sum(nums)] where S = sum(nums). The feasibility predicate uses a greedy left-to-right pass: open pieces as late as possible.

The algorithm is textbook lower-bound: if pieces(T) k, the cap is feasible — shrink hi. Otherwise shrink from the left by moving lo.

Each feasibility call is O(n) and we run O(log S) of them, where S 10⁹ for the LeetCode constraints — well under the DP's work.

impl Solution {
    pub fn split_array(nums: Vec<i32>, k: i32) -> i32 {
        let k = k as usize;
        let mut lo: i64 = 0;
        let mut hi: i64 = 0;
        for &x in &nums {
            let v = x as i64;
            if v > lo {
                lo = v;
            }
            hi += v;
        }

        let feasible = |cap: i64| -> bool {
            let mut pieces: usize = 1;
            let mut cur: i64 = 0;
            for &x in &nums {
                let v = x as i64;
                if cur + v > cap {
                    pieces += 1;
                    cur = v;
                    if pieces > k {
                        return false;
                    }
                } else {
                    cur += v;
                }
            }
            true
        };

        while lo < hi {
            let mid= lo + (hi - lo) / 2;
            if feasible(mid) {
                hi= mid;
            } else {
                lo= mid + 1;
            }
        }

        lo as i32
    }
}
Rust · runnable

Trace

Binary search on nums = [7, 2, 5, 10, 8], k = 2. Search space starts at [max, sum] = [10, 32]:

lohimidpieces(mid)action
1032212pieces ≤ 2, move hi to 21
1021153pieces > 2, move lo to 16
1621182pieces ≤ 2, move hi to 18
1618173pieces > 2, move lo to 18

Loop ends with lo = hi = 18. The greedy split at cap 18: [7, 2, 5] (sum 14) then [10, 8] (sum 18). Two pieces; largest is 18.

Edge cases

  • k = 1 — no splits; answer is sum(nums). Binary search collapses since lo = max(nums) sum(nums) = hi and pieces(sum) is always 1.
  • k = n — every element alone; answer is max(nums). The lower bound lo = max(nums) is already feasible.
  • Single element — trivially nums[0].
  • Overflow — for constraints n 1000, nums[i] 10⁶, the sum fits in i32. For safety and to match larger variants, the Rust solution uses i64 internally.
💡
Tip

The greedy feasibility check is the same shape as problem #1011 (Ship Packages) — the cap is a truck capacity, the pieces are days. Different story, identical predicate.

Takeaway

"Minimize the maximum" and "maximize the minimum" are the same pattern under different names. The work is: identify the monotone predicate, bracket the answer space, and binary search.

The DP baseline is worth keeping in your head because it tells you why the binary search wins: the cost model collapses from O( · k) cut enumeration to O(n log S) monotone search. That ratio, not the cleverness, is what makes this pattern show up in real systems — provisioning problems, load balancing, capacity planning.

More in Binary Search on Answer