# Word Break

Bottom-up DP over string prefixes. `dp[i]` = can we segment `s[0..i]`? Try all possible last words.
## Problem
Given a string s and a dictionary wordDict of words, return true if s can be segmented into a space-separated sequence of one or more dictionary words. A word may be reused any number of times.
## Intuition
Stand at position i in s. The question is: can the prefix s[0..i] be segmented? That question has a tidy recurrence:
```
dp[i] = true  if  ∃ j < i  where  dp[j] && s[j..i] ∈ dict
```
In English: s[0..i] is breakable if we can find some split point j such that the left half is already known to be breakable and the right half is a dictionary word.
Base case: dp[0] = true (the empty prefix is vacuously segmented). Answer: dp[n].
The key insight is that we never re-solve the same prefix. Naive recursion on start index branches on every dictionary word and can blow up exponentially ("aaaaa..." against ["a", "aa", "aaa", ...]). Memoization — or the bottom-up table — collapses that back to polynomial.
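To see the blow-up concretely, here is a small counting harness (the `countCalls` helper name is invented for this sketch) that runs the same recursion with and without a memo on an adversarial input that can never be segmented, so every split gets explored:

```typescript
// Count how many times the recursion is entered, with or without a memo.
function countCalls(s: string, dict: Set<string>, memo?: Map<number, boolean>): number {
  let calls = 0;
  const go = (i: number): boolean => {
    calls++;
    if (i === s.length) return true;
    const hit = memo?.get(i);
    if (hit !== undefined) return hit;
    let ok = false;
    for (let j = i + 1; j <= s.length && !ok; j++) {
      ok = dict.has(s.slice(i, j)) && go(j);
    }
    memo?.set(i, ok);
    return ok;
  };
  go(0);
  return calls;
}

// "aaa…ab" is unbreakable, so the naive recursion revisits each start
// index once per way of composing the prefix out of 1s, 2s, and 3s.
const hard = "a".repeat(20) + "b";
const dict = new Set(["a", "aa", "aaa"]);
console.log(countCalls(hard, dict));            // hundreds of thousands of calls
console.log(countCalls(hard, dict, new Map())); // a few dozen calls
```

With the memo, each start index is solved once and later hits return from cache, so the count stays linear in `n` rather than tribonacci-exponential.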
## Approach

### Memoized Recursion
**O(n³) time · O(n) space.** Walk from the current start index; try every possible end for the next piece; if that slice is in the dictionary, recurse on the remainder. Memoize the result per start index, so each of the n suffixes is solved once; each solve scans O(n) end positions, and each slice-plus-hash-lookup costs O(n). (The alternative formulation loops over the m dictionary words at each index, with an O(word length) fit check; that one is O(n · m · L).)
```rust
use std::collections::HashSet;

impl Solution {
    pub fn word_break(s: String, word_dict: Vec<String>) -> bool {
        let dict: HashSet<&str> = word_dict.iter().map(String::as_str).collect();
        let n = s.len();
        let mut memo: Vec<Option<bool>> = vec![None; n + 1];

        // Can the suffix starting at byte index `i` be segmented?
        fn go(i: usize, s: &str, dict: &HashSet<&str>, memo: &mut Vec<Option<bool>>) -> bool {
            if i == s.len() {
                return true; // empty suffix: vacuously segmented
            }
            if let Some(v) = memo[i] {
                return v;
            }
            for j in (i + 1)..=s.len() {
                if dict.contains(&s[i..j]) && go(j, s, dict, memo) {
                    memo[i] = Some(true);
                    return true;
                }
            }
            memo[i] = Some(false);
            false
        }

        go(0, &s, &dict, &mut memo)
    }
}
```
```typescript
function wordBreak(s: string, wordDict: string[]): boolean {
  const dict = new Set(wordDict);
  const memo = new Map<number, boolean>();
  // Can the suffix starting at index i be segmented?
  const go = (i: number): boolean => {
    if (i === s.length) return true;
    const cached = memo.get(i);
    if (cached !== undefined) return cached;
    for (let j = i + 1; j <= s.length; j++) {
      if (dict.has(s.slice(i, j)) && go(j)) {
        memo.set(i, true);
        return true;
      }
    }
    memo.set(i, false);
    return false;
  };
  return go(0);
}
```

### Bottom-Up DP
**O(n³) time · O(n) space.** Fill dp[0..=n] forward. For each end i, scan each split point j; the prefix is breakable if some earlier dp[j] is true and s[j..i] is in the dictionary. No recursion, no stack concerns, and the loop structure makes the cost easy to read off: O(n²) (j, i) pairs, each paying O(n) to slice and hash.
```rust
use std::collections::HashSet;

impl Solution {
    pub fn word_break(s: String, word_dict: Vec<String>) -> bool {
        let dict: HashSet<&str> = word_dict.iter().map(String::as_str).collect();
        let n = s.len();
        let mut dp = vec![false; n + 1];
        dp[0] = true; // empty prefix
        for i in 1..=n {
            for j in 0..i {
                if dp[j] && dict.contains(&s[j..i]) {
                    dp[i] = true;
                    break;
                }
            }
        }
        dp[n]
    }
}
```
```typescript
function wordBreak(s: string, wordDict: string[]): boolean {
  const dict = new Set(wordDict);
  const n = s.length;
  const dp = new Array<boolean>(n + 1).fill(false);
  dp[0] = true; // empty prefix
  for (let i = 1; i <= n; i++) {
    for (let j = 0; j < i; j++) {
      if (dp[j] && dict.has(s.slice(j, i))) {
        dp[i] = true;
        break;
      }
    }
  }
  return dp[n];
}
```

For large inputs where most words are short, a micro-optimization: bound the inner loop by the max word length in the dictionary. For the LC constraints it's unnecessary.
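A sketch of that bound, under an invented name (`wordBreakBounded`): the recurrence is identical, but `j` never ranges more than the longest dictionary word back from `i`, since no longer slice can possibly match.

```typescript
// Bottom-up word break with the inner scan capped at the longest word.
function wordBreakBounded(s: string, wordDict: string[]): boolean {
  const dict = new Set(wordDict);
  const maxLen = Math.max(0, ...wordDict.map(w => w.length));
  const n = s.length;
  const dp = new Array<boolean>(n + 1).fill(false);
  dp[0] = true; // empty prefix
  for (let i = 1; i <= n; i++) {
    // No word longer than maxLen can end at i, so start j at i - maxLen.
    for (let j = Math.max(0, i - maxLen); j < i; j++) {
      if (dp[j] && dict.has(s.slice(j, i))) {
        dp[i] = true;
        break;
      }
    }
  }
  return dp[n];
}
```

With word lengths capped at L, this drops the split-point scan from O(n) to O(L) per end index.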
## Trace
Bottom-up on s = "leetcode", dict = {"leet", "code"}:
| i | s[0..i] | split j | s[j..i] ∈ dict? | dp[i] |
|---|---|---|---|---|
| 0 | '' | base | — | true |
| 1 | l | 0 | no | false |
| 2 | le | 0, 1 | no | false |
| 3 | lee | 0, 1, 2 | no | false |
| 4 | leet | 0: leet ✓ | yes | true |
| 5 | leetc | 0..4 | no | false |
| 6 | leetco | 0..5 | no | false |
| 7 | leetcod | 0..6 | no | false |
| 8 | leetcode | 4: code ✓ | yes | true |
dp[8] = true. ✓
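As a sanity check on the table, a throwaway variant (the `dpTable` name is invented here) that returns the whole array instead of just `dp[n]` reproduces the last column:

```typescript
// Same bottom-up recurrence, but exposing the full dp array.
function dpTable(s: string, wordDict: string[]): boolean[] {
  const dict = new Set(wordDict);
  const dp = new Array<boolean>(s.length + 1).fill(false);
  dp[0] = true;
  for (let i = 1; i <= s.length; i++) {
    for (let j = 0; j < i; j++) {
      if (dp[j] && dict.has(s.slice(j, i))) {
        dp[i] = true;
        break;
      }
    }
  }
  return dp;
}

console.log(dpTable("leetcode", ["leet", "code"]));
// only dp[0], dp[4], and dp[8] are true, matching the table above
```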
## Edge cases
- Empty dictionary — return `true` only if `s` is empty; otherwise `false`. Both forms handle this: the inner loop never fires, so no prefix past index 0 becomes `true`.
- `s` shorter than every dictionary word — `false`; `dp[i]` stays `false` for all `i ≥ 1`.
- Overlapping dictionary words (e.g. `"cats"` and `"cat"`) — the scan tries every split, so both options are considered. The first to succeed wins via `break`.
- Unicode — Rust's `s[j..i]` uses byte indices, which panics on non-ASCII char boundaries. LC inputs are ASCII, but be explicit about the assumption if you lift this code.
For very large dictionaries, index by prefix into a trie and walk it character-by-character from each dp[i] that's true. That drops the inner loop's dictionary lookup cost and is the right move when m dominates n.
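A sketch of that trie variant (the `TrieNode` shape and `wordBreakTrie` name are assumptions of this sketch, not the only way to do it), assuming ASCII input: from each already-breakable index, walk the trie one character at a time and mark every word-end reached.

```typescript
interface TrieNode {
  children: Map<string, TrieNode>;
  isWord: boolean;
}

function makeNode(): TrieNode {
  return { children: new Map(), isWord: false };
}

function wordBreakTrie(s: string, wordDict: string[]): boolean {
  // Build the trie once: O(total dictionary characters).
  const root = makeNode();
  for (const w of wordDict) {
    let node = root;
    for (const ch of w) {
      if (!node.children.has(ch)) node.children.set(ch, makeNode());
      node = node.children.get(ch)!;
    }
    node.isWord = true;
  }
  const dp = new Array<boolean>(s.length + 1).fill(false);
  dp[0] = true;
  for (let i = 0; i < s.length; i++) {
    if (!dp[i]) continue; // only extend breakable prefixes
    let node: TrieNode | undefined = root;
    // Walk character by character; bail as soon as no word shares this prefix.
    for (let j = i; j < s.length && node; j++) {
      node = node.children.get(s[j]);
      if (node?.isWord) dp[j + 1] = true;
    }
  }
  return dp[s.length];
}
```

Each walk from a breakable index stops as soon as the current slice is no longer a prefix of any word, which is exactly the lookup cost the flat hash-set scan keeps paying in full.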
## Takeaway
When the brute force recurses on string prefixes, a 1D DP over "can I reach position i" almost always collapses the exponential tree. The same shape generalizes to #140 Word Break II (enumerate the segmentations — DP for existence, backtracking for reconstruction), and a close cousin of the reachable-prefix idea underlies #300's patience-sort tails array: each entry effectively records that a subsequence of that length is reachable, plus a witness value for extending it.
One more reflex to absorb: when the naive recursion looks like "try every dictionary word here," reach for memoization keyed on start index first — the state space is the string's positions, not the combinatorial explosion of choices.