You are given a string s and a dictionary of strings wordDict. Add spaces in s to construct a sentence where each word is a valid dictionary word. Return all such possible sentences in any order.
Note that the same word in the dictionary may be reused multiple times in the segmentation.
Example 1:
Input: s = "neetcode", wordDict = ["neet","code"]
Output: ["neet code"]

Example 2:
Input: s = "racecariscar", wordDict = ["racecar","race","car","is"]
Output: ["racecar is car","race car is car"]

Example 3:
Input: s = "catsincars", wordDict = ["cats","cat","sin","in","car"]
Output: []

Constraints:

1 <= s.length <= 20
1 <= wordDict.length <= 1000
1 <= wordDict[i].length <= 10
s and wordDict[i] consist of only lowercase English letters.
All the strings of wordDict are unique.

We need to find all possible ways to segment the string into valid dictionary words. Starting from the beginning of the string, we try every possible prefix that exists in the dictionary. When we find a valid prefix, we recursively process the remaining substring. When we reach the end of the string, we have found a valid segmentation and add it to our result.
Convert wordDict into a set for O(1) lookups, then run the backtracking from index 0.

class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        wordDict = set(wordDict)

        def backtrack(i):
            if i == len(s):
                res.append(" ".join(cur))
                return
            for j in range(i, len(s)):
                w = s[i:j + 1]
                if w in wordDict:
                    cur.append(w)
                    backtrack(j + 1)
                    cur.pop()

        cur = []
        res = []
        backtrack(0)
        return res

Where n is the length of the string s and m is the sum of the lengths of the strings in wordDict.
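As a quick sanity check, here is a compact standalone version of the same backtracking idea applied to Example 2 (the function name word_break and the variable names are my own):

```python
def word_break(s, words):
    word_set = set(words)  # O(1) membership checks
    res, cur = [], []

    def backtrack(i):
        if i == len(s):            # consumed the whole string: record the sentence
            res.append(" ".join(cur))
            return
        for j in range(i, len(s)):
            w = s[i:j + 1]
            if w in word_set:
                cur.append(w)      # choose this word
                backtrack(j + 1)   # segment the rest of the string
                cur.pop()          # undo the choice before trying longer prefixes

    backtrack(0)
    return res

print(sorted(word_break("racecariscar", ["racecar", "race", "car", "is"])))
# ['race car is car', 'racecar is car']
```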
Building a Trie from the dictionary words allows us to efficiently check prefixes while traversing the string. Instead of checking each substring against a set, we walk character by character through the Trie. This can provide early termination when no dictionary word starts with the current prefix, avoiding unnecessary substring operations.
Insert every dictionary word into the Trie, then start backtracking from index 0 with an empty path.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.isWord = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def addWord(self, word):
        curr = self.root
        for c in word:
            if c not in curr.children:
                curr.children[c] = TrieNode()
            curr = curr.children[c]
        curr.isWord = True

class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        trie = Trie()
        for word in wordDict:
            trie.addWord(word)

        def backtrack(i, path):
            if i == len(s):
                res.append(" ".join(path))
                return
            node = trie.root
            word = []
            for j in range(i, len(s)):
                char = s[j]
                if char not in node.children:
                    break
                word.append(char)
                node = node.children[char]
                if node.isWord:
                    path.append("".join(word))
                    backtrack(j + 1, path)
                    path.pop()

        res = []
        backtrack(0, [])
        return res

Where n is the length of the string s and m is the sum of the lengths of the strings in wordDict.
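To see the early-termination benefit concretely, this standalone sketch (all identifiers are my own) walks the trie from one starting index and stops as soon as no dictionary word can extend the current prefix, without ever slicing substrings for the set check:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

def build_trie(words):
    root = TrieNode()
    for w in words:
        node = root
        for c in w:
            node = node.children.setdefault(c, TrieNode())
        node.is_word = True
    return root

def prefixes_from(s, start, root):
    """Return all dictionary words that begin at index `start` of s."""
    found, node = [], root
    for i in range(start, len(s)):
        node = node.children.get(s[i])
        if node is None:  # no dictionary word starts with this prefix: stop early
            break
        if node.is_word:
            found.append(s[start:i + 1])
    return found

root = build_trie(["cats", "cat", "sin", "in", "car"])
print(prefixes_from("catsincars", 0, root))  # words starting at index 0
```

At index 0 the walk finds "cat" and "cats", then stops at 'i' because the "cats" node has no children, so longer substrings are never examined.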
The pure backtracking approach may recompute results for the same suffix multiple times. By caching the list of all valid sentences that can be formed starting from each index, we avoid redundant computation. When we encounter a starting position we have already processed, we simply return the cached result.
Store computed results in a cache dictionary keyed by the starting index. Each call segments the suffix from i to the end: if i equals the string length, return a list containing an empty string; if i is already in the cache, return the cached result. Otherwise try every prefix from i to the end, and for each valid dictionary word, get all sentences from the next position and prepend the current word. Store the result in the cache before returning.

class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        wordDict = set(wordDict)
        cache = {}

        def backtrack(i):
            if i == len(s):
                return [""]
            if i in cache:
                return cache[i]
            res = []
            for j in range(i, len(s)):
                w = s[i:j + 1]
                if w not in wordDict:
                    continue
                strings = backtrack(j + 1)
                for substr in strings:
                    sentence = w
                    if substr:
                        sentence += " " + substr
                    res.append(sentence)
            cache[i] = res
            return res

        return backtrack(0)

Where n is the length of the string s and m is the sum of the lengths of the strings in wordDict.
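The payoff of caching can be measured directly. This standalone sketch (the driver count_calls is my own, not part of the solution above) counts recursive calls with and without the cache on a string with many overlapping segmentations; the cached version computes each suffix only once:

```python
def count_calls(s, words, use_cache):
    word_set = set(words)
    cache = {}
    calls = 0

    def backtrack(i):
        nonlocal calls
        calls += 1                     # count every invocation
        if i == len(s):
            return [""]
        if use_cache and i in cache:   # reuse the previously computed suffix
            return cache[i]
        res = []
        for j in range(i, len(s)):
            w = s[i:j + 1]
            if w in word_set:
                for suffix in backtrack(j + 1):
                    res.append(w if not suffix else w + " " + suffix)
        if use_cache:
            cache[i] = res
        return res

    backtrack(0)
    return calls

s, words = "a" * 12, ["a", "aa", "aaa"]
print(count_calls(s, words, use_cache=False))  # grows exponentially with len(s)
print(count_calls(s, words, use_cache=True))   # roughly linear in len(s)
```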
Instead of recursing from the start, we can build the solution iteratively from the beginning. For each position i, we store all valid sentences that can be formed using characters from index 0 to i-1. We extend existing sentences by appending new words when a valid dictionary word ends at position i.
dp[i] contains all valid sentences using the first i characters. Initialize dp[0] with an empty string as the base case. For each i from 1 to n, check all possible last words ending at i: if s[j:i] is in the dictionary and dp[j] is not empty, extend each sentence in dp[j] by appending the new word. At the end, dp[n] contains all valid segmentations of the entire string.

class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        wordSet = set(wordDict)
        n = len(s)
        dp = [[] for _ in range(n + 1)]
        dp[0] = [""]
        for i in range(1, n + 1):
            for j in range(i):
                if s[j:i] in wordSet:
                    for sentence in dp[j]:
                        dp[i].append((sentence + " " + s[j:i]).strip())
        return dp[n]

Where n is the length of the string s and m is the sum of the lengths of the strings in wordDict.
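Tracing the table for Example 1 makes the recurrence concrete (a standalone sketch with my own variable names): only indices where some dictionary word ends get non-empty entries, and every other dp[i] stays empty.

```python
words = {"neet", "code"}
s = "neetcode"
n = len(s)
dp = [[] for _ in range(n + 1)]
dp[0] = [""]  # empty prefix: one (empty) sentence to extend

for i in range(1, n + 1):
    for j in range(i):
        # s[j:i] is a candidate last word; dp[j] must hold valid prefixes
        if s[j:i] in words and dp[j]:
            for sentence in dp[j]:
                dp[i].append((sentence + " " + s[j:i]).strip())

for i, entry in enumerate(dp):
    if entry:
        print(i, entry)  # print the non-empty states only
```

For "neetcode" only dp[0], dp[4] (after "neet"), and dp[8] (after "neet code") are non-empty.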
This combines the benefits of Trie-based prefix matching with memoization. The Trie provides efficient character-by-character matching and early termination, while the cache prevents recomputation of results for the same starting positions. This is particularly effective when the dictionary contains many words with common prefixes.
Use a cache to store results for each starting index. From each index i, walk the Trie character by character and recurse on every match.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.isWord = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def addWord(self, word):
        curr = self.root
        for c in word:
            if c not in curr.children:
                curr.children[c] = TrieNode()
            curr = curr.children[c]
        curr.isWord = True

class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        trie = Trie()
        for word in wordDict:
            trie.addWord(word)
        cache = {}

        def backtrack(index):
            if index == len(s):
                return [""]
            if index in cache:
                return cache[index]
            res = []
            curr = trie.root
            for i in range(index, len(s)):
                char = s[i]
                if char not in curr.children:
                    break
                curr = curr.children[char]
                if curr.isWord:
                    for suffix in backtrack(i + 1):
                        if suffix:
                            res.append(s[index:i + 1] + " " + suffix)
                        else:
                            res.append(s[index:i + 1])
            cache[index] = res
            return res

        return backtrack(0)

Where n is the length of the string s and m is the sum of the lengths of the strings in wordDict.
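Running the combined approach on Example 3 shows both mechanisms at work: the trie finds partial matches like "cat", "cats", and "sin", but every branch eventually dead-ends, so the cached result for index 0 is the empty list. This compact standalone rewrite uses my own names:

```python
class Node:
    def __init__(self):
        self.children, self.is_word = {}, False

def word_break_trie_memo(s, words):
    root = Node()
    for w in words:                      # build the trie
        node = root
        for c in w:
            node = node.children.setdefault(c, Node())
        node.is_word = True

    cache = {}

    def backtrack(index):
        if index == len(s):
            return [""]
        if index in cache:               # reuse previously computed suffixes
            return cache[index]
        res, node = [], root
        for i in range(index, len(s)):
            node = node.children.get(s[i])
            if node is None:             # no dictionary word extends this prefix
                break
            if node.is_word:
                for suffix in backtrack(i + 1):
                    word = s[index:i + 1]
                    res.append(word + " " + suffix if suffix else word)
        cache[index] = res
        return res

    return backtrack(0)

print(word_break_trie_memo("catsincars", ["cats", "cat", "sin", "in", "car"]))  # []
```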
When recursion reaches the end of the string, returning an empty list [] instead of a list with an empty string [""] causes all valid sentences to be lost since there's nothing to append the last word to.
# Wrong: returns empty list, loses all sentences
if i == len(s):
    return []

# Correct: returns list with empty string for concatenation
if i == len(s):
    return [""]

When concatenating words to form sentences, forgetting to add spaces between words results in malformed output like "catsand" instead of "cats and".
# Wrong: no space between words
sentence = word + substr

# Correct: add space only if there's a suffix
sentence = word if not substr else word + " " + substr

Checking if a substring exists in the word dictionary using a list results in O(m * t) lookup per check instead of O(t), where m is the number of dictionary words and t is the length of the substring. This significantly slows down the solution.
# Wrong: O(m * t) lookup per check
if s[i:j+1] in wordDict:  # wordDict is a list
    ...

# Correct: O(t) lookup with hash set
wordSet = set(wordDict)
if s[i:j+1] in wordSet:
    ...

Without memoization, the same suffix is recomputed many times, leading to exponential time complexity even when it could be avoided. This causes Time Limit Exceeded on inputs with many overlapping subproblems.
# Wrong: recomputes same suffixes repeatedly
def backtrack(i):
    if i == len(s):
        return [""]
    res = []
    for j in range(i, len(s)):
        ...  # no caching

# Correct: cache results for each starting index
def backtrack(i):
    if i in cache:
        return cache[i]
    ...  # compute and store in cache[i]

When using a shared list to build the current path, forgetting to pop the last word after recursion corrupts the path for other branches.
# Wrong: path keeps growing incorrectly
cur.append(word)
backtrack(j + 1)
# forgot to pop!

# Correct: restore state after recursion
cur.append(word)
backtrack(j + 1)
cur.pop()