You are given an array of integers stones where stones[i] is the weight of the ith stone.
We are playing a game with the stones. On each turn, we choose any two stones and smash them together. Suppose the stones have weights x and y with x <= y. The result of this smash is:

- If x == y, both stones are destroyed.
- If x != y, the stone of weight x is destroyed, and the stone of weight y has new weight y - x.

At the end of the game, there is at most one stone left.
Return the smallest possible weight of the remaining stone. If there are no stones left, return 0.
Example 1:
Input: stones = [2,4,1,5,6,3]
Output: 1
Explanation:
Smash 2 and 4 to get 2; the array becomes [2,1,5,6,3].
Smash 5 and 6 to get 1; the array becomes [2,1,1,3].
Smash 2 and 3 to get 1; the array becomes [1,1,1].
Smash 1 and 1 to get 0; the array becomes [1,0].
Smash 1 and 0 to get 1; the array becomes [1], which is optimal.
Example 2:
Input: stones = [4,4,1,7,10]
Output: 2

Constraints:
- 1 <= stones.length <= 30
- 1 <= stones[i] <= 100

The key insight is that smashing stones is equivalent to partitioning them into two groups and minimizing the difference between their sums. When two stones collide, the result is the absolute difference of their weights. If we think of assigning a positive or negative sign to each stone, the final remaining weight is the absolute value of the signed sum. This transforms the problem into finding a subset whose sum is as close to half the total as possible.
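To see this equivalence concretely, here is a small brute-force cross-check (an illustrative sketch, not one of the solutions): it exhaustively simulates every smash sequence and compares the result with the minimum absolute signed sum.

```python
from itertools import product

def smash_min(stones):
    # Exhaustively simulate every possible sequence of smashes.
    if len(stones) <= 1:
        return stones[0] if stones else 0
    best = float('inf')
    for i in range(len(stones)):
        for j in range(i + 1, len(stones)):
            rest = [s for k, s in enumerate(stones) if k not in (i, j)]
            rest.append(abs(stones[i] - stones[j]))
            best = min(best, smash_min(rest))
    return best

def signed_min(stones):
    # Minimum |sum| over all +/- sign assignments.
    return min(abs(sum(s * g for s, g in zip(stones, signs)))
               for signs in product((1, -1), repeat=len(stones)))

print(smash_min([2, 4, 1, 5, 6, 3]), signed_min([2, 4, 1, 5, 6, 3]))
```

Both functions agree on the examples above; the exhaustive simulation is only feasible for a handful of stones, which is exactly why the partition view matters.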
class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = (stoneSum + 1) // 2

        def dfs(i, total):
            if total >= target or i == len(stones):
                return abs(total - (stoneSum - total))
            return min(dfs(i + 1, total), dfs(i + 1, total + stones[i]))

        return dfs(0, 0)

This runs in O(2^n) time and O(n) space for the recursion stack, where n is the number of stones.
The recursive solution recomputes the same subproblems many times. For example, reaching a total of 10 using stones at different indices might happen through multiple paths. By caching results based on the current index and running total, we avoid redundant work and speed up the solution significantly.
The approach mirrors the recursion:

1. Compute stoneSum and target as in the recursive approach.
2. Cache each computed result keyed by (index, total), so repeated states are answered from the cache.

class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = (stoneSum + 1) // 2
        dp = {}

        def dfs(i, total):
            if total >= target or i == len(stones):
                return abs(total - (stoneSum - total))
            if (i, total) in dp:
                return dp[(i, total)]
            dp[(i, total)] = min(dfs(i + 1, total), dfs(i + 1, total + stones[i]))
            return dp[(i, total)]

        return dfs(0, 0)

This runs in O(n * S) time and space, where n is the number of stones and S is the sum of the weights of the stones.
Instead of working recursively from the first stone forward, we can build up solutions iteratively. We create a 2D table where dp[i][t] represents the maximum sum achievable using the first i stones without exceeding capacity t. This is essentially a 0/1 knapsack problem where we want to pack stones into a knapsack of capacity target to maximize the total weight.
The steps:

1. Initialize a table dp of size (n+1) x (target+1) with zeros.
2. For each i from 1 to n and each t from 0 to target: if the stone fits (t >= stones[i-1]), take the maximum of skipping it or including it; otherwise carry the value over from the previous row.
3. The answer is stoneSum - 2 * dp[n][target].

class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = stoneSum // 2
        n = len(stones)
        dp = [[0] * (target + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for t in range(target + 1):
                if t >= stones[i - 1]:
                    dp[i][t] = max(dp[i - 1][t], dp[i - 1][t - stones[i - 1]] + stones[i - 1])
                else:
                    dp[i][t] = dp[i - 1][t]
        return stoneSum - 2 * dp[n][target]

This runs in O(n * S) time and space, where n is the number of stones and S is the sum of the weights of the stones.
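To build intuition, the final row of the table can be inspected on the second example; dp[n][t] is the best subset sum that fits in capacity t. The helper below is a standalone sketch of the same construction:

```python
def knapsack_table(stones):
    # Builds the full (n+1) x (target+1) table described above.
    stoneSum = sum(stones)
    target = stoneSum // 2
    n = len(stones)
    dp = [[0] * (target + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for t in range(target + 1):
            if t >= stones[i - 1]:
                dp[i][t] = max(dp[i - 1][t],
                               dp[i - 1][t - stones[i - 1]] + stones[i - 1])
            else:
                dp[i][t] = dp[i - 1][t]
    return dp

table = knapsack_table([4, 4, 1, 7, 10])  # stoneSum = 26, target = 13
print(table[-1])        # best achievable subset sum at each capacity 0..13
print(26 - 2 * table[-1][13])
```

The last row climbs toward target but tops out at 12 (e.g. 4 + 7 + 1), so the final answer is 26 - 2 * 12 = 2.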
Looking at the bottom-up solution, each row only depends on the previous row. This means we don't need to store the entire 2D table. A single 1D array is sufficient if we iterate through capacities in reverse order to avoid overwriting values we still need.
The steps:

1. Initialize a 1D array dp of size target + 1 with zeros.
2. For each stone, iterate t from target down to the stone's weight, updating dp[t] as the maximum of keeping the current value or adding the stone.
3. The answer is stoneSum - 2 * dp[target].

class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = stoneSum // 2
        dp = [0] * (target + 1)
        for stone in stones:
            for t in range(target, stone - 1, -1):
                dp[t] = max(dp[t], dp[t - stone] + stone)
        return stoneSum - 2 * dp[target]

This runs in O(n * S) time and O(S) space, where n is the number of stones and S is the sum of the weights of the stones.
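A quick cross-check (an illustrative sketch) compares the 1D DP against directly enumerating every subset sum, which is feasible for small inputs:

```python
from itertools import combinations

def last_stone_1d(stones):
    # 1D knapsack: dp[t] = max subset sum not exceeding t.
    stoneSum = sum(stones)
    target = stoneSum // 2
    dp = [0] * (target + 1)
    for stone in stones:
        for t in range(target, stone - 1, -1):
            dp[t] = max(dp[t], dp[t - stone] + stone)
    return stoneSum - 2 * dp[target]

def last_stone_brute(stones):
    # Minimum |S - 2 * subset_sum| over all subsets.
    total = sum(stones)
    return min(abs(total - 2 * sum(c))
               for r in range(len(stones) + 1)
               for c in combinations(stones, r))

for case in ([2, 4, 1, 5, 6, 3], [4, 4, 1, 7, 10], [31], [1, 2]):
    assert last_stone_1d(case) == last_stone_brute(case)
```

The single-stone case [31] is a useful edge case: the inner loop never runs (the stone exceeds target), dp stays all zeros, and both functions return 31.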
Instead of tracking the maximum achievable sum at each capacity, we can simply track which sums are reachable. A hash set stores all possible subset sums. For each stone, we generate new reachable sums by adding the stone's weight to existing sums. The answer is the largest reachable sum that doesn't exceed target.
The steps:

1. Start with a set containing only 0 (representing the empty subset).
2. For each stone, extend the set with each existing sum plus the stone's weight, discarding sums that exceed target. If a sum hits target exactly, return stoneSum - 2 * target immediately.
3. Otherwise the answer is stoneSum - 2 * max(dp).

class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = stoneSum // 2
        dp = {0}
        for stone in stones:
            new_dp = set(dp)
            for val in dp:
                if val + stone == target:
                    return stoneSum - 2 * target
                if val + stone < target:
                    new_dp.add(val + stone)
            dp = new_dp
        return stoneSum - 2 * max(dp)

This runs in O(n * S) time and O(S) space, where n is the number of stones and S is the sum of the weights of the stones.
A bitset can represent reachable sums more compactly than a hash set. Each bit position represents whether that sum is achievable. Shifting the bitset left by a stone's weight and OR-ing with the original efficiently computes all new reachable sums in a single operation.
The steps:

1. Start with only bit 0 set (sum 0 is reachable).
2. For each stone, shift the bitset left by the stone's weight and OR it with the original.
3. Scan from target down to 0 and find the first set bit t; this is the largest achievable sum not exceeding target. The answer is stoneSum - 2 * t.

With w-bit words this runs in roughly O(n * S / w) time and uses O(S) bits of space, where n is the number of stones and S is the sum of the weights of the stones.
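In Python, an arbitrary-precision integer serves naturally as the bitset; a sketch of the approach described above:

```python
from typing import List

class Solution:
    def lastStoneWeightII(self, stones: List[int]) -> int:
        stoneSum = sum(stones)
        target = stoneSum // 2
        # Bit t of `bits` is 1 iff some subset of stones sums to t.
        bits = 1  # sum 0 is reachable via the empty subset
        for stone in stones:
            bits |= bits << stone
        # Find the largest achievable sum not exceeding target.
        for t in range(target, -1, -1):
            if bits >> t & 1:
                return stoneSum - 2 * t
        return stoneSum  # unreachable: bit 0 is always set
```

The entire inner knapsack loop collapses into one shift-and-OR per stone, which Python performs on machine words under the hood.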
The problem appears to be about simulating stone collisions, but it is actually a partition problem. The key insight is that the final result equals the absolute difference between two groups of stones. Missing this connection leads to inefficient simulation approaches that fail on larger inputs.
In the knapsack formulations, the capacity should be stoneSum // 2 (floor division), the largest possible sum of the smaller partition. Using ceiling division for the table size or forgetting to halve the sum leads to incorrect DP table dimensions or wrong final calculations.
In the 1D DP approach, iterating from 0 to target instead of from target down to stone causes each stone to be counted multiple times. This happens because updated values at lower indices affect higher indices within the same iteration, violating the 0/1 knapsack constraint.
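The effect is easy to demonstrate. The helper below (a deliberately parameterized sketch) runs the same update in both directions; the forward loop lets the stone of weight 5 be counted twice:

```python
def max_subset_sum(stones, target, forward):
    # 1D knapsack update; only the reverse direction is a valid 0/1 knapsack.
    dp = [0] * (target + 1)
    for stone in stones:
        rng = range(stone, target + 1) if forward else range(target, stone - 1, -1)
        for t in rng:
            dp[t] = max(dp[t], dp[t - stone] + stone)
    return dp[target]

# With stones [5, 9] and capacity 10, only 9 is achievable without reuse.
print(max_subset_sum([5, 9], 10, forward=False))  # 9: correct 0/1 answer
print(max_subset_sum([5, 9], 10, forward=True))   # 10: counts 5 twice (5 + 5)
```

The forward direction is not useless in general; it is exactly the update for the unbounded knapsack, where reusing an item is allowed.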