1235. Maximum Profit in Job Scheduling - Explanation

Description

You are given n jobs described with three arrays: startTime, endTime, and profit. The i-th job starts at startTime[i], ends at endTime[i], and yields a profit of profit[i].

Return the maximum profit achievable by selecting a subset of jobs such that no two selected jobs overlap in their time ranges.

Note: If a job ends at time X, you are allowed to start another job that begins exactly at time X.

Example 1:

Input: startTime = [4,2,4,8,2], endTime = [5,5,5,10,8], profit = [1,2,8,10,4]

Output: 18

Explanation: The subset to choose to get maximum profit is the third and fourth job (8 + 10 = 18).

Example 2:

Input: startTime = [2,3,4,5,7], endTime = [4,6,11,7,10], profit = [100,100,180,150,140]

Output: 390

Explanation: The subset to choose to get maximum profit is the first, fourth, and fifth jobs (100 + 150 + 140 = 390).

Constraints:

  • 1 <= startTime.length == endTime.length == profit.length <= 50,000
  • 1 <= startTime[i] < endTime[i] <= 1,000,000,000
  • 1 <= profit[i] <= 10,000




Prerequisites

Before attempting this problem, you should be comfortable with:

  • Dynamic Programming (Memoization) - Using recursion with caching to avoid recomputing overlapping subproblems
  • Sorting - Sorting jobs by start time to enable efficient forward traversal
  • Binary Search - Finding the next non-overlapping job in O(log n) time instead of linear scan
  • Interval Scheduling - Understanding how to handle overlapping intervals and make optimal skip/take decisions

1. Dynamic Programming (Top-Down)

Intuition

This is a classic interval scheduling problem. For each job, we have two choices: skip it or take it. If we take a job, we earn its profit but must skip all overlapping jobs. By sorting jobs by start time and using memoization, we can efficiently explore all valid schedules.

Algorithm

  1. Combine start times, end times, and profits into intervals and sort by start time.
  2. Use recursion with memoization starting from index 0.
  3. At each index i, either skip the job or take it.
  4. If taking the job, find the next non-overlapping job by scanning forward.
  5. Return the maximum profit achievable.

from typing import List

class Solution:
    def jobScheduling(self, startTime: List[int], endTime: List[int], profit: List[int]) -> int:
        intervals = sorted(zip(startTime, endTime, profit))
        cache = {}

        def dfs(i):
            if i == len(intervals):
                return 0
            if i in cache:
                return cache[i]

            # don't include
            res = dfs(i + 1)

            # include
            j = i + 1
            while j < len(intervals):
                if intervals[i][1] <= intervals[j][0]:
                    break
                j += 1

            cache[i] = res = max(res, intervals[i][2] + dfs(j))
            return res

        return dfs(0)

Time & Space Complexity

  • Time complexity: O(n^2)
  • Space complexity: O(n)
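As a quick sanity check, the skip/take recurrence above can be condensed into a small standalone function and run against both examples. Here `job_scheduling` is a hypothetical name, and `functools.lru_cache` stands in for the hand-rolled dictionary cache:

```python
from functools import lru_cache

def job_scheduling(startTime, endTime, profit):
    # Sort jobs by start time so later indices never start earlier.
    intervals = sorted(zip(startTime, endTime, profit))
    n = len(intervals)

    @lru_cache(maxsize=None)
    def dfs(i):
        if i == n:
            return 0
        # Option 1: skip job i.
        best = dfs(i + 1)
        # Option 2: take job i, then jump to the first non-overlapping job.
        j = i + 1
        while j < n and intervals[j][0] < intervals[i][1]:
            j += 1
        return max(best, intervals[i][2] + dfs(j))

    return dfs(0)

print(job_scheduling([4,2,4,8,2], [5,5,5,10,8], [1,2,8,10,4]))          # 18
print(job_scheduling([2,3,4,5,7], [4,6,11,7,10], [100,100,180,150,140]))  # 390
```

Both calls reproduce the expected outputs of Example 1 and Example 2.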

2. Dynamic Programming (Top-Down) + Binary Search

Intuition

The linear scan to find the next non-overlapping job can be optimized using binary search. Since jobs are sorted by start time, we can binary search for the first job whose start time is at least the current job's end time.

Algorithm

  1. Sort intervals by start time.
  2. Use recursion with memoization.
  3. At each index, either skip the job or take it.
  4. When taking a job, use binary search to find the next non-overlapping job.
  5. Return the maximum profit.

import bisect
from typing import List

class Solution:
    def jobScheduling(self, startTime: List[int], endTime: List[int], profit: List[int]) -> int:
        intervals = sorted(zip(startTime, endTime, profit))
        cache = {}

        def dfs(i):
            if i == len(intervals):
                return 0
            if i in cache:
                return cache[i]

            # don't include
            res = dfs(i + 1)

            # include
            # first interval whose start time is >= the current end time;
            # the (-1, -1) padding makes the probe sort before any real
            # job that starts exactly at intervals[i][1]
            j = bisect.bisect(intervals, (intervals[i][1], -1, -1))
            cache[i] = res = max(res, intervals[i][2] + dfs(j))
            return res

        return dfs(0)

Time & Space Complexity

  • Time complexity: O(n log n)
  • Space complexity: O(n)

3. Dynamic Programming (Top-Down) + Binary Search (Optimal)

Intuition

Instead of creating new interval objects, we can sort indices by start time and work with the original arrays. This reduces memory allocations while maintaining the same algorithmic approach.

Algorithm

  1. Create an index array and sort it by start time of the corresponding jobs.
  2. Use recursion with memoization on the sorted indices.
  3. At each position, either skip the current job or take it.
  4. Binary search for the next non-overlapping job using the original arrays.
  5. Return the maximum profit.

from typing import List

class Solution:
    def jobScheduling(self, startTime: List[int], endTime: List[int], profit: List[int]) -> int:
        n = len(startTime)
        index = list(range(n))
        index.sort(key=lambda i: startTime[i])

        cache = [-1] * n

        def dfs(i):
            if i == n:
                return 0
            if cache[i] != -1:
                return cache[i]

            res = dfs(i + 1)

            # binary search for the first sorted position whose job
            # starts at or after the current job's end time
            left, right, j = i + 1, n, n
            while left < right:
                mid = (left + right) // 2
                if startTime[index[mid]] >= endTime[index[i]]:
                    j = mid
                    right = mid
                else:
                    left = mid + 1

            cache[i] = res = max(res, profit[index[i]] + dfs(j))
            return res

        return dfs(0)

Time & Space Complexity

  • Time complexity: O(n log n)
  • Space complexity: O(n)

4. Dynamic Programming (Bottom-Up) + Binary Search

Intuition

We can convert the top-down approach to bottom-up by iterating from the last job to the first. At each position, we compute the maximum profit considering all jobs from that point onwards. This eliminates recursion overhead.

Algorithm

  1. Sort indices by start time.
  2. Create a DP array where dp[i] represents the maximum profit starting from job i.
  3. Iterate from n-1 down to 0.
  4. For each job, use binary search to find the next non-overlapping job.
  5. dp[i] = max(dp[i+1], profit[i] + dp[j]) where j is the next non-overlapping job.
  6. Return dp[0].

from typing import List

class Solution:
    def jobScheduling(self, startTime: List[int], endTime: List[int], profit: List[int]) -> int:
        n = len(startTime)
        index = list(range(n))
        index.sort(key=lambda i: startTime[i])

        dp = [0] * (n + 1)

        for i in range(n - 1, -1, -1):
            # binary search for the first sorted position whose job
            # starts at or after the current job's end time
            left, right, j = i + 1, n, n
            while left < right:
                mid = (left + right) // 2
                if startTime[index[mid]] >= endTime[index[i]]:
                    j = mid
                    right = mid
                else:
                    left = mid + 1

            dp[i] = max(dp[i + 1], profit[index[i]] + dp[j])

        return dp[0]

Time & Space Complexity

  • Time complexity: O(n log n)
  • Space complexity: O(n)

Common Pitfalls

Sorting by End Time Instead of Start Time

For the top-down DP approach, jobs must be sorted by start time so that when we take a job, we can efficiently find the next non-overlapping job. Sorting by end time breaks the forward progression logic and makes binary search for the next valid job incorrect.

Using a Strict Inequality at the Boundary

When finding the next non-overlapping job, the condition should be startTime[j] >= endTime[i] (using >=, not >). Jobs that start exactly when another ends are non-overlapping. Using a strict greater-than excludes valid job combinations and produces suboptimal results.
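The boundary behavior is easy to see with bisect on a small set of hypothetical sorted start times:

```python
import bisect

# Start times of jobs already sorted by start time (hypothetical values).
starts = [2, 4, 5, 8]
end_of_current = 5  # the job we just took ends at time 5

# Correct: first index whose start time is >= the current end time.
j = bisect.bisect_left(starts, end_of_current)
print(starts[j])         # 5 -> a job starting exactly at time 5 is allowed

# Wrong: demanding a strictly later start skips that valid job.
j_strict = bisect.bisect_right(starts, end_of_current)
print(starts[j_strict])  # 8
```

bisect_left implements the >= comparison; bisect_right would implement the incorrect strict > and miss the back-to-back job.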

Forgetting to Handle the Skip Case

At each job, there are two choices: take it or skip it. A common mistake is only considering taking the job and recursing to the next non-overlapping position. You must also consider skipping the current job entirely with dfs(i + 1) and take the maximum of both options.

Using a Linear Scan Instead of Binary Search

While a linear scan to find the next non-overlapping job produces correct results, it degrades the time complexity from O(n log n) to O(n^2). For large inputs, this causes time limit exceeded errors. Always use binary search to find the next valid job index.

Not Memoizing the Recursion

Without memoization, the recursive solution revisits the same subproblems exponentially many times. Each index can be reached through multiple paths, and recomputing the answer each time leads to exponential time complexity. Always cache the result for each index after computing it.
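The blow-up is visible even on Example 1 by counting how often each subproblem index is entered when the cache is removed. `count_visits` is a hypothetical instrumentation helper, not part of the solution:

```python
from collections import Counter

def count_visits(startTime, endTime, profit):
    """Run the skip/take recursion WITHOUT caching and count how many
    times each subproblem index is entered."""
    intervals = sorted(zip(startTime, endTime, profit))
    n = len(intervals)
    counts = Counter()

    def dfs(i):
        if i == n:
            return 0
        counts[i] += 1
        res = dfs(i + 1)  # skip job i
        j = i + 1         # linear scan to the next non-overlapping job
        while j < n and intervals[j][0] < intervals[i][1]:
            j += 1
        return max(res, intervals[i][2] + dfs(j))

    dfs(0)
    return counts

visits = count_visits([4,2,4,8,2], [5,5,5,10,8], [1,2,8,10,4])
print(visits)  # index 4 is entered 5 times via different paths
```

With memoization, each index is computed exactly once; without it, the revisit counts grow rapidly with n.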