Two Pointers / Sliding Window

  • Consider sorting the array so that you can use two pointers, moving the L and R pointers accordingly (see the sketch after this list)
  • If the input array is sorted, that might be a hint for using two pointers
  • If unique results are needed, you might have to skip duplicates using a while loop
  • Using two pointers means you might scan through the array at most twice, which makes the time complexity O(2n), i.e. O(n)
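
For example, a minimal two-pointer sketch on a sorted array (the function name and the [-1, -1] "not found" convention are illustrative assumptions):
// Find indices of two numbers in a sorted array that add up to target
function twoSumSorted(nums, target) {
  let l = 0;
  let r = nums.length - 1;
  while (l < r) {
    const sum = nums[l] + nums[r];
    if (sum === target) return [l, r];
    if (sum < target) l++; // need a bigger sum, move the left pointer right
    else r--;              // need a smaller sum, move the right pointer left
  }
  return [-1, -1];         // no pair found
}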

Intervals

In interval problems, deciding whether to sort by start time or end time depends on the problem’s goal. Here’s a guide to help determine when each approach is more useful:

1. Sort by End Time

Sorting by end time is ideal when you want to maximize the number of non-overlapping intervals or minimize the number of overlaps. This is because:

  • Choosing intervals with the earliest end times first leaves the most space available for future intervals.
  • Typical Problems: “Erase Overlap Intervals” (where you need to remove intervals to avoid overlap) or “Maximum Number of Non-Overlapping Intervals” (see the sketch below).
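
A minimal sketch of the end-time-sorting greedy in the spirit of “Erase Overlap Intervals” (the input is assumed to be an array of [start, end] pairs):
// Count how many intervals must be removed so the rest do not overlap
function eraseOverlapIntervals(intervals) {
  intervals.sort((a, b) => a[1] - b[1]); // sort by end time
  let removed = 0;
  let prevEnd = -Infinity;
  for (const [start, end] of intervals) {
    if (start >= prevEnd) {
      prevEnd = end; // keep this interval; it ends earliest among the remaining choices
    } else {
      removed++;     // overlaps with the last kept interval, so drop it
    }
  }
  return removed;
}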

2. Sort by Start Time

Sorting by start time is useful when you need to merge overlapping intervals or find a range that contains certain points. This is because:

  • Sorting by start times helps you easily check if the current interval overlaps with the next interval in sequence.
  • Typical Problems: “Merge Intervals” or “Insert Interval” (see the sketch below).
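
A minimal sketch of the start-time-sorting approach in the spirit of “Merge Intervals” (again assuming an array of [start, end] pairs):
// Merge all overlapping intervals and return the merged list
function mergeIntervals(intervals) {
  intervals.sort((a, b) => a[0] - b[0]); // sort by start time
  const merged = [];
  for (const [start, end] of intervals) {
    const last = merged[merged.length - 1];
    if (merged.length && start <= last[1]) {
      last[1] = Math.max(last[1], end); // overlaps with the previous interval: extend it
    } else {
      merged.push([start, end]);        // no overlap: start a new interval
    }
  }
  return merged;
}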

Quick Summary

  • End Time Sorting: Use when aiming to select the maximum number of non-overlapping intervals or minimize overlaps.
  • Start Time Sorting: Use when trying to merge overlapping intervals or check consecutive intervals for overlaps.

In general, consider the problem’s requirements—whether it’s about maximizing non-overlapping intervals or merging intervals—and this will guide your choice.

Greedy

Kadane’s Algorithm

  • For problems such as maximum subarray
  • Keep a variable for maxSum and another for curSum
  • The core idea is that we keep building the window until the window sum becomes negative, at which point we reset curSum to 0
  • If the sum becomes negative, any large positive numbers that come later would only be spent offsetting that negative prefix, so it is better to drop it
  • Another way to think about this is that for each element, decide whether it’s better to start a new subarray with just this element (num) or to extend the existing subarray (curSum + num); see the sketch below
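
A minimal sketch of Kadane’s algorithm for the maximum subarray sum:
function maxSubArray(nums) {
  let maxSum = nums[0];
  let curSum = 0;
  for (const num of nums) {
    if (curSum < 0) curSum = 0;        // a negative running sum only drags future sums down
    curSum += num;
    maxSum = Math.max(maxSum, curSum); // record the best window seen so far
  }
  return maxSum;
}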

Dynamic Programming

Two Approaches

  • Top-down approach (Memoization):

    • Solve the problem recursively, storing solutions to subproblems in a memoization table
    • Recursive function calls with memoization to avoid redundant calculations
  • Bottom-up approach (Tabulation):

    • Solve subproblems iteratively in a specific order
    • Fill the DP table or array in a systematic manner
    • No recursive function calls; iteration is used to fill the table (both approaches are sketched below on Fibonacci)
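
A minimal sketch contrasting the two approaches on Fibonacci numbers (function names are illustrative):
// Top-down: recursion with a memoization table
function fibMemo(n, memo = {}) {
  if (n <= 1) return n;          // base cases
  if (n in memo) return memo[n]; // reuse a cached subproblem
  memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  return memo[n];
}

// Bottom-up: fill a DP array iteratively, no recursion
function fibTab(n) {
  if (n <= 1) return n;
  const dp = [0, 1];
  for (let i = 2; i <= n; i++) {
    dp[i] = dp[i - 1] + dp[i - 2];
  }
  return dp[n];
}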

Patterns

Knapsack

  • Two dimensional dynamic programming, so the subproblem requires keeping track of two variables
  • For 0/1 knapsack, each item can be included only once
  • For unbounded knapsack, each item can be included multiple times
  • The idea is to compute two results for each item: one that includes the current element and one that skips it
  • The result is the maximum of the two (see the sketch below)
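
A minimal 0/1 knapsack sketch using the include/skip idea with memoization (the parameter names weights, values, and cap are assumptions, not from a specific problem statement):
// Maximum total value using each item at most once, within capacity cap
function knapsack01(weights, values, cap) {
  const memo = new Map();
  function dfs(i, remaining) {
    if (i === weights.length || remaining === 0) return 0; // no items or no capacity left
    const key = i + ',' + remaining;
    if (memo.has(key)) return memo.get(key);
    let best = dfs(i + 1, remaining);                      // skip item i
    if (weights[i] <= remaining) {                         // include item i (only once)
      best = Math.max(best, values[i] + dfs(i + 1, remaining - weights[i]));
    }
    memo.set(key, best);
    return best;
  }
  return dfs(0, cap);
}

For the unbounded variant, recurse with dfs(i, remaining - weights[i]) instead, so the same item can be reused.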

Longest Common Subsequence

  • Two dimensional dynamic programming, so the subproblem requires two indices, one for each string (i1 and i2)

  • For the recursive DFS solution,

    • If out of bounds (base case), return 0
    • For memoization, if the value is in cache, return cache[i1][i2]
    • If characters at both indices are equal, return 1 + dfs(i1+1, i2+1)
    • Else, return max(dfs(i1+1, i2), dfs(i1, i2+1))
  • The bottom-up dynamic programming approach mirrors the memoized recursion (sketched below)

  • Initialize a cache with an additional row and column, setting them all to 0

  • Iterate over the strings using two for loops

    • If the characters are equal, cache[i+1][j+1] = 1 + cache[i][j]
    • If not equal, cache[i+1][j+1] = max(cache[i][j+1], cache[i+1][j])
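
A minimal sketch of the bottom-up table fill described above (using the common text1/text2 naming):
function longestCommonSubsequence(text1, text2) {
  // the cache has one extra row and column of zeros for the base case
  const cache = Array.from({ length: text1.length + 1 }, () =>
    new Array(text2.length + 1).fill(0)
  );
  for (let i = 0; i < text1.length; i++) {
    for (let j = 0; j < text2.length; j++) {
      if (text1[i] === text2[j]) {
        cache[i + 1][j + 1] = 1 + cache[i][j];                            // characters match
      } else {
        cache[i + 1][j + 1] = Math.max(cache[i][j + 1], cache[i + 1][j]); // skip one character
      }
    }
  }
  return cache[text1.length][text2.length];
}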

Palindromes

  • Loop through each character
  • In each iteration, expand outward from two centers:
    • For the odd-length case, l, r = i, i
    • For the even-length case, l, r = i, i + 1
  • Check for a palindrome by matching the characters at l and r, expanding outward while they match (see the sketch below)
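
A minimal sketch of this expand-around-center pattern, here counting palindromic substrings (the same two expansions also drive the longest palindromic substring problem):
function countPalindromicSubstrings(s) {
  let count = 0;
  const expand = (l, r) => {
    // grow the window outward while the ends still match
    while (l >= 0 && r < s.length && s[l] === s[r]) {
      count++;
      l--;
      r++;
    }
  };
  for (let i = 0; i < s.length; i++) {
    expand(i, i);     // odd-length palindromes centered at i
    expand(i, i + 1); // even-length palindromes centered between i and i + 1
  }
  return count;
}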

Graphs

Matrix traversal

Create a directions array to make traversal easier
const directions = [
  [0, 1],   // right
  [0, -1],  // left
  [1, 0],   // down
  [-1, 0],  // up
];

for (const [dx, dy] of directions) {
  const newX = x + dx;
  const newY = y + dy;
  // only visit neighbors that stay inside the grid
  if (newX >= 0 && newX < rows && newY >= 0 && newY < cols) {
    // do something
  }
}

Matrix DFS

  • Go down a path up to the edge before exploring other paths
  • Useful for counting number of paths
  • DFS can be done either recursively or through iteration
  • For the recursive solution (sketched below), when you are at a cell,
    • add it to visited set
    • call dfs on all valid neighbors
    • remove it from visited set
  • For the iterative solution, first push the cell onto a stack (LIFO), and while the stack is not empty,
    • pop cell from the stack
    • mark cell as visited
    • push all valid neighbors of the cell onto the stack
    • repeat the loop
  • Time complexity: O(4^(n*m)), since a path can visit up to n*m cells with up to 4 choices at each step
  • Memory complexity: O(n*m)
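
A minimal sketch of the recursive path-counting DFS described above (the convention that a cell value of 1 means blocked is an assumption for illustration):
// Count unique paths from the top-left to the bottom-right cell
// without visiting any cell twice
function countPaths(grid) {
  const rows = grid.length, cols = grid[0].length;
  const visited = new Set();
  function dfs(r, c) {
    if (r < 0 || c < 0 || r >= rows || c >= cols ||
        visited.has(r + ',' + c) || grid[r][c] === 1) {
      return 0;                                     // out of bounds, already on the path, or blocked
    }
    if (r === rows - 1 && c === cols - 1) return 1; // reached the target cell
    visited.add(r + ',' + c);                       // add to the visited set
    const count =
      dfs(r + 1, c) + dfs(r - 1, c) + dfs(r, c + 1) + dfs(r, c - 1); // all four neighbors
    visited.delete(r + ',' + c);                    // remove on the way back (backtrack)
    return count;
  }
  return dfs(0, 0);
}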

Matrix BFS

  • Explore one layer entirely before moving on to the next
  • Useful for finding the shortest path (see the sketch below)
  • Push starting cell to a double ended queue and mark it as visited
  • While the queue is not empty,
    • loop n times, where n = len(queue)
      • pop cell from the left of the deque
      • mark all of its valid neighbors as visited
      • and push them onto the queue
      • repeat the loop
  • Time complexity: O(m*n)
  • Memory complexity: O(m*n)
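
A minimal sketch of matrix BFS for the shortest path length (number of moves) from the top-left to the bottom-right cell. JavaScript has no built-in deque, so this version processes the queue one full layer at a time with a second array (the blocked-cell convention is again an assumption):
function shortestPath(grid) {
  const rows = grid.length, cols = grid[0].length;
  if (grid[0][0] === 1) return -1;                 // start cell is blocked
  const directions = [[0, 1], [0, -1], [1, 0], [-1, 0]];
  const visited = new Set(['0,0']);
  let queue = [[0, 0]];
  let length = 0;
  while (queue.length) {
    const next = [];
    for (const [r, c] of queue) {                  // process the current layer
      if (r === rows - 1 && c === cols - 1) return length;
      for (const [dr, dc] of directions) {
        const nr = r + dr, nc = c + dc;
        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols &&
            !visited.has(nr + ',' + nc) && grid[nr][nc] !== 1) {
          visited.add(nr + ',' + nc);              // mark as visited when enqueued
          next.push([nr, nc]);
        }
      }
    }
    queue = next;                                  // move to the next layer
    length++;
  }
  return -1;                                       // target not reachable
}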

Adjacency List

  • Build an adjacency list using a hash map and then run either BFS or DFS
  • DFS time complexity: O(N^V), where V is the number of vertices and N is the average number of edges per vertex
  • BFS time complexity: O(V+E), where V is the number of vertices and E is the number of edges
  • BFS memory complexity: O(V)
  • You might need a set to detect cycles
    • Base case:
      • If the node is already in the cycle set (the current DFS path), return false
    • Add the node to the cycle set before the recursive dfs call
    • Remove it from the cycle set after the recursive dfs call
    • Then add it to a completed set, if necessary, so finished nodes are not re-explored (see the sketch below)
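
A minimal sketch of DFS cycle detection over an adjacency list, course-schedule style for a directed graph (the numNodes/edges input shape is an assumption):
// Returns true if the directed graph contains no cycle
function hasNoCycle(numNodes, edges) {
  const adj = new Map();
  for (let i = 0; i < numNodes; i++) adj.set(i, []);
  for (const [u, v] of edges) adj.get(u).push(v); // directed edge u -> v

  const cycleSet = new Set();  // nodes on the current DFS path
  const completed = new Set(); // nodes fully explored and known to be safe

  function dfs(node) {
    if (cycleSet.has(node)) return false;  // looped back onto the current path: cycle
    if (completed.has(node)) return true;  // already verified, skip
    cycleSet.add(node);
    for (const nei of adj.get(node)) {
      if (!dfs(nei)) return false;
    }
    cycleSet.delete(node);
    completed.add(node);
    return true;
  }

  for (let i = 0; i < numNodes; i++) {
    if (!dfs(i)) return false;
  }
  return true;
}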