146. LRU Cache - Explanation

Description

Implement the Least Recently Used (LRU) cache class LRUCache. The class should support the following operations:

  • LRUCache(int capacity) Initialize the LRU cache of size capacity.
  • int get(int key) Return the value corresponding to the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the introduction of the new pair causes the cache to exceed its capacity, remove the least recently used key.

A key is considered used if a get or a put operation is called on it.

Ensure that get and put each run in O(1) average time complexity.

Example 1:

Input:
["LRUCache", [2], "put", [1, 10],  "get", [1], "put", [2, 20], "put", [3, 30], "get", [2], "get", [1]]

Output:
[null, null, 10, null, null, 20, -1]

Explanation:
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 10);  // cache: {1=10}
lRUCache.get(1);      // return 10
lRUCache.put(2, 20);  // cache: {1=10, 2=20}
lRUCache.put(3, 30);  // cache: {2=20, 3=30}, key=1 was evicted
lRUCache.get(2);      // return 20
lRUCache.get(1);      // return -1 (not found)

Constraints:

  • 1 <= capacity <= 100
  • 0 <= key <= 1000
  • 0 <= value <= 1000


Recommended Time & Space Complexity

You should aim for a solution with O(1) time for each put() and get() function call and an overall space of O(n), where n is the capacity of the LRU cache.


Hint 1

Can you think of a data structure for storing data in key-value pairs? Maybe a hash-based data structure with unique keys.


Hint 2

We can use a hash map which takes O(1) time to get and put the values. But, how can you deal with the least recently used to be removed criteria as a key is updated by the put() or recently used by the get() functions? Can you think of a data structure to store the order of values?


Hint 3

A brute-force solution would involve maintaining the order of key-value pairs in an array list, performing operations by iterating through the list to erase and insert these key-value pairs. However, this would result in an O(n) time complexity. Can you think of a data structure that allows removing and reinserting elements in O(1) time?


Hint 4

We can use a doubly-linked list, which allows us to remove a node from the list when we have the address of that node. Can you think of a way to store these addresses so that we can efficiently remove or update a key when needed?


Hint 5

We can use a doubly linked list where key-value pairs are stored as nodes, with the least recently used (LRU) node at the head and the most recently used (MRU) node at the tail. Whenever a key is accessed using get() or put(), we remove the corresponding node and reinsert it at the tail. When the cache reaches its capacity, we remove the LRU node from the head of the list. Additionally, we use a hash map to store each key and the corresponding address of its node, enabling efficient operations in O(1) time.


Prerequisites

Before attempting this problem, you should be comfortable with:

  • Doubly Linked Lists - Implementing nodes with prev/next pointers for O(1) insertion and removal from both ends
  • Hash Maps - Using dictionaries for O(1) key-to-node lookup to enable fast cache access
  • Data Structure Design - Combining multiple data structures (hash map + linked list) to achieve desired time complexity for all operations

1. Brute Force

Intuition

We store all (key, value) pairs in a list.
To follow LRU (Least Recently Used) behavior:

  • Whenever we access a key, we move it to the end of the list (most recently used).
  • When inserting a new key:
    • If the key already exists → update its value and move it to the end.
    • If the cache is full → remove the first element (least recently used).
    • Then add the new key at the end.

Algorithm

  1. Initialization

    • Store the cache as a list.
    • Store the capacity.
  2. get(key)

    • Loop through the list to find the key.
    • If found:
      • Remove it from its current position.
      • Append it to the end (mark as recently used).
      • Return its value.
    • If not found, return -1.
  3. put(key, value)

    • Look for the key in the list.
      • If found:
        • Remove it.
        • Update the value.
        • Append it to the end.
        • Return.
    • If not found:
      • If the cache is full, remove the first element.
      • Append [key, value] to the end.
class LRUCache:

    def __init__(self, capacity: int):
        self.cache = []  # list of [key, value] pairs, LRU at the front
        self.capacity = capacity

    def get(self, key: int) -> int:
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                # Move the pair to the end to mark it most recently used
                tmp = self.cache.pop(i)
                self.cache.append(tmp)
                return tmp[1]
        return -1

    def put(self, key: int, value: int) -> None:
        for i in range(len(self.cache)):
            if self.cache[i][0] == key:
                # Key exists: update its value and mark it most recently used
                tmp = self.cache.pop(i)
                tmp[1] = value
                self.cache.append(tmp)
                return

        # Evict the least recently used pair (front of the list) if full
        if self.capacity == len(self.cache):
            self.cache.pop(0)

        self.cache.append([key, value])

Time & Space Complexity

  • Time complexity: O(n) for each put() and get() operation.
  • Space complexity: O(n)

2. Doubly Linked List

Intuition

We want all operations to be O(1) while still following LRU (Least Recently Used) rules.

To do that, we combine:

  1. Hash Map -> quickly find a node by its key in O(1).
  2. Doubly Linked List -> quickly move nodes to the most recently used position and remove the least recently used node from the other end in O(1).

We keep:

  • The most recently used node near the right side.
  • The least recently used node near the left side.

Whenever we:

  • Get a key: move that node to the right (most recently used).
  • Put a key:
    • If it exists: update value and move it to the right.
    • If it's new:
      • If at capacity: remove the leftmost real node (LRU).
      • Insert the new node at the right.

Dummy left and right nodes make insert/remove logic cleaner.

Algorithm

  1. Data Structures

    • A hash map cache that maps key -> node.
    • A doubly linked list with:
      • left dummy: before the least recently used node.
      • right dummy: after the most recently used node.
  2. Helper: remove(node)

    • Unlink node from the list by connecting its prev and next nodes.
  3. Helper: insert(node)

    • Insert node just before right (mark as most recently used).
  4. get(key)

    • If key not in cache, return -1.
    • Otherwise:
      • Remove its node from the list.
      • Insert it again near right (mark as recently used).
      • Return the node's value.
  5. put(key, value)

    • If key already exists:
      • Remove its old node from the list.
    • Create or update the node and store it in cache[key].
    • Insert the node near right.
    • If len(cache) > capacity:
      • Take the node right after left (this is LRU).
      • Remove it from the list.
      • Delete its key from the hash map.

This way, both get and put run in O(1) time, and the LRU policy is always maintained.

class Node:
    def __init__(self, key, val):
        # Store the key in the node so eviction can delete the map entry
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = {}  # map key to node

        # Dummy sentinels: left sits before the LRU node, right after the MRU node
        self.left, self.right = Node(0, 0), Node(0, 0)
        self.left.next, self.right.prev = self.right, self.left

    def remove(self, node):
        # Unlink node by bridging its neighbors
        prev, nxt = node.prev, node.next
        prev.next, nxt.prev = nxt, prev

    def insert(self, node):
        # Splice node in just before the right dummy (MRU position)
        prev, nxt = self.right.prev, self.right
        prev.next = nxt.prev = node
        node.next, node.prev = nxt, prev

    def get(self, key: int) -> int:
        if key in self.cache:
            # Move the node to the MRU position before returning
            self.remove(self.cache[key])
            self.insert(self.cache[key])
            return self.cache[key].val
        return -1

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.remove(self.cache[key])
        self.cache[key] = Node(key, value)
        self.insert(self.cache[key])

        if len(self.cache) > self.cap:
            # Evict the LRU node, found just after the left dummy
            lru = self.left.next
            self.remove(lru)
            del self.cache[lru.key]

Time & Space Complexity

  • Time complexity: O(1) for each put() and get() operation.
  • Space complexity: O(n)

3. Built-In Data Structure

Intuition

Many languages provide a built-in ordered map / dictionary that:

  • Stores key–value pairs
  • Keeps track of the order in which keys were inserted or recently updated

This is perfect for an LRU cache:

  • When we access a key (get or put), we mark it as most recently used by moving it to the "end" of the order.
  • When the cache exceeds capacity, we remove the key that is at the front of the order (the least recently used).

So the ordered map itself handles:

  • Fast lookups (like a normal hash map)
  • Fast updates of usage order (moving keys to the end)
  • Fast removal of the least recently used key (from the front)

This gives a clean and concise LRU implementation using library support.

Algorithm

  1. Initialization

    • Create an ordered map cache.
    • Store the maximum capacity cap.
  2. get(key)

    • If key is not in cache, return -1.
    • Otherwise:
      • Move key to the "most recent" position in the ordered map.
      • Return the associated value.
  3. put(key, value)

    • If key is already in cache:
      • Update its value.
      • Move key to the "most recent" position.
    • If key is not in cache:
      • Insert (key, value) into cache at the "most recent" position.
    • If the size of cache is now greater than cap:
      • Remove the least recently used key (the one at the "oldest" position in the ordered map).

This uses the built-in ordered map to achieve LRU behavior with O(1) average time for both get and put.

from collections import OrderedDict

class LRUCache:

    def __init__(self, capacity: int):
        self.cache = OrderedDict()  # keys ordered from LRU (front) to MRU (back)
        self.cap = capacity

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value

        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)

Time & Space Complexity

  • Time complexity: O(1) for each put() and get() operation.
  • Space complexity: O(n)

Common Pitfalls

Forgetting to Update on Get Operations

A critical LRU requirement is that get() operations also update recency. Many implementations correctly update order on put() but forget that accessing a key via get() should also move it to the most recently used position. This breaks the LRU invariant and causes wrong evictions.
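A quick sketch of why this matters, using the same OrderedDict approach as the built-in solution above. Touching key 1 with get() changes which key gets evicted next:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal cache that correctly refreshes recency on get()."""
    def __init__(self, capacity: int):
        self.cache = OrderedDict()
        self.cap = capacity

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        # Without this line, get() would not count as a "use"
        # and key 1 below would be wrongly evicted instead of key 2.
        self.cache.move_to_end(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)

cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
cache.get(1)              # touching key 1 makes key 2 the LRU entry
cache.put(3, 30)          # evicts key 2, not key 1
print(cache.get(2))       # -1
print(cache.get(1))       # 10
```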

Incorrect Doubly Linked List Pointer Updates

When implementing the doubly linked list approach, pointer manipulation errors are common. When removing a node, you must update both prev.next and next.prev. When inserting, you must update four pointers: the new node's prev and next, plus the adjacent nodes' pointers. Missing any of these updates corrupts the list structure.
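The pointer discipline can be sketched in isolation (the Node/left/right layout mirrors the doubly linked list solution above): removal touches two pointers, insertion touches four.

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

# Dummy sentinels; real nodes live between left and right
left, right = Node(0, 0), Node(0, 0)
left.next, right.prev = right, left

def remove(node):
    # Two pointer updates: bridge the neighbors over the node
    node.prev.next = node.next
    node.next.prev = node.prev

def insert(node):
    # Four pointer updates: splice node in just before the right dummy
    prev, nxt = right.prev, right
    prev.next = node
    nxt.prev = node
    node.prev, node.next = prev, nxt

a, b = Node(1, 10), Node(2, 20)
insert(a)
insert(b)                # list: left <-> a <-> b <-> right
remove(a)                # list: left <-> b <-> right
print(left.next.key)     # 2
```

Dropping any one of the four insert updates (or either remove update) leaves a dangling pointer that only surfaces on a later operation, which is why these bugs are hard to trace.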

Not Storing Keys in List Nodes

When evicting the least recently used item, you need to remove it from both the linked list and the hash map. If your list nodes only store values (not keys), you cannot efficiently find and remove the corresponding hash map entry. Always store the key in each list node to enable O(1) eviction.
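A minimal sketch of this point, reusing the Node layout from the doubly linked list solution: the LRU node is reached through the list, and the hash-map entry can only be deleted because the node remembers its own key.

```python
class Node:
    def __init__(self, key, val):
        self.key, self.val = key, val   # key stored alongside the value
        self.prev = self.next = None

cache = {}                              # key -> node
left, right = Node(0, 0), Node(0, 0)    # dummy sentinels
left.next, right.prev = right, left

def insert(node):
    # Splice node in just before the right dummy (MRU position)
    prev, nxt = right.prev, right
    prev.next = nxt.prev = node
    node.prev, node.next = prev, nxt

for k, v in [(1, 10), (2, 20)]:
    cache[k] = Node(k, v)
    insert(cache[k])

# Eviction: the LRU node sits right after the left dummy
lru = left.next
lru.prev.next, lru.next.prev = lru.next, lru.prev   # unlink from list
del cache[lru.key]      # possible only because the node stores its key
print(sorted(cache))    # [2]
```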