Coding interviews test your ability to choose the right data structure for the problem. A wrong choice can turn an O(n log n) solution into O(n²). This cheat sheet gives you the complexity guarantees and common patterns for every major structure, with code examples you can run right now.
Two pointers — O(n) for sorted arrays or problems with pair/triplet searches.
```python
def two_sum_sorted(nums: list[int], target: int) -> tuple[int, int]:
    """Assumes nums is sorted."""
    left, right = 0, len(nums) - 1
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return left, right
        elif s < target:
            left += 1
        else:
            right -= 1
    return -1, -1
```
Binary search — O(log n). Any time a sorted array appears, think binary search.
```python
import bisect

nums = [1, 3, 5, 7, 9, 11]
idx = bisect.bisect_left(nums, 7)  # 3 — insertion point for 7
```
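When the interviewer wants the search written by hand rather than via `bisect`, a minimal sketch looks like this (function name is illustrative):

```python
def binary_search(nums: list[int], target: int) -> int:
    """Return the index of target in sorted nums, or -1 if absent."""
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = (left + right) // 2  # Python ints don't overflow
        if nums[mid] == target:
            return mid
        elif nums[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1
```

The `left <= right` condition and the `mid ± 1` updates are where most off-by-one bugs hide; walk through a two-element array to convince yourself.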
Hash Maps
The most useful data structure for interview problems. Hash maps give you O(1) average-case lookups and are the answer to almost every "find duplicates", "count frequency", or "group items" question.
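As a sketch of the "count frequency" and "group items" patterns (example data is my own):

```python
from collections import Counter, defaultdict

# Count frequency — O(n)
counts = Counter("mississippi")  # counts["s"] == 4

# Group anagrams — the sorted word serves as the hash key
def group_anagrams(words: list[str]) -> list[list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for w in words:
        groups["".join(sorted(w))].append(w)
    return list(groups.values())
```

`defaultdict(list)` saves the "if key not in dict" dance; `Counter` also gives you `.most_common(k)` for free.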
Stack (LIFO) — think matching brackets, undo operations, and monotonic-stack problems such as "next greater element".

```python
# Monotonic decreasing stack — find the next greater element
def next_greater(nums: list[int]) -> list[int]:
    result = [-1] * len(nums)
    stack = []  # stores indices
    for i, n in enumerate(nums):
        while stack and nums[stack[-1]] < n:
            result[stack.pop()] = n
        stack.append(i)
    return result

print(next_greater([4, 1, 2, 3]))  # [-1, 2, 3, -1]
```
Queue (FIFO) — think BFS, level-order traversal.
```python
from collections import deque

def bfs(graph: dict, start: str) -> list[str]:
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order
```
Deque — double-ended queue; O(1) append/pop from both ends. Use for sliding window maximum.
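A sketch of that sliding window maximum with a monotonic deque — O(n) overall, since each index is pushed and popped at most once (function name is illustrative):

```python
from collections import deque

def sliding_window_max(nums: list[int], k: int) -> list[int]:
    dq = deque()  # indices; their values decrease from front to back
    result = []
    for i, n in enumerate(nums):
        while dq and nums[dq[-1]] <= n:  # drop smaller values from the back
            dq.pop()
        dq.append(i)
        if dq[0] == i - k:  # front index slid out of the window
            dq.popleft()
        if i >= k - 1:
            result.append(nums[dq[0]])  # front holds the window maximum
    return result
```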
Linked Lists
Core patterns: fast/slow pointers (cycle detection, finding the middle), reversing a list, and merging two sorted lists.
```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val, self.next = val, next

# Detect cycle — Floyd's algorithm
def has_cycle(head: ListNode | None) -> bool:
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False

# Reverse a linked list — O(n) time, O(1) space
def reverse_list(head: ListNode | None) -> ListNode | None:
    prev = None
    curr = head
    while curr:
        nxt = curr.next
        curr.next = prev
        prev = curr
        curr = nxt
    return prev
```
Trees
Binary trees are the most common interview structure. Know these traversals:
```python
from collections import deque
from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

# Inorder (left → root → right) — gives sorted output for BSTs
def inorder(root: Optional[TreeNode]) -> list[int]:
    if not root:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)

# Maximum depth
def max_depth(root: Optional[TreeNode]) -> int:
    if not root:
        return 0
    return 1 + max(max_depth(root.left), max_depth(root.right))

# Level-order (BFS)
def level_order(root: Optional[TreeNode]) -> list[list[int]]:
    if not root:
        return []
    result, queue = [], deque([root])
    while queue:
        level = []
        for _ in range(len(queue)):
            node = queue.popleft()
            level.append(node.val)
            if node.left:
                queue.append(node.left)
            if node.right:
                queue.append(node.right)
        result.append(level)
    return result
```
BST property: left subtree < root < right subtree. Inorder traversal gives sorted output.
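One standard way to exploit that property is range-based validation: each node must fall inside a `(low, high)` interval that narrows as you descend. A sketch (node class repeated so it stands alone):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

# Validate a BST by tightening the allowed (low, high) range at each step
def is_valid_bst(root, low=float("-inf"), high=float("inf")) -> bool:
    if not root:
        return True
    if not (low < root.val < high):
        return False
    return (is_valid_bst(root.left, low, root.val)
            and is_valid_bst(root.right, root.val, high))
```

Comparing only parent against child is the classic wrong answer — a grandchild can violate the property while every parent-child pair looks fine.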
Heaps (Priority Queues)
A heap gives you O(1) access to the minimum (min-heap) or maximum (max-heap) and O(log n) insert/delete.
```python
import heapq

# Min-heap by default in Python
nums = [3, 1, 4, 1, 5, 9]
heapq.heapify(nums)  # O(n) in-place
smallest = heapq.heappop(nums)  # 1

# Max-heap: negate values
nums = [3, 1, 4, 1, 5, 9]
max_heap = [-x for x in nums]
heapq.heapify(max_heap)
largest = -heapq.heappop(max_heap)  # 9

# K largest elements — O(n log k)
def k_largest(nums: list[int], k: int) -> list[int]:
    return heapq.nlargest(k, nums)
```
Interview pattern: "find the kth largest/smallest", "merge k sorted lists", "top k frequent elements" → heap.
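For "kth largest", a common sketch keeps a min-heap of size k: anything smaller than the root can never be in the top k, so the root is the answer — O(n log k) time, O(k) space (function name is illustrative):

```python
import heapq

def kth_largest(nums: list[int], k: int) -> int:
    heap = nums[:k]
    heapq.heapify(heap)  # min-heap of the k largest values seen so far
    for n in nums[k:]:
        if n > heap[0]:
            heapq.heapreplace(heap, n)  # pop the smallest, push n
    return heap[0]  # root = kth largest overall
```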
Graphs
Two representations: adjacency list (sparse graphs, most interviews) and adjacency matrix (dense graphs, O(1) edge lookup).
```python
# Adjacency list
graph = {
    "A": ["B", "C"],
    "B": ["A", "D", "E"],
    "C": ["A", "F"],
    "D": [],
    "E": ["D"],
    "F": [],
}

# DFS — recursive
def dfs(graph: dict, node: str, visited: set) -> list[str]:
    visited.add(node)
    result = [node]
    for neighbour in graph[node]:
        if neighbour not in visited:
            result.extend(dfs(graph, neighbour, visited))
    return result

# Detect cycle in directed graph using DFS + recursion stack
def has_cycle_directed(graph: dict) -> bool:
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph[u]:
            if color[v] == GRAY:
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(dfs(node) for node in graph if color[node] == WHITE)
```
```go
// BFS in Go
func bfs(graph map[string][]string, start string) []string {
	visited := map[string]bool{start: true}
	queue := []string{start}
	var order []string
	for len(queue) > 0 {
		node := queue[0]
		queue = queue[1:]
		order = append(order, node)
		for _, neighbour := range graph[node] {
			if !visited[neighbour] {
				visited[neighbour] = true
				queue = append(queue, neighbour)
			}
		}
	}
	return order
}
```
Tries (Prefix Trees)
Tries are ideal for autocomplete, word search, and prefix-matching problems. Insert and search are O(k) where k is the word length.
```python
class TrieNode:
    def __init__(self):
        self.children: dict[str, "TrieNode"] = {}
        self.is_end = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word: str) -> None:
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def search(self, word: str) -> bool:
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_end

    def starts_with(self, prefix: str) -> bool:
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True
```
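For the autocomplete use case specifically, a compact nested-dict variant is often enough — walk to the prefix node, then DFS to collect every stored word below it (names and sentinel are my own choices, not a standard API):

```python
END = "$"  # sentinel key marking the end of a stored word

def build_trie(words: list[str]) -> dict:
    root: dict = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[END] = True
    return root

def autocomplete(root: dict, prefix: str) -> list[str]:
    node = root
    for ch in prefix:  # walk down to the prefix node — O(len(prefix))
        if ch not in node:
            return []
        node = node[ch]
    results: list[str] = []

    def collect(n: dict, path: str) -> None:
        if END in n:
            results.append(path)
        for ch, child in n.items():
            if ch != END:
                collect(child, path + ch)

    collect(node, prefix)
    return results
```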
Dynamic Programming: When to Use It
DP is not a data structure, but it is the most feared topic in coding interviews. The key questions:
Does the problem ask for an optimal value (min/max/count)?
Does it have overlapping subproblems (same sub-computations repeated)?
Does it have optimal substructure (optimal solution uses optimal solutions to subproblems)?
If yes to all three → reach for DP.
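Overlapping subproblems in miniature: naive Fibonacci recomputes the same values exponentially often, and caching collapses it to O(n). A sketch using the standard library's memoiser:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # memoise: each fib(n) is computed exactly once
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the decorator, `fib(50)` is infeasible; with it, it returns instantly — that gap is what "overlapping subproblems" buys you.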
```python
# Classic: 0/1 Knapsack
def knapsack(weights: list[int], values: list[int], capacity: int) -> int:
    n = len(weights)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([2, 3, 4, 5], [3, 4, 5, 6], 8))  # 10
```
Quick Interview Tips
Step 1 — Clarify the problem. Ask about edge cases, input size, and expected output format.
Step 2 — State the brute-force solution and its complexity. Confirm with the interviewer before optimising.
Step 3 — Identify the bottleneck. Is it repeated lookups? → hash map. Is it tracking min/max? → heap. Is it overlapping subproblems? → DP.
Step 4 — Write clean code. Name variables clearly. Add a brief comment for non-obvious logic.
Step 5 — Test with examples. Walk through your code with the provided test cases, then check edge cases (empty input, single element, all duplicates).
Practice These Patterns
The fastest way to internalise these patterns is to solve problems every day. On uByte: