
Embracing Coding Patterns: A Smarter Approach to Coding Interview Prep

by Matthew Guest, October 26th, 2023

Too Long; Didn't Read

Traditional interview preparation methods, often characterized by excessive problem-solving without a structured approach, have shortcomings such as not recognizing one's weaknesses, fostering memorization of specific problems, and causing stress. A more effective approach is to embrace coding patterns, like Two-Pointers, Sliding Window, Modified Binary Search, and Topological Sort. Understanding these patterns and their applications provides a versatile framework for solving coding problems, making interview preparation more efficient and sustainable. The article promises to detail the top 15 coding patterns and how to use them effectively.
Preparing for coding interviews can be a real challenge, with developers often spending several weeks reviewing and learning new material.


The truth is that most developers never quite feel fully prepared for their coding interviews. It’s not uncommon for developers to spend weeks solving problems on LeetCode and still feel unprepared. Clearly, there are issues with this approach.


Let's take a closer look at some problems associated with traditional coding interview prep.


The Pitfalls of Traditional Interview Prep

One of the most common pitfalls in traditional interview prep is the practice of "grinding." This approach involves solving as many LeetCode problems as possible without a clear plan or strategy. While this may seem like a productive strategy, it has several drawbacks.


When you solve problems without a structured approach, you might not recognize your weaknesses. There's no deliberate plan for your study; the goal is simply to solve as many problems as you can. As a result, you may feel like you've learned a lot, but there could be significant gaps in your knowledge.


Furthermore, this approach essentially revolves around memorizing solutions to a multitude of problems. Attempting to remember solutions to a hundred different coding problems proves ineffective when interviewers present questions that deviate even slightly from what you've encountered before, leaving you unprepared to handle problem variations.


My last issue with this strategy is that, in the long run, it creates more stress and headaches. If you have to repeat the same several-week-long cramming session every time you want to switch jobs, you'll spend those weeks relearning the same material and solving the same problems as before.


Given these challenges, there has to be a better way.


A Better Way: Embracing Coding Patterns

So, is there a more effective and sustainable approach to coding interview preparation? The answer lies in understanding and utilizing coding patterns.


When I prepare for coding interviews, I prioritize grasping the underlying patterns of problems. These patterns, which include techniques like Two-Pointers, Sliding Window, Modified Binary Search, Topological Sort, and many more, provide a versatile and powerful framework for tackling a wide range of coding problems. The emphasis shifts from memorizing individual solutions to mastering proven problem-solving techniques.


By focusing on coding patterns, you can significantly streamline your interview preparation, making it more efficient.


Remember, prep smarter, not harder.


What to Expect?

In this article, I’ve compiled the Top 15 Coding Patterns that you need to know to answer any coding question. I’ll break down what each pattern is and how it works, share key indicators to help you identify the pattern, and finally offer detailed graphics with template code to help you solidify your understanding.


While I do cover a lot in this article, if you’d like more in-depth training, explanations, and coding practice, you can check out our Comprehensive Coding Interview Prep Course at Techinterviews.io. We also cover crucial topics such as data structures, behavioral interviews, and even salary negotiation.


Now let’s dive into these coding patterns.


1. Linked List Reversal

Linked List Reversal involves changing the direction of pointers in a linked list to reverse the order of elements. It's a fundamental operation in data structures, and it often requires careful manipulation of node references.


This is useful when dealing with a linked list where the constraint is to reverse it in place. The process to reverse a linked list in place is as follows:


  1. Start by defining three variables: current, previous, and next. Set current as the head of the linked list, and initialize previous and next as None.

  2. While the current variable is not None, proceed as follows:

    1. Store the next node in a temporary variable.
    2. Reverse the current node's pointer to point to the previous node.
    3. Update previous to be the current node and current to be the next node.
  3. Finally, set the head of the reversed list to the last node reached, which is stored in the previous variable.
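
To make these steps concrete, here is a small, hypothetical usage sketch in Python. It assumes a minimal ListNode class (val and next fields) and applies the same loop as the template further below to a three-node list.

class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse(head):
    prev = None
    curr = head
    while curr:
        next_node = curr.next   # 1. store the next node
        curr.next = prev        # 2. reverse the current node's pointer
        prev = curr             # 3. advance previous ...
        curr = next_node        #    ... and current
    return prev                 # prev now points at the new head

# Build 1 -> 2 -> 3, reverse it, and print 3 2 1
head = ListNode(1, ListNode(2, ListNode(3)))
node = reverse(head)
while node:
    print(node.val)
    node = node.next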




Key Indicators:

  • You need to reverse the order of elements in a linked list.
  • Problems involve checking for palindromes in linked lists.
  • You want to reverse the order of elements in a specific sublist within the list.


Template Code:


Python

def reverse_linked_list(head): 
    curr = head 
    prev = None 
    while curr: 
        next_node = curr.next 
        curr.next = prev 
        prev = curr 
        curr = next_node  
         
    return prev


JS

function reverseLinkedList(head) { 
    let curr = head; 
    let prev = null; 
    while (curr) { 
        const nextNode = curr.next; 
        curr.next = prev; 
        prev = curr; 
        curr = nextNode; 
    } 
     
    return prev; 
}


Java

public ListNode reverseLinkedList(ListNode head) { 
	ListNode curr = head; 
	ListNode prev = null; 
	while (curr != null) { 
		ListNode nextNode = curr.next; 
		curr.next = prev; 
		prev = curr; 
		curr = nextNode; 
	} 
		
	return prev; 
}


2. Modified Binary Search

Modified Binary Search adapts the classic binary search algorithm to solve various problems. Variations include finding the first/last occurrence of an element or searching in rotated arrays. It requires careful handling of midpoints and conditions.


If you’re ever given a sorted array, linked list, or matrix, consider using a modified binary search.


Here's a breakdown of how to apply this pattern to a sorted data structure:


  1. Begin by identifying the midpoint between the start and end positions. To avoid potential integer overflow, it's safer to calculate the middle as follows: middle = start + (end - start) / 2.
  2. Check if the key matches the element at the middle index. If it does, return the middle index as the result.
  3. If the key doesn't match the element at the middle index, proceed to the next steps.
  4. Evaluate whether the key is less than the element at the index middle. If it is, narrow your search range by updating end to middle - 1.
  5. Conversely, if the key is greater than the element at the index middle, adjust your search range by updating start to middle + 1.
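
As a concrete sketch of these five steps, here is a plain Python binary search for a key in a sorted array; the template below generalizes the same loop with a problem-specific feasible() condition instead of an equality check.

def classic_binary_search(arr, key):
    start, end = 0, len(arr) - 1
    while start <= end:
        middle = start + (end - start) // 2  # avoids integer overflow in other languages
        if arr[middle] == key:
            return middle                    # key found at the middle index
        if key < arr[middle]:
            end = middle - 1                 # search the left half
        else:
            start = middle + 1               # search the right half
    return -1                                # key not present

print(classic_binary_search([1, 3, 5, 7, 9], 7))  # expected output: 3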





Key Indicators:

  • You're working with a sorted data structure.
  • You need to find specific elements, boundaries, or pivot points efficiently.
  • Problems involve searching in rotated sorted arrays.


Template Code:


Python

from typing import List

def binary_search(arr: List[int], target: int) -> int: 
    left, right = 0, len(arr) - 1 
    first_true_index = -1 
     
    # Perform binary search until left and right pointers meet 
    while left <= right: 
        mid = (left + right) // 2 
        if feasible(mid): 
            # If the condition is true at mid index, update first_true_index 
            first_true_index = mid 
            right = mid - 1 
        else: 
            # If the condition is false at mid index, narrow down the search space 
            left = mid + 1 
    return first_true_index


JS

function binarySearch(arr, target) { 
    let left = 0; 
    let right = arr.length - 1; 
    let firstTrueIndex = -1; 
    // Perform binary search until left and right pointers meet 
    while (left <= right) { 
        const mid = Math.floor((left + right) / 2); 
        if (feasible(mid)) { 
            // If the condition is true at mid index, update firstTrueIndex 
            firstTrueIndex = mid; 
            right = mid - 1; 
        } else { 
            // If the condition is false at mid index, narrow down the search space 
            left = mid + 1; 
        } 
    } 
    return firstTrueIndex; 
}


Java

public int binarySearch(int[] arr, int target) { 
    int left = 0; 
    int right = arr.length - 1; 
    int firstTrueIndex = -1; 
    // Perform binary search until left and right pointers meet 
    while (left <= right) { 
        int mid = left + (right - left) / 2; 
        if (feasible(mid)) { 
            // If the condition is true at mid index, update firstTrueIndex 
            firstTrueIndex = mid; 
            right = mid - 1; 
        } else { 
            // If the condition is false at mid index, narrow down the search space 
            left = mid + 1; 
        } 
    } 
    return firstTrueIndex; 
}



3. Two Pointers

The Two Pointers technique involves maintaining two pointers that traverse a data structure, typically arrays or linked lists, often used for problems involving pairs or subarrays. It optimizes for problems that require comparison between elements at different positions.


The advantage of this technique lies in its simplicity and efficiency, especially when dealing with linear data structures like arrays or strings where you might initially use just one pointer. By employing two pointers, you can circumvent redundant operations and significantly enhance runtime efficiency, potentially reducing it from O(n^2) to O(n).


The "Two Pointers" pattern encompasses several variations, each tailored to specific scenarios. These variations include same direction, opposite direction, and a unique method known as "fast and slow," often referred to as the "tortoise and hare" technique, which involves two pointers moving at different speeds through a data structure, particularly useful for detecting cycles.


If you employ multiple pointers to traverse a data structure, you can categorize your approach as following the "Two Pointers" pattern.
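
For instance, here is a minimal sketch of the opposite-direction variation applied to the classic "pair with a target sum in a sorted array" problem; the generic template follows further below, and the function name here is just illustrative.

def pair_with_target_sum(arr, target):
    left, right = 0, len(arr) - 1
    while left < right:
        current = arr[left] + arr[right]
        if current == target:
            return [left, right]      # found the pair
        if current < target:
            left += 1                 # need a larger sum, move the left pointer right
        else:
            right -= 1                # need a smaller sum, move the right pointer left
    return [-1, -1]                   # no pair adds up to the target

print(pair_with_target_sum([1, 2, 3, 4, 6], 6))  # expected output: [1, 3]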




Key Indicators:

  • You need to compare or operate on elements at different positions.
  • Problems require searching for pairs of elements in a sorted array.
  • You want to remove duplicates from a sorted array efficiently.
  • When dealing with linear data structures (arrays, strings, or linked lists) and facing a quadratic time complexity (O(n^2) brute force solution), consider employing Two Pointers.


Template Code:

Template for “opposite direction” variation


Python

def two_pointers_opposite(arr): 
    left = 0 
    right = len(arr) - 1 
    ans = 0 
    while left < right: 
        # Perform logic using the left and right pointers 
        if CONDITION: 
            left += 1 
        else: 
            right -= 1 
    return ans


JS

function twoPointersOpposite(arr) { 
    let left = 0; 
    let right = arr.length - 1; 
    let ans = 0; 
    while (left < right) { 
        // Perform logic using the left and right pointers 
        if (CONDITION) { 
            left++; 
        } else { 
            right--; 
        } 
    } 
    return ans; 
}


Java

public int twoPointersOpposite(int[] arr) { 
    int left = 0; 
    int right = arr.length - 1; 
    int ans = 0; 
    while (left < right) { 
        // Perform logic using the left and right pointers 
        if (CONDITION) { 
            left++; 
        } else { 
            right--; 
        } 
    } 
    return ans; 
}


4. Sliding Window

The Sliding Window technique involves maintaining a dynamic window over a linear data structure, such as arrays, strings, or linked lists. The window's size can vary depending on the specific implementation, and it can also be set as a fixed value. The primary objective of this window is to continuously monitor and capture data that satisfies specific criteria, making it particularly valuable for efficiently solving subarray or substring problems.


This pattern often utilizes a hash map to facilitate the tracking of individual data within the window. However, it's important to note that this approach can result in a space complexity of O(n).
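
As an illustration of a variable-size window backed by a hash map, here is a minimal sketch of the well-known "longest substring without repeating characters" problem; it follows the same expand-then-shrink structure as the templates below.

def longest_unique_substring(s):
    last_seen = {}        # hash map: character -> most recent index
    left = 0
    longest = 0
    for right, ch in enumerate(s):
        # If ch is already inside the window, shrink the window from the left
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        longest = max(longest, right - left + 1)  # window [left, right] is valid here
    return longest

print(longest_unique_substring("abcabcbb"))  # expected output: 3 ("abc")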



Key Indicators:

  • You need to analyze contiguous subarrays or substrings.
  • Problems involve tracking variable-length sequences within a larger dataset.
  • You want to find the maximum sum subarray of fixed length.
  • The problem input is a linear data structure such as an array, linked list, or string.


Template Code:


Python

def slidingWindow(nums): 
    # Initialize necessary variables 
    left = 0 
    window = []  # Initialize the window 
    ans = 0  # Initialize the answer variable 
    for right in range(len(nums)): 
        window.append(nums[right])  # Expand the window to the right 
        while invalid(window):  # Condition to shrink the window from the left until it is valid again 
            window.pop(0)  # Remove the left element from the window 
            left += 1 
        ans = max(ans, len(window))  # Update the answer, can vary on your implementation
    return ans


JS

function slidingWindow(nums) { 
    let left = 0; 
    const window = [];  // Initialize the window 
    let ans = 0;  // Initialize the answer variable 
    for (let right = 0; right < nums.length; right++) { 
        window.push(nums[right]);  // Expand the window to the right 
        while (invalid(window)) {  // Condition to shrink the window from the left until it is valid again 
            window.shift();  // Remove the left element from the window 
            left++; 
        } 
        ans = Math.max(ans, window.length);  // Update the answer , can vary on your implementation
    } 
    return ans; 
}


Java

public static int slidingWindow(int[] nums) { 
    int left = 0; 
    List<Integer> window = new ArrayList<>();  // Initialize the window 
    int ans = 0;  // Initialize the answer variable 
    for (int right = 0; right < nums.length; right++) { 
        window.add(nums[right]);  // Expand the window to the right 
        while (invalid(window)) {  // Condition to shrink the window from the left until it is valid again 
            window.remove(0);  // Remove the left element from the window 
            left++; 
        } 
        ans = Math.max(ans, window.size());  // Update the answer  , can vary on your implementation
    } 
    return ans; 
}


5. Top K Elements

This pattern involves finding the K largest or smallest elements in a collection, often implemented using data structures like heaps or priority queues. It's useful for selecting a subset of elements based on their value or frequency.


Anytime we are asked to find the most frequent, smallest, or top 'K' elements within a given dataset, we can consider using this pattern.


The way it works is simple:
  1. We insert elements into a min-heap or max-heap (which one depends on the implementation), keeping at most 'K' of them.
  2. Anytime we add to the heap, we adjust it (popping the top once its size exceeds 'K') so that the most frequent/smallest/top 'K' elements are maintained at all times.


The beauty of this approach lies in its efficiency; you don't need to resort to any sorting algorithms, as the heap itself naturally maintains the required order for you.
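
To ground this, here is a small sketch that keeps a min-heap of size K to find the K largest numbers; it is one concrete instantiation of the generic templates below, and the function name is only illustrative.

import heapq

def k_largest(nums, k):
    heap = []                      # min-heap holding the k largest elements seen so far
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)    # evict the smallest so only the top k remain
    return sorted(heap, reverse=True)

print(k_largest([3, 1, 5, 12, 2, 11], 3))  # expected output: [12, 11, 5]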




Key Indicators:

  • You need to identify the K largest or smallest elements in a collection.
  • Problems require ranking elements based on certain criteria.
  • You want to find the top K frequent elements in a dataset.


Template Code:


Python

import heapq 
def top_k_elements(arr, k): 
    heap = [] 
    for num in arr: 
        # Define your own criteria and logic to push elements onto the heap 
        # For example, if you want to find the top k largest elements, push (-num, num) onto the heap 
        heapq.heappush(heap, (-CRITERIA, num)) 
        if len(heap) > k: 
            heapq.heappop(heap) 
     
    return [num for _, num in heap]


JS

function topKElements(arr, k) { 
    const minHeap = []; 
    for (const num of arr) { 
        // Define your own criteria and logic to push elements onto the heap 
        // For example, to keep the top k largest elements, push num and evict the smallest 
        minHeap.push(CRITERIA); 
        if (minHeap.length > k) { 
            minHeap.sort((a, b) => a - b).shift(); // simulate a min-heap: drop the smallest element 
        } 
    } 
    return minHeap.sort((a, b) => a - b); 
}


Java

import java.util.*;

public List<Integer> topKElements(int[] arr, int k) { 
    PriorityQueue<Integer> heap = new PriorityQueue<>(Comparator.comparingInt(a -> -CRITERIA)); 
    for (int num : arr) { 
        // Define your own criteria and logic to push elements onto the heap 
        // For example, if you want to find the top k largest elements, push -num onto the heap 
        heap.offer(-CRITERIA); 
        if (heap.size() > k) { 
            heap.poll(); 
        } 
    } 
    List<Integer> topK = new ArrayList<>(); 
    while (!heap.isEmpty()) { 
        topK.add(-heap.poll()); 
    } 
    Collections.reverse(topK); 
    return topK; 
}


6. Two Heaps

Two Heaps utilize two heaps (a max-heap and a min-heap) to optimize certain problems, like finding median values in a dataset. This pattern is particularly useful for maintaining a balanced structure.


Here's how it works:
  1. Utilize two heaps: A Max Heap to identify the largest element and a Min Heap to locate the smallest.
  2. Populate the Max Heap with the first half of the numbers, aiming to find the largest among them.
  3. Fill the Min Heap with the second half of the numbers to pinpoint the smallest in this portion.
  4. The median of the current number set can be computed at any time by examining the top elements of both heaps.



Key Indicators:

  • You need to maintain a balanced structure for efficient median finding.
  • Problems involve handling sliding window problems with dynamic medians.
  • You want to prioritize elements in a collection based on their values.


Template Code:


Python

import heapq

class TwoHeaps:
    def __init__(self):
        self.min_heap = []  # Right heap to store larger half
        self.max_heap = []  # Left heap to store smaller half

    def add_num(self, num):
        if not self.max_heap or num <= -self.max_heap[0]:
            heapq.heappush(self.max_heap, -num)
        else:
            heapq.heappush(self.min_heap, num)
        
        # Balance the heaps if necessary
        if len(self.max_heap) > len(self.min_heap) + 1:
            heapq.heappush(self.min_heap, -heapq.heappop(self.max_heap))
        elif len(self.min_heap) > len(self.max_heap):
            heapq.heappush(self.max_heap, -heapq.heappop(self.min_heap))

    def find_median(self):
        if len(self.max_heap) == len(self.min_heap):
            return (-self.max_heap[0] + self.min_heap[0]) / 2.0
        else:
            return -self.max_heap[0]

# Usage:
two_heaps = TwoHeaps()
two_heaps.add_num(1)
two_heaps.add_num(2)
median = two_heaps.find_median()
print("Median:", median)


JS

// Note: plain sorted arrays stand in for real heaps here; a binary heap would be more efficient.
class TwoHeaps {
    constructor() {
        this.minHeap = []; // Right half (larger values), sorted ascending: index 0 is its minimum
        this.maxHeap = []; // Left half (smaller values), stored negated and sorted ascending: index 0 is -max
    }

    insertSorted(arr, num) {
        // Keep the array sorted ascending so index 0 always holds the heap's top element
        let i = 0;
        while (i < arr.length && arr[i] < num) i++;
        arr.splice(i, 0, num);
    }

    addNumber(num) {
        if (this.maxHeap.length === 0 || num <= -this.maxHeap[0]) {
            this.insertSorted(this.maxHeap, -num);
        } else {
            this.insertSorted(this.minHeap, num);
        }

        // Balance the heaps if necessary
        if (this.maxHeap.length > this.minHeap.length + 1) {
            this.insertSorted(this.minHeap, -this.maxHeap.shift());
        } else if (this.minHeap.length > this.maxHeap.length) {
            this.insertSorted(this.maxHeap, -this.minHeap.shift());
        }
    }

    findMedian() {
        if (this.maxHeap.length === this.minHeap.length) {
            return (-this.maxHeap[0] + this.minHeap[0]) / 2;
        } else {
            return -this.maxHeap[0];
        }
    }
}

// Usage:
const twoHeaps = new TwoHeaps();
twoHeaps.addNumber(1);
twoHeaps.addNumber(2);
const median = twoHeaps.findMedian();
console.log("Median:", median);


Java

import java.util.*;

class TwoHeaps {
    private PriorityQueue<Integer> minHeap; // Right heap to store larger half
    private PriorityQueue<Integer> maxHeap; // Left heap to store smaller half

    public TwoHeaps() {
        minHeap = new PriorityQueue<>();
        maxHeap = new PriorityQueue<>(Collections.reverseOrder());
    }

    public void addNumber(int num) {
        if (maxHeap.isEmpty() || num <= -maxHeap.peek()) {
            maxHeap.offer(-num);
        } else {
            minHeap.offer(num);
        }

        // Balance the heaps if necessary
        if (maxHeap.size() > minHeap.size() + 1) {
            minHeap.offer(-maxHeap.poll());
        } else if (minHeap.size() > maxHeap.size()) {
            maxHeap.offer(-minHeap.poll());
        }
    }

    public double findMedian() {
        if (maxHeap.size() == minHeap.size()) {
            return (-maxHeap.peek() + minHeap.peek()) / 2.0;
        } else {
            return -maxHeap.peek();
        }
    }
}

// Usage:
TwoHeaps twoHeaps = new TwoHeaps();
twoHeaps.addNumber(1);
twoHeaps.addNumber(2);
double median = twoHeaps.findMedian();
System.out.println("Median: " + median);


7. Monotonic Stack

Monotonic - (of a function or quantity) varying in such a way that it either never decreases or never increases.


The Monotonic Stack maintains a stack of elements in non-increasing or non-decreasing order, often used for finding the nearest smaller/greater elements in an array. It's a powerful tool for optimizing certain problems.


The order is strict: whenever we encounter an element that is smaller (or greater) than the one at the top of the stack, we pop from the monotonic stack until pushing the new element preserves the order.
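
For example, here is a minimal sketch of the classic "next greater element" problem, which keeps a decreasing stack of indices and follows the same pop-then-push structure as the templates below.

def next_greater_elements(nums):
    result = [-1] * len(nums)   # default: no greater element to the right
    stack = []                  # indices whose next greater element is still unknown
    for i, num in enumerate(nums):
        # Pop every index whose value is smaller than the current number
        while stack and nums[stack[-1]] < num:
            result[stack.pop()] = num
        stack.append(i)
    return result

print(next_greater_elements([2, 1, 2, 4, 3]))  # expected output: [4, 2, 4, -1, -1]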




Key Indicators:

  • You need to maintain elements in non-increasing or non-decreasing order.
  • Problems involve finding the nearest smaller or greater element.
  • You want to optimize stack-based operations while preserving order.


Template Code:


Python

def monotonic_stack(items): 
    stack = [] 
    for item in items: 
        # Adjust the condition for comparisons to suit your needs 
        while stack and stack[-1] <= item: 
            stack.pop() 
            # Do something with the popped item here 
        stack.append(item)
    return stack


JS

function monotonicStack(items) { 
    const stack = []; 
    for (const item of items) { 
        // Adjust the condition for comparisons to suit your needs
        while (stack.length > 0 && stack[stack.length - 1] <= item) { 
            stack.pop(); 
            // Do something with the popped item here 
        } 
        stack.push(item); 
    } 
    return stack; 
}


Java

import java.util.*;

public static int[] monotonicStack(int[] items) { 
    Deque<Integer> stack = new ArrayDeque<>(); 
    for (int item : items) {  
        // Adjust the condition for comparisons to suit your needs
        while (!stack.isEmpty() && stack.peekLast() <= item) { 
            stack.pollLast(); 
            // Do something with the popped item here 
        } 
        stack.offerLast(item); 
    } 
    int[] result = new int[stack.size()]; 
    int i = stack.size() - 1; 
    while (!stack.isEmpty()) { 
        result[i--] = stack.pollLast(); 
    } 
    return result; 
}


8. Depth-First Search

DFS, or Depth-First Search, is a traversal method where you explore as deeply as possible along a branch before backtracking; it is widely applied in problems involving trees and graphs, particularly for traversal and search operations.


In order to perform DFS on a tree, you need to start at the root. Then follow the steps below:
  1. Explore the current node by visiting its children nodes, typically from left to right.
  2. Recursively apply the DFS process to each child node, going deeper into the tree.
  3. Once all child nodes have been visited, backtrack to the parent node.
  4. Repeat steps 1-3 for each unvisited child of the current node until all nodes in the tree have been visited.




Difference with DFS on a Graph: The key difference between tree DFS and graph DFS lies in the presence of cycles. In a tree, there are no cycles by definition, so tree DFS ensures that each node is visited exactly once, and it naturally terminates when the entire tree is traversed. In contrast, graph DFS must incorporate additional measures to handle cyclic structures within a graph. To avoid revisiting nodes in a cycle indefinitely, graph DFS requires mechanisms like marking visited nodes and handling backtracking appropriately. This distinction makes graph DFS more complex than tree DFS due to the potential for infinite loops in the presence of cycles.
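
As a hedged sketch of that difference, a graph DFS in Python typically carries a visited set; the graph here is assumed to be an adjacency-list dictionary, and the function name is only illustrative.

def graph_dfs(graph, node, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)                  # mark the node so cycles don't cause infinite recursion
    for neighbor in graph[node]:
        if neighbor not in visited:
            graph_dfs(graph, neighbor, visited)
    return visited

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(graph_dfs(graph, "A"))  # expected output: {'A', 'B', 'C', 'D'} (set order may vary)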


Key Indicators:

  • You're dealing with tree structures and need to explore specific traversal orders.
  • Problems involve finding paths, calculating depth, or performing pre-order/in-order/post-order traversal.
  • You want to check if a path exists between two nodes.


Template Code:


Python

def dfs(root, target): 
    if root is None:  # Base case: Check if the current node is None 
        return None 
    if root.val == target:  # Base case: Check if the current node value matches the target 
        return root 
    left = dfs(root.left, target)  # Recursively search the left subtree 
    if left is not None:  # If the target is found in the left subtree, return the result 
        return left 
    return dfs(root.right, target)  # Recursively search the right subtree


JS

function dfs(root, target) { 
    if (root === null) { // Base case: Check if the current node is null 
        return null; 
    } 
    if (root.val === target) { // Base case: Check if the current node value matches the target 
        return root; 
    } 
    let left = dfs(root.left, target); // Recursively search the left subtree 
    if (left !== null) { // If the target is found in the left subtree, return the result 
        return left; 
    } 
    return dfs(root.right, target); // Recursively search the right subtree 
}


Java

public TreeNode dfs(TreeNode root, int target) { 
    if (root == null) { // Base case: Check if the current node is null 
        return null; 
    } 
    if (root.val == target) { // Base case: Check if the current node value matches the target 
        return root; 
    } 
    TreeNode left = dfs(root.left, target); // Recursively search the left subtree 
    if (left != null) { // If the target is found in the left subtree, return the result 
        return left; 
    } 
    return dfs(root.right, target); // Recursively search the right subtree 
}


9. Breadth-First Search

BFS is a traversal technique for trees and graphs that explores all nodes at the current depth before moving to the next level.


In order to perform BFS on a tree, you need to start at the root. Then follow the steps below:
  1. Initialize an empty queue data structure to keep track of nodes to be visited.
  2. Enqueue the root node into the queue.

  3. Enter a loop until the queue is empty:
    1. Dequeue the node at the front of the queue.
    2. Process the dequeued node (e.g., visit or perform some operation on it).
    3. Enqueue all of the node's children (if any) into the queue.
  4. Repeat steps 3a-3c until the queue becomes empty.

  5. The BFS traversal is complete when all nodes in the tree have been visited in a level-wise manner, from left to right.


In essence, BFS on a tree explores nodes level by level, starting from the root and moving to the children before proceeding to the next level. This ensures that nodes at each level are visited before moving to the next level, making it particularly useful for tasks like finding the shortest path in an unweighted tree or exploring a tree level by level.




Difference with BFS on a Graph: Similar to DFS, Graph BFS adapts to the presence of cycles and multiple paths among nodes. It employs mechanisms such as marking visited nodes and specialized termination conditions to efficiently navigate the intricate network of relationships within graphs.
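
For completeness, here is a minimal sketch of BFS on a graph (an adjacency-list dictionary is assumed), using a visited structure to cope with cycles and returning the distance, in edges, from the start node to every reachable node.

from collections import deque

def graph_bfs(graph, start):
    distances = {start: 0}            # node -> number of edges from start
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in distances:          # also acts as the visited set
                distances[neighbor] = distances[node] + 1
                queue.append(neighbor)
    return distances

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(graph_bfs(graph, "A"))  # expected output: {'A': 0, 'B': 1, 'C': 1, 'D': 2}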


Key Indicators:

  • You need to traverse a tree level by level.
  • Problems involve finding the shortest path in unweighted graphs.
  • You want to explore a tree or graph in a breadth-first manner.


Template Code:


Python

from collections import deque 
def bfs(root): 
    # Create a queue and initialize it with the root node 
    queue = deque([root]) 
    # Perform breadth-first search until the queue is empty 
    while len(queue) > 0: 
        # Dequeue the front node from the queue 
        node = queue.popleft() 
        # Process the current node 
        for child in node.children: 
            if is_goal(child): 
                # If the goal condition is satisfied, return the child node 
                return FOUND(child) 
            # Enqueue the child node to explore it in the next iterations 
            queue.append(child) 
    # Return NOT_FOUND if the goal is not reached 
    return NOT_FOUND


JS

function bfs(root) { 
    const queue = []; // Create a queue and initialize it with the root node 
    queue.push(root); 
    while (queue.length > 0) { // Perform breadth-first search until the queue is empty 
        const node = queue.shift(); // Dequeue the front node from the queue 
        for (const child of node.children) { // Process the current node 
            if (isGoal(child)) { // If the goal condition is satisfied, return the child node 
                return `FOUND ${child}`; 
            } 
            queue.push(child); // Enqueue the child node to explore it in the next iterations 
        } 
    } 
    return 'NOT_FOUND'; // Return NOT_FOUND if the goal is not reached 
}


Java

import java.util.LinkedList; 
import java.util.Queue;

public String bfs(Node root) { 
    Queue<Node> queue = new LinkedList<>(); // Create a queue and initialize it with the root node 
    queue.offer(root); 
    while (!queue.isEmpty()) { // Perform breadth-first search until the queue is empty 
        Node node = queue.poll(); // Dequeue the front node from the queue 
        for (Node child : node.getChildren()) { // Process the current node 
            if (isGoal(child)) { // If the goal condition is satisfied, return the child node 
                return "FOUND(child)"; 
            } 
            queue.offer(child); // Enqueue the child node to explore it in the next iterations 
        } 
    } 
    return "NOT_FOUND"; // Return NOT_FOUND if the goal is not reached 
}


10. Union Find (Disjoint-Set Union)

The Union-Find data structure, also known as Disjoint Set Union (DSU), is employed to efficiently manage and solve problems involving disjoint sets or connected components. It provides operations for merging sets (union) and determining the set to which an element belongs (find). Union-Find is commonly used in algorithms like Kruskal's Minimum Spanning Tree and cycle detection in graphs.


Union Find is implemented as follows:
  1. Initialization: Start with a collection of elements, each considered as a separate disjoint set. Assign a unique representative (often the element itself) to each set.
  2. Union Operation: To merge two sets, perform the union operation. Choose the representative of one set (often based on some criteria, like rank or size) and make it the parent of the representative of the other set. This effectively connects the two sets.
  3. Find Operation: When you need to determine the set to which an element belongs, use the find operation. Starting with the element in question, traverse the tree upwards to find the root node (representative) of its set. The path compression technique can be applied here to optimize future find operations by making the elements along the path directly point to the root.
  4. Cycle Detection: Union-Find is often used to detect cycles in graphs. When performing the union operation, if both elements belong to the same set (i.e., they have the same representative), it indicates the addition of an edge between nodes would create a cycle in the graph.
  5. Optimization: Various optimization techniques can be applied to improve the efficiency of Union-Find operations, such as union by rank and path compression.
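
As a quick, hypothetical usage sketch of step 4: with the Python UnionFind class from the template below, detecting a cycle while adding undirected edges could look like this.

# Assumes the UnionFind class defined in the Python template below
edges = [("A", "B"), ("B", "C"), ("A", "C")]   # the last edge closes a cycle
uf = UnionFind()
for u, v in edges:
    if uf.find(u) == uf.find(v):
        print(f"Adding edge {u}-{v} would create a cycle")
    else:
        uf.union(u, v)
# expected output: Adding edge A-C would create a cycle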



Template Code:


Python

class UnionFind:
    def __init__(self):
        self.id = {}

    def find(self, x):
        y = self.id.get(x, x)
        if y != x:
            self.id[x] = y = self.find(y)
        return y

    def union(self, x, y):
        self.id[self.find(x)] = self.find(y)


JS

class UnionFind { 
  constructor() { 
    this.id = {}; 
  } 
  /** 
   * Find the root parent of an element in the set. 
   * Implements path compression for better efficiency. 
   * @param x The element to find the root parent for. 
   * @returns The root parent of the element. 
   */ 
  find(x) { 
    let y = this.id[x] || x; 
    if (y !== x) { 
      this.id[x] = y = this.find(y); 
    } 
    return y; 
  } 
  /** 
   * Union two elements into the same set. 
   * @param x The first element. 
   * @param y The second element. 
   */ 
  union(x, y) { 
    this.id[this.find(x)] = this.find(y); 
  } 
}


Java

import java.util.*;

class UnionFind { 
    private Map<String, String> id; 
    public UnionFind() { 
        id = new HashMap<>(); 
    } 
    /** 
     * Find the root parent of an element in the set. 
     * Implements path compression for better efficiency. 
     * @param x The element to find the root parent for. 
     * @return The root parent of the element. 
     */ 
    public String find(String x) { 
        String y = id.getOrDefault(x, x); 
        if (!y.equals(x)) { 
            y = find(y);       // Recurse up to the root parent 
            id.put(x, y);      // Path compression: point x directly at the root 
        } 
        return y; 
    } 
    /** 
     * Union two elements into the same set. 
     * @param x The first element. 
     * @param y The second element. 
     */ 
    public void union(String x, String y) { 
        id.put(find(x), find(y)); 
    } 
}


11. Topological Sort

A directed acyclic graph (DAG) is a directed graph with no directed cycles.


Topological Sort is an algorithm for arranging the nodes of a directed acyclic graph (DAG) in a linear order, where each node appears before its successors. It is crucial for tasks like scheduling dependencies, compiling code, and analyzing the precedence of tasks in various applications.


Here’s a step-by-step breakdown of Topological Sorting:
  1. Initialization: Begin with a directed acyclic graph (DAG) that represents tasks or nodes with dependencies. Initialize a queue to hold the sorted nodes.

  2. In-Degree Calculation: Calculate the in-degree (the number of incoming edges) for each node in the graph. Nodes with an in-degree of 0 have no dependencies and are suitable to be the starting point of the topological sort.

  3. Initial Queue Filling: Enqueue all nodes with an in-degree of 0 into the queue. These nodes can be processed first.

  4. Topological Sort Loop: While the queue is not empty, perform the following steps:

    1. Dequeue a node from the front of the queue.
    2. Process the node (e.g., add it to the sorted list).
    3. For each of the node's neighbors (nodes it points to), decrement their in-degree by 1.
    4. If a neighbor's in-degree becomes 0 as a result of the decrement, enqueue it into the queue.
  5. Completion: Once the topological sort loop is complete, the queue will be empty, and the sorted list will contain all nodes in a valid topological order.

  6. Cycle Detection: If the queue becomes empty before all nodes have been processed (i.e., the sorted list contains fewer nodes than the graph), it indicates the presence of cycles in the graph, making topological sorting impossible.


The result of the Topological Sort is a linear ordering of nodes that respects their dependencies, making it suitable for scheduling tasks or analyzing the order of execution in various applications.
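
As a brief, hypothetical usage example: a four-course schedule where A must precede B and C, and both B and C must precede D, sorts as follows with the Python template below.

# Assumes the topological_sort function from the Python template below
graph = {
    "A": ["B", "C"],   # A must come before B and C
    "B": ["D"],        # B must come before D
    "C": ["D"],        # C must come before D
    "D": [],
}
print(topological_sort(graph))  # expected output: ['A', 'B', 'C', 'D'] (B and C may swap)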




Key Indicators:

  • The problem involves scheduling or ordering tasks with dependencies.
  • You're working with a directed acyclic graph (DAG).
  • Tasks need to be executed in a specific order while adhering to their dependencies.


Template Code:


Python

from collections import deque

def find_indegree(graph): 
    # Calculate the indegree of each node in the graph 
    indegree = {node: 0 for node in graph} 
    for node in graph: 
        for neighbor in graph[node]: 
            indegree[neighbor] += 1 
    return indegree

def topological_sort(graph): 
    result = [] 
    queue = deque() 
    indegree = find_indegree(graph) 
    # Add nodes with 0 indegree to the queue 
    for node in indegree: 
        if indegree[node] == 0: 
            queue.append(node) 
    while queue: 
        node = queue.popleft() 
        result.append(node) 
        # Update the indegree of neighboring nodes 
        for neighbor in graph[node]: 
            indegree[neighbor] -= 1 
            if indegree[neighbor] == 0: 
                queue.append(neighbor) 
    # If all nodes are visited, return the result 
    if len(graph) == len(result): 
        return result 
    else: 
        return None


JS

/** 
 * Finds the indegree of each node in the graph. 
 * @param {object} graph - The input graph. 
 * @returns {object} - The indegree of each node. 
 */ 
function findIndegree(graph) { 
  const indegree = {}; 
  for (const node in graph) { 
    indegree[node] = 0; 
  } 
  for (const node in graph) { 
    for (const neighbor of graph[node]) { 
      indegree[neighbor]++; 
    } 
  } 
  return indegree; 
}

/** 
 * Performs topological sorting on the given graph. 
 * @param {object} graph - The input graph. 
 * @returns {array|null} - The sorted nodes in topological order or null if a cycle is detected. 
 */ 
function topologicalSort(graph) { 
  const result = []; 
  const queue = []; 
  const indegree = findIndegree(graph); 
  // Add nodes with no incoming edges to the queue 
  for (const node in indegree) { 
    if (indegree[node] === 0) { 
      queue.push(node); 
    } 
  } 
  while (queue.length > 0) { 
    const node = queue.shift(); 
    result.push(node); 
    // Decrement the indegree of neighbors and enqueue if indegree becomes zero 
    for (const neighbor of graph[node]) { 
      indegree[neighbor]--; 
      if (indegree[neighbor] === 0) { 
        queue.push(neighbor); 
      } 
    } 
  } 
  // Check if all nodes have been visited (no cycles) 
  if (Object.keys(graph).length === result.length) { 
    return result; 
  } else { 
    return null; 
  } 
}


Java

import java.util.*;

/** 
 * Finds the indegree of each node in the graph. 
 * @param graph - The input graph. 
 * @return The indegree of each node. 
 */ 
public Map<String, Integer> findIndegree(Map<String, List<String>> graph) { 
    Map<String, Integer> indegree = new HashMap<>(); 
    for (String node : graph.keySet()) { 
        indegree.put(node, 0); 
    } 
    for (String node : graph.keySet()) { 
        for (String neighbor : graph.get(node)) { 
            indegree.put(neighbor, indegree.getOrDefault(neighbor, 0) + 1); 
        } 
    } 
    return indegree; 
}

/** 
 * Performs topological sorting on the given graph. 
 * @param graph - The input graph. 
 * @return The sorted nodes in topological order or null if a cycle is detected. 
 */ 
public List<String> topologicalSort(Map<String, List<String>> graph) { 
    List<String> result = new ArrayList<>(); 
    Queue<String> queue = new LinkedList<>(); 
    Map<String, Integer> indegree = findIndegree(graph); 
    // Add nodes with no incoming edges to the queue 
    for (String node : indegree.keySet()) { 
        if (indegree.get(node) == 0) { 
            queue.offer(node); 
        } 
    } 
    while (!queue.isEmpty()) { 
        String node = queue.poll(); 
        result.add(node); 
        // Decrement the indegree of neighbors and enqueue if indegree becomes zero 
        for (String neighbor : graph.get(node)) { 
            indegree.put(neighbor, indegree.get(neighbor) - 1); 
            if (indegree.get(neighbor) == 0) { 
                queue.offer(neighbor); 
            } 
        } 
    } 
    // Check if all nodes have been visited (no cycles) 
    if (graph.size() == result.size()) { 
        return result; 
    } else { 
        return null; 
    } 
}


12. Tries (Prefix-Tree)



A Trie is a tree-like data structure used for efficient string matching and storage of words. It excels in problems where you need to store and search for strings with common prefixes.


Here is how to implement a Trie:
  1. Initialization: Start with an empty Trie, which typically consists of a root node with no associated character.

  2. Insertion: To insert a word into the Trie, begin at the root node and traverse down the tree, one character at a time. For each character in the word:

    1. Check if a child node with that character exists under the current node.
    2. If it does, move to the child node.
    3. If not, create a new child node with the character and move to it.
  3. Word Completion: To check if a word exists in the Trie, traverse it in a manner similar to insertion. Ensure that each character in the word corresponds to a child node of the current node. If the traversal reaches the end of the word and there is a valid end marker (e.g., a boolean flag) at the last character node, the word exists in the Trie.

  4. Prefix Search: Trie excels in prefix searching. To find all words with a given prefix, start the traversal at the root node and move down the tree following the characters of the prefix. Once you reach the last character of the prefix, you can perform a depth-first search (DFS) from that node to find all words that share the same prefix.

  5. Deletion: To delete a word from the Trie, perform a search for the word. When you reach the end of the word, remove the end marker (if it exists). If the node has no other children, you can safely remove the entire branch of the Trie, which represents the word.


Tries can be memory-intensive, especially for large vocabularies. To optimize memory, techniques like compression (e.g., using a map instead of an array of characters in each node) and pruning (removing nodes with no descendants) can be applied.


Tries are particularly useful for efficient string matching, auto-complete suggestions, spell checkers, and indexing words with common prefixes. They provide a fast and space-efficient way to store and search for words or strings with shared prefixes in a tree-like structure.
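
To illustrate word completion and prefix search (steps 3 and 4 above), here is a minimal, hypothetical Trie sketch in Python with an explicit end-of-word flag; the template below shows only the bare insertion logic.

class Trie:
    def __init__(self):
        self.children = {}     # character -> child Trie node
        self.is_end = False    # marks that a complete word ends at this node

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_end = True

    def search(self, word):
        node = self._walk(word)
        return node is not None and node.is_end

    def starts_with(self, prefix):
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self
        for ch in s:
            if ch not in node.children:
                return None
            node = node.children[ch]
        return node

trie = Trie()
trie.insert("apple")
print(trie.search("apple"))       # expected output: True
print(trie.search("app"))         # expected output: False
print(trie.starts_with("app"))    # expected output: True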


Key Indicators:

  • You're dealing with strings and need efficient string matching.
  • Problems involve autocomplete suggestions, spell checkers, or searching for words with common prefixes.
  • You want to store and search for words efficiently.


Template Code:


Python

class TrieNode: 
    def __init__(self, value): 
        self.value = value 
        self.children = {} 
    def insert(self, s, idx): 
        # idx: index of the current character in s 
        if idx != len(s): 
            self.children.setdefault(s[idx], TrieNode(s[idx])) 
            self.children.get(s[idx]).insert(s, idx + 1)


JS

class TrieNode { 
    constructor(value) { 
        this.value = value; 
        this.children = {}; 
    } 
    insert(s, idx) { 
        // idx: index of the current character in s 
        if (idx !== s.length) { 
            if (!this.children[s[idx]]) { 
                this.children[s[idx]] = new TrieNode(s[idx]); 
            } 
            this.children[s[idx]].insert(s, idx + 1); 
        } 
    } 
}


Java

import java.util.*;

class TrieNode { 
    char value; 
    Map<Character, TrieNode> children; 
    TrieNode(char value) { 
        this.value = value; 
        this.children = new HashMap<>(); 
    } 
    void insert(String s, int idx) { 
        // idx: index of the current character in s 
        if (idx != s.length()) { 
            char c = s.charAt(idx); 
            if (!children.containsKey(c)) { 
                children.put(c, new TrieNode(c)); 
            } 
            children.get(c).insert(s, idx + 1); 
        } 
    } 
}


13. Dynamic Programming

Dynamic Programming is a powerful problem-solving technique used in computer science and mathematics to solve a wide range of complex problems efficiently. It is especially valuable when a problem can be broken down into simpler subproblems, and the solutions to these subproblems can be combined to solve the overall problem.


Here are its key characteristics and how it works:


Optimal Substructure:

  • This characteristic refers to the property that an optimal solution to a larger problem can be constructed from optimal solutions to its smaller subproblems.
  • In other words, DP problems exhibit a recursive structure, where the optimal solution for a problem relies on the optimal solutions of its subproblems.
  • For example, in finding the shortest path between two points in a graph, the shortest path between any two intermediate points should also be optimal.


Overlapping Subproblems:

  • Overlapping subproblems occur when the same subproblems are solved multiple times during the computation, leading to redundant work.
  • Dynamic Programming aims to address this issue by storing the solutions to subproblems in a data structure (often a table or memoization array) to avoid recalculating them.
  • This storage and reuse of subproblem solutions significantly reduce the time complexity of the algorithm.


How Dynamic Programming Works:

  1. Identify Subproblems: The first step in solving a problem using DP is to identify the subproblems. Break down the problem into smaller, manageable subproblems that, when solved, contribute to solving the main problem.
  2. Recursive Solution: Develop a recursive solution that expresses the solution of a larger problem in terms of solutions to smaller subproblems. This recursive formulation captures the optimal substructure.
  3. Memoization or Tabulation: To address overlapping subproblems, you can choose between two common techniques:
    • Memoization: Store the results of subproblems in a data structure (usually an array or hash table) as they are computed. When a subproblem is encountered again, retrieve its solution from the storage instead of recalculating it. This is also known as the top-down approach.
    • Tabulation: Create a table (often a 2D array) to store solutions to subproblems in a bottom-up fashion. Start by solving the smallest subproblems and progressively build up to the main problem.
  4. Optimal Solution: Once all subproblems are solved, the solution to the main problem can be obtained by combining the solutions of its subproblems according to the problem's recursive structure.
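
As a small, concrete instance of top-down memoization, here is the classic climbing-stairs count (how many distinct ways to climb n steps taking 1 or 2 at a time); the abstract template below follows the same shape.

def climb_stairs(n):
    memo = {}                      # memoization table: subproblem -> result

    def dp(i):
        if i <= 1:                 # base cases: 0 or 1 steps left
            return 1
        if i in memo:              # reuse a previously computed subproblem
            return memo[i]
        memo[i] = dp(i - 1) + dp(i - 2)   # recurrence relation
        return memo[i]

    return dp(n)

print(climb_stairs(5))  # expected output: 8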


Dynamic Programming is applied to a wide array of problems, including finding the shortest paths in graphs, optimizing resource allocation, counting combinations, solving knapsack problems, and much more. Its ability to optimize solutions by breaking down complex problems into simpler parts and avoiding redundant computations makes it a fundamental technique in algorithm design and optimization.



Important Concepts: Bottom-Up Approach, Top-Down Approach, Memoization, Tabulation


Key Indicators:

  • The problem can be broken down into smaller overlapping subproblems.
  • There's a need to optimize by storing and reusing solutions to subproblems.
  • You want to solve problems involving optimization or counting.


Template Code:

The template below shows top-down memoization, which is one of many ways to implement dynamic programming.


Python

def top_down_memo(arr): 
    def dp(state): 
        # Base case(s): Define your own base case(s) for the problem 
        if base_case: 
            return 0 
         
        # Check if the state has already been memoized 
        if state in memo: 
            return memo[state] 
         
        # Calculate the result using recurrence relation and memoize it 
        result = recurrence_relation(state) 
        memo[state] = result 
        return result 
    memo = {}  # Memoization table to store calculated results 
    return dp(initial_state)


JS

function topDownMemo(arr) { 
  function dp(state) { 
    // Base case(s): Define your own base case(s) for the problem 
    if (baseCase) { 
      return 0; 
    } 
    // Check if the state has already been memoized 
    if (memo.has(state)) { 
      return memo.get(state); 
    } 
    // Calculate the result using recurrence relation and memoize it 
    const result = recurrenceRelation(state); 
    memo.set(state, result); 
    return result; 
  } 
  const memo = new Map(); // Memoization map to store calculated results 
  return dp(initialState); 
}


Java

import java.util.HashMap; 
import java.util.Map;

public int topDownMemo(int[] arr) { 
    Map<StateType, Integer> memo = new HashMap<>(); // Memoization map to store calculated results 
    return dp(initialState, memo); 
}

private int dp(StateType state, Map<StateType, Integer> memo) { 
    // Base case(s): Define your own base case(s) for the problem 
    if (baseCase) { 
        return 0; 
    } 
    // Check if the state has already been memoized 
    if (memo.containsKey(state)) { 
        return memo.get(state); 
    } 
    // Calculate the result using recurrence relation and memoize it 
    int result = recurrenceRelation(state); 
    memo.put(state, result); 
    return result; 
}


14. Backtracking


Backtracking is a general algorithmic technique for solving problems incrementally by trying out different possibilities and undoing them if they don't lead to a solution. It's used when an exhaustive search is required.


Here's an in-depth look at how backtracking works:
  1. Problem Space Exploration: Backtracking explores the problem space by incrementally building a solution one piece at a time. At each step, it makes a decision and moves forward.
  2. Recursive Structure: Backtracking often involves recursion. It starts with an initial partial solution and explores all possible ways to extend it. This process continues recursively until a complete solution is found or it becomes evident that no valid solution exists.
  3. Decision Points: At each step, there are decision points where the algorithm must choose from available options. These choices lead to branching in the exploration process.
  4. Solution Validation: After making a choice, the algorithm checks whether the current partial solution is valid. If it's valid, the algorithm proceeds to the next step. If not, it backtracks, undoing the previous choice and exploring other options.
  5. Termination Conditions: Backtracking continues until one of two conditions is met:
    • A valid solution is found, in which case the algorithm stops and returns the solution.
    • It's determined that no valid solution exists, at which point the algorithm backtracks to a previous decision point and explores different options.
  6. Pruning: To optimize the search, backtracking often includes pruning strategies. Pruning involves avoiding the exploration of paths that are guaranteed to lead to invalid solutions, significantly reducing the search space.
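
As a concrete sketch of this choose-explore-undo loop, here is a minimal subset generator in Python; the generic template below follows the same structure, and the helper name is only illustrative.

def subsets(nums):
    result = []

    def backtrack(start, current):
        result.append(current[:])          # record the current partial solution
        for i in range(start, len(nums)):
            current.append(nums[i])        # choose
            backtrack(i + 1, current)      # explore
            current.pop()                  # undo the choice (backtrack)

    backtrack(0, [])
    return result

print(subsets([1, 2, 3]))
# expected output: [[], [1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]]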


Applications of Backtracking:


Backtracking is used in various problem domains, including:
  • Solving puzzles like Sudoku and N-Queens.
  • Generating permutations and combinations.
  • Searching for paths in graphs and trees.
  • Constraint satisfaction problems (e.g., the traveling salesman problem).
  • Game-playing algorithms (e.g., chess AI).


Key Indicators:

  • The problem involves exploring multiple possibilities incrementally.
  • There are decision points or constraints that necessitate trying out different options.
  • You need to find all possible solutions or satisfy specific conditions through an exhaustive search.


Template Code:


Python

def backtrack(curr, OTHER_ARGUMENTS...): 
    if BASE_CASE: 
        # Modify the answer according to the problem's requirements 
        return 
     
    ans = 0 
    for ITERATE_OVER_INPUT: 
        # Modify the current state according to the problem's requirements 
        ans += backtrack(curr, OTHER_ARGUMENTS...) 
        # Undo the modification of the current state (backtrack) 
     
    return ans


JS

function backtrack(curr, ...OTHER_ARGUMENTS) { 
    if (BASE_CASE) { 
        // Modify the answer according to the problem's requirements 
        return; 
    } 
    let ans = 0; 
    for (let i = 0; i < ITERATE_OVER_INPUT.length; i++) { 
        const item = ITERATE_OVER_INPUT[i]; 
        // Modify the current state according to the problem's requirements 
        ans += backtrack(curr, ...OTHER_ARGUMENTS); 
        // Undo the modification of the current state (backtrack) 
    } 
     
    return ans; 
}


Java

public int backtrack(StateType curr, OtherArgumentType... OTHER_ARGUMENTS) { 
    if (BASE_CASE) { 
        // Modify the answer according to the problem's requirements 
        return 0; // Return the base-case value required by the problem 
    } 
    int ans = 0; 
    for (int i = 0; i < ITERATE_OVER_INPUT.length; i++) { 
        ItemType item = ITERATE_OVER_INPUT[i]; 
        // Modify the current state according to the problem's requirements 
        ans += backtrack(curr, OTHER_ARGUMENTS); 
        // Undo the modification of the current state (backtrack) 
    } 
     
    return ans; 
}


15. Merge Intervals


Merging intervals involves combining overlapping or adjacent intervals into a consolidated set, often used in problems involving time intervals or scheduling. It simplifies interval-based operations.


Here's a closer look at how merging intervals works:
  1. Interval Representation: Intervals are typically represented as pairs of start and end points (e.g., [start, end]).
  2. Sorting: To merge intervals efficiently, start by sorting them based on their start points. This ensures that overlapping or adjacent intervals are adjacent in the sorted list.
  3. Merging Process: Initialize an empty list to hold the merged intervals. Then, iterate through the sorted intervals one by one:
    • If the current interval doesn't overlap with the previous one, add it to the list of merged intervals.
    • If there is an overlap, update the endpoint of the previous merged interval to encompass both the current and previous intervals, effectively merging them.
  4. Completion: After processing all intervals, the list of merged intervals will contain non-overlapping and consolidated intervals.
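
As a quick, hypothetical usage example with the Python template below, overlapping intervals collapse like this:

# Assumes the merge_intervals function from the Python template below
print(merge_intervals([[1, 3], [2, 6], [8, 10], [15, 18]]))
# expected output: [[1, 6], [8, 10], [15, 18]]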


Applications of Merge Intervals:


Merging intervals is commonly used in:
  • Scheduling and time management applications.
  • Overlapping event detection in calendar systems.
  • Interval-based data analysis, such as in database queries.
  • Solving problems related to resource allocation and meeting scheduling.


By merging overlapping intervals, this technique simplifies interval-based operations and enhances efficiency in various applications.


Key Indicators:

  • You're dealing with intervals or time-based data.
  • Problems involve merging overlapping or adjacent intervals.
  • You want to simplify interval-based operations or optimize event scheduling.


Template Code:


Python

def merge_intervals(intervals):
    # 1. Sort the intervals based on their start values.
    intervals.sort(key=lambda x: x[0])
    
    # 2. Initialize an empty list to store merged intervals.
    merged_intervals = []
    
    # 3. Iterate through the sorted intervals.
    for interval in intervals:
        # 4. If the merged_intervals list is empty or the current interval does not overlap with the last merged interval:
        if not merged_intervals or interval[0] > merged_intervals[-1][1]:
            merged_intervals.append(interval)
        else:
            # 5. If the current interval overlaps with the last merged interval, merge them.
            merged_intervals[-1][1] = max(merged_intervals[-1][1], interval[1])
    
    # 6. Return the merged_intervals list.
    return merged_intervals


JS

function mergeIntervals(intervals) {
    // 1. Sort the intervals based on their start values.
    intervals.sort((a, b) => a[0] - b[0]);
    
    // 2. Initialize an empty array to store merged intervals.
    const mergedIntervals = [];
    
    // 3. Iterate through the sorted intervals.
    for (const interval of intervals) {
        // 4. If the mergedIntervals array is empty or the current interval does not overlap with the last merged interval:
        if (mergedIntervals.length === 0 || interval[0] > mergedIntervals[mergedIntervals.length - 1][1]) {
            mergedIntervals.push(interval);
        } else {
            // 5. If the current interval overlaps with the last merged interval, merge them.
            mergedIntervals[mergedIntervals.length - 1][1] = Math.max(mergedIntervals[mergedIntervals.length - 1][1], interval[1]);
        }
    }
    
    // 6. Return the mergedIntervals array.
    return mergedIntervals;
}


Java

public class MergeIntervals {
    public int[][] merge(int[][] intervals) {
        // 1. Sort the intervals based on their start values.
        Arrays.sort(intervals, (a, b) -> a[0] - b[0]);
        
        // 2. Create a list to store merged intervals.
        List<int[]> mergedIntervals = new ArrayList<>();
        
        // 3. Iterate through the sorted intervals.
        for (int[] interval : intervals) {
            // 4. If the mergedIntervals list is empty or the current interval does not overlap with the last merged interval:
            if (mergedIntervals.isEmpty() || interval[0] > mergedIntervals.get(mergedIntervals.size() - 1)[1]) {
                mergedIntervals.add(interval);
            } else {
                // 5. If the current interval overlaps with the last merged interval, merge them.
                mergedIntervals.get(mergedIntervals.size() - 1)[1] = Math.max(mergedIntervals.get(mergedIntervals.size() - 1)[1], interval[1]);
            }
        }
        
        // 6. Convert the list to an array and return it.
        return mergedIntervals.toArray(new int[mergedIntervals.size()][]);
    }
}


Want to learn more?

If you’d like to learn more about these patterns and how you can implement them more effectively during your next coding interview, then check out techinterviews.io today! We offer a comprehensive curriculum to prepare you for your next coding interview, covering topics such as data structures, algorithms, behavioral interviews, and even salary negotiation. We even have a built-in coding workspace for you to practice.


Supercharge your coding interview prep today with our promo code TECH30 for $30 off!


Featured image “a developer writing code” by @Limarc

Graphics made with Okso.app

