
Algorithms Explained: Finding Nemo (Binary Search)

Photo by Ana Noelle on Unsplash
John O'Sullivan
Senior Full Stack Engineer
& DevOps Practitioner

📖 8-10 minute read
Perfect for your morning coffee break.

❓ Why Do Algorithms Matter? How Can They Help Us Find Nemo?

Lately I've been reading Algorithms Unlocked by Thomas H. Cormen, where he introduces the fundamental concepts of algorithms, why they matter, and how they shape our digital world.

Binary Search is one of many commonly used algorithms. But before we dive underwater in search of our little orange friend, let's understand why algorithms are so important in our everyday lives.

❓ What Is an Algorithm?

An algorithm is simply a precise method for performing a task.

💡 Everyday Example

Making a cup of coffee could be considered an algorithm that has been programmed into your brain from the moment you first completed the act (a code sketch of the same recipe follows the list):

  1. Boil the kettle
  2. Add ground coffee beans to your cafetière
  3. Pour in hot water
  4. Let it steep for 3-4 minutes
  5. Plunge the coffee to separate it from the grounds
  6. Pour the coffee into a mug and enjoy
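
If we wrote that same recipe as code, it might look something like this - a purely illustrative sketch (the makeCoffee function and its step names are hypothetical), showing that an everyday algorithm is just a fixed sequence of precise, ordered instructions:

function makeCoffee(): void {
  // Each step is precise and the order matters - just like in real code
  const steps = [
    "Boil the kettle",
    "Add ground coffee beans to your cafetière",
    "Pour in hot water",
    "Let it steep for 3-4 minutes",
    "Plunge the coffee to separate it from the grounds",
    "Pour the coffee into a mug and enjoy",
  ];

  steps.forEach((step, index) => console.log(`${index + 1}. ${step}`));
}

makeCoffee(); // prints the six steps in order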

💡 Computational Examples

  • When you type a search into Google, algorithms decide which results to show you
  • Amazon’s recommendation system uses algorithms to suggest products
  • Uber’s pricing algorithm adjusts fares based on demand

❓ Why Should You Care About Algorithms?

🙎🏻‍♂️ For Non-Techies

  • Social media algorithms control what news/videos you see
  • Loan approvals, job applications, and insurance rates increasingly rely on algorithms
  • Understanding algorithms helps you question/understand why tech behaves certain ways
  • Good algorithms help doctors diagnose diseases faster or find missing children

👨‍💻 For Engineers

  • Algorithms are the foundation of clean, efficient code
  • Understanding them transforms how you approach system design, debugging and technical interviews
  • A well-designed algorithm can dramatically increase the performance and user experience of an application whilst simultaneously reducing server costs
  • A poorly designed algorithm may crash servers, damaging not only a company's profits but also reputation
  • Engineers who deeply understand algorithms tend to be trusted with high-impact projects, progress faster towards staff/principal roles and often command higher salaries

❓ What Makes a Good Algorithm?

A useful algorithm must do two things well:

  • be correct
  • be efficient

🎯 Be Correct

An algorithm must reliably produce accurate results for its intended purpose.

💡 Real-World Examples

Google Maps must account for real-time accidents, road closures, live traffic patterns and other contributing factors.

Sleep Tracking Apps should distinguish between actual sleep vs. reading in bed, and REM cycles vs. light sleep (many apps still fail to calculate this accurately).

Autopilot Systems - such as Tesla's FSD - must correctly identify stop signs in heavy rain, pedestrians at night or in low visibility, construction zones, etc.

The Gray Areas

There are, however, more and more gray areas in the modern use of algorithms.

  • Generative AI tools such as ChatGPT display disclaimers like "ChatGPT can make mistakes. Check important info."
  • Spam filters occasionally flag legitimate emails (false positives)
  • Facial recognition works better for some demographics than others

Therefore the correctness threshold is often context-dependent - weather apps can tolerate errors, aircraft control systems cannot.

⚡ Be Efficient

A well-designed algorithm should minimize resource usage. It must be optimised for the following (a rough timing sketch follows the list):

  • Time complexity (how long the algorithm takes to complete)
  • Memory usage (how much temporary "workspace" the algorithm needs)
  • Energy consumption (how much battery/power the algorithm uses)
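
As a rough illustration of the time dimension only (this example is my own, not from the book), console.time can show how the run time of a simple linear loop grows with input size - doubling the input roughly doubles the time taken:

function sumUpTo(n: number): number {
  // A deliberately simple piece of "work": n additions
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

// Time the same algorithm at two input sizes (exact numbers vary by machine)
console.time("1 million");
sumUpTo(1_000_000);
console.timeEnd("1 million");

console.time("2 million");
sumUpTo(2_000_000);
console.timeEnd("2 million"); // typically around twice the 1 million run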

❓ Why Efficiency Matters

  • In 2006, Amazon found that every 100ms in added page load time cost them 1% in sales, roughly $107 million
  • Netflix's recommendation algorithm is credited with reducing the customer attrition rate, saving an estimated $1B per year

💡 Pro Tip

In interviews, always:

  1. Verify correctness first
  2. Optimize efficiency second
  3. Discuss tradeoffs between them

🕵🏼‍♂️ Finding Nemo: A Binary Search Adventure

There are an estimated 34,000 fish species in the world's oceans, so how would Marlin find Nemo most efficiently?

const fish = ["Anemonefish", ..., "Barracuda", ..., "Zebrafish"]; // 34k fish

function linearFindNemo(fish: string[], target: string = 'Nemo'): number {
  for (let i = 0; i < fish.length; i++) {
    if (fish[i] === target) return i;
  }
  return -1;
}

The linearFindNemo function above is a simple for loop that iterates over an array of fish, checking whether each fish is Nemo (a quick usage example follows the steps below).

How It Works

  1. Start at the first fish
  2. Ask: "Are you Nemo?"
  3. If not, move to the next fish
  4. Repeat until Nemo is found or all fish have been checked
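
As a quick usage example (the five-name reef array below is made up purely for illustration), the worst case is when Nemo is the last fish checked - or missing entirely:

// A tiny, made-up dataset to illustrate the worst case for linear search
const reef = ["Dory", "Marlin", "Bruce", "Crush", "Nemo"];

console.log(linearFindNemo(reef));         // 4  - five checks, Nemo is last
console.log(linearFindNemo(reef, "Gill")); // -1 - five checks, never found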

📊 Key Stats

Fish      Max Checks Needed
10        10
1,000     1,000
34,000    34,000

Summary

The linear search approach is highly inefficient - it's like checking every supermarket aisle for toothpaste even though you know it's in the toiletries section.

Just like Marlin panicking and checking every reef, this approach gets slow, fast.

Binary Search is widely regarded as one of the fastest and most precise search algorithms, making it a popular choice in programming for its efficiency - if Marlin had access to it, the movie would have been far shorter than the advertised 1h 40m!

function binaryFindNemo(fish: string[], target: string = "Nemo"): number {
  let left = 0;
  let right = fish.length - 1;

  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    const current = fish[mid];

    if (current === target) return mid; // Found Nemo!
    if (current < target) {
      // Search right half (alphabetically later names)
      left = mid + 1;
    } else {
      // Search left half (alphabetically earlier names)
      right = mid - 1;
    }
  }
  return -1; // Nemo not found
}

The data to be searched must be sorted to apply binary search.
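
In practice that means sorting once up front. A minimal sketch (using JavaScript's built-in sort, which orders these simple names alphabetically) might look like this:

// Binary search only works on sorted data, so sort the names first
const sortedFish = [...fish].sort();

console.log(binaryFindNemo(sortedFish)); // index of "Nemo" in the sorted array, or -1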

🎬 Initial Setup

The binaryFindNemo function first sets up the known range of fish from its first to last index positions:

  • left = 0 (first fish index)
  • right = fish.length - 1 (last fish index, e.g., 33,999 for 34k fish)

🔍 Finding the Midpoint

It then finds the fish at the midpoint of the 34,000 alphabetically sorted fish:

  • Calculates mid = Math.floor((left + right) / 2) as (0 + 33999)/2 = 16,999.5 → 16,999
  • Gets the fish at index 16,999: current = fish[mid]

🔦 The Comparison Logic

It then checks if the current fish is Nemo. If so, job done, we found Nemo! If not, we check whether the name of the current fish is alphabetically lower or higher than the target fish, Nemo. Depending on the result, we adjust the value of left or right, essentially choosing which half of the remaining data will be searched in the next while loop iteration.

  1. If current === target → Immediate return (found Nemo!)
  2. If current < target (e.g., "Marlin" < "Nemo") → Discards left half by setting left = mid + 1
  3. If current > target → Discards right half by setting right = mid - 1

🔁 Subsequent Iterations

After the first iteration discards half the data, the new search range is either:

  • 0-16,998 (if Nemo comes before midpoint fish)
  • 17,000-33,999 (if Nemo comes after midpoint fish)

Each iteration halves the search space.

🏁 Termination

The while loop ends (see the traced example after this list) when either:

  • Nemo is found (current === target)
  • Or not found (left > right)
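
To see the whole walkthrough above in action on something smaller than 34,000 fish, here is a hand-traced run on a made-up, already-sorted array of seven names (both the names and the trace are my own illustration):

// A made-up, already-sorted array of 7 names (indices 0-6)
const smallReef = ["Bruce", "Crush", "Dory", "Gill", "Marlin", "Nemo", "Pearl"];

// binaryFindNemo(smallReef, "Nemo") proceeds as follows:
// Iteration 1: left = 0, right = 6, mid = 3 -> "Gill" < "Nemo", so left = 4
// Iteration 2: left = 4, right = 6, mid = 5 -> "Nemo" === "Nemo", return 5
console.log(binaryFindNemo(smallReef, "Nemo")); // 5

// Searching for a fish that isn't there ends when left > right:
// Iteration 1: left = 0, right = 6, mid = 3 -> "Gill"  < "Squirt", left = 4
// Iteration 2: left = 4, right = 6, mid = 5 -> "Nemo"  < "Squirt", left = 6
// Iteration 3: left = 6, right = 6, mid = 6 -> "Pearl" < "Squirt", left = 7
// left (7) > right (6), so the loop ends and -1 is returned
console.log(binaryFindNemo(smallReef, "Squirt")); // -1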

📊 Maximum Steps Needed

As briefly mentioned, binary search works by repeatedly dividing the search space in half. Each iteration eliminates 50% of the remaining fish until Nemo is found.

Therefore the maximum number of checks needed to find Nemo is just 16, compared with up to 34,000 for linear search.
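
You can sanity-check that figure with one line of arithmetic - Math.log2 tells us how many times the dataset can be halved, and rounding up gives the worst-case number of comparisons:

// Worst-case comparisons for binary search ≈ log2 of the dataset size, rounded up
console.log(Math.ceil(Math.log2(34_000)));    // 16
console.log(Math.ceil(Math.log2(1_000_000))); // 20 - even a million fish needs only ~20 checks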

We can also watch the halving happen with the following table:

Step  Fish Remaining  Search Range (Indices)  Midpoint Index  Action
1     34,000          0 - 33,999              16,999          Discard half
2     17,000          0 - 16,998              8,499           Discard half
3     8,500           0 - 8,498               4,249           Discard half
4     4,250           0 - 4,248               2,124           Discard half
5     2,125           0 - 2,123               1,061           Discard half
6     1,062           0 - 1,060               530             Discard half
7     531             0 - 529                 264             Discard half
8     265             0 - 263                 131             Discard half
9     132             0 - 130                 65              Discard half
10    66              0 - 64                  32              Discard half
11    33              0 - 31                  15              Discard half
12    16              0 - 14                  7               Discard half
13    8               0 - 6                   3               Discard half
14    4               0 - 2                   1               Discard half
15    2               0 - 0                   0               Final check (found/not found)

❓ But Why Is Binary Search So Much Faster?

The secret lies in the mathematical principle of ‘logarithms’, or, to be more precise, the binary logarithm, which answers the question: "How many times must we multiply 2 by itself to reach a target number?"

Binary Logarithmic Magic:

  • log₂(8) = 3 because 2 × 2 × 2 = 8
  • log₂(16) = 4 because 2 × 2 × 2 × 2 = 16 (one more doubling)

Binary search mirrors this exactly:

  • 8 items? Max 3 steps (halving: 8 → 4 → 2 → 1)
  • 16 items? Just 4 steps (16 → 8 → 4 → 2 → 1)

Binary search leverages logarithmic scaling through its methodical halving of the search space. Each comparison operation divides the remaining items by two, directly reflecting how logarithms measure exponential growth.

This brings us to big O notation, the speedometer for algorithms.

🏎️ Big O Notation: Algorithm Speedometer

In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.

Big O notation gives us a worst-case scenario gauge of the performance of an algorithm.

Algorithm      Time Complexity   Real-World Analogy
Linear Search  O(n)              Reading every page of a book
Binary Search  O(log n)          Using the index to jump to a page

In the case of our Linear Search example, the time taken grows in proportion to the number of fish species in the dataset. We can say that the linear search approach has a time complexity of O(n), i.e. time grows linearly with input size.

For the Binary Search example, as we concluded above, the search speed is a dramatic improvement, so we can say it has a time complexity of O(log n), i.e. time grows logarithmically with input size.
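
To see just how differently those two curves grow, here is a small sketch (my own illustration) printing the worst-case number of checks for each approach at increasing dataset sizes:

// Worst-case checks: linear search needs up to n, binary search up to ceil(log2(n))
const sizes = [10, 1_000, 34_000, 1_000_000, 1_000_000_000];

for (const n of sizes) {
  console.log(`${n} items -> linear: up to ${n} checks, binary: up to ${Math.ceil(Math.log2(n))}`);
}
// The last line prints: "1000000000 items -> linear: up to 1000000000 checks, binary: up to 30"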

📌 Conclusion: The Power of Working Smarter, Not Harder

As you’ve likely realised, the "Finding Nemo" analogy was a playful gateway into understanding algorithms - but the real-world applications are far more profound. That list of fish names operates on the same principles as your phone’s contact list (which likely uses binary search to instantly locate names), or a SQL database querying billions of records with a simple SELECT statement. These systems don’t panic like Marlin — they work smarter.

The Hidden Algorithmic World

🗄️ Databases

A SELECT * FROM users WHERE id = 509 query on billions of records uses B-tree indexes (binary search’s advanced cousin) to return results in milliseconds.

🛒 E-Commerce

Amazon’s product search filters 600 million items using logarithmic scaling to display relevant products before you finish typing.

🏥 Medicine

Genomic databases use divide-and-conquer algorithms to match DNA sequences across millions of records.

❓ Why This Matters to You

As an engineer: These principles help you write code that scales gracefully (and keeps your servers from melting down).

As a tech user: Understanding algorithms helps you recognize why some apps feel "magically" fast while others lag.

🧠 Final Thought

The next time you search for a contact or filter a massive spreadsheet, remember: behind that instant result is an algorithm like binary search, quietly doing the heavy lifting. Whether you’re hunting for a clownfish or a customer record, working smarter, not harder, is what turns impossible tasks into trivial ones.

Thanks for reading.

John O