The Difference Between Good Code and Bad Code Is 5 Minutes

I’m a fan of the KISS principle. Keep it simple. That applies when you’re building something new, but it applies just as much when you’re fixing something old.

Across every role I’ve had — agency work, staff engineer at a fintech startup, time at AWS and Meta, consulting for startups, building my own products — the pattern is always the same. The difference between code that’s maintainable and code that becomes a nightmare is usually about 5 extra minutes of thought. Not hours. Not days. Five minutes to ask: will the next person understand this? Can they extend it? Will they have to undo what I did before they can make progress?

But here’s the nuance: sometimes those 5 minutes are actually 5 months in disguise. Knowing the difference is the real skill.

When Skipping the 5 Minutes Costs You

One of my junior engineers manually provisioned an EC2 instance when we needed to spin up a server. We were on a tight deadline, and I get it — clicking through the console is fast. The problem was that we ran everything through CDK. Every piece of infrastructure was defined as code.

I came down on him pretty hard for it. Not because the instance didn’t work — it worked fine. But because of the hidden cost he was creating.

When everything is defined as code, you know what exists, where it’s deployed, and how to reproduce it. When someone provisions something manually, you get ghosts in the system. Six months later, someone gets paged at 2am for a service being down, and nobody knows where it was deployed or how it was configured. That’s a real problem. And by the way, CDK is terrible at importing existing infrastructure — so migrating that manually created EC2 back into our managed stack was significantly harder than just doing it right the first time.

The 5 minutes here was writing the CDK construct instead of clicking through the console. He skipped it, and it cost us hours later.
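For concreteness, here’s roughly what those 5 minutes look like in CDK’s Python bindings. This is a sketch, not our actual stack — every name here (WorkerStack, "Worker", the VPC setup) is invented, and it assumes aws-cdk-lib is installed.

```python
# Sketch of the kind of construct the console shortcut skips.
# All names are illustrative; requires aws-cdk-lib and constructs.
from aws_cdk import Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class WorkerStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "Vpc", max_azs=2)

        # The instance now exists in code: reviewable, reproducible,
        # and visible to whoever gets paged at 2am six months from now.
        ec2.Instance(
            self, "Worker",
            vpc=vpc,
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.T3, ec2.InstanceSize.MICRO
            ),
            machine_image=ec2.MachineImage.latest_amazon_linux2(),
        )
```

Twenty-odd lines versus a dozen console clicks — and unlike the clicks, it shows up in code review and `git blame`.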

That said — I want to be honest about context. “Everything as code” was non-negotiable at that scale, with that team size, at AWS. For my own startup products right now? I’m skipping Terraform and creating resources manually. The tradeoff is different when you’re a solo founder versus a team of engineers who need to understand each other’s infrastructure. The principle isn’t “always do it the hard way.” It’s “understand the cost of the shortcut before you take it.”

When Spending the 5 Minutes Pays Off

On a recent consulting engagement, I inherited a codebase where the backend had queries timing out at 30+ seconds. The team had worked around it by hardcoding data into static JSON files — a reasonable decision when you need to demo next week.

When I wired the frontend to the actual API, the pages were unusable. Even loading 100 entries was painfully slow. The problem was one giant JOIN across millions of rows trying to do everything at once.

I’d already created ORM models for all their tables during my discovery phase. Instead of trying to optimize the existing raw SQL in place, I stripped out the old queries entirely and replaced them with ORM-based ones — decomposed into smaller, faster queries with pagination.

Query time dropped from 30+ seconds to ~200ms.

Here’s the interesting part: I had Claude write the optimized version in raw SQL as well, just to compare. The ORM-based query was 15% faster. The ORM generates cleaner query plans because it’s been optimized for the patterns you’re using. Raw SQL scattered through a codebase tends to accumulate inefficiencies that nobody notices.

The 5 minutes here was replacing the queries with ORM-based ones instead of just optimizing the raw SQL in place. Both would have gotten me to ~200ms. But now every query in the codebase is type-safe, composable, and maintainable. The next developer doesn’t have to parse raw SQL strings to understand what’s happening.
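The decomposition itself is easy to sketch. This toy uses stdlib sqlite3 rather than an ORM, and every table and column name is invented — the point is the shape: one page-sized query, then a targeted lookup for just the rows that page references, instead of one giant JOIN over everything.

```python
# Toy illustration of decompose-and-paginate, using stdlib sqlite3.
# Table/column names are invented; the real fix used ORM-generated queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"c{i}") for i in range(1, 6)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, (i % 5) + 1, 10.0 * i) for i in range(1, 101)])

PAGE_SIZE = 20

def orders_page(page: int) -> list[tuple]:
    # Step 1: a small query that only touches one page of the large table.
    rows = conn.execute(
        "SELECT id, customer_id, total FROM orders ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE),
    ).fetchall()
    # Step 2: look up only the customers this page actually references,
    # instead of joining millions of rows to get the same answer.
    ids = sorted({customer_id for _, customer_id, _ in rows})
    placeholders = ",".join("?" * len(ids))
    names = dict(conn.execute(
        f"SELECT id, name FROM customers WHERE id IN ({placeholders})", ids
    ).fetchall())
    return [(oid, names[cid], total) for oid, cid, total in rows]

first_page = orders_page(0)
```

An ORM gives you the same shape with less ceremony — the pagination and the second lookup fall out of the relation definitions — which is exactly why replacing the raw SQL was the better 5 minutes.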

I’m a big believer in ORMs — my favorite is EntGo, a graph-based ORM for Go that I loved at Meta — but the principle is the same regardless of language: represent your schemas as code. Mirror the database, don’t scatter raw queries through your application.

When the 5 Minutes Is Actually 5 Months

Not every “do it right” instinct is correct. Sometimes the extra effort is over-engineering in disguise.

At a previous role, our engineering team decided to build a GraphQL server. It was the “right” choice architecturally — flexible queries, typed schema, great developer experience. We were excited about it. I was excited about it.

It was the wrong call.

Building a GraphQL server right took way too long. The complexity snowballed. We delayed showing progress to stakeholders because nothing was “ready” yet — the exact mistake I talk about in telling the story of your work. We should have built a REST API. It would have been simpler, faster to ship, and honestly better for our use case.

And then there’s the security angle. GraphQL has a fundamental problem with arbitrarily deep query nesting. If you expose it to the world as a public API, you’re opening yourself up to query depth attacks that are genuinely hard to defend against. We were using it internally, so it was manageable, but I’d never recommend GraphQL for a public-facing API again. The edge cases are a nightmare.
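To see why nesting is the problem, here’s a deliberately naive sketch: queries modeled as nested dicts, plus the standard mitigation of rejecting anything past a depth limit before executing it. Real servers use purpose-built libraries for this; nothing here is GraphQL’s actual API.

```python
# Toy model of a query-depth attack and its standard mitigation.
# Queries are nested dicts of field -> sub-selection; all names invented.

MAX_DEPTH = 5

def query_depth(selection: dict) -> int:
    """Depth of a selection set; a leaf field counts as depth 1."""
    if not selection:
        return 1
    return 1 + max(query_depth(sub) for sub in selection.values())

# friends-of-friends-of-friends... each added level can multiply the
# work the server does, so an attacker just keeps nesting.
attack = {"user": {}}
for _ in range(50):
    attack = {"user": {"friends": attack}}

depth = query_depth(attack)
rejected = depth > MAX_DEPTH  # a depth limit refuses this before execution
```

A flat REST endpoint simply doesn’t have this failure mode — the server, not the caller, decides how much work one request can ask for.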

We were spending the extra 5 minutes. But those 5 minutes were actually 5 months. The KISS principle would have told us to build the REST API, ship it, and evaluate whether we actually needed GraphQL later. We didn’t.

When 15 Hours Is Still the 5-Minute Choice

On that same engagement, the frontend was a Vue app iframing a separate Angular app through an Express server. I launched it locally and an entire section of the UI was blank — errored out. That’s when I discovered the iframe setup.

I could have wired the Vue app to the API around the iframe. It would have “worked.” Instead, I spent about 15 hours porting the Angular components into Vue using Claude Code and consolidating everything into one application. Two cross-dependent repos became one. Two build pipelines became one. The iframe that blanked out an entire section of the UI whenever it failed to load was gone.

Was 15 hours the “5 minute” choice? Yes — because the alternative was maintaining that complexity indefinitely.

The same principle applied to customer data isolation. Users could upload documents, but their data wasn’t tagged with their user ID. The RAG chatbot searched all data, not just theirs. I could have added a filter at the query layer — check user_id at read time, don’t bother tagging at write time. It would have solved the immediate problem.

Instead, I added user_id at the source. A new documents table to track uploads, two new columns on the data table, and user_id threaded through the entire pipeline — from upload to extraction to storage to query. The extraction pipeline itself didn’t change. The database loader got two new columns. The query layer got one additional WHERE clause.
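The shape of that change is easy to sketch. This uses stdlib sqlite3 with invented table and column names; the point is that tagging happens at write time, so the read path only grows by one WHERE clause.

```python
# Sketch of tagging data at the source vs. filtering at read time.
# Schema and names are invented; the real pipeline was more involved.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- New table: which user uploaded which document.
    CREATE TABLE documents (id INTEGER PRIMARY KEY, user_id TEXT, filename TEXT);
    -- Existing data table, plus the two new columns.
    CREATE TABLE chunks (id INTEGER PRIMARY KEY, content TEXT,
                         user_id TEXT, document_id INTEGER);
""")

def store_chunk(content: str, user_id: str, document_id: int) -> None:
    # Write path: every row is tagged with its owner at the source.
    conn.execute(
        "INSERT INTO chunks (content, user_id, document_id) VALUES (?, ?, ?)",
        (content, user_id, document_id),
    )

def search_chunks(term: str, user_id: str) -> list[str]:
    # Read path: the only change is the extra user_id predicate.
    return [content for (content,) in conn.execute(
        "SELECT content FROM chunks WHERE content LIKE ? AND user_id = ?",
        (f"%{term}%", user_id),
    )]

store_chunk("alpha report", "alice", 1)
store_chunk("alpha memo", "bob", 2)
results = search_chunks("alpha", "alice")  # only alice's data
```

Filtering at the query layer alone would have produced the same search results today — but none of the per-user foundations the later features needed.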

Five extra minutes of schema design. But now future features — usage analytics, per-user billing, data export — all have the foundation they need. Without it, every one of those features would start with a data migration.

The Actual Principle

The 5-minute rule isn’t “always do it the right way.” It’s a decision framework:

  1. What’s the shortcut? Understand it clearly. Sometimes it’s the right call.
  2. What’s the cost of the shortcut 6 months from now? If it’s “someone gets paged and doesn’t know what this is” — spend the 5 minutes. If it’s “we’ll need to refactor this eventually” — maybe that’s fine.
  3. Is the “right way” actually 5 minutes, or is it 5 months? If you’re building a GraphQL server when a REST API would do, you’re not being thorough — you’re over-engineering.

The difference between good code and bad code isn’t talent or experience or how many design patterns you know. It’s the willingness to pause for 5 minutes and think about the person who comes after you. Sometimes that means writing the CDK construct. Sometimes that means shipping the REST API and moving on.

Know which one you’re in. That’s the skill.