Let’s do a quick exercise. Think about the last major technical decision your organization made. Maybe it was adopting Kubernetes. Maybe it was choosing a new data platform. Maybe it was committing to microservices, or picking an AI vendor, or deciding to build something in-house instead of buying it.

Now ask yourself: what was the decision-making process?

If you’re being honest, it probably looked something like this: someone (a senior engineer, an architect, maybe a vendor’s sales engineer) made a recommendation. The recommendation came with a rationale that sounded reasonable. There was a meeting. A few people asked questions. Nobody had a strong enough objection to block it. The decision was made. A Jira epic was created. The organization committed months of engineering time based on what was, functionally, one person’s informed opinion that nobody stress-tested.

This is how most technical decisions get made. Not through rigorous evaluation. Not through structured analysis of trade-offs. Through the gravitational pull of whoever has the most conviction in the room.

And then, twelve months later, when the Kubernetes migration has tripled your operational complexity, or the microservices architecture has turned a simple feature change into a six-team coordination exercise, or the build-versus-buy decision has left you maintaining a system that a vendor could run better and cheaper… nobody goes back to examine how the decision was made. They just deal with the consequences and start the cycle over.

Decisions are bets, and most organizations don’t track their bets

Annie Duke wrote a book called Thinking in Bets that should be required reading for anyone who makes technical decisions for a living. Her core argument is simple: every decision is a bet on an uncertain future, and the quality of a decision should be judged by the process that produced it, not by the outcome.

This is the opposite of how most engineering organizations work. Good outcomes get attributed to good decisions, even when the decision was a coin flip that happened to land well. Bad outcomes get attributed to execution failures, even when the decision itself was the problem. The result is an organization that never gets better at making decisions because it never honestly examines how decisions were made.

Here’s a test. Go find the last three significant technical decisions in your organization and look for the written record. Not the announcement. The analysis. The alternatives that were considered. The trade-offs that were identified. The risks that were acknowledged. The criteria that were used to choose.

If that record doesn’t exist, your organization is making bets without keeping a ledger. You can’t learn from decisions you can’t examine.

The hype cycle is not a decision framework

The most expensive technical decisions we see aren’t the ones driven by bad analysis. They’re the ones driven by no analysis. Decisions that were made because a technology was trending, because a conference talk was compelling, because a competitor was doing it, because the CEO read an article on the plane.

We are currently living through the most aggressive technology hype cycle in a generation. AI, specifically. The pressure on engineering leaders to “do something with AI” is immense, and the decision-making around it reflects that pressure. Companies are committing to AI platforms, hiring AI teams, and restructuring roadmaps around AI capabilities based on a level of analysis that wouldn’t pass muster for a database migration.

This isn’t an argument against AI. We build production AI systems, and we’re obviously believers. It’s an argument against making consequential technical decisions based on vibes. “Everyone is doing it” is not a strategy. “Our investors expect it” is not a technical requirement. “We’ll fall behind” is a fear, not an analysis.

The companies that are getting real value from AI are the ones that started with a specific problem, evaluated whether AI was the right solution to that problem (sometimes it isn’t), and committed to the unglamorous work of building it properly. The ones that are burning money are the ones that started with the technology and went looking for a problem to justify it.

The reversibility test

Not all decisions are equal, and the failure to distinguish between reversible and irreversible decisions kills more engineering organizations than bad architecture ever has.

Jeff Bezos described this as “one-way doors” versus “two-way doors.” A one-way door is a decision that’s extremely difficult or impossible to undo: choosing your primary database, committing to a programming language for your core platform, signing a multi-year vendor contract. A two-way door is a decision you can reverse without catastrophic cost: picking a logging framework, choosing a CI/CD tool, selecting an internal communication protocol.

Most organizations treat every decision like a one-way door. They over-analyze two-way door decisions with weeks of evaluation that the decision doesn’t warrant. And they under-analyze one-way door decisions because the team is tired of analysis and just wants to start building.

The practical effect is that organizations are slow where they should be fast and fast where they should be slow. The team spends three weeks evaluating monitoring tools (two-way door) and three days choosing a cloud provider (one-way door, multi-year commitment, seven-figure annual spend). The decision process is proportional to the team’s interest in the topic, not to the stakes.

If you’re making a decision that you’ll have to live with for years, spend the time. Document the alternatives. Quantify the trade-offs where you can. Identify what would have to be true for each option to be the right one. If you’re making a decision you can reverse in a month, make it in a meeting and move on. The time you save on small decisions is time you can spend getting big ones right.

How to actually make better technical decisions

There’s no framework that makes technical decisions easy. Anyone selling you one is lying. But there are practices that make them less bad:

Write it down before you decide. The act of writing forces clarity. If you can’t articulate the problem you’re solving, the alternatives you’ve considered, and the trade-offs you’re accepting, you’re not ready to decide. A one-page decision document takes an hour to write and saves months of misalignment. We use a lightweight version of architecture decision records for this, and the number of times that writing the document changed the decision itself is embarrassing.
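If it helps to make the shape concrete, here’s a minimal sketch of what such a record might capture, expressed as a small Python structure. The field names and comments are illustrative, not a standard ADR schema or our exact template:

```python
from dataclasses import dataclass
from datetime import date

# A minimal sketch of a lightweight decision record.
# Field names are illustrative, not a standard ADR schema.
@dataclass
class DecisionRecord:
    title: str               # e.g. "Adopt managed Postgres for the core platform"
    decided_on: date
    problem: str             # the problem we are actually solving
    alternatives: list[str]  # options seriously considered, including "do nothing"
    trade_offs: str          # what we accept by choosing this option
    risks: list[str]         # what could make this the wrong call
    decision: str            # the choice, stated plainly
    expected_outcome: str    # what success should look like in six to twelve months
    review_on: date          # when we will compare expectation to reality
```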

Separate the decision from the decision-maker’s identity. The most dangerous dynamic in technical decision-making is when the person advocating for an approach has their credibility staked on the outcome. Once a senior engineer has publicly championed Kubernetes, they’re not going to be the one to say “actually, this is more complexity than we need.” Create space for people to change their minds without losing face. Pre-mortems help: before you commit, ask the team to assume it’s twelve months from now and the decision has failed, and to explain what went wrong. The answers will be more honest than anything you’ll hear in a normal review.

Track your decisions and revisit them. Keep a log. Not a heavy process, just a simple document that records what was decided, why, and what you expected to happen. Every six months, look at the decisions you made and compare expected outcomes to actual outcomes. This is how you build organizational judgment over time. Without it, you’re making every decision with the same level of insight you had on day one.
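To make “keep a log and revisit it” concrete, here’s a minimal sketch that reuses the hypothetical record above: a list of decisions and a small helper that surfaces the ones whose review date has arrived. Every detail in the entry is invented for illustration:

```python
from datetime import date

# A minimal sketch of a decision log, reusing the hypothetical DecisionRecord above.
# The entry below is invented for illustration.
decision_log = [
    DecisionRecord(
        title="Adopt managed Postgres for the core platform",
        decided_on=date(2024, 1, 15),
        problem="The self-hosted cluster needs two engineers just to stay healthy",
        alternatives=["Stay self-hosted", "Managed MySQL", "Managed Postgres"],
        trade_offs="Higher monthly cost, less control over low-level tuning",
        risks=["Migration overruns a quarter", "Egress costs grow with traffic"],
        decision="Managed Postgres",
        expected_outcome="Zero database-related pages within six months",
        review_on=date(2024, 7, 15),
    ),
]

def due_for_review(log, today=None):
    """Return decisions whose scheduled review date has arrived."""
    today = today or date.today()
    return [record for record in log if record.review_on <= today]

for record in due_for_review(decision_log):
    print(f"Review due: {record.title}")
    print(f"  Expected: {record.expected_outcome}")
```

The tooling doesn’t matter; a spreadsheet or a folder of markdown files does the same job. What matters is that the expected outcome is written down somewhere you can later compare it against reality.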

Budget for being wrong. No matter how good your process is, some decisions will be wrong. The question is how expensive it is to recover. Build systems that are modular enough to swap components. Avoid vendor lock-in where the switching cost is existential. Design architectures that can evolve. The goal isn’t to make perfect decisions. It’s to make decisions that don’t destroy you when they turn out to be imperfect.
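One way to keep the cost of recovery bounded is to put an interface you own between the system and anything you might have to swap. A minimal sketch, assuming a hypothetical transcription vendor (VendorX and its client are stand-ins, not a real SDK):

```python
from abc import ABC, abstractmethod

# A minimal sketch of keeping a vendor behind a boundary you own.
# VendorX and its client are hypothetical stand-ins, not a real SDK.
class TranscriptionService(ABC):
    """The interface the rest of the system depends on."""

    @abstractmethod
    def transcribe(self, audio_url: str) -> str:
        ...

class VendorXTranscription(TranscriptionService):
    """Adapter for today's vendor. Switching vendors means writing one new
    adapter, not touching every call site in the codebase."""

    def __init__(self, client):
        self._client = client  # the vendor's own SDK object, injected at startup

    def transcribe(self, audio_url: str) -> str:
        # Translate between our interface and whatever the vendor's API expects.
        return self._client.submit_job(audio_url)
```

The pattern itself is ordinary; the point is that the switching cost is contained to one adapter rather than spread across every call site.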

The question to ask yourself

Think about the biggest technical bet your organization has made in the last year. Now ask: if you had to defend that decision (the process, not the outcome) to a skeptical board member, could you? Could you show them the alternatives you evaluated? The risks you identified? The criteria you used?

If the answer is no, the decision might still be right. But you got there by luck, not by judgment. And luck is a terrible engineering strategy.


Steadfast Digital helps engineering leaders make technical decisions they can defend, and live with. If you’re facing a decision that’ll shape the next two years, bring us in before you commit.