Here is a test of your investment intuition: An investor buys a stock after thorough research—detailed competitive analysis, conservative valuation, sound reasoning throughout. The stock drops 40% over the next year. Was this a good decision or a bad decision?
Most people answer immediately: bad decision. The stock went down. Obviously wrong.
Now consider the inverse: An investor buys a cryptocurrency because their neighbor mentioned it at a barbecue. No research, no understanding, pure social contagion. The cryptocurrency triples over the next year. Good decision or bad decision?
Again, most people answer immediately: good decision. It worked. Obviously right.
Both answers are wrong. And the error behind them—what professional poker player Annie Duke calls “resulting”—may be the single most destructive cognitive bias in investing. It causes investors to learn the wrong lessons from their experience, reinforcing terrible processes that happened to work and abandoning excellent processes that happened to fail. Over time, resulting creates investors who are confidently wrong about what actually drives success.
Understanding resulting isn’t just intellectually interesting. It’s the difference between becoming a better investor with each passing year and spending decades learning the wrong lessons.
What Resulting Actually Is
Resulting is the tendency to evaluate the quality of a decision based on its outcome rather than the quality of the decision-making process itself. The term comes from poker, where players constantly face situations that clarify the concept.
Consider: You hold pocket aces—the best starting hand in Texas Hold’em. You get all your money in against an opponent who has pocket sevens. This is a massively profitable situation; you’ll win roughly 80% of the time. Your opponent catches a third seven on the river and takes your stack.
Did you make a bad decision?
Any serious poker player will tell you: absolutely not. You got your money in with a massive edge. That’s exactly what winning poker looks like. The fact that you lost this particular hand is irrelevant to the quality of your decision. If you could replay this exact situation a thousand times, you’d win roughly 800 of them. The other 200 losses aren’t mistakes—they’re the price of admission to a game that is, on net, massively profitable.
This is obvious in poker because the probabilities are explicit. We can calculate the exact odds. We know precisely how often pocket aces will beat pocket sevens. The randomness is undeniable and quantifiable.
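Because the odds are explicit, the thousand-replay thought experiment can be sketched directly. A minimal simulation, using the approximate 80% equity figure from the hand above (stake and seed are illustrative, not from the text):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

WIN_PROB = 0.80   # approximate equity of pocket aces vs. pocket sevens
TRIALS = 1_000    # "replay this exact situation a thousand times"
STAKE = 100       # hypothetical amount wagered each time

# Count how many of the thousand replays the aces win.
wins = sum(random.random() < WIN_PROB for _ in range(TRIALS))
losses = TRIALS - wins

# Expected value per hand: win STAKE 80% of the time, lose STAKE 20%.
ev_per_hand = WIN_PROB * STAKE - (1 - WIN_PROB) * STAKE

print(f"wins: {wins}, losses: {losses}")
print(f"expected value per hand: ${ev_per_hand:+.2f}")
```

The roughly 200 losses appear in every run of the simulation, yet the expected value per hand stays firmly positive; the losses are the price of admission, not evidence of error.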
In investing, the randomness is equally present but far less visible. We don’t know the exact probability that a well-researched stock will outperform. We can’t run the simulation a thousand times to see the true distribution. We get exactly one outcome—and that single outcome creates an overwhelming temptation to judge the decision by what happened rather than how it was made.
The Mechanism of Miseducation
When you judge decisions by outcomes, you create a feedback loop that systematically corrupts your learning.
A stock you bought goes up. You conclude your analysis was good. You reinforce whatever process you used—whether that process was rigorous fundamental research or a tip from social media. The positive outcome validates the approach, regardless of whether the approach deserves validation.
A stock you bought goes down. You conclude your analysis was flawed. You question the process, modify your approach, maybe abandon the framework entirely. The negative outcome invalidates the approach, regardless of whether the approach was actually sound.
The problem is that outcomes contain noise—often enormous amounts of noise. A mediocre investment thesis can succeed spectacularly if the right unexpected catalyst materializes. A brilliant investment thesis can fail miserably if the wrong unexpected event occurs. When you learn from outcomes alone, you’re learning from a signal corrupted by randomness. Over time, you drift toward whatever happened to work recently rather than whatever is actually most likely to work going forward.
Michael Mauboussin, in his essential book The Success Equation, places this in a clarifying framework. Activities exist on a continuum from pure luck to pure skill. At the luck end: roulette, the lottery. At the skill end: chess, running races. And in the middle—often much closer to the luck end than we want to admit—investing.
Mauboussin argues that in activities dominated by luck, recent outcomes contain almost no information about skill. In the short run, bad investors look brilliant and brilliant investors look incompetent. Only over long periods, with large sample sizes, does skill become visible through the noise.
The Luck-Skill Continuum
Where does investing fall on this spectrum? Much closer to the luck end than our egos prefer.
Consider the evidence. Most professional fund managers fail to beat their benchmarks over extended periods. Of those who outperform in one period, most fail to repeat in the next. Persistence of returns—the hallmark of a skill-dominated activity—is notoriously weak in investing.
This isn’t because professional investors are unskilled. It’s because markets are competitive, information gets rapidly incorporated into prices, and the underlying businesses face genuine uncertainty about the future. Even excellent analysis gets overwhelmed by factors that are fundamentally unpredictable.
Think about what an investor is actually predicting: future earnings, competitive dynamics years out, management quality under stress, technological change, regulatory shifts, macroeconomic conditions. Each prediction carries substantial uncertainty. Combine them all, and the confidence interval around any investment outcome is enormous.
Philip Tetlock’s landmark research on expert prediction, begun in Expert Political Judgment and extended in Superforecasting, demonstrates that most experts barely beat random chance at predicting complex events. The forecasters who did outperform—the “superforecasters”—shared a particular mindset: they were intensely focused on process over outcome, constantly calibrating their probability estimates, and deeply humble about their ability to predict specific events.
The best predictors, in other words, were the ones who most clearly understood how much luck influenced their outcomes.
What This Means for Learning
If investing sits closer to the luck end of the continuum, the implications for learning are profound.
In high-skill activities, outcomes are excellent teachers. If you lose a chess game, you can analyze exactly where you went wrong. The outcome contains clear information about your performance. Learning from results is efficient and accurate.
In high-luck activities, outcomes are terrible teachers. They contain so much noise that learning from them requires enormous sample sizes—far more than any individual investor will experience in a lifetime. Learning from small samples of noisy outcomes leads reliably to wrong conclusions.
This is the core insight: in investing, you cannot learn primarily from your results. You must learn primarily from your process—evaluating decisions based on whether they were well-reasoned at the time they were made, regardless of what happened afterward.
How Resulting Reinforces Bad Behavior
The destructive power of resulting becomes clear when you trace its effects over time.
The Lucky Fool
An investor buys a speculative stock with no real analysis—maybe a tip, maybe it looked exciting, maybe it just felt right. The stock doubles. The investor concludes they have good instincts for this sort of thing. They buy more speculative stocks the same way. A few work, a few don’t, but on net, they feel validated.
Eventually, they make a concentrated bet on something they don’t understand at all—but their past “successes” have given them confidence. This time the bet goes completely wrong. They lose years of gains in months.
This pattern is extraordinarily common. The initial lucky outcomes create false confidence that encourages increasingly reckless behavior until the luck runs out. The problem is that resulting prevented any accurate feedback along the way. Every lucky win was interpreted as skill, building a foundation of false beliefs that guaranteed eventual disaster.
The Unlucky Genius
Consider the opposite case. An investor develops a thoughtful, well-researched approach. They buy high-quality companies at reasonable prices, within their circle of competence, with appropriate margin of safety. Sound process throughout.
Then they hit a rough patch. Markets shift in unexpected ways, and their holdings underperform for a year or two. Resulting kicks in: they conclude something must be wrong with their approach. They tinker with their process, chase what’s been working recently, abandon the discipline that would have served them over time.
A few years later, their original approach would have delivered excellent returns. But they’d already abandoned it, taught by bad outcomes to distrust a sound process.
This pattern is equally common, though less visible because it doesn’t end in spectacular blowups. It just ends in persistent mediocrity—an investor who never gives any good approach enough time to work because they keep abandoning ship after unlucky results.
Institutional Resulting
If you think resulting only afflicts inexperienced retail investors, consider how institutional capital allocation works.
Fund managers with poor recent performance get fired and replaced with managers who’ve done well recently. Capital flows from underperforming funds to outperforming ones. Consultants recommend managers with strong track records and move away from those with weak ones.
All of this makes intuitive sense—of course you want managers with good results—but it’s resulting on a grand scale. Research consistently shows that chasing recent performance is a losing strategy. Top-performing funds regress to the mean. Flows into recently hot funds tend to arrive just before returns moderate.
The institutional investment industry is structurally designed to learn the wrong lessons from outcomes. Billions of dollars slosh around chasing whatever happened to work recently, with the predictable result that investors systematically buy high and sell low.
Escaping the Resulting Trap
If outcomes are unreliable teachers, what should you learn from instead? The answer is process—but implementing this in practice requires specific tools and disciplines.
Decision Journals
A decision journal is the single most powerful tool for defeating resulting. The concept is simple: before making any significant investment decision, write down your reasoning. Document:
- What you’re buying and why
- What your expected return is and what assumptions drive it
- What could go wrong, and what you’d do if it did
- How confident you are, on a probability scale
- What new information would change your mind
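The fields above can be captured in any format; as one hedged sketch, a structured record might look like this (field names, ticker, and example values are hypothetical, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionJournalEntry:
    """One journal record, written BEFORE the outcome is known."""
    ticker: str
    thesis: str                # what you're buying and why
    expected_return: str       # expected return and driving assumptions
    risks: list[str]           # what could go wrong, and your planned response
    confidence: float          # probability (0-1) the thesis plays out
    kill_criteria: list[str]   # new information that would change your mind
    logged_on: date = field(default_factory=date.today)

# Hypothetical example entry.
entry = DecisionJournalEntry(
    ticker="XYZ",
    thesis="Durable moat, priced below conservative intrinsic value",
    expected_return="~12%/yr over 5 years, assuming margins hold",
    risks=["New entrant undercuts pricing -> trim position"],
    confidence=0.6,
    kill_criteria=["Two consecutive quarters of market-share loss"],
)
print(entry.ticker, entry.confidence)
```

The point of the structure is only that every field is filled in before the outcome exists to bias it.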
The journal creates a contemporaneous record of your decision-making that you can evaluate later—independently of outcomes. When you review past decisions, you can see whether your reasoning was sound at the time, whether you identified the key risks, whether your confidence was calibrated to the evidence.
Annie Duke emphasizes that the journal must be written before the outcome is known. Once you know what happened, your memory of what you were thinking gets corrupted. You’ll remember being more confident in things that worked and more skeptical of things that failed. The written record defeats this hindsight bias.
Over time, a decision journal reveals patterns in your thinking that outcomes alone never would. Maybe you’re systematically overconfident. Maybe you’re better at identifying risks than opportunities. Maybe your best decisions came when you did a particular type of analysis. These insights are invisible in outcome data but crystallized in process records.
Pre-Mortems
A pre-mortem is a technique developed by psychologist Gary Klein and popularized by Daniel Kahneman. The exercise is simple but surprisingly powerful.
Before making an investment, imagine that you’re looking back from one year in the future, and the investment has failed badly. Now write down why it failed. What went wrong? What did you miss? What assumptions proved incorrect?
The pre-mortem works by leveraging prospective hindsight—the human brain’s ability to generate explanations for events it believes have already occurred. When you imagine the failure has already happened, you bypass the optimism bias that normally clouds assessment of future risks.
Well-executed pre-mortems surface risks that standard analysis misses. They force you to consider scenarios you’d otherwise dismiss as unlikely. And they create written records of the risks you knowingly accepted—so if those risks materialize, you can evaluate whether you priced them appropriately rather than concluding you made an “error.”
Base Rate Calibration
Base rates are the underlying frequencies of events in a reference class. What percentage of growth stocks actually deliver the growth the market is pricing in? How often do turnaround stories actually turn around? What fraction of acquisitions create the value management promised?
Studying base rates calibrates expectations about outcomes. If you understand that only 10% of biotech companies in Phase 2 trials ultimately bring a successful drug to market, you’ll interpret any single trial failure differently than if you expected most to succeed. The failure isn’t evidence of bad process—it’s the expected outcome in most cases.
Base rate thinking is the antidote to treating every outcome as informative. When you know the underlying probability distribution, individual outcomes shrink in significance. A loss in a 70-30 situation doesn’t mean you were wrong; it means the 30% scenario happened.
Cultivating base rate awareness requires deliberate study. Read the research on investor performance, on stock returns by category, on the historical distribution of outcomes. Mauboussin’s research is an excellent starting point. The goal is to develop an intuition for how much luck influences outcomes in various situations—so you can weight your conclusions appropriately.
Large Sample Thinking
A single outcome tells you almost nothing. Ten outcomes tell you a little. A hundred outcomes start to reveal patterns. A thousand outcomes reveal your actual edge.
This mathematical reality should shape how you evaluate your own performance. Don’t draw conclusions from any individual investment. Don’t even draw strong conclusions from a year of results. Think in decades, not quarters.
This is psychologically difficult. Humans are wired to detect patterns quickly—it was adaptive in our evolutionary environment. But in high-luck domains, pattern detection from small samples leads systematically to wrong conclusions. You must consciously override your pattern-seeking instincts and demand more data before updating beliefs.
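The small-sample problem can be made concrete with a simulation. Assuming, purely for illustration, an investor with a genuine 2% annual edge and 15% volatility around it (both numbers invented for the sketch), we can ask how often that skilled investor still trails the benchmark over stretches of various lengths:

```python
import random

random.seed(0)

EDGE = 0.02     # assumed true annual edge over benchmark (illustrative)
VOL = 0.15      # assumed annual volatility of that edge (illustrative)
RUNS = 10_000   # simulated careers per horizon

def underperform_rate(years: int) -> float:
    """Fraction of simulated stretches in which a genuinely
    skilled investor still trails the benchmark on average."""
    behind = 0
    for _ in range(RUNS):
        avg = sum(random.gauss(EDGE, VOL) for _ in range(years)) / years
        if avg < 0:
            behind += 1
    return behind / RUNS

for years in (1, 5, 20):
    print(f"{years:>2} yrs: trails benchmark in "
          f"{underperform_rate(years):.0%} of simulations")
```

Under these assumptions, the skilled investor loses to the benchmark in well over a third of single years, and still does so in a meaningful fraction of twenty-year stretches. Real skill is visible only slowly, which is exactly why single outcomes teach so little.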
Practically, this means tracking your decisions systematically over time. Keep records of every investment, your thesis for each, and the outcome. Periodically analyze the aggregate: What percentage of your theses worked? How did your winners compare to your losers in magnitude? Were you better in some types of situations than others?
These aggregate patterns, emerging over many decisions, reveal genuine skill in ways that individual outcomes never can.
The Practice
Escaping resulting requires building habits that interrupt the automatic impulse to judge decisions by outcomes. Here are specific exercises to develop this capacity.
Exercise 1: The Outcome-Blind Review
Take a past investment—ideally one where you remember your original reasoning—and write an assessment of whether it was a good decision. But don’t look at what the stock actually did afterward.
Evaluate solely based on: Was your analysis sound? Did you identify the key risks? Was your confidence appropriate to the evidence? Did you stay within your circle of competence? Did you have adequate margin of safety?
Only after completing this assessment should you look at the outcome. Notice whether the outcome matches your process assessment. Often it won’t—and that gap is where the learning happens.
Exercise 2: Invert the Outcome
When reviewing a successful investment, ask: “What if this had gone down 50% instead? Would I conclude my analysis was flawed?” If yes, your evaluation is resulting-dependent. Sound process should look sound regardless of outcome.
Similarly, for unsuccessful investments: “What if this had gone up 50% instead? Would I conclude I was a genius?” If yes, you’re giving outcomes too much weight in your self-assessment.
The goal is developing an evaluation framework that produces consistent answers regardless of what happened after your decision.
Exercise 3: Probability Journaling
For your next ten investment decisions, assign explicit probabilities to your expected outcomes. “I believe there’s a 60% chance this investment returns more than 10% over the next year, a 25% chance it’s roughly flat, and a 15% chance it loses significant money.”
After the outcomes are known, evaluate your calibration. Were your 60% predictions right about 60% of the time? Did your 15% risks materialize about 15% of the time?
Most investors discover they’re poorly calibrated—usually overconfident. The exercise builds awareness of where your probability estimates drift from reality, which is essential information for improving decision quality.
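A minimal sketch of the calibration check, grouping predictions by their stated probability and comparing against what actually happened (the sample journal here is invented, and ten predictions is far too few for real calibration; treat this as the shape of the computation only):

```python
from collections import defaultdict

# Hypothetical journal of (stated probability, did it happen?) pairs.
predictions = [
    (0.6, True), (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.15, False), (0.15, False), (0.15, True), (0.15, False), (0.15, False),
]

# Group outcomes by the probability the investor stated in advance.
buckets = defaultdict(list)
for stated, happened in predictions:
    buckets[stated].append(happened)

# Compare stated probability against observed frequency per bucket.
for stated, outcomes in sorted(buckets.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%}: happened {observed:.0%} "
          f"of {len(outcomes)} predictions")
```

With enough entries, persistent gaps between stated and observed frequencies reveal the direction of your miscalibration.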
Exercise 4: The Process Scorecard
Create a checklist of process elements you consider essential for a good investment decision:
- Thesis documented in writing
- Key risks identified
- Margin of safety calculated
- Circle of competence confirmed
- Base rates considered
- Pre-mortem completed
- Position sized appropriately
Score each investment on these process dimensions, entirely independently of outcomes. Track both scores over time: process quality and investment returns. Look for the relationship—or lack thereof—between them in the short run, and the emerging correlation over longer periods.
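One hedged sketch of such a scorecard, scoring a decision purely on which checklist items were completed (item names mirror the list above; equal weighting is an assumption, not a recommendation):

```python
# Hypothetical process checklist; each item is pass/fail.
CHECKLIST = [
    "thesis_documented",
    "risks_identified",
    "margin_of_safety",
    "circle_of_competence",
    "base_rates_considered",
    "premortem_done",
    "position_sized",
]

def process_score(completed: set[str]) -> float:
    """Fraction of checklist items satisfied, independent of outcome."""
    return sum(item in completed for item in CHECKLIST) / len(CHECKLIST)

# Example: a decision that covered five of the seven items.
done = {"thesis_documented", "risks_identified", "margin_of_safety",
        "circle_of_competence", "base_rates_considered"}
print(f"process score: {process_score(done):.0%}")  # 5/7, about 71%
```

Tracked alongside returns, the score lets you ask whether better process eventually associates with better results, without letting any single outcome grade the process.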
Exercise 5: Teach the Distinction
Explain resulting to someone else. Walk through examples. Discuss why outcomes are unreliable teachers in high-luck domains. Teaching forces you to articulate concepts clearly and often reveals gaps in your own understanding.
Find a fellow investor interested in these ideas and commit to discussing decision quality rather than results in your conversations. “How was your process on that decision?” becomes more valuable than “Did that stock work out?”
Resulting and Related Frameworks
Understanding resulting deepens your grasp of the other essential investment frameworks, and vice versa.
Margin of Safety as Process Insurance
Margin of safety is fundamentally a process-level concept. When you demand a substantial gap between price and intrinsic value, you’re building error tolerance into your decisions. You’re acknowledging that your analysis might be wrong and your timing might be bad—and structuring the investment so that you can still profit even if both are true.
From a resulting perspective, margin of safety protects against learning the wrong lessons. If you buy with a 40% margin of safety and the stock drops 20%, the price is still below your estimate of intrinsic value—and more importantly, you haven’t necessarily been proven wrong. The margin gave you room to be early, room for normal volatility, room for temporary pessimism. You can evaluate your thesis patiently rather than panicking at every price movement.
Investors who buy with minimal margin learn the wrong lessons constantly. Every fluctuation feels significant. Every dip suggests error. They abandon sound theses too early because they have no cushion against short-term noise.
Circle of Competence and Decision Quality
Your circle of competence determines the quality of your decisions—the clarity of your analysis, the accuracy of your risk assessment, the reliability of your probability estimates. Inside your circle, you can make high-quality decisions that deserve patience. Outside your circle, even decisions that happen to work out were fundamentally low-quality.
Resulting is most dangerous outside your circle of competence. When you don’t deeply understand an investment, you can’t distinguish between an outcome driven by factors you knew about and one driven by factors you never understood. Every outcome is equally mysterious, providing no real information about whether to continue or abandon your approach.
Inside your circle, you can at least attempt to separate signal from noise. “The investment went down because of X, which I correctly anticipated and priced in” is a very different statement than “the investment went down, and I have no idea if this was something I should have seen coming.”
Compound Interest in Process
The power of compounding applies to decision quality, not just returns. Every decision you make is an opportunity to either improve or degrade your decision-making framework. Resulting pushes you toward degradation—learning the wrong lessons, reinforcing bad habits, abandoning good ones.
Process-focused evaluation compounds the other direction. Each decision becomes an opportunity to calibrate your confidence, improve your analysis, identify blind spots. Over decades, this compounds into dramatically better judgment—even if short-term results remain noisy.
The investors who compound most effectively are those who treat every decision as a learning opportunity about process, not just a data point about outcome.
The Deep Insight
Beneath all the practical techniques lies a profound epistemological humility: we cannot fully know what we’re doing.
The future is genuinely uncertain. Our analysis is always incomplete. Our confidence intervals are always too narrow. The world has a way of surprising us with outcomes we never considered.
Given this reality, the best we can do is make decisions that are wise given what we know at the time—and then accept that the outcomes will contain substantial randomness we cannot control.
Resulting is fundamentally a failure to accept this uncertainty. It’s an attempt to impose false clarity on an inherently unclear situation. It treats every outcome as a verdict on our competence when many outcomes are simply luck—good or bad—that says nothing about the quality of our thinking.
The deepest escape from resulting is accepting that you will never fully know whether your decisions were correct. You can know whether they were reasonable. You can know whether you followed sound process. But “correct” implies a certainty about the future that doesn’t exist.
This isn’t discouraging—it’s liberating. When you stop expecting to judge decisions by outcomes, you can focus entirely on what you can control: the quality of your analysis, the integrity of your reasoning, the discipline of your process. These are improvable. These compound over time. These create the conditions for good outcomes without ever guaranteeing them.
And that, ultimately, is what excellent investing looks like: disciplined process, intelligent humility, patience for the long run—and freedom from the resulting that corrupts so many investors into learning exactly the wrong lessons from their experience.
Resulting is the first cognitive bias every investor should master. Once you see the trap, you can’t unsee it—and that awareness alone begins to protect you. For frameworks that improve the quality of decisions you’re evaluating, explore margin of safety and circle of competence. For understanding how good decisions compound over time, see compound interest.