
Cognitive Biases That Destroy Algo Trading Strategies

Why Biases Are the Biggest Risk in Algo Trading

Most algo traders lose money not because of bad code or bad data — they lose because of bad thinking. Cognitive and statistical biases quietly corrupt every step of strategy development: how you form ideas, how you test them, and how you interpret results.

The troubling part is that a biased backtest looks exactly like a good one. The equity curve climbs, the Sharpe ratio looks healthy, and the drawdowns seem manageable. You only discover the bias when the strategy is live and bleeding.

This guide walks through the biases that matter most, with concrete examples from Indian equity markets.


1. Overfitting Bias (Curve Fitting)

What it is: Tuning a strategy’s parameters until it performs perfectly on historical data — at the cost of real-world performance.

Suppose you are building a momentum strategy on Nifty 500 stocks. You test lookback periods from 1 month to 24 months, rebalance frequencies from weekly to quarterly, and a dozen different ranking metrics. After hundreds of combinations, you find one that returns 32% CAGR with a Sharpe of 1.8. You publish it.

The problem: you didn’t find a strategy. You found a pattern that happened to exist in your specific dataset. Out of sample, it performs like any random strategy — because it is random.

How to avoid it:

  • Limit the number of parameters you tune. Every free parameter is a degree of freedom you can overfit.
  • Use a walk-forward test: train on 2007–2018, test on 2019–2024. Never look at the test set until you are done tuning.
  • Apply the Deflated Sharpe Ratio to correct for multiple testing.
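The walk-forward idea in the second bullet can be sketched as a strict split of years into non-overlapping train and test windows. The helper below is illustrative (not a library API); the window lengths are assumptions you would set for your own data:

```python
def walk_forward_windows(years, train_len, test_len):
    """Yield (train, test) year blocks that never overlap.

    Parameters are tuned only on each train window; the matching
    test window is touched exactly once, after tuning is frozen.
    """
    windows = []
    start = 0
    while start + train_len + test_len <= len(years):
        train = years[start:start + train_len]
        test = years[start + train_len:start + train_len + test_len]
        windows.append((train, test))
        start += test_len  # roll forward by one test block
    return windows

# Example: 2007-2024, train on 8 years, test on the next 2
windows = walk_forward_windows(list(range(2007, 2025)), train_len=8, test_len=2)
```

The key discipline is procedural, not mathematical: once you have looked at a test window, it is contaminated and cannot be reused for tuning.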

2. Survivorship Bias

What it is: Testing a strategy only on stocks that exist today, ignoring companies that went bankrupt, delisted, or merged.

If you pull the current Nifty 500 constituents and backtest across them going back to 2010, you are testing on the survivors. The stocks that went to zero — Unitech, DHFL, Yes Bank before its near-collapse — are not in your universe. Your backtest has never seen a permanent loss of capital.

The result: every backtest looks better than reality because you have removed the worst outcomes.

How to avoid it:

  • Use point-in-time universe data that captures index constituents as they existed on each historical date, including companies that later delisted or went bankrupt.
  • Be especially careful with small-cap and mid-cap universes, where survivorship bias is most severe.
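A point-in-time universe boils down to storing entry and exit dates for every symbol and filtering by the as-of date. A minimal sketch, with hypothetical symbols and dates standing in for real constituent data:

```python
from datetime import date

# Assumed data shape: one record per index membership spell.
# exit is None for current constituents; exited symbols stay in the table.
membership = [
    {"symbol": "STOCK_A", "entry": date(2009, 1, 1), "exit": None},
    {"symbol": "STOCK_B", "entry": date(2010, 6, 1), "exit": date(2019, 6, 30)},  # later delisted
    {"symbol": "STOCK_C", "entry": date(2015, 3, 1), "exit": date(2020, 3, 31)},
]

def universe_on(as_of):
    """Constituents as they existed on as_of, including later delistings."""
    return sorted(
        m["symbol"] for m in membership
        if m["entry"] <= as_of and (m["exit"] is None or as_of <= m["exit"])
    )
```

A backtest that queries `universe_on(rebalance_date)` at each rebalance sees the delisted names exactly as a live trader would have; pulling today's constituent list sees only the survivors.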

3. Look-Ahead Bias

What it is: Using information in your backtest that would not have been available at the time of the trade.

Examples from Indian markets:

  • Using a company’s full-year earnings to make a trade that supposedly happened in the middle of that year
  • Using adjusted close prices without accounting for the fact that corporate actions (splits, bonuses) are only known after the fact
  • Ranking stocks by their December 31 fundamentals and “buying” on January 1, when those filings are typically published weeks later

Look-ahead bias always flatters a backtest. You are effectively trading with tomorrow’s newspaper.

How to avoid it:

  • Use filing dates, not period-end dates, when incorporating fundamental data
  • Add a reporting lag (typically 45–90 days for Indian quarterly results)
  • Audit your data pipeline for any feature that is computed using future information
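The reporting-lag fix can be sketched as a filter that only admits a fundamental once its assumed publication lag has elapsed. The 60-day lag and the records below are illustrative, not a prescription:

```python
from datetime import date, timedelta

def first_usable_date(period_end, lag_days=60):
    """Earliest date a backtest may act on this fundamental (assumed 60-day lag)."""
    return period_end + timedelta(days=lag_days)

def usable(fundamentals, as_of, lag_days=60):
    """Keep only records whose data would have been published by as_of."""
    return [f for f in fundamentals
            if first_usable_date(f["period_end"], lag_days) <= as_of]

records = [
    {"symbol": "STOCK_A", "period_end": date(2023, 12, 31), "roe": 0.22},
    {"symbol": "STOCK_B", "period_end": date(2024, 3, 31), "roe": 0.18},
]
```

Note that the December 31 record only becomes tradable at the end of February under this assumption, which directly prevents the "rank on December 31, buy on January 1" mistake described above.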

4. Confirmation Bias

What it is: Designing a test to confirm what you already believe, and stopping when you get the result you want.

You have a hunch that high-ROE stocks outperform. You backtest ROE > 20% and it works. You stop there. You never test whether the result holds across different thresholds, different time periods, or different market caps. You never ask whether the effect disappears after adjusting for sector concentration.

How to avoid it:

  • Preregister your hypothesis before looking at the data. Write down exactly what you expect to find and what would falsify it.
  • Test the opposite of your hypothesis. If you believe high ROE outperforms, also test whether low ROE underperforms — and check if the relationship is monotonic.
  • Share your backtest with someone who is actively looking for problems.
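The monotonicity check in the second bullet can be sketched as a quantile-bucket test: rather than a single ROE > 20% cut, bucket stocks by factor value and ask whether mean forward returns rise steadily across buckets. The data here is synthetic and the bucket count is an assumption:

```python
def bucket_means(pairs, n_buckets=5):
    """pairs: (factor_value, forward_return) tuples.

    Sorts by factor value, splits into equal-sized buckets,
    and returns the mean forward return of each bucket.
    """
    ranked = sorted(pairs)
    size = len(ranked) // n_buckets
    return [
        sum(r for _, r in ranked[i * size:(i + 1) * size]) / size
        for i in range(n_buckets)
    ]

def is_monotonic(xs):
    """True if the bucket means never decrease from low to high factor."""
    return all(a <= b for a, b in zip(xs, xs[1:]))
```

A factor that only "works" at one arbitrary threshold, with no gradient across the buckets, is a warning sign that the original hunch was confirmed rather than tested.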

5. Recency Bias

What it is: Overweighting recent market conditions when evaluating a strategy.

A momentum strategy looks brilliant if you built it in 2022–2024, when Indian small-caps and mid-caps ran relentlessly. The same strategy would have shown painful drawdowns in 2018–2019 and the COVID crash of March 2020.

Recency bias is especially dangerous for Indian retail investors because the domestic equity market has been in a structural bull run since 2020, and most retail participants have never managed a strategy through a genuine bear market.

How to avoid it:

  • Test across at least one full market cycle. For Indian equities, that means including 2008, 2011, 2018, and 2020.
  • Check rolling returns: does the strategy work in every 3-year window, or only in specific periods?
  • Ask: “Would I still believe in this strategy if the last two years were removed from the backtest?”
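The rolling-returns check in the second bullet can be sketched as a rolling CAGR over yearly returns; a strategy that only shines in the most recent windows is riding the regime, not an edge. The window length and returns here are illustrative:

```python
def rolling_cagr(yearly_returns, window=3):
    """CAGR over every consecutive `window`-year span of a return series."""
    out = []
    for i in range(len(yearly_returns) - window + 1):
        growth = 1.0
        for r in yearly_returns[i:i + window]:
            growth *= (1 + r)
        out.append(growth ** (1 / window) - 1)
    return out
```

Plotting these window-by-window figures makes it obvious whether performance is broad-based or concentrated in the last two or three years.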

6. Data Mining Bias (p-Hacking)

What it is: Running so many tests that you find statistically significant results purely by chance.

If you test 100 different strategy ideas at a 5% significance level, you expect 5 to appear significant even if none of them actually work. If you then report only the 5 that “worked,” your published research is noise dressed as signal.

This is pervasive in quantitative finance. Most published anomalies have decayed or disappeared entirely out of sample.

How to avoid it:

  • Apply Bonferroni correction or the Benjamini-Hochberg procedure when testing multiple hypotheses.
  • Require economic intuition for why a factor should work, not just that it happened to work.
  • If a strategy only works with very specific parameters and degrades sharply when you perturb them slightly, it is almost certainly mined.
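The Benjamini-Hochberg procedure mentioned above is short enough to sketch directly: sort the p-values, compare each to its rank-scaled threshold, and reject everything up to the largest rank that clears the line. The p-values in the test are synthetic:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return indices of hypotheses rejected at false-discovery rate q.

    Rejects the hypotheses with the k smallest p-values, where k is the
    largest rank such that p_(k) <= k * q / m.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears the BH line
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * q / m:
            k = rank
    return sorted(order[:k])
```

With 100 strategy tests, this replaces the naive "p < 0.05" filter that would wave through roughly five false positives by construction.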

7. Optimism Bias in Cost Assumptions

What it is: Underestimating transaction costs, slippage, and market impact — making a strategy look profitable when it would lose money live.

For Indian retail investors, realistic costs include:

  • Brokerage: 0.01–0.05% per leg (or flat fee per trade)
  • STT: 0.1% on equity delivery sells
  • Exchange charges, GST, SEBI fees: ~0.05% per leg combined
  • Bid-ask spread: 0.05–0.5% depending on liquidity
  • Market impact: Significant for small-caps and large position sizes

A strategy that rebalances monthly in large-cap stocks can absorb these costs. A strategy that rebalances weekly in small-caps almost certainly cannot.

How to avoid it:

  • Model all-in transaction costs conservatively, not optimistically
  • Include a slippage assumption of at least 0.1–0.3% per trade for mid and small-caps
  • Run a sensitivity analysis: at what cost level does the strategy break even?
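The all-in cost arithmetic can be sketched with the ranges listed above. The defaults below deliberately sit at the conservative end of those ranges and are assumptions to be replaced with your own broker's numbers:

```python
def round_trip_cost(brokerage=0.0005, stt_sell=0.001, other=0.0005,
                    half_spread=0.0025, slippage=0.002):
    """Conservative all-in cost of one buy plus one sell, as a fraction of notional.

    Brokerage, exchange/GST/SEBI charges, half the bid-ask spread, and
    slippage apply per leg; STT is charged on the delivery sell only.
    """
    per_leg = brokerage + other + half_spread + slippage
    return 2 * per_leg + stt_sell

def annual_cost_drag(turnover_per_year, cost_per_round_trip):
    """Approximate yearly return given up to costs at a given turnover."""
    return turnover_per_year * cost_per_round_trip
```

With these assumed inputs a round trip costs about 1.2% of notional, so full monthly turnover gives up on the order of 14% a year, which makes the large-cap-monthly versus small-cap-weekly contrast above concrete: the breakeven question is just `annual_cost_drag` versus gross alpha.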

The Common Thread

Every bias on this list shares one root cause: the desire for the strategy to work overcomes the discipline to test whether it actually does.

The antidote is not more data or better code. It is structured skepticism — treating your own backtests as adversarially as you would treat someone else’s.

Before going live with any strategy, ask: “What is the most likely reason this looks good that has nothing to do with a real edge?” If you cannot answer that question, keep looking.