The most useful equation a trader will ever see is not a moving average crossover or an option pricing formula. It is a single line of high school algebra written down by an English minister 250 years ago.
It looks like this:
```
P(A | B) = [ P(B | A) × P(A) ] / P(B)
```
This is Bayes' theorem. In one line, it tells you how a rational person should update their belief about anything — a stock, an election, a coin flip, a medical diagnosis — after seeing new evidence.
If you have ever said the words "buy the rumor, sell the news," wondered whether a 90% accurate signal is actually any good, or stared at a Polymarket price and asked "is this right?", you have already been thinking like a Bayesian without knowing it. This guide makes the framework explicit, then shows exactly how it shapes good trading and prediction market decisions.
The Hook: A Test That Looks Right and Isn't
Imagine a disease that affects 1% of the population. There is a test for it that is 99% accurate — meaning if you have the disease, the test is positive 99% of the time, and if you don't, the test is negative 99% of the time.
You take the test. It comes back positive.
What is the probability you actually have the disease?
Most people answer 99%. The math says about 50%.
Here is why. Out of 10,000 people:
- 100 actually have the disease (1%). Of those, 99 test positive.
- 9,900 do not have the disease. Of those, 99 test positive anyway (the 1% false positive rate).
So 198 people get a positive test. Only 99 actually have the disease. 99 / 198 ≈ 50%.
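The counting argument is easy to verify in a few lines, using exactly the numbers from the example:

```python
# Count outcomes for the 1%-prevalence disease with a 99%-accurate test.
population = 10_000
sick = population // 100               # 100 people actually have the disease
healthy = population - sick            # 9,900 do not

true_positives = sick * 99 // 100      # 99 of the sick test positive
false_positives = healthy // 100       # 99 of the healthy test positive anyway

positives = true_positives + false_positives          # 198 positive tests
p_sick_given_positive = true_positives / positives    # 99 / 198

print(f"{p_sick_given_positive:.2f}")  # prints 0.50
```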
The test is not broken. The math is not a trick. The answer feels wrong because most people anchor on the test's accuracy and forget how rare the disease is in the first place.
This is the single most important idea in trading. Most "edges" are tests that look 90% accurate against a 1% event. They produce far more false positives than true wins, which is why most backtested strategies collapse in live trading.
Bayes' theorem is the equation that makes this visible.
The Mental Model (Skip the Math If You Want)
Before any equations, learn the four words. Every Bayesian update is a relationship between exactly four things.
| Term | Plain English | Trading Translation |
|---|---|---|
| Prior — P(A) | Your belief before seeing evidence | Base rate. How often does this normally happen? |
| Likelihood — P(B \| A) | How well the evidence fits the hypothesis | If my thesis is right, how often would I see this signal? |
| Evidence — P(B) | How common the evidence is overall | How often does this signal appear at all, true or false? |
| Posterior — P(A \| B) | Your updated belief after evidence | What should I actually believe now? |
The one-sentence intuition:
New belief = old belief × how surprising the evidence is
If the evidence is something you'd expect to see whether your hypothesis is true or not, it shouldn't change your view much. If the evidence is something you'd only see if your hypothesis is true, it should change your view a lot.
That is the whole game. Everything below is just applying this idea with discipline.
The Bayesian mantra: A signal is only valuable if it is much more likely under your hypothesis than against it. A 90% accurate signal that fires 50% of the time when you're wrong is barely a signal at all.
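To see how weak that kind of signal is, run the mantra's numbers through the update. The 10% prior below is an illustrative assumption:

```python
# The mantra's signal: "90% accurate" when the hypothesis is true, but it
# also fires half the time when the hypothesis is false.
prior = 0.10               # assumed base rate of the hypothesis being true
p_signal_if_right = 0.90   # P(B | A)
p_signal_if_wrong = 0.50   # P(B | not A)

evidence = p_signal_if_right * prior + p_signal_if_wrong * (1 - prior)
posterior = p_signal_if_right * prior / evidence

print(round(posterior, 3))  # the prior barely moves: 0.10 -> 0.167
```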
The Math Primer (For Readers Who Want the Full Picture)
If you want to skip directly to applications, jump to the next section. If you want to actually understand the engine, here it is.
The Formula, Term by Term
```
           P(B | A) × P(A)
P(A | B) = ───────────────
                P(B)
```
- `P(A | B)` — read as "probability of A given B." This is what you want to know.
- `P(B | A)` — probability of seeing evidence B if hypothesis A is true. Comes from your model or historical data.
- `P(A)` — your prior probability for A before seeing the evidence.
- `P(B)` — probability of seeing evidence B at all, regardless of whether A is true.
The denominator P(B) is what trips most people up. It is calculated by summing over all the ways B could happen:
```
P(B) = P(B | A) × P(A) + P(B | not A) × P(not A)
```
In plain English: B can happen because A is true (true positive), or B can happen even though A is false (false positive). You add both contributions.
The Medical Test, Worked Out
Using the numbers from the hook:
- `P(Disease) = 0.01` (prior — 1% base rate)
- `P(Positive | Disease) = 0.99` (likelihood — true positive rate)
- `P(Positive | No Disease) = 0.01` (false positive rate)
- `P(No Disease) = 0.99`
Calculate the evidence:
```
P(Positive) = (0.99 × 0.01) + (0.01 × 0.99) = 0.0099 + 0.0099 = 0.0198
```
Apply Bayes:
```
P(Disease | Positive) = (0.99 × 0.01) / 0.0198 = 0.0099 / 0.0198 ≈ 0.50
```
The 50% answer falls out of the math. The base rate dominates. When the prior is this small, even an excellent test leaves you at roughly coin-flip odds.
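The same arithmetic, wrapped in a small reusable helper:

```python
def bayes_update(prior: float, p_if_true: float, p_if_false: float) -> float:
    """Posterior P(A | B), expanding the evidence P(B) over both hypotheses."""
    evidence = p_if_true * prior + p_if_false * (1 - prior)
    return p_if_true * prior / evidence

# The medical test: 1% prior, 99% true positive rate, 1% false positive rate.
posterior = bayes_update(prior=0.01, p_if_true=0.99, p_if_false=0.01)
print(round(posterior, 2))  # 0.5
```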
This single example is enough to explain why most trading "signals" don't work: traders measure accuracy and ignore base rates.
Sanity check: If your eyes glazed over, just remember the 1% / 99% / 50% example. That single result will inoculate you against 90% of bad statistical reasoning in trading.
Bayesian Thinking in Stocks
Now the practical part. Three applications that change how you actually trade.
Application 1 — Reading Earnings Like a Bayesian
The classic question: "If a company beats earnings, will the stock go up?"
The naive answer is "yes — beats are good news." The Bayesian answer is "what is the base rate, and what is already priced in?"
Step 1 — Prior. What is the base rate of a stock rallying the day after an earnings beat? Historically, across the S&P 500, roughly 55–60% of stocks finish up the next day after a beat — not 90%, not 50%. Already, the answer is closer to a coin flip than most people assume.
Step 2 — Likelihood. A "beat" alone is not very informative because most companies beat. Roughly 75% of S&P 500 companies beat consensus EPS in a typical quarter, because analysts are coached to set a beatable bar. So "company beat" is high-likelihood evidence whether the stock is going up or down — which means it has weak predictive power.
Step 3 — The signal that actually moves the posterior. What strongly differentiates winners from losers after earnings is guidance, not the headline number. A beat with raised guidance shifts the posterior meaningfully. A beat with maintained or lowered guidance often sends the stock down despite the headline.
This is why "buy the rumor, sell the news" is fundamentally a Bayesian observation. By the time the headline hits, the market has already updated on the prior (analyst whisper numbers, pre-earnings drift, sector tone). The headline beat is mostly evidence that was already absorbed. Only the unexpected component — guidance, margin direction, demand commentary — shifts the posterior further.
The takeaway: stop trading on headline beats. Trade on the gap between the headline and the priced-in expectation.
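One way to make the guidance point concrete is a two-step update. All four likelihoods below are invented for illustration, not measured statistics:

```python
def bayes_update(prior, p_if_true, p_if_false):
    evidence = p_if_true * prior + p_if_false * (1 - prior)
    return p_if_true * prior / evidence

prior_up = 0.50  # assumed unconditional base rate of an up day

# A "beat" fires almost as often before down moves as before up moves,
# because roughly 75% of companies beat either way (weak evidence).
after_beat = bayes_update(prior_up, p_if_true=0.78, p_if_false=0.72)

# Raised guidance is assumed far rarer ahead of a falling stock (strong evidence).
after_guidance = bayes_update(after_beat, p_if_true=0.40, p_if_false=0.10)

print(round(after_beat, 2), round(after_guidance, 2))  # 0.52 0.81
```

Chaining the two updates treats the beat and the guidance as conditionally independent signals, which is itself an approximation; the point is the relative size of the two moves, not the exact numbers.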
Example alert: `post_earnings_move > 5%`. Alert when NVDA moves more than 5% after earnings — a signal that guidance, not the headline number, surprised the consensus posterior.
Application 2 — Why Most Technical Signals Fail the Bayesian Test
Take a popular setup: "NVDA RSI drops below 30, buy the bounce."
Frame it as Bayes:
- `A` = "NVDA rallies at least 5% over the next 5 days"
- `B` = "NVDA RSI dropped below 30 today"
To make this a real edge, you need:
- A high `P(B | A)` — when NVDA rallies, does RSI < 30 actually appear beforehand? Often, no. Most rallies happen from neutral or already-overbought conditions.
- A low `P(B | not A)` — when NVDA does not rally, how often does RSI < 30 also fire? Frequently, because RSI < 30 in a downtrend means the stock keeps falling.
The result is that RSI < 30 alone fires plenty of times during real declines, producing false positives that swamp the true positives. The base rate of a 5-day rally in a falling stock is low; multiplying by a high false-positive rate kills the edge.
This is why most chart patterns don't survive forward testing. They have intuitive appeal (the prior feels right) and they show up in winning examples (high P(B | A)), but they also show up just as often in losing examples (high P(B | not A)). Without the second number, the pattern is just confirmation bias.
The fix is to combine signals so that P(B | not A) drops. RSI < 30 plus price above the 200-day moving average plus volume contraction is a much rarer event under failure scenarios — and that lower false positive rate is what creates a genuine posterior shift.
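A sketch of the effect, with invented probabilities: RSI < 30 alone barely moves a 15% base rate, while the rarer combined signal moves it a lot.

```python
def bayes_update(prior, p_if_true, p_if_false):
    evidence = p_if_true * prior + p_if_false * (1 - prior)
    return p_if_true * prior / evidence

base_rate = 0.15  # assumed P(5% rally in 5 days) for a falling stock

# RSI < 30 alone: appears before some rallies, but also all through declines.
p_rsi_alone = bayes_update(base_rate, p_if_true=0.30, p_if_false=0.25)

# RSI < 30 AND price above the 200-day MA AND volume contraction:
# assumed much rarer when the stock keeps falling.
p_combined = bayes_update(base_rate, p_if_true=0.12, p_if_false=0.02)

print(round(p_rsi_alone, 2), round(p_combined, 2))  # 0.17 vs 0.51
```

Notice the combined signal has a *lower* `P(B | A)` than RSI alone; the edge comes entirely from crushing `P(B | not A)`.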
Bayesian strategy design: Don't ask "how often does my signal work?" Ask "how often does my signal fire when I'm wrong?" If both numbers are high, you have no edge — just a popular pattern.
Application 3 — "It's Already Priced In" Is a Bayesian Argument
When traders say "it's priced in," they are stating an efficient-market claim: the current price reflects the market's posterior probability given all public information.
Stated formally: market price ≈ aggregated posterior across all participants.
To beat that posterior, you need at least one of three things:

1. A different prior. You think the base rate is different from what the market believes. Example: you think the base rate of a Fed pause after three rate hikes is higher than the futures market is pricing.
2. Private likelihood information. You have a data point the market does not, one that strongly differentiates outcomes. Example: alternative data on credit card spending suggesting a retailer's quarter is much weaker than guidance implies.
3. A structural mispricing. The market cannot update properly because of constraints — short interest is maxed, options are illiquid, retail flow is overwhelming fundamentals. Example: short squeezes in low-float stocks where price stops reflecting probability and starts reflecting forced flows.
If you don't have at least one of these, you are not trading against the market's posterior — you are guessing in line with it. That is fine for portfolio exposure but not for active edge.
Bayesian Thinking in Prediction Markets
Prediction markets are where Bayes goes from useful framework to literal pricing model. Every contract is a posterior probability traded as a number between 0 and 1.
Application 4 — Polls vs. Prediction Markets
Suppose a national poll has Candidate X at 52%, with a 3-point margin of error. Polymarket has the same candidate priced at 65% to win.
A naive observer says "the market is wrong — the poll says 52%." A Bayesian asks better questions:
- Is the poll the prior, or is it the evidence? Probably evidence. Markets had a prior before the poll dropped, based on previous polls, fundamentals, and incumbency.
- How reliable is this poll vs. others? A single poll is one data point. The market has likely already absorbed the new information by the time you see it.
- What is the base rate of a candidate at 52% in this state actually winning? Historically, polling leads of 0–3 points convert to wins less than 70% of the time once you account for polling error. So 65% may be perfectly reasonable.
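The claim that small polling leads convert to wins well under 70% of the time can be reproduced with a simple error model. The 5-point standard deviation below is an assumption standing in for *total* polling error, not the quoted ±3 sampling margin:

```python
from statistics import NormalDist

def win_probability(lead_points: float, error_sd: float = 5.0) -> float:
    """P(true margin > 0) when true margin ~ Normal(lead, error_sd)."""
    return 1.0 - NormalDist(mu=lead_points, sigma=error_sd).cdf(0.0)

for lead in (0, 1, 2, 3):
    print(lead, round(win_probability(lead), 2))
# A 0-point lead is a pure coin flip and a 3-point lead only ~73%, which
# is why a 65% market price against a narrow poll lead can be fair.
```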
The lesson: prediction market prices are not opinions. They are aggregated posteriors that have already weighed the polls, the news, the fundamentals, and the time to event. To bet against a market price, you need to articulate which of the three edges from the previous section you believe you have.
Application 5 — Cross-Market "Arbitrage"
When the same event prices differently on two markets — Polymarket at 65%, Kalshi at 58% — beginners shout "arbitrage." Bayesians ask why.
Three possibilities, in order of likelihood:
1. Different priors / different participants. Polymarket is global crypto, Kalshi is US-regulated retail. Different distributions of beliefs. Different priors mean different posteriors. Not arbitrage — just two different aggregated views.
2. Different liquidity and slippage. A 7-point gap on a $5,000 book on Kalshi may not even close after fees and slippage. The "free money" is mostly transaction cost.
3. One side is genuinely mispriced. Real, but rare, and usually closed within minutes by sharper participants.
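A quick sketch of possibility 2: buying YES at 58¢ on one venue and NO at 35¢ on the other locks in a $1 payout, but costs can eat the entire 7-cent gap. The fee and slippage figures are illustrative assumptions, not actual venue fee schedules:

```python
# Work in cents so the arithmetic is exact.
yes_cost = 58                  # buy YES on Kalshi at 58c
no_cost = 100 - 65             # buy NO on Polymarket at 35c
payout = 100                   # exactly one leg pays out $1

gross_edge = payout - (yes_cost + no_cost)   # 7c of apparent free money

fees = 2 * 2                   # assumed 2c of fees per leg (illustrative)
slippage = 3                   # assumed 3c of impact filling a thin book
net_edge = gross_edge - fees - slippage

print(gross_edge, net_edge)    # 7 0
```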
The Bayesian frame turns "arbitrage" into "what does the price gap tell me about how each market is updating?" That is a far more useful question for finding edges that actually pay.
Application 6 — The Fed as a Bayesian Update
Fed funds futures are one of the cleanest Bayesian instruments in finance. They literally trade as probabilities — "65% chance of a 25 bp cut at the next meeting."
Before an FOMC meeting, the futures market is the prior. The dot plot and the press conference are the evidence. The post-meeting price is the posterior.
If futures price a 65% chance of a cut, and the Fed cuts but signals "this is the last one," the posterior on a further cut at the following meeting may collapse from 50% to 15% — a massive update on a single piece of evidence (Powell's tone), because that evidence has very low likelihood under the "more cuts coming" hypothesis.
This is why the language of the press conference often moves markets more than the decision itself. The decision was already mostly priced. The language is the surprising evidence.
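The 50% → 15% collapse falls straight out of one update. The two likelihoods for hearing "this is the last one" are illustrative assumptions:

```python
# Prior: 50% chance of a cut at the *following* meeting.
prior = 0.50
p_words_if_more_cuts = 0.10   # you rarely hear "this is the last one" mid-cycle
p_words_if_done = 0.60        # you often hear it when the cycle is over

evidence = p_words_if_more_cuts * prior + p_words_if_done * (1 - prior)
posterior = p_words_if_more_cuts * prior / evidence

print(round(posterior, 2))  # 0.14: one sentence collapses the posterior
```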
Pro tip: When watching the Fed, ignore the rate decision and read the statement diff. The diff is the new evidence. The decision is already in the prior.
Application 7 — Sports, Sharps, and Closing Line Value
A brief detour because it makes the Bayesian framework click.
In sports betting, sharps obsess over "closing line value" (CLV) — whether your bet's price was better than the final price at game time. Why?
Because the closing line is the most refined posterior the market produces. It has absorbed all available information, including late injuries and lineup changes. If you consistently bet at prices that are better than the closing line, you are consistently identifying mispriced posteriors before the market does.
CLV is the same idea as alpha in stocks: did your trade move toward the market's eventual updated belief? If yes, you have edge. If no, you are guessing.
Six Bayesian Mistakes Traders Make
These map directly to the four-term framework. If a trade is going wrong, it is almost always one of these.
1. Base Rate Neglect
You see a "90% win rate" claim and ignore that the underlying event happens 1% of the time. At that base rate, the signal produces as many false positives as true positives — the disease-test trap all over again. Fix: always demand the base rate alongside the win rate.
2. Anchoring on the Prior
Evidence keeps piling up that your thesis is wrong, and you keep saying "the market doesn't see it yet." Sometimes true. Usually denial. Fix: decide in advance what evidence would force you to update, then update when it arrives.
3. Confusing P(A|B) with P(B|A)
"The stock is up 10 days in a row, so it's likely to continue." That confuses P(streak | uptrend) with P(uptrend | streak). Strong uptrends produce streaks; streaks do not equally imply strong uptrends — they often imply mean reversion is overdue. Fix: always ask which direction the conditional runs.
4. Cherry-Picking the Evidence Set
You only count signals after they paid off. Survivorship bias dressed up in statistics. Fix: log every signal in advance, including the ones that fail to fire, so the denominator is honest.
5. Ignoring the Likelihood of False Positives
You measure how often your signal fires when you are right but never how often it fires when you are wrong. Fix: for every backtest, run the same signal on the inverse outcome and compare hit rates.
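The fix can be a few lines over an honest signal log. The log format here is hypothetical: one `(signal_fired, trade_won)` pair per opportunity, including opportunities where the signal did not fire.

```python
# Hypothetical signal log: (signal_fired, trade_won) per opportunity.
log = [
    (True, True), (True, False), (False, True), (True, False),
    (False, False), (True, True), (True, False), (False, True),
]

wins = [fired for fired, won in log if won]
losses = [fired for fired, won in log if not won]

p_fire_given_win = sum(wins) / len(wins)       # estimates P(B | A)
p_fire_given_loss = sum(losses) / len(losses)  # estimates P(B | not A)

print(p_fire_given_win, p_fire_given_loss)  # 0.5 0.75: fires MORE when wrong
```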
6. Treating Public Information as a Private Edge
You read an article on CNBC at 9:31 AM and trade on it at 9:32. The market started updating on that information at 4:00 AM in pre-market. Fix: ask what evidence you have that the market does not already have.
A Bayesian Checklist Before Any Trade
Before you click buy, run through five questions. If you cannot answer at least three with specifics, you do not have a thesis — you have a feeling.
- What is the base rate? How often does this kind of move normally happen for this kind of stock in this kind of regime?
- What is the likelihood? If my thesis is right, how often would I see exactly this evidence?
- What is the false positive rate? If my thesis is wrong, how often would I see this evidence anyway?
- What does the market price imply? What posterior is already baked into the current quote?
- What evidence would make me reverse? Define it now, while you are calm.
This is the trade journal entry that separates investors with a process from gamblers with a story.
Putting It Together
Bayesian thinking is not a magic indicator. It is a discipline. It forces you to stop asking "will this work?" and start asking "what is the rational probability this works given everything I know?"
Three habits transfer directly from the math to the trading desk:
- Quote a base rate before any analysis. Anchor every trade to a probability you would have accepted before seeing the chart.
- Distinguish strong evidence from common evidence. A signal that fires constantly is not a signal — it is noise. Demand `P(B | A) >> P(B | not A)`.
- Update fully when the evidence demands it. Bayes does not reward stubbornness. The whole point of the formula is that beliefs are supposed to change in proportion to the evidence.
Markets are not efficient because every participant is rational. They are efficient because the aggregation of all participants approximates a Bayesian update. To beat that aggregation, you have to be more disciplined about priors, likelihoods, and false positives than the average participant — not louder, not luckier, just more rigorous.
The same applies to prediction markets, only more directly. The price is the posterior. The question is always whether your prior, your evidence, or your reading of the market structure justifies a different number.
That is the entire job.
Track signals, not noise.
Stock Alarm Pro lets you build alerts on the kind of multi-condition signals that actually shift the posterior — not just the patterns that fire constantly. Start with a free account.