Bayes Calculator

Interactive Fagan nomogram for Bayesian post-test probability. Enter pretest probability and likelihood ratio — see the update instantly. Data never leaves your browser.


Try it out

Load example Bayes data to see the full workflow.

Use for

  • Convert pretest probability to post-test probability using a known likelihood ratio
  • Visualize Bayesian diagnostic reasoning on an interactive Fagan nomogram
  • Determine how much a positive or negative test result shifts disease probability
  • Quantify diagnostic information gain in bits using Shannon entropy
  • Compute post-test probability from sensitivity, specificity, and a test result

Don't use for

  • Computing sensitivity, specificity, or likelihood ratios from raw data — use the Diagnostic Test Calculator
  • Finding the optimal threshold for a continuous-score test — use the ROC/AUC Calculator
  • Comparing two measurement methods — use the Method Comparison Analyzer

Bayesian Reasoning in Diagnostics

Bayes' theorem connects prior beliefs to updated beliefs through evidence. In diagnostic testing:

Pretest probability — the probability of disease before the test, based on prevalence, clinical presentation, and prior tests.
Likelihood ratio — how much more likely the test result is in diseased vs non-diseased patients.
Post-test probability — the updated probability of disease after incorporating the test result.

The key insight: a positive test does NOT mean you have the disease. It shifts the probability by a factor determined by the likelihood ratio. A highly specific test (LR+ = 20) applied to a low-prevalence condition (pretest = 1%) still yields a modest post-test probability (~17%).
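To make the update concrete, here is a minimal TypeScript sketch of the odds-based calculation described above. The function names (`postTestProbability`, `likelihoodRatios`) are illustrative only and are not the calculator's actual API.

```typescript
// Minimal sketch of the Bayesian update: probability -> odds -> multiply by LR -> probability.

/** Convert a probability to odds. */
function toOdds(p: number): number {
  return p / (1 - p);
}

/** Convert odds back to a probability. */
function toProbability(odds: number): number {
  return odds / (1 + odds);
}

/** Post-test probability from a pretest probability and a likelihood ratio. */
function postTestProbability(pretest: number, lr: number): number {
  return toProbability(toOdds(pretest) * lr);
}

/** Likelihood ratios derived from sensitivity and specificity. */
function likelihoodRatios(sensitivity: number, specificity: number) {
  return {
    positive: sensitivity / (1 - specificity),
    negative: (1 - sensitivity) / specificity,
  };
}

// Example from the text: LR+ = 20 applied to a 1% pretest probability.
console.log(postTestProbability(0.01, 20)); // ≈ 0.168, i.e. ~17%
```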

Information Theory Perspective

Shannon entropy $H(p) = -p \cdot \log_2(p) - (1-p) \cdot \log_2(1-p)$ measures diagnostic uncertainty in bits. Before testing, uncertainty is H(pretest). After testing, uncertainty drops to H(posttest).

The information gain, the drop in entropy, quantifies how useful the test result was. A test that moves the probability from 50% to 90% removes about 0.53 bits of uncertainty, while one that moves it from 90% to 95% removes only about 0.18 bits: the gain depends on how much uncertainty there was to begin with, not just on the size of the shift.
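A short sketch of that entropy bookkeeping, in the same illustrative style as above (function names are not the tool's API):

```typescript
/** Binary Shannon entropy in bits; H(0) = H(1) = 0 by convention. */
function entropyBits(p: number): number {
  if (p <= 0 || p >= 1) return 0;
  return -p * Math.log2(p) - (1 - p) * Math.log2(1 - p);
}

/** Bits of uncertainty removed by moving from pretest to post-test probability. */
function informationGain(pretest: number, posttest: number): number {
  return entropyBits(pretest) - entropyBits(posttest);
}

console.log(informationGain(0.5, 0.9).toFixed(2));  // ≈ 0.53 bits
console.log(informationGain(0.9, 0.95).toFixed(2)); // ≈ 0.18 bits
```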

This provides a principled way to compare tests: the test with higher expected information gain is more diagnostically useful, regardless of whether it is used for ruling in or ruling out.
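One way to make "expected information gain" concrete is to compute it as the mutual information between the test result and disease status, averaging the post-test entropy over both possible results. The sketch below assumes that definition, reuses `entropyBits` from the previous example, and uses hypothetical sensitivity/specificity values purely for illustration.

```typescript
// Expected information gain of a binary test, assuming it is defined as the
// mutual information between test result and disease status.
// Reuses entropyBits() from the previous sketch.
function expectedInformationGain(
  pretest: number,
  sensitivity: number,
  specificity: number,
): number {
  // Probability of a positive result, over diseased and non-diseased patients.
  const pPos = pretest * sensitivity + (1 - pretest) * (1 - specificity);
  const pNeg = 1 - pPos;

  // Post-test probabilities after each possible result (Bayes' theorem).
  const postPos = (pretest * sensitivity) / pPos;
  const postNeg = (pretest * (1 - sensitivity)) / pNeg;

  // Expected entropy after testing, weighted by how likely each result is.
  const expectedPostEntropy =
    pPos * entropyBits(postPos) + pNeg * entropyBits(postNeg);
  return entropyBits(pretest) - expectedPostEntropy;
}

// Compare two hypothetical tests at a 10% pretest probability.
console.log(expectedInformationGain(0.1, 0.9, 0.9)); // ≈ 0.21 bits (more informative)
console.log(expectedInformationGain(0.1, 0.7, 0.8)); // ≈ 0.07 bits (less informative)
```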

Frequently Asked Questions