Bayesian Reasoning in Diagnostics
Bayes' theorem connects prior beliefs to updated beliefs through evidence. In diagnostic testing:
Pretest probability — the probability of disease before the test, based on prevalence, clinical presentation, and prior tests.
Likelihood ratio — how much more likely the test result is in diseased vs non-diseased patients.
Post-test probability — the updated probability of disease after incorporating the test result.
The key insight: a positive test does NOT mean you have the disease. It multiplies the pretest odds by the likelihood ratio; converting the resulting odds back to a probability gives the post-test probability. Even a highly specific test (LR+ = 20) applied to a low-prevalence condition (pretest = 1%) yields only a modest post-test probability (~17%).
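The odds-form calculation above can be sketched in a few lines of Python (the function name is mine, not from the text):

```python
def posttest_probability(pretest, lr):
    """Update a pretest probability with a likelihood ratio via the
    odds form of Bayes' theorem: posttest odds = pretest odds * LR."""
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Example from the text: pretest = 1%, LR+ = 20
print(f"{posttest_probability(0.01, 20):.3f}")  # ≈ 0.168, i.e. ~17%
```

The intermediate step makes the point concrete: 1% corresponds to odds of 1:99, the test multiplies this to roughly 20:99, and converting back gives about 17%, not 95%.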
Information Theory Perspective
Shannon entropy H(p) = −p·log₂(p) − (1−p)·log₂(1−p) measures diagnostic uncertainty in bits. Before testing, uncertainty is H(pretest). After testing, uncertainty drops to H(posttest).
The information gain — the drop in entropy — quantifies how useful the test is. A test that moves probability from 50% to 90% reduces uncertainty by about 0.53 bits, while one that moves it from 30% to 70% reduces it by zero bits (the entropy at 30% and 70% is identical), even though both shift the probability by the same 40 percentage points.
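A minimal sketch of the entropy calculation, assuming the binary-outcome formula given above (function names are mine):

```python
import math

def entropy_bits(p):
    """Shannon entropy of a binary outcome with probability p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def info_gain(pretest, posttest):
    """Drop in diagnostic uncertainty from pretest to post-test, in bits."""
    return entropy_bits(pretest) - entropy_bits(posttest)

print(round(info_gain(0.5, 0.9), 3))  # ≈ 0.531
print(round(info_gain(0.3, 0.7), 3))  # ≈ 0.0 (entropy is symmetric about 0.5)
```

Note that the gain depends only on the entropies at the endpoints, which is why a large probability shift near 50% can carry no information at all.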
This provides a principled way to compare tests: average the entropy drop over the possible results, weighting each by how likely it is, to obtain the expected information gain. The test with the higher expected gain is more diagnostically useful, regardless of whether it is used for ruling in or ruling out.
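The expected information gain can be sketched as below, assuming a binary test characterized by sensitivity and specificity (the parameter values in the example are illustrative, not from the text):

```python
import math

def entropy_bits(p):
    """Shannon entropy of a binary outcome with probability p, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(prevalence, sensitivity, specificity):
    """Expected drop in diagnostic entropy, averaged over positive and
    negative results weighted by their probabilities."""
    p_pos = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    post_pos = prevalence * sensitivity / p_pos            # P(disease | positive)
    post_neg = prevalence * (1 - sensitivity) / (1 - p_pos)  # P(disease | negative)
    expected_posterior_entropy = (p_pos * entropy_bits(post_pos)
                                  + (1 - p_pos) * entropy_bits(post_neg))
    return entropy_bits(prevalence) - expected_posterior_entropy

# Hypothetical comparison at 10% prevalence: a 95%/95% test vs a 70%/70% test
print(expected_info_gain(0.10, 0.95, 0.95) > expected_info_gain(0.10, 0.70, 0.70))
```

This quantity is the mutual information between the test result and disease status, which is why it captures both ruling in and ruling out in a single number.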