Diagnostic Test Calculator

Compute sensitivity, specificity, PPV, NPV, likelihood ratios, DOR, and Shannon entropy from a 2×2 table. Wilson or Clopper-Pearson CIs. Data never leaves your browser.

3 Input Modes • Shannon Entropy • Methods Export

Try it out

Load example diagnostic test data to see the full workflow.

Enter the four cells of the 2×2 contingency table, and choose a CI method (Wilson or Clopper–Pearson):

             Disease +   Disease −
  Test +        TP          FP
  Test −        FN          TN

Use for
  • Evaluate diagnostic test accuracy from a 2×2 contingency table (TP, FP, FN, TN)
  • Compute sensitivity, specificity, PPV, NPV, and likelihood ratios with confidence intervals
  • Quantify diagnostic information gain using Shannon entropy
  • Generate a manuscript-ready methods paragraph for a diagnostic accuracy study
  • Back-calculate a 2×2 table from known sensitivity, specificity, and prevalence
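The back-calculation mode is straightforward arithmetic on expected cell counts. A minimal sketch, assuming a cohort size n (the function name and signature are illustrative, not the calculator's actual interface):

```python
# Back-calculate an expected 2x2 table from sensitivity, specificity,
# and prevalence. Illustrative sketch, not the calculator's own code.

def back_calculate(sens, spec, prevalence, n):
    """Return expected (TP, FP, FN, TN) counts for a cohort of size n."""
    diseased = n * prevalence
    healthy = n * (1 - prevalence)
    tp = diseased * sens        # true positives among the diseased
    fn = diseased * (1 - sens)  # missed cases
    tn = healthy * spec         # correctly ruled out
    fp = healthy * (1 - spec)   # false alarms
    return tp, fp, fn, tn

# A 90%-sensitive, 80%-specific test at 10% prevalence, n = 1000
print(back_calculate(0.90, 0.80, 0.10, 1000))  # TP, FP, FN, TN ≈ 90, 180, 10, 720
```

Counts are expected values and generally non-integer; the calculator's rounding behavior for display is a separate choice.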

Don't use for

  • Continuous-score tests without a fixed threshold — use the ROC/AUC Calculator instead
  • Comparing two measurement methods for agreement — use the Method Comparison Analyzer
  • Sample size planning for a diagnostic study — use the Diagnostic Sample Size Calculator

Diagnostic Accuracy Fundamentals

Diagnostic test evaluation centers on the 2×2 table — the cross-classification of a reference standard (gold standard) and the index test. From this table, we compute:

  • Sensitivity = TP / (TP + FN) — detection rate among the diseased
  • Specificity = TN / (FP + TN) — exclusion rate among the non-diseased
  • PPV = TP / (TP + FP) — precision of a positive result
  • NPV = TN / (FN + TN) — precision of a negative result
  • LR+ = Sens / (1 − Spec) — how much a positive test increases disease odds
  • LR− = (1 − Sens) / Spec — how much a negative test decreases disease odds
  • DOR = LR+ / LR− — overall discriminatory power as a single odds ratio
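These definitions translate directly into code. A minimal Python sketch (names are illustrative; the calculator itself runs in the browser):

```python
# Core accuracy metrics from the four cells of a 2x2 table.
# Illustrative sketch; assumes no zero cells (no continuity correction).

def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)      # detection rate among the diseased
    spec = tn / (fp + tn)      # exclusion rate among the non-diseased
    ppv = tp / (tp + fp)       # precision of a positive result
    npv = tn / (fn + tn)       # precision of a negative result
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    dor = lr_pos / lr_neg      # diagnostic odds ratio
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr+": lr_pos, "lr-": lr_neg, "dor": dor}

m = diagnostic_metrics(tp=90, fp=180, fn=10, tn=720)
print(round(m["sens"], 2), round(m["spec"], 2), round(m["lr+"], 2))
# → 0.9 0.8 4.5
```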

Sensitivity and specificity are properties of the test (stable across populations). PPV and NPV depend on prevalence — the same test can have a PPV of 95% at 50% prevalence and 30% at 2% prevalence.
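The prevalence dependence follows from Bayes' theorem: PPV = (Sens · prev) / (Sens · prev + (1 − Spec) · (1 − prev)). A quick sketch with a hypothetical 95%-sensitive, 95%-specific test reproduces the effect:

```python
# PPV as a function of prevalence via Bayes' theorem.
# The 95%/95% test here is an illustrative example, not a real assay.

def ppv(sens, spec, prev):
    return (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))

for prev in (0.50, 0.02):
    print(f"prevalence {prev:.0%}: PPV = {ppv(0.95, 0.95, prev):.1%}")
# At 50% prevalence PPV is 95%; at 2% it collapses to roughly 28%.
```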

Information Theory in Diagnostics

Shannon entropy provides a principled measure of diagnostic uncertainty. For a binary outcome with probability p:

H(p) = −p · log₂(p) − (1−p) · log₂(1−p)
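A direct translation of H(p), with the usual convention that 0 · log₂(0) = 0:

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a binary outcome with probability p."""
    if p in (0.0, 1.0):  # lim p*log2(p) = 0 as p -> 0
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(entropy(0.5))              # → 1.0  (maximum uncertainty: one full bit)
print(round(entropy(0.02), 3))   # low-prevalence pretest uncertainty
```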

Maximum uncertainty (1 bit) occurs at p = 0.5. The information gain from a test is:

IG = H(pretest) − E[H(posttest)]

where E[H(posttest)] = P(T+) · H(posttest|T+) + P(T−) · H(posttest|T−). A test with high information gain reduces diagnostic uncertainty substantially on average, whichever result it returns. This metric is more nuanced than accuracy: a test can have high accuracy in a low-prevalence setting while providing almost zero information.

Frequently Asked Questions