Sample Size & Power Calculator

Calculate sample size for t-tests, ANOVA, and proportion comparisons. Compute statistical power and generate power curves.

Statistical Planning · Power Analysis · Client-Side


Use for
  • Determine required sample size before starting a study (prospective power analysis)
  • Compare statistical power across different design choices (e.g., paired vs. independent samples)
  • Evaluate whether a published study was adequately powered (retrospective power analysis)
  • Generate power curves to visualize how sample size affects detection probability
  • Justify sample size in grant applications, IRB protocols, and pre-registration documents

Don't use for

  • As a post-hoc power analysis on your own completed study — observed power adds no information beyond the p-value
  • When you cannot specify a plausible effect size — garbage in, garbage out
  • For complex designs (repeated measures, mixed models, clustered data) that require simulation-based power analysis

Power Analysis Fundamentals

Statistical power analysis links four quantities:

  • α (alpha) — significance level (Type I error rate), typically 0.05
  • 1 − β (power) — probability of detecting a true effect, typically 0.80
  • Effect size — magnitude of the difference you want to detect
  • n (sample size) — number of observations per group

Given any three of these, the fourth can be computed. The most common use is to fix α, power, and effect size, then solve for the required sample size. This is a *prospective* (a priori) power analysis and should be done before data collection begins.

For a two-sample t-test, the formula is:

n = 2 × ((z_{α/2} + z_β) / d)²

where d is Cohen’s d (standardized mean difference), z_{α/2} is the critical value for the two-sided significance level, and z_β is the z-value corresponding to the desired power.
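This formula is straightforward to evaluate with a standard normal quantile function. A minimal sketch in Python using only the standard library (the calculator itself runs client-side; this illustrates the math, not the tool's implementation, and the function name is ours):

```python
import math
from statistics import NormalDist

def required_n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-group sample size for a two-sided, two-sample comparison,
    via the normal approximation n = 2 * ((z_{alpha/2} + z_beta) / d)^2."""
    z = NormalDist()                    # standard normal
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = z.inv_cdf(power)           # z corresponding to desired power
    n = 2 * ((z_alpha + z_beta) / d) ** 2
    return math.ceil(n)                 # round up to a whole observation

print(required_n_per_group(0.5))  # → 63 per group
```

Note that the normal approximation slightly understates the requirement; an exact calculation based on the noncentral t distribution gives 64 per group for this example.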

Choosing Effect Sizes

The effect size is the most difficult and most important input to a power analysis. Three approaches:

  • Pilot data: Estimate from a small preliminary study. Be cautious — pilot studies typically overestimate effects.
  • Literature review: Use effect sizes reported in similar published studies. Prefer meta-analyses over individual studies.
  • Clinical significance: Define the smallest effect that would be practically meaningful. This is often the best approach for clinical trials.

Cohen’s conventions (small = 0.2, medium = 0.5, large = 0.8) should be used only as a last resort. A “medium” effect in one field may be unrealistically large in another. When in doubt, power for a smaller effect — the cost of over-sampling is usually less than the cost of an inconclusive study.
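The sensitivity to the chosen effect size is easy to demonstrate: because d enters the formula squared, halving the effect size roughly quadruples the required sample. A quick sketch using the same normal-approximation formula (Python standard library only; the helper name is illustrative):

```python
import math
from statistics import NormalDist

def required_n_per_group(d, alpha=0.05, power=0.80):
    # n = 2 * ((z_{alpha/2} + z_beta) / d)^2, rounded up
    z = NormalDist()
    n = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2
    return math.ceil(n)

# Cost per group at alpha = 0.05, power = 0.80 for Cohen's conventions
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    print(f"{label} (d = {d}): n = {required_n_per_group(d)} per group")
# small (d = 0.2): n = 393 per group
# medium (d = 0.5): n = 63 per group
# large (d = 0.8): n = 25 per group
```

A study powered for a "large" effect that actually faces a "small" one is off by more than a factor of fifteen — which is why a defensible effect size matters more than any other input.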

Frequently Asked Questions