Standard Error Calculator
Our biostatistics calculator computes the standard error of the mean. Enter your measurements to get results along with the formulas used and a step-by-step error analysis.
Formula
SE = SD / sqrt(n)
Where SE is the standard error of the mean, SD is the sample standard deviation, and n is the sample size. The confidence interval is calculated as CI = mean +/- z * SE, where z is the critical value for the desired confidence level (1.96 for 95%). For the sample standard deviation, Bessel's correction is used: SD = sqrt(Sum(xi - mean)^2 / (n-1)).
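The three formulas above can be sketched as small Python functions (the function names here are illustrative, not part of the calculator itself):

```python
import math

def standard_error(sd: float, n: int) -> float:
    """Standard error of the mean: SE = SD / sqrt(n)."""
    return sd / math.sqrt(n)

def confidence_interval(mean: float, sd: float, n: int, z: float = 1.96):
    """CI = mean +/- z * SE; z = 1.96 gives a 95% confidence level."""
    margin = z * standard_error(sd, n)
    return mean - margin, mean + margin

def sample_sd(data) -> float:
    """Sample standard deviation with Bessel's correction (n - 1 denominator)."""
    n = len(data)
    mean = sum(data) / n
    return math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
```

For example, `confidence_interval(130, 18, 50)` reproduces the 95% CI from the blood pressure example below.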
Worked Examples
Example 1: Blood Pressure Study
Problem: A sample of 50 patients has a mean systolic BP of 130 mmHg with SD = 18 mmHg. Calculate the SE and 95% confidence interval.
Solution: SE = SD / sqrt(n) = 18 / sqrt(50) = 18 / 7.071 = 2.546
95% CI = mean +/- 1.96 * SE = 130 +/- 1.96 * 2.546 = 130 +/- 4.99 = (125.01, 134.99)
Margin of error = 4.99 mmHg
Result: SE = 2.546 mmHg | 95% CI: 125.01 - 134.99 mmHg | RSE = 1.96%
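The arithmetic in Example 1 can be checked in a few lines of Python (a minimal sketch of the same steps):

```python
import math

mean, sd, n, z = 130.0, 18.0, 50, 1.96

se = sd / math.sqrt(n)              # 18 / 7.071 = 2.546
margin = z * se                     # 1.96 * 2.546 = 4.99
ci = (mean - margin, mean + margin)
rse = se / mean * 100               # relative standard error, %

print(f"SE={se:.3f}, CI=({ci[0]:.2f}, {ci[1]:.2f}), RSE={rse:.2f}%")
```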
Example 2: Enzyme Activity Measurement
Problem: Enzyme activity measured in 8 replicates: 45, 52, 48, 50, 47, 53, 49, 51 units/mL. Calculate SE and 95% CI.
Solution: Mean = (45+52+48+50+47+53+49+51)/8 = 395/8 = 49.375
SD = sqrt(sum(xi - mean)^2 / (n-1)) = sqrt(49.875/7) = 2.669
SE = 2.669 / sqrt(8) = 2.669 / 2.828 = 0.944
95% CI = 49.375 +/- 1.96 * 0.944 = (47.53, 51.22)
Result: SE = 0.944 units/mL | 95% CI: 47.53 - 51.22 | RSE = 1.91%
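Starting from the raw replicates rather than summary statistics, Example 2 can be reproduced with Python's standard `statistics` module (`statistics.stdev` applies Bessel's correction automatically):

```python
import math
import statistics

data = [45, 52, 48, 50, 47, 53, 49, 51]  # enzyme activity, units/mL

mean = statistics.mean(data)             # 49.375
sd = statistics.stdev(data)              # sample SD, n - 1 denominator
se = sd / math.sqrt(len(data))
lower, upper = mean - 1.96 * se, mean + 1.96 * se
rse = se / mean * 100

print(f"mean={mean}, SD={sd:.3f}, SE={se:.3f}")
print(f"95% CI: ({lower:.2f}, {upper:.2f}), RSE={rse:.2f}%")
```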
Frequently Asked Questions
What is standard error and how is it different from standard deviation?
Standard deviation (SD) measures the spread of individual data points around the mean of a single sample. Standard error (SE) measures the precision of the sample mean as an estimate of the population mean. SE = SD / sqrt(n), so it decreases as sample size increases. Intuitively, SD tells you how variable individual measurements are, while SE tells you how much uncertainty there is in your estimate of the average. For example, if blood pressure readings in a sample have SD = 15 mmHg and n = 100, the SE = 1.5 mmHg, meaning the true population mean is estimated with much more precision than any individual measurement.
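The SD-versus-SE distinction can be demonstrated by simulation: individual measurements spread with SD, while the means of repeated samples spread with SE. A minimal sketch, using hypothetical population parameters close to the blood pressure example (mean 120, SD 15, n = 100):

```python
import random
import statistics

random.seed(0)
POP_MEAN, POP_SD, N = 120, 15, 100

# Draw many samples of size N and record each sample's mean.
sample_means = [
    statistics.mean(random.gauss(POP_MEAN, POP_SD) for _ in range(N))
    for _ in range(2000)
]

# The spread of the sample means approximates SE = 15 / sqrt(100) = 1.5,
# ten times smaller than the SD = 15 spread of individual measurements.
print(statistics.stdev(sample_means))
```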
How does sample size affect standard error?
Standard error decreases with the square root of sample size: SE = SD / sqrt(n). This means quadrupling your sample size halves the SE. Going from n=25 to n=100 (75 extra observations) halves the SE, but halving it again requires going from n=100 to n=400 (300 extra observations). This diminishing return means there is a practical limit to how much increasing sample size improves precision. In biological research, this relationship helps determine optimal sample sizes. If SE needs to be reduced by half, you need 4 times as many observations. This is why very precise estimates in fields like genomics require thousands of samples.
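The square-root relationship is easy to tabulate. A short sketch using the SD = 18 mmHg from Example 1:

```python
import math

sd = 18.0  # mmHg, from the blood pressure example
ses = {n: sd / math.sqrt(n) for n in (25, 100, 400, 1600)}
for n, se in ses.items():
    print(f"n={n:5d}  SE={se:.2f}")
# each quadrupling of n halves the SE: 3.60 -> 1.80 -> 0.90 -> 0.45
```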
What is the relationship between standard error and confidence intervals?
Confidence intervals are constructed using the standard error: CI = mean +/- z * SE, where z depends on the confidence level (1.96 for 95%, 2.576 for 99%). The SE directly determines the width of the confidence interval. A smaller SE (from larger samples or less variable data) produces narrower, more precise confidence intervals. For small samples (n < 30), t-values should be used instead of z-values, which produce slightly wider intervals to account for the additional uncertainty in estimating the population SD from small samples.
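The z critical values quoted above can be computed directly with Python's standard library (the `z_critical` helper name is mine, not a library function):

```python
from statistics import NormalDist

def z_critical(confidence: float) -> float:
    """Two-sided z critical value for a given confidence level."""
    return NormalDist().inv_cdf(0.5 + confidence / 2)

print(f"{z_critical(0.95):.3f}")  # 1.960
print(f"{z_critical(0.99):.3f}")  # 2.576
```

For small samples, the t critical value can be obtained from SciPy if it is available (e.g. `scipy.stats.t.ppf(0.975, df=n-1)`), and it is always larger than the corresponding z value, giving the wider intervals described above.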
When should I report standard error vs standard deviation?
Report SD when describing the variability of individual measurements within your sample. This is appropriate in descriptive statistics and when the reader needs to understand the data distribution. Report SE when making inferences about the population mean, such as in error bars on graphs showing mean comparisons, or when reporting the precision of an estimate. A common convention in biomedical journals is to report mean +/- SD for descriptive purposes and mean +/- SE (or confidence intervals) for inferential statistics. Always clearly label which measure you are using, as confusion between SD and SE is one of the most common statistical errors in published research.
What is relative standard error and when is it useful?
Relative standard error (RSE) expresses the SE as a percentage of the mean: RSE = (SE / mean) * 100. It provides a scale-free measure of precision, making it useful for comparing the reliability of estimates across different measurements or studies. An RSE below 25% is generally considered reliable; 25-50% should be interpreted with caution; above 50% the estimate may be too unreliable to use. Government statistical agencies commonly use RSE thresholds to determine whether to publish estimates. In biology, RSE is useful for comparing measurement precision across different assays, species, or experimental conditions.
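The RSE formula and the reliability thresholds quoted above can be sketched as (function names are illustrative):

```python
def relative_standard_error(se: float, mean: float) -> float:
    """RSE = (SE / mean) * 100, a scale-free measure of precision."""
    return se / mean * 100

def reliability(rse: float) -> str:
    """Thresholds from the text: <25% reliable, 25-50% caution, >50% unreliable."""
    if rse < 25:
        return "reliable"
    if rse <= 50:
        return "interpret with caution"
    return "too unreliable to use"

# Blood pressure example: SE = 2.546, mean = 130
rse = relative_standard_error(2.546, 130)
print(f"RSE = {rse:.2f}% -> {reliability(rse)}")
```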
What formula does Standard Error Calculator use?
The calculator uses SE = SD / sqrt(n), the standard definition of the standard error of the mean, as described in the Formula section above. If you need a specific reference or citation, the References section provides links to authoritative sources.