Precision: The Closeness of Repeated Measurements
What Does "Closeness" Really Mean?
Imagine you are shooting arrows at a target. If all your arrows land very close together in one area, even if that area is not the bullseye, your shots are precise. In science, precision works the same way. It is not about being correct, but about being consistent. When a scientist measures the mass of a coin five times and gets 5.01 g, 5.02 g, 5.00 g, 5.01 g, and 5.02 g, those measurements are very close to each other. We say the measurement process has high precision. The closeness of these repeated results is the core idea.
This concept applies everywhere. A baker who measures flour for a recipe needs precise cup measurements to ensure every batch of cookies tastes the same. A robot on an assembly line must place screws in precisely the same spot every time. Precision is about repeatability and minimizing the scatter in your results.
Precision vs. Accuracy: A Crucial Team
Precision is often confused with accuracy, but they are different teammates on the same squad. Accuracy refers to how close a measurement is to the true or accepted value. Precision refers to how close the measurements are to each other. You can be precise without being accurate, and accurate without being precise.
| Scenario (Archery Target) | Precision (Closeness to each other) | Accuracy (Closeness to bullseye) |
|---|---|---|
| All arrows hit the bullseye. | High (they are all in the same small area) | High |
| All arrows are clustered in the top-left corner, far from the bullseye. | High | Low |
| Arrows are scattered randomly all over the target, but their average position is near the bullseye. | Low | High (on average) |
| Arrows are scattered widely with no pattern. | Low | Low |
In a lab, a scale might be poorly calibrated (low accuracy) but still give very consistent (high precision) wrong numbers. Fixing the calibration would then make it both accurate and precise. High precision is often the first step toward high accuracy because it means your process is under control.
How Do We Measure Precision?
We cannot just say "these measurements look close." Scientists use numbers to quantify precision. Let's look at three common ways, from simple to more advanced.
1. Range: The Simplest Indicator
The range is the difference between the highest and lowest value in a set of measurements. A smaller range generally indicates higher precision.
Example: Two students measure the length of a desk five times each (in cm).
Student A: 120.1, 120.3, 120.0, 120.2, 120.2
Student B: 119.5, 120.9, 120.0, 121.1, 119.8
Range for A: $120.3 - 120.0 = 0.3$ cm
Range for B: $121.1 - 119.5 = 1.6$ cm
Student A's measurements have a smaller range, showing higher precision.
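The two range calculations above can be checked with a short Python snippet:

```python
# Two students' repeated measurements of the same desk (cm), from the example.
student_a = [120.1, 120.3, 120.0, 120.2, 120.2]
student_b = [119.5, 120.9, 120.0, 121.1, 119.8]

def value_range(data):
    """Range: difference between the largest and smallest measurement."""
    return max(data) - min(data)

print(round(value_range(student_a), 1))  # 0.3
print(round(value_range(student_b), 1))  # 1.6
```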
2. Average Deviation: Considering All Points
Range only uses two values. Average deviation uses all data points. It is the average of how far each measurement is from the mean (average) of all measurements.
- Find the mean: $\bar{x} = \frac{x_1 + x_2 + ... + x_n}{n}$
- Find the deviation of each point from the mean: $|x_i - \bar{x}|$ (the vertical bars mean absolute value, ignore negative signs).
- Average those deviations.
Using Student A's data:
Mean: $\bar{x} = \frac{120.1+120.3+120.0+120.2+120.2}{5} = 120.16$ cm
Deviations: $|120.1-120.16|=0.06$, $|120.3-120.16|=0.14$, $|120.0-120.16|=0.16$, $|120.2-120.16|=0.04$, $|120.2-120.16|=0.04$
Average Deviation: $\frac{0.06+0.14+0.16+0.04+0.04}{5} = 0.088$ cm
A smaller average deviation means higher precision.
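The average-deviation steps for Student A can be reproduced in a few lines of Python:

```python
# Student A's five length measurements (cm), from the example.
student_a = [120.1, 120.3, 120.0, 120.2, 120.2]

mean = sum(student_a) / len(student_a)            # step 1: the mean (120.16)
deviations = [abs(x - mean) for x in student_a]   # step 2: absolute deviations
avg_dev = sum(deviations) / len(deviations)       # step 3: average them

print(round(mean, 2))     # 120.16
print(round(avg_dev, 3))  # 0.088
```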
3. Standard Deviation: The Gold Standard
The most common and powerful tool is the standard deviation (often written as $s$ for a sample). It is similar to average deviation but squares the deviations first, which gives more weight to larger differences. A smaller standard deviation means higher precision.
$s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}}$
Where $x_i$ is each measurement, $\bar{x}$ is the mean, $n$ is the number of measurements, and $\sum$ means "sum of."
Step-by-step for Student A's data:
1. Mean $\bar{x} = 120.16$ (as before).
2. Calculate squared deviations:
$(120.1-120.16)^2 = (-0.06)^2 = 0.0036$
$(120.3-120.16)^2 = (0.14)^2 = 0.0196$
$(120.0-120.16)^2 = (-0.16)^2 = 0.0256$
$(120.2-120.16)^2 = (0.04)^2 = 0.0016$
$(120.2-120.16)^2 = (0.04)^2 = 0.0016$
3. Sum of squared deviations: $0.0036+0.0196+0.0256+0.0016+0.0016 = 0.052$
4. Divide by $(n-1)$, which is $(5-1)=4$: $0.052 / 4 = 0.013$
5. Take the square root: $s = \sqrt{0.013} \approx 0.114$ cm
For Student B, the same steps give a standard deviation of about $0.70$ cm, roughly six times larger than Student A's, confirming the lower precision.
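Python's standard library can verify both values; `statistics.stdev` implements the same sample formula with $n-1$ used above:

```python
import statistics

# The two students' desk measurements (cm), from the example.
student_a = [120.1, 120.3, 120.0, 120.2, 120.2]
student_b = [119.5, 120.9, 120.0, 121.1, 119.8]

# statistics.stdev divides by (n - 1), matching the sample formula in the text.
print(round(statistics.stdev(student_a), 3))  # 0.114
print(round(statistics.stdev(student_b), 3))  # 0.702
```

The smaller the result, the tighter the cluster of measurements, exactly as the hand calculation showed.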
Sources of Imprecision: What Causes Scatter?
Why don't repeated measurements give the exact same number? Imprecision comes from random errors. These are unpredictable, small variations that affect each measurement differently. They are not mistakes, but inevitable noise.
- Human Factors: Slightly different placement of a ruler, tiny variations in starting/stopping a stopwatch, or reading a scale from a slightly different angle each time.
- Instrument Limitations: The smallest marking on a tool (its resolution). A ruler marked only in millimeters cannot precisely measure differences smaller than that.
- Environmental Fluctuations: Tiny changes in temperature causing materials to expand/contract, small vibrations, air currents affecting a balance, or electrical noise in digital sensors.
- Inherent Variability in the Sample: If you measure the thickness of five different leaves from the same tree, they will naturally vary.
Good experimental design aims to reduce random errors (e.g., using better tools, controlling the environment) and always accounts for them by taking multiple measurements.
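To see how random error produces scatter, here is a small simulated illustration; the "true length" and the two noise levels are invented for the sketch, not taken from any real instrument:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

TRUE_LENGTH = 120.0  # hypothetical true desk length (cm)

def measure(noise_sd):
    """One simulated reading: the true value plus random Gaussian noise."""
    return TRUE_LENGTH + random.gauss(0, noise_sd)

careful = [measure(0.1) for _ in range(100)]   # good tool, controlled room
sloppy  = [measure(0.8) for _ in range(100)]   # coarse tool, shaky hand

# The standard deviation of each set quantifies its precision.
print(round(statistics.stdev(careful), 2))  # small spread: high precision
print(round(statistics.stdev(sloppy), 2))   # large spread: low precision
```

Reducing the noise (better tools, a controlled environment) shrinks the standard deviation, which is exactly what "improving precision" means numerically.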
Precision in Action: From Labs to Smartphones
Precision is not just for science class. It is the backbone of modern technology and quality control.
1. Pharmaceutical Manufacturing
When a company makes pills, each pill must contain a precise amount of active medicine. Imagine a batch where the dose should be 100 mg. High-precision machines ensure every pill has very close to 100 mg (e.g., 99.9 mg, 100.1 mg, 100.0 mg). Low precision would mean some pills have 95 mg and others 105 mg, which is dangerous. Precision saves lives here.
2. Sports Analytics
In basketball, a player's free-throw percentage is a measure of precision. A 90% shooter is highly precise—their shooting form and release are so consistent that the ball goes in the basket 9 times out of 10 under identical conditions. The "closeness of repeated measurements" here is the closeness of each shot's trajectory to a perfect, repeatable arc.
3. Global Positioning System (GPS)
Your phone's GPS determines your location by precisely measuring the time it takes for signals to travel from multiple satellites. The precision of those time measurements (in nanoseconds!) directly affects the precision of your reported location. High-precision timing means your map dot is within meters of your true location. Low precision would place you on the wrong street.
A Simple At-Home Experiment
Task: Measure the volume of a single drop of water from a faucet using a teaspoon.
Method: Carefully count 20 drops into a teaspoon. Use a graduated dropper or syringe to measure the total collected water in milliliters (mL). Divide that volume by 20 to get the volume per drop.
Repeat: Do this entire process 5 times.
Analyze: Calculate the range and average of your five results for the "volume per drop." Is your process precise? The scatter in your results will show the random errors from counting drops, reading the measurement tool, and natural variation in drop size.
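The analysis step can be sketched in Python; the five trial results below are made up for illustration, so substitute your own numbers:

```python
# Hypothetical results from five trials: volume per drop (mL). Yours will differ.
per_drop = [0.051, 0.049, 0.052, 0.050, 0.048]

mean = sum(per_drop) / len(per_drop)
spread = max(per_drop) - min(per_drop)  # the range of the five results

print(f"mean volume per drop: {mean:.3f} mL")
print(f"range: {spread:.3f} mL")
```

A small range relative to the mean suggests your dropping-and-measuring process is precise; a large one points to the random errors listed above.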
Important Questions
Can a measurement be precise but not accurate?
Yes, absolutely. A common example is a kitchen scale that has not been "zeroed" or calibrated. It might always add an extra 5 grams to every measurement. If you weigh the same apple three times, you might get 155 g, 155 g, and 156 g—very precise results that are close together. However, if the apple's true mass is 150 g, all the measurements are inaccurate. The precision is high, but a systematic error (the 5 g offset) ruins the accuracy.
Why report the standard deviation along with the mean?
Reporting only the mean gives a single "best estimate" of the measured quantity. However, without knowing the precision, we cannot judge how trustworthy that mean is. A mean of 10.5 cm with a standard deviation of 0.1 cm tells us the measurements were very consistent, and 10.5 cm is a reliable value. A mean of 10.5 cm with a standard deviation of 2.0 cm tells us the measurements were all over the place, so the mean is much less reliable. The standard deviation provides context and shows the quality of the measurement process.
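This point can be demonstrated with two invented data sets that share the same mean but differ wildly in spread:

```python
import statistics

# Two hypothetical sets of length measurements (cm) with identical means.
consistent = [10.4, 10.5, 10.5, 10.6, 10.5]
scattered  = [8.5, 12.5, 9.0, 12.0, 10.5]

# Both means round to 10.5 cm...
print(round(statistics.mean(consistent), 1), round(statistics.mean(scattered), 1))

# ...but the standard deviations tell very different stories.
print(round(statistics.stdev(consistent), 2))  # tight cluster
print(round(statistics.stdev(scattered), 2))   # wide scatter
```

Seen only as means, the two data sets look identical; the standard deviation is what reveals that one measurement process is trustworthy and the other is not.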
Is higher precision always better?
Not always. Higher precision often requires more expensive instruments, more time, and more controlled conditions. For many everyday tasks, "good enough" precision is sufficient. Building a doghouse does not require the laser-guided precision needed to build a satellite. Pursuing extreme precision can be wasteful if it does not add practical value. The goal is to have precision that is fit for purpose—appropriate for the task at hand.
Precision, defined as the closeness of repeated measurements, is a cornerstone of reliable science and engineering. It is quantified by the scatter or consistency in a set of data, best described by tools like the range and standard deviation. Understanding precision allows us to distinguish it from accuracy, identify sources of random error, and appreciate its critical role in everything from safe medicine and fair sports to the technology in our pockets. By taking multiple measurements and analyzing their precision, we build confidence in our results and make better, more informed decisions in both the lab and the wider world.
Footnotes
1 GPS (Global Positioning System): A satellite-based navigation system that provides location and time information anywhere on Earth.
2 Random Error: Unpredictable variations in measurement that cause readings to be scattered around the true value. They affect precision.
3 Systematic Error: A consistent offset or flaw in the measurement process that causes all readings to be inaccurate in the same way. It affects accuracy.
4 Standard Deviation ($s$ or $\sigma$): A statistical measure of the amount of variation or dispersion in a set of values. A low standard deviation indicates high precision.
