A very high-level online reference exists at: http://www.capgo.com/Resources/Measurem ... heory.html
Introduction to measurement theory
Measurement is the process of associating numbers with physical quantities and phenomena. It is accomplished by comparing a measured value with some known quantity (standard) of the same kind. The subject has become of vital importance in science, engineering and much everyday activity.
While measurement theory began with the Greeks in the 4th century BC, the first useful work appeared in the 18th century, when the English mathematician Thomas Simpson wrote on observation error, perhaps the most important single aspect of measurement theory.
Practically all measurements of continuous quantities involve errors. Understanding the nature and sources of these errors can help reduce their impact and, in many instances, prevent the drawing of incorrect conclusions.
In earlier times it was thought that errors in measurement could be eliminated by improvements in technique and equipment; however, most scientists now accept that this is not the case. Today, nearly all scientific and engineering results are routinely reported with likely error bounds. The types of errors that must be understood include instrumental errors, systematic errors, random errors, sampling errors and indirect errors.
Systematic errors
An error that can be predicted, and hence eventually removed from the data, is a systematic error. Systematic errors may change with time, so it is important that sufficient reference data be collected along with the data set to allow the systematic errors to be quantified and subtracted. Systematic error sources include instrumental errors, sensor placement errors and indirect errors.
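The reference-data approach can be sketched in a few lines of Python (the reference readings, the 100.0-unit standard and the data values below are all hypothetical):

```python
import statistics

# Hypothetical readings of a known 100.0-unit reference standard,
# collected alongside the experimental data set.
reference_readings = [100.4, 100.5, 100.3, 100.5, 100.4]
data = [12.1, 47.8, 83.2]  # hypothetical measured values

# The mean deviation from the standard estimates the systematic error.
offset = statistics.mean(reference_readings) - 100.0

# Subtracting the estimated offset corrects the data set.
corrected = [x - offset for x in data]
```

If the systematic error drifts with time, the same idea applies with reference readings interleaved through the run and an offset interpolated per sample.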
Instrumental errors
Examples of systematic measuring-equipment errors include calibration errors, input zero drift and gain drift. Measuring equipment can also introduce nonsystematic errors. Instrument errors are considered in more detail in the Measurement Methods pages.
Sensor placement errors
An often overlooked systematic error source is the location of the sensor. Errors can be caused by gradients in the measured parameter or by the impact of other parameters on the sensor. For example, in precision air temperature measurement, temperature gradients are likely to exist, and radiant energy may be heating the sensor directly, giving an erroneous reading - so just what is being measured?
Indirect errors
These are associated with calibration and conversions. Generally these errors are small, but they can become significant in some types of measurement, for example light intensity.
Nonsystematic errors
A nonsystematic error is one that cannot be predicted, due to the randomness of its nature. Nonsystematic errors limit the ultimate accuracy of a measurement process by a masking effect that leads to information loss. They include quantizing errors, rounding and truncation errors, sampling errors, random or noise errors and sensor cross sensitivity errors.
Quantizing error
All measuring equipment has a resolution limit; input variations below it cannot be detected or measured, leading to unrecoverable information loss. In systems with evenly spaced quantization boundaries, quantizing errors can be reduced by adding noise (dither) to the input and averaging many samples.
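A minimal sketch of this dithering technique, assuming a hypothetical instrument with a quantization step of 1.0 and a true input of 0.3:

```python
import random

random.seed(42)  # reproducible illustration

STEP = 1.0        # quantization step of a hypothetical instrument
true_value = 0.3  # input detail lying below the resolution limit

def quantize(x, step=STEP):
    """Round the input to the nearest quantization boundary."""
    return round(x / step) * step

# Without dither, every reading is identical: the 0.3 is lost.
plain = quantize(true_value)  # always 0.0

# With uniform noise spanning one step added before quantization,
# the average of many readings recovers the sub-resolution value.
n = 100_000
dithered = sum(quantize(true_value + random.uniform(-STEP / 2, STEP / 2))
               for _ in range(n)) / n
# dithered is now close to 0.3
```

The recovered value improves with the number of samples averaged, which is exactly the trade of bandwidth for resolution the text describes.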
Rounding and truncation errors
In processing the measuring system's readings, limited calculation precision (the number of significant digits carried) can compromise results.
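A classic illustration is the one-pass variance formula, which subtracts two nearly equal large numbers; with the hypothetical readings below, riding on a large offset, it collapses entirely in double precision while the two-pass formula survives:

```python
# Three hypothetical readings riding on a large offset.
xs = [1e8 + 1.0, 1e8 + 2.0, 1e8 + 3.0]
n = len(xs)
mean = sum(xs) / n

# One-pass formula: the difference of two nearly equal large numbers
# cancels catastrophically in double precision.
naive_var = sum(x * x for x in xs) / n - mean * mean

# Two-pass formula: deviations are computed first, so precision survives.
two_pass_var = sum((x - mean) ** 2 for x in xs) / n
# two_pass_var is 2/3; naive_var collapses to 0.0
```

The data are unchanged in both cases; only the order of operations, and hence the rounding behaviour, differs.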
Sampling errors
The sampling frequency and sample window time can impact accuracy, especially for changing or noisy quantities.
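Aliasing is one consequence of sampling a changing quantity too slowly; a sketch, with an illustrative 9 Hz signal and 10 Hz sampling rate:

```python
import math

FS = 10.0  # hypothetical sampling rate in Hz (Nyquist limit 5 Hz)

def sample(freq_hz, n_samples, fs=FS):
    """Sample a unit cosine of the given frequency."""
    return [math.cos(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

# A 9 Hz signal sampled at 10 Hz yields the same readings as a 1 Hz
# signal: the out-of-band component is aliased and misidentified.
samples_9hz = sample(9.0, 10)
samples_1hz = sample(1.0, 10)
# the two sample sets agree to floating-point precision
```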
Random or noise errors
Random noise is always present in measurement and is sometimes the dominant source of error. Depending on the noise spectrum, the noise error can generally be reduced by averaging many readings.
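A sketch of the averaging effect, assuming Gaussian white noise with a hypothetical standard deviation of 0.5 on a true value of 5.0:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

TRUE_VALUE = 5.0
SIGMA = 0.5  # hypothetical white-noise standard deviation

def reading():
    """One noisy measurement of the true value."""
    return TRUE_VALUE + random.gauss(0.0, SIGMA)

# Averaging N readings shrinks the random error roughly as 1/sqrt(N):
# here the expected error drops from 0.5 to about 0.005.
N = 10_000
averaged = statistics.mean(reading() for _ in range(N))
avg_error = abs(averaged - TRUE_VALUE)
```

The 1/sqrt(N) improvement holds for white noise; correlated (e.g. 1/f) noise averages down more slowly, which is why the text qualifies the claim by the noise spectrum.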
Sensor cross sensitivity errors
Few measuring systems respond only to the parameter being measured: all sensors have some degree of sensitivity to other parameters. For example, a temperature sensor's output may change with pressure, humidity and/or ionizing radiation.
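If the interfering parameter is itself measured and the cross sensitivity has been characterized, its effect can be compensated in software; a sketch with an assumed (hypothetical) pressure coefficient for a temperature sensor:

```python
# Assumed characterization: the sensor's raw output shifts by
# 0.002 degrees C per kPa of ambient pressure (hypothetical coefficient).
PRESSURE_COEFF_C_PER_KPA = 0.002

def compensate(raw_temp_c, pressure_kpa, reference_kpa=101.325):
    """Remove the pressure-induced shift from a raw temperature reading."""
    return raw_temp_c - PRESSURE_COEFF_C_PER_KPA * (pressure_kpa - reference_kpa)

# 25 kPa above the reference pressure inflates the raw reading by 0.05 C.
corrected = compensate(25.05, 126.325)  # recovers about 25.0
```

Such compensation only works to the extent the coefficient is known and stable; any uncertainty in it remains as residual error.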