Types of Errors in Measurement

What is Measurement?

Measurement is the representation of the quantities of various attributes of a real-world system using numerical values. It can be viewed as a comparison between a quantity of unknown magnitude and a predefined standard.

The main requirements for accurate measurement are:

  1. The apparatus should be accurate.
  2. The method used ought to be provable.
  3. The standard used should be accurately defined.

How is Measurement Vital to Science and Technology?

Advancement in science and technology is of little significance without actual measured values to provide practical proof. Scientific research is based on hypotheses, which are validated only with the help of measured values.

Measurement lets the researcher differentiate between various degrees of the measured attribute and assign definite values to real-world occurrences. Measurements reduce guesswork and add objectivity to the findings.

How are Instruments Defined?

A physical means or device for determining an attribute or variable is known as an instrument. An instrument serves as an aid for humans in determining values of unknown quantities.

An instrument can be mechanical, electrical or electronic. A basic instrument consists of a detector, a transfer device and an indicator, recorder or a storage device.

Mechanical instruments are the oldest type in use. Though reliable under static and stable conditions, they are not appropriate for dynamic and transient conditions. They are also bulky and a source of noise.

Electrical instruments indicate the output more rapidly, yet they are still limited by the mechanical meter movements used for indication.

Electronic instruments have faster responses and are able to detect dynamic changes in different attributes. An example is the CRO (cathode-ray oscilloscope), which can follow dynamic or transient changes of the order of microseconds.

What do Errors in Measurement Imply?

Before learning the main point regarding errors in instrumentation, let us first go through the following discussion.

Based upon the degree of variation of the measured quantity with respect to time, an instrument can have static or dynamic characteristics.

Some of the important static characteristics are accuracy, sensitivity, reproducibility, drift, static error and dead zone.

If a parameter were measured under perfectly ideal conditions, the deviations due to various factors would average out to zero, and the average of this infinite number of measured values would be termed the True Value. However, such a situation is hypothetical, since the negative and positive deviations do not actually cancel each other out.

In practice, the measured value obtained under the most ideal conditions available (as agreed upon by experts) is taken as the True Value, or best-measured value.

The difference between the measured value and the true value is known as an Error.
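
In symbols (notation introduced here purely for illustration), if A_m denotes the measured value and A_t the true value, the absolute error and the relative error are

\[
\delta A = A_m - A_t, \qquad \varepsilon_r = \frac{\delta A}{A_t} = \frac{A_m - A_t}{A_t}.
\]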

Types of Errors

Systematic Errors

Systematic errors occur due to instrumental shortcomings, environmental conditions or incorrect observation. These errors are of three types:

  1. Instrumental Errors
  2. Environmental Errors
  3. Observational Errors

Instrumental Errors:

These errors occur due to shortcomings of the instrument, improper use of the instrument or the loading effect of the instrument. Improper construction, calibration or operation of an instrument may introduce inherent errors; for example, a weak spring in a permanent-magnet moving-coil instrument might result in readings that are too high. Such errors can be detected or reduced by applying correction factors, planning the measurement procedure carefully or re-calibrating the instrument.

At times, an error might also occur due to faulty use by the operator. Examples include failure to adjust the zero (reference) point, improper initial settings, and the use of leads of excessively high resistance. Though such errors might not cause permanent damage to the instrument, overloading or overheating might eventually cause it to fail.

Improper loading can also result in errors. For example, connecting a voltmeter across a high-resistance part of a circuit draws current through the meter and disturbs the quantity being measured, giving an erroneous reading. Taking the loading effect of instruments into account and applying the corresponding corrections can make the loading effects negligible.
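
As a rough illustration of the loading effect, here is a minimal Python sketch (all numbers are assumed purely for illustration): a voltmeter of finite input resistance R_m connected across a source of Thevenin resistance R_th indicates only a fraction R_m / (R_m + R_th) of the true open-circuit voltage.

```python
# Loading effect of a voltmeter: a minimal sketch with assumed example values.
def indicated_voltage(v_true, r_source, r_meter):
    """Voltage shown by a voltmeter of input resistance r_meter
    connected across a source of Thevenin resistance r_source."""
    return v_true * r_meter / (r_meter + r_source)

v_true = 10.0       # true (open-circuit) voltage, volts -- assumed value
r_source = 100e3    # source (Thevenin) resistance, ohms -- assumed value
r_meter = 1e6       # voltmeter input resistance, ohms -- assumed value

v_ind = indicated_voltage(v_true, r_source, r_meter)
loading_error_pct = (v_ind - v_true) / v_true * 100
print(f"Indicated: {v_ind:.2f} V, loading error: {loading_error_pct:.1f} %")
# With these numbers the meter reads about 9.09 V, an error of roughly -9 %.
```

The higher the meter's input resistance relative to the source resistance, the smaller this error becomes, which is why the correction can often be made negligible.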

Environmental Errors

These errors occur due to the external, ambient conditions surrounding the instrument, including changes in temperature and humidity, the presence of dust, vibrations, and external magnetic or electrostatic fields. The resulting errors can be minimized by the following corrective measures:

  1. Keep the ambient physical conditions constant. For example, placing the instrument in a temperature-controlled enclosure keeps the ambient temperature constant.
  2. Use instruments that have ample immunity to environmental changes. For example, using materials with a low temperature coefficient of resistance minimizes variations in resistance.
  3. Use other techniques, such as hermetically sealing the instrument, to eliminate the effects.
  4. Apply computed corrections.

Observational Errors

These errors occur when the observer's line of vision is not aligned with the pointer, which lies slightly above the instrument scale. This is termed Parallax error. Such errors can be minimized by using highly accurate meters in which the pointer and scale lie in the same plane. Since parallax errors occur only with analog instruments, using a digital display eliminates them entirely.

Random Errors

These errors occur due to a group of small factors which fluctuate from one measurement to another. The situations or disturbances that cause them are unknown, hence the name Random errors. Their sources are neither obvious nor easily identified, so they are dealt with statistically.

The statistical treatment can be done in two ways:

  1. Taking repeated measurements of the same quantity under different test conditions, such as different observers, instruments or methods of measurement. The data scatter around a central value, forming a histogram or frequency-distribution curve, from which the following quantities are calculated (see the sketch after this list):
    • Arithmetic Mean: the average of all the readings. It is the most probable value.
    • Dispersion: the property by virtue of which the values are scattered around the central value. Of two data sets, the one with less dispersion has been affected less by random errors.
    • Range: the difference between the greatest and least values of the data. It is a measure of dispersion.
    • Deviation: the departure of an observed reading from the mean value. The algebraic sum of all the deviations is zero.
    • Average Deviation: the sum of the absolute values of the deviations divided by the number of readings. A low average deviation indicates a highly precise instrument.
    • Standard Deviation: the square root of the sum of the squared individual deviations divided by the total number of readings.
    • Variance: the square of the standard deviation.
  2. Single Sample Test: a succession of measurements taken under similar conditions at different times. The resulting data are analysed using the Kline and McClintock approach, which is based on uncertainty distributions.
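
The following minimal Python sketch (with a small set of assumed example readings) shows how the quantities listed above are computed from repeated measurements:

```python
# Statistics of repeated readings: a minimal sketch with assumed example data.
readings = [101.2, 101.7, 101.3, 101.0, 101.5, 101.3, 101.4, 101.6]  # assumed values

n = len(readings)
mean = sum(readings) / n                              # arithmetic mean (most probable value)
deviations = [x - mean for x in readings]             # deviations from the mean (sum ~ 0)
data_range = max(readings) - min(readings)            # range: a measure of dispersion
avg_deviation = sum(abs(d) for d in deviations) / n   # average deviation
variance = sum(d * d for d in deviations) / n         # variance (use n - 1 for small samples)
std_deviation = variance ** 0.5                       # standard deviation

print(f"mean = {mean:.3f}, range = {data_range:.3f}")
print(f"average deviation = {avg_deviation:.3f}")
print(f"standard deviation = {std_deviation:.3f}, variance = {variance:.4f}")
```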

Limiting Errors

For any instrument, the manufacturer defines or guarantees a certain accuracy, which depends upon the type of material and the effort required to manufacture the instrument. The accuracy is usually specified as a certain percentage of the full-scale reading; in other words, the manufacturer guarantees that the deviation from the nominal value stays within certain limits. The limits of these deviations are known as Limiting or Guarantee Errors, and the error is guaranteed to lie within them.

The ratio of the error to the specified nominal value is termed the Relative Limiting Error.

Note that the smaller the voltage to be measured, the greater the percentage error, even though the magnitude of the limiting error remains fixed.
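
For instance, consider a hypothetical 0-100 V voltmeter guaranteed to within +/- 1 % of full scale (all values here are assumed for illustration). The limiting error is then a fixed +/- 1 V, so the percentage error grows as the measured voltage falls:

```python
# Limiting error of a voltmeter: a minimal sketch with assumed specifications.
full_scale = 100.0      # full-scale reading, volts -- assumed
accuracy_pct = 1.0      # guaranteed accuracy, percent of full scale -- assumed

limiting_error = full_scale * accuracy_pct / 100.0    # fixed magnitude: +/- 1 V

for reading in (100.0, 50.0, 10.0):                   # measured values, volts
    relative_error_pct = limiting_error / reading * 100.0
    print(f"reading {reading:5.1f} V -> +/- {limiting_error:.1f} V "
          f"= +/- {relative_error_pct:.1f} % of reading")
# The percentage error rises from 1 % at full scale to 10 % at a 10 V reading.
```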

The limiting error of a combination of two or more quantities, each having its own limiting error, is found by considering the relative increment of the function when the result is given by an algebraic equation.
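
As one common worst-case result (with example values assumed here), when the result is a product such as power P = V * I, the relative limiting errors of the individual quantities simply add:

```python
# Worst-case limiting error of a product P = V * I: a sketch with assumed values.
v, v_err_pct = 100.0, 1.0   # voltage reading and its relative limiting error (%) -- assumed
i, i_err_pct = 2.0, 1.5     # current reading and its relative limiting error (%) -- assumed

p = v * i                              # nominal power
p_err_pct = v_err_pct + i_err_pct      # relative limiting errors add for a product
p_err_w = p * p_err_pct / 100.0        # limiting error in watts

print(f"P = {p:.1f} W +/- {p_err_w:.1f} W ({p_err_pct:.1f} %)")
# Gives 200 W +/- 5 W, i.e. a guaranteed worst-case accuracy of 2.5 %.
```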

Gross Errors

Manual errors in reading instruments or in recording and calculating measurement results are known as Gross errors. Generally, these errors occur during experiments, when the experimenter reads or records a value different from the actual one, for instance due to poor sight. With human involvement, such errors are inevitable, though they can be anticipated and rectified.

These errors can be prevented by taking the following two measures:

  1. Careful reading and recording of the data.
  2. Taking multiple readings, by different persons. Close agreement between the different readings confirms that no gross error has crept in.

The above content gives a brief overview of the different types of errors in measurement; a detailed discussion is beyond the scope of this article. Any further information is welcome in the comments section below.
