In the study of measurement error, true score theory is a useful model, but it is not always an accurate reflection of reality. In particular, it assumes that any observation is composed of the true value plus some random error. But is that reasonable? What if not all errors are random? Is it not possible that some errors are systematic, that they hold steady across most or all members of a group? One way to approach this question is to revise the simple true score model by dividing the error component into two subcomponents: random error and systematic error.
What is a random error?
Random error is caused by any factor that randomly affects measurement of the variable across the sample. For example, each person's mood can inflate or deflate their performance on any occasion. If mood affects performance on your measure, it may artificially inflate the observed scores for some children and artificially deflate them for others.
The important thing about random error is that it has no consistent effect on the entire sample. Instead, it pushes the observed scores up or down at random. This means that if we could see all the random errors in a distribution, they would have to add up to 0: there would be as many negative errors as positive ones. The crucial property of random error is that it adds variability to the data but does not affect the average performance of the group. Because of this, random error is sometimes referred to as noise.
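The zero-mean property of random error can be illustrated with a small simulation (a Python sketch; the true score of 50 and the error spread of 5 are arbitrary choices for illustration):

```python
import random

random.seed(0)

true_score = 50.0   # hypothetical true value, identical for every observation
n = 100_000

# observed score = true score + a random error drawn symmetrically around 0
observations = [true_score + random.gauss(0, 5) for _ in range(n)]
errors = [x - true_score for x in observations]

mean_obs = sum(observations) / n
mean_error = sum(errors) / n

print(f"mean observation: {mean_obs:.2f}")   # stays close to the true score of 50
print(f"mean error:       {mean_error:.3f}") # close to 0
```

The errors add variability (individual observations scatter widely around 50) yet their average is essentially zero, so the group mean is unaffected.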
What is a gross error?
Gross errors are due to human mistakes, and they can only be avoided by taking readings carefully. For example, the experimenter records a reading of 31.5 °C when the actual reading is 21.5 °C. This happens through carelessness: the experimenter takes the wrong reading, and the error enters the measurement. This type of error is very common, and its complete elimination is impossible. Some gross errors are easily detected by the experimenter, but others are difficult to find.
Two methods can reduce gross error:
a) Take the readings very carefully, and take two or more readings of the measured quantity.
b) Have the readings taken by different experimenters and at different points, to eliminate the error.
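The precaution of taking repeated readings can be sketched as a simple check: flag any reading that sits far from the median of the set (a Python sketch; the threshold and the readings, echoing the 21.5 °C vs 31.5 °C example above, are illustrative):

```python
# Repeated readings of the same temperature, one of which contains a gross
# (human) error: 31.5 was recorded where 21.5 was the actual reading.
readings = [21.5, 21.4, 31.5, 21.6, 21.5]  # °C

sorted_r = sorted(readings)
median = sorted_r[len(sorted_r) // 2]  # robust central value, odd-length list

threshold = 1.0  # tolerance in °C; an assumed value for illustration
suspect = [r for r in readings if abs(r - median) > threshold]

print("median reading:", median)     # 21.5
print("suspect readings:", suspect)  # [31.5]
```

Because the median is insensitive to a single wild value, the mistaken entry stands out clearly for re-checking.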
What is systematic error?
Systematic error is caused by any factor that systematically affects the measurement of the variable throughout the sample. For example, if there is noisy traffic passing just outside a classroom where students are taking a test, this noise is likely to affect the scores of all children - in this case, by systematically reducing them. Unlike random error, systematic error tends to be consistently positive or negative - because of this, systematic error is sometimes considered a measurement bias. All measurements are prone to systematic error, often of several different types. Sources of systematic error can be imperfect calibration of measuring instruments, changes in the environment that interfere with the measurement process, and imperfect methods of observation.
A systematic error makes the measured value always smaller or larger than the true value, but not both. An experiment can involve more than one systematic error and these errors can cancel each other out, but each one alters the true value in only one way. Accuracy (or validity) is a measure of systematic error. If an experiment is accurate or valid, then the systematic error is very small. Accuracy is a measure of how well an experiment measures what it was trying to measure.
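Unlike the random-error simulation, a constant bias shifts the group average itself. A minimal sketch (the bias of -3 points, standing in for something like the traffic noise above, is an invented value):

```python
import random

random.seed(1)

true_value = 50.0
bias = -3.0        # systematic error: every score deflated by the same amount
n = 100_000

# observed = true value + constant bias + zero-mean random error
scores = [true_value + bias + random.gauss(0, 5) for _ in range(n)]
mean_score = sum(scores) / n

print(f"mean observed score: {mean_score:.2f}")  # near 47, shifted below 50
```

Averaging more observations does not remove this shift; it only shrinks the random scatter around the biased mean.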
This is difficult to assess unless you have an idea of the expected value (for example, a textbook value or a value calculated from a data book). Compare your experimental value with the literature value. If the difference lies within the margin expected from random errors, the systematic errors are likely smaller than the random errors. If it is larger, you need to determine where the errors occurred. When you have an accepted value for a result determined by experiment, you can calculate the percentage error.
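The percentage error mentioned here is a one-line calculation (a sketch; the figures 9.91 and 9.81 m/s² are invented for illustration):

```python
def percent_error(experimental, accepted):
    """Percentage error of an experimental value relative to an accepted value."""
    return abs(experimental - accepted) / abs(accepted) * 100.0

# Invented example: a measured g of 9.91 m/s^2 against the accepted 9.81 m/s^2.
print(f"{percent_error(9.91, 9.81):.2f}%")  # 1.02%
```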
Types of Instrumental Errors
These errors arise mainly from three causes.
a) Inherent shortcomings of the instrument - This type of error is built into the instrument by its mechanical structure, and may arise from its manufacture, calibration or operation. For example, if an instrument uses a weak spring, it gives readings that are too high for the quantity being measured. Errors also occur in an instrument because of friction loss or hysteresis.
b) Improper use of the instrument - This error occurs because of the operator's fault. A good instrument used in an unintelligent way can give erroneous results. Examples of such misuse include failing to zero-adjust the instrument, poor initial adjustment, and using leads of too high a resistance.
c) Loading effect - This is the most common type of instrument-caused error in measurement work. For example, when a voltmeter is connected to a high-resistance circuit it gives an erroneous reading, while connected to a low-resistance circuit it gives a reliable reading; the voltmeter loads the circuit it is measuring. The error caused by the loading effect can be overcome by using meters intelligently. For example, when measuring a low resistance by the ammeter-voltmeter method, a voltmeter of very high resistance should be used.
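The loading effect can be quantified with a simple voltage-divider model (a sketch; all component values are invented): the lower the voltmeter's resistance relative to the circuit's, the more the reading is pulled away from the true voltage.

```python
def divider_voltage(v0, r1, r2, r_voltmeter=None):
    """Voltage across r2 in a series divider fed by v0, optionally loaded
    by a voltmeter of resistance r_voltmeter placed in parallel with r2."""
    if r_voltmeter is not None:
        r2 = r2 * r_voltmeter / (r2 + r_voltmeter)  # parallel combination
    return v0 * r2 / (r1 + r2)

# 10 V source, two 100 kΩ resistors: the unloaded voltage across r2 is 5 V.
true_v = divider_voltage(10.0, 100e3, 100e3)            # 5.0 V
good_meter = divider_voltage(10.0, 100e3, 100e3, 10e6)  # 10 MΩ meter: ~4.98 V
bad_meter = divider_voltage(10.0, 100e3, 100e3, 100e3)  # 100 kΩ meter: ~3.33 V

print(true_v, good_meter, bad_meter)
```

The high-resistance meter barely disturbs the circuit, while a meter whose resistance is comparable to the circuit's drags the reading far below the true value.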
Environmental Errors
These errors are due to the external conditions around the measuring device. They are mainly caused by temperature, pressure, humidity, dust, vibration, or magnetic and electrostatic fields. The corrective measures used to eliminate or reduce these undesirable effects are: keeping the conditions as constant as possible; using equipment that is immune to these effects; using techniques that eliminate the effect of the disturbances; and applying calculated corrections.
Observational Errors
This type of error is due to incorrect observation of the reading, and it has many sources. For example, the pointer of a voltmeter rests slightly above the scale surface, so an error due to parallax occurs unless the observer's line of sight is exactly above the pointer.
An error caused by a sudden change in atmospheric conditions is classified as a random error.
Reduction of random errors
So how can we reduce measurement errors, random or systematic? One thing you can do is pilot test your instruments, getting feedback from your respondents on how easy or difficult the measurement was and information on how the test environment affected their performance.
Second, if your data are collected by people (such as interviewers or observers), make sure you train them thoroughly so that they do not inadvertently introduce errors.
Third, when collecting data for the study, you should check the data carefully. All data entered for computer analysis should be "double-keyed" and verified: the data are entered twice, and on the second pass the data-entry program checks that exactly the same values are being entered as the first time.
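Double keying amounts to a field-by-field comparison of the two entry passes. A minimal sketch (the record format and values are hypothetical):

```python
# The same three records keyed twice; any disagreement is flagged for review.
first_pass  = [("id01", 17.43), ("id02", 17.46), ("id03", 17.42)]
second_pass = [("id01", 17.43), ("id02", 17.64), ("id03", 17.42)]  # typo in id02

mismatches = [(a, b) for a, b in zip(first_pass, second_pass) if a != b]

print(mismatches)  # [(('id02', 17.46), ('id02', 17.64))]
```

A transposition like 17.46 vs 17.64 is exactly the kind of entry error this procedure is designed to catch.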
Fourth, you can use statistical procedures to adjust for measurement error. These range from fairly simple formulas that can be applied directly to the data to very complex procedures for modeling the error and its effects. Finally, one of the best things you can do to deal with measurement error, especially systematic error, is to use multiple measures of the same construct. Especially if the different measures do not share the same systematic errors, you will be able to triangulate across them and get a more accurate sense of what is going on.
The Uncertainty of Measurements
Some numerical statements are exact: Mary has 3 siblings, and 2 + 2 = 4. However, all measurements have some degree of uncertainty, which can come from a variety of sources. The process of evaluating the uncertainty associated with a measurement result is often called uncertainty analysis or error analysis. The full statement of a measured value should include an estimate of the confidence level associated with the value. Properly reporting an experimental result along with its uncertainty allows others to judge the quality of the experiment, and facilitates meaningful comparisons with other similar values or with a theoretical prediction. Without an uncertainty estimate, it is impossible to answer the basic scientific question: "Does my result agree with a theoretical prediction or with the results of other experiments?"
Exact and True Values
When we make a measurement, we generally assume that there is some exact or true value based on how we define what is being measured. While we may never know this true value exactly, we try to pin down this ideal quantity as best we can with the time and resources available. As we make measurements by different methods, or even multiple measurements by the same method, we may get slightly different results. So how do we report our best estimate of this elusive true value? The most common way is to state the range of values that we believe includes the true value:

measurement = (best estimate ± uncertainty) units    (1)
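Equation (1) is simply a reporting convention. A minimal Python helper (the function name `report` is my own invention) makes the format concrete:

```python
def report(best_estimate, uncertainty, units):
    """Render a measurement in the 'best estimate ± uncertainty units'
    form of Eq. (1)."""
    return f"{best_estimate} ± {uncertainty} {units}"

print(report(17.44, 0.02, "g"))  # 17.44 ± 0.02 g
```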
Examples of True and Accurate Values
Let's take an example. Suppose you want to find the mass of a gold ring that you would like to sell to a friend. You don't want to jeopardize your friendship, so you want an accurate mass for the ring in order to charge a fair market price. From the weight you feel in your hand, you estimate the mass to be between 10 and 20 grams, but this is not a very precise estimate.
After searching, you find an electronic scale that gives a mass reading of 17.43 grams. Although this measurement is much more accurate than the original estimate, how do you know it is accurate, and how confident are you that this measurement represents the true value of the ring's mass? Since the balance's digital display is limited to 2 decimal places, you could report the mass as m = 17.43 ± 0.01 g.
Suppose you use the same electronic balance and get several more readings: 17.46 g, 17.42 g, 17.44 g, so that the average mass seems to be in the range of 17.44 ± 0.02 g. At this point you may feel confident that you know the mass of this ring to the nearest hundredth of a gram, but how do you know that the true value is definitely between 17.43 g and 17.45 g? Since you want to be honest, you decide to use another scale that gives a reading of 17.22 g. This value is clearly below the range of values found on the first scale, and under normal circumstances, you may not care, but you want to be fair to your friend. So, what do you do now? The answer lies in knowing something about the accuracy of each instrument.
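The averaging in this example can be reproduced directly. One common rough convention (assumed here) is to take half the range of the repeated readings as the uncertainty:

```python
readings = [17.43, 17.46, 17.42, 17.44]  # grams, from the first balance

mean = sum(readings) / len(readings)              # 17.4375 g
half_range = (max(readings) - min(readings)) / 2  # (17.46 - 17.42) / 2 ≈ 0.02 g

print(f"mass = {mean:.2f} ± {half_range:.2f} g")  # mass = 17.44 ± 0.02 g
```

This reproduces the 17.44 ± 0.02 g figure quoted above; more formal treatments would use the standard deviation of the mean instead of the half-range.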
Accuracy and Precision
To help answer these questions, we must first define the terms accuracy and precision. Accuracy is the closeness of agreement between a measured value and a true or accepted value; measurement error is the amount of inaccuracy. Precision is a measure of how well a result can be determined (without reference to a theoretical or true value). It is the degree of consistency and agreement between independent measurements of the same quantity, and also the reliability or reproducibility of the result.
The estimation of the uncertainty associated with a measurement must take into account both the accuracy and precision of the measurement. Note: Unfortunately, the terms error and uncertainty are often used interchangeably to describe both imprecision and inaccuracy. This use is so common that it is impossible to avoid it completely. Whenever you come across these terms, make sure you understand whether they refer to accuracy or precision, or both.
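The distinction can be made concrete numerically: bias against an accepted value quantifies inaccuracy, while the scatter of repeated readings quantifies imprecision (a Python sketch with illustrative numbers, loosely echoing the second balance in the ring example):

```python
import statistics

true_value = 17.44  # hypothetical accepted mass in grams
readings = [17.20, 17.22, 17.21, 17.23, 17.22]  # precise but inaccurate balance

mean = statistics.mean(readings)
bias = mean - true_value             # accuracy: offset from the accepted value
spread = statistics.stdev(readings)  # precision: scatter of repeated readings

print(f"bias (inaccuracy):    {bias:+.3f} g")  # about -0.224 g
print(f"spread (imprecision): {spread:.3f} g")  # about 0.011 g
```

Here the readings cluster tightly (high precision) but sit well below the accepted value (low accuracy), the same pattern as the second balance in the ring example.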
Note that to determine the accuracy of a particular measurement, we need to know the ideal and true value. Sometimes we have a "textbook" measured value, which is well known, and we assume that this is our "ideal" value, and use it to estimate the accuracy of our result. Other times we know a theoretical value, which is calculated from basic principles, and this can also be taken as an "ideal" value.
But physics is an empirical science, which means that theory must be validated by experiments, not the other way around. We can escape these difficulties and maintain a useful definition of accuracy by assuming that, even when we do not know the true value, we can rely on the best available accepted value with which to compare our experimental value.
For our example with the gold ring, there is no accepted value to compare with, and both measured values have the same precision, so we have no reason to believe one more than the other. We could look up the accuracy specifications of each balance as supplied by the manufacturer (the Appendix at the end of this laboratory manual contains accuracy data for most of the instruments that will be used), but the best way to assess the accuracy of a measurement is to compare it with a known standard. In this situation, it may be possible to calibrate the balances with a standard mass that is accurate within a narrow tolerance and traceable to a primary mass standard at the National Institute of Standards and Technology (NIST). Calibrating the balances should eliminate the discrepancy between the readings and provide a more accurate mass measurement.