In the previous posts there was a brief mention of internal and external validity and design problems that threaten each. Variables that are related to the independent variables may account for observed differences in the dependent variable, giving the false appearance of a valid answer to a research question. The astute researcher is aware of the possible sources of these invalidating influences on results and guards against them in planning designs. Specific sources are covered below.
History. Events other than the independent variables or conditions of interest may affect subjects during a study. For example, one might design a study to see whether the 55 mile per hour speed limit reduced traffic fatalities. Data on traffic deaths before and after the limit was introduced could be collected and might show that deaths declined. However, it might also be discovered that, due to higher gasoline prices, people drove fewer miles. Any difference in fatality rates would then be difficult to attribute to the speed limit rather than to the reduced mileage.
Instrument Reactivity. It is an established principle that instruments react with the things they measure. In some cases the reactivity is small relative to the variable measured and is inconsequential, but in other cases it may totally distort measurement. In physics, the uncertainty principle is a statement of the fact that measurement of subatomic particles is never exact because instruments disturb the particles being measured. In the social sciences it is often difficult to know when reactivity is severe enough to be problematic. Filling out an attitude questionnaire may itself change a person’s attitudes, and knowing that one is being observed may change one’s behavior. Furthermore, there is a vast literature on the effects an experimenter can have on the behavior of subjects, many of which would be considered instrument reactions.
Instrument reactivity occurs in two ways. First, an instrument may directly affect subjects so that true measurements are distorted; in other words, the instrument disturbs that which is measured. Second, an instrument may interact with the treatments so that subjects’ responses are changed differently from treatment to treatment. As long as the former distortion is constant, comparisons among conditions can still be made, although with increased error. The latter situation may totally distort results and confound a study.
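The contrast between the two cases can be illustrated with a small simulation. The numbers here (group means, bias sizes) are hypothetical, chosen only to show that a constant instrument bias leaves the treatment-control comparison intact, while a bias that differs by condition distorts it:

```python
import random

random.seed(0)

def measure(true_value, bias):
    """Hypothetical measurement: true score plus instrument bias plus noise."""
    return true_value + bias + random.gauss(0, 1)

# Assumed true group means: the treatment raises the outcome by 5 units.
control_true, treatment_true = 50, 55

# Case 1: constant reactivity -- the instrument adds the same bias (+3)
# in both conditions, so the observed difference still estimates the
# true effect (with added error).
control = [measure(control_true, 3) for _ in range(1000)]
treatment = [measure(treatment_true, 3) for _ in range(1000)]
diff_constant = sum(treatment) / len(treatment) - sum(control) / len(control)

# Case 2: interactive reactivity -- the bias differs by condition
# (+3 vs +8), so the observed difference no longer reflects the
# true treatment effect.
control2 = [measure(control_true, 3) for _ in range(1000)]
treatment2 = [measure(treatment_true, 8) for _ in range(1000)]
diff_interactive = sum(treatment2) / len(treatment2) - sum(control2) / len(control2)

print(round(diff_constant, 1))    # near the true effect of 5
print(round(diff_interactive, 1)) # inflated, near 10
```

The first comparison recovers roughly the true 5-unit effect; the second confounds the treatment effect with the instrument's differential reaction.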
Perhaps the best way to deal with instrument reactivity is to test for its effect within the design itself; that is, the design is structured to include a test for instrument reactivity. The best example is the Solomon four-group design, in which the dependent variable is measured after application of the independent variable, and half of the subjects receive an additional pre-treatment measurement of the dependent variable (a pretest) while half do not. A comparison of these two groups on the post-treatment measurement provides a test of the reactivity of the pretest and its possible interaction with the independent variable.
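A minimal sketch of the logic, with simulated data: the effect sizes, group size, and scoring function below are all hypothetical, and the estimators shown are simple mean contrasts rather than the full analysis one would report. They illustrate how the four groups let the treatment effect and the pretest's reactive effect be estimated separately:

```python
import random

random.seed(1)

N = 500
TREATMENT_EFFECT = 5.0  # assumed true effect of the independent variable
PRETEST_EFFECT = 2.0    # assumed reactivity: merely taking the pretest shifts posttest scores

def posttest(pretested, treated):
    """Hypothetical posttest score for one subject."""
    score = 50.0
    if treated:
        score += TREATMENT_EFFECT
    if pretested:
        score += PRETEST_EFFECT  # pretest sensitization we want to detect
    return score + random.gauss(0, 3)

def group_mean(pretested, treated):
    return sum(posttest(pretested, treated) for _ in range(N)) / N

# The four groups of the Solomon design
g1 = group_mean(True, True)    # pretest + treatment
g2 = group_mean(True, False)   # pretest only
g3 = group_mean(False, True)   # treatment only
g4 = group_mean(False, False)  # neither

# Treatment effect averaged over pretest conditions,
# and pretest reactivity averaged over treatment conditions.
treatment_est = ((g1 - g2) + (g3 - g4)) / 2
pretest_est = ((g1 - g3) + (g2 - g4)) / 2
print(round(treatment_est, 1), round(pretest_est, 1))
```

With these assumed values the design recovers a treatment effect near 5 and a pretest effect near 2; comparing (g1 − g2) against (g3 − g4) would likewise reveal any interaction between pretest and treatment.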
Unreliability of Instruments. When instruments are unreliable it becomes difficult to draw firm conclusions from a study, because the variance among subjects becomes too large. The best solutions for unreliability are to improve the instrument, to find a better one, or to take multiple measurements with the same or equivalent instruments. Using large samples can also help, as large errors of measurement may average out. However, if an instrument is too unreliable, it will lack validity and be worthless.
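The benefit of multiple measurements can be made concrete. In this sketch (with an assumed true score and error size), averaging k repeated measurements of the same subject shrinks the measurement error by a factor of the square root of k, which is the standard result for independent errors:

```python
import random
import statistics

random.seed(2)

TRUE_SCORE = 100.0
ERROR_SD = 10.0  # an unreliable instrument: large random measurement error

def measure():
    """One noisy measurement of a subject whose true score is TRUE_SCORE."""
    return TRUE_SCORE + random.gauss(0, ERROR_SD)

# Single measurements vary widely...
singles = [measure() for _ in range(2000)]

# ...but averaging k repeated measurements per subject shrinks the
# error by a factor of sqrt(k).
k = 25
averaged = [sum(measure() for _ in range(k)) / k for _ in range(2000)]

print(round(statistics.stdev(singles), 1))   # near ERROR_SD = 10
print(round(statistics.stdev(averaged), 1))  # near ERROR_SD / sqrt(25) = 2
```

The same averaging logic is why large samples help when comparing group means, although no amount of averaging rescues an instrument that does not measure the intended construct at all.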