Scientific control

From Wikipedia, the free encyclopedia
Take identical growing plants (Argyroxiphium sandwicense) and give fertilizer to half of them. If there are differences between the fertilized treatment and the unfertilized treatment, these differences may be due to the fertilizer as long as there weren't other confounding factors that affected the result. For example, if the fertilizer was spread by a tractor but no tractor was used on the unfertilized treatment, then the effect of the tractor needs to be controlled.

A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable (i.e. confounding variables).[1] This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method.

Controlled experiments

Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
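The noise-subtraction idea can be sketched numerically; the constant noise floor and the signal shape below are hypothetical stand-ins for real measurements:

```python
import numpy as np

# Minimal sketch of background subtraction: a control recording made with no
# signal present is subtracted from the signal recording.
true_signal = np.sin(np.linspace(0, 2 * np.pi, 100))
noise_floor = 0.5                            # steady background picked up by the mic

measured = true_signal + noise_floor         # recording of the signal
control = np.full(100, noise_floor)          # control recording: no signal present

processed = measured - control               # subtract the control measurement
print(np.allclose(processed, true_signal))   # background removed, signal recovered
```

In a real measurement the noise would vary, so the control recording would be averaged or modeled rather than treated as a constant.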

For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant, and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice: once with the artificial sweetener in the dilutant, and once, performed exactly the same way, with the dilutant alone. Now the experiment is controlled for the dilutant and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to spread fertilizer. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer, and in that way the effects of tractor traffic are controlled.
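The three-arm design above can be sketched as a simulation; the illness probabilities are invented purely for illustration:

```python
import random

random.seed(42)

# Hypothetical simulation of the sweetener experiment: three arms let the
# experimenter separate sweetener, dilutant, and no-treatment effects.
def run_arm(p_sick, n=60):
    """Count how many of n rats fall sick when each does so with probability p_sick."""
    return sum(random.random() < p_sick for _ in range(n))

untreated     = run_arm(0.05)   # no-treatment baseline
dilutant_only = run_arm(0.05)   # dilutant control: same rate as baseline
sweetener_mix = run_arm(0.30)   # sweetener dissolved in the dilutant

# When sweetener_mix exceeds dilutant_only while dilutant_only matches the
# untreated baseline, the dilutant is ruled out and the sweetener implicated.
print(untreated, dilutant_only, sweetener_mix)
```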

The simplest types of control are negative and positive controls, and both are found in many different types of experiments.[2] These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: it means that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls.[2]

Confounding

Confounding is a critical issue in observational studies because it can lead to biased or misleading conclusions about relationships between variables. A confounder is an extraneous variable that is related to both the independent variable (treatment or exposure) and the dependent variable (outcome), potentially distorting the true association. If confounding is not properly accounted for, researchers might incorrectly attribute an effect to the exposure when it is actually due to another factor. This can result in incorrect policy recommendations, ineffective interventions, or flawed scientific understanding. For example, in a study examining the relationship between physical activity and heart disease, failure to control for diet, a potential confounder, could lead to an overestimation or underestimation of the true effect of exercise.[3]
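The exercise–diet example can be made concrete with a small simulation (all probabilities invented for illustration):

```python
import random

random.seed(1)

# Health consciousness (an unmeasured confounder U) drives both exercise and
# a good diet; in this toy model only diet affects heart disease.
n = 10_000
sick = {True: 0, False: 0}    # disease counts by exercise status
count = {True: 0, False: 0}
for _ in range(n):
    u = random.random() < 0.5                                  # health conscious?
    exercises = random.random() < (0.8 if u else 0.2)
    good_diet = random.random() < (0.9 if u else 0.1)
    disease = random.random() < (0.05 if good_diet else 0.25)  # only diet matters
    sick[exercises] += disease
    count[exercises] += 1

rate_exercise = sick[True] / count[True]
rate_rest = sick[False] / count[False]
# Exercisers look protected even though exercise does nothing here: the
# association is created entirely by the confounder U, acting through diet.
print(round(rate_exercise, 3), round(rate_rest, 3))
```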

Falsification tests are a robustness-checking technique used in observational studies to assess whether observed associations are likely due to confounding, bias, or model misspecification rather than a true causal effect. These tests help validate findings by applying the same analytical approach to a scenario where no effect is expected. If an association still appears where none should exist, it raises concerns that the primary analysis may suffer from confounding or other biases.

Negative controls are one type of falsification test. The need for negative controls usually arises in observational studies, when the study design can be questioned because of a potential confounding mechanism. A negative control test can reject a study design, but it cannot validate one, either because there might be another confounding mechanism or because of low statistical power. Negative controls are increasingly used in the epidemiology literature,[4] and they also show promise in social-science fields[5] such as economics.[6] Negative controls are divided into two main categories: negative control exposures (NCEs) and negative control outcomes (NCOs).

Lousdal et al.[7] examined the effect of screening participation on death from breast cancer. They hypothesized that screening participants are healthier than non-participants and, therefore, already at baseline have a lower risk of breast-cancer death. They therefore used proxies for better health as negative-control outcomes (NCOs) and proxies for healthier behavior as negative-control exposures (NCEs). Death from causes other than breast cancer was taken as the NCO, as it is an outcome of better health that is not affected by breast-cancer screening. Dental-care participation was taken as the NCE, as it is assumed to be a good proxy for health-attentive behavior.

Negative control

Negative controls are variables meant to help when the study design is suspected to be invalid because of unmeasured confounders that are correlated with both the treatment and the outcome.[8] Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment.

In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In the drug-testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline that the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
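The baseline comparison can be written out in a few lines; the cure counts below are invented for illustration:

```python
# Hypothetical cure counts from a drug trial:
placebo_cured, placebo_n = 30, 100    # placebo group sets the baseline
treated_cured, treated_n = 55, 100    # treatment group

placebo_rate = placebo_cured / placebo_n
treated_rate = treated_cured / treated_n

# Only the improvement beyond the placebo baseline is credited to the drug.
excess = treated_rate - placebo_rate
print(f"placebo {placebo_rate:.0%}, treated {treated_rate:.0%}, excess {excess:.0%}")
# prints: placebo 30%, treated 55%, excess 25%
```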

Negative Control Exposure (NCE)

Consider a causal diagram with treatment A, outcome Y, unmeasured confounder U, and NCE Z: the NCE Z is causally influenced by U in a similar way to the treatment A, but cannot influence the outcome Y. If a dotted (suspected) edge between U and Y exists, the study design is invalid. An NCE test checks whether Z is associated with Y; if so, then U causally influences Y, and thus the effect of A on Y is non-identifiable.

An NCE is a variable that should not causally affect the outcome, but may suffer from the same confounding as the exposure–outcome relationship in question. A priori, there should be no statistical association between the NCE and the outcome. If an association is found, it must run through the unmeasured confounder, and since the NCE and the treatment share the same confounding mechanism, there is an alternative path, apart from the direct path, from the treatment to the outcome. In that case, the study design is invalid.

For example, Yerushalmy[9] used the husband's smoking as an NCE. The exposure was maternal smoking; the outcomes were various birth factors, such as incidence of low birth weight, length of pregnancy, and neonatal mortality rates. It is assumed that the husband's smoking shares common confounders, such as household health lifestyle, with the pregnant woman's smoking, but that it does not causally affect fetal development. Nonetheless, Yerushalmy found a statistical association, and as a result it casts doubt on the proposition that cigarette smoking causally interferes with intrauterine development of the fetus.
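An NCE test in the spirit of this design can be sketched as a simulation; all probabilities are invented for illustration:

```python
import random

random.seed(7)

# A household lifestyle factor U drives both the husband's smoking (the NCE)
# and, in this toy model, the risk of low birth weight; the NCE itself has
# no causal effect on the outcome.
n = 20_000
low = {True: 0, False: 0}     # low-birth-weight counts by husband's smoking
tot = {True: 0, False: 0}
for _ in range(n):
    u = random.random() < 0.3                                # unhealthy household lifestyle
    husband_smokes = random.random() < (0.7 if u else 0.2)   # the NCE
    low_weight = random.random() < (0.15 if u else 0.05)     # outcome depends on U only
    low[husband_smokes] += low_weight
    tot[husband_smokes] += 1

rate_nce = low[True] / tot[True]
rate_no = low[False] / tot[False]
# The NCE cannot affect the fetus, so the association below flags an
# uncontrolled confounding path rather than a causal effect.
print(round(rate_nce, 3), round(rate_no, 3))
```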

Differences Between Negative Control Exposures and Placebo

The term negative control is used when the study is observational, while a placebo serves as the non-treatment in randomized controlled trials.

Negative Control Outcome (NCO)

Consider a causal diagram with treatment A, outcome Y, unmeasured confounder U, and NCO W: the NCO W is causally influenced by U in a similar way to the outcome Y, but cannot be influenced by the treatment A. If a dotted (suspected) edge between U and A exists, the study design is invalid. An NCO test checks whether W is associated with A; if so, then U causally influences A, and thus the effect of A on Y is non-identifiable.

Negative control outcomes are the more popular type of negative control. An NCO is a variable that is not causally affected by the treatment, but is suspected to share a confounding mechanism with the treatment–outcome relationship. If the study design is valid, there should be no statistical association between the NCO and the treatment; an association between them therefore suggests that the design is invalid.

For example, Jackson et al.[10] used mortality from all causes outside of influenza season as an NCO in a study examining the influenza vaccine's effect on influenza-related deaths. A possible confounding mechanism is health status and lifestyle: people who are healthier in general also tend to take the influenza vaccine. Jackson et al. found preferential receipt of the vaccine by relatively healthy seniors, and that differences in health status between vaccinated and unvaccinated groups lead to bias in estimates of influenza vaccine effectiveness. In a similar example, when discussing the impact of air pollutants on asthma hospital admissions, Sheppard et al.[11] used non-elderly appendicitis hospital admissions as an NCO.
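The healthy-vaccinee bias that an NCO test detects can be sketched as a simulation; all probabilities are invented for illustration:

```python
import random

random.seed(3)

# Modeled loosely on the Jackson et al. setting: healthier seniors are both
# likelier to be vaccinated and less likely to die of any cause, even
# outside influenza season.
n = 20_000
deaths = {True: 0, False: 0}   # off-season deaths by vaccination status
tot = {True: 0, False: 0}
for _ in range(n):
    healthy = random.random() < 0.5                                   # unmeasured health status
    vaccinated = random.random() < (0.7 if healthy else 0.3)
    off_season_death = random.random() < (0.02 if healthy else 0.08)  # NCO: vaccine-independent
    deaths[vaccinated] += off_season_death
    tot[vaccinated] += 1

rate_vax = deaths[True] / tot[True]
rate_unvax = deaths[False] / tot[False]
# The vaccine cannot affect off-season mortality, so this gap exposes the
# healthy-vaccinee bias rather than a protective effect of the vaccine.
print(round(rate_vax, 3), round(rate_unvax, 3))
```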

Formal Conditions

Given a treatment A and an outcome Y, in the presence of a set of measured control variables X and an unmeasured confounder U for the A–Y relationship, Shi et al.[4] presented formal conditions for a negative control outcome W:

  1. Stable Unit Treatment Value Assumption (SUTVA): holds for both Y and W with regard to the treatment A.
  2. Latent Exchangeability: given X and U, the potential outcome Y(a) is independent of the treatment A.
  3. Irrelevancy: ensures the irrelevancy of the treatment for the NCO.
    1. W(a) = W: there is no causal effect of A on W given X and U.
    2. W ⊥ A | (X, U): the NCO is independent of the treatment given X and U.
  4. U-Comparability: the unmeasured confounders of the association between A and Y are the same as those for the association between A and W.

Given assumptions 1–4, a non-null association between A and W can only be explained by U, and not by another mechanism. A possible violation of Latent Exchangeability arises when only the people who would benefit from a medicine take it, even when both X and U are the same. For example, we would expect that, given age and medical history (X) and general health awareness (U), the intake of the influenza vaccine A will be independent of the potential influenza-related deaths Y(a). Otherwise, the Latent Exchangeability assumption is violated, and no identification can be made.

A violation of Irrelevancy occurs when there is a causal effect of A on W. For example, we would expect that, given X and U, the influenza vaccine does not influence all-cause mortality. If, however, during the influenza-vaccine medical visit the physician also performs a general physical examination, recommends good health habits, and prescribes vitamins and essential drugs, then there is likely a causal effect of A on W (conditional on X and U). Therefore, W cannot be used as an NCO, as the test might fail even when the causal design is valid.

U-Comparability is violated when the unmeasured confounders of the A–W association differ from those of the A–Y association; in that case, a lack of association between A and W provides no evidence about confounding of the A–Y relationship. This violation occurs when we choose a poor NCO that is uncorrelated, or only very weakly correlated, with the unmeasured confounders.

Positive control

Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), we can compare it against a different test that is already known to work. The well-established test is a positive control, since we already know that the answer to the question (whether the test works) is yes.

Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.

If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effect as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did.

When possible, multiple positive controls may be used—if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
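The standard-curve idea can be sketched with a linear fit; the enzyme quantities and activity readings below are hypothetical:

```python
import numpy as np

# Multiple positive controls with known enzyme quantities define a linear
# standard curve; an unknown sample's quantity is read off the fitted line.
known_qty = np.array([0.0, 1.0, 2.0, 4.0, 8.0])   # e.g. µg of purified enzyme
activity = np.array([0.1, 2.1, 4.0, 8.2, 15.9])   # measured signal

slope, intercept = np.polyfit(known_qty, activity, 1)   # fit the standard curve

def quantity_from_activity(a):
    """Invert the standard curve to estimate the enzyme quantity of a sample."""
    return (a - intercept) / slope

unknown = quantity_from_activity(6.0)   # estimate for an unknown sample's reading
print(round(unknown, 2))
```

Real assays often use nonlinear calibration models, but the principle of interpolating an unknown against known positive controls is the same.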

Randomization

In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that any differences are due to chance rather than systematic bias, thus correcting for systematic errors.

For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
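The plot-assignment procedure can be sketched in a few lines (the plot count and group sizes are hypothetical):

```python
import random

random.seed(0)

# Randomized assignment: 20 plots are shuffled and split evenly between the
# fertilized and unfertilized treatments, so soil variation is spread across
# both groups by chance alone.
plots = list(range(20))
random.shuffle(plots)
fertilized, unfertilized = plots[:10], plots[10:]

print(sorted(fertilized), sorted(unfertilized))
```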

Blind experiments

Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo. If this information were to become available to trial participants, patients could receive a larger placebo effect, researchers could influence the experiment to meet their expectations (the observer effect), and evaluators could be subject to confirmation bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding.
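One common way to implement blinding is coded allocation held by an independent coordinator; the subject names, arm sizes, and vial labels below are hypothetical:

```python
import random

random.seed(5)

# Double-blind allocation sketch: the coordinator holds the key mapping
# subjects to arms; researchers and subjects see only coded vial labels.
subjects = [f"subject-{i}" for i in range(8)]
arms = ["drug"] * 4 + ["placebo"] * 4
random.shuffle(arms)

allocation_key = dict(zip(subjects, arms))    # kept sealed until the trial ends
blinded_view = {s: f"vial-{i:03d}" for i, s in enumerate(subjects)}  # what everyone else sees

print(blinded_view["subject-0"])
```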

During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.[12]

Blinding is an important tool of the scientific method, and is used in many fields of research. In some fields, such as medicine, it is considered essential.[13] In clinical research, a trial that is not blinded is called an open trial.

References

  1. ^ Life, Vol. II: Evolution, Diversity and Ecology: (Chs. 1, 21–33, 52–57). W. H. Freeman. 2006. p. 15. ISBN 978-0-7167-7674-1. Retrieved 14 February 2015.
  2. ^ a b Johnson PD, Besselsen DG (2002). "Practical aspects of experimental design in animal research" (PDF). ILAR J. 43 (4): 202–206. doi:10.1093/ilar.43.4.202. PMID 12391395. Archived from the original (PDF) on 2010-05-29.
  3. ^ Mann, Bikaramjit; Wood, Evan (2012-05-16). "Confounding in Observational Studies Explained". The Open Epidemiology Journal. 5 (1): 18–20. doi:10.2174/1874297101205010018. ISSN 1874-2971.
  4. ^ a b Shi, Xu; Miao, Wang; Tchetgen, Eric Tchetgen (2020-10-15). "A Selective Review of Negative Control Methods in Epidemiology". Current Epidemiology Reports. 7 (4): 190–202. arXiv:2009.05641. doi:10.1007/s40471-020-00243-4. ISSN 2196-2995. PMC 8118596. PMID 33996381.
  5. ^ Shrout, Patrick E. (January 1980). "Quasi-experimentation: Design and analysis issues for field settings". Evaluation and Program Planning. 3 (2): 145–147. doi:10.1016/0149-7189(80)90063-4. ISSN 0149-7189.
  6. ^ Danieli, Oren; Nevo, Daniel; Walk, Itai; Weinstein, Bar; Zeltzer, Dan (2024-05-09), Negative Control Falsification Tests for Instrumental Variable Designs, arXiv:2312.15624
  7. ^ Lousdal, Mette Lise; Lash, Timothy L; Flanders, W Dana; Brookhart, M Alan; Kristiansen, Ivar Sønbø; Kalager, Mette; Støvring, Henrik (2020-03-25). "Negative controls to detect uncontrolled confounding in observational studies of mammographic screening comparing participants and non-participants". International Journal of Epidemiology. 49 (3): 1032–1042. doi:10.1093/ije/dyaa029. ISSN 0300-5771. PMC 7394947. PMID 32211885.
  8. ^ Lipsitch, Marc; Tchetgen Tchetgen, Eric; Cohen, Ted (May 2010). "Negative Controls". Epidemiology. 21 (3): 383–388. doi:10.1097/ede.0b013e3181d61eeb. ISSN 1044-3983. PMC 3053408. PMID 20335814.
  9. ^ Yerushalmy, J (October 2014). "The relationship of parents' cigarette smoking to outcome of pregnancy—implications as to the problem of inferring causation from observed associations1". International Journal of Epidemiology. 43 (5): 1355–1366. doi:10.1093/ije/dyu160. ISSN 1464-3685. PMID 25301860.
  10. ^ Jackson, Lisa A; Jackson, Michael L; Nelson, Jennifer C; Neuzil, Kathleen M; Weiss, Noel S (2005-12-20). "Evidence of bias in estimates of influenza vaccine effectiveness in seniors". International Journal of Epidemiology. 35 (2): 337–344. doi:10.1093/ije/dyi274. ISSN 1464-3685. PMID 16368725.
  11. ^ Sheppard, Lianne; Levy, Drew; Norris, Gary; Larson, Timothy V.; Koenig, Jane Q. (January 1999). "Effects of Ambient Air Pollution on Nonelderly Asthma Hospital Admissions in Seattle, Washington, 1987–1994". Epidemiology. 10 (1): 23–30. doi:10.1097/00001648-199901000-00006. ISSN 1044-3983. PMID 9888276.
  12. ^ Bello, Segun; Moustgaard, Helene; Hróbjartsson, Asbjørn (October 2014). "The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications". Journal of Clinical Epidemiology. 67 (10): 1059–1069. doi:10.1016/j.jclinepi.2014.05.007. ISSN 1878-5921. PMID 24973822.
  13. ^ "Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009)". cebm.net. 11 June 2009. Archived from the original on 26 October 2017. Retrieved 2 May 2018.
  14. ^ Lind, James. "A Treatise of the Scurvy" (PDF). Archived from the original (PDF) on 2 June 2015.
  15. ^ Simon, Harvey B. (2002). The Harvard Medical School guide to men's health. New York: Free Press. p. 31. ISBN 0-684-87181-5.