The definition of causal effect applied here is often referred to as the "Rubin Causal Model" (Rubin, 1974). Suppose one is interested in the effect of some treatment on an outcome of interest Y, and, to simplify, suppose that the treatment is dichotomous (that is, treatment versus control).

The potential outcome Y(J) is defined as the value of the outcome Y under treatment J. The causal effect of treatment (versus control) on unit i is then defined as the difference in potential outcomes Yi(1) − Yi(0), understood as follows: for a selected unit i (e.g., a person at a given time), applying the treatment Ji = 1 yields Yi(1), while applying the control Ji = 0 to the same unit yields Yi(0), holding all other factors constant.
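The unit-level definition can be sketched in a few lines of Python; the outcome values below are invented purely for illustration:

```python
# Sketch of the unit-level causal effect Y_i(1) - Y_i(0).
# The outcome values are hypothetical.

def unit_causal_effect(y1: int, y0: int) -> int:
    """Causal effect for one unit: potential outcome under treatment
    minus potential outcome under control."""
    return y1 - y0

# Unit i: outcome 1 under treatment (J_i = 1), outcome 0 under control (J_i = 0)
effect = unit_causal_effect(y1=1, y0=0)
print(effect)  # 1: for this unit, the treatment caused the outcome
```

Of course, as the next paragraphs explain, only one of the two arguments is ever observed for a real unit.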

Example of Causal Relationship

For example, if what happens to a subject under the treatment differs from what would have happened to the same subject at the same time under the control, and no other factor of the subject changes, the difference between treatment and control is said to be the cause of the difference in outcomes.

The problem with applying this definition is that, for a given unit or situation, one cannot observe what happens both when Ji = 0 and when Ji = 1. One of the two potential outcomes is never observed, so the causal effect at the unit level cannot be estimated. However, under some assumptions about the stability of treatment and the independence between subjects, it is possible to estimate the average causal effect in a population of units or situations. To do this, since we compare situations in which J = 1 with those in which J = 0, we must use techniques that make the units of analysis as similar as possible with respect to the other causal factors.

Understanding causality is an important goal for policy analysis. If you understand which factors are causal and how they affect the outcome of interest, you can determine how changes in causal factors, even for a situation somewhat different from the current one, will affect the probability of various values for the outcome of interest. However, if one simply determines that a factor is associated with an outcome, it may be that the specific circumstances produced an apparent relationship that was actually a byproduct of confounding factors related to treatment and outcomes.

Individual Causal Effect

Zeus is a patient waiting for a heart transplant. On January 1 he received a new heart. Five days later, he died. Imagine that we can somehow know, perhaps by divine revelation, that if Zeus had not received a heart transplant on January 1 (with nothing else changing in his life), he would have been alive five days later. Most people who have this information would agree that the transplant caused Zeus’ death. The intervention had a causal effect on Zeus’ survival for five days.

Another patient, Hera, received a heart transplant on January 1. Five days later she was alive. Again, let’s imagine that we can somehow know that if Hera had not received the heart on January 1 (all other things being equal), she would still be alive five days later. The transplant had no causal effect on Hera’s five-day survival.

Results Analysis

If the two outcomes differ, we say that action A has a causal effect (causative or preventive) on the outcome. Otherwise, we say that action A has no causal effect on the outcome. In epidemiology, A is often referred to as exposure or treatment.

The next step is to make this causal intuition of ours susceptible to mathematical and statistical analysis by introducing some notation. Consider a dichotomous exposure variable A (1: exposed, 0: unexposed) and a dichotomous outcome variable Y (1: death, 0: survival). Let Y^{a=1} be the outcome variable that would have been observed under exposure value a = 1, and Y^{a=0} the outcome variable that would have been observed under exposure value a = 0. (Lowercase a denotes a particular value of the variable A.) Zeus has Y^{a=1} = 1 and Y^{a=0} = 0 because he died when exposed but would have survived had he not been exposed.

Causal Effect for Each Subject

We are now prepared to provide a formal definition of causal effect for each subject: the exposure has a causal effect if Y^{a=0} ≠ Y^{a=1}. When the exposure has no causal effect for any subject, that is, Y^{a=0} = Y^{a=1} for all subjects, we say that the sharp causal null hypothesis is true.

The variables Y^{a=1} and Y^{a=0} are known as potential outcomes because one of them describes the value of the subject’s outcome that would have been observed under a potential exposure value that the subject did not actually experience. For example, Y^{a=0} is a potential outcome for the exposed Zeus, and Y^{a=1} is a potential outcome for the unexposed Hera. Since these outcomes would have been observed under situations that did not actually occur (i.e., situations counter to fact), they are also known as counterfactual outcomes. For each subject, one of the counterfactual outcomes is in fact factual: the one corresponding to the exposure level or treatment regimen the subject actually received. For example, if A = 1 for Zeus, then Y^{a=1} = Y^{a=A} = Y for him.

Problems in Causal Inference

The fundamental problem of causal inference should now be clear. Individual causal effects are defined as a contrast of the values of the counterfactual outcomes, but only one of those values is observed. All the other counterfactual outcomes are missing. The unhappy conclusion is that, in general, individual causal effects cannot be identified because of missing data.
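The missing-data structure of the problem can be made concrete with a small sketch; the subjects and outcome values are hypothetical, loosely echoing the Zeus and Hera examples above:

```python
# Sketch of the fundamental problem: for each subject we observe only the
# potential outcome corresponding to the exposure actually received;
# the other counterfactual outcome is missing (None).
subjects = [
    {"name": "Zeus", "A": 1, "Y_a1": 1, "Y_a0": 0},  # causal effect: 1 - 0
    {"name": "Hera", "A": 1, "Y_a1": 0, "Y_a0": 0},  # no causal effect
]

def observed_row(s):
    """Return what a data set would actually contain for subject s."""
    y = s["Y_a1"] if s["A"] == 1 else s["Y_a0"]   # consistency: Y = Y^A
    missing = "Y_a0" if s["A"] == 1 else "Y_a1"   # the unobserved counterfactual
    return {"name": s["name"], "A": s["A"], "Y": y, missing: None}

for s in subjects:
    print(observed_row(s))
```

The full potential-outcome table exists only "by divine revelation"; any real data set looks like the output of `observed_row`.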

Causal Effect in the Population

Following Neyman (1923, translated 1990), we define the probability Pr[Y^a = 1] as the proportion of subjects that would have developed the outcome Y = 1 had all subjects in the population of interest received the exposure value a. We also refer to Pr[Y^a = 1] as the risk of Y^a. Exposure has a causal effect in the population if Pr[Y^{a=1} = 1] ≠ Pr[Y^{a=0} = 1].

Suppose, for example, a population of 20 subjects in which Pr[Y^{a=1} = 1] = 10/20 = 0.5 and Pr[Y^{a=0} = 1] = 10/20 = 0.5. That is, 50% of the patients would have died had they all received a heart transplant, and 50% would have died had none received one. Exposure has no effect on the outcome at the population level. When exposure has no causal effect in the population, we say that the causal null hypothesis is true.
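Assuming a hypothetical 20-subject population matching the proportions just quoted, the two counterfactual risks can be computed directly:

```python
from fractions import Fraction

# Hypothetical 20-subject population: 10 of 20 would die under transplant
# (a = 1) and 10 of 20 would die under no transplant (a = 0).
Y_a1 = [1] * 10 + [0] * 10   # potential outcomes if everyone were treated
Y_a0 = [1] * 10 + [0] * 10   # potential outcomes if no one were treated

risk_a1 = Fraction(sum(Y_a1), len(Y_a1))  # Pr[Y^{a=1} = 1]
risk_a0 = Fraction(sum(Y_a0), len(Y_a0))  # Pr[Y^{a=0} = 1]

print(risk_a1, risk_a0)     # 1/2 1/2
print(risk_a1 == risk_a0)   # True: the causal null holds in this population
```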

Unlike individual causal effects, population causal effects can sometimes be calculated, or, more rigorously, consistently estimated, as discussed below. Some equivalent definitions of causal effect are:

(a) Pr[Y^{a=1} = 1] − Pr[Y^{a=0} = 1] ≠ 0

(b) Pr[Y^{a=1} = 1] / Pr[Y^{a=0} = 1] ≠ 1

(c) (Pr[Y^{a=1} = 1] / Pr[Y^{a=1} = 0]) / (Pr[Y^{a=0} = 1] / Pr[Y^{a=0} = 0]) ≠ 1

where the left-hand side of inequalities (a), (b) and (c) is the causal risk difference, the causal risk ratio, and the causal odds ratio, respectively. The causal risk difference, risk ratio and odds ratio (and other causal parameters) can also be used to quantify the strength of the causal effect when it exists. They measure the same causal effect on different scales, and we refer to them as effect measures.
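The three effect measures can be computed from a pair of counterfactual risks; the helper below is a sketch, and the input values are hypothetical:

```python
# The three effect measures from inequalities (a), (b) and (c), computed
# on a pair of counterfactual risks. Input values are hypothetical.
def effect_measures(p1: float, p0: float):
    """p1 = Pr[Y^{a=1} = 1], p0 = Pr[Y^{a=0} = 1]."""
    risk_difference = p1 - p0
    risk_ratio = p1 / p0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return risk_difference, risk_ratio, odds_ratio

# Under the causal null (both risks 0.5), all three measures are null:
rd, rr, orr = effect_measures(p1=0.5, p0=0.5)
print(rd, rr, orr)  # 0.0 1.0 1.0
```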

Association and Causality

To characterize the association, we first define the probability Pr[Y = 1 | A = a] as the proportion of subjects who developed the outcome Y = 1 among those subjects in the population of interest who actually received the exposure value a. We also refer to Pr[Y = 1 | A = a] as the risk of Y given A = a. Exposure and outcome are associated if Pr[Y = 1 | A = 1] ≠ Pr[Y = 1 | A = 0]. In our population, exposure and outcome are associated because Pr[Y = 1 | A = 1] = 7/13 and Pr[Y = 1 | A = 0] = 3/7. Some equivalent definitions of association are:

(a) Pr[Y = 1 | A = 1] − Pr[Y = 1 | A = 0] ≠ 0

(b) Pr[Y = 1 | A = 1] / Pr[Y = 1 | A = 0] ≠ 1

(c) (Pr[Y = 1 | A = 1] / Pr[Y = 0 | A = 1]) / (Pr[Y = 1 | A = 0] / Pr[Y = 0 | A = 0]) ≠ 1

where the left-hand side of inequalities (a), (b) and (c) is the associational risk difference, risk ratio, and odds ratio, respectively. The associational risk difference, risk ratio and odds ratio (and other association parameters) can also be used to quantify the strength of the association when it exists. They measure the same association on different scales, and we refer to them as measures of association.
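Using the observed risks quoted above (7/13 among the exposed, 3/7 among the unexposed), the three association measures work out as follows; exact fractions avoid rounding:

```python
from fractions import Fraction

# Observed risks from the text: Pr[Y=1|A=1] = 7/13 and Pr[Y=1|A=0] = 3/7.
p_exposed = Fraction(7, 13)
p_unexposed = Fraction(3, 7)

risk_difference = p_exposed - p_unexposed
risk_ratio = p_exposed / p_unexposed
odds_ratio = (p_exposed / (1 - p_exposed)) / (p_unexposed / (1 - p_unexposed))

print(risk_difference)  # 10/91 != 0, so A and Y are associated
print(risk_ratio)       # 49/39
print(odds_ratio)       # 14/9
```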

Lack of Association

When A and Y are not associated, we say that A does not predict Y, and vice versa. Lack of association is represented by Y⨿A (or, equivalently, A⨿Y), which is read as "Y and A are independent".

Note that the risk Pr[Y = 1 | A = a] is calculated using the subset of subjects in the population who actually received the exposure value a (i.e., it is a conditional probability), while the risk Pr[Y^a = 1] is calculated using all subjects in the population, had they received the counterfactual exposure value a (i.e., it is an unconditional or marginal probability).

Therefore, association is defined by a different risk in two disjoint subsets of the population determined by the subjects’ actual exposure value, while causation is defined by a different risk in the same subset (e.g., the entire population) under two potential exposure values (Fig. 1). This radically different definition explains the well-known adage "association is not causation". When a measure of association differs from the corresponding effect measure, we say there is bias or confounding.

Calculation of Causal Effects through Randomization

Unlike association measures, effect measures cannot be calculated directly because of missing data. However, effect measures can be calculated, or, more rigorously, consistently estimated, in randomized experiments.

Suppose we have an (almost infinite) population and we flip a coin for each subject of that population. We assign the subject to group 1 if the coin comes up tails, and to group 2 if it comes up heads. We then administer the treatment or exposure of interest (A = 1) to subjects in group 1 and placebo (A = 0) to those in group 2. Five days later, at the end of the study, we calculate the mortality risks in each group, Pr[Y = 1 | A = 1] and Pr[Y = 1 | A = 0]. For now, suppose this randomized experiment is ideal in all other respects (no loss to follow-up, full adherence to assigned treatment, blind assignment).

We will show that, in such a study, the observed risk Pr[Y = 1 | A = a] equals the counterfactual risk Pr[Y^a = 1], and therefore the associational risk ratio equals the causal risk ratio.

Random Assignment

First, note that when subjects are randomly assigned to groups 1 and 2, the proportion of deaths among the exposed, Pr[Y = 1 | A = 1], will be the same whether subjects in group 1 receive the exposure and those in group 2 receive placebo, or vice versa. Because group membership is random, both groups are "comparable": which particular group received the exposure is irrelevant to the value of Pr[Y = 1 | A = 1]. (The same reasoning applies to Pr[Y = 1 | A = 0].) Formally, we say that both groups are exchangeable.


Exchangeability means that the risk of death in group 1 would have been the same as the risk of death in group 2 had subjects in group 1 received the exposure given to those in group 2. That is, the risk under the potential exposure value a among the exposed, Pr[Y^a = 1 | A = 1], equals the risk under the potential exposure value a among the unexposed, Pr[Y^a = 1 | A = 0], for both a = 0 and a = 1.

An obvious consequence of these (conditional) risks being equal in all subsets of the population defined by exposure status is that they must also equal the (marginal) risk under exposure value a in the whole population: Pr[Y^a = 1 | A = 1] = Pr[Y^a = 1 | A = 0] = Pr[Y^a = 1]. In other words, under exchangeability, actual exposure does not predict the counterfactual outcome; they are independent, i.e., Y^a⨿A for all values a. Randomization produces exchangeability.

We only need one more step to show that the observed risk Pr[Y = 1 | A = a] equals the counterfactual risk Pr[Y^a = 1] in ideal randomized experiments. By definition, the value of the counterfactual outcome Y^a for subjects who actually received the exposure value a is their observed outcome value Y. So, among those who actually received the exposure value a, the risk under the potential exposure value a is trivially equal to the observed risk. That is, Pr[Y^a = 1 | A = a] = Pr[Y = 1 | A = a].

Exchangeability and Conditional Risk

Let us now combine the results of the previous two paragraphs. Under exchangeability, Y^a⨿A for all a, the conditional risk among those exposed to a equals the marginal risk had the entire population been exposed to a: Pr[Y^a = 1 | A = a] = Pr[Y^a = 1]. And by definition of counterfactual outcome, Pr[Y^a = 1 | A = a] = Pr[Y = 1 | A = a]. Therefore, the observed risk Pr[Y = 1 | A = a] equals the counterfactual risk Pr[Y^a = 1].
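The argument can be illustrated with a simulation sketch: fixed potential outcomes, a coin-flip assignment, and the consistency rule that the observed outcome equals the counterfactual outcome under the exposure actually received. All parameters (counterfactual risks of 0.5 and 0.3) are invented:

```python
import random

random.seed(0)

# Each subject carries two fixed potential outcomes; a fair coin assigns
# exposure. Under randomization, the observed (conditional) risks
# approximate the counterfactual (marginal) risks.
n = 200_000
population = [(random.random() < 0.5,   # Y_a1: would die if treated
               random.random() < 0.3)   # Y_a0: would die if untreated
              for _ in range(n)]

treated_deaths = untreated_deaths = treated = untreated = 0
for y_a1, y_a0 in population:
    a = random.random() < 0.5           # coin flip: randomized assignment
    if a:
        treated += 1
        treated_deaths += y_a1          # consistency: observed Y = Y^{a=1}
    else:
        untreated += 1
        untreated_deaths += y_a0        # consistency: observed Y = Y^{a=0}

obs_risk_1 = treated_deaths / treated       # Pr[Y=1|A=1]
obs_risk_0 = untreated_deaths / untreated   # Pr[Y=1|A=0]
print(round(obs_risk_1, 2), round(obs_risk_0, 2))  # ≈ 0.5 and ≈ 0.3
```

The observed risks recover the counterfactual risks because the coin flip makes the two groups exchangeable.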

In ideal randomized experiments, association is causation. In contrast, in non-randomized (e.g., observational) studies the association is not necessarily causal, because of the possible lack of exchangeability between exposed and unexposed subjects. For example, in our heart transplant study, the risk of death in the absence of treatment differs between the exposed and the unexposed: Pr[Y^{a=0} = 1 | A = 1] = 7/13 ≠ Pr[Y^{a=0} = 1 | A = 0] = 3/7. We say that the exposed had a worse prognosis, and therefore a higher risk of death, than the unexposed, i.e., Y^a⨿A does not hold for a = 0.

Interventions and Causal Issues

So far we have assumed that counterfactual outcomes exist and are well defined. However, this is not always the case.

Suppose women (S = 1) have a higher risk of developing a certain disease Y than men (S = 0), that is, Pr[Y = 1 | S = 1] > Pr[Y = 1 | S = 0]. Does sex S have a causal effect on the risk of Y, i.e., is Pr[Y^{s=1} = 1] > Pr[Y^{s=0} = 1]? This question is rather vague, because it is not clear what we mean by the risk of Y had everyone been a woman (or a man). Do we mean the risk of Y if everyone had been "carrying a pair of X chromosomes", "raised as a woman", "with female genitalia", or "with high estrogen levels between adolescence and menopause"? Each of these definitions of the exposure "female" would lead to a different causal effect.

To give unequivocal meaning to a causal question, we must be able to describe the interventions that would allow us to calculate the causal effect in an ideal random experiment.

The fact that some interventions appear technically infeasible or simply far-fetched indicates that the formulation of certain causal questions (e.g., the effect of sex, of elevated serum LDL-cholesterol, or of elevated HIV viral load on the risk of certain diseases) is not always straightforward. A counterfactual approach to causal inference highlights the imprecision of ambiguous causal questions and the need for a common understanding of the interventions involved.

Limitations of Randomized Experiments

Below we review some common methodological problems that can introduce bias into randomized experiments. Following Robins (1987), to fix ideas, suppose we are interested in the causal effect of a heart transplant on one-year survival. We start with an (almost infinite) population of potential transplant recipients, randomly assign each subject in the population to transplantation (A = 1) or medical treatment (A = 0), and determine how many subjects die during the following year (Y = 1) in each group. We then attempt to measure the effect of heart transplantation on survival by calculating the associational risk ratio Pr[Y = 1 | A = 1] / Pr[Y = 1 | A = 0], which in an ideal experiment equals the causal risk ratio Pr[Y^{a=1} = 1] / Pr[Y^{a=0} = 1]. Consider the following issues:

Loss to Follow-up

Subjects may be lost during follow-up or leave the study before their outcome is determined. When this occurs, the risk Pr[Y = 1 | A = a] cannot be calculated, because the value of Y is unavailable for some subjects. Instead, we can calculate Pr[Y = 1 | A = a, C = 0], where C indicates whether the subject was lost (1: yes, 0: no). This restriction to subjects with C = 0 is problematic: subjects who were lost (C = 1) may not be exchangeable with subjects who remained until the end of the study (C = 0).

For example, if subjects who did not receive a transplant (A = 0) and who had more severe illness decide to drop out of the study, then the risk Pr[Y = 1 | A = 0, C = 0] among those who remained in the study would be lower than the risk Pr[Y = 1 | A = 0] among all those originally assigned to medical treatment. Our measure of association Pr[Y = 1 | A = 1, C = 0] / Pr[Y = 1 | A = 0, C = 0] would not, in general, equal the effect measure Pr[Y^{a=1} = 1] / Pr[Y^{a=0} = 1].
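A small simulation illustrates this selection bias; the numbers (a true risk of 0.4 in the untreated arm and a 50% dropout rate among the severely ill untreated) are hypothetical:

```python
import random

random.seed(1)

# Untreated arm (A = 0): the true risk of death is 0.4, but subjects who
# would die (severely ill) drop out 50% of the time. The risk computed
# among those who remain (C = 0) is therefore biased downward.
n = 100_000
deaths_kept = kept = 0
for _ in range(n):
    y = random.random() < 0.4                  # subject's true outcome
    censored = y and random.random() < 0.5     # the severely ill tend to leave
    if not censored:
        kept += 1
        deaths_kept += y

biased_risk = deaths_kept / kept   # Pr[Y=1 | A=0, C=0]
print(round(biased_risk, 2))       # ≈ 0.25, well below the true risk of 0.4
```

Restricting to C = 0 compares subjects who are no longer exchangeable with those originally assigned to the arm.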


Non-compliance

Subjects may not comply with the assigned treatment. Let A be the exposure to which subjects were randomly assigned, and B the exposure they actually received. Suppose some subjects who had been assigned to medical treatment (A = 0) obtain a heart transplant outside the study (B = 1). In an "intention-to-treat" analysis, we calculate Pr[Y = 1 | A = a], which equals Pr[Y^a = 1]. However, we are not interested in the causal effect of the assignment A, a misclassified version of the true exposure B, but in the causal effect of B itself.

The alternative "as-treated" approach, using Pr[Y = 1 | B = b] for causal inference, is problematic. For example, if the most seriously ill subjects in group A = 0 seek a heart transplant (B = 1) outside the study, then group B = 1 would include a higher proportion of severely ill subjects than group B = 0. Groups B = 1 and B = 0 would not be exchangeable, i.e., Pr[Y = 1 | B = b] ≠ Pr[Y^b = 1].

In the presence of non-compliance, an intention-to-treat analysis guarantees the exchangeability of the groups defined by a misclassified exposure (the original assignment), whereas an as-treated analysis guarantees a correct classification of the exposure but not the exchangeability of the groups it defines. Intention-to-treat analysis is nevertheless often preferred because, unlike as-treated analysis, it yields an unbiased measure of association when the sharp causal null hypothesis holds for exposure B.
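This contrast can be illustrated with a simulation sketch under the sharp causal null for B: death depends only on illness severity, and severely ill subjects assigned A = 0 cross over to B = 1. All parameters are invented:

```python
import random

random.seed(2)

# Under the sharp causal null for B, the transplant has no effect on anyone;
# only severity drives death. Severe A=0 subjects cross over to B=1, so the
# as-treated contrast is biased while the intention-to-treat contrast is null.
n = 100_000
stats = {"A1": [0, 0], "A0": [0, 0], "B1": [0, 0], "B0": [0, 0]}
for _ in range(n):
    severe = random.random() < 0.3          # severity drives death, not B
    y = random.random() < (0.6 if severe else 0.2)
    a = random.random() < 0.5               # randomized assignment
    b = a or severe                         # severe A=0 subjects seek transplant
    for key, flag in (("A1", a), ("A0", not a), ("B1", b), ("B0", not b)):
        if flag:
            stats[key][0] += y              # deaths
            stats[key][1] += 1              # group size

itt = stats["A1"][0] / stats["A1"][1] - stats["A0"][0] / stats["A0"][1]
as_treated = stats["B1"][0] / stats["B1"][1] - stats["B0"][0] / stats["B0"][1]
print(round(itt, 2), round(as_treated, 2))  # ITT ≈ 0; as-treated clearly > 0
```

The intention-to-treat risk difference correctly reflects the null, while the as-treated difference is driven entirely by the non-exchangeability of the B groups.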


Unblinding

When study subjects are aware of the treatment they are receiving (as in our heart transplant study), they may change their behavior accordingly. For example, those who received a transplant may change their diet to keep their new heart healthy. The equality Pr[Y = 1 | A = a] = Pr[Y^a = 1] is still valid, but now the causal effect of A combines the effects of transplantation and of the change in diet. To avoid this problem, the exposure level assigned to each group is hidden from the subjects and their doctors (they are "blinded"), whenever possible.

The aim is to ensure that the full effect, if any, of the assigned exposure A is solely attributable to the exposure received B (the heart transplant in our example). When this goal is achieved, we say that the exclusion restriction holds, i.e., Y^{a=0,b} = Y^{a=1,b} for all subjects and all values b and, in particular, for the value of B observed for each subject. In unblinded studies, or when blinding fails (e.g., well-known side effects of a treatment make it apparent who is taking it), the exclusion restriction cannot be guaranteed, and therefore the intention-to-treat analysis may not yield an unbiased measure of association even under the sharp causal null hypothesis for exposure B.

Unblinding and Exchangeability

In summary, the fact that the exchangeability Y^a⨿A holds in a well-designed randomized experiment does not guarantee an unbiased estimate of the causal effect, because (i) Y may not be measured for all subjects (loss to follow-up), (ii) A may be a misclassified version of the true exposure (non-compliance), and (iii) A may be a combination of the exposure of interest plus other actions (unblinding). Causal inference from randomized studies in the presence of these problems requires assumptions and analytical methods similar to those of causal inference from observational studies.

These methodological problems aside, randomized experiments may be unfeasible for ethical, logistical or financial reasons. For example, it is questionable whether an ethics committee would have approved our heart transplant study. Hearts are scarce, and society prefers to assign them to the subjects most likely to benefit from transplantation rather than randomly among potential recipients. Randomized experiments on harmful exposures (e.g., cigarette smoking) are also often unacceptable. Often, the only option is to conduct observational studies, in which exchangeability is not guaranteed.


Bibliographic References

Neyman J. On the application of probability theory to agricultural experiments: essay on principles, Section 9 (1923). Translated in Statistical Science 1990;5:465–80.

Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 1974;66:688–701.

Robins JM. Addendum to "A new approach to causal inference in mortality studies with sustained exposure periods: application to control of the healthy worker survivor effect". Computers and Mathematics with Applications 1987;14:923–45 (erratum: Computers and Mathematics with Applications 1987;18:477).
