
Which Of The Following Factors Is Needed To Establish Causality

While it doesn’t apply all of the time, generally speaking when we design a research project or conduct data analysis we’re interested in establishing causality. In an ideal world, we’d be able to state that some variable X is causally related to another variable Y, in that the presence of X and/or a change in X always results in the appearance of and/or a change in Y. What’s more, we’d want to know the magnitude of that effect: for every w unit change in X, we observe a z unit change in Y.

To establish causality you must have the following three things. The must is really important here, and it’s the must that leads to common errors in causal inference, as I’ll explain below. These three are jointly necessary and sufficient conditions for establishing causality: all three are required, they are equally important, and if you have all three you need nothing further…

Temporal sequencing — X must come before Y

Non-spurious relationship — The relationship between X and Y cannot occur by chance alone

Eliminate alternate causes — There is no other intervening or unaccounted-for variable that is responsible for the relationship between X and Y

Temporal Sequencing

This one is pretty straightforward: X has to come before Y. There is, though, the tricky question of just how long before the observed change in Y that X occurred or changed. Let me illustrate…In examining the relationship between research and development intensity (R&D spending / sales) and changes in firm value (book value of the firm), you’ll often see a non-significant or even negative relationship between R&D intensity and value in the same year. Over time, though, say one to three years later, you might observe a positive relationship between the two. So which is the ‘correct’ time period to model the relationship between R&D intensity and firm value? Well, the answer is that it depends on your theory and what relationship you propose. Failing to specify the timeframe, and to provide a rationale for it, is a key threat to causal inference. Just because you lagged X by one year, as is often done in strategic management analysis, does not mean that the one-year lag is the appropriate temporal separation for the variables under study. Similarly, in an experiment, just because you manipulate X before observing Y doesn’t mean that you’ve necessarily specified an appropriate temporal sequence.

The most common trap with temporal sequencing, though, is the post hoc ergo propter hoc fallacy: after it, therefore because of it. The fallacy is attributing a causal relationship from X to Y simply because X happened before you observed Y. Think about it this way…yesterday I had a cold. Last night I ate a pint of Chubby Hubby ice cream. This morning my cold had gone away; therefore, Chubby Hubby cured my cold. While a part of me certainly wishes Chubby Hubby were the magic cure for the common cold, it was by pure random chance that eating the ice cream and my cold going away occurred at roughly the same time. Because there is no cure for the common cold, we know for sure that Chubby Hubby had nothing to do with my cold symptoms dissipating, despite my enjoying a pint the night before.

The post hoc fallacy is particularly common in business, for example when we attribute changes in performance to a particular strategic decision solely on the basis of having made that decision in the past.
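To make the lag-specification point concrete, here’s a minimal sketch in Python (NumPy, pandas, and statsmodels on simulated data; the variable names, the two-year ‘true’ lag, and the effect sizes are all invented for illustration) that compares a same-year model against one-, two-, and three-year lagged models:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated firm-year data where R&D intensity pays off two years later
n = 500
rd = rng.normal(0.05, 0.02, n)                 # R&D spending / sales
value = np.empty(n)
value[:2] = rng.normal(0, 0.1, 2)
value[2:] = 5.0 * rd[:-2] + rng.normal(0, 0.1, n - 2)  # true effect at a 2-year lag

df = pd.DataFrame({"rd": rd, "value": value})

# Compare the same-year model against one-, two-, and three-year lags
for lag in range(4):
    col = f"rd_lag{lag}"
    df[col] = df["rd"].shift(lag)
    fit = smf.ols(f"value ~ {col}", data=df.dropna()).fit()
    print(f"lag {lag}: coef = {fit.params[col]:.2f}, p = {fit.pvalues[col]:.3f}")
```

With this setup only the two-year lag recovers a strong positive coefficient, while the same-year model looks like noise; which lag is the ‘right’ one to report is exactly the question your theory has to answer.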


Non-Spurious Relationship

This one gets a little tricky, and I’ll talk more about it in a post on null hypothesis testing and interpreting p-values, but the gist is to demonstrate that the relationship between X and Y is not due to chance alone. This is where statistics comes into play. For example, with the common p < .05 standard in statistical testing, what you’re saying is that there is a 1 in 20 chance that a difference as large as the one you observed, or larger, between two groups could occur by chance alone. With a p value of .5, that would be a 1 in 2 chance; with a p value of .001, a 1 in 1,000 chance; and so on.

The kicker is that outside the realm of pure mathematics, the best that we can do is to provide evidence against accepting the view that the relationship between X and Y happened by pure chance. This is why we say that you can’t ‘prove’ the null, or ‘accept’ the null, but rather that you failed to reject the null, with the null being that there is no material difference between two groups (for example, a group that took a new drug treatment and a group that didn’t). Effectively, we broadly accept the presence of a causal relationship if through statistical analysis we establish a low probability that there is no meaningful difference, but we don’t actually know with certainty that there is a ‘true’ causal relationship between X and Y. As we’ll talk about in the post on p-hacking, with the 1 in 20 (p < .05) standard, if you conduct 20 statistical tests on the same data, you should expect about one of them to reject the null hypothesis by pure random chance.
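Here’s a minimal sketch of that last point in Python (NumPy and SciPy; the group sizes, means, and number of tests are arbitrary choices of mine). Both groups are drawn from the same distribution, so the null is true by construction and any rejection at p < .05 is a false positive:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n_tests, alpha = 20, 0.05
false_positives = 0

for _ in range(n_tests):
    # Both groups come from the same distribution: the null is true by construction
    group_a = rng.normal(loc=100, scale=15, size=50)
    group_b = rng.normal(loc=100, scale=15, size=50)
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} tests rejected the null at p < {alpha}")
# Expect about n_tests * alpha = 1 false positive per run; the chance of at
# least one is 1 - (1 - alpha) ** n_tests, roughly 64%.
```

Run it a few times with different seeds and the count bounces around zero, one, or two, which is the multiple-testing problem in miniature.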


Eliminate Alternate Causes

Citing the oft-used example, if you were to collect data on shoe size and intelligence, you would find a strong positive correlation. The reason is not because size matters (sorry, couldn’t resist), but because there is an un-modeled variable that accounts for this relationship, in this case the age of the respondent. To establish a causal relationship, there must be no third (or additional) factor that accounts for the relationship between X and Y. As I’ll talk about in a discussion of the limitations of control variables, including controls in a statistical model may account for those particular confounds on the X -> Y relationship, but controls will not deal with all possible confounds. Fortunately, in a controlled experiment with perfect randomization, we can show that there are no un-modeled, or omitted, variables impacting our causal relationship. There are other ways to improve causal inference without using a randomized controlled experiment, but the material point, and the one that is often overlooked, is that absent an experiment with perfect randomization there is no way to know for sure that there are no other factors we haven’t included that may account for the observed relationship.
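A minimal sketch of the shoe-size example in Python (NumPy, pandas, and statsmodels; every number in the simulation is invented for illustration): age drives both shoe size and test score, so the raw relationship is strong, but the shoe-size coefficient collapses once age is held constant.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Age (the confounder) drives both shoe size and test score
n = 1000
age = rng.uniform(5, 18, n)
shoe = 0.8 * age + rng.normal(0, 1, n)
score = 4.0 * age + rng.normal(0, 5, n)

df = pd.DataFrame({"age": age, "shoe": shoe, "score": score})

naive = smf.ols("score ~ shoe", data=df).fit()           # confounded model
adjusted = smf.ols("score ~ shoe + age", data=df).fit()  # controls for age

print(f"raw correlation, shoe vs. score:    {df['shoe'].corr(df['score']):.2f}")
print(f"shoe coefficient, no control:       {naive.params['shoe']:.2f}")
print(f"shoe coefficient, controlling age:  {adjusted.params['shoe']:.2f}")
```

Here the control happens to fix things only because we simulated the confounder and measured it; the point above is that outside a randomized experiment you can never be sure you’ve measured them all.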

Key Takeaway

To establish causality you need to show three things: that X came before Y, that the observed relationship between X and Y didn’t happen by chance alone, and that there is nothing else that accounts for the X -> Y relationship. Absent any one of those, at best you can demonstrate a correlational (covariance) relationship, hence the phrase, correlation does not imply causation.


Further Reading:

Antonakis J, Bendahan S, Jacquart P, Lalive R. 2010. On making causal claims: A review and recommendations. The Leadership Quarterly 21: 1086-1120.

Antonakis J, Bendahan S, Lalive R. 2014. Causality and endogeneity: Problems and solutions. In The Oxford Handbook of Leadership and Organizations. Day DV (ed.), Oxford University Press: New York.
