Flaw/Vulnerable to Criticism

OK, folks, this is where things are going to start feeling different.

Remember how, in our discussion of inductive reasoning, we said that inductive arguments are about what is likely to happen, rather than what is certain to have happened or to happen in the future? That means there cannot be VALIDITY in an inductive argument, since validity refers to deductive certainty.

OK…

“So Jake, doesn’t that mean that all inductive arguments are flawed?”

Well, yes, in a manner of speaking. Let’s do a quick review of inductive reasoning to get some context:

Inductive causal arguments are inherently probabilistic and lack the certainty of deductive reasoning. While inductive reasoning is a cornerstone of scientific inquiry, its conclusions are always uncertain because they rely on generalizations drawn from limited evidence.

Philosopher David Hume highlighted this limitation through the problem of induction, which challenges the assumption that patterns observed in the past will necessarily hold in the future. For example, even if smoking is repeatedly associated with lung cancer, inductive reasoning cannot guarantee causation—it merely suggests it with varying degrees of probability. This inherent uncertainty stems from the inability of inductive arguments to account for all possible variables or unseen confounders.

In scientific contexts, the Bradford Hill Criteria provide a structured framework to evaluate the strength of an inductive causal argument. These criteria aim to bolster causal inferences by demanding multiple, independent lines of evidence. In particular:

  • The criterion of temporality requires that the proposed cause precedes the effect.

  • The criterion of plausible mechanism explains how the cause leads to the effect.

  • The criteria of strength, consistency, and gradient (grouped together here as Data) give us the statistics that make the connection between the cause and the effect more probable by showing that the association is strong, consistently observed, and dose-dependent.

While the Bradford Hill Criteria enhance the reliability of inductive causal arguments, they do not eliminate uncertainty. This reflects the fundamental limitation of inductive reasoning: even strong causal arguments are always open to revision upon new evidence or better understanding.

Expressing Flaws in Inductive Causal Arguments

When analyzing an inductive causal argument, it is important to articulate its flaws in terms of its causal reasoning. The following steps can help identify and express these flaws systematically:

  1. Identify the Causal Claim:
    Determine the relationship the argument is proposing (e.g., "X causes Y").

    • Example: The argument observes that "Exposure to chemical Z is correlated with an increase in disease Y" and proposes the causal claim "Chemical Z causes disease Y."

  2. Evaluate Premises Against Bradford Hill Criteria:
    Use the criteria to assess the argument’s robustness:

    • Temporality: Does the proposed cause precede the effect? If not, the causal link can be undermined.

    • Data: Does the author explicitly show that in the presence of the cause the effect is more frequent AND that in the absence of the cause the effect is less frequent? Is the association consistently observed across different studies or contexts?

    • Plausible Mechanism: Does the causal link make sense based on current scientific knowledge?


      Flaws often emerge when these criteria are not adequately addressed.

  3. Identify Potential Confounding Variables or Alternative Explanations:
    Inductive causal arguments are vulnerable to confounding factors. For example, an observed association between chemical Z and disease Y may be better explained by a third factor (e.g., socioeconomic conditions or genetic predispositions).

  4. Address Probabilistic Nature:
    Highlight that, even if some criteria are met, the argument does not achieve deductive certainty. The conclusion remains contingent and probabilistic, not definitive.

    By systematically identifying weaknesses in the evidence, assumptions, and reasoning of an inductive causal argument, you can critically evaluate its reliability. This approach aligns with the scientific mindset, which recognizes the probabilistic nature of causal claims while striving to refine them through rigorous analysis and evidence gathering.
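
If it helps to see these steps as a literal checklist, here is a minimal, purely illustrative sketch in Python (nothing like this is required on the LSAT). The CausalArgument fields, and the values plugged in for the hypothetical chemical Z argument, are all invented; the point is simply that every consideration left unaddressed corresponds to a flaw an answer choice could name.

```python
# Purely illustrative sketch of the four steps above as a checklist.
# Every field name and value is hypothetical, not drawn from any real question.

from dataclasses import dataclass

@dataclass
class CausalArgument:
    claim: str                       # step 1: the causal claim being proposed
    cause_precedes_effect: bool      # step 2: Temporality
    correlation_shown: bool          # step 2: Data (strength/gradient)
    consistent_across_studies: bool  # step 2: Data (consistency)
    mechanism_given: bool            # step 2: Plausible Mechanism
    confounders_addressed: bool      # step 3: alternative explanations considered

def candidate_flaws(arg: CausalArgument) -> list:
    """List the weaknesses an answer choice could point to."""
    flaws = []
    if not arg.cause_precedes_effect:
        flaws.append("Temporality not established: the effect may precede the cause.")
    if not arg.correlation_shown:
        flaws.append("No data tying the presence/absence of the cause to the effect.")
    if not arg.consistent_across_studies:
        flaws.append("Association rests on a single study or an unknown sample.")
    if not arg.mechanism_given:
        flaws.append("No plausible mechanism connects the cause to the effect.")
    if not arg.confounders_addressed:
        flaws.append("Possible confounders or alternative causes are ignored.")
    # Step 4: even with every box ticked, the conclusion stays probabilistic.
    flaws.append("The conclusion is probable at best, never deductively certain.")
    return flaws

chemical_z = CausalArgument(
    claim="Exposure to chemical Z causes an increase in disease Y",
    cause_precedes_effect=True,
    correlation_shown=True,
    consistent_across_studies=False,
    mechanism_given=False,
    confounders_addressed=False,
)

for flaw in candidate_flaws(chemical_z):
    print("-", flaw)
```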

Let’s use the caffeine example again:

A recent study found that students who drink two cups of coffee before studying tend to perform better on memory tests compared to those who do not. The researchers have posited that coffee consumption enhances cognitive function. They believe that the caffeine stimulates the central nervous system, enhancing focus and the ability to encode new information into memory.

Here’s our circuit:

So, we have our Causal Conclusion identified:

coffee consumption causes enhanced cognitive function

How, then, do the premises align with our criteria?

  • Temporality: …who drink two cups of coffee before studying…
    This seems relatively unassailable. The coffee consumption certainly happens prior to the improvement in memory, and there is no reasonable way the improved memory could be causing the consumption of coffee.

  • Data: …students who drink two cups of coffee before studying tend to perform better on memory tests compared to those who do not.
    When we examine data, we should go through the various parts of Bradford Hill to ensure every box is ticked.

    So let’s start with Gradient: in the presence of the cause, do we have an increased probability of the effect? Not quite, because the data give us information about performance on “memory tests,” while the conclusion referred to “cognitive function.” These aren’t the same thing, and though increased memory could be indicative of increased cognitive function, we don’t know that for sure. The Gradient (correlation) itself is there, though; it just may not attach to the effect named in the conclusion.

    How’s the rest of our Data? “…tend to perform better” gives us the sense that there is sufficient Strength in the data. There was clearly an Experiment done. For Consistency we only have the one study, and we don’t know how many subjects there were, so that is certainly lacking. And lastly, the Specificity of the study is restricted to “students,” while the conclusion generalizes to coffee consumption by anyone. That’s not great either.

    So, in anticipating a potential flaw in the Data family, we can say that it could be:
    a) the lack of connection of the data to the proposed effect;
    b) the reliance on a single experiment with an unknown sample size; or
    c) the extrapolation of data regarding students to the general population.

    OK, on to the other side of the diagram!

  • Plausible Mechanism: the caffeine stimulates the central nervous system, enhancing focus and the ability to encode new information into memory

    On this end we have the beginnings of an explanation of the biological pathway from “coffee consumption” to “enhanced cognitive function.” Is it the WHOLE explanation? No. Could we use more details? Of course. It could be strengthened or weakened, for sure. But this isn’t flawed per se; it’s just incomplete.

  • Mechanism Backing

    Here we have nothing: no alignment with the broader world, no elimination of potential alternative causes, no analogies to support our thinking.

    So, if we want to anticipate a flaw on the Mechanism side of the diagram, it would likely be something about the argument’s failure to acknowledge plausible alternative causes.

And that’s as far as we can get. We don’t know exactly what language the answer will use, nor which among the weak points it will identify. We just have to go in armed with our analysis, flexible and ready to pounce.

Necessary Assumption

Necessary Assumptions in causal reasoning are REALLY interesting. In a deductive argument, we think of a Necessary Assumption as a fact without which the argument will “fall apart.” In Inductive/Causal reasoning, it’s more helpful to think of one as a fact without which the facts presented cease to be relevant.

Same example:

A recent study found that students who drink two cups of coffee before studying tend to perform better on memory tests compared to those who do not. The researchers have posited that coffee consumption enhances cognitive function. They believe that the caffeine stimulates the central nervous system, enhancing focus and the ability to encode new information into memory.

Here’s our circuit:

So, what is something we NEED to be true for this argument presented to even be worth pursuing?

We have come to realize that almost any manner of mechanism or data CAN be used to support the argument, and similarly that neither the LACK of some portion of the mechanism nor the results of any one study can DESTROY the argument. That’s the whole point of these conclusions being probable and reasonable rather than certain. So that can’t be it.

Instead, let’s examine what they DID give us and what we need to be true about that. Take a look at the Data: “students who drink two cups of coffee before studying tend to perform better on memory tests compared to those who do not.” Remember, we said that there was a small gap there, that the results of the study indicated an effect on “memory,” which is not quite the same thing as “cognitive function.” Let’s expand on that idea. If enhanced memory is not actually indicative of enhanced cognitive function, these data would be useless to us in trying to reach the conclusion we did. So we’d better know that enhanced memory IS in fact indicative of enhanced cognitive function.

So, in heading to the answer choices, I know I am looking for something to the effect of:

enhanced memory is a significant and primary indicator of enhanced cognitive function

And that, dear reader, is the Necessary Assumption in a Causal Argument. It connects a variable in the data to a variable in the conclusion. In this case, it’s effect to effect, though in another argument it could just as easily be cause to cause.

Sufficient Assumption

Sufficient Assumption questions ask you to find an answer choice that, if brought into the argument, would create validity.

Wait a moment!

Think back: Deductive reasoning leads to conclusions that are logically certain, provided the premises are true. Inductive reasoning, however, results in conclusions that are reasonable and probable, based on the evidence.

So, doesn’t that mean that validity in inductive arguments isn’t possible??? And if so, how can they ask us a Sufficient Assumption question?

Great catch! The answer is, “They can’t” and they don’t. There will not be Sufficient Assumption questions when the argument is causal.

And of course, like everything, there is an exception, but it is rare and only goes to prove the rule. The ONLY way they could do so would be to create some absurd rule stating that whenever there is a correlation between two variables, one of the variables MUST be causing the other. LSAC in fact wrote a question like this on PrepTest 157 (section 2, number 18 for those who want to look it up) for which the sufficient assumption is essentially “anything that is correlated with a certain effect must be helping to cause that effect.” In the real world, this is an absurdity, and any self-respecting statistician would bristle at reading it. But, in LSAT world, if they are willing to write a principle that makes such an assertion, we need to take it at face value.

HOWEVER, this is the only example/exception we’ve found (if you do find others, let us know!)

Principle Justify

Since we know that Principle Justify questions are the same as Sufficient Assumption questions, we can draw the same conclusion about them: there will not be Principle Justify questions about Causal Arguments, except in the case of another exception like the one above.

Strengthen

While sufficiency is not possible in an Inductive Argument, Strengthening is exactly how we DO argue in this paradigm. Want a stronger argument? Give me more info!

So how do we Strengthen? Focus on the three pillars.

Timing: If they didn’t give us the timing, provide information that suggests or affirms that the cause really did happen before the effect.

Mechanism: If they gave us no mechanism, give me info that contributes to one. If they did, give me more detail. Either way, you can always eliminate an Alternative Cause that would, if true, preclude OUR cause from contributing to the effect (this last one is a bit tricky; we’ll come back to it.)

Data: If there is no data provided, give me some. If there is, examine it. If it’s missing a key component (small sample size, no info regarding what happens when you remove the cause, etc), provide that missing piece.

NOTE: If the data given are definitionally correlative (e.g., the stimulus gives you data from “one million surveys” and says definitively that “there is a correlation/trend/increased probability or frequency”), then it’s pretty tough to say that you could provide any data-type info that would strengthen what’s already there. In those cases, look first to the other side of the diagram and try to improve Mechanism or provide Timing.

Weaken

If you can Strengthen, you can Weaken. Same standard, same approach, save for a 180-degree turn (shifts in BOLD below).

Timing: If they didn’t give us the timing, provide information that suggests that the EFFECT may have happened before the CAUSE.

Mechanism: If they gave us no mechanism, you can’t really weaken it since there’s nothing there to weaken. But, if they did, find new information that breaks it up, makes it unreasonable or unlikely. Additionally, you can always provide a potential Alternative Cause that would, if true, preclude OUR cause from contributing to the effect. (again, tricky one, see below for more).

Data: If there is no data provided, give me some that suggests a lack of correlation. If there is, examine it. If it’s missing a key component (small sample size, no info regarding what happens when you remove the cause, etc), provide new info that suggests that that missing piece is NOT there.

NOTE: With all of the above, we don’t need to prove the truth or fullness of these weakening concepts, or even have an overwhelming probability. We just need to introduce the possibility. That’s enough to weaken.

So, that’s it (mostly). But, let’s talk about alternate causes for a second.

This is a very complex topic in the world of statistics. There are all sorts of ways statisticians describe the pitfalls that lie in front of the unwitting researcher attempting to determine causal links. But for our purposes, we can focus on a few ideas:

Common Causes:

Definition: A common cause is a single factor that independently causes two or more events or conditions, leading to a mistaken assumption of a direct causal relationship between the events themselves.

Key Characteristics:

  • This type of reasoning highlights how two events may be correlated not because one causes the other, but because they share a common underlying cause.

  • Recognizing common causes helps to avoid the correlation-causation fallacy.

Examples:

  1. Observation: People who carry lighters are more likely to develop lung cancer.

    Common Cause: Smoking causes both the carrying of lighters and the lung cancer; the lighter itself causes nothing.

  2. Observation: Increased ice cream sales correlate with higher drowning incidents.

    Common Cause: Both events are caused by hot weather, not by ice cream sales causing drowning.

How Common Causes Lead to Errors:

  • They create the illusion of a direct causal link where none exists.

  • They often appear in data where two variables are strongly correlated.
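
For readers who like to see the statistics behind this, here is a minimal, purely illustrative simulation in Python (the numbers are invented, and nothing like this is required on the LSAT). It shows the ice cream pattern above: when hot weather drives both variables, ice cream sales and drownings correlate strongly even though neither causes the other, and the correlation collapses once the common cause is held roughly fixed.

```python
# Hypothetical simulation of a common cause: hot weather independently drives
# both ice cream sales and drownings, so the two correlate with each other
# even though there is no causal link between them. All numbers are invented.

import numpy as np

rng = np.random.default_rng(0)
n = 1_000

heat = rng.uniform(10, 40, size=n)                    # daily temperature, the common cause
ice_cream = 5.0 * heat + rng.normal(0, 20, size=n)    # sales respond to heat, plus noise
drownings = 0.3 * heat + rng.normal(0, 2, size=n)     # drownings respond to heat, plus noise

# Strong positive correlation (roughly 0.7) despite no direct causal link.
print("raw correlation:", np.corrcoef(ice_cream, drownings)[0, 1])

# Hold the common cause roughly fixed (days between 30 and 32 degrees)
# and the correlation between the two effects collapses toward zero.
band = (heat > 30) & (heat < 32)
print("within-band correlation:", np.corrcoef(ice_cream[band], drownings[band])[0, 1])
```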

Alternative Causes

Definition: An alternative cause is a plausible explanation for an observed effect that differs from the proposed cause. This type of reasoning examines other factors that might be responsible for the effect.

Key Characteristics:

  • Alternative causes directly challenge the assumption that the stated cause is the only explanation for the effect.

  • Neglecting alternative causes can lead to oversimplified or erroneous conclusions.

Examples:

  1. Observation: A study concludes that eating breakfast leads to better academic performance.

    Alternative Cause: Students who eat breakfast might also get more sleep or have more structured morning routines, which are the real reasons for improved performance.

  2. Observation: Crime rates increase when unemployment rises.

    Alternative Cause: Other social factors, such as reduced community programs or increased inequality, may be the actual causes of rising crime rates.

IMPORTANT!!!!!!

Sorry, I used six exclamation points because this is vital.

An Alternative Cause that is a reasonable ADDITIONAL cause for the effect does not itself weaken a causal argument. Remember, dismissing Alternative Causes falls under the umbrella of COHERENCE, the criterion that says our PLAUSIBLE MECHANISM makes sense in the context of what we know about the world. Until you demonstrate that the alternative cause makes the proposed one LESS REASONABLE, it is simply something else that also causes the effect.

Here’s an example:

An argument proposes that a confusing new traffic sign has caused the uptick in accidents on a certain road. The argument provides you with good data.

If an answer choice tells you “the argument failed to consider that increased speed causes more accidents than does a confusing road sign,” that would not in and of itself weaken the argument that the sign caused the uptick. Those two things can be true simultaneously without any problem.

However, one COULD weaken the argument by introducing the fact that the sign was erected in response to road improvements that vastly increased speeds on that road.

So, Alternative Causes weaken an argument only when they disrupt the mechanism of the proposed cause, undermining its COHERENCE.