Chapter Overview

Inductive arguments reach conclusions about regular relationships among things. Causal arguments are inductive arguments that aim at one kind of conclusion about regularity, namely the regularity of a cause-and-effect relationship.

In daily life we most often think about causation as it applies to specific events, but causation can also be a matter of trends or patterns among populations.

The two kinds of causal relationships call for different argumentative methods and different standards of evaluation. This chapter outlines the reasoning proper to each kind of causation, as well as the common errors that can afflict each type of reasoning. Causation in populations requires us to draw distinctions among types of causal studies, and also to master enough mathematical statistics to tell how reliable a study's results are.

  1. A causal claim says that one thing causes another; a hypothesis is an initial speculation about a causal claim.
  2. Informal causal reasoning follows one of two main patterns: relevant difference and common thread.
  3. Informal causal reasoning most often goes wrong by overlooking alternative explanations.
  4. Causation in populations differs significantly from causation among specific events and needs other argumentative strategies.
  5. Controlled cause-to-effect experiments try to show directly that the presence and absence of C among all members of a population yield different frequencies of E.
  6. A nonexperimental cause-to-effect study also tries to establish causation in populations, but with different methods and standards.
  7. A third type of study for causation in populations is called a nonexperimental effect-to-cause study.
  8. The meaning of "causal factor" shows why appeals to anecdotal evidence do not belong in discussions of causation in populations.
  9. Sometimes causal claims can be seen to be mistaken even without considering the arguments put forward as their support. Such claims and hypotheses are inherently defective.
  10. Causal explanations are useful as long as they are not confused with either arguments or excuses.

1. A causal claim says that one thing causes another; a hypothesis is an initial speculation about a causal claim.

2. Informal causal reasoning follows one of two main patterns: relevant difference and common thread.

  1. Relevant difference reasoning identifies an event X as the only relevant difference that could have brought about the effect Y.
    1. More precisely, we say that one item has a feature that other items lack (the feature in question), and that only one relevant difference (the difference in question) distinguishes the item with the feature from the items without the feature; the difference in question then causes the feature in question.
    2. To make such an argument we need to know about at least two circumstances, one in which Y occurs and one in which it does not. If X is present when Y occurs and absent when Y does not occur, then X might cause Y.
    3. "I ate my usual breakfast today, but with bacon instead of my usual sausage, and now I feel thirsty. Bacon tastes saltier than sausage, so I think the bacon made me thirsty."
      1. The bacon with breakfast is being put forward as the only occurring difference.
      2. Notice that the speaker is claiming some relevance to this difference. Bacon tastes saltier than sausage, and we know that salty foods can make us thirsty.
    4. Arguments from an only relevant difference can be as conclusive as any kind of reasoning we know.
      1. If you walk into a room, flip a switch beside the door, and see the lights go on, you conclude that you found the light switch on the basis of relevant difference reasoning.
      2. Even less conclusive arguments about the only relevant difference can provide as much certainty as ordinary experience ever provides, as long as the difference in question is truly relevant.
      3. A difference is relevant if it is not unreasonable to suppose that it brought the effect about.
      4. Background information often helps you tell which differences are relevant, as it did in the example of bacon and thirst.
  2. In common thread reasoning we link a cause to the feature in question on the grounds that it is the only relevant common thread among possible causes of Y.
    1. In such an argument, we begin by noticing that the feature in question (Y) occurs more than once, and that some common thread X is present on every occasion.
    2. Such reasoning requires that we know of more than one circumstance in which Y occurs.
    3. If more than one other factor is present in every case of Y's presence, we find ourselves considering more than one possible cause of Y. Like differences in the last type of argument, the common threads must all be relevant.
  3. The two forms of argument are not equivalent. Reasoning from a common thread works better when forming hypotheses that one later tests with reference to a relevant difference.
    1. In cases of a common thread, the occurrences of Y might have separate and unrelated causes—that is, the appearance of any common thread at all could be a coincidence.
    2. When Y occurs on a number of occasions, it is quite possible that all those occasions are linked by more than one common thread.
    3. In the first case, a test for an only relevant difference helps us see whether the common thread could have caused Y. In the second case, a test looking for a difference helps us choose among the rival candidates.

3. Informal causal reasoning most often goes wrong by overlooking alternative explanations.

  1. Alternative common threads or relevant differences might be at work.
  2. The differences or common threads discovered might be irrelevant ones.
  3. The two events linked in the causal claim may in fact be related, but with cause and effect reversed.
    1. This failure is a partial success. You've discovered an actual cause and effect but only mislabeled them as one another.
    2. "Whenever I go to bed I feel sleepy, so the bed must make me sleepy," for instance; more likely, feeling sleepy sends you to bed.
  4. The alleged cause and effect might both follow from some third underlying cause. Such mistaken arguments correctly spot a link between two events but err in staying on the surface, not looking for a deeper cause of both events.
  5. Uncritical reasoning about causation can lead to the fallacy called post hoc, ergo propter hoc.
    1. That Latin phrase means "After that, therefore because of that."
    2. The mere appearance of Y after X makes us call X the cause of Y. "My car stopped running after I filled the tank with gas; therefore, the gasoline stopped the engine."
    3. This mistaken reasoning gets its plausibility from a resemblance to good arguments about X as the difference that caused Y. It goes wrong by not making sure to establish that X is the only relevant difference.
    4. To escape this error, avoid arguments that are based on nothing more than the mere appearance of Y after X.
  6. Some causal arguments simply overlook the possibility of coincidence.
    1. Two events might be completely unrelated to each other.
    2. In common-thread cases, we might take one common thread to be significant when in fact it is only present by coincidence, and some other common thread does the explanatory work.
    3. In yet other common-thread cases, multiple occurrences of some event derive from multiple causes; then there is no point looking for any common thread.

4. Causation in populations differs significantly from causation among specific events and needs other argumentative strategies.

  1. Most general causal claims do not mean that a causal link will exist between two particular events.
    1. Running makes for a healthier heart. Yet Jim's running might not improve Jim's heart.
    2. It does not even have to follow that running causes health of the heart in the majority of people who run.
  2. A claim of causation between C and E in a population P means that C is one factor producing E.
    1. More exactly: There will be more E in P when C is present (in every member of P) than when C is absent (from every member of P).
    2. This reasoning might remind you of the argument about differences. But notice one shift: in the case of specific events, we need C to be the only difference that produces E; in populations, we expect only that C makes a difference to the frequency of E.
  3. Three kinds of empirical studies yield support for claims of causation in populations.

5. Controlled cause-to-effect experiments try to show directly that the presence and absence of C among all members of a population yield different frequencies of E.

  1. Such experiments separate an experimental group from a control group and expose the first group, but not the second one, to a suspected causal factor.
    1. The abbreviation C denotes the suspected causal agent, and E denotes the effect we're trying to find a cause for.
    2. The experimental group is the sample of the target population whose members are exposed to C.
    3. The control group is that sample of the target population whose members are not exposed to C.
    4. In all other respects, the experimenters treat the control group and the experimental group alike.
    5. We use d to signify the difference between the frequency of E in the control group and its frequency in the experimental group.
  2. When d is too large to be plausibly attributed to chance, we conclude that C causes E.
    1. We calculate d by first calculating the frequency of effect, E, in both groups.
      1. The frequency is the percentage of members of a group who exhibit the effect.
      2. The difference between these percentages is d.
    2. Like error margin (see Chapter 11), statistical significance distinguishes between a real difference and one that could be the result of chance.
    3. Again like error margin, statistical significance depends on the sample size and the expected confidence level.
      1. Table 11-1 shows some minimal values of d needed to establish statistical significance at a confidence level of 95 percent, equivalently called statistical significance at the 0.05 level.
      2. This means that there is only a 5 percent chance of d's random occurrence.
      3. With larger samples and lower confidence levels, smaller values of d suffice for a result to count as statistically significant.
  3. When evaluating reports of experimental findings, there are several points to keep in mind.
    1. A sample might not be large enough to guarantee significance, in which case even a large value of d carries no weight.
    2. A sample may be large and still not make d significant, if d is small.
    3. Applying the results of a controlled experiment often means reasoning by analogy from one population to another.
      1. Controlled experiments typically involve animals, but their results matter most to humans. So one is reasoning by analogy that what affects rats or flies also affects humans.
      2. As with all analogical arguments (Chapter 11), consider the relevant similarities and dissimilarities between the groups.
    4. As with inductive generalizations, the samples (control group and experimental group) must be representative of the target population.
      1. Both groups should be randomly selected.
      2. In the case of reliable scientific experiments, you can assume random selection of groups.
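The arithmetic behind d can be sketched in code. The numbers below are hypothetical, and the two-proportion z-test used here is a common stand-in for the table lookup the chapter describes (Table 11-1): a z value beyond 1.96 corresponds to significance at the 95 percent confidence level, i.e. the 0.05 level.

```python
from math import sqrt

def d_and_significant(experimental, control, z_crit=1.96):
    """Compute d, the difference between the frequency of the effect E in
    the experimental group and in the control group (in percentage
    points), and test it with a two-proportion z-test.

    Each group is a list of booleans: True if the member exhibits E.
    """
    p1 = sum(experimental) / len(experimental)
    p2 = sum(control) / len(control)
    d = 100.0 * (p1 - p2)
    # Pooled proportion under the null hypothesis that C makes no difference.
    pooled = (sum(experimental) + sum(control)) / (len(experimental) + len(control))
    se = sqrt(pooled * (1 - pooled) * (1 / len(experimental) + 1 / len(control)))
    z = (p1 - p2) / se if se else 0.0
    return d, abs(z) > z_crit

# Hypothetical data: 100 subjects per group; 30 members of the
# experimental group show E versus 15 members of the control group.
experimental = [True] * 30 + [False] * 70
control = [True] * 15 + [False] * 85
d, significant = d_and_significant(experimental, control)
```

Here d is 15 percentage points, and with samples of 100 per group that difference passes the 0.05 test; with much smaller samples the same d would not.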

6. A nonexperimental cause-to-effect study also tries to establish causation in populations, but with different methods and standards.

  1. Such studies most often involve human beings and causal factors that may greatly affect health or well-being.
    1. The members of the experimental group are exposed to C, but not by the investigators (for obvious ethical reasons).
    2. So instead of creating the experimental conditions investigators use a group that has already been exposed to C.
    3. As in experiments, the control and experimental groups should be alike in all respects except their exposure to C.
    4. The members of both groups also must not have shown evidence of the effect E.
    5. After both groups are watched for E, results are evaluated by calculating the frequencies of E and the difference d, as in controlled experiments.
  2. More cautions are needed when evaluating these results.
    1. As in the experimental case, watch for analogical arguments that extend the results to other populations.
    2. Watch closely for possible bias in the samples (see Chapter 11 on representative samples for inductive generalizations).
  3. Bias can enter good nonexperimental studies when a possible factor C is accompanied by other factors.
    1. Controlled experiments begin with random samples and then administer C, and only C, to the experimental group.
    2. The experimental group in a nonexperimental study might still be composed of randomly selected individuals who have been exposed to C (or say they were), but this group might differ from the target population in some other respect.
      1. A study of the effects of knuckle-cracking on arthritis will randomly select people who crack their knuckles, but those people may not represent the target population.
      2. If men crack their knuckles more than women, the experimental group will be disproportionately male; and men and women have different chances of developing arthritis.
    3. Studies try to control for these factors by choosing a control group that resembles the experimental group.
      1. Thus, if 61 percent of all knuckle-crackers are men, and the experimental group reflects that distribution, the control group should also be 61 percent male.
      2. A good study will begin by trying to imagine all relevant factors of this sort and adjusting the control group accordingly.
    4. Because we do not know which factors might bear on a given effect, we cannot control for all of them. So all nonexperimental studies yield weaker results than experimental ones.
    5. When evaluating a nonexperimental study, ask whether other factors could have biased the samples.
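Adjusting a control group to mirror the experimental group, as in the knuckle-cracking example, amounts to stratified sampling. This sketch is illustrative only; the helper matched_control and the data are hypothetical.

```python
import random

def matched_control(candidates, experimental, key, size, seed=0):
    """Draw a control group of `size` members from `candidates` whose
    distribution over `key` (e.g. sex) mirrors the experimental group's."""
    rng = random.Random(seed)
    # Count how many experimental-group members fall in each stratum.
    strata = {}
    for person in experimental:
        strata[key(person)] = strata.get(key(person), 0) + 1
    control = []
    for value, count in strata.items():
        pool = [p for p in candidates if key(p) == value]
        # Scale this stratum's share of the experimental group to `size`.
        control.extend(rng.sample(pool, round(size * count / len(experimental))))
    return control

# Hypothetical data: 61 percent of the 100 knuckle-crackers studied are
# men, so the 100-member control group should be 61 percent male too.
experimental = [{"sex": "M"}] * 61 + [{"sex": "F"}] * 39
candidates = [{"sex": "M"}] * 200 + [{"sex": "F"}] * 200
control = matched_control(candidates, experimental, lambda p: p["sex"], 100)
```

The design choice here is the chapter's point in miniature: we can only stratify on the factors we thought to measure, which is why nonexperimental results stay weaker than experimental ones.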

7. A third type of study for causation in populations is called a nonexperimental effect-to-cause study.

  1. This type of study reasons backwards from an existing effect to its possible cause (or to one causal factor).
    1. This time investigators begin with a given effect, E; they select an experimental group that exhibits E and a control group not exhibiting E.
    2. Members of both groups are inspected for exposure to C, the suspected cause.
    3. If the frequency of C in the experimental group significantly exceeds the frequency of C in the control group, we call C a cause of E in the target population.
  2. The same cautions about nonexperimental cause-to-effect studies also apply here.
    1. As before, the members of the experimental group may differ relevantly from the rest of the target population.
    2. If we begin by studying people with arthritis, we must first recall that they will tend to be older than the average member of the population.
    3. Again, we adjust the control group to resemble the experimental group.
    4. Again, if you can think of other factors that could have influenced C, make sure the control group was adjusted to reflect them.
  3. One final alert about effect-to-cause studies: They are less useful in making causal predictions about the population.
    1. Effect-to-cause studies show only the probable frequency of the cause in cases of a given effect, not the probable frequency of the effect in cases of a given cause.
    2. Therefore, they don't permit us to say what percentage of the target population would display E if everyone were exposed to C.
    3. Ideally, we would follow such a study with a cause-to-effect study, watching people with C over a long period to see if they develop E.
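A minimal sketch of the effect-to-cause arithmetic, with hypothetical numbers: we compare frequencies of the suspected cause C in the two groups, and note in the comments what the result does and does not license.

```python
def frequency_of_exposure(group):
    """Percentage of a group's members who were exposed to the suspected
    cause C. `group` is a list of booleans: True means exposed to C."""
    return 100.0 * sum(group) / len(group)

# Hypothetical effect-to-cause study: 100 people with arthritis (the
# experimental group) and 100 without it (the control group), each
# inspected for a history of knuckle-cracking (C).
with_effect = [True] * 40 + [False] * 60     # 40 percent exposed to C
without_effect = [True] * 25 + [False] * 75  # 25 percent exposed to C

d = frequency_of_exposure(with_effect) - frequency_of_exposure(without_effect)

# d measures how much more common C is among cases of E. It does NOT
# tell us what fraction of people exposed to C would develop E; that
# prediction needs a follow-up cause-to-effect study.
```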

8. The meaning of "causal factor" shows why appeals to anecdotal evidence do not belong in discussions of causation in populations.

  1. If such causation meant that C always causes E, then one or two examples to the contrary would indeed bring down the conclusion.
  2. But to call C a causal factor is only to say that exposure to C by the whole population will produce more examples of E than if C were absent.
    1. This claim is consistent with the existence of counterexamples and is not at all weakened by them.
    2. As a matter of fact, the claim predicts that there will be counterexamples. If 69 percent of people exposed to C develop E, we have reason to expect that 31 percent will not.

9. Sometimes causal claims can be seen to be mistaken even without considering the arguments put forward as their support. Such claims and hypotheses are inherently defective.

  1. Circular or question-begging claims merely restate the effect and call that new name for it the cause. Thus: "Your worry is psychological."
  2. Untestable assertions come with no supporting evidence because nothing could possibly be evidence for them. Such claims usually invoke "metaphysical" or supernatural entities and powers.
  3. Excessively vague causal claims are also untestable, but they compound that fault by not meaning anything you can pin down. "The vibes are bad in this office; that's why productivity is low."
  4. Unnecessary assumptions plague other claims. An X-ray of King Tut's mummy revealed multiple fractures; so one tabloid claimed that he must have owned a jet plane and died in a crash. But multiple fractures can be accounted for in lots of more plausible ways: a beating, a fall, etc.
  5. Some claims, finally, are not consistent with well-established theory. A true conflict with scientific theory is an exciting thing; but it is rare, and a daunting burden of proof rests on such claims.

10. Causal explanations are useful as long as they are not confused with either arguments or excuses.

  1. Explanations are not arguments.
    1. Explanations are claims designed to show why something is the case, or why or how something happened.
    2. Arguments try to show that something is the case, or will be or should be the case, not why or how it is.
  2. Nevertheless explanations can find themselves confused with arguments.
    1. The same words sometimes work as either an explanation or an argument.
    2. An explanation can legitimately appear in an argument as one of its premises.
  3. People may also confuse explanations of behavior with attempted justifications or defenses of that behavior.
    1. Often an explanation does get offered as an excuse. "I swerved off the road because someone threw a brick at my windshield."
    2. But one may also try to explain behavior, even the very worst behavior, without at all intending to defend that behavior.
    3. It is important to be alert to the difference between explanations and excuses, so that we do not inappropriately attack a legitimate explanation of behavior.







Moore 8e Online Learning Center