Frequently Asked Questions

1. Is there a method for knowing which criticisms to employ against bad cases of informal causal reasoning?
2. What are the most successful lines of criticism against arguments about causation in populations?

1. Is there a method for knowing which criticisms to employ against bad cases of informal causal reasoning?

The best thing to do first is to see whether the speaker has made an argument about causation and, if so, which sort of argument it is. Classifying the argument will guide you to three of the four main criticisms.

If the causal claim has no argument behind it except the mere order of events, a safe answer is post hoc, ergo propter hoc. Cause and effect do not work so simply that a single observation announces the relevant causal connection. (Do keep in mind that a casual observation of this sort might provide a hint as to where to look next; but the claim becomes legitimate only if the person does look again, and looks more closely.)

But while post hoc is often a safe answer, it does not illuminate a causal argument as well as other criticisms can. It can tell you that the reasoning has gone wrong, but not how it went wrong. To put it another way, this criticism has only negative value, showing up the weakness of a causal claim without adding to our knowledge about the causal link in question. The other criticisms have more substance.

If the speaker made an argument on the grounds of a difference between situations in which Y occurred and situations in which Y did not occur, the best criticism to try is that he or she ignored a possible common cause. This criticism works best when one of the effects of the possible common cause precedes the other, because the earlier effect then looks like a cause of the later one. Some kinds of flu begin with vomiting and develop into fatigue. It might seem natural to call the vomiting the cause of the fatigue (no food; hence people look tired), but in fact both of them follow from the viral infection.

But watch out for one thing: When accusing someone else of having ignored a possible underlying cause, you ought to be able to suggest some possibility. Not that you need to prove the other possibility; but it would help at least to describe it. "My cat Dee-Dee chewed up a magazine for the first time yesterday, and today she had no appetite. The magazine must have satisfied her." You will not get far just saying, "Maybe something else caused her to chew the magazine and then not eat her food." If you get even a little more specific, you will have made progress: "Maybe she's come down with something that makes her chew paper and not want food."

In some cases the suggested underlying cause needs more support than that. See the box titled "Cigarettes, Cancer, and the Genetic Factors Argument." The tobacco industry has claimed that a genetic factor causes both a tendency toward smoking and susceptibility to cancer. Researchers' inability to discover such a factor makes the argument weak: When the original causal claim has evidence behind it, the alternative needs not only a name but evidence of its own.

Now suppose the argument was based on a common thread instead of an alleged relevant difference.

Again, the speaker may have ignored a possible underlying cause; but this might also be a case of assuming a common cause where none exists. You will probably find this an unsatisfying criticism to make, because it goes against the exceedingly worthwhile human practice of looking for unifying explanations behind our observations. Still, not all events have common causes, not even apparently similar events. A good test is plausibility: The proposed common cause of two or more events ought to be more plausible than a chance connection.

This is only common sense. If you get two phone calls in one day that are wrong numbers, you will not think about it. Wrong numbers happen, and two in a day don't make enough of a coincidence to wonder about. But four in the same day, or several a day for a week, will make you think there's a reason. Maybe a new business has opened with a number that resembles yours; or someone with a new number gave it out erroneously, so that dozens of people now have the wrong one; or you are being hounded by spies (but don't jump to this conclusion). Whatever the cause, you have a sense of how many wrong numbers will happen randomly, and you won't start looking for a common cause until one makes more sense than chance does.
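
That sense of how many coincidences chance will produce can be made roughly numerical. The following is a minimal sketch in Python, under the illustrative assumption that wrong numbers arrive independently at a steady average rate; the figure of one call every five days is invented:

```python
import math

def prob_at_least(rate, k):
    """Probability of at least k events in one day, given an average of
    `rate` events per day and assuming the events occur independently
    (a Poisson model)."""
    return 1 - sum(math.exp(-rate) * rate**i / math.factorial(i)
                   for i in range(k))

rate = 0.2  # invented: one wrong number every five days, on average
print(prob_at_least(rate, 2))  # two in one day: ~0.018, rare but unremarkable
print(prob_at_least(rate, 4))  # four in one day: ~0.00006, chance looks implausible
```

On that assumption, two wrong numbers in a day will happen by chance several times a year, while four in a day is so unlikely that a common cause becomes the better explanation.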

The kind of causal argument given will not help you much when the problem is reversed causation, and reversed causal claims can come up under any circumstances. People tend not to think of this criticism, mainly because most causal claims don't work well in reverse. Sophie's cold may or may not have made her grouchy, but it's a safe bet her grouchiness did not bring on the cold. Here, too, start by considering plausibility. If someone claims that X caused Y, how plausible is it that Y could have caused X? In the case of the magazine-chewing cat, we may have a reversal of causation: Because she hates the taste of this cat food, Dee-Dee goes hungry, and therefore she chews whatever she sees. The mere existence of this alternative does not settle the argument, but it suggests an obvious test: Try another cat food, and see what changes occur in her eating and chewing.

2. What are the most successful lines of criticism against arguments about causation in populations?

The simplest criticisms to use here are, sad to say, not the most useful ones.

The simplest criticisms focus on the size of the sample and the size of d, arguing that a difference of that size cannot be statistically significant with a sample of that size. Although the mathematics may look like an obstacle, you will find that you need no real math to apply this line of reasoning. Simply consult a table like Table 11-1 to see whether a given result, at a given confidence level, is too large to be plausibly attributed to chance.

(While we're on this subject: Watch out for the level at which results are statistically significant. The values of d that count as statistically significant are much smaller at lower confidence levels. But when the confidence level gets too low, it means less to say that a level of d is statistically significant. Some figure for d may be statistically significant at the 0.4 level, but then all you know is that there's a 60 percent chance the result did not arise by chance, which is not very reassuring. There's a reason why 0.05, which translates into a 95 percent confidence level, is the standard in science.)
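
If no table is handy, the same judgment can be approximated directly. Here is a minimal sketch in Python using a standard two-proportion z-test; this is a generic textbook approximation, not the method behind Table 11-1, and the sample figures are invented for illustration:

```python
import math

def significant_difference(p1, n1, p2, n2, z_crit=1.96):
    """Two-proportion z-test: is the observed difference d = p1 - p2
    statistically significant? z_crit = 1.96 corresponds to the usual
    95 percent confidence level (the 0.05 significance level)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)  # frequency if chance alone is at work
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > z_crit

# A 10-point difference is significant with 500 people per group...
print(significant_difference(0.40, 500, 0.30, 500))  # True
# ...but the same difference proves nothing with 50 per group.
print(significant_difference(0.40, 50, 0.30, 50))    # False
```

The same 10-point value of d passes the test with large samples and fails it with small ones, which is exactly the pattern a table like 11-1 encodes.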

Most large-scale studies, including unreliable and even unscrupulous ones, respect the rules about statistical significance and confidence levels. So unless you are dealing with a small local study or an informal job, you will not be able to marshal such considerations against the proposed result.

Remember that this is not the end of the story. Even quite prestigious studies may prove vulnerable to two other lines of criticism: problems in the analogical extension of the result, and bias in the sample.

We are all familiar with analogical arguments that apply what has been discovered about rats and other laboratory animals to human beings. When a study concerns complex phenomena, we are right to feel suspicious. Monogamy in rats, or rats' ability to learn new information, will probably not tell us much about the corresponding phenomena in human beings, as our mating habits and learning abilities develop under more elaborate conditions than a laboratory setting could ever mimic.

But this line of argument does not work as well against medical findings. Rats especially have been seen to develop cancers under conditions very much like the ones that cause cancer in humans. This is a premise that supports the analogical extension: X causes Y in rats; rats have been seen to resemble humans in their history of Y; therefore X probably causes Y in humans. When researchers reason that some medical finding about nonhumans may apply to us, they probably have some such justification for the analogy. When they don't, they say so. For instance, one of the complications in laboratory experiments on HIV is that the virus does not seem to give chimpanzees AIDS. They are susceptible to SIV, the Simian Immunodeficiency Virus, which resembles HIV in many but not all respects.

Criticisms of the analogical extension work better when one unusual human population is studied for insights about all human beings. Say a village in another country contains a high percentage of people more than 100 years old. Investigators may try to run an effect-to-cause study in that village, comparing families with and without very old members; still, we suspect that too many differences exist between urban U.S. culture and the culture of that village for us to know what all the relevant factors are. Watch out for studies of populations that differ in too many relevant ways from your own.

Biased samples take even more work to discuss. The more technical the finding, the more background knowledge you will need before you can even suspect that some causal factor has not been controlled for. When the effect in question is a matter of health or happiness, and the suspected causal agent belongs to people's ways of life (exercise, diet, place of residence, occupation, access to medical care), some background knowledge will alert you to the number of causal factors that tend to occur together. For example, people who exercise regularly into and past middle age will probably also be people who watch what they eat and enjoy a higher level of education (and hence, perhaps, less work-related anxiety). Most studies of exercise will control for these factors. But if these are also people who go to their doctors more often and take prescribed medications more carefully, they may live longer regardless of their exercise.
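
To see how an uncontrolled factor can masquerade as a causal link, here is a minimal sketch in Python. Every number in it is invented: in this toy model, exercise adds nothing to lifespan, careful medication use adds five years, and the two habits tend to travel together:

```python
import random

random.seed(0)

people = []
for _ in range(10_000):
    careful = random.random() < 0.5                           # careful about medications?
    exercises = random.random() < (0.8 if careful else 0.2)   # the habits correlate
    # Lifespan depends only on the medication habit, never on exercise.
    lifespan = 75 + (5 if careful else 0) + random.gauss(0, 3)
    people.append((exercises, lifespan))

def mean(xs):
    return sum(xs) / len(xs)

exercisers = [life for ex, life in people if ex]
others = [life for ex, life in people if not ex]
# The naive comparison credits exercise with years it did not cause.
print(mean(exercisers) - mean(others))  # roughly +3 years, all from the confound
```

The naive comparison awards exercisers about three extra years, every one of which comes from the uncontrolled medication habit; a study that failed to control for it would report a causal effect that is not there.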

Even so, medical results are easier to evaluate than claims about social and psychological phenomena such as crime and intelligence. It pays to be especially alert to factors that can bias the samples in such studies. Just for starters, all generalizations about criminal activity begin with data about reported crimes. Most murders get reported; but whether a rape, robbery, or assault gets reported depends on a number of other social factors. So researchers have a harder time assembling representative samples.

Studies of convicted criminals face other obstacles. They oversample those who have been caught (through carelessness, inexperience, or excessive aggressiveness) and those arrested for more vigorously prosecuted crimes. Keep your eyes peeled for biases in the sample that could skew the study's result in one direction or another.

Intelligence is an even more difficult matter, because people disagree more about how to measure intelligence than about how to measure crime. The Bell Curve, by Herrnstein and Murray, tried to show a causal connection between race and intelligence; responses to that book, most notably Gould's review in the New Yorker, have taken up the issues of bias in the samples and the book's failure to control for other factors. Although neither the book nor the review is easy reading, Gould's argument is a classic example of how to criticize a claim of causation in populations.







