Lecture 3

2/8/94

The Causal Theory Of Explanation, Part I

Last time, we saw that the inferential view of explanation faced the asymmetry and irrelevance problems. There is another problem, however, one that emerges most clearly when we consider the inductive-statistical (I-S) component of the inferential view. This problem strikes directly at the thesis at the heart of the inferential view, namely, that to explain a phenomenon is to provide information sufficient to predict that it will occur.

I-S explanation differs from D-N explanation only in that the laws that are cited in the explanation can be statistical. For example, it is a law of nature that 90% of electrons in a 90-10 superposition of spin-up and spin-down will go up if passed through a vertically oriented Stern-Gerlach magnet. This information provides us with the materials for stating an argument that mirrors a D-N explanation.

Ninety percent of electrons in a 90-10 superposition of spin-up and spin-down will go up if passed through a vertically oriented Stern-Gerlach magnet. (Law of Nature)

This electron is in a 90-10 superposition of spin-up and spin-down and is passed through a vertically oriented Stern-Gerlach magnet. (Statement of Initial Conditions)

Therefore, this electron goes up. (Explanandum) [90%]

This argument pattern is obviously similar to that exhibited by D-N explanation; the only difference is that the law cited in the inductive argument above is a statistical generalization rather than a universal one. On the inferential view, this argument constitutes an explanation since the initial conditions and laws confer a high probability on the explanandum. If you knew that these laws and initial conditions held of a particular electron, you could predict with high confidence that the electron would go up.

The problem with the inferential view is that you can't always use explanatory information as the basis for a prediction. That is because we frequently offer explanations of events with low probability. (Numbers in the examples below are for purposes of illustration only.)

Atomic Blasts & Leukemia. We can explain why a person contracted leukemia by pointing out that the person was once only two miles away from an atomic blast, and that exposure to an atomic blast from that distance increases one's chances of contracting leukemia in later life. Only 1 in 1,000 persons exposed to an atomic blast eventually contracts leukemia. Nevertheless, exposure to an atomic blast explains the leukemia since people who haven't been exposed to an atomic blast have a much lower probability (say, 1 in 10,000) of contracting leukemia.

Smoking & Lung Cancer. We can explain why someone contracted lung cancer by pointing out that the person had smoked two packs of cigarettes a day for forty years. This is an explanation since people who smoke that much have a much higher probability (say, 1 in 100) of contracting lung cancer than non-smokers (say, 1 in 10,000). Still, the vast majority of smokers (99 percent) will never contract lung cancer.

Syphilis & Paresis. We can explain why someone contracted paresis by pointing out that the person had untreated latent syphilis. This is an explanation since the probability of getting paresis is much higher (e.g., 1 in 100) if a person has untreated latent syphilis than if he does not (e.g., 0). Still, the vast majority of people with untreated latent syphilis will never contract paresis.

In each of these cases, you can't predict that the result will occur since the information does not confer a high probability on the result. Nevertheless, the information offered constitutes an explanation of that result, since it increases the probability that that result will occur.

In the 1960s and 1970s, Wesley Salmon developed a view of statistical explanation on which, contrary to what Hempel had claimed, high probability is not necessary for an explanation; what is required is only positive statistical relevance.

Definition. A hypothesis h is positively relevant (correlated) to e if h makes e more likely, i.e., pr(e|h) > pr(e).
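To see how this definition bears on the low-probability cases above, here is a minimal sketch in Python using the purely illustrative leukemia numbers; the prevalence of exposure (p_h) is an additional assumption introduced only to fix pr(e):

    # Minimal sketch of positive relevance, with purely illustrative numbers.
    # h = exposure to an atomic blast, e = contracting leukemia.

    def positively_relevant(p_h, p_e_given_h, p_e_given_not_h):
        """Return True if pr(e|h) > pr(e), i.e., h raises the probability of e."""
        # Law of total probability: pr(e) = pr(e|h)pr(h) + pr(e|~h)pr(~h)
        p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
        return p_e_given_h > p_e

    # Assumed exposure prevalence of 1%; conditional probabilities from the text.
    print(positively_relevant(p_h=0.01, p_e_given_h=1/1000, p_e_given_not_h=1/10000))
    # True: exposure raises the probability of leukemia, even though pr(e|h) = 0.001
    # is far too low to license a confident prediction that leukemia will occur.

Run with the smoking or syphilis numbers instead, the same check returns the same verdict: the cited factor raises the probability of the outcome without making the outcome likely.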

The problem Salmon faced was distinguishing cases where the information could provide a substantive explanation from cases where the information reported a mere correlation and so could not. (For example, having nicotine stains on one's fingers is positively correlated with lung cancer, but you could not explain why a person contracted lung cancer by pointing out that the person had nicotine stains on their fingers.) Distinguishing these cases proved to be impossible using purely formal (statistical) relations. Obviously, some other type of information was needed to make the distinction. Rejecting the received view of explanation, Salmon came to believe that to explain a phenomenon is not to offer information sufficient for a person to predict that the phenomenon will occur, but to give information about the causes of that phenomenon. On this view, an explanation is not a type of argument containing laws of nature as premises but an assembly of statistically relevant information about an event's causal history.
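The difficulty can be made vivid with another small sketch (again with assumed, illustrative numbers): nicotine stains, a mere by-product of smoking, pass exactly the same formal test that smoking does, so the statistics alone cannot tell us which factor is explanatory.

    # Sketch with assumed, illustrative numbers: a genuine cause (smoking) and a
    # mere correlate (nicotine stains) both satisfy positive relevance.

    p_cancer_given_smoker = 1 / 100        # from the smoking example above
    p_cancer_given_nonsmoker = 1 / 10000

    # Assumed: nearly all heavy smokers have stained fingers, so conditioning on
    # stains is almost the same as conditioning on smoking.
    p_cancer_given_stains = 1 / 110
    p_cancer_given_no_stains = 1 / 9000

    print(p_cancer_given_smoker > p_cancer_given_nonsmoker)  # True: smoking raises the probability
    print(p_cancer_given_stains > p_cancer_given_no_stains)  # True: so do the stains
    # Both factors are positively relevant to lung cancer, yet only the first explains it.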

Salmon points out two reasons for thinking that causal information is what is needed to mark off explanations. First, the initial conditions given in the explanatory information must precede the explanandum temporally if they are to constitute an explanation of it, and Hempel's theory imposes no restriction of this sort. The eclipse example illustrates the point: you can just as well use information about the subsequent positions of the Sun and Moon to derive that an eclipse occurred at an earlier time as use information about the current positions of the Sun and Moon to derive that an eclipse will occur later. The former is a case of retrodiction, whereas the latter is a (familiar) case of prediction. This is an example of the prediction-explanation symmetry postulated by Hempel. However, as we saw earlier when discussing the problem of asymmetry, only the forward-looking derivation counts as an explanation. Interestingly, Salmon points out that the temporal direction of explanation matches the temporal direction of causation, which is likewise forward-looking (i.e., causes must precede their effects in time).

Second, not all derivations from laws count as explanations. Salmon argues that some D-N "explanations" (e.g., a derivation from the ideal gas law PV = nRT and a description of initial conditions) are not explanations at all. The ideal gas law simply describes a set of constraints on how various parameters (pressure, volume, and temperature) are related; it does not explain why these parameters are related in that way. Why these constraints exist is a substantive question that is answered by the kinetic theory of gases. (Another example: People knew for centuries how the phases of the Moon were related to the height of tides, but simply describing how these two things are related did not constitute an explanation. An explanation was not provided until Newton developed his theory of gravitation.) Salmon argues that the difference between explanatory and non-explanatory laws is that the former describe causal processes, whereas the latter (such as the ideal gas law) merely describe empirical regularities.