Last time, we discussed Salmon's analysis of causal explanation. To review, Salmon said that explanation involved (1) statistical relevance, (2) connection via a causal process, and (3) change after a causal interaction. The change is the event to be explained. The notion of a causal process was filled out in terms of (a) spatiotemporal continuity and (b) the ability to transmit information (a "mark"). While we can sometimes speak simply of cause and effect being parts of a single causal process, the final analysis will typically be given in terms of more complex (and sometimes indirect) causal connections, of which Salmon identifies two basic types: conjunctive and interactive forks.
Today, we are going to look at these notions a bit more critically. To anticipate, we will discuss in turn (1) problems with formulating the relevant notion of statistical relevance (as well as the problems with a purely S-R view of explanation), (2) the range of causal information that can be offered in an explanation, and (3) whether some explanations might be non-causal. If we have time, we will also discuss whether (probabilistic or non-probabilistic) causal laws have to be true to play a role in good explanations (Cartwright).
The basic problem with statistical relevance is specifying what is relevant to what. Hempel first noticed in his analysis of statistical explanation that I-S explanations, unlike D-N explanations, could be weakened by adding further information. For example, given that John has received penicillin, he is likely to recover from pneumonia; however, given the further information that he has a penicillin-resistant strain of pneumonia, he is unlikely to recover. Thus, we can use true information and statistical laws to explain mutually contradictory things (recovery, non-recovery). On the other hand, note that we can also strengthen a statistical explanation by adding more information (in the sense that the amount of inductive support the explanans gives the explanandum can be increased). This "ambiguity" of I-S explanation--relative to one thing, c explains e, relative to another, it does not--distinguishes it in a fundamental way from D-N explanation.
As you know, the inferential view said that the explanans must confer a high probability on the explanandum for the explanation to work; Salmon and other causal theorists relaxed that requirement, demanding only that the explanans increase the probability of the explanandum, i.e., that it be statistically relevant to the explanandum. Still, the ambiguity remains. The probability that Jones will get leukemia is higher given that he was located two miles away from an atomic blast when it occurred; but it is lowered again when it is added that he had on lead shielding that completely blocked the effects of any radiation that might be in the area. This led Hempel, and Salmon, too, to add that the explanation in question must refer to statistical laws stated in terms of a "maximally" specific reference class (i.e., the class named in the "given" clause) to be an explanation. In other words, it is required that dividing the class C further into C1, C2, and so on would not affect the statistics, in that pr(E|Ci) = pr(E|Cj). If our reference class can't be divided ("partitioned") into cells that give different statistics for E, then we say that the class is "homogeneous" with respect to E. The homogeneity in question can be understood in two ways, either "epistemically," in terms of the information we have at our disposal, or "objectively," in terms of the actual "objective" probabilities in the world. (Hempel recognized only the former.) It must be the latter if we are really talking about causes rather than what we know about causes.
The problem with this is that dividing up the class can result in trivialization. For example, a subclass of the class of people who receive penicillin to treat their pneumonia (P) is the class of those people who recover (R). Obviously, it is always the case that pr(R|P&R) = 1. However, this type of statistical law would not be very illuminating to use in a statistical explanation of why the person recovered from pneumonia.
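The homogeneity requirement and the trivialization worry can both be made concrete with a small sketch. The patient counts below are entirely invented for illustration, and probabilities are simply read off as relative frequencies:

```python
from fractions import Fraction

# Hypothetical counts of pneumonia patients, keyed by
# (received penicillin P, resistant strain T, recovered R).
# The numbers are invented purely for illustration.
counts = {
    (True, False, True):  80,
    (True, False, False): 20,
    (True, True,  True):  10,
    (True, True,  False): 90,
}

def pr(event, given):
    """pr(event | given), reading probabilities off as relative frequencies."""
    total = sum(n for k, n in counts.items() if given(k))
    hits  = sum(n for k, n in counts.items() if given(k) and event(k))
    return Fraction(hits, total)

P = lambda k: k[0]   # received penicillin
T = lambda k: k[1]   # penicillin-resistant strain
R = lambda k: k[2]   # recovered

# Partitioning P by strain changes the statistics, so the reference class P
# is NOT homogeneous with respect to R:
pr(R, P)                             # 9/20 overall
pr(R, lambda k: P(k) and not T(k))   # 4/5 for the non-resistant cell
pr(R, lambda k: P(k) and T(k))       # 1/10 for the resistant cell

# Partitioning P by R itself "succeeds" only trivially: pr(R | P & R) = 1
# no matter what the counts are, since it is a theorem of the probability
# calculus -- hence Hempel's exclusion clause.
pr(R, lambda k: P(k) and R(k))       # 1
```

The partition by strain is exactly the kind of division that shows a reference class to be inhomogeneous; the partition by R itself is the kind that the exclusion clause rules out.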
There are various ways around this problem. For example, you can require that the statistical law in question not be true simply because it is a theorem of the probability calculus (as is the case with pr(R|P&R) = 1). Hempel used this clause in his analysis of I-S explanation. Salmon adds the further restriction that the statistical law in question not refer to a class of events that either (1) follow the explanandum temporally, or (2) cannot be ascertained as true or false in principle independently of ascertaining the truth or falsity of the explanandum. The first requirement is used to block explanations of John's recovery that refer to the class of people who are reported on the 6:00 news to have recovered from pneumonia (supposing John is famous enough to merit such a report). This is the requirement of maximal specificity (Hempel) or that the reference class be statistically homogeneous (Salmon).
Of course, as we mentioned earlier, there might be many correlations in the world between accidental events, such as that someone in Laos sneezes (S) whenever a person here recovers from pneumonia (R), so that we have pr(R|P&S) > pr(R|P). (Here the probabilities might simply be a matter of the actual, empirical frequencies.) If this were the case, however, we would not want to allow the statistical law just cited to occur in a causal explanation, since it may be true simply by "accident." That is why Salmon was concerned to add that the causal processes that link the two events must be specified in a causal explanation. The moral of this story is two-fold. Statistical relevance is not enough, even when you divide up the world in every way possible. And some ways of dividing up the world to get statistical relevance are not permissible.
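The point that merely empirical frequencies can yield "accidental" statistical relevance can be sketched by sampling two causally unconnected event types. The event names, base rates, and sample size below are all invented; the two sequences are generated independently, so by construction no causal process connects them:

```python
import random

random.seed(0)

# Two causally unrelated event types, sampled independently:
# R = a patient here recovers from pneumonia, S = someone in Laos sneezes.
# The base rates (0.6 and 0.5) are assumptions for illustration.
N = 10_000
R = [random.random() < 0.6 for _ in range(N)]
S = [random.random() < 0.5 for _ in range(N)]

def freq(xs):
    """Relative frequency of True in a list of booleans."""
    return sum(xs) / len(xs)

fr_R = freq(R)                                       # frequency of R overall
fr_R_given_S = freq([r for r, s in zip(R, S) if s])  # frequency of R among S-cases

# In any finite run the two frequencies will almost never coincide exactly,
# so S comes out "statistically relevant" to R (raising or lowering its
# frequency a little) even though nothing causal links them -- which is why
# Salmon demands causal processes over and above statistical relevance.
```

If the probabilities are just actual frequencies, some such accidental relevance relations are essentially guaranteed, which is the first half of the two-fold moral above.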
What Kinds Of Causal Information Can Be Cited In An Explanation?
Salmon said that there were two types of causal information that could be cited in a causal explanation, which we described as conjunctive and interactive forks. Salmon's purpose here is to analyze a type of explanation that is commonly used in science, but the notion of causal explanation can be considered more broadly than he does. For example, Lewis points out that the notion of a causal explanation is quite fluid. In his essay on causal explanation, he points out that there is an extremely rich causal history behind every event. (Consider the drunken driving accident case.) Like Salmon, Lewis argues that to explain an event is to provide some information about its causal history. The question arises: what kind of information? One option would be to describe in detail a common cause of the type discussed by Salmon. However, there are many situations in which we might only want a partial description of the causal history (e.g., we are trying to assign blame according to the law, or we already know a fair chunk of the causal history and are trying to find out something new about it, or we just want to know something about the type of causal history that leads to events of that sort, and so on).
Question: How far back in time can we go in citing statistically relevant events? Consider the probability that the accident will occur. Relevant to this is whether gas was available for him to drive his car, whether he received a driver's license when he was young, or even whether he lived to the age that he did. All of these are part of the "causal history" leading up to the person having an accident while drunk, but we would not want to cite any of these as causing the accident. (See section, "Explaining Well vs. Badly.")
Something to avoid is trying to make the distinction by saying that the availability of gas was not "the" cause of the person's accident. We can't really single out a given chunk of the causal history leading up to the event to separate out "the cause." Lewis separates the causal history--any portion of which can in principle be cited in a given explanation--from the portion of that causal history that we are interested in or find most salient at a given time. We might not be interested in information about just any portion of the causal history, Lewis says, but it remains the case that to explain an event is to give information about the causal history leading up to that event.
In addition, Lewis points out that the range of ways of giving information about causal histories is quite broad. For example, Lewis allows negative information about the causal history to count as an explanation (there was nothing to prevent it from happening, there was no state for the collapsing star to get into, there was no connection between the CIA agent being in the room and the Shah's death, it being just a coincidence, and so on). To explain is to give information about a causal history, but giving information about a causal history is not limited to citing one or more causes of the event in question.