CAUSALITY AND DETERMINISM
GERMUND HESSLOW
Filosofiska institutionen
Lunds Universitet
From Philosophy of Science 48 (1981), pp. 591 – 605.
A previous paper of mine, which criticized Suppes’ probabilistic
theory of causality, was in turn criticized by Deborah Rosen. This paper
is a development of my argument and an answer to Rosen. It is argued that
the concept of causation is used in contemporary science in a way that
presupposes determinism. It is shown that deterministic assumptions are
necessary for inferences from generic to individual causal relations and
for various kinds of eliminative arguments.
1. Introduction. In a recent paper Deborah Rosen (1978) defends the probabilistic theory of causality advocated by Patrick Suppes (1970) against two critical arguments of mine (Hesslow 1976). In this paper I will try to clarify and develop my argument somewhat as well as show why I find Rosen’s defense unsuccessful. But first let me recapitulate. Causes are usually not sufficient for their effects. If we wish to retain an essentially Humean view of causality, there seem to be two possible ways of accounting for this fact or two possible approaches, one probabilistic and one deterministic. An example of the former is the theory of Suppes (1970), according to which a cause is a probability-raising event: At is a prima facie cause of Bt’ if and only if (i) t < t’, (ii) P(At) > 0, and (iii) P(Bt’|At) > P(Bt’).
A deterministic approach would be one that assumes that a cause is always a sufficient condition or a part of a sufficient condition for the effect. If At is a nonsufficient cause of Bt’, then there must be some further condition Ct, such that At in combination with Ct is sufficient for Bt’. Theories of this general kind abound in the literature; indeed, the deterministic approach has been something of a received view of causation. This view, which we may call the ’sufficiency principle’, is also common among scientists.2 The sufficiency principle is not in itself strictly deterministic. It does not mean that every event has a sufficient cause, only that if an event has a cause, then it has a sufficient one. However, it seems that the popularity of the sufficiency principle is a reflection of a widespread, though usually implicit, commitment to the stronger thesis that every event has a sufficient cause. A nice formulation of this assumption is given by Anscombe (1975, p. 63): "If an effect occurs in one case and a similar effect does not occur in an apparently similar case, then there must be a relevant further difference."
Determinism is a theory of the world, and the two approaches to causality
concern the definition of a concept. There is therefore no logical connection
here – it would not be inconsistent to believe in determinism and at the
same time prefer a probabilistic definition of ’cause’. There is however
a strong pragmatic connection, for, as I shall argue, the concept of causality
is used in a way that presupposes a deterministic assumption and that determinism
therefore is necessary to characterize causation. A deterministic assumption
could be coupled with a probabilistic definition, but this would obviously
make the probabilistic theory quite pointless, for the only reasonable
motive for preferring it would be that it enables us to avoid deterministic
assumptions. The thesis to be defended in the following is that the deterministic
approach is superior to the probabilistic one. I will try to defend it
by showing 1) that Suppes’ probabilistic theory leads to difficulties that
are not dependent on any defect in Suppes’ particular formulation of the
theory but that arise precisely because it is probabilistic, and 2) that
some common types of causal inferences in science require a deterministic
assumption.
2. Do causes raise the probability of their effects? My first argument in (Hesslow 1976) was that, whereas Suppes’ theory requires of a cause that it raise the probability of its effect, it is quite conceivable that some causes actually lower this probability. An example of such a phenomenon could be the use of contraceptive pills, which according to current medical opinion sometimes causes thrombosis. Since contraceptive pills also eliminate the risk of pregnancy, which is a more efficient cause of thrombosis, it seems at least possible that the probability of thrombosis is lowered rather than raised by the pill.
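To make the structure of the example concrete, here is a minimal numerical sketch; the figures are invented purely for illustration and carry no medical authority. The point is only that a factor with a positive direct effect can lower the overall probability of its effect by removing a stronger cause.

```python
# Invented illustrative probabilities (not medical data).
p_preg_no_pill = 0.30   # probability of pregnancy without the pill
p_preg_pill    = 0.01   # probability of pregnancy despite the pill
p_thr_pregnant = 0.010  # thrombosis risk if pregnant
p_thr_base     = 0.001  # background thrombosis risk (no pill, not pregnant)
p_thr_pill     = 0.002  # thrombosis risk due to the pill's direct effect

# Without the pill: thrombosis via the pregnancy pathway plus the background risk.
p_no_pill = p_preg_no_pill * p_thr_pregnant + (1 - p_preg_no_pill) * p_thr_base

# With the pill: the pill's own (probability-raising) effect operates, but the
# pregnancy pathway is almost entirely blocked.
p_with_pill = p_preg_pill * p_thr_pregnant + (1 - p_preg_pill) * p_thr_pill

print(f"P(thrombosis | no pill) = {p_no_pill:.4f}")    # 0.0037
print(f"P(thrombosis | pill)    = {p_with_pill:.4f}")  # 0.0021
```

On these figures the pill raises the thrombosis risk for every woman who does not become pregnant, yet the overall probability of thrombosis is lower among pill users.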
In a criticism of this argument, Deborah Rosen claims that "it does not follow from these epistemic observations that a particular person’s use of contraceptive pills lowers the probability that she may suffer a thrombosis, for, unknown to us, her neurophysiological constitution (N) may be such that the taking of the pills definitely contributes to a thrombosis" (1978, p. 606). Relative to N, contraceptive pills will raise the probability of thrombosis.
This could very well be true. If we allow the gratuitous introduction of factors "unknown to us", any probability statement could be true. There may be any number of physiological conditions (although probably not neurophysiological) relative to which contraceptive pills raise the probability of thrombosis, just as there may be others relative to which the probability is lowered. But probability statements have to be based on what is known. Rosen’s claim that we "wrongly" (1978, p. 606) believe that pills lower the probability of thrombosis because we do not take any such unknown factors into account, is quite unintelligible to me. How does she know that our belief is wrong, when the information that would make it wrong is unavailable?
Nevertheless, the idea of relativizing the probability estimates on additional information could still be useful, even if we restrict ourselves to information that is available. Suppes states repeatedly, and Rosen agrees, that the determination of a causal relationship is always relative to some conceptual framework. So, instead of saying that At is a prima facie cause of Bt’ when At raises the probability of Bt’, we should say that At is a prima facie cause of Bt’ "with respect to background information Ct", when At raises the probability of Bt’ and the probabilities are conditionalized on Ct (Rosen 1978, p. 608). That is, when P(Bt’|At & Ct) > P(Bt’|Ct). Consider now, as background information in the contraceptive pill example, the following two conditions:
(C1) Women in an underdeveloped country where all contraceptives are scarce, where there are no medical facilities for preventing thrombosis in pregnant women and where, consequently, pregnancy is a likely outcome of not using the pill and thrombosis a likely outcome of pregnancy.
(C2) Well educated women in New Orleans who know everything about contraception, who will therefore probably not get pregnant even if they do not use pills and who have access to physicians with the knowledge and resources required to prevent thrombosis in pregnant women.
It now seems more than plausible that, relative to C1, contraceptive pills will lower the probability of thrombosis whereas, relative to C2, the probability will be raised. There are two ways of dealing with this, both of which can find support in Rosen’s text.
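The same toy model as before, again with invented figures, shows how relativization makes the sign of the statistical relation depend on the background: the pill’s direct effect is held fixed, and only the background pregnancy rate and the danger of pregnancy differ between C1 and C2.

```python
# Invented figures: the pill's direct effect is identical in both populations;
# only the background factors described in C1 and C2 differ.
def p_thrombosis(pill, p_pregnancy, p_thr_pregnant,
                 p_thr_base=0.001, p_thr_pill=0.002):
    p_preg = 0.01 if pill else p_pregnancy          # the pill nearly eliminates pregnancy
    p_direct = p_thr_pill if pill else p_thr_base   # the pill's own effect on thrombosis
    return p_preg * p_thr_pregnant + (1 - p_preg) * p_direct

backgrounds = {
    "C1": dict(p_pregnancy=0.40, p_thr_pregnant=0.020),  # pregnancy likely and dangerous
    "C2": dict(p_pregnancy=0.05, p_thr_pregnant=0.002),  # pregnancy unlikely and well managed
}

for name, bg in backgrounds.items():
    no_pill, pill = p_thrombosis(False, **bg), p_thrombosis(True, **bg)
    verdict = "raised" if pill > no_pill else "lowered"
    print(f"{name}: P(T|no pill)={no_pill:.4f}, P(T|pill)={pill:.4f} -> {verdict}")
```

Relative to the C1 figures the probability of thrombosis is lowered by the pill, relative to the C2 figures it is raised, although the pill works in exactly the same way in both populations.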
First, it follows from Rosen’s revised definition that contraceptive pills will be a prima facie cause of thrombosis relative to background information C2 but not relative to C1. We would thus be forced to say that pills cause thrombosis in New Orleans but not in, say, Uganda. But this is strongly counterintuitive as well as contrary to medical opinion. If the pills can cause thrombosis in New Orleans, there is surely no a priori reason to doubt that they can do so in Uganda as well. From an intuitive standpoint it would seem more reasonable to say that the difference between the two cases lies, not in the causal effects of the pills, but in the tendency of pregnancy to mask the statistical effects of the causal relation. If the definition of causation is relativized to additional information, event or proposition C, it will only be applicable when C is true or in situations where C obtains, but we often want to generalize beyond such situations.
Secondly, we could simply choose one of the two cases as the best or correct one. Rosen makes some suggestions along these lines, for instance when she says that my example "conflates inadequate and adequate epistemic frames of reference" and that "From the point of view of an adequate epistemic frame of reference... Suppes is correct..." (Rosen 1978, p. 605). But she does not say anything about how one is to decide if a certain framework is adequate, and it is difficult to see how this could be done. It is of course easy to choose between C1 and C2 for they describe easily verifiable empirical conditions, but that does not help us if we consider the women in Uganda or in any other population where pills lower the probability of thrombosis and where, presumably, pills still cause thrombosis. What is needed is some additional information that is true of the women in the population where pills lower the probability of thrombosis and relative to which the probability is raised. Since nothing of this kind is known, Rosen had to invent or assume it, viz. the neurophysiological condition (N). But here it must be asked first, how is the assumption that there is something like N to be justified? (Note here that determinists make such assumptions all the time, but the whole point of the probabilistic approach is to dispense with such maneuvers.) Secondly, even if something like N were known, why should the introduction of N make the frame of reference more adequate? As long as these questions are not answered, the talk about adequacy seems to reduce to the rather weak claim that there might be some ad hoc hypothesis that could rescue the situation.
Relativization, it seems, fails to solve the problem. But even if it did, there would be reason to doubt relativization per se. Rosen and Suppes defend this strategy with epistemic considerations. It is said, for instance, that "the determination of a causal relationship between events or kinds of events is always relative to some conceptual framework" (Suppes 1970, p. 13), and "causal comments are derived from and relativized to different frameworks with different informational sets" (Rosen 1978, p. 609). This is true of course in the rather trivial sense that causal statements, like all statements, must be based on evidence and can only be expressed in a given language or within some conceptual system. But it hardly follows that a statement itself must be relative to this framework. It seems to me that Rosen and Suppes here conflate the belief in a causal relation with that relation itself, so that they blur the distinctions between facts and belief in those facts and between the truth of a statement and the evidence for its truth. If at a certain time we have evidence for some causal hypothesis but later find stronger evidence against it, we do not say that the hypothesis was true relative to the first framework and false relative to the second, and that (for some reason) we prefer the second. We simply say that we were mistaken. I agree with Rosen that we should always be prepared to revise our views about causal relations in the light of additional information, but the question is why? The answer that our initial opinion might simply be wrong is closed to the relativist, because the initial opinion will be right relative to the initial framework.
The contraceptive pill example is not a problem for the deterministic
approach. It simply illustrates the fact that an event can have different
effects in different circumstances. If pills can cause thrombosis but in
some cases lower the probability, then, from a deterministic point of view,
there must be some additional factor that accounts for this. The problem
only arises when we are unwilling to assume such ’additional factors’,
that is when we are unwilling to adopt the deterministic approach.
3. Does individual causation follow from generic causation? In (Hesslow 1976) I claimed that from the facts that At may cause Bt’ (e.g., smoking causes cancer) and that At and Bt’ both occurred (John smoked and got cancer), it does not follow that At caused Bt’ (John’s smoking caused his getting cancer). Rosen seems to think that my reason for this claim was that the statistical association between smoking and cancer might be due to some third factor C and therefore spurious in Suppes’ sense. Such a criticism is obviously inapplicable to Suppes’ theory and would have been a most embarrassing blunder. But my point was very different. What I meant was that, even if smoking genuinely causes cancer, it still does not follow that, when both events occur, John’s smoking caused his cancer. On Suppes’ theory, however, it does.
In a footnote (1978, p. 610) Rosen denies that the individual causal statement follows from the generic one in combination with the occurrence of the events in question. Here I am pretty certain that she is wrong. In fact, in (Rosen 1975, p. 7) she uses Suppes’ theory to derive as a theorem that a certain individual event causes another from assumptions of generic causality and occurrence.
The problem I wanted to address concerns the relations between generic and individual causal relations. Now, this problem is difficult to discuss within Suppes’ conceptual framework, for he does not make use of any distinction between kinds of events and individual events, and he thinks that his formal definitions can be interpreted both ways.3 Nevertheless, something must be added to the theory, for as it stands it allows as causes events that did not occur. There is nothing absurd about this, so long as we confine ourselves to generic events, but if we want the theory to be applicable to individual events, we must somehow require that both cause and effect occur. This can be done in various ways. We could for instance interpret the definitions of causality as applying to generic events and then say that the causal relation obtains between two individual events when both occur. Rosen, in an application of Suppes’ theory to a different problem, does this and makes occurrence an explicit requirement in a formal definition of ’genuine cause’ (1975, p. 4). Suppes uses a different strategy. We can interpret his definition of prima facie cause, he says, as applying to individual events and to a potential causal relation. "When both events occur, the potential becomes actual" (Suppes 1970, p. 40). Thus, if John neither smoked nor got cancer, his smoking was a potential cause of his getting cancer. If instead he both smoked and got cancer, his smoking was an actual cause of the cancer. Suppes’ statement explicitly concerns only prima facie causes, but since he defines genuine causes in terms of these, one would expect it to apply equally to genuine causation. It seems to be clear then, that if occurrence is taken into account, individual causation will, according to the probabilistic theory, follow from generic causation. According to common sense and scientific practice, however, this inference is not valid.
Since not everyone who smokes gets cancer, common sense will argue that there must be some difference between the smokers who do not get cancer and those who do. No such difference is known (although there is a biochemical property that is suspected of playing this role), but common sense assumes that there is one. Let us say, then, that smoking together with some property S* is sufficient for cancer, while smoking and ~S* is not. Furthermore, exposure to asbestos together with A* is sufficient for the same kind of cancer, whereas asbestos and ~A* is not. Suppose now that John smokes and works with asbestos, and that he has the properties ~S* and A*. In this case we would, I think, say that the asbestos caused John’s cancer and that the smoking did not.4 In most cases we do not know if the ’relevant difference’ S* is present or not, so no certain conclusion about the individual case can be reached. Therefore, the generic causal relation between smoking and cancer does not automatically entail causation in the individual case.
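The following toy model, written on the deterministic assumption, is only meant to make the logical point explicit; S* and A* are of course hypothetical factors, not known biological properties.

```python
# Deterministic toy model: smoking is sufficient for cancer only together with
# S*, and asbestos exposure only together with A*. Both factors are hypothetical.
def gets_cancer(smokes, s_star, asbestos, a_star):
    return (smokes and s_star) or (asbestos and a_star)

# John smokes and works with asbestos, but has ~S* and A*.
john = dict(smokes=True, s_star=False, asbestos=True, a_star=True)

print(gets_cancer(**john))                         # True: John gets cancer
print(gets_cancer(**{**john, "asbestos": False}))  # False: no cancer without the asbestos
print(gets_cancer(**{**john, "smokes": False}))    # True: cancer even without the smoking
```

Smoking genuinely causes cancer in this model (it is sufficient in combination with S*), and John both smoked and got cancer, yet it was the asbestos and not the smoking that made the difference in his case.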
The situation is rather similar to the one we have when making predictions. If A is an insufficient cause of B, it is assumed that there is some additional event A* together with which A is sufficient for B. If A has occurred, we cannot predict B, unless we know that A* has occurred too. Analogously, we cannot conclude that A caused B when both occurred, unless we know that A* did too. It must be admitted though, that the occurrence of B will sometimes be a good reason to believe that A* occurred.
A probabilist might want to reply here that, if a property like S* were known, smoking together with ~S* would not raise the probability of cancer and would, therefore, not be a cause on the probabilistic theory. The important point, however, is that no such property is known, and that the probabilist has no right to assume that there is one. There is thus no reason for the probabilist to regard the inference as invalid. The determinist, on the other hand, will have to suspend judgment and can only say something like ’smoking is the most likely cause of John’s cancer’.
Furthermore, the argument does not depend on the assumption of ’complementary’ factors like S* and A*. Suppose that we have reason to believe that A can cause B, but that the relation is only statistical, so that B sometimes occurs without being preceded by A. An example could be the bombardment of uranium with neutrons, which causes disintegration of uranium atoms. If B can occur ’spontaneously’, that is, without being preceded by A and, consequently, without being caused by A, then B must presumably sometimes occur spontaneously also when preceded by A. If we consider those occasions when A and B both occurred, we will thus have two classes of B-events, those that are caused by A and those that are spontaneous and that would have occurred even if A had not. But since B is not strictly related to any other event, there is no conceivable way in which we could determine to which class an individual occurrence of B belongs. Consequently, if the connection between cause and effect is essentially statistical, it will never be possible to justify an individual causal statement.
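A small simulation, with arbitrary probabilities, illustrates the predicament: among the occasions on which A and B both occur, a substantial fraction of the B-events would have occurred spontaneously, and nothing about an individual occurrence reveals to which class it belongs.

```python
import random

# Arbitrary illustrative probabilities for an essentially statistical connection.
random.seed(0)
P_SPONTANEOUS = 0.01   # B occurs spontaneously, without being caused by A
P_PRODUCED    = 0.05   # A produces B on a given occasion

both_occurred = 0
spontaneous_b = 0
for _ in range(100_000):           # occasions on which A occurs
    spont = random.random() < P_SPONTANEOUS
    by_a = random.random() < P_PRODUCED
    if spont or by_a:              # A and B both occurred on this occasion
        both_occurred += 1
        if spont:                  # B would have occurred even without A
            spontaneous_b += 1

print(f"spontaneous fraction among A&B cases: {spontaneous_b / both_occurred:.2f}")
```

The simulation bookkeeps which B-events were spontaneous, but that is precisely the information that is unavailable in the real case: no observable feature of an individual occurrence marks it as caused or spontaneous.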
But if inferences from generic statistical to individual causal relations are necessarily invalid, and if all generic causal relations are statistical, then we must either accept invalid inferences or refrain from talking about individual causation at all.
When justifying inferences from generic to individual causal relations, two ’methods’ are commonly employed. The first one, which may be called the ’complementary cause-method’, is that which was illustrated above with the smoking/cancer example and which means that the individual relation obtains if cause and effect have occurred and if some factor which together with the cause is sufficient for the effect has also occurred. The second, which we may call the ’alternative cause-method’, means that the individual relation obtains if cause and effect have occurred and if we have reason to believe that no alternative independent sufficient cause of the effect has occurred. The first ’method’ assumes that a cause presupposes a sufficient cause, and the second that the effect has a cause at all. But both assumptions are deterministic, and it is precisely the rejection of these that is the point of the probabilistic approach to causality.
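Schematically, and in my own paraphrase rather than as formal definitions, the two ’methods’ can be rendered as follows.

```python
# Schematic renderings of the two inference 'methods'; the arguments are
# placeholders for whatever evidence is actually available in a given case.

def complementary_cause_method(a_occurred, b_occurred, complement_occurred):
    # Infer 'A caused B' when A and B occurred and some factor that together
    # with A is sufficient for B also occurred.
    return a_occurred and b_occurred and complement_occurred

def alternative_cause_method(a_occurred, b_occurred, alternative_occurred):
    # Infer 'A caused B' when A and B occurred and no independent sufficient
    # cause of B is believed to have occurred.
    return a_occurred and b_occurred and not alternative_occurred
```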
We have here, it seems, a basic weakness in the probabilistic theory
which does not stem from the particular formulation it has been given by
Suppes, but which results from the indeterminism presupposed by the theory.
4. Eliminative arguments. Talking of the "kind of probabilistic analysis emphasized in [his] monograph", Suppes thinks that this is "characteristic of the scientific methodology of much of biology and the social sciences" (1970, p. 93). I think that Suppes is mistaken here. It is true that statistical or probabilistic techniques are frequently employed in these sciences, but they typically enter in attempts to validate causal hypotheses or to describe incomplete knowledge, and there is nothing that shows that these hypotheses themselves are probabilistic in nature. In fact one can argue that the opposite is true, for statistics are commonly used in a way that presupposes determinism, namely, in various kinds of eliminative arguments.
These arguments all have a very simple basic structure. Jones, who suffered from a universally fatal disease, was given the newly discovered medicine M and recovered. We conclude that the cause of his recovery was M. Why? Because something caused the recovery and, other causes apparently being scarce, M is the most likely candidate. The structure of the argument can be put thus:
(i) Something caused Jones’s recovery.
(ii) Apart from M, no other possible cause of the recovery was present.
(iii) Therefore, M caused the recovery.
Parapsychology. When Uri Geller produces a couple of bent spoons and investigators of paranormal powers claim that no known physical force has been applied, the intended conclusion is that some undiscovered force was responsible for the bending. Skeptical scientists point out that the controls have not been stringent enough to rule out all forms of trickery and deception (they have in fact proved that this occurs), and that the conclusion therefore does not follow. Advocates and critics of Geller’s spoon-bending powers apparently disagree about premise (ii) in the following argument:
(i) Something caused the bending of the spoons.
(ii) No known physical force, including trickery and deception, was applied to the spoons.
(iii) Therefore, some unknown force caused the bending of the spoons.
Control group experiments. A related application of the principle of causality occurs in the interpretation of the control group experiment, the paradigm of research in many branches of science. In such an experiment a number of individuals (people or plants for instance) are randomly allocated to two groups. One of these, the experimental group, receives some kind of treatment (like a medicine or a fertilizer). The other, the control group, is not given the special treatment but is otherwise treated exactly like the former. An effect variable (for instance the percentage cured or the rate of growth) is then measured and compared for the two groups. If they should turn out to differ by an amount that cannot be attributed to chance, it is concluded that the treatment caused the change in the effect variable. The logic behind this inference has been stated by scientists in two slightly different ways.
According to Phillips (1966, p. 96) "The general function of the control group... is to create a situation as nearly equivalent to the experimental group situation as possible. In this way, any resulting differences between the two groups may be attributed to the different treatments accorded to the two groups by the experimenter". That is, we have a difference that must be caused by something. Since everything except the treatment is assumed to be randomly distributed between the groups, it is very unlikely that anything different from the treatment caused the difference. Consequently, the treatment caused the difference.
To Sidman (1960, p. 342) the control group is a "technique for determining whether our experimental results are actually a product of our experimental manipulations, or whether they stem from the operation of some other known or even unsuspected factors". For instance, a recovery from a disease can be caused by factors other than a tested medicine. The control group can be thought of as an attempt to estimate the frequency of such other causes. If half of the patients in the control group get well without the medicine, we may conclude that sufficient causes of recovery occur in about half of the patients. This frequency is then assumed to be the same (within chance limits) in the experimental group. If more treated patients recover than untreated, there will be an unexplained residue, a number of patients whose recovery cannot be attributed to the ’other’ causes.
But something must have caused their recovery, and the only possibility is the treatment. This case differs from the Geller example in two ways. First, there we eliminated known causes and inferred that an unknown event was the cause, whereas here we eliminated unknown causes and inferred that a known event was the cause. Secondly, the last example has a quantitative character that the former lacks. Each recovered patient represents a different event, and with n recovered patients we have n events to be explained. We also have two possible causes, the medical treatment M and the unknown factor X. If the control group yields a reliable estimate of the frequency of X, we may apply this to the experimental group and assume that a certain number x of recoveries can plausibly be attributed to X. The remaining n – x will then be the number of unaccounted-for recoveries that must be attributed to M. The elimination here takes the form of an arithmetical subtraction. The possible causes are classified into two exhaustive and mutually exclusive groups, M: those sufficient causes of which M is a part, and X: those sufficient causes of which M is not a part. n is the number of events to be explained, m is the number of M’s and x the number of X’s. In a deterministic universe it must be true that n = m + x.
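A minimal numerical sketch of this subtraction, with figures invented purely for illustration:

```python
# Invented figures illustrating the subtraction n = m + x.
control_size, control_recovered = 100, 50  # control group: estimates the frequency of X
exp_size, exp_recovered = 100, 80          # experimental group: received the treatment M

p_x = control_recovered / control_size     # estimated frequency of 'other' sufficient causes
x = p_x * exp_size                         # recoveries in the experimental group attributable to X
m = exp_recovered - x                      # unexplained residue, which must be attributed to M

print(f"n = {exp_recovered}, x = {x:.0f}, m = n - x = {m:.0f}")  # n = 80, x = 50, m = 30
```

On these figures about fifty of the eighty recoveries in the experimental group can be attributed to the ’other’ causes, and the remaining thirty must, in a deterministic universe, be attributed to the treatment.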
Quantitative genetics. Suppose that we have a number of genetically dissimilar plants of the same species scattered throughout our garden. The mean height µh of the plants is, say, 20 cm. We now pick out a certain plant x, which happens to have the height H of 30 cm, and we ask ourselves why x is different from the average plant. What is the cause of the H – µh deviation? Is it because the particular spot on which x grew was favorable to height, or is it a result of x’s having a different genotype? Or is it a combination of these factors? Luckily for us, there are in the garden a number of plants of the same genotype as x, and measuring these we find that the average height of those plants that are genetically identical to x is 25 cm. Thus, substituting a plant of the x genotype for an average plant causes an increase in height of 5 cm. Have we now explained the difference H – µh? Well, not completely. We have shown that the genotype of x can explain why x is higher than the average plant, but we have actually only explained half of the total deviation. If we assume that all of the difference is caused, the residual 5 cm must be attributed to the environment.
If we study, not a single individual, but the whole population, we can express the average phenotypic deviation from µh with the phenotypic variance sp2 (the mean of the squared deviations from µh). That part of sp2 that is caused by genetic differences is designated sg2 and the part that is caused by environmental differences is designated se2. Sometimes a certain environment will have different effects on different genotypes, so that we get a cause of phenotypic variance that is not simply an additive effect of genes and environment. This is called the gene-environment interaction sge2. Because of the definitions of the terms involved, genes, environment (that is, everything not genetic), additive and non-additive components, sg2, se2 and sge2 represent an exhaustive and mutually exclusive classification of the potential causes of phenotypic variance. If it is assumed that all the variance is caused, we get the well-known linear model of quantitative genetics,
(1) sp2 = sg2 + se2 + sge2.
The kind of argument in which (1) is used is illustrated by the following quotation from the standard reference work on quantitative genetics: "If one or other component could be completely eliminated, the remaining phenotypic variance would provide an estimate of the remaining component." For instance, "If a group of... [genetically identical] individuals is raised under the normal range of environmental circumstances, their phenotypic variance provides an estimate of the environmental variance... Subtraction of this from the phenotypic variance of a genetically mixed population then gives an estimate of the genotypic variance of this population."6 (Falconer 1960, p. 130). If we found, in our original example, a phenotypic variance in height of h and a variance of e in the selected genotype, we could estimate the variation in height caused by genetic differences to be h – e. Clearly, it is assumed here that all the variance is caused.
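The same subtraction can be written out with invented measurements; the heights below are arbitrary and serve only to show how the estimate h – e is obtained.

```python
from statistics import pvariance

# Invented plant heights (cm), purely illustrative.
mixed_population = [14, 18, 20, 22, 25, 30, 17, 21, 23, 20]  # genetically mixed plants
identical_clones = [27, 29, 30, 31, 33, 28, 32, 30, 29, 31]  # plants of a single genotype

h = pvariance(mixed_population)  # phenotypic variance of the mixed population (sp2)
e = pvariance(identical_clones)  # variance among clones: environmental causes only (se2)
g = h - e                        # estimated genetic variance, assuming sp2 = sg2 + se2
                                 # (the interaction term sge2 is here taken to be negligible)

print(f"h = {h:.1f}, e = {e:.1f}, estimated genetic variance h - e = {g:.1f}")
# h = 17.8, e = 3.0, h - e = 14.8
```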
Many philosophers have noted that eliminative arguments of the kind discussed above do not require that every event is caused (see, e.g., Pap 1962). They only require that the events under investigation, spoon-bending, recovery and phenotypic variance, have causes. Although this point is certainly correct, its importance must not be exaggerated. If scientists are always prepared to assume that the events they are investigating are caused, they will in practice always work with a deterministic assumption.
The mere existence of eliminative arguments does not prove that the
testing of causal hypotheses always presupposes determinism, or that such
a presupposition is necessary. At best it shows that determinism is sometimes
assumed. If there were other equally good ways of testing causal hypotheses,
an indeterminist might think that these should be used instead. But the
existence of such methods is doubtful. It is impossible to defend this
view here, but I am pretty certain that practically every experimentally
supported causal law in the biological and social sciences is, directly
or indirectly, based on eliminative arguments. Far from being "characteristic
of the scientific methodology" of these sciences, indeterminism would make
their task virtually impossible.
5. Conclusions. It should be clear that we have not been concerned here with the truth or falsity of determinism as a metaphysical thesis. I have only argued that the concept of causation as it is used in many branches of science can only work within a deterministic framework. This view seems to be shared by many writers. Anscombe, for instance, although a critic of determinism herself, writes that "So firmly rooted is it that for many even outside pure philosophy, it routinely determines the meaning of ’cause’... It is, indeed, a bit of Weltanschauung: it helps to form a cast of mind which is characteristic of our whole culture." (Anscombe 1975, p. 64) If Anscombe means that determinism is part of the meaning of ’cause’, she may be going a bit too far, but only a bit.
The arguments of the last two sections show, I think, that deterministic assumptions are necessary for inferences from generic to individual causal relations as well as for a variety of eliminative arguments. Inferences of these kinds are so central to causal reasoning that without them the concept of causation would hardly be recognizable. Thus, even if it would be rash to conclude that indeterminism would make the term ’cause’ meaningless in some semantical sense, it appears that a rejection of determinism would make causation a useless concept.
The principle of causality has often been construed as a heuristic device, "a guiding principle of causal inquiry", that " ’guides’ the scientist in his search for a difference in antecedent conditions to account for the fact that apparently similar antecedents were followed by dissimilar effects." (Pap 1962, p. 311) A belief in determinism would thus be a disposition to look for causes wherever causes are possible. But if such a readiness to look for causes is to be rational, it must be guided by the belief that there are causes to be found. As Pap notes, the distinction between the belief in a thesis and a predisposition to act as if it were true is somewhat arbitrary. Furthermore, if the principle only said that we should look for causes, it would be far too weak to serve the purposes of eliminative induction and inferences from generic to individual causal relations. Both cases require a statement of fact and not just a rule of procedure.
If we do not believe in determinism or if we wish to leave the question
of its truth open, we will be faced with the dilemma that a concept that
we want to use and can hardly avoid using7 requires assumptions
which we are not willing to make. The only course I am able to suggest
is that in analyzing causation we should adopt the deterministic
approach, but we should apply this concept only to deterministic
systems or to events which can reasonably be assumed to be sufficiently
caused. Should we feel the need for a concept with a wider range of application,
for instance to quantum mechanics, it might be a good idea to try to develop
a novel concept, perhaps something like Suppes’ theory. We should
be clear, however, that this concept of causation will not be the same
as that currently used in science.
REFERENCES
Anscombe, G. E. M. (1975), "Causality and Determination." In E. Sosa (ed.) Causation and Conditionals. London: Oxford University Press.
Falconer, D. S. (1960), Introduction to Quantitative Genetics. London: Oliver and Boyd.
Feinstein, A. (1971), "How do we measure ’safety and efficacy’?" Clinical Pharmacology and Therapeutics 12: 544 – 558.
Hesslow, G. (1976), "Two Notes on the Probabilistic Approach to Causality." Philosophy of Science 43: 290 – 92.
Lewontin, R. (1974), "The Analysis of Variance and the Analysis of Causes." American Journal of Human Genetics 26: 400 – 411.
Pap, A. (1962), An Introduction to the Philosophy of Science. New York: The Free Press.
Phillips, B. S. (1966), Social Research: Strategy and Tactics. New York: The Macmillan Company.
Robbins, S. L. and M. Angell (1977), Basic Pathology. Philadelphia: W. B. Saunders Company.
Rosen, D. (1975), "An Argument for the Logical Notion of a Memory Trace." Philosophy of Science 42: 1 – 10.
Rosen, D. (1978), "In Defense of a Probabilistic Theory of Causality." Philosophy of Science 45: 604 – 613.
Salmon, W. C. (1971), Statistical Explanation and Statistical Relevance. Pittsburgh: University of Pittsburgh Press.
Sidman, M. (1960), Tactics of Scientific Research. New York: Basic Books.
Suppes, P. (1970), A Probabilistic Theory of Causality. Amsterdam: North-Holland.