Statistical Determinism:
The Odds get Odder and
Necessity gets Even
Paul Saka
Abstract. Libertarians claim that we
do or might have free will because determinism is wrong, and that determinism
is wrong because of indeterminacy at the quantum level, or because laws in the
human sciences are fundamentally statistical.
It is my thesis, however, that determinism is entailed not only by
classical laws of physics but by statistical laws as well.
Statistical laws, interpreted extensionally and combined
with facts about other people having actual existence, entail unique outcomes
in personal behavior. Statistical laws,
interpreted intensionally and combined with facts about other people having
counterfactual existence, also entail unique outcomes. In order to make my case, I discuss possible
interpretations of probability, confidence intervals and levels, the nomic
status of statistical generalizations, and the gambler's fallacy. Finally, aside from arguing that
probabilistic laws do not provide for branching possibilities as libertarianism
requires, I argue that even where branching possibilities exist they do not ground
freedom.
My thesis is that fundamentally
probabilistic laws, be they in quantum physics or the human sciences, entail a
certain kind of determinism. This kind
of determinism contradicts libertarianism, which is entertained by numerous
philosophers, is scattered through the physics and neuroscience literature, and
is found among jurists and the lay population.[1] In establishing
determinism, my argument outstrips the standard objection to libertarianism,
that indeterminacy amounts to mere randomness, and mere randomness is not
sufficient for free will.[2]
Glymour 1971 distinguishes two features that are often
regarded as standing or falling together: the determinateness of quantities and
the impossibility of forks in history.
Kane 1996 observes that they pull apart in Epicureanism, according to
which laws and physical magnitudes are exact (and hence quantities are
determinate), but not all events are subsumed by law (because of the swerve),
thus allowing for divergent possible histories.
Kane develops this point, citing Earman's 1986 argument that strict
Newtonian laws may issue in indeterminism, but he overlooks another logically
possible option. Perhaps a physical
system may be indeterminate in its present properties and regularities, while
closed to all but one way of unfolding.
My work may thus also be seen as a contribution to the project of
disentangling the strictness of laws from determinism proper.
After presenting definitions and my
basic argument for statistical determinism, the bulk of my paper turns to
forestalling objections. The concluding
section sketches independent considerations against libertarianism.
1. The Basic Argument
We can say that condition or fact P determines or necessitates
condition or fact Q so long as the proposition that-Q necessarily follows from,
or is a consequence of, the proposition that-P (cp. Earman 1986). This
characterization may fail to get at the heart of determinism – I think James
1884 and Kane 1996 do better – and it certainly fails to get at a number of
complexities. Nonetheless I believe it
is fairly safe in that many competing accounts of determinism would agree to
it, and more to the point it is what's relevant to the question of
libertarianism since it feeds into the Consequence Argument (see van Inwagen
1983).
My version of the Consequence Argument relies on two principles
[semantic turnstile represented by |=]:
(α) If P |= Q then N(P → Q), where "N" means "it's not up to one to decide";
(β) If NP and N(P → Q) then NQ.
According
to (α) and (β), if P determines Q and if P is not up to us then Q is
not up to us. The
threat to free will thus emerges without our having to suppose or establish
that Q is categorically and metaphysically necessary, only that it is necessitated by events beyond our control.
It is because of the Consequence
Argument that libertarians deny classical determinism. According to classical determinism, the laws
of nature are all strict rather than statistical, and they are sufficient to
determine, given conditions that are beyond our control, all of our actions:
strict laws & independent conditions |= personal actions. Yet events obviously happen according to
patterns, even if the patterns do not conform to strict laws. For this reason opponents of classical
determinism point to the statistical laws of contemporary physics and of the
human sciences.
However, I shall argue that statistical laws necessitate
morally fraught outcomes, at least in "pivotal" cases: statistical
laws & independent conditions |= personal actions. To begin, consider statement (G), a
generalization about gluttony:
(G) For all human subjects S under some
particular condition C, the probability that S will overeat is .6.
The
libertarian's idea here is that we have various causal factors implicated in
eating habits (who could deny that?), and hence we have a derived law
(G), but it does not necessitate any one individual's behavior, thus leaving
room for free will. The question arises,
however: What does such a probability statement mean?
According to one interpretation,
probability means relative frequency
(actual and finite). To say that there
is a 20% chance of rain tomorrow, given current conditions, is to say that in
20% of all cases matching current conditions, it rains on the subsequent day.
Now take a scenario where condition
C refers exclusively to a certain cafeteria on a certain day. A total of 100 customers will go through it,
40 have already done so without overindulging, and next you step up. Will you
overindulge? According to (G), you have
to. For if you do not, then at least 41%
of the customers will not overindulge, and (G) would be false, contrary to
hypothesis. Indeed, the argument applies
to each customer, even the earliest.
Given (G), which refers to C, and given that C
involves some specified number of subjects, and given the actions of all
subjects other than S, we can deduce S's exact behavior, for any subject S in
C.
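To make the arithmetic of the basic argument explicit, here is a minimal sketch under the finite relative-frequency reading of (G). The cafeteria figures are the ones used above; the helper function forced_outcome and its Python formulation are merely illustrative assumptions of mine, not part of the paper.

    # Under the finite-frequency reading, (G) fixes how many of the 100 diners overeat.
    # Knowing everyone else's behavior then entails the remaining subject's behavior.
    def forced_outcome(total, frequency, others_overeat):
        """others_overeat: list of True/False for every subject other than S.
        Returns S's entailed behavior, or raises if (G) is already unsatisfiable."""
        required = round(total * frequency)            # (G) demands exactly this many overeaters
        still_needed = required - sum(others_overeat)  # overeaters that must come from S alone
        if still_needed == 1:
            return True    # S must overeat, on pain of falsifying (G)
        if still_needed == 0:
            return False   # S must abstain, on pain of falsifying (G)
        raise ValueError("(G) cannot be true of this population")

    # 100 diners, 40 have already abstained; every one of the remaining 60, considered
    # as the subject S with all other behavior fixed, is forced to overeat.
    others = [False] * 40 + [True] * 59
    print(forced_outcome(100, 0.6, others))   # True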
My argument invites some obvious
objections which I will address one by one. They involve alternative
interpretations of probability, confidence intervals and levels, innumerate
probability, the gambler's fallacy, and the nomic status of (G). [3]
2. Rival Interpretations
My basic argument invokes one particular interpretation of
probability, that of finite relative frequency.
There are, however, other possible interpretations.
Sometimes probability is said to be
the limit of relative frequency in a
random infinite series. If we
think of an infinite series as an actual series, however, there may be precious
few probabilities in the universe.
Although we might be able to speak of the probability of a random number
being prime, for instance, we could not speak of the probability of any
physical event, assuming that there are only finitely many physical particles
and events in the universe.
So let us think instead of infinities as hypothetical. Does it make sense to say that (G) is best
interpreted as (G')?
(G) For
all human subjects S in condition C, the probability that S will overeat is .6.
(G') If there were infinitely many human subjects in condition C, then the proportion of those who overeat to the total number of subjects would approach .6.
The
truth-conditions of (G) cannot be explicated by those of (G'), for the fantasy
depicted by statement (G') is too indeterminate to have truth-conditions. Nor does it do any good to replace our
infinite series with an arbitrarily large one.
For no matter how large a series is, if it is finite then my original argument
applies.
Instead of invoking one hypothetical
infinity as in (G'), perhaps we should invoke an infinite number of
hypotheticals as in (G"):
(G") For all of the infinitely many possible human
subjects S in condition C, the proportion of those who overeat to the total
number of those who do not approaches .6.
This admits two interpretations. According to the doctrine of counterparts, an individual S₁ exists in only one possible world, though there be counterparts S₂, S₃ ... each separately existing in other worlds. To say that S₁ acts but could abstain is to say that at least one counterpart does abstain. According to the doctrine of transworld identity, an individual may exist in many possible worlds, and to say that S acts but could abstain is to say that in some other world S herself does abstain. Now both theses can be turned around: to enumerate the doings of all the counterparts is to say what S₁ can do and it is to entail what S₁ does do; to enumerate the goings on in all non-actual possible worlds is to say what can actually go on and it is to entail what does go on.
For example, suppose that there exists exactly one possible agent and one degree of freedom, to do A or not, for a total of two possible worlds. Then this proposition combined with the proposition that S₁ does A entails that S₂ does not do A. More generally, the actual world wᵢ (more precisely, the proposition that a given world is actual) necessarily follows from the facts that (i) w₁ … wₙ exhaust all possible worlds, (ii) there exists exactly one actual world, and (iii) all but wᵢ are mere possibilities. To put
the matter more abstractly, in my original argument a statistical law,
interpreted extensionally, is combined with facts about other people, having
actual existence; and from this combination we can deduce a given person's
behavior. In my
new argument a statistical law, interpreted intensionally, is combined with
facts about other people, having counterfactual existence; and from this
combination follows a given person's actual behavior. Either way, probability-governed
behavior is necessitated by facts outside one's
control.
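The same elimination can be written over an explicitly finite stock of worlds. The two-world setup is the one just described; the dictionary representation and the variable names are illustrative assumptions of mine.

    # Facts (i)-(iii): the worlds below exhaust the possibilities, exactly one is actual,
    # and w2 is stipulated to be a mere possibility. The actual facts then follow.
    possible_worlds = {"w1": {"does_A": True}, "w2": {"does_A": False}}
    mere_possibilities = {"w2"}
    candidates = [w for w in possible_worlds if w not in mere_possibilities]
    assert len(candidates) == 1                     # uniqueness of actuality
    actual = candidates[0]
    print(actual, possible_worlds[actual])          # w1 {'does_A': True}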
The libertarian might reply:
"Granted that my counterpart S' and I are in a complementary relation
inasmuch as we make exactly the same choices and act alike except when it comes
to decision F, whereupon I choose +F and S' chooses –F. But perhaps the symmetry breaks down inasmuch
as free will is all located in the actual world; perhaps subjects do not enjoy
equal status; perhaps, in order to be in control of myself, I am the
puppetmaster of all my counterparts: I freely choose +F, which forces S' to
choose –F; S' chooses -F, but without freedom and without forcing my
choice." This suggestion, however,
is untenable. If all free will is
located in the actual world then it would not make sense to say, "under
other circumstances I could have freely acted." If there is no possible world in which I both
perform –F and enjoy free will, then in order for my action to be free I must
perform +F. But to say I must
perform +F in order to be free is, by libertarian lights, self-defeating.
Technicalities aside, the idea that
the actual world is the only one that counts for moral reckoning appears to
introduce a double standard. Why would
the libertarian insist on the relevance, to moral accountability, of alternate
possibilities, only to pooh-pooh it in the next breath? I conclude that infinitary interpretations,
as much as finite-frequency interpretations, spell trouble for libertarians and
indeed indeterminists generally.
According to the subjectivist view, probability is
degree of belief, or better yet degree of rational belief, an epistemic
notion. For me to rightfully say that
the probability of an event's happening is .5 is just for me to be rationally
willing to bet, at even odds, that it will happen; and that is
controlled by the body of evidence that is available to me.
Although many who debate determinism
speak of predictability, that is simply because epistemic discourse (a) entails
ontic discourse (b), which is the real issue at stake (as noted by many others,
for instance Garson 1995).
(a) We
know or can know that, because of laws L and initial condition P, Q ensues.
(b) Because
of laws L and initial condition P, Q ensues.
However,
for all sorts of reasons (a) might be false while (b) is true.[4] If external nature necessitates your actions,
then regardless of whether anyone knows it or even could know it, your actions
would not be up to you, which is all that the libertarian's Consequence
Argument requires.[5]
Finally, probability is sometimes
understood as propensity. The intuition is that individual events
possess objective probability (this coin at the last toss had a
50% chance of landing heads, even though we now know it landed tails). This means, in terms of (G), that even if all of the first 60 of the 100 diners overeat, there is still a 60% chance that the next diner will overeat.
Unfortunately, propensity is a contested notion even among
those who believe in it. For long-run propensity
theories, as found in Popper's classic 1959 work and more recently Gillies
2000, propensities apply to collections and never to individuals. This obviously offers no escape from the
basic argument. Single-case propensity
theories, on the other hand, come with unacceptable costs. Because the versions due to Popper 1990 and
Miller 1996 define propensity in terms of unrepeatable conditions, basic
propensities are admittedly "not open to empirical evaluation"
(Miller, 139; compare Popper): there can be no evidence for asserting a
probabilistic law, for denying it, for revising it, or even for entertaining it
in the first place; no conceivable experience whatsoever. (The same applies to Giere 1973 according to
Howson & Urbach 1989.) Meanwhile
Levi 1980 and Lewis 1981 forge a connection between probability and empirical
fact by sheer stipulation (Howson & Urbach 1989). Their theories are completely arbitrary and
could just as well be replaced by diametrically opposed ones. Finally, the version due to Fetzer 1981, to avoid running afoul of Humphreys' paradox, desperately abandons the standard probability axioms as used by all other probability theorists.
More fundamentally, the postulation of propensity hardly
averts the argument to statistical determinism.
After all, the guiding intuition behind single-case propensity involves
the relative frequency of a singular event's being repeated in other possible
worlds; and frequencies across possible worlds, as already indicated, yield the
same deterministic results that extensional frequencies do
To sum up, I have surveyed prominent interpretations of
probability. Subjective
probabilities and actual infinite frequencies are irrelevant to the matter at
hand. Propensities
are variously irrelevant, eccentric, and occult, and moreover
they feed determinism, as do fantasy frequencies generally.
3. Margins of Error
While our statistical law (G) cites a single exact probability, more realistic laws state ranges of probability, as in (H).
(G) For
all subjects S in condition C, there is a 60% chance that S will overeat.
(H) For all subjects S in condition C, there is a 60% chance, ±10%, that S will overeat.
Such
ranges are usually known as "confidence intervals" or "margins
of error," but both labels are misleading.
They sound epistemic, yet they need not be:
instead of expressing imprecise knowledge they might refer to imprecise facts
of the matter.
Recall our original story: in a
certain cafeteria on a certain day, exactly 100 customers will go through, and
40 have already done so, without having overindulged. Now you step up as customer #41; will you
overindulge? According to (G), you have
to, but not according to (H). For (H)
asserts that anywhere from 50% to 70% of the customers will overeat – that 30-50%
will not – so in this case whether you overeat or not is open to you.
However, suppose that our cafeteria
again has 100 customers, and this time 50 have gone through without
overeating. Then given law (H), even
though it has a margin of error built in, it necessarily follows that
the next customer overeats. In short,
under some conditions introducing a second level of indeterminacy fails to
block the inference to determinism.
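The "pivotal case" test can be sketched in a few lines. The 50-70% interval and the cafeteria counts are those of (H) above; the helper next_subject_status and its formulation are illustrative assumptions of mine.

    # Is the next subject's behavior forced by an interval law like (H), given the record so far?
    def next_subject_status(total, low, high, overeaten, abstained):
        remaining = total - overeaten - abstained - 1   # subjects still to come after the next one
        def compatible(final_min, final_max):           # can the final count land inside [low, high]?
            return final_max >= low * total and final_min <= high * total
        can_abstain = compatible(overeaten, overeaten + remaining)
        can_overeat = compatible(overeaten + 1, overeaten + 1 + remaining)
        if can_abstain and can_overeat:
            return "open"
        return "must overeat" if can_overeat else "must abstain"

    print(next_subject_status(100, 0.5, 0.7, overeaten=0, abstained=40))  # open
    print(next_subject_status(100, 0.5, 0.7, overeaten=0, abstained=50))  # must overeat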
The scenario just sketched presents
a pivotal case inasmuch as the validity of the law hinges on the action
of just one agent. If all events are
pivotal, or none, then we have either universal statistical determinism or the
failure of statistical determinism; otherwise we have partial statistical
determinism. I imagine that partial
statistical determinism holds, which suffices to invalidate libertarianism. If the behavior of
even one subject S is determined then, by libertarian lights, S is unfree in
the moral sense (S is not responsible, S is not to be praised or blamed); and
if S is morally unfree then, by common acclaim – libertarian, compatibilist,
and hard determinist alike – everyone who is relevantly like S is unfree. (By relevant
similarity I mean to exclude comparing someone who has been ordered at gunpoint to overeat with one who has not; the point is to compare, for instance, two
subjects who are identical except that one happens to eat in a cafeteria where
less than 30% of the previous customers chose to overeat while the other eats
in a cafeteria where more than 30% of the previous customers chose to
overeat.) To summarize, in only some
cases does a statistical law (plus ancillary conditions) determine a unique
outcome; but this suffices to make all actions covered by statistical laws
unfree, if freedom requires indeterminism.
Even if there are no actual pivotal
events, their very epistemic possibility subverts libertarianism. Applying modal logic as articulated in ____
2000 we get the Epistemic Statistical Determinism Argument, where premise (a) is supported by everything I've been arguing:
(a) For all we know, John Hancock's signing of the Declaration of Independence was a pivotal event and thus necessitated.
(b) If an event is necessitated then it is morally unfree. [libertarian premise]
(c) Hence, for all we know, Hancock's signing was unfree. [from (a), (b)]
(d) But we know Hancock's signing was free. [by common acclaim]
(e) Therefore libertarianism is false. [by reductio]
For
comparison, consider the Epistemic Strict Determinism Argument, which retains
lines (b)-(e) and replaces (a) by:
(a') For all we
know, Hancock's signing was classically determined and thus necessitated.
Libertarians
could reject this modified argument by denying (a'). They could claim that we know
that classical determinism is false, and in support they could invoke quantum
physics. Denying the original line (a),
however, seems not to be an option. Statistical laws do threaten to
create pivotal cases, the only question being whether such contingencies are
actualized, and to recognize this is to say that, for all we know, they exist. But the upshot of
this, as we have seen, is that libertarianism is false.
4. Levels of Confidence
Although statements (G) and (H) differ in exactitude, both
are definite: (G) is true if and only if the actual frequency is exactly
60%, (H) is true if and only if the actual frequency falls between 50% and
70%. In contrast, the indefinite
statements (G*, H*) could be true regardless of what the actual frequencies
are.
(G*) For all S in condition C there is a 20% probability ("confidence level") that 60% (±0%) of S will overeat.
(H*) For all S in condition C there is a 95% probability ("confidence level") that 60% ±10% of S will overeat.
Notice
that as exactitude grows (as interval shrinks), confidence level tends to
diminish. At one extreme we have 100% confidence in predicting that 60% of anything, ±60%, has a given property F; that is, we can be utterly certain that some percentage of x's, from 0 to 100, is F.
At the other extreme, if we say that precisely n% of x's are F,
no more and no less, then we run considerable risk of being wrong.
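The trade-off between interval width and confidence level can be illustrated numerically. The binomial model below (100 independent subjects, each with a .6 chance of overeating) is an assumption of mine, not part of the paper, and the exact figures would shift under a different model; only the direction of the trade-off matters here.

    from math import comb

    def coverage(n, p, center, halfwidth):
        """Probability that the observed overeating frequency lands in center +/- halfwidth,
        under an assumed binomial model (an illustrative assumption, not the paper's)."""
        lo, hi = (center - halfwidth) * n, (center + halfwidth) * n
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n + 1) if lo <= k <= hi)

    for w in (0.0, 0.10, 0.60):
        print(f"interval +/-{w:.0%}: confidence about {coverage(100, 0.6, 0.6, w):.2f}")
    # +/-0%  : about 0.08  (a pinpoint claim carries little confidence)
    # +/-10% : about 0.97
    # +/-60% : 1.00        (the vacuous claim is certain)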
I have said that the actual
frequency of events does not determine the truth or falsity of statements like
(G*, H*). This means that you could overeat
or not overeat without violating (G*) or (H*), regardless of the behavior of
other diners. But if neither your
behavior nor that of anyone else affects the truth of general laws, what then
does?
One possibility is that confidence
levels mark the epistemic reliability of a system. For example, suppose that over the course of
your lifetime you issue one hundred statements at a confidence-level of
95%. Then if your statements were all
true, 95 of their object clauses should be true. (In a statement like "There is an n%
chance that P", I call P the object clause.) Alternatively, instead of taking a cognitive
system to be an individual over a lifetime, we might take it to be an
institution, method, discipline, school of thought, intellectual tradition, or
body of data and/or doctrine. At any
rate, after delineating some collection of statements issued at a confidence
level of 95%, we can say that they are all true if and only if 95% of their
object clauses are true.
Although this epistemological approach
primarily concerns the truth-conducive reliability of cognitive systems
(however they be identified), it has ontological implications. For suppose that, of the object clauses you hold at the 20% confidence level, the full 80% quota of falsehoods is already used up by clauses other than that of (G*). Then the object clause of (G*) must be true, in which case the basic argument runs as before.
Although the very term "level of confidence"
suggests level of subjective certainty, it could also be taken as a brute
indeterminacy, as an absence of a strict fact of the matter. This ontological construal might be developed
in terms of possible worlds. To say (H*) is to say that, in 95% of all possible worlds, 60% ±10% of the diners will overeat. However, this
interpretation too fails to save the indeterminist. To begin with, suppose that the number of
possible worlds is finite; indeed, for the sake of simplicity let us suppose
that there are 100 possible worlds, including the actual world and 99
non-actual worlds. Suppose furthermore
that, in 5 non-actual worlds, the number of diners who overeat falls outside
the 50%-70% interval. Then law (H*)
entails that, in the actual world, the number of diners who overeat
falls inside the 50%-70% interval. It
now follows, precisely as before, that the diners in the actual world are
determined to act as they do.
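The counting behind this paragraph can be written out. The 100-world setup and the 5 errant worlds are the ones just stipulated; the few lines of bookkeeping are mine.

    # (H*) on the possible-worlds reading: 95 of the 100 worlds must have an overeating
    # rate inside 50-70%; 5 of the 99 non-actual worlds are stipulated to fall outside.
    worlds, confidence = 100, 0.95
    required_inside = round(confidence * worlds)        # 95 worlds must comply
    nonactual_inside = (worlds - 1) - 5                  # 94 compliant non-actual worlds
    # The quota can be met only if the actual world also falls inside the interval:
    assert nonactual_inside + 1 == required_inside
    print("the actual diners are determined to fall inside the 50-70% interval")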
Alternatively, let us suppose that there is an infinite
number of possible worlds. As you might
extrapolate from my discussion of infinite hypotheticalities in section 2, this
move too leads to determinism. As
before, a statistical law, interpreted intensionally, combines with intensional
facts to provide conditions that sometimes necessitate a given person's
behavior. Probability-governed behavior
remains determined.
5. Innumerate Probabilities
So far I have considered simple probabilities, confidence
intervals, and confidence levels, which all involve cardinal numbers. But what if probability need not involve
cardinal numbers, as argued by Keynes 1921?
Applying this idea to the free-will discussion, Nozick writes: "I
am not suggesting... a well-defined probability distribution... there are not
fixed factual probabilities for each action, there is no such dispositional
propensity or limit of long-run frequencies or whatever" (1981, p. 302). In the same vein Kane writes: "with
indeterminate efforts, exact sameness is not defined; nor is exact difference
either... That is what indeterminacy amounts to" (1996, p. 171).
This approach might seem to derail
my basic argument. However, even if we
cannot assign cardinal numbers to the probability of an event, we can always
say something about its ordinal probability.
It is more probable that climatic changes will trigger another dark age within the next two centuries than that they will within the next two decades. This establishes
that the innumeracy thesis is untenable; even when propositions lack absolute
numerical values, it would make sense to assign some proposition an arbitrary
value and then to assign commensurable propositions relative values. Furthermore, while we cannot absolutely
measure the probability of, say, someone's committing suicide, we can say that
it is greater if they are socially isolated than if they enjoy caring
human contact. Indeed, if we could not
affect the probability of human action by means of praising, censuring,
modeling, and so forth, then attempts at moral education would be fruitless and
would never be pursued. Because even
libertarians believe in the efficacy of suasion, they must admit that
probability comes in amounts, and furthermore that laws of the following sort
can be found.
(6) Under normal conditions, the probability
that a child of white-collar workers will go to college is greater than the
probability that a child of blue-collar workers will.
Now
suppose that all white-collar children have had their chance to go to college
or not, that a certain percentage have, that all blue-collar children except S
have had their chance, and that the only way for law (6) to be true is for S to
drop out of school. Then
given the assumption that (6) is a true law, it necessarily follows that S
drops out of school, and this in spite of the fact that the law is formulated
without any cardinal values whatsoever.
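For concreteness, here is a sketch of how a purely ordinal law like (6) can still force a single outcome. The population counts are illustrative assumptions of mine and nothing turns on them.

    # Law (6): the white-collar college rate exceeds the blue-collar rate. Everyone has
    # chosen except one blue-collar child S; under some tallies S's choice is entailed.
    def status_of_S(white_went, white_total, blue_went, blue_total):
        white_rate = white_went / white_total
        rate_if_S_goes = (blue_went + 1) / blue_total
        rate_if_S_stays = blue_went / blue_total
        if rate_if_S_goes >= white_rate > rate_if_S_stays:
            return "S must drop out for (6) to hold"
        if rate_if_S_goes < white_rate:
            return "S's choice is left open by (6)"
        return "(6) is false no matter what S does"

    print(status_of_S(white_went=60, white_total=100, blue_went=59, blue_total=100))
    # S must drop out for (6) to hold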
In sum, the appeal to completely innumerate probabilities is untenable, while the appeal to ordinal probabilities still permits a deduction to determinism.
6. The Gambler's Fallacy
Is my argument just a version of the gambler's fallacy? According to the gambler's fallacy, it is a
mistake to use prior outcomes of gambling devices to predict future
outcomes. First, it is a mistake to make
the persistence assumption that dice that have always landed odds in the
previous one hundred throws will land odds on the next throw, for there is no
such thing as a lucky streak. Second, it
is equally a mistake to make the compensation assumption that said dice
will probably land evens on the next throw, as a consequence of some law of
averages.
Both assumptions are indeed
fallacious when misapplied. For example,
the persistence assumption is fallacious when we know that we are dealing with
gambling devices that are truly random. In practice, granted, were I to
notice that a certain pair of dice always landed odds over an extensive history
of throws, I would assume that the dice were loaded, and I would bet
that the next throw would yield odds.
But this changes the topic from that of objective probability to
justified belief.
It is the compensation assumption
that is at issue here.
Granted, track record is definitely sometimes irrelevant and
should be ignored. But
when we speak of objective frequencies as I am doing, be they actual or
forthright fantasy or cloaked as "propensity", then it is a simple
mathematical truth that the values in one subsequence, combined with a statement
about the whole sequence, necessitate the values in the complementary
subsequence. To
put it another way, the correct analogy is to cards rather than dice. If I know that there is generally a 4/52
chance of getting an ace in a given turn, and if I also know that all four aces
have already appeared, then I do in fact know that on my next turn I will not
get an ace. The soundness of gambler
reasoning all depends on the game one plays!
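The contrast can be put in a few lines; the ace example is the one above, and the die comparison simply restates why the compensation assumption fails for independent trials. The snippet is an illustrative sketch of mine.

    # Cards: a fixed finite population, so the record plus the whole-deck fact entails the rest.
    deck_aces, deck_size = 4, 52
    aces_seen, cards_seen = 4, 20          # all four aces have already appeared
    print((deck_aces - aces_seen) / (deck_size - cards_seen))   # 0.0: the next card cannot be an ace

    # Dice: independent repeatable trials, so a run of odd results leaves the next throw untouched.
    print(3 / 6)                           # 0.5, whatever the previous hundred throws were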
Of course if I initially accept
generalization (G) and thereupon learn that the first 60 out of 100 diners in C
overeat, then in practice I would normally not deduce the behavior of
diner #61. The reason is that in
practice I would normally take (G) as an approximation of the truth and as
expressing mere subjective probability.
But this is irrelevant, for I have been asking you to imagine a case
involving ontic probability. If a law
decrees that up to n% of a population shall do A, and if a given n% of the
people do A, and if the law is true rather than just provisionally
posited, then it necessarily follows that the remainder of the
population does not do A.
To summarize: Human behavior that invites moral appraisal
is systematic inasmuch as it exhibits recurring patterns. Hence it is described by true generalizations
and, inferring to the best explanation for why this should be so, it is
governed by true nomic generalizations (laws).
For the sake of argument, I assume that the fundamental laws are
ineliminably probabilistic in character, although it is possible that the
probabilistic laws of the human sciences are merely corollaries of strict
laws. Without omniscient access to these
true laws, we must make do with laws formulated in terms of epistemic
probability, which have no bearing on determinism. Nonetheless ontic laws are presumed to exist,
and from these – even if we cannot know them well enough to identify what
deductions necessarily follow – we know that deductions of morally
appraisable human behavior do necessarily follow.
7. Semantic Determinism
Some of my deductions refer to possible worlds. But since possible worlds do not stand in any
temporal order to each other, my counterparts in other worlds do not act before
I do. In what sense then do they cause
me to act the way I act? By the same
token, if the early cafeteria diners cause me to overeat, and if we are all in
symmetric situations, then I, in combination with others, cause them not to
overeat. But how can my eating after
they do have such causal power?
I concede that we are dealing with
an unusual sense of the word "cause".
This makes my argument for determinism radically different from the
classical accounts; it is more like semantic determinism.[6] According to semantic determinism, (7) is
derivable from (8) plus laws or facts of semantics; (8), a fact about the
distant past and therefore apparently not up to me, necessitates (7).
(7) The
statement "____ is typing now in 2003" is true today.
(8) The statement "____ will be typing in
2003" was true years ago, in fact it was true before I was ever born.
Similarly,
according to statistical determinism, (9) follows from (10) plus law (G) and
incidental facts; (10), a fact about other people and therefore apparently not
up to me, necessitates (9).
(9) I
do not overeat.
(10) The
first 60 of those in condition C overeat.
However,
there are differences between semantic and statistical determinism. For one thing, (7) and (8) are formally
symmetric – you can derive either one from the other (given semantic laws) –
whereas (9) and (10) are not, even given (G) and background information. For another, it seems intuitively clear that
the truth of (8) depends on that of (7) and not the other way around, whereas
(10) does not seem to depend on (9). In
other words, the postulation of backwards causation – whereby (7) makes (8) true – defuses the moral significance of semantic determinism but not of my own. I conclude that statistical
determinism is not just a variant of semantic determinism.
It might be argued that while I am
not in sole control of the truth or falsity of (9), the collectivity to which I
belong is. On this view, the agency
involved in determining the distribution of over-eaters and under-eaters is
holistic rather than individualistic; any one person's behavior is jointly a
function of that of all others, and the collective behavior receives
contributions from each individual member.
Accordingly, while I suffer diminished control of my own actions, I
enjoy added control over the actions of others, the net result being that I
have as much control over the future as libertarians ordinarily think. However, if everyone has equal control over
the totality of future outcomes then everyone is equally responsible for
whatever good and whatever ill happen; genuine individual responsibility would
not make sense.
Supposing I am an agent who enjoys
free will, can we say that I am, in part, freely responsible for whatever true
generalizations there are that cover me?
I admit we can. Can we, however,
say that I am freely responsible for whatever true laws there are? I think it is part of the concept of a law
that it controls me, not the other way around. In order to decide whether the likes of (G)
are within my control, then, we need to decide whether probabilistic statements
are always contingent reports or whether they ever express genuine laws.
8. Do Statistical Laws Exist?
Statistical determinism, if it is to threaten libertarianism,
rests on the assumption that statistical generalizations such as (G) hold
whether one wills them to or not. They
are not up to us.
My reason for thinking they are not up to us is that pure
randomness does not exist. At the
quantum level, for instance, radioactivity is not purely random, for otherwise
isotopes would not have distinctive half-lives.
The fact is, uranium-235 has a half-life of hundreds of millions of years while strontium-100's half-life is measured in milliseconds. My inference to the best explanation for this
difference is that some objective feature of reality makes uranium and
strontium decay at different rates.
Likewise in the human sciences, events exhibit enduring and pervasive
regularities. For instance, Emile
Durkheim found that suicide rates – in generation after generation, in country after country – conform to the following law:
(11) The
likelihood that S will commit suicide is a function of S's social integration.
In
short, if quantum and social-science "laws" were merely true
generalizations, then they would serve only as descriptions of observed cases,
and not as explanations or as inductions for and about the future. The fact that statistical laws hold up over
time, and are expected to hold up over time, means that some underlying feature
of reality is presumed to support inductions, some feature that forces
the statistics to hold as they do. I
conclude that the presupposition of statistical determinism is true, that
statistical laws exist. Though I use the
frequency interpretation to find empirical meaning in probability statements,
this is consistent with viewing them as having nomic content.
What if statistical statements were
never objective governing laws but rather descriptions attendant upon free
choices? Supposing this were so, then
the first diner S1 in condition C would not be constrained by the
.60 probability postulated earlier or by any other; S1 would either
overeat or not according to no probability at all, and likewise for S2
and so forth. The net result would not
add up to any enduring and pervasive regularity. Since we know that such regularities do
exist, we know that there must be governing laws that are not up to us.
One last note on the status of (G)
as a law. Because it refers by proper
name ("C") to a particular condition (a certain cafeteria on a
certain day), it is not an especially general law. However, it can be seen as the corollary of
some completely universal law, call it γ; and if γ has nomic force, all of its corollaries do too. (If, as a matter of
physical necessity, mass attracts mass, then the fact that Venus attracts Mars
is a matter of physical necessity too.)
Besides which, my example refers to a particular condition merely as an
expository aid. Instead I could just as
well have invoked a truly universal law along with an elaborate background
scenario using clumsy large numbers, and the conclusion would have been the
same.
To recapitulate, I have merely observed that simple
probabilistic laws like (G), in combination with sufficient background
conditions, always necessitate unique outcomes (sections 1, 2); and that
bells-and-whistle probabilistic laws like (H, G*, H*), in combination with
sufficient background conditions, necessitate unique outcomes in pivotal cases
(sections 3, 4):
(a) (G & background conditions) |= I overeat.
From this and the Consequence Argument's principle α we get that it's not up to me that if said conditions hold then I overeat:
(b) N((G & background conditions) → I overeat).
On pain of obliterating individual responsibility, the number of customers in C, and the choices made other than mine, are not up to me (section 7):
(c) N(background conditions).
Nor are the laws of nature up to me (section 8):
(d) N(G).
Since neither (G) nor background conditions are up to me, surely the combination of (G) and background conditions is not up to me either:
(e) N(G & background conditions).
From (b), (e), and the Consequence Argument's principle β, we get that it's not up to me that I overeat:
(f) N(I overeat).
Hard
determinism results from statistical determinism supplemented by the
Consequence Argument while compatibilism remains an option for those who reject
the Consequence Argument. Libertarianism,
however, is not viable.
9. Additional Problems for Libertarianism
So far I have argued that (a) in some circumstances involving
moral choices, laws of physical and human nature, even when fundamentally
probabilistic, will necessitate a unique outcome, from which it follows that
(b) probabilistic laws cannot underpin libertarian freedom. Leaving now the question of (a), I turn to
some of my initial motivations behind (b), motivations presented more as "intuition
pumps" than as rigorously developed arguments.
The compulsion argument. If N% of a population typically acts a
certain way, and if this percentage jumps to N+x% when the conditions are
altered, then at first appearance we have evidence that the altered conditions make
x% of the people act as they do. The new conditions may be
indeterminate inasmuch as they leave open exactly which individuals act in the
new way, and even exactly how many do, but nonetheless they are compelling for
some minimum number. Since it is compulsion
of any sort that threatens free will according to the incompatibilist, and not
just determination according to strict laws, a libertarian defense of free will
must reject both strict determinism and statistical laws.
The libertarian will insist that statistical laws merely
incline or influence and do not compel, but this is a false dichotomy. If you exert an influence on me in the sense
that you increase my likelihood of doing A then you have compelled a change in
my character: I have transitioned from being the kind of person who is hardly
likely to do A to being the kind of person who is quite likely to do A, the
transition being in your control and not mine.
This point is made in a priceless story by Stanislaw Lem (1976). In it, a mischievous character uses a probability
amplifier to convert the existence of a dangerous dragon, which would otherwise
be infinitesimally improbable, into a significant probability. When the dragon pops into being and goes on a
rampage, Lem observes, the mischief-maker is at criminal fault.
The proportionality argument. If a strict law of nature causes 100% of a
population to perform some base act A then, libertarians hold, 100% of the
population is to be excused for doing A.
So by parity of reasoning, if a statistical law of nature causes 75% of
the population to do A then 75% of the population ought to be excused for doing
so. Which 75% should be excused? Precisely those who perform A (that is, none
of those who do not perform A), for otherwise some of those who perform A will
be excused while others will not, which would be arbitrary and unjust.
The sorites argument. Imagine that a
new drug, amokerine, works well on animal subjects. The developers then test it on a pool of 1000
normal human subjects, who all go on a murderous rampage. At the criminal trial one of the defendants
says, "I can't be held responsible; amokerine causes everyone who
takes it to turn violent, and neither I nor the doctors who prescribed it had
any way of knowing this beforehand."
The defense seems legitimate. But
now suppose that of the 1000 subjects, only 999 turned violent. Although the original defense would be a lie,
it would be fair for a defendant to say, "I can't be held responsible; amokerine causes 999 out of 1000 users to turn violent, and it made me turn violent."
The single unaffected patient would be regarded as lucky, not as
especially virtuous. Since there is no
moral difference between 100% causation and 99.9% causation, there can be none
between 99.9% and 99.8%, and so on until there is no moral difference between
100% causation and .0001%: "I can't be held responsible; amokerine always
causes 1 out of 1,000,000 users to turn violent, and I happen to be that
one!"[7]
This argument may be criticized for
committing the slippery-slope fallacy.
However, not all arguments having the structure of a slippery slope are
specious. The reason that the Paradox of
the Heap is paradoxical is that, though the conclusion is clearly unacceptable
(a single grain of sand is not a heap), the reasoning that leads up to the
conclusion appears to be sound (ten-thousand grains of sand, piled together,
make a heap; and taking one grain of sand from a heap leaves a heap...). This suggests that slippery-slope reasoning
is defeasibly legitimate; until it leads to
conclusions that we have independent reason for rejecting, it must provisionally be accepted.
The case of the Heap is interesting because, though there is
no consensus on exactly how it goes wrong, it is clear to just about
everyone that it does. In
contrast, we do not know whether my moral conclusion is wrong (that if a drug leads to .0001% of its users going berserk then the affected patients are not
morally responsible). Some may be
tempted to think that if the overwhelming majority of amokerine users never
turn violent then those who do must have something wrong with them – perhaps
pre-existing malice that gets magnified or released – and therefore do deserve
blame. To think this, however, is to
adopt a deterministic view: if amokerine use plus other factors in a
person's life determines whether that person turns violent, then amokerine's
effects are not properly stochastic. For
the sake of argument, however, I have been assuming that genuinely
probabilistic laws do exist (in contrast to laws that merely seem probabilistic
because of hidden variables), and to illustrate the logic of such laws I
stipulate that predisposing factors in the effects of amokerine do not
exist. In this case, I maintain, there
is no defeater to my sorites deduction.
The irrelevance of forks. My paper began by saying that probabilistic
laws do not guarantee branching possibilities.
To this I would like to add that branching possibilities do not engender
freedom. If all possible worlds are on a
par, if non-actual worlds exist, and if I choose to perform action A in this
world, then necessarily it follows that my self in some other world chooses to
do other than A. Assuming transworld
identity, I myself am doomed to perform all conceivable actions at all times,
in one world or another; and assuming transworld counterparts, then although I
personally might lead a virtuous life, it would be logically possible for me to
do so only if my virtually indistinguishable counterparts do otherwise. In that case, responsibility
would have to be holistic rather than individual.
For this reason, libertarians cannot be complete realists
about possibility. But nor can they
reject possible worlds in favor of linguistic or cognitive constructs as Carnap
does. For taking possible worlds as mere façons de parler, without real reference, annuls their objective existence,
which is what's at issue in the debate over ontic determinism. (Yes, we can imagine and talk about a
criminal's doing otherwise, but there's no there there.) Thus, libertarians appear to be committed to
some kind of position between modal realism and modal nominalism. Unless they can explain away such a
commitment, they must acknowledge it, they must articulate and defend it, and
moreover they must explain its relevance.
Why is it that I am responsible for performing an action A only if there
semi-exists another realm in which my other self does not perform A?
Conclusion.
The traditional view is that probabilistic laws do not necessitate
outcomes. But of course strict laws
never necessitate outcomes either. Only
when strict laws are supplemented by initial conditions are outcomes
necessitated. By the same token, the fact that laws are "loose" or probabilistic does not rule out the possibility that such laws, combined with appropriate background conditions, stand in a "tight" or deterministic relation to some outcome.[8]
References
Dennett, Daniel (1978) Brainstorms, MIT.
Earman, John (1986) A primer on determinism, Reidel.
Eccles, John (1994) How the self controls its brain.
Ekstrom, Laura (2000) Free will, Boulder CO: Westview Press.
Fetzer, James (1981) Scientific knowledge.
Garson, James (1995) "Chaos and free will", Philosophical Psychology.
Gillies, Donald (2000) Philosophical theories of probability, Routledge.
Glymour, Clark (1971) "Determinism, ignorance, and quantum mechanics", Journal of Philosophy 68.
Gribbin, John (1984) In search of Schrödinger's cat, Bantam.
Hobart, R.E. (1934) "Free will as involving determination and inconceivable without it", Mind 43: 1-27.
Hodgson, David (2002) "Quantum physics, consciousness, and free will", in Kane 2002b.
Honderich, Ted (2002) How free are you? 2/e, Oxford University Press.
Howson, Colin & Peter Urbach (1989) Scientific reasoning, Open Court.
James, William (1884) "The dilemma of determinism", reprinted in The will to believe and other essays.
–– (1891) Principles of psychology, Henry Holt.
Kane, Robert (1986) Free will and values, SUNY.
–– (1996) The significance of free will, Oxford University Press.
––, ed. (2002a) Free will, Blackwell.
––, ed. (2002b) The Oxford handbook of free will, Oxford University Press.
Keynes, John Maynard (1921) A treatise on probability, Macmillan.
Leiber, Justin (1991) An invitation to cognitive science, Blackwell.
Lem, Stanislaw (1976) "The third sally", The cyberiad.
LeShan, L. & H. Margenau (1982) Einstein's space and van Gogh's sky, Macmillan.
Levi, Isaac (1980) The enterprise of knowledge, MIT.
Lewis, David (1981) "A subjectivist's guide to objective chance", Studies in inductive logic and probability (ed. R. Jeffrey).
McCall,
Mele, Alfred (1995) Autonomous agents, Oxford University Press.
Miller, D.W. (1996) "Propensities and indeterminism", in Karl Popper (ed. A. O'Hear), Cambridge University Press.
Nozick, Robert (1981) Philosophical explanations, ch. 5, Harvard University Press, reprinted in O'Connor.
O'Connor, Timothy, ed. (1995) Agents, causes, and events, Oxford University Press.
Popper, Karl (1959) "The propensity interpretation of probability", British Journal for the Philosophy of Science.
–– (1990) A world of propensities, Thoemmes.
Van Inwagen, Peter (1983) An essay on free will, Oxford University Press, relevant excerpt reprinted in Kane 2002a.
Walter, Henrik (2001) Neurophilosophy of free will, MIT.
Weatherford,
[1] A small sampling: jurists
include Hodgson 2002; scientists include Eccles 1994, Gribbin
1984, and LeShan & Margenau
1982; and philosophers include Ekstrom 2000, James
1884 & 1891, Kane 1986 & 1996, Mele 1995, Nozick 1981, van Inwagen 1983,
and Walter 2001.
[2] What I call the Mere
Randomness Argument can be found in
[3] The Basic Argument could equally well be cast in microphysical
terms. Instead
of law (G), consider this law: "for all particles S in condition C, the
probability that S will decay within period t is .5." Supposing that in condition C there are 10¹⁰ particles and that, excluding particle p, 10¹⁰/2 of the particles decay within t, it necessarily follows that p does not decay within t. Insofar
as decisions hinge on a single quantum event or series of quantum events, they
are determined to do so.
[4] Even if we knew the laws of
nature, the precise initial conditions are beyond our ken and possibly
non-existent, thanks to Heisenberg's uncertainty principle; and even if we knew
both laws and initial conditions, they might not be computable or the material
resources for implementing the algorithm might be unavailable (even "in
principle"), as suggested by the chaos of complex systems (Garson 1995)
and by reflexive paradoxes involved in any system's trying to comprehensively
cognize itself (Leiber 1991:53, ____ 1998).
[5] In actuality, I regard the
epistemic/ontic distinction as being more nuanced than indicated here; cp. ____
1998.
[6] This kind of determinism is
sometimes called logical determinism (though such a label better describes the
thesis that events are necessitated by the laws of logic alone, as held by
Spinoza), and it is sometimes said to be based on the argument from future
contingents (though if it is sound then nothing is actually contingent). The term
"semantic determinism" is motivated by the fact that it relies on
semantic principles of the following sort: "If a statement is true now
then the corresponding statement formulated in the future tense was true in the
past."
[7] To repeat, my concern is
ontic rather than epistemic. Whether a defendant S can prove that S is
"the" 1 out of a million whose behavior traces to amokerine is
irrelevant; what matters is that if S is that one then S's behavior would seem
to be excusable.
[8] This paper benefited from
comments by Patrick Maher, John Perry, and especially Jim Garson. In addition I
wish to thank Robert Kane for his encouragement.