
When one calls an action subjectively irrational, one is committed to the
claim that something has gone wrong in the practical mental functioning
of the agent. But to call an action irrational in this sense is consistent with
the claim that there are facts, unknown to the agent, that support the
claim that, on the whole, she ought to do the action. Reasons are directly
relevant to this latter sort of claim, and are only indirectly relevant to claims
about subjective rationality. This is why objective rationality is sometimes
referred to as 'what we have most reason to do.'16 The kind of irrationality
involved in Joanna's sleeping in, when she believes that she is rationally
required to get up, is subjective irrationality. The plausibility of the claim that reasons
are directly relevant to what we ought to do, whether or not we know
of those reasons, and the fact that FA2 wrongly classifies the belief that
one ought to do something as a reason to do it if we take the relevant sense of rationality
to be subjective, both argue that the sense of rationality that figures in
FA1 and FA2 should be taken to be objective rationality. Since Joanna's
belief that she is rationally required to get up does not have any impact
on the objective rationality of her sleeping in (or getting up), it is not a
reason.
14 Indeed, this seems to be the only type of action Thomas Scanlon is willing to call 'irrational.' See Scanlon (1998), pp. 25–27.
15 See Brandt (1979), pp. 72–73; Gibbard (1990), pp. 18–19; Raz (1999a), p. 22. See also Cullity and Gaut (1997), p. 2.
16 See Parfit (1997) and Scanlon (1998), p. 30. This characterization correctly reflects the primary relevance of reasons to objective rationality. But the word 'most' smuggles in the assumption that reasons play only one normative role, and has the unfortunate consequence that the relevant reasons typically pick out one action as uniquely favored. Joseph Raz argues persuasively against this view in "Explaining Normativity: Reason and the Will" in Raz (1999b), pp. 100–2.
A second and related reason why we should not take Joanna's belief as a
reason to get up is that it appears to be unnaturally strong, if it is taken as a
reason. This is because, no matter what other reasons in favor of an action there
might be, it seems irrational to do the action if one believes that the action
is irrational. It is as if the putative reason simply couldn™t be outweighed
by other reasons, no matter how strong they were. But that is very odd.
It suggests that one™s belief that a certain action is irrational is a stronger
reason against the action than is the fact that it will save twenty people
from dying horrible fiery deaths. The putative reason here does not seem
to lend itself to any sort of weighing or balancing. This second reason why
we should not regard Joanna's belief as a reason is related to the first in the
following way. The reason why her belief makes it irrational to sleep in,
regardless of the 'opposing' reasons for sleeping in, is that there are again
two senses of rationality in play. The reasons in favor of sleeping in are
relevant to the objective rationality of Joanna™s action, and they support the
claim that Joanna ought to sleep in. Joanna's (false) belief that it would be
irrational to sleep in does not make it irrational to sleep in by outweighing
these reasons. Rather, it makes it irrational in a different sense altogether:
it makes it subjectively irrational.17
17 See Stampe (1987), p. 344 for an argument that desires provide reasons that is based entirely on this error. Stampe even notes the 'extraordinary authority' of desire as a reason in this connection, but does not see this as a sign that something has gone wrong.
So far, we have seen a consideration that FA2 would wrongly classify as
a reason if the wrong sense of 'irrational' is being used. That is, Joanna's
belief would be classified as a reason by FA2 because it turns an
otherwise subjectively rationally permissible action into a subjectively irrational
one. Are there also considerations that FA1 would wrongly classify if we
understand 'rational' in the subjective sense? That is, are there
considerations that can change an otherwise subjectively irrational action into
a subjectively rationally permissible one, but that we would not want to
classify as reasons? There are. Consider ignorance, as it functions in the
following example. Suppose we see our friend Bob about to remove a
wasps' nest from the corner of his garage with his bare hands. We ask why
he is doing that, and he answers that it is ugly, and he wants the garage to
look nicer. If Bob knows what we all generally know about wasps, this is a
subjectively irrational action: it shows that something has gone wrong with
Bob's mental functioning. But if we add to our description of Bob's action
the fact that he is completely, and (somehow) excusably, ignorant about
wasps and wasps' nests, then it may be subjectively rationally permissible
(but extremely unfortunate) for him. Thus, ignorance turns a subjectively
irrational action into a subjectively rationally permissible one. If FA1 is
understood in terms of subjective rational status, ignorance will turn out
to be a reason. But ignorance is not a reason in favor of an action.


reasons and motives
There is another reason why we would not want to call ignorance a reason
for action. It does not seem that anyone could ever act for this reason. Here
we find the grain of truth in Bernard Williams's explanatory requirement
on practical reasons.18 Williams holds that if a consideration is a normative
reason, then it must be that people sometimes act for that reason.
Unfortunately, he takes this to mean that it must be psychologically possible for any
agent who has a reason to act on it, merely by going through some broadly
instrumental rational processes. Because of this, Williams is committed to
the view that if an agent (perhaps because of a severe chemical depression)
simply has no desires that would motivate him to take some medicine that
would cure him, and if this
is not the product of false belief; and he could not reach any such motive from
motives he has by the kind of deliberative processes we have discussed; then I think
we do have to say that . . . he indeed has no reason to pursue these things.19

Many people have thought that this is too strong a conclusion to draw from
the fact that normative reasons are the kinds of things that are often cited in
explanations of action.20 But it does seem true that unless a consideration
is the kind of thing that people sometimes act for, then it is not a reason. People
do act in order to get pleasure, to avoid pain, to help other people get or
avoid these things, etc., and these things provide normative reasons. But
ignorance simply cannot play this role. It is true that ignorance can be
cited in explanations, but people cannot act for ignorance in the way they
can act for reasons. When people act for reasons, then those reasons are
their motives. Current ignorance is in the wrong category to be a motive
for anyone.21
18 See Williams (1981). For a more moderate view, compatible with the position offered in this chapter, see Raz (1999b), pp. 100–2.
19 Williams (1981), p. 105. See also Johnson (1999). For my own interpretation of the explanatory requirement, see J. Gert (2002b).
20 See, e.g., Heath (1997), p. 454; Parfit (1997), pp. 111–14.
21 A person could act in order to become or remain ignorant – perhaps of some painful fact. But in such a case it is more plausible to say that the real reason for her action was to avoid pain.


weighing reasons
A final formal condition on reasons is more complicated. It takes its cue
from a remark of John Broome's, that "weighing is just what reasons are
made for."22 Briefly, the condition is that the systematic contribution that
a reason makes to the rational status of action must lend itself to
representation in terms of strength values.23 In order to explain what this amounts
to, it will be useful to examine another way in which a consideration might
contribute to the normative status of an action, without being a reason for
it in this sense. Because the rationality that figures in FA1 and FA2 is very
plausibly exclusively a matter of the reasons relevant to an action, we will
have to look elsewhere for clear examples. Morality will provide fertile
ground.
22 Broome (1999), p. 412.
23 Joseph Raz (1999a), p. 43 makes this more explicit than Broome, writing that "all reasons are comparable with regard to strength . . . and that this is their only feature relevant to the outcome of practical inferences." Unfortunately, Raz assumes that reasons play only one normative role, and therefore have only one strength value. As a result of this assumption, and in order to capture the full range of rationally permissible action, he is forced to invent the notion of 'exclusionary permissions,' which have the effect of allowing, but not requiring, one to omit certain reasons from one's calculations of what to do. This device approximates the effect of some reasons having more strength in the justificatory role than in the requiring role. Raz's suggestion is discussed at greater length in chapter 5.
First consider a simple act-utilitarian account of morality according to
which the sole good is pleasure, and the sole evil pain. On such a view,
the way to determine the moral status of an action is the following. One
surveys all the possibilities, and isolates the relevant consequences, which
consist only in increases and decreases in pleasure and pain. One then
calculates some utility score for each option. The morally correct choice is
the option with the highest score. In making the relevant calculations, each
increase or decrease in pain or pleasure makes a constant contribution in the
calculation of the total utility score for a possible choice. That is, if someone
will suffer a bitter disappointment as a result of the agent's choosing option
A, this counts against A in exactly the same way that it would count against
B, if that same disappointment were to be a result of choosing B. Thus, we
can say that the pain of the disappointment makes a constant contribution
to the moral status of any given option. Because of this it makes sense to say
that the disappointment of the person is a moral reason against an option
(even if it is outweighed by other moral reasons). And thus it also makes
sense to call such a moral view a reasons-based morality, and to say that
moral reasons are provided by increases and decreases of pleasure and pain
for the people affected by a possible action. For what one does, in order to
decide which option to choose, is to list the reasons for and against each
possible option, and see which option is favored by the balance of these
reasons.
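The procedure just described can be made vivid with a small sketch. The options, reasons, and numeric strengths below are invented purely for illustration (nothing in the act-utilitarian view fixes particular numbers); the point is only that each consideration carries one fixed, signed strength, and an option's score is the sum of the strengths of the considerations that bear on it:

    # A toy model of act-utilitarian weighing: every consideration has one
    # constant, signed strength, and an option's score is the sum of the
    # strengths of the considerations bearing on it. All numbers are
    # illustrative assumptions, not drawn from the text.

    reasons = {
        "bitter disappointment for one person": -10,
        "modest pleasure for five people": 15,
        "inconvenience for the agent": -3,
    }

    options = {
        "A": ["bitter disappointment for one person", "modest pleasure for five people"],
        "B": ["bitter disappointment for one person", "inconvenience for the agent"],
    }

    def utility_score(option_name):
        """Sum the constant strengths of the reasons bearing on one option."""
        return sum(reasons[r] for r in options[option_name])

    scores = {name: utility_score(name) for name in options}
    best_option = max(scores, key=scores.get)  # the morally correct choice on this view
    print(scores, best_option)

Note that the disappointment subtracts exactly ten from whichever option it attaches to; that constancy is what licenses calling it a reason with a fixed weight.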
Now consider a certain sort of rule-utilitarian account of morality, again
according to which pleasure is the sole good, and pain the sole evil. This
account holds that morality can be understood as given by a group of
rules and severities of punishment that have the following joint feature: if
people knew that violations of these rules were liable to the specified levels
of punishment, then the consequences would be better (in terms of total
net pleasure and pain) than with any other set of rules and punishments.
What kinds of things might count as moral reasons on such a view? There
are two obvious options. The first is increases and decreases in pleasure
and pain: after all, the view is still a utilitarian one. The second option is


74
A functional role analysis of reasons

the fact that an action breaks one of the rules of the system: for example,
that it is an act of deception.
Are either, or both, of these kinds of considerations moral reasons,
within the framework of the rule-utilitarian morality we are considering?
It is very tempting to answer 'Yes, of course.' Nor is that answer wrong,
in a sense. One reason why it is tempting to answer 'Yes' is that both
the fact that an action will hurt someone and the fact that it will deceive
someone are considerations that fulfill the moral analogue of FA2. That
is, they are considerations that can change a morally permissible action into
an immoral one. Moreover, these considerations also fulfill the motivational
condition on reasons. Namely, moral agents are commonly motivated to
avoid hurting or deceiving people: they often act for these reasons.
But is it possible to determine the moral status of an action by listing
these reasons for and against the available options, taking their strength
values into account, and determining which actions come out with acceptable
comparative scores? Certainly one cannot do this if one takes increases in
pleasure and pain as the sole reasons. Indeed, this is part of the advantage
that rule-utilitarian views have over act-utilitarian views: there are actions
that actually have, overall, the best consequences, but that we
nevertheless regard as highly immoral. To take one standard example, it may be
that the surreptitious killing of a selfish patient who came to the hospital
for a wart removal could save the lives of five other extremely
benevolent people. And it is well known that the same sorts of problems arise
even if one takes the breaking of a moral rule as a reason with constant
weight. For when one does this, it becomes morally permissible to deceive
one person in order to prevent five other people engaging in similar acts
of deception, or to kill an innocent person to prevent five other such
murders.
But couldn't we regard such a view as reason-based in any case, by
regarding moral reasons as having variable or context-dependent weights?
Although it is tempting to say so, the answer is 'No.' The problem is
that one could not know these variable weights until one had already
completed some other procedure that yielded the wholesale moral status
of the action.24 These variable weights could then be read back into the
24 See Philips (1987), pp. 367“75. Philips argues in a similar way against what he calls ˜the
constancy assumption™ in moral theory: the assumption that the weight of a moral reason
is constant and does not vary from context to context. His argument is similar to the
one offered here, in that he asserts the conceptual priority of moral principles over that of



75
Brute Rationality

consequences, but they would then obviously be of little additional use in
determining the moral status of the action. On the rule-utilitarian moral
theory being considered here, the determination of the moral status of an
action is not a matter of weighing moral reasons for and against it. This
should come as no surprise. Rule-utilitarianism is a rule-based, and not a
reasons-based account of the moral status of particular actions. And this
is true despite the fact that in determining which rules are best, we may
well assume a constant weight for the reasons for and against them. That
is, arguments about the rules may be reason-based, but arguments about
the status of particular actions will be in terms of the rules themselves, and
not the reasons that support the rules.
24 See Philips (1987), pp. 367–75. Philips argues in a similar way against what he calls 'the constancy assumption' in moral theory: the assumption that the weight of a moral reason is constant and does not vary from context to context. His argument is similar to the one offered here, in that he asserts the conceptual priority of moral principles over that of moral reasons. But where Philips goes slightly wrong is in thinking that one can calculate the variations in the weights in moral reasons, based on teleological considerations to
Now we can state the final formal condition on a consideration's being
a reason: it must be possible to characterize the consideration's normative
significance in a way that does not rely upon a prior determination of the
wholesale status of the particular action to which it is relevant. For reasons are
supposed to be useful in determining that very status. The simplest way in
which a reason can do this is by having one constant value (its strength),
which counts either in favor of or against any action to which it is relevant.
The view offered in this book is that two values suffice: the strength of the
reason in its justifying role, and the strength of the reason in its requiring
role. But the point here is only that there is a philosophically important
sense of 'reason' according to which considerations are reasons only if the
way they contribute to the status of action is sufficiently systematic that one
can use the reasons to determine that status. On the rule-utilitarian view
described above, the fact that an action is deceptive is not a moral reason
in this sense, though it is of course of moral signi¬cance. It is not a reason,
in this sense, because (again, on the rule-utilitarian view we are assuming)
there is no systematic way in which acts of deception, even of the same
severity and about the same subject matter, contribute to the moral status
of actions. Rather, in order to see that the deception is relatively important
(or unimportant) one first has to determine the wholesale moral status of
the action.
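By way of contrast, here is an equally schematic sketch of a weighing procedure that uses two strength values per reason, in the spirit of the suggestion above. The specific permissibility rule and the numbers are only illustrative assumptions made for this sketch, not the account defended in this book: the rule says that an action is rationally permissible when the total justifying strength of the reasons in its favor is at least as great as the total requiring strength of the reasons against it.

    # Illustrative only: each reason carries a justifying strength and a
    # requiring strength, and permissibility is settled by comparing totals.
    # The rule and the numbers are assumptions made for this sketch.

    from dataclasses import dataclass

    @dataclass
    class Reason:
        description: str
        justifying: float  # strength when cited in favor of an action
        requiring: float   # strength when cited against an action

    def permissible(reasons_for, reasons_against):
        """On this toy rule, an action is permissible when the justifying
        strength of the reasons for it at least matches the requiring
        strength of the reasons against it."""
        return (sum(r.justifying for r in reasons_for)
                >= sum(r.requiring for r in reasons_against))

    saves_money = Reason("saves the agent some money", justifying=5.0, requiring=1.0)
    small_harm = Reason("imposes a small cost on a stranger", justifying=2.0, requiring=4.0)

    print(permissible([saves_money], [small_harm]))  # True: 5.0 >= 4.0
    print(permissible([], [small_harm]))             # False: 0.0 < 4.0

The important feature, for the formal condition just stated, is that the two values attach to the reason itself and can be consulted before any wholesale verdict on the action has been reached.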
There is another reason why it is tempting to count the fact that an
action will deceive someone as a moral reason against the action. Or, if
one likes, there is a sense in which the fact that an action will deceive

