The convergence and divergence of rationality and morality

A presentation for the amoralist

Reasoning

Imagine this scenario.

You wake up one morning and wonder what to do. You consider going to work, as usual. But you remember that some special event is occurring today - a sporting or social event, or some such - which you suspect you might prefer. You might choose to bunk off work and go to the event instead.

The key feature of the scenario is this: you are considering alternative futures, with a view to choosing between them. What you choose matters in some way, because what you choose to do will affect what you actually go on to do. Now imagine this:

The scenario is as before, but you are a surgeon. If you do not go to work today, but instead bunk off, you think it is highly likely that someone will die. Now you might decide to go to work and save the life, or you might still choose to go to the event.

Again you have a choice to make, and it is worth noting that - if you make a reasoned choice (i.e. you do not decide at random) - you will make it because of some features of one imagined future compared with the other. You will consider the expected differences: how you would feel in each case, for example, and perhaps how the relatives of the person who may die will feel - and any number of other differences. Some of these differences you may regard as irrelevant or insignificant, while others will seem to suggest that one of the outcomes is better or worse than the other. Which is to say that one of the outcomes will seem preferable to the other, in virtue of some feature or features of it.

This kind of decision-making process must be familiar to any rational being. Yet this process is, I suggest, the core of ethical consideration. The question of "the source of intrinsic value", which may strike some people as very strange and obscure, is nothing more than the question of which particular features count as making one outcome better than another. So the sceptic who claims to know of no such thing as "intrinsic value" (moral or otherwise) has not, I submit, understood the term. Intrinsic value is, ultimately, nothing more than a reason for choosing one outcome over another - for doing one thing rather than something else. And any rational being - anyone who claims to act some way for a reason - must admit that such things exist.

And this, I submit, is a reasonably complete account of ethics. I have given no account of praise or blame; of punishment or reward; of duty or obligation; nor even of right and wrong, good and bad, and ought and ought not. But I submit that no account of these can be given which is both sensible and yet cannot be reduced to the above: a matter of reasons for choosing one thing over another.

Three exclusive positions

Position 1 - The amoralist

I have not shown that this value - which any rational being is obliged to admit the existence of - must be moral value, in the normal sense of being concerned with others as well as the self. And this is where rationality and morality may, after all, diverge: a man may suggest that the only things that give him reasons for acting are considerations of his own interests, that the interests of others provide no reason for him to do anything whatever. So while there are reasons for acting, they are only self-centred reasons.

Derek Parfit, in his amazing book Reasons and Persons, has shown that our common-sense theory of personal identity is seriously flawed - so seriously flawed, in fact, that our selfishly inclined opponent must fall back on Present Aim Theory, and not the more familiar varieties of rational egoism at all. Yet it remains an option: it might be possible for someone to be rational without being moral.

Position 2 - The moralist

"By a reason for any act, is conveyed the idea of its supposed addition, actual or probable, to the greatest happiness." - J.B.

We can deny the selfish interpretation of rationality, and adopt an "agent-neutral" form, or "beneficiary-neutral" form. That is, if something gives a reason to do something, it gives anyone a reason to do it - the "self" gets no special consideration. And this will result in the acceptance of moral reasons for acting, as involved in utilitarianism and other standard consequentialist theories.

This position yields a different conclusion about the relationship between self-interest and morality than might otherwise be drawn: if happiness is generally valuable, and it is good to promote it, then prima facie we should try to make ourselves happy too. So there are reasons - essentially moral reasons, in this sense of impartiality (reasons we would not hesitate to call moral were it not for custom) - for us to act self-interestedly. Thus it is possible to ground self-interest in morality as easily as, or perhaps more easily than, morality can be grounded in self-interest, as some have sought to do (see for instance Singer in the final chapter of Practical Ethics, Hare in Moral Thinking, and all those who feel obliged to suggest that "virtue is its own reward").

While this position may seem the natural choice to many, it does not obviously disprove the validity of the other positions. The evidence is that - personal bias aside - happiness is happiness, which a moralist may hope suggests that a beneficiary-neutral form is in order: it cannot really matter whose happiness it is, if it is essentially the same thing in any case. But the amoralist need not deny this: just as his own happiness gives him a reason for acting, he might say, other people's happiness gives them reasons for acting - and this does not imply that one person's happiness gives another a reason to act. So, in this, I reluctantly follow (the reluctant himself) Sidgwick in allowing that there is no obvious reason, with a basis outside of one's intuitions, for choosing morality over self-centredness. But there is a sense in which to ask for a reason to accept a theory of rationality is to ask too much.

Position 3 - The ambivalent

Someone might, it is clear, combine the preceding positions into a single position: he might say that there are moral reasons for acting, but that there are also other, non-moral reasons for acting. He would, in value terms, be assuming the existence of both moral and non-moral types. But since this would involve - in any given complicated decision - weighing two different types of reason against each other, it is not clear that the position would be workable, for it is not obvious that moral and non-moral value would be commensurable in a way that made their conflicts capable of resolution. Yet if some intelligible account of their commensurability were provided, then this too would seem to be a defensible position - and one which, beyond parsimony, I can think of no argument against.

utilitarian.org