An Argument for Altruism

Why Utilitarianism supports Altruism as a guiding principle

Part 1 - The Sufficiency of Altruism

One of the most common sources of unethical behaviour among potential moral agents is self-interest. That is, people do not consider the interests of others as equal to their own. It is, in a way, quite understandable... the value of our own interests is more readily apparent to us than the value of others' interests. So we find that as people make the transition from unethical to ethical behaviour, they tend to consider others more, and themselves less, than previously. I have formerly argued that this can be taken too far - some people assume that any action done for the benefit of the self at a cost to others is wrong, no matter how large the benefit or how small the harm. Technically, and on an individual and specific basis, I continue to oppose that idea. What follows is an argument for the acceptance of altruism as a general rule - a principle to be valued, and followed in the absence of significant reasons otherwise.

Imagine the egoist considering a certain course of action. He will consider the value of the consequences of the action for himself, and himself only. If we say that there are X billion beings with interests in the world, and the man is considering the effects on one being (when he should be considering the effects on [ignoring the unborn] X billion beings), then his consideration has one "X Billionth"* of the scope that it ought to have. It is therefore very likely to be wrong, and the injustice is apparent.

* [which is exactly 0.0000001% if X is one, and lower for larger X]

Now consider the altruist making a decision. He will consider every one of the X billion beings except himself. His consideration will have X billion minus one "X billionths"** of the scope that it should have. It is therefore very likely to be right.

** [which is 99.9999999% if X is one, and higher for larger X]

In other words, "everybody except myself" is a very good approximation, and "myself" is a very poor approximation, to "everybody". If we are in the position of choosing egoism or altruism, altruism is obviously the correct choice. In fact, from this perspective, there is really very little to recommend the normal (theoretically 100% accurate) utilitarian position (equal consideration of all interests) over altruism.
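The coverage figures in the footnotes are simple arithmetic, and can be checked in a few lines of Python (a sketch only - the function names and the choice of X are illustrative, not part of the essay):

```python
# Sketch of the coverage fractions in the argument above.
# x_billion is X, the essay's placeholder for the number of
# billions of beings with interests; any positive X will do.

def egoist_coverage(x_billion):
    """Fraction of all interests the egoist considers: 1 out of X billion."""
    total = x_billion * 1_000_000_000
    return 1 / total

def altruist_coverage(x_billion):
    """Fraction the altruist considers: everybody except himself."""
    total = x_billion * 1_000_000_000
    return (total - 1) / total

# For X = 1 (one billion beings):
print(f"egoist:   {egoist_coverage(1):.10%}")    # 0.0000001000%
print(f"altruist: {altruist_coverage(1):.10%}")  # 99.9999999000%
```

As the footnotes note, larger X only widens the gap: the egoist's fraction shrinks toward zero while the altruist's approaches 100%.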

Part 2 - The problems with self

When we are considering how we will be affected by a certain course of action, and are trading costs to self against benefits to others (and benefits to self against costs to others), we will naturally be biased toward self-interest, because ours is the only position we know and understand. The ease with which we can over-value our own interests in an issue is obvious to anyone who debates with parties defending their own interests in a conflict. We simply cannot be impartial when defending ourselves... pain and pleasure are potent motivators, and for good or ill we only know our own. The pressures they wield are very subtle, and not at all easy to detect or compensate for; I suggest that in many cases we shouldn't try. Given how accurate we can be with altruism, I propose it is often better simply to discard consideration of the self than to attempt to retain it, risk giving ourselves unequal consideration, and thereby unbalance the whole calculation.

If we consider our own interests, then sometimes we will act in an apparently selfish manner. Since people often (of necessity) make judgements on the smallest piece of evidence, we can come out of this badly - whereas if we make sure we always have others' interests as our motivation, others should not fail to be convinced of our ethical stance. The most compelling argument we could possibly give for the virtue of our cause is that we are prepared to sacrifice our own interests for it.

We would do well to base our decisions solely on the expected consequences for others.

Part 3 - Problems with Altruism?

An initial problem has been suggested, though only by those who do not really understand utilitarianism. The problem is one of practicality - supposedly we would die of starvation because we could not see the value to others in feeding ourselves. This is really no problem at all; in fact, it is a positive virtue of the system that we must always justify our lives in terms of benefits to others. If, in our whole life, we would not do enough good to others to warrant the small amount of food it would take to sustain ourselves, then we are exactly the kind of person the world would not miss anyway.

The main problem with altruism is that the effects of our actions on others are often significantly smaller than the effects on ourselves. And though the decisions we make CAN affect billions of others, as the effects spread out the consequences become progressively harder to predict... a point is reached where we have no confidence in our ability to foresee the outcome and value of the chaotic interactions, so we have no option but to ignore these later effects and base our actions solely on that limited chain of events we can deal with. These two reasons combine to mean that the impression given earlier (that altruism is 99.99 etc % accurate) does not actually hold true in some cases - specifically those where the effects are focused on the self, or on the self and a small group of others. These cases are therefore reasonable exceptions to the rule.

Part 4 - Other Problems

The problem with the self - that we cannot accurately and impartially evaluate our own interests in many issues - can occur with others too. For example, certain situations or possibilities can lead to infatuations... the catalyst of which (Hi Dana!) may suddenly appear disproportionately valuable, its interests assuming whatever value is necessary to demand the dedication of one's life's resources to it. If we accept altruism as the solution to the problem with self, then it would seem to require a similar solution to this problem - that we completely discontinue consideration of this being's interests... except in those cases where few others are affected. Of course, as with all rules, its value lies in knowing when to apply it, and when not to. Therein lies a problem of complexity similar to the one the rule was intended to help solve. Ho hum.


1999
utilitarian.org