It is quite possible that I am a sociopath. It is also possible that I am not. I certainly do not feel any real connection to human suffering other than my own, and I find it nearly impossible to put myself in the shoes of others. Nonetheless, that does not make me wrong, or utilitarianism wrong. Actually, I think my mental outlook makes me better suited to impersonal, rational judgments. (To the mods: I'm not offended by Nico's label of me; I think it might indeed be relevant to the discussion.)
For the sake of argument, let's say that I am, indeed, a sociopath. Now, I will qualify this with an admission that I have generally avoided doing anything that will get me tossed into prison. The most I typically indulge in is a bit of speeding now and then.
That said, I must confess that I feel I am being given an exceedingly large and easily hit target when I am asked why we should prefer happiness to misery. It is, I think, relatively self-evident that most, perhaps all, human beings prefer to be happy. I certainly do, although I am not particularly upset if you choose misery. Moreover, since misery typically has life-averse antecedents or consequences, being miserable is probably going to result in you not having any say at all fairly quickly. For example, there are few methods better for ensuring one's own misery than setting oneself on fire, but this also tends to ensure that one will soon not be much of anything. So, if nothing else, a preference for at least some happiness and well-being seems to be required for a voice in the conversation, as it were.
That said, I think that asking whether we want to be happy, or to possess well-being, is to hit philosophical bedrock. I think a lot can be gained from asking whether a person could even logically desire to exist in the worst possible world (if they did, a desire would be satisfied there, and it would therefore not be the worst possible world), and then pointing out that, consequently, moving away from the worst possible world is a good thing. But if one is determined to deny that humans really do, fundamentally, want to be happy rather than miserable, and that this is essentially an axiomatic, existence-preferring state of being, I'm not sure that I can help you. I can merely point out that anyone who truly wishes to experience non-well-being isn't going to be a concern for longer than it takes them to find a means of self-destruction.
And, I must ask, where are you getting your preferences from, sir? You defend your ethics with an appeal to some higher happiness/well-being. Fine, fine, but if you are going to ask why I should prefer happiness, you are not excluded from the question. If misery is such a viable option, why should we not abandon all happiness, eh? Glass houses and stones, I think. But I digress. The point is that, in all decisions we make, we assume some happiness to be gained, some well-being to be achieved - unless you are willing to assert that a system of ethics which makes everyone worse off is still to be followed as a "good" thing.
- - -
Alright. So much for my defense of happiness. Now, what seems to be advocated is either some sort of odd deontology or virtue ethics. I have a few criticisms of both.
1. First, with regard to virtue ethics, I think a fairly cogent criticism of the system is that there is no objective basis for assuming virtues to be virtues. Sure, you might claim that a just, honest, and honorable man is virtuous, but I can simply deny this and claim that ruthlessness, power, and charisma are more important. If you appeal to the effects these various traits have, then you are appealing, essentially, to utilitarian consequentialism, and I win, because you've just used my ethics to underpin yours. If you appeal to the happiness that certain traits bring about, I can simply point out that if such traits do bring about more happiness, then utilitarian consequentialism incorporates them into the "greatest happiness for the greatest number". After all, if I were passing up sources of happiness, I wouldn't be very utilitarian now, would I?
Second, if there exist people like me, who do not even grasp this "higher happiness" you speak of, then it is interesting to note that you subscribe to a theory of morality that actually does proclaim that some are born better than others. Not just that some are made more capable, but that some are, in fact, made to be the moral superiors of others. But if this is true, if morality is the result of in-born traits, then it is not exactly a choice or a matter of personal development, is it? It's merely different programming. In which case, your claim to a morality I cannot grasp is really a claim that your preferences are superior to mine. Do you say that your preferences are better because of how they make you feel, or how they work out for others? Aha! Utilitarian consequentialism again!
2. And then, to deontology. Well, I think the most cogent argument against deontology is, essentially, the old childish retort of "says who?". More precisely, you can claim that I have a moral duty to do X, but you can't make me agree. I may believe instead in moral duty Y, or in no moral duties at all (as in my personal case). Can you show me to be wrong? Without appealing to the consequences of such behavior (i.e., happiness)?
Another criticism I have involves the trolley problem, which runs as follows:
1. Suppose there is a runaway trolley heading down the track, with five people standing on the track. You have a lever near you which you can use to switch the trolley onto an empty siding. Should you pull it?
This seems reasonably obvious. Pull the lever, get called a hero, whatever. You save five lives, although I would generally be of the opinion that whoever mindlessly stands on a train track may deserve just what he gets. For the purposes of this exercise, however, I'll assume that they had a reason to be there. On to problem 2...
2. As in the above, except that, on the siding, there is another man working on the tracks. If you switch the trolley, he will be killed instead of the five. Do you still pull the lever?
Most people do. After all, you aren't directly killing the man, he's just there. It's unfortunate, but you save five. This might cause problems, however, if one has a moral duty to protect others. Either way, in this case, someone is going to die. In other words, you will be evil no matter what you do, unless your deontology has some clever loopholes for consequences...
3. Instead of a lever and siding, you are standing next to a fat man, wearing some fairly tough clothing. There are still five people further down the track. If you push the fat man in front of the trolley, he will stop the trolley and save the five. He is also the only way to stop the trolley in time. Do you push the fat man?
One can substitute the above problem with another, even more extreme one - supposing that torturing one man to death would save a million? My initial response, in these cases, is simply to assume that, yes, pushing the fat man or torturing the innocent one are indeed justified. The death of five is, weighed in the balance, worse than the death of one, everything else being equal. If one has a moral duty not to kill or torture, however, one is stuck watching five people get run down by a trolley...unless one appeals to the, heh, consequences.
And even if one asserts that one should not push the fat man or torture the innocent man, it must be asked: of what use are morals which leave us dead? Why not simply ignore moral duties whenever following them proves unprofitable? Yes, we might be somewhat more miserable, but more of us will be alive, too, as opposed to dead, in which case one doesn't get to perform any moral duties anyway.
In other words, I think that deontological ethics rest on no firm metaphysical grounds: first, there seems to be no reason to suppose that metaphysical moral imperatives or duties exist, unless they can be evidenced, which so far has not been done. And second, strict adherence to these moral duties would result in worse outcomes for many than would otherwise occur. To put it another way: even if deontology is true, why not simply be "evil" and save more lives in cases like the trolley problem?