Criticize my morality
2011-12-25 (Sunday)
On this blog I’d like to not only consider and criticize other people’s ideas, but put forward a few of my own for the same treatment. Here goes on the subject of morality.
I take it as a given that we want our moral judgments to form some kind of consistent pattern. Can you imagine yourself respecting the moral judgments of someone who doesn’t care if their judgments are consistent with each other? I can’t. If we didn’t demand any kind of consistency in morals, there could be no conversation about moral judgments.
So what form should this consistency take? This is the only candidate I know of: all moral judgments should follow from a small set of fundamental principles. Small, because the more numerous the principles, the closer we come to an ad-hoc set of judgments, which is what we were trying to avoid in the first place by aiming for consistency.
I’ve gotten my set of fundamentals down to one. I’ll talk about it towards the end of this post. Maybe you can point out some problems.
Fundamentals are unjustified by definition. If a principle follows from something else, it’s not fundamental. So how do we figure out what our fundamentals are, if we can’t reason our way to them? This fits nicely with another general constraint that I think we’re all on board with.
We all have some moral intuitions, i.e. attitudes or judgments that we don’t consciously reason our way to. Causally speaking these are probably results of how the history of our species and our particular cultures have wired our brains. But forget about causes. The point is that we don’t reason to them. We do sometimes come up with justifications, but those are usually after the fact.
Some intuitions are fundamental and others aren’t. The difference is that when justifications are invalidated, we hold on to our fundamental intuitions.
For example, what’s your moral judgment about adults having sex with children? I’m guessing it goes something like “WRONG WRONG WRONG.” Can you justify that judgment in terms of something else? If so, can you say that if that justification were removed, you’d no longer have the same moral judgment?
How do we fit these moral intuitions into a consistent system? I think we can agree that if a moral system forces us to abandon all our existing moral intuitions, it’s a failure. But it might be that some of our moral intuitions are incompatible with others. If so, something has to go. We already signed up to be consistent. So here’s a first draft of the general constraint I mentioned above: Our moral system should preserve as many of our moral intuitions as it can without allowing any of them to conflict with each other.
Moral intuitions take the form of moral judgments of varying levels of specificity. We don’t value them all equally. Moreover, some of them are fundamental and others aren’t. So if we have to throw out some, it’s not going to be a coin toss.
Now here’s the part that takes some careful thinking. I’ve found for myself that NONE of my moral intuitions is really fundamental except for one. That is, none of my intuitive judgments of right or wrong, except for one, is completely invulnerable to any conceivable change in circumstantial facts. I have the same “WRONG WRONG WRONG” intuition about adults having sex with children that you have, but I can find a justification for it, and if I imagine a situation where that justification is removed, my intuition loses its moral force. That justification is the suffering experienced by the child. I’m including all of it, present and future. If there were no suffering involved and never would be, I wouldn’t consider such an act wrong. In reality kids who are molested do suffer, and that’s not going to change. But the point is that my moral judgment does depend on that state of affairs. Yours may not. Try out some of your strongest moral intuitions and see if you can conceive of any state of affairs (even if you’re sure it’ll never actually happen) where they would stop working.
I could give you more examples, but that’s the result I’ve come to. All my moral intuitions boil down to suffering and can be altered by different conceivable facts about suffering.
So far my only fundamental moral principle is that I object to suffering. Without further qualification, this leaves so much ambiguity as to be nearly useless for judging any real situation. Every possible situation involves some suffering, somewhere. I need a way to decide which of any two possible situations is worse even if both of them involve it.
Easy answer: the one with more suffering is worse. But there are at least a couple of different things “more suffering” could mean.
1) More individual experiences of suffering. That means more individuals having those experiences. (By “individual” I mean any entity capable of experiencing suffering, whether human or not.)
2) A greater quantity of suffering. We all agree that suffering is quantifiable at least in vague terms. Losing a child generally causes a person more suffering than hitting her head on a door frame. Losing your job will probably cause you more suffering than having to put up with a mildly annoying person every day. We can make statements like this that compare the quantity of suffering in one situation vs. another, and that’s meaningful to us.
#2 fits much better with my intuitions. Given a choice between a hundred people suffering a little or one person suffering a lot, I’ll take the hundred people. The badness or goodness of each individual experience is what matters to me.
I don’t mean #2 in a utilitarian sense. Utilitarians judge the goodness or badness of situations by the total of suffering of all individuals in the situation – in other words they add multiple individual experiences together into a sum. I see no value in such a sum. Experience is inherently an individual phenomenon. No conscious entity experiences any such thing as the sum of the suffering of a group of individuals, so why should we care about it? And if some entity did experience such a thing, then why should we care about it as a sum, instead of as simply the individual experience of that entity?
This leads me to a single moral metric for any given situation: the amount of suffering experienced by the individual who is suffering the most in that situation. For short I’ll call it the Worst Individual Suffering. The situation with the lowest WIS is morally the best situation.
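To make the contrast with the utilitarian sum concrete, here is a minimal sketch of the two metrics as functions over a list of per-individual suffering levels. The function names and the numbers are my own illustration, not anything from the post; treat the suffering values as arbitrary stand-ins, since the post itself notes that suffering can only be quantified vaguely.

```python
def utilitarian_total(sufferings):
    """Total suffering summed across all individuals (the view rejected above)."""
    return sum(sufferings)

def worst_individual_suffering(sufferings):
    """WIS: the suffering of the single worst-off individual."""
    return max(sufferings)

# The earlier example: a hundred people suffering a little,
# vs. one person suffering a lot. Numbers are arbitrary.
hundred_a_little = [1] * 100
one_a_lot = [50]

# The utilitarian sum judges the single heavy sufferer less bad (50 < 100)...
assert utilitarian_total(one_a_lot) < utilitarian_total(hundred_a_little)

# ...while WIS judges the hundred light sufferers less bad (1 < 50),
# matching the intuition stated in the post.
assert worst_individual_suffering(hundred_a_little) < worst_individual_suffering(one_a_lot)
```

On these toy numbers the two metrics rank the situations in opposite orders, which is exactly the disagreement described above: the sum aggregates experiences across individuals, while WIS only ever looks at one individual's experience.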
It’s true that suffering is impossible to quantify precisely and often difficult to quantify even vaguely. With the metric I’m using, that difficulty introduces a commensurate difficulty in assessing the moral value of a situation. That’s a consequence I’m comfortable with.
The meaning of moral vocabulary
This is a whole separate problem, and it’s a big one. What do words like “right” and “wrong” even mean? I’ll put forward some opinions about this in an upcoming post.