Criticize my morality

2011-12-25 (Sunday) § 5 Comments

On this blog I’d like not only to consider and criticize other people’s ideas, but also to put forward a few of my own for the same treatment.  Here goes, on the subject of morality.


I take it as a given that we want our moral judgments to form some kind of consistent pattern.  Can you imagine yourself respecting the moral judgments of someone who doesn’t care if their judgments are consistent with each other?  I can’t.  If we didn’t demand any kind of consistency in morals, there could be no conversation about moral judgments.

So what form should this consistency take?  This is the only candidate I know of:  All moral judgments should follow from a small set of fundamental principles.  Small, because the more numerous the principles, the closer we come to an ad hoc set of judgments, and that’s what we were trying to avoid in the first place by aiming for consistency.

I’ve gotten my set of fundamentals down to one.  I’ll talk about it towards the end of this post.  Maybe you can point out some problems.

Fundamental principles

Fundamentals are unjustified by definition.  If a principle follows from something else, it’s not fundamental.  So how do we figure out what our fundamentals are, if we can’t reason our way to them?  This fits nicely with another general constraint that I think we’re all on board with.

Moral intuitions

We all have some moral intuitions, i.e. attitudes or judgments that we don’t consciously reason our way to.  Causally speaking these are probably results of how the history of our species and our particular cultures have wired our brains.  But forget about causes.  The point is that we don’t reason to them.  We do sometimes come up with justifications, but those are usually after the fact.

Some intuitions are fundamental and others aren’t.  The difference is that when justifications are invalidated, we hold on to our fundamental intuitions.

For example, what’s your moral judgment about adults having sex with children?  I’m guessing it goes something like “WRONG WRONG WRONG.”  Can you justify that judgment in terms of something else?  If so, can you say that if that justification were removed, you’d no longer have the same moral judgment?

How do we fit these moral intuitions into a consistent system?  I think we can agree that if a moral system forces us to abandon all our existing moral intuitions, it’s a failure.  But it might be that some of our moral intuitions are incompatible with others.  If so, something has to go.  We already signed up to be consistent.  So here’s a first draft of the general constraint I mentioned above:  Our moral system should preserve as many of our moral intuitions as it can without allowing any of them to conflict with each other.

Moral intuitions take the form of moral judgments of varying levels of specificity.  We don’t value them all equally.  Moreover, some of them are fundamental and others aren’t.  So if we have to throw out some, it’s not going to be a coin toss.

Now here’s the part that takes some careful thinking.  I’ve found for myself that NONE of my moral intuitions is really fundamental except for one.  That is, none of my intuitive judgments of right or wrong, except for one, is completely invulnerable to any conceivable change in circumstantial facts.  I have the same “WRONG WRONG WRONG” intuition about adults having sex with children that you have, but I can find a justification for it, and if I imagine a situation where that justification is removed, my intuition loses its moral force.

That justification is the suffering experienced by the child.  I’m including all of it, present and future.  If there were no suffering involved and never would be, I wouldn’t consider such an act wrong.  In reality kids who are molested do suffer, and that’s not going to change.  But the point is that my moral judgment does depend on that state of affairs.  Yours may not.  Try out some of your strongest moral intuitions and see if you can conceive of any state of affairs (even if you’re sure it’ll never actually happen) where they would stop working.

I could give you more examples, but that’s the result I’ve come to.  All my moral intuitions boil down to suffering and can be altered by different conceivable facts about suffering.

Which suffering?

So far my only fundamental moral principle is that I object to suffering.  Without further qualification, this leaves so much ambiguity as to be nearly useless for judging any real situation.  Every possible situation involves some suffering, somewhere.  I need a way to decide which of any two possible situations is worse even if both of them involve it.

Easy answer:  the one with more suffering is worse.  But there are at least a couple of different things “more suffering” could mean.

1) More individual experiences of suffering.  That means more individuals having those experiences.  (By “individual” I mean any entity capable of experiencing suffering, whether human or not.)

2) A greater quantity of suffering within an individual experience.  We all agree that suffering is quantifiable at least in vague terms.  Losing a child generally causes a person more suffering than hitting her head on a door frame.  Losing your job will probably cause you more suffering than having to put up with a mildly annoying person every day.  We can make statements like this that compare the quantity of suffering in one situation vs. another, and that’s meaningful to us.

#2 fits much better with my intuitions.  Given a choice between a hundred people suffering a little or one person suffering a lot, I’ll take the hundred people.  The badness or goodness of each individual experience is what matters to me.

I don’t mean #2 in a utilitarian sense.  Utilitarians judge the goodness or badness of situations by the total suffering of all individuals in the situation – in other words, they add multiple individual experiences together into a sum.  I see no value in such a sum.  Experience is inherently an individual phenomenon.  No conscious entity experiences any such thing as the sum of the suffering of a group of individuals, so why should we care about it?  And if some entity did experience such a thing, then why should we care about it as a sum, instead of as simply the individual experience of that entity?

A metric

This leads me to a single moral metric for any given situation:  the amount of suffering experienced by the individual who is suffering the most in that situation.  For short I’ll call it the Worst Individual Suffering.  The situation with the lowest WIS is morally the best situation.
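As a minimal sketch of how this metric compares situations (the situations and the 0–10 suffering numbers are hypothetical, just for illustration):

```python
# Each situation is a list of per-individual suffering levels
# (hypothetical numbers on an arbitrary 0-10 scale).
situation_a = [7, 2, 2]        # one person suffering a lot
situation_b = [3, 3, 3, 3, 3]  # more people, each suffering less

def wis(situation):
    """Worst Individual Suffering: the highest level in the situation."""
    return max(situation)

def better(a, b):
    """The morally better situation is the one with the lower WIS."""
    return a if wis(a) <= wis(b) else b

# Under the WIS metric, situation_b is better: its worst-off individual
# suffers less (3) than situation_a's worst-off individual (7).
print(better(situation_a, situation_b) is situation_b)  # True

# Note that a utilitarian sum would rank them the other way:
# sum(situation_a) == 11 is less than sum(situation_b) == 15.
```

The contrast with the sum in the last two lines is the point of the previous section: the two rankings can disagree, and the WIS metric deliberately ignores the total.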

It’s true that suffering is impossible to quantify precisely and often difficult to quantify even vaguely.  With the metric I’m using, that difficulty introduces a commensurate difficulty in assessing the moral value of a situation.  That’s a consequence I’m comfortable with.

The meaning of moral vocabulary

This is a whole separate problem, and it’s a big one.  What do words like “right” and “wrong” even mean?  I’ll put forward some opinions about this in an upcoming post.



§ 5 Responses to Criticize my morality

  • jasonburbage says:

    Even if you can indisputably quantify the amount of suffering a person experiences, I don’t see how you can say that 1 person suffering at an 8/10 is worse than 1000 people suffering at 7.9/10.

    I can agree that 1000 people suffering at 2/10 is not as bad as 1 person at 9/10. Adding them is too simplistic. I just think at some point the scales probably tip the other direction.

    As a mathematician I want to say that suffering is a logarithmic scale. So suffering at 1/10 is worth 1 “point”, then 2/10 is 10, 3/10 is 100, etc. Then a sum is a little bit more meaningful. 1000 people suffering at 2 would be 10,000 “points” of suffering, while 1 person at 9 is 100,000,000 points.

    The hard question, then, for me is this: is there a number X for which I would think it’s justifiable for 1 person to suffer a large amount in order to prevent X people from suffering some smaller (but not insignificant) amount? I think unless the amount of suffering is almost trivially low, the answer is yes.
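The point arithmetic in the scale above can be sketched as follows (base 10 and the 1–10 levels are just the commenter’s example; whether a logarithmic function is the right one is left open in the comment itself):

```python
def points(level, base=10):
    """Convert a subjective suffering level (1-10) to 'points' on the
    example scale: level 1 -> 1, level 2 -> 10, level 3 -> 100, ...
    i.e. points = base ** (level - 1)."""
    return base ** (level - 1)

def total_points(levels, base=10):
    """Sum the points over all individuals -- the quantity this scheme
    proposes to compare across situations."""
    return sum(points(lvl, base) for lvl in levels)

# 1000 people suffering at level 2 vs. 1 person at level 9:
print(total_points([2] * 1000))  # 10000
print(total_points([9]))         # 100000000
```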

    • paginavorus says:

      It sounds like your view is an indifference curve with an axis for WIS and an axis for number of suffering individuals. Maybe you could try drawing that curve and see if you can make it accurately model your intuitions.

      I do say that 1 person suffering at 8/10 is worse than 1000 suffering at 7.9/10. That doesn’t seem strange to me. Why should it matter how many are suffering given that each of them only experiences one individual’s share of that suffering?

      I’m not sure I understand the point of the logarithmic scale for suffering. What is each of the two sets of numbers measuring? The variable that’s relevant to the WIS metric is the amount of subjective suffering. Is that what you’re representing by the 1-10 scale or the points that it logarithmically correlates to?
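One hypothetical way to draw the suggested indifference curve (the functional form and the constant k are pure assumptions for illustration, not something either commenter committed to): let badness be WIS plus a term that grows logarithmically with the number of sufferers, so that equal-badness combinations of the two variables trace out a curve.

```python
import math

def badness(wis, n, k=0.5):
    """One assumed functional form trading off Worst Individual
    Suffering against headcount: badness rises with WIS and (slowly,
    logarithmically) with the number of sufferers n.  The constant k
    controls how much headcount matters; k = 0 recovers the pure WIS
    view, where only the worst-off individual counts."""
    return wis + k * math.log10(n)

# With k = 0.5, 1000 people at 7.9 come out worse than 1 person at 8,
# matching jasonburbage's intuition...
print(badness(8.0, 1) < badness(7.9, 1000))       # True

# ...while k = 0 matches paginavorus's (headcount irrelevant):
print(badness(8.0, 1) > badness(7.9, 1000, k=0))  # True
```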

      • jasonburbage says:

        My 1-10 scale is your “amount of subjective suffering”. That number would then be used as an exponent on some base (I used 10 in my example) to compute the “moral value” of the suffering which could be added to other values of the same type and compared to other sums.

        If your position is that the number of people suffering does not matter, then do you not acknowledge a moral difference between torturing 1 person and torturing 2 people? Each of them only experiences one individual share of suffering.

        • paginavorus says:

          I think I can bite that bullet and say no, I don’t acknowledge a moral difference between torturing 1 or 2 people. That is, not in the immediate consequences. There might be some broader trauma that comes to bear on individuals as the result of the scale of some program of torture. But for me that would only be morally significant insofar as it would raise the WIS.

          I understand what you mean by the logarithmic scale now, but I don’t understand its theoretical motivation. Shouldn’t a numerical scale of subjective suffering simply represent the sufferer’s subjective judgments of the relative badness of various experiences? So a 10 should be twice as bad, subjectively, as a 5. Given that, I don’t understand why we would want to assign moral values to those numbers on a logarithmic scale. Why not just say if a 10 is subjectively twice as bad as a 5, then it’s twice as bad morally?

          • jasonburbage says:

            It is a necessity for me, because I do believe there is a moral difference between torturing 1 person and torturing 2 people (even as an isolated event), but I also believe that torturing 1 person is worse than causing some mild inconvenience or discomfort to 1000 people. For me to resolve those two beliefs, the ‘moral value’ of the suffering must be some function of the perceived suffering and the number of individuals experiencing it, and I believe that function could be described in mathematical terms. Whether or not it’s a logarithmic function I don’t know; it’s just what came to mind.


You are currently reading Criticize my morality at paginavorus.
