2012-02-04 (Saturday)
Okay, I was wrong. Suffering isn’t the only thing that matters to me morally. Happiness does too. But not as much as suffering. The thing that made this clear to me was the thought experiment about a button that would instantly and painlessly wipe out every suffering-capable entity (including me). I couldn’t get myself to say I would push that button. If worst individual non-consensual suffering (WINCS) is my only moral metric, then I should push that button even if only one entity somewhere is suffering just a tiny bit. It would guarantee a WINCS of zero henceforth and forevermore. That’s as low as it goes. Since I don’t feel like a universe with no conscious entities at all is morally better even compared to one with a really low but nonzero WINCS, it can’t be that WINCS is the only relevant moral metric for me.
Below is a representation of how much each thing matters to me, using my favorite intellectual tool, the indifference curve. The curved line here represents all the outcomes that are equally morally acceptable to me. The idea is that as the WINCS increases, it requires a higher and higher greatest individual happiness (GIH) to compensate for it morally. Any point under the curve is morally acceptable to me, and any point above it is morally unacceptable.
I made this graph with an awesome open-source vector image editor.
One thing left to specify is how much suffering and happiness (in vague terms) is represented by a given length on each axis.
Another is whether, as GIH increases, the curve merely approaches a horizontal asymptote (continuing to rise indefinitely, though at an ever-decreasing rate) or actually reaches a slope of zero (flattens out). If it does reach zero slope, that means there’s some amount of individual suffering such that no amount of individual happiness makes it okay to me. I’m inclined to say there is.
Now about that universal euthanasia button. With the graph the way I’ve set it up, the origin point (zero WINCS, zero GIH) is morally equivalent to any point on or below the curve, but morally better than any point above it. That means if I have the button in front of me and I’m in a situation represented by any point above the curve, I should push the button, since doing so will result in a morally better situation. I think I’m okay with that. I can imagine a situation where some entity is suffering so badly that I’d rather instantly extinguish all conscious life than allow that suffering to continue.
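The curve-plus-button decision rule above can be sketched in a few lines of code. To be clear, the specific curve shape here (a saturating exponential with a suffering ceiling) is my own hypothetical stand-in, as are the function names and the numbers; the post doesn’t commit to any particular formula.

```python
import math

# Hypothetical indifference curve: the maximum WINCS that a given GIH can
# morally compensate for. It saturates toward CEILING, so sufficiently bad
# suffering is never compensable no matter how great the happiness
# (this approximates the horizontal-asymptote option discussed above).
CEILING = 10.0

def max_acceptable_wincs(gih):
    """Height of the indifference curve at a given GIH (made-up formula)."""
    return CEILING * (1 - math.exp(-gih / 5.0))

def acceptable(wincs, gih):
    """Points on or below the curve are morally acceptable."""
    return wincs <= max_acceptable_wincs(gih)

def push_button(wincs, gih):
    """The button yields (WINCS 0, GIH 0), which is equivalent to any point
    on or below the curve and better than any point above it, so push
    exactly when the current situation is unacceptable."""
    return not acceptable(wincs, gih)

print(push_button(1.0, 8.0))     # mild suffering, decent happiness → False
print(push_button(12.0, 1000.0)) # suffering above the ceiling → True
```

With a curve like this, no finite GIH ever compensates for a WINCS above the ceiling, which captures the intuition at the end of the paragraph: some suffering is bad enough that extinguishing all conscious life is the better outcome.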
2012-02-04 (Saturday)
A useful comparison here is with the problem of describing the sense of grammaticalness that we have for the sentences of our native language. In this case the aim is to characterize the ability to recognize well-formed sentences by formulating clearly expressed principles which make the same discriminations as the native speaker. This undertaking is known to require theoretical constructions that far outrun the ad hoc precepts of our explicit grammatical knowledge. A similar situation presumably holds in moral theory. There is no reason to assume that our sense of justice can be adequately characterized by familiar common sense precepts, or derived from the more obvious learning principles. A correct account of moral capacities will certainly involve principles and theoretical constructions which go much beyond the norms and standards cited in everyday life; it may eventually require fairly sophisticated mathematics as well.
from A Theory of Justice by John Rawls (revised edition), pp. 41–42.
I like this comparison, although I think familiar common-sense precepts are more useful for moral theory than the ad-hoc precepts of our explicit grammatical knowledge are for syntactic theory. Telling people not to split infinitives or end sentences with a preposition does nothing to explain why “the go store I need” is a sentence of English you’ll never hear from a native speaker. Telling people to do to others what they want done to them will go some distance to explain why we judge physical assault as immoral in most cases. Of course that’s just some raw material for a theory. There’s lots and lots of work to be done after that.
2012-01-30 (Monday)
Do you feel like this is a coherent statement? “Personally I don’t approve of wealth redistribution, but I know it’s the right thing to do.”
I’m asking because I want to get a sample of people’s intuitions about whether normative statements (statements involving words like “ought”, “should”, “right”, “wrong”, “good”) are correctly explained as expressions of approving/disapproving attitudes, rather than as statements of any kind of objective fact. If the statement above strikes you as coherent, then for you normative statements don’t even contain expressions of attitudes, let alone consist of nothing but those expressions.
2012-01-18 (Wednesday)
Something just occurred to me that may force me to abandon the idea of using suffering as my sole moral metric. It seems obvious now that I’ve thought of it. If, of any two situations, the morally better one is the one whose worst-off conscious being is suffering less than the worst-off being in the other, then the morally optimal situation is one where no one is suffering at all. So far so good. But if every being capable of suffering actually is suffering to some extent (even just a tiny bit), then I should favor, if it’s available, an instant euthanasia of all suffering-capable beings over any situation in which any being is suffering at all. (I say “suffering-capable” in order to eliminate the possibility of causing survivors’ suffering to beings that have zero suffering now but might suffer as a result of all the other beings being dead.)
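The comparison rule can be made concrete with a tiny sketch; the function names and the example numbers are my own hypothetical illustration of the consequence described above, not anything from the post itself.

```python
def worst_suffering(situation):
    """Suffering level of the worst-off being in a situation
    (a list of per-being suffering levels); 0 if no beings exist."""
    return max(situation, default=0)

def morally_better(a, b):
    """Return whichever situation has the lesser worst-case suffering
    (ties go to b, so an equally good alternative is not preferred)."""
    return a if worst_suffering(a) < worst_suffering(b) else b

# A world where every being suffers at least a tiny bit...
status_quo = [0.1, 0.2, 0.05]
# ...loses to the empty world left behind by universal euthanasia:
after_euthanasia = []
print(morally_better(status_quo, after_euthanasia))  # → []
```

This makes the problem mechanical: under a pure worst-suffering metric, the empty situation scores 0 and therefore beats any situation with even the slightest suffering.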
That euthanasia isn’t available, but I’m not sure if I can accept that it’s desirable in principle. Maybe I can. I’ll have to let that question swirl around for a while and see what my intuitions produce.
2012-01-05 (Thursday)
2011-12-28 (Wednesday)
In a Facebook discussion about my previous post, a friend pointed out the issue of consent in suffering. I neglected to build that in, but I meant to exclude consensual suffering from the metric. My next task will be to define consent. In the meantime, I’m changing the name to the even-catchier Worst Individual Non-Consensual Suffering.