WINCS: More tweaks

2012-01-05 (Thursday)

Comments here and on Facebook have raised some points of ambiguity in my Worst Individual Non-Consensual Suffering metric for morality (described here and renamed here).  I’ll try and resolve them.

Intent and probability

Jason asked two related questions:

1) In this WINCS scheme, is it equally immoral to cause suffering unintentionally as it would be to cause it intentionally?

2) Is it equally immoral to increase the probability of suffering as it would be to actually cause it?

To answer these I have to clarify a few points by way of setup.

  • I mean to construe morality/immorality as a scalar variable, not a binary one.  That is, situations are not either moral or immoral; they are morally better or morally worse than some other hypothetical situation.
  • For me, acts themselves have no moral value.  Only the situations that result from them do.  This is a consequence of making subjective suffering the sole content of any moral measurement.  Any act, if it causes no subjective suffering, carries no moral cost.  You could kill six million people and that would be morally fine to me IF it resulted in no suffering for anyone ever.  (Big if.)  To put it another way:  Acts are not moral or immoral, but they do result in morally better or worse situations.  I might say “X act is immoral” as a shorthand for “X act results in a morally worse situation than the situation that results from some alternative act Y.”  But for me it is a shorthand, and a potentially misleading one.

Given that second point, intent and probability can’t play any role in assigning moral value, because they play no role in the ontology of actual situations – or at least no role in the features of actual situations that are relevant to this metric, namely subjective experiences of suffering.
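
To make the comparative, situation-level reading concrete, here's a minimal sketch in Python.  The representation is mine, not part of the metric: it assumes a situation can be summarized as a list of records, each with a subjective suffering level and a flag for whether that suffering is consensual, and then compares situations by their worst individual non-consensual suffering.

    from dataclasses import dataclass

    @dataclass
    class Experience:
        person: str
        suffering: float   # subjective intensity; higher is worse
        consensual: bool   # does it result only from consensual acts?

    def wincs(situation):
        """Worst Individual Non-Consensual Suffering in a situation.

        Returns 0 if nobody suffers non-consensually."""
        return max((e.suffering for e in situation if not e.consensual), default=0.0)

    def morally_better(a, b):
        """A situation is morally better than another if its WINCS is lower.

        Note that this compares situations, not acts; an act is only
        'better' derivatively, via the situation it results in."""
        return wincs(a) < wincs(b)

In these terms, the shorthand "X act is immoral" just means the situation X leads to has a higher WINCS than the situation some alternative act Y leads to; only the two situations ever get measured.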

Jason asked in particular:

For one example, is it immoral for a doctor to prescribe drugs that might cause a severe allergic reaction, regardless of how much it helps those that have no reaction? Assume that the suffering caused by the reaction is greater than the suffering relieved by the medication.

My answer is that the act of prescribing drugs, like any act, is neither moral nor immoral.  Only its results have moral value, and only a comparative moral value as against some other results of some other possible act.  Same answer to the rest of Jason’s questions.

This might look like I’ve jumped right into my own reductio ad absurdum.  If only situations have moral value and not acts, and intention plays no role in it, then how can we distinguish morally between accidents and malice?  Isn’t it a failure on the part of any moral system if it provides no basis for such a distinction, since the intuition that malice is morally worse than an accident is so near-universal and deeply felt?

I don’t think it is a failure, and that’s because I think the intuition has a little more to it than appears at first glance.  We don’t just want to say accidentally hitting someone with your car is not as morally bad as intentionally running them over.  We want to make that comparison because we need it as a basis for treating people differently depending on which of those two things they’ve done.  I agree that we need to treat people differently depending on which of those two things they’ve done, but I’m knocking out that basis and substituting a different one.  More on that below under “Praise and blame”.

Praise and blame; or, do you deserve to get yelled at

Fault and credit are kind of weird notions.  They’re strong judgments that depend crucially on some vague theoretical concepts about human action.  Those concepts are plenty interesting and I may try and dig into them in another post, but for now I’m bringing them up only to reject them as relevant to morality.  Just like analyses of intention, their typical use in moral questions is as a basis for figuring out how we want to treat someone who has done some particular thing.

  • You didn’t show up at 6:30pm like you said you would, so now your friend is more reluctant to meet up with you in the future.  But then she finds out it’s because you got hit by a car on the way.  So it’s not your fault and now she’s just as willing to meet up with you as before.
  • Somebody is convicted of a murder and after a psychological evaluation, is deemed incompetent and committed to a mental health institution instead of prison.

There’s a significant difference between these two examples.  In the first case, your friend’s response to your failure to show up changes because the newly-revealed circumstance of having been hit by a car removes the theoretical motivation for her initial response.  That motivation was her belief that your failure to show up was a predictor that you would fail to show up to future engagements.  There’s an emotional element in it too, the feeling of irritation or anger at being stood up.  But I think the connection from that feeling to the response of being more reluctant to meet up in the future also depends on the belief that this one failure to show up is a predictor for future recurrences.  In this case, intention does serve as a basis for deciding how to treat someone who has committed some act, but only as a proxy for making a prediction about future acts, not as a primary criterion in itself.  Since an accident outside your control is the reason you didn’t show up, your failure to show up was unintentional.  But your friend might still be reluctant to meet up with you again even though it wasn’t your “fault” that you didn’t show up, if she thinks you’re likely to be similarly detained in the future.  I don’t know why she would think that.  Let’s say there’s some crazy guy stalking you who tries to run you over every time you leave the house.  Whatever.  The point is that your intentions aren’t what determine your friend’s response to being stood up.  What determines that is whether she thinks you’ll stand her up again, intentionally or not.

In the second case, predictions of future behavior may not play a big role in the decision of the court to send the offender to a mental health institution instead of prison.  Life imprisonment without parole will be equally effective at preventing a person from killing anybody outside of prison whether that person is insane (whatever that means) or not.  An element that plays at least a significant role in the court’s decision is whether the person is “culpable” for the crime, or in more ordinary speech, whether it’s his fault.  This is the part that I reject as a moral concept.  I don’t need it in order to explain why we should treat some people differently from others even when they cause the same suffering.

There’s no moral difference for me between intentional and unintentional infliction of suffering, IF the suffering that results from them is the same.  (Another big if.)  But there is a difference in what response to those acts is appropriate, and that’s because of the difference in what response is likely to minimize future suffering (more precisely, to minimize the future WINCS).

Defining consent

How do we figure out which suffering is consensual so we know what to exclude from the Worst Individual Non-Consensual Suffering?  Here’s a criterion:  Suffering is consensual if it results only from consensual acts.

A common-sense definition of a consensual act might be the following:  An act committed without the application of force.

It’s easy to come up with a whole graduated spectrum of examples of actions which qualify more or less questionably as force.  Here are some end points:  Threatening to shoot someone if they don’t vacate their land will qualify as force in the opinion of just about everybody.  Asking politely if they will sell and leaving them alone if they refuse will qualify as force in the opinion of just about nobody.  In between, how about the following:  Building a factory nearby that dumps toxins into the stream they rely on for drinking water.  Building a house nearby and having loud raves all night every night.  Buying all the land around them and refusing right of way.  Buying all the land around them and allowing them to go through, but only for a fee.

Those examples involve varying degrees of suffering for the person so treated, and more importantly an expectation of future suffering.  I think for most of us, our sense of how fully each example qualifies as force will vary commensurately.  But causing an expectation of suffering seems to be only necessary to qualify an act as force, not sufficient.  The other necessary part is the actor's belief that the expectation will be caused.  To define this succinctly, I'm going to use letters like in algebra.  But first I'll give a concrete example, a bank robbery.

The robber threatens to shoot the teller if he doesn’t open the safe.  Does this threat count as force?  Is opening the safe a consensual act?

Now the algebraic form: For any act A (threatening to shoot the teller) by a person X (the robber) to qualify as force against a person Y (the teller),

1) X has to believe that A will cause Y to expect suffering (getting shot) to result from failing to perform an act B (opening the safe).

Add to this the part already discussed,

2) A has to succeed in causing Y to expect future suffering.

and we have a definition of a consensual act that’s long but seems to produce the right results:  A consensual act is any act B performed by a person Y without the performance of an act A by any second party X such that X correctly (#2) believes that A will cause Y to expect suffering to result from failing to perform B.

I left out intention and included only X’s belief about the effectiveness of the threat because we could imagine a person being forced to force someone to do something.  If you tell someone “Go rob this bank or I’ll kill your family”, they might go and force the teller to open the safe, but they don’t necessarily want the teller to open the safe.  They just want to satisfy you so you don’t kill their family.  It still counts as force on their part.
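
Since that definition is a mouthful, here's the same thing written out as a pair of predicates in Python.  The record type and field names are my own invention, just to show how conditions #1 and #2 combine; the post doesn't commit to any particular formalization.

    from dataclasses import dataclass

    @dataclass
    class Pressure:
        """An act A by a second party X aimed at getting Y to perform B."""
        x_believes_a_coerces: bool   # condition 1: X believes A will make Y
                                     # expect suffering if Y doesn't do B
        a_causes_expectation: bool   # condition 2: A actually causes Y to
                                     # expect that suffering

    def is_force(a: Pressure) -> bool:
        """A qualifies as force against Y only if both conditions hold."""
        return a.x_believes_a_coerces and a.a_causes_expectation

    def is_consensual(pressures_on_b: list) -> bool:
        """B is consensual if no second party's act qualifies as force toward it."""
        return not any(is_force(a) for a in pressures_on_b)

In the bank robbery, the threat satisfies both conditions, so opening the safe is non-consensual and any suffering that results from it stays inside the WINCS calculation.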

Defining causation

This problem is harder than most people think, but it’s not crucial for the WINCS metric because I’ve defined moral value as belonging to situations only and not acts.  I don’t need to precisely identify causal connections between acts and situations in order to make moral judgments, because I’m making the moral judgments about the situations themselves, not the acts.

BUT – although it’s not theoretically crucial, defining causation is still important to make any practical use of the WINCS metric.  After all, a moral theory is pointless if it doesn’t give us a way to decide what we should do.  If I can’t say which of several options for action will result in a morally optimal outcome, it does me no good to know which outcome is morally optimal.

Right now the definition of causation that I have the least trouble buying is something that philosophers call the stepwise counterfactual account.  The counterfactual part goes like this: if you hadn't thrown that rock, the window wouldn't have broken.  That means your throwing the rock caused the window to break.  The stepwise part means we break that causal relation down into a chain of smaller causal relations, possibly all the way down to the microphysical level.
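
As a toy illustration only, with the whole causal model of the rock-and-window story supplied by hand (nothing here comes from Hall or Paul), counterfactual dependence can be checked by comparing what the model says happens with and without the candidate cause; the stepwise part then demands a chain of such dependencies rather than a single long-range one.

    def run_model(rock_thrown: bool) -> dict:
        """A hand-built toy model of the rock/window story: each event is
        determined by the events before it."""
        rock_hits_window = rock_thrown        # assume the throw is accurate
        window_breaks = rock_hits_window      # assume the glass is fragile
        return {"rock_thrown": rock_thrown,
                "rock_hits_window": rock_hits_window,
                "window_breaks": window_breaks}

    def depends_on_throw(effect: str) -> bool:
        """Counterfactual dependence: does flipping the throw flip the effect?"""
        return run_model(True)[effect] and not run_model(False)[effect]

    # Plain counterfactual test: no throw, no broken window, so the throw
    # caused the break.
    print(depends_on_throw("window_breaks"))     # True

    # The stepwise version instead checks each link in the chain
    #   rock_thrown -> rock_hits_window -> window_breaks
    # the same way.
    print(depends_on_throw("rock_hits_window"))  # True: the first link holds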

Addendum:  I learned about this definition from a conversation between philosophers Ned Hall and L.A. Paul at Philosophy TV, but I probably should have watched the whole thing.  I just googled up a paper by L.A. Paul and it says there are a bunch of problems with this account.  I’ll see what I think after I read the paper more attentively.

Duration of suffering

Another friend asked on Facebook what role the duration of suffering plays in the WINCS metric.  Is suffering worse if it lasts longer?  I think I'm going to outsource that problem to each individual sufferer and let them assess their own subjective meta-suffering.  My guess is that duration and intensity of suffering trade off along an indifference curve for most people, though its slope and curvature will vary from person to person.
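
One way to picture that outsourcing move: each person supplies their own function from intensity and duration to overall badness, and any two experiences that the function maps to the same value sit on one of that person's indifference curves.  The particular functional form and parameter below are purely illustrative assumptions on my part, not something the metric commits to.

    def subjective_badness(intensity: float, duration: float, alpha: float = 0.5) -> float:
        """One person's own aggregation of suffering over time.

        alpha is a per-person parameter: near 0 means duration barely matters
        to them, near 1 means they weigh it roughly linearly.  The form
        intensity * duration**alpha is illustrative only."""
        return intensity * duration ** alpha

    # Two experiences this hypothetical person rates as equally bad, i.e. two
    # points on the same indifference curve:
    print(subjective_badness(8.0, 1.0))   # short but intense  -> 8.0
    print(subjective_badness(4.0, 4.0))   # longer but milder  -> 8.0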

Second thoughts

After a conversation I had with a philosopher friend and other objections I’ve received, I might have to significantly modify this moral theory.  Maybe throw it out altogether.  That’s part of the point of putting it out there for criticism, and the criticism is welcome.  As long as you do it with honesty and diligence, finding out you were wrong is awesome.  It means you’re making progress.  But no promises.  I’m still thinking it over.

Also, maybe in future posts I’ll refrain from biting off so many philosophical problems to chew on all at once.  This post took a lot longer to write than I expected, and I just made a cursory stab at each part.


§ 2 Responses to WINCS: More tweaks

  • Justin says:

    4 brief comments:

    1. Setting the locus of moral evaluation at states of affairs rather than acts or agents is very odd, given our actual practice of moral evaluation (we morally evaluate rational agents, not events). Do you intend to derive a notion of wrongness and rightness for acts on the basis of the wrongness or rightness of states of affairs? (This would be very interesting, if you were to go this way, since it’s the inverse of a very natural way of proceeding.) Or do you think all such talk of the rightness of doing X (etc.) to be misguided?

    2. Your question [2] “Is it equally immoral to increase the probability of suffering as it would be to actually cause it?” raises the problem of moral luck: http://plato.stanford.edu/entries/moral-luck/

    3. I don’t agree with your “Lowest WIS” principle (from the earlier post) for the reasons raised by the other commenter. While I’m not sure what to say about torturing someone to find out information as to where the bomb is located, I *do* know that torturing an additional person for no reason is worse. But if torturing the additional person involves slightly less torturing of the first, the “Lowest WIS” principle says we should do that, which seems wrong.

    4. Here’s another problem for “Lowest WIS”. Imagine someone who has adopted it as their moral principle, call him Joe. Joe faces the following situation: there’s a bomb strapped to a person and if it detonates that person will experience a terrible agonizing death. However, Joe can order a soldier (in his command) to remove the bomb from that person, but in so doing both people will experience a slightly less agonizing death. If Joe is guided by Lowest WIS to order the soldier to remove the bomb, thus killing them both, I would say that he’s a moral monster.

    • paginavorus says:

      1. I think my aim right now is to derive a notion of “secondary” (not sure what word to use for this) wrongerness and righterness (comparative and scalar, not binary) for acts on the basis of the wrongerness or righterness of states of affairs. That is, acts in my scheme can’t be more wrong or right in themselves but only by virtue of the states of affairs that result from them. A state of affairs itself, by contrast, can be intrinsically more or less wrong than some other state of affairs – or as I’d rather term it, morally better or worse.

      2. Thanks for the article link. I’ve been reading it since you posted it and I may have something to say after I’ve digested it enough. That will probably take weeks.

      3. I do share the intuition that torturing an additional person _for no reason_ is worse. That is, I’m prepared to add in the rule that a greater number of sufferers makes a situation morally worse than another, _provided that_ they have the same individual peak of suffering. It’s a tie-breaker for me, in other words. I’m not sure I share your feeling that it’s wrong to torture an additional person with the result of slightly reducing the suffering of the person already being tortured. I understand the attempt at reductio here. My scheme does mean that if we can reduce the suffering of the highest sufferer by even a tiny bit at the cost of increasing the suffering of six billion people by a lot, but to a level less than the original peak, we’re morally obligated to do it. I’m not sure I regard that as an absurd outcome.

      4. I’m not sure I’d say that. Or maybe less infuriatingly, if I do figure out a way I can say that, it will probably be on the basis of the wider effects on the world of Joe’s decision to issue that order and the potential for future suffering for other people as a result of notions and practices about individual rights – which I value not inherently but for their effects on suffering. If we had the situation you describe in a causal vacuum, I wouldn’t say Joe was a moral monster, and I would bite the bullet and say yes, he should issue that order.
