Message Number: 576
From: Matt Rudary <mrudary@eecs.umich.edu>
Date: Tue, 07 Nov 2006 17:44:17 -0500
Subject: Re: social welfare + fairness + knowledge
I don't have a whole lot to add to this conversation, as my reading on
the subject of ethics is woefully thin, but a starting point for this
discussion may be found at
http://en.wikipedia.org/wiki/Utilitarianism#Criticism_and_defense_of_utilitarianism

Matt

Dave Morris wrote:
> I put forward the somewhat controversial point that sometimes absolutely 
> horrible things are in fact the ethically correct course of action. We 
> live in a universe that doesn't care whether we live or die; people 
> suffer regularly as a part of life. Furthermore, our species is filled 
> with people who are willing to commit atrocities because they are 
> mentally broken, whether by genetics or by what has been done to them. 
> This is reality. That one can posit a situation that requires one to do 
> horrible things in response to this reality does not mean that one's 
> ethical code is flawed. It means that our universe is flawed (if bad 
> things happening is how we define a flaw).
> 
> In the extreme and unrealistic ticking-bomb situation- where a) I know 
> the person I have captive knows the answer I need, b) I know that the 
> threat is real (but for some reason don't know where the bomb is?), and 
> c) I know that torture is the only way to get the information- of 
> course I'd torture the person for information; you'd be a fool not to. 
> But in the real world, a, b, and c are never all true. And in the real 
> world, I would happily sign a universal ban on torture, even though I 
> admit to my first assertion. In reality, people won't wait to see that 
> a, b, and c are true; they'll use torture more and more often, for more 
> and more trivial reasons, and many people will suffer all the time. 
> That is a greater cost than the very low-probability event of losing 
> New York City because you failed to torture the right person at the 
> right time. Even knowing that we live in a world where our government 
> can take almost anyone, almost any time, and disappear them to 
> Guantanamo Bay, and do whatever they want to them there without 
> oversight or regulation, is a huge cost to me. It really bothers me. 
> And it didn't even happen to me or anyone I know. That's the realistic 
> consideration of any utilitarian argument about torture: the realistic 
> costs in the realistic situations. Maybe the law should be that 
> torturing a subject for information is a capital offense- and this 
> will be universally applied, regardless of outcome. In which case, I 
> would still commit to my assertion at the beginning of this paragraph- 
> any rational ethical person would.
> 
> So no, utilitarianism is not broken just because it can be used to 
> justify torture.
> 
> And I think Erik put it well- fairness etc. are adequately captured by 
> utilitarianism as well: since fairness is important to people, it 
> provides them with utility.
> 
> The value of this argument is that we accept that the basis for 
> discussing such topics should be the overall utility of the decision. 
> So when we decide whether or not to pass a law banning torture, or 
> requiring hotels to put mints on pillows, we can talk about the utility 
> it will provide or remove, to how many, and to whom, and thus come to 
> agreement on the best course of action. That is far more useful than 
> talking about what feels right, or what God says we should do, or most 
> other decision-making processes I've seen.
> 
> Just my thoughts; you can tell I've gone over this argument more than 
> once before. :-)
> 
> Dave
> 
> On Nov 6, 2006, at 5:53 PM, Daniel Reeves wrote:
> 
>> That's another tricky thing about maximizing social welfare 
>> (synonymous with maximizing utility, as Dave notes) -- deciding how to 
>> include nonhumans in the equation.  You have to include animals' 
>> utility in some way; otherwise it would be ethically A-OK to torture 
>> animals for fun.
>> Or maybe it suffices that there are *people* who get disutility from 
>> the torture of animals.  For example, if we had a yootles auction to 
>> decide whether to kill a puppy, we wouldn't need the puppy's 
>> participation to decide not to do it.
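>>
>> To make that concrete, here's a minimal sketch of a declared-utility 
>> group decision (a simplification: names and numbers are invented, and 
>> it skips the side payments a real yootles auction would involve):
>>
>>     # Everyone declares a utility (in yootles) for each outcome;
>>     # the outcome with the highest declared total wins.
>>     def decide(declared):
>>         outcomes = {o for prefs in declared.values() for o in prefs}
>>         totals = {o: sum(p.get(o, 0) for p in declared.values())
>>                   for o in outcomes}
>>         return max(totals, key=totals.get)
>>
>>     # The puppy casts no bid; the humans' declared disutility for
>>     # "kill" is enough to swing the total on its own.
>>     bids = {"alice": {"kill": 0, "spare": 40},
>>             "bob":   {"kill": 5, "spare": 0}}
>>     print(decide(bids))  # -> spare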
>>
>> That puts me tentatively in the "animals don't count" camp.  Anyone else?
>>
>> (I disagree with Dave that 2 & 3 are subsets of 1.  Splitting utility 
>> equally is often more important than maximizing the sum of utilities.  
>> For example, it's not OK to steal money from someone who doesn't need 
>> it as much as you.)
>>
>> (And knowledge, truth, and scientific understanding are intrinsically 
>> valuable, beyond their applicability to improving social welfare.  But 
>> perhaps my own strong feelings about this undermine my own point.  In 
>> other words, maybe we don't need to include it for the same reason we 
>> don't need to include animal welfare.)
>>
>>
>> --- \/   FROM Dave Morris AT 06.10.30 11:25 (Oct 30)   \/ ---
>>
>>> I think it's important to note that 2 & 3, while distinct and 
>>> interesting components of the discussion, are in fact subsets of 1, 
>>> which could be rephrased in its general sense as "maximization of 
>>> utility" if you don't want to restrict it to the defined subset of 
>>> "humans". :-)
>>>
>>> On Oct 28, 2006, at 1:30 PM, Daniel Reeves wrote:
>>>
>>>> Based on off-line discussion with my grandfather, I propose that 
>>>> there are only three fundamental principles worth fighting for in 
>>>> human society:
>>>>   1. Social Welfare
>>>>   2. Fairness
>>>>   3. The Search for Knowledge
>>>> (This started with an argument about the parental retort "who says 
>>>> life's supposed to be fair?")
>>>>
>>>>   (1 and 2 are distinct because if we're all equally miserable, that's
>>>>   fair but not welfare maximizing.  Likewise, of the methods for
>>>>   dividing a cake, for example, the method of "I get all of it"
>>>>   maximizes the sum of our utilities, but we nonetheless prefer
>>>>   splitting it in half.)
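>>>>
>>>> To see that numerically (utilities invented for illustration, with
>>>> the whole cake worth more than two halves put together):
>>>>
>>>>     # "I get all of it": utilities (10, 0) -- sum 10, min 0
>>>>     # "half each":       utilities (4, 4)  -- sum 8,  min 4
>>>>     allocations = {"I get all of it": (10, 0), "half each": (4, 4)}
>>>>     by_sum = max(allocations, key=lambda a: sum(allocations[a]))
>>>>     by_min = max(allocations, key=lambda a: min(allocations[a]))
>>>>     print(by_sum)  # -> I get all of it  (welfare-maximizing)
>>>>     print(by_min)  # -> half each  (fair, in the maximin sense)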
>>>> Is there a number 4?
>>>> -- 
>>>> http://ai.eecs.umich.edu/people/dreeves  - -  search://"Daniel Reeves"
>>> David P. Morris, PhD
>>> Senior Engineer, ElectroDynamic Applications, Inc.
morris@edapplications.com, (734) 786-1434, fax: (734) 786-3235
>>>
>>>
>>
>> -- 
>> http://ai.eecs.umich.edu/people/dreeves  - -  search://"Daniel Reeves"
>>
>> "Lassie looked brilliant in part because the farm family she lived
>> with was made up of idiots. Remember? One of them was always
>> getting pinned under the tractor and Lassie was always rushing
>> back to the farmhouse to alert the other ones. She'd whimper and
>> tug at their sleeves, and they'd always waste precious minutes
>> saying things: "Do you think something's wrong? Do you think she
>> wants us to follow her? What is it, girl?", etc., as if this had
>> never happened before, instead of every week. What with all the
>> time these people spent pinned under the tractor, I don't see how
>> they managed to grow any crops whatsoever. They probably got by on
>> federal crop supports, which Lassie filed the applications for."
>>   -- Dave Barry
> David P. Morris, PhD
> Senior Engineer, ElectroDynamic Applications, Inc.
> morris@edapplications.com, (734) 786-1434, fax: (734) 786-3235
>