Date: Tue, 7 Nov 2006 16:08:33 -0500
To: improvetheworld Æ umich.edu
From: Dave Morris
Subject: Re: social welfare + fairness + knowledge

I put forward the somewhat controversial point that sometimes absolutely horrible things are in fact the ethically correct course of action. We live in a universe that doesn't care whether we live or die; people suffer regularly as a part of life. Furthermore, we live as a species filled with people who are willing to commit atrocities, because they are mentally broken, either by genetics or by what has been done to them. This is reality. That one can posit a situation that requires one to do horrible things in response to this reality does not mean that one's ethical code is flawed. It means that our universe is flawed (if bad things happening were the definition of a flaw).

In the extreme and unrealistic ticking-bomb situation - where a) I know the person I have captive knows the answer I need, b) I know that the threat is real (but for some reason don't know where the bomb is?), and c) I know that torture is the only way to get the information - of course I'd torture the person for the information; you'd be a fool not to. But in the real world, a, b, and c are never true. And in the real world, I would happily sign a universal ban on torture, even though I stand by my first assertion. In reality, people won't wait to see that a, b, and c are true; they'll use torture more and more often, for more and more trivial reasons, and many, many people will suffer all the time. That is a greater cost than the very low-probability event of losing New York City because you failed to torture the right person at the right time. Even knowing that we live in a world where our government can take almost anyone, almost any time, and disappear them to Guantanamo Bay, and do whatever they want to them there without oversight or regulation, is a huge cost to me. It really bothers me. And it didn't even happen to me or anyone I know. That's the realistic consideration of any utilitarian argument about torture: the realistic costs in the realistic situations.
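To put rough structure on that expected-cost claim, here is a minimal back-of-the-envelope sketch. Every probability and cost figure below is invented purely for illustration; none of them comes from the original message.

# Back-of-the-envelope expected-cost comparison of the two policy
# regimes Dave describes. All numbers are hypothetical, chosen only
# to show the structure of the argument, not to measure anything real.

p_ticking_bomb = 1e-6   # chance per year the true ticking-bomb case arises
cost_city_lost = 1e12   # cost of losing a city (arbitrary units)
abuses_per_year = 1000  # torture used for increasingly trivial reasons
cost_per_abuse = 1e7    # suffering + the fear of arbitrary detention

# Regime 1: torture permitted "just in case" -> routine abuse happens.
expected_cost_permit = abuses_per_year * cost_per_abuse
# Regime 2: universal ban -> we eat the rare catastrophe.
expected_cost_ban = p_ticking_bomb * cost_city_lost

print(f"permit torture: {expected_cost_permit:.3g}")  # -> 1e+10
print(f"ban torture:    {expected_cost_ban:.3g}")     # -> 1e+06

# Under these (made-up) numbers the routine abuse dominates the rare
# catastrophe by four orders of magnitude, which is the shape of the claim.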
Maybe the law should be that torturing a subject for information is a capital offense, and this will be universally applied, regardless of outcome. In that case I would still stand by the assertion I opened with - any rational, ethical person would. So no, utilitarianism is not broken because it can be used to justify torture.

And I think Erik put it well - fairness, etc., are adequately captured by utilitarianism as well, since fairness is important to people and therefore provides them with utility. The value of this framing is that we accept that the basis for arguments about such topics should be the overall utility of the decision. So when we decide whether or not to pass a law banning torture, or requiring hotels to put mints on pillows, we can talk about the utility it will provide, and remove, to how many, and to whom, and thus come to agreement on the best course of action. That is far more useful than talking about what feels right, or what God says we should do, or most of the other decision-making processes I've seen. (Dan's cake example below puts the welfare-versus-fairness tension in numbers; see the sketch at the end of this message.)

Just my thoughts; you can tell I've gone over this argument more than once before. :-)

Dave

On Nov 6, 2006, at 5:53 PM, Daniel Reeves wrote:

> That's another tricky thing about maximizing social welfare
> (synonymous with maximizing utility, as Dave notes) -- deciding how to
> include nonhumans in the equation. You have to include animals'
> utility in some way; otherwise it would be ethically A-OK to torture
> animals for fun.
> Or maybe it suffices that there are *people* who get disutility from
> the torture of animals. For example, if we had a yootles auction to
> decide whether to kill a puppy, we wouldn't need the puppy's
> participation to decide not to do it.
>
> That puts me tentatively in the "animals don't count" camp. Anyone
> else?
>
> (I disagree with Dave that 2 & 3 are subsets of 1. Splitting utility
> equally is often more important than maximizing the sum of utilities.
> For example, it's not OK to steal money from someone who doesn't need
> it as much as you.)
>
> (And knowledge, truth, and scientific understanding are intrinsically
> valuable, beyond their applicability to improving social welfare. But
> perhaps my own strong feelings about this undermine my own point. In
> other words, maybe we don't need to include it for the same reason we
> don't need to include animal welfare.)
>
> --- \/ FROM Dave Morris AT 06.10.30 11:25 (Oct 30) \/ ---
>
>> I think that it's important to note that 2 & 3, while distinct and
>> interesting components of the discussion, are in fact subsets of 1,
>> which could be rephrased in its general sense as "maximization of
>> utility" if you don't want to treat only the defined subset of
>> "human". :-)
>>
>> On Oct 28, 2006, at 1:30 PM, Daniel Reeves wrote:
>>
>>> Based on off-line discussion with my grandfather, I propose that
>>> there are only three fundamental principles worth fighting for in
>>> human society:
>>> 1. Social Welfare
>>> 2. Fairness
>>> 3. The Search for Knowledge
>>> (This started with an argument about the parental retort "who says
>>> life's supposed to be fair?")
>>>
>>> (1 and 2 are distinct because if we're all equally miserable, that's
>>> fair but not welfare maximizing. Likewise, of the methods for
>>> dividing a cake, for example, the method of "I get all of it"
>>> maximizes the sum of our utilities, but we nonetheless prefer
>>> splitting it in half.)
>>> Is there a number 4?
>>> --
>>> http://ai.eecs.umich.edu/people/dreeves - - search://"Daniel Reeves"
>>
>> David P. Morris, PhD
>> Senior Engineer, ElectroDynamic Applications, Inc.
>> morris Æ edapplications.com, (734) 786-1434, fax: (734) 786-3235
>
> --
> http://ai.eecs.umich.edu/people/dreeves - - search://"Daniel Reeves"
>
> "Lassie looked brilliant in part because the farm family she lived
> with was made up of idiots. Remember? One of them was always
> getting pinned under the tractor and Lassie was always rushing
> back to the farmhouse to alert the other ones. She'd whimper and
> tug at their sleeves, and they'd always waste precious minutes
> saying things: "Do you think something's wrong? Do you think she
> wants us to follow her? What is it, girl?", etc., as if this had
> never happened before, instead of every week. What with all the
> time these people spent pinned under the tractor, I don't see how
> they managed to grow any crops whatsoever. They probably got by on
> federal crop supports, which Lassie filed the applications for."
> -- Dave Barry

David P. Morris, PhD
Senior Engineer, ElectroDynamic Applications, Inc.
morris Æ edapplications.com, (734) 786-1434, fax: (734) 786-3235
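The promised numeric footnote to Dan's cake example: a minimal sketch with hypothetical utility functions. The 2:1 enjoyment ratio and linearity are invented purely to make "I get all of it" actually sum-maximizing, so the welfare/fairness split is visible.

# Dan's cake example with invented numbers: suppose Dan happens to
# enjoy cake twice as much per slice as Dave does. Utilities are
# hypothetical and linear in cake share, purely for illustration.

def u_dan(share):
    """Dan's utility for his fraction of the cake."""
    return 2.0 * share

def u_dave(share):
    """Dave's utility for his fraction of the cake."""
    return 1.0 * share

allocations = {
    "Dan gets all of it": (1.0, 0.0),
    "split in half": (0.5, 0.5),
}

for name, (dan_share, dave_share) in allocations.items():
    total = u_dan(dan_share) + u_dave(dave_share)     # principle 1: welfare
    gap = abs(u_dan(dan_share) - u_dave(dave_share))  # principle 2: (un)fairness
    print(f"{name:20s} sum = {total:.2f}  gap = {gap:.2f}")

# Dan gets all of it   sum = 2.00  gap = 2.00
# split in half        sum = 1.50  gap = 0.50
#
# The sum-maximizing allocation is also the least fair one, which is
# exactly the tension Dan points at between principles 1 and 2.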