❦
Are apples good to eat? Usually, but some apples are rotten.
Do humans have ten fingers? Most of us do, but plenty of people have lost a finger and nonetheless qualify as “human.”
Unless you descend to a level of description far below any macroscopic object—below societies, below people, below fingers, below tendon and bone, below cells, all the way down to particles and fields where the laws are truly universal—practically every generalization you use in the real world will be leaky.
(Though there may, of course, be some exceptions to the above rule…)
Mostly, the way you deal with leaky generalizations is that, well, you just have to deal. If the cookie market almost always closes at 10 p.m., except on Thanksgiving it closes at 6 p.m., and today happens to be National Native American Genocide Day, you’d better show up before 6 p.m. or you won’t get a cookie.
Our ability to manipulate leaky generalizations is opposed by need for closure, the degree to which we want to say once and for all that humans have ten fingers, and get frustrated when we have to tolerate continued ambiguity.
Raising the value of the stakes can increase need for closure—which shuts down complexity tolerance when complexity tolerance is most needed.
Life would be complicated even if the things we wanted were simple (they aren’t). The leakiness of leaky generalizations about what-to-do-next would leak in from the leaky structure of the real world. Or to put it another way:
Instrumental values often have no specification that is both compact and local.
Suppose there’s a box containing a million dollars. The box is locked, not with an ordinary combination lock, but with a dozen keys controlling a machine that can open the box. If you know how the machine works, you can deduce which sequences of key-presses will open the box. There’s more than one sequence that will trigger the machine to open the box. But if you press a sufficiently wrong sequence, the machine incinerates the money. And if you don’t know about the machine, you can’t fall back on simple rules like “Pressing any key three times opens the box” or “Pressing five different keys with no repetitions incinerates the money.”
There’s a compact nonlocal specification of which keys you want to press: You want to press keys such that they open the box. You can write a compact computer program that computes which key sequences are good, bad, or neutral, but the computer program will need to describe the machine, not just the keys themselves.
There’s likewise a local noncompact specification of which keys to press: a giant lookup table of the results for each possible key sequence. It’s a very large computer program, but it makes no mention of anything except the keys.
But there is no specification of which key sequences are good, bad, or neutral that is both compact and phrased only in terms of the keys themselves.
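For the programmatically inclined, here is a minimal sketch of the two specifications, using a toy machine invented purely for illustration (the modular-arithmetic rule inside machine_outcome is a hypothetical stand-in, not part of the thought experiment): the function is compact but must describe the machine’s internals; the dictionary is phrased only in terms of the keys, but grows exponentially with sequence length.

```python
# A toy sketch of the compact-vs.-local tradeoff. Every detail of the
# "machine" below is a hypothetical stand-in for the real world's complexity.
from itertools import product

KEYS = range(12)   # a dozen keys
SEQ_LEN = 3        # keep sequences short so the lookup table stays small

def machine_outcome(seq):
    """Compact but nonlocal: a short program, but it has to model the
    machine's internals rather than just talking about the keys."""
    state = sum(seq) % 12        # invented internal rule
    if state == 0:
        return "open"            # good
    if state == 7:
        return "incinerate"      # bad
    return "nothing"             # neutral

# Local but noncompact: a giant lookup table that mentions nothing except
# the keys themselves -- every sequence is listed with its outcome.
LOOKUP = {seq: machine_outcome(seq) for seq in product(KEYS, repeat=SEQ_LEN)}

print(len(LOOKUP))        # 1,728 entries already; it grows as 12**SEQ_LEN
print(LOOKUP[(3, 4, 5)])  # "open": 3 + 4 + 5 = 12, which is 0 mod 12
print(LOOKUP[(1, 2, 4)])  # "incinerate": 1 + 2 + 4 = 7
```

The point the sketch makes is only structural: you can have a short program that mentions the machine, or a long table that mentions only the keys, but not a short table that mentions only the keys.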
It may be even worse if there are tempting local generalizations which turn out to be leaky. Pressing most keys three times in a row will open the box, but there’s a particular key that incinerates the money if you press it just once. You might think you had found a perfect generalization—a locally describable class of sequences that always opened the box—when you had merely failed to visualize all the possible paths of the machine, or failed to value all the side effects.
The machine represents the complexity of the real world. The openness of the box (which is good) and the incinerator (which is bad) represent the thousand shards of desire that make up our terminal values. The keys represent the actions and policies and strategies available to us.
When you consider how many different ways we value outcomes, and how complicated are the paths we take to get there, it’s a wonder that there exists any such thing as helpful ethical advice. (Of which the strangest of all, and yet still helpful, is that “the end does not justify the means.”)
But conversely, the complexity of our actions need not say anything about the complexity of our goals. You often find people who smile wisely, and say, “Well, morality is complicated, you know; female circumcision is right in one culture and wrong in another; it’s not always a bad thing to torture people. How naive you are, how full of need for closure, that you think there are any simple rules.”
You can say, unconditionally and flatly, that killing anyone is a huge dose of negative terminal utility. Yes, even Hitler. That doesn’t mean you shouldn’t shoot Hitler. It means that the net instrumental utility of shooting Hitler carries a giant dose of negative utility from Hitler’s death, and a hugely larger dose of positive utility from all the other lives that would be saved as a consequence.
Many commit the type error that I warned against in Terminal Values and Instrumental Values, and think that if the net consequential expected utility of Hitler’s death is conceded to be positive, then the immediate local terminal utility must also be positive, meaning that the moral principle “Death is always a bad thing” is itself a leaky generalization. But this is double counting, with utilities instead of probabilities; you’re setting up a resonance between the expected utility and the utility, instead of a one-way flow from utility to expected utility.
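A toy calculation (all numbers invented) may make the one-way flow explicit: the terminal term for a death stays negative no matter how the sum comes out, and a positive total gives you no license to go back and flip its sign.

```python
# Invented numbers, only to show the direction of the flow: terminal
# utilities feed into expected utility, never the other way around.

U_DEATH = -1.0            # terminal utility of any single death: flatly negative
LIVES_SAVED = 10_000_000  # hypothetical consequence of shooting Hitler

# Instrumental evaluation: sum the terminal utilities of the foreseen
# consequences. Hitler's death contributes its negative term like any other.
expected_utility = U_DEATH + LIVES_SAVED * (-U_DEATH)

print(expected_utility > 0)  # True: the net consequence is positive...
print(U_DEATH)               # ...but this terminal term never changes sign
```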
Or maybe it’s just the urge toward a one-sided policy debate: the best policy must have no drawbacks.
In my moral philosophy, the local negative utility of Hitler’s death is stable, no matter what happens to the external consequences and hence to the expected utility.
Of course, you can set up a moral argument that it’s an inherently good thing to punish evil people, even with capital punishment for sufficiently evil people. But you can’t carry this moral argument by pointing out that the consequence of shooting a man holding a leveled gun may be to save other lives. This is appealing to the value of life, not appealing to the value of death. If expected utilities are leaky and complicated, it doesn’t mean that utilities must be leaky and complicated as well. They might be! But it would be a separate argument.