A Policy for Biting Bullets

I.

The CFAR Handbook has a really interesting chapter on policy-level decision-making (pages 170-173). It’s excellent, grounds much of this post, and comes with some classic Calvin & Hobbes comics; I recommend it. If you’re too lazy for that, I’ll summarize with a question: What should you do when you’ve made a plan, and then there’s a delay, and you really don’t want to do the thing you’ve planned anymore? The handbook starts with two basic perspectives:

“Look,” says the first perspective. “You’ve got to have follow-through.
You’ve got to be able to keep promises to yourself. If a little thing like a few
hours’ delay is enough to throw you off your game, there’s practically no point
in making plans at all. Sometimes, you have to let past you have the steering
wheel, even when you don’t feel like it anymore, because otherwise you’ll
never finish anything that takes sustained effort or motivation or attention.”

“Look,” says the second perspective. “There’s nothing to be gained from
locking yourself in boxes. Present you has the most information and context;
past you was just guessing at what you would want in this moment. Forcing
yourself to do stuff out of some misguided sense of consistency or guilt or
whatever is how people end up halfway through a law degree they never
actually wanted. You have to be able to update on new information and
adapt to new circumstances.”

Policy-level decision-making is the handbook’s suggested way to thread this needle:

[W]hat policy, if I followed it every time I had to make a decision like this, would strike the right balance? How do I want to trade off between follow-through and following my feelings, or between staying safe and seizing rare opportunities?

It’s obviously more work to come up with a policy than to just make the decision in the moment, but for those cases when you feel torn between the two basic perspectives, policy-level decision-making seems like a good way to resolve the tension.

II.

There is a peculiar manoeuvre in philosophy, as in life, called “biting the bullet”. Biting the bullet in life means accepting and then doing something painful or unpleasant because you don’t think you have any better alternative for getting the thing you want. Want to swim in the ocean, but it’s the middle of winter? You’re going to have to “bite the bullet” and get in even though the water will be freezing cold.

Biting the bullet in philosophy is analogous; it means accepting the weird, unpleasant, and frequently counter-intuitive implications of a theory or argument because the theory or argument is otherwise valuable or believed to be true. If you think that simple utilitarianism is the correct ethical theory, then you have to deal with the transplant problem, where you have the option to kill one random healthy person and use their organs to save five others. Really basic utilitarianism suggests this is a moral necessity, because five lives are more valuable than one life. One way to deal with this apparently appalling conclusion is to “bite the bullet”: accept it, and actually argue that we should kill people for their organs.

Bringing this back to policy-level decision-making: I realized recently that I don’t have a policy for biting bullets, in philosophy or in life.

In life, a policy for biting bullets is probably useful, and I’m sure there’s an important blog post to be written there, but at least personally I don’t feel the lack of a policy too painfully. If there’s a thing I want and something in the way, then it’s a pretty standard (though frequently subconscious) cost-benefit analysis based on how much I want the thing and how much pain or work is in the way. If the analysis comes out right, I’ll “bite the bullet” and do the thing.

Philosophy, however, is a different matter. Not only have I realized that I bite bullets in philosophy somewhat inconsistently; I also notice that this inconsistency is behind many of the times I’ve agonized at length over an argument or philosophical point. I think a policy for biting philosophical bullets would help me be more consistent in my philosophy, and also save me a bit of sanity on occasion.

III.

So what’s a good policy for biting philosophical bullets? As a starting point, let’s copy the handbook and articulate the most basic (and extreme) perspectives:

“Look,” says the first perspective. “Philosophy is fundamentally grounded in our intuitions. You’ve got to be consistent with those, in the same way that any theory of physics has to be consistent with our empirical observations. If a philosophical theory asks you to deny an intuition, then that theory can’t be ultimately true; it might still be a useful approximation, but nothing more. And anyway it’s a slippery slope; if you accept biting bullets as a valid epistemic move, then every theory becomes equally valid because every objection can be ‘bitten’ away.”

“Look,” says the second perspective. “Our intuitions are basically garbage; you can’t expect them to be internally consistent, let alone universally correct. Humans are flawed, complicated creatures mostly built on hard-wired heuristics derived from a million years living on the savanna. A philosophical theory should be free to get rid of as many of these outdated intuitions as it needs to. After all, this is one of the ways we grow as people, by replacing our moral intuitions when persuaded by good arguments.”

Obviously both of these positions are somewhat exaggerated, but they do raise strong points. We don’t want a policy that lets us bite any old bullet, since that would significantly weaken our epistemology, but at the same time we do want to be able to bite some bullets or else we end up held captive by our often-flawed intuitions. But then how do we decide which bullets to bite?

IV.

Instinctively, there are two sides to the question of biting any particular philosophical bullet: the argument, and the intuition. In a sense, the stronger of the two wins; a strong argument countered by a weak intuition suggests biting the bullet (the argument wins), whereas a weak argument faced with a strong intuition suggests the opposite (the intuition wins). This is a nice model, but it only succeeds in pushing the question down a layer: what do we mean by “strong” and “weak”, and how do we compare strengths between such disparate objects as arguments and intuitions? What I really want is for Google’s unit conversion feature to be able to tell me “your intuition for atheism is worth 3.547 teleological arguments”. Alas, real life is somewhat messier than that.

“Strong” and “weak” for an intuition may be hard to pin down precisely in language, but at the very least I have a clear felt sense of what it means for an intuition to be strong or weak, and I suspect this is common. Somewhat surprisingly, it is “strong” and “weak” with respect to arguments that seem to give more trouble. Assuming, of course, that the argument is logically valid (and that the empirical facts are well-specified), what makes a philosophical argument “stronger” seems to boil all the way down to intuitions again: a stronger philosophical argument is backed by more and/or stronger intuitions.
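To make the model concrete, here is a toy sketch of it in code, with entirely invented numbers; it illustrates the structure of the model, not an actual method for weighing philosophical evidence:

```python
# A toy sketch of the "stronger side wins" model, with invented numbers.
# If argument strength is just the aggregated strength of the argument's
# backing intuitions, the whole policy reduces to comparing intuitions.

def argument_strength(backing_intuitions: list[float]) -> float:
    """A stronger argument is backed by more and/or stronger intuitions."""
    return sum(backing_intuitions)

def should_bite_bullet(backing_intuitions: list[float],
                       offended_intuition: float) -> bool:
    """Bite the bullet when the argument outweighs the intuition it offends."""
    return argument_strength(backing_intuitions) > offended_intuition

# The hard cases are exactly the balanced ones:
print(should_bite_bullet([1.0, 1.2, 0.9], offended_intuition=3.2))  # False, barely
```

Notice that all of the interesting work is hidden inside the made-up numbers, which is exactly the trouble described next.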

But if it’s true that argument strength is ultimately just intuition strength, then our policy for biting bullets can be summarized as “choose whichever side has the stronger intuitions”. This defeats the whole purpose of the exercise, since the times I’ve found myself agonizing over biting a bullet were precisely those when the intuitions on both sides were already well-balanced; if there had been a clear winner, I wouldn’t have had to work so hard to choose.

Perhaps this is a fundamental truth, that choosing to bite a bullet (or not) has to be a hard choice by definition. Or perhaps there is some other clever policy for biting bullets that I just haven’t managed to think of today. I’m certainly open to new suggestions.

V.

All of this talk of biting hard things has reminded me of a poem, so I’ll leave you with these two stanzas from Lewis Carroll’s You Are Old, Father William:

“You are old,” said the youth, “and your jaws are too weak
    For anything tougher than suet;
Yet you finished the goose, with the bones and the beak—
    Pray, how did you manage to do it?”

“In my youth,” said his father, “I took to the law,
    And argued each case with my wife;
And the muscular strength, which it gave to my jaw,
    Has lasted the rest of my life.”

Frankenstein Delenda Est

I.

I am terrified by the idea that one day, I will look back on my life, and realize that I helped create a monster. That my actions and my decisions pushed humanity a little further along the path to suffering and ruin. I step back sometimes from the gears of the technology I am creating, from the web of ideas I am promoting, and from the vision of the future that I am chasing, and I wonder if any of it is good.

Of course I play only the tiniest of roles in the world, and there will be no great reckoning for me. I am a drop in the ocean of the many, many others who are also trying to build the future. But still, I am here. I push, and the levers of the world move, however fractionally. Gears turn. Webs are spun. If I push in a different direction, then the future will be different. I must believe that my actions have meaning, because otherwise they have nothing at all.

No, I do not doubt my ability to shape the future; I doubt my ability to choose it well. The world is dense with opportunity, and we sit at the controls of a society with immense potential and awful power. We have at our disposal a library full of blueprints, each one claiming to be better than the last. I would scream, but in this library I simply whisper, to the blueprints: how do you know? How do you know that the future you propose has been authored by The Goddess of Everything Else, and is not another tendril of Moloch sneaking into our world?

Many people claim to know, to have ascended the mountain and to be pronouncing upon their return the commandments of the one true future: There is a way. Where we are going today, that is not the way. But there is a way. Believe in the way.

I hear these people speak and I am overcome with doubt. I think of butterflies, who flap their wings and create Brownian motion, as unfathomable as any hurricane. I think of fungi, whose simple mushrooms can hide a thousand acres of interwoven root. I think of the human brain, a few pounds of soggy meat whose spark eludes us. The weight of complexity is crushing, and any claim to understanding must be counterbalanced by the collected humility of a thousand generations of ignorance.

And on this complexity, we build our civilization. Synthesizing bold new chemicals, organizing the world’s information, and shaping the future through a patchwork mess of incentives, choices, and paths of least resistance. Visions of the future coalesce around politics of the moment, but there is no vision of the future that can account for our own radical invention. Do not doubt that Russell Marker and Bob Taylor did as much to shape today as any president or dictator. The levers we pull are slow, and their lengths are hidden, but some of them will certainly move the world.

And on these levers, we build our civilization. Invisible hands pull the levers that turn the gears that spin the webs that hold us fast, and those invisible hands belong to us. We pronounce our visions of a gleamingly efficient future, accumulating power in our bid to challenge Moloch, never asking whether Moloch is, simply, us. The institutions of the American experiment were shaped by the wisdom of centuries of political philosophy. That they have so readily crumbled is not an indictment of their authors, but of the radical societal changes none of those authors could foresee. Our new society is being thrown together slapdash by a bare handful of engineers more interested in optimizing behaviour than in guiding it, and the resulting institutions are as sociologically destructive as they are economically productive.

And on these institutions, we build our civilization.

II.

Sometimes, I believe that with a little work and a lot of care, humanity might be able to engineer its way out of its current rough patch and forward, into the stable equilibrium of a happy society. Sometimes, if we just run a little faster and work a little harder, we might reach utopia.

There is a pleasant circularity to this dream. Sure, technology has forced disparate parts of our society together in a way that creates polarized echo chambers and threatens to tear society apart. But if we just dream a little bigger we can create new technology to solve that problem. And honestly, we probably can do just that. But a butterfly flaps its wings, and the gears turn, and whatever new technical solution we create will generate a hurricane in some other part of society. Any claims that it won’t must be counterbalanced by the collected humility of a thousand generations of mistakes.

Sometimes, I believe that the future is lying in plain sight, waiting to swallow us when we finally fall. If we just let things take their natural course, then the Amish and the Mennonites and (to a lesser extent) the Mormons will be there with their moral capital and their technological Luddism and their ultimately functional societies to pick up the pieces left by our self-destruction. Natural selection can be awful if you’re on the wrong end of it, but it still ultimately works.

Or maybe, sometimes, it’s all a wash and we’ll stumble along to weirder and weirder futures with their own fractal echoes of our current problems, as in Greg Daniels’s Upload. But I think of the complexity of this path, and I am overcome with doubt.

III.

I am terrified by the idea that one day, I will look back on my life, and realize that I helped create a monster. Not a grand, societal-collapse kind of monster or an elder-god-sucking-the-good-out-of-everything kind of monster. Just a prosaic, everyday, run-of-the-mill, Frankensteinian monster. I step back sometimes from the gears of the technology I am creating, from the web of ideas I am promoting, and from the vision of the future that I am chasing, and I wonder if it’s the right one.

From the grand library of societal blueprints, I have chosen a set. I have spent my life building the gears to make it go, and spinning the webs that hold it together. But I look up from my labour and I see other people building on other blueprints entirely. I see protests, and essays, and argument, and conflict. I am confident in my epistemology, but epistemology brings me only a method of transportation, not a destination.

I am terrified that it is hubris to claim one blueprint as my own. That I am no better than anyone else, coming down from the mountaintop, proclaiming the way. That society will destroy my monster of a future with pitchforks, or that worse, my monster will grow to devour what would have otherwise been a beautiful idyll.

Frankenstein was not the monster; Frankenstein created the monster.

Secret Goals

First off, apologies for the long absence; life has a habit of getting in the way of philosophy. Back to decision-making and game theory.

Now, obviously whenever you make a decision you must have certain goals in mind, and you are trying to make the decision that best fits those goals. If you’re looking at a menu, your goals might be to pick something tasty, but not too expensive, and so on. You can have multiple goals, and they can sometimes conflict, in which case you have to compromise or prioritize. This is all pretty basic stuff.

But what people tend not to realize (or at least, not to think about too much) is that frequently our “goals” are not, in themselves, things we value; we value them because they let us achieve bigger, better goals. And those goals may be in the service of even higher goals. What this means is that all of these intermediate layers of “goals” are really just means that we use so frequently we have abstracted them into something we can think of as inherently valuable. This saves us the mental work of traversing all the way back to the root wellspring of value each time we want to pick food off a menu. The result: yet another set of layers of abstractions!

So what are these root goals we tend not to think about? Are they so-called “life goals” such as raising a family or eventually running your own company? No. Those are still just intermediate abstractions. The real goals are still one more step away, and are almost universally biological in nature. The survival and reproduction of our genetic code, whether through ourselves, our offspring, or our relations. These are our “secret goals”.
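To make the layering concrete, here is a toy sketch in code; the specific goals and the chain between them are invented for illustration:

```python
# A toy sketch of goal layers: each intermediate "goal" is a cached means
# to a higher goal, bottoming out in a biological root. The specific goals
# and their chain here are invented for illustration.

goal_parents = {
    "pick a tasty, cheap dish": "eat well",
    "eat well": "stay healthy",
    "stay healthy": "survival and reproduction",  # the "secret" root goal
}

def root_goal(goal: str) -> str:
    """Traverse the layers of abstraction back to the underlying goal."""
    while goal in goal_parents:
        goal = goal_parents[goal]
    return goal

print(root_goal("pick a tasty, cheap dish"))  # survival and reproduction
```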

So how does this help us understand decision-making? It seems intuitively impossible to understand somebody’s decision if we don’t understand the goal behind it. But when we think exclusively in terms of our shorter-term, abstract “goals”, these are things that change, that we can abandon or reshape to suit our current situation. Thinking of these instead as methods of satisfying our underlying goals (which do not change) provides a much more consistent picture of human decision-making. This consistent picture is one to which we might even be able to apply game theory.

Game Theory

We come, at last, to the final subsection of our “worldbuilding” series. Having touched on biology, culture, and the mind, we now turn back to a slightly more abstract topic: game theory. More generally, we are going to be looking at how people make decisions, why they make the decisions they do, and how these decisions tend to play out over the long term.

This topic draws on everything else we’ve covered in worldbuilding. In hindsight, understanding human decision-making was really the goal of this whole section; I just didn’t realize it until now. I’m sure there’s something very meta about that.

Game theory is traditionally concerned with the more mathematical study of decisions between rational decision-makers, but it has also bled over into the fuzzier realms of psychology and philosophy. Since humans are (clearly) not always rational, it is on this fuzzy boundary that we will spend most of our time.
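For a taste of the mathematical side before we wade into the fuzz, here is a minimal sketch of the most standard textbook example, the Prisoner’s Dilemma; the payoff numbers are the usual conventions, not anything specific to this series:

```python
# A minimal sketch of a two-player game, using the classic Prisoner's
# Dilemma payoffs (standard textbook numbers, chosen for illustration).

# payoffs[(my_move, their_move)] = (my_payoff, their_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move: str) -> str:
    """A 'rational' player's best move, holding the other player's move fixed."""
    return max(("cooperate", "defect"),
               key=lambda my_move: payoffs[(my_move, their_move)][0])

# Defecting is the best response no matter what the other player does,
# even though mutual cooperation would leave both players better off.
print(best_response("cooperate"), best_response("defect"))  # defect defect
```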

The wiki article on game theory is good, but fairly math-heavy. Feel free to skim.