A Policy for Biting Bullets

I.

The CFAR Handbook has a really interesting chapter on policy-level decision-making (pages 170-173). It’s excellent, grounds much of this post, and comes with some classic Calvin & Hobbes comics; I recommend it. If you’re too lazy for that, I’ll summarize with a question: What should you do when you’ve made a plan, and then there’s a delay, and you really don’t want to do the thing you’ve planned anymore? The handbook starts with two basic perspectives:

“Look,” says the first perspective. “You’ve got to have follow-through.
You’ve got to be able to keep promises to yourself. If a little thing like a few
hours’ delay is enough to throw you off your game, there’s practically no point
in making plans at all. Sometimes, you have to let past you have the steering
wheel, even when you don’t feel like it anymore, because otherwise you’ll
never finish anything that takes sustained effort or motivation or attention.”

“Look,” says the second perspective. “There’s nothing to be gained from
locking yourself in boxes. Present you has the most information and context;
past you was just guessing at what you would want in this moment. Forcing
yourself to do stuff out of some misguided sense of consistency or guilt or
whatever is how people end up halfway through a law degree they never
actually wanted. You have to be able to update on new information and
adapt to new circumstances.”

Policy-level decision-making is the handbook’s suggested way to thread this needle:

[W]hat policy, if I followed it every time I had to make a decision like this, would strike the right balance? How do I want to trade off between follow-through and following my feelings, or between staying safe and seizing rare opportunities?

It’s obviously more work to come up with a policy than to just make the decision in the moment, but for those cases when you feel torn between the two basic perspectives, policy-level decision-making seems like a good way to resolve the tension.

II.

There is a peculiar manoeuvre in philosophy, as in life, called “biting the bullet”. Biting the bullet in life is to accept and then do something painful or unpleasant because you don’t think you have any better alternatives to get the thing you want. Want to swim in the ocean, but it’s the middle of winter? You’re going to have to “bite the bullet” and get in even though the water will be freezing cold.

Biting the bullet in philosophy is analogous; it means to accept weird, unpleasant, and frequently counter-intuitive implications of a theory or argument because the theory or argument is otherwise valuable or believed to be true. If you think that simple utilitarianism is the correct ethical theory, then you have to deal with the transplant problem, where you have the option to kill one random healthy person and use their organs to save five others. Really basic utilitarianism suggests this is a moral necessity, because five lives are more valuable than one life. One way to deal with this apparently appalling rule is to “bite the bullet”: accept and actually argue that we should kill people for their organs.

Bringing this back to policy-level decision-making: I realized recently that I don’t have a policy for biting bullets, in philosophy or in life.

In life, a policy for biting bullets is probably useful, and I’m sure there’s an important blog post to be written there, but at least personally I don’t feel the lack of a policy too painfully. If there’s a thing I want and something in the way, then it’s a pretty standard (though frequently subconscious) cost-benefit analysis based on how much I want the thing and how much pain or work is in the way. If the analysis comes out right, I’ll “bite the bullet” and do the thing.

Philosophy, however, is a different matter. Not only have I realized that I bite bullets in philosophy somewhat inconsistently, I’ve also noticed that this inconsistency has frequently led me to agonize at length over an argument or philosophical point. I think a policy for biting philosophical bullets would help me be more consistent in my philosophy, and also save me a bit of sanity on occasion.

III.

So what’s a good policy for biting philosophical bullets? As a starting point, let’s copy the handbook and articulate the most basic (and extreme) perspectives:

“Look,” says the first perspective. “Philosophy is fundamentally grounded in our intuitions. You’ve got to be consistent with those, in the same way that any theory of physics has to be consistent with our empirical observations. If a philosophical theory asks you to deny an intuition, then that theory can’t be ultimately true; it might still be a useful approximation, but nothing more. And anyway it’s a slippery slope; if you accept biting bullets as a valid epistemic move, then every theory becomes equally valid because every objection can be ‘bitten’ away.”

“Look,” says the second perspective. “Our intuitions are basically garbage; you can’t expect them to be internally consistent, let alone universally correct. Humans are flawed, complicated creatures mostly built on hard-wired heuristics derived from a million years living on the savanna. A philosophical theory should be free to get rid of as many of these outdated intuitions as it needs to. After all, this is one of the ways we grow as people, by replacing our moral intuitions when persuaded by good arguments.”

Obviously both of these positions are somewhat exaggerated, but they do raise strong points. We don’t want a policy that lets us bite any old bullet, since that would significantly weaken our epistemology, but at the same time we do want to be able to bite some bullets or else we end up held captive by our often-flawed intuitions. But then how do we decide which bullets to bite?

IV.

Instinctively, there are two sides to the question of biting any particular philosophical bullet: the argument, and the intuition. In a sense, the stronger of the two wins; a strong argument countered by a weak intuition suggests biting the bullet (the argument wins), whereas a weak argument faced with a strong intuition suggests the opposite (the intuition wins). This is a nice model, but it only succeeds in pushing the question down a layer: what do we mean by “strong” and “weak”, and how do we compare strengths between such disparate objects as arguments and intuitions? What I really want is for Google’s unit conversion feature to be able to tell me “your intuition for atheism is worth 3.547 teleological arguments”. Alas, real life is somewhat messier than that.
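For concreteness, the “stronger side wins” model above can be written out as a toy function. To be clear, the numeric strengths here are entirely made up for illustration; nothing in real life hands you intuitions with numbers attached, which is exactly the problem discussed below.

```python
# Toy sketch of the "stronger side wins" policy for biting bullets.
# The strength values are hypothetical placeholders, not real units.

def should_bite_bullet(argument_strength: float, intuition_strength: float) -> bool:
    """Bite the bullet when the argument outweighs the opposing intuition."""
    return argument_strength > intuition_strength

# A strong argument against a weak intuition: bite the bullet.
print(should_bite_bullet(argument_strength=0.9, intuition_strength=0.2))  # True

# A weak argument against a strong intuition: keep the intuition.
print(should_bite_bullet(argument_strength=0.3, intuition_strength=0.8))  # False
```

The sketch makes the model’s emptiness visible: all the actual work is hidden in producing those two numbers in the first place.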

“Strong” and “weak” for an intuition may be hard to pin down precisely in language, but at the very least I have a clear felt sense of what it means for an intuition to be strong or weak, and I suspect this is common. Somewhat surprisingly, it is “strong” and “weak” with respect to arguments that seem to give more trouble. Assuming, of course, that the argument is logically valid (and that the empirical facts are well-specified), what makes a philosophical argument “stronger” seems to boil all the way down to intuitions again: a stronger philosophical argument is backed by more and/or stronger intuitions.

But if it’s true that argument strength is ultimately just intuition strength, then our policy for biting bullets can be summarized as “choose whichever side has the stronger intuitions”. This defeats the whole purpose of the exercise, since the previous times I’ve found myself agonizing over biting a bullet were precisely those when the intuitions on both sides were already well-balanced; if there had been a clear winner, I wouldn’t have had to work so hard to choose.

Perhaps this is a fundamental truth, that choosing to bite a bullet (or not) has to be a hard choice by definition. Or perhaps there is some other clever policy for biting bullets that I just haven’t managed to think of today. I’m certainly open to new suggestions.

V.

All of this talk of biting hard things has reminded me of a poem, so I’ll leave you with these two stanzas from Lewis Carroll’s *You Are Old, Father William*:

“You are old,” said the youth, “and your jaws are too weak
    For anything tougher than suet;
Yet you finished the goose, with the bones and the beak—
    Pray, how did you manage to do it?”

“In my youth,” said his father, “I took to the law,
    And argued each case with my wife;
And the muscular strength, which it gave to my jaw,
    Has lasted the rest of my life.”

Less Wrong

In the time since my last post, while trying to solve interesting problems and wandering around the web reading, I stumbled upon two related websites: Less Wrong and HPMOR.

As it turns out, while I do not agree word-for-word with everything they promote, it’s *really* darn close. Close enough that there isn’t much point in writing the remainder of this blog. The occasional tidbit might still demand a post, if there’s something I strongly disagree with or some factual or philosophical matter falls outside the scope of Less Wrong’s mission. However, if you want to know what I think on some matter, start with the Less Wrong consensus. The odds are pretty good 🙂

As for what you should do instead of reading my blog now that I’m no longer even keeping up the pretence of intending to post: read HPMOR and Less Wrong. Just go read them, right now, you’ll thank me.


For those curious what I *do* disagree with them on, it is mostly quibbles over philosophical axioms (moral, and some metaphysical/epistemic). These don’t affect my models-of-the-world so much as how I respond to those models, and what my preferences are.

Game Theory

We come, at last, to the final subsection of our “worldbuilding” series. Having touched on biology, culture, and the mind, we now turn back to a slightly more abstract topic: game theory. More generally, we are going to be looking at how people make decisions, why they make the decisions they do, and how these decisions tend to play out over the long term.

This topic draws on everything else we’ve covered in worldbuilding. In hindsight, understanding human decision-making was really the goal of this whole section; I just didn’t realize it until now. I’m sure there’s something very meta about that.

Game theory is traditionally concerned with the more mathematical study of decisions between rational decision-makers, but it’s also bled over into the fuzzier realms of psychology and philosophy. Since humans are (clearly) not always rational, it is this fuzzy boundary where we will spend most of our time.

The Wikipedia article on game theory is good, but fairly math-heavy. Feel free to skim.