The Axiological Treadmill

The obvious reason that Moloch is the enemy is that it destroys everything we value in the name of competition and survival. But this is missing the bigger picture. We value what we value because, in our ancestral environment, those tended to be the things that helped us with competition and survival. If the things that help us compete and survive end up changing, then evolution will ensure that the things we value change as well.

To borrow a metaphor: Elua cheats. The hedonic treadmill has nothing on the axiological treadmill.

Consider a thought experiment. In Meditations on Moloch, Scott Alexander dreams up a dictatorless dystopia:

Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced.

So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on. Every single citizen hates the system, but for lack of a good coordination mechanism it endures. From a god’s-eye-view, we can optimize the system to “everyone agrees to stop doing this at once”, but no one within the system is able to effect the transition without great risk to themselves.
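Before going further, it may help to state the trap mechanically. The following is a minimal payoff sketch in Python; the numbers are my own invented illustrations, not anything from Scott’s essay. It shows that while everyone else enforces the rules, complying is each citizen’s best response, even though everyone stopping at once would beat everyone complying.

```python
# Toy payoff model of the dictatorless dystopia. The numbers are invented;
# all that matters is that death is far worse than the daily shocks.
SHOCK_COST = -8      # eight hours of electric shocks, every day
DEATH_COST = -1000   # everyone unites to kill you

def payoff(my_move: str, others_enforce: bool) -> int:
    """Payoff to a single citizen, given what everyone else is doing."""
    if my_move == "shock":
        return SHOCK_COST
    # Refusing is only fatal while the rest of the country still enforces.
    return DEATH_COST if others_enforce else 0

# While everyone else enforces, complying is each citizen's best response:
assert payoff("shock", others_enforce=True) > payoff("refuse", others_enforce=True)

# ...yet if everyone could stop at once, everyone would be better off:
assert payoff("refuse", others_enforce=False) > payoff("shock", others_enforce=True)
```

No individual can move the system from the first equilibrium to the second, because the choice any one citizen actually faces is -8 versus -1000, never -8 versus 0.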

Even if this system came into being ex nihilo it probably wouldn’t be stable in reality; a population that spends eight hours a day receiving strong shocks isn’t going to be able to feed itself, or reproduce. But assume for a moment that this system starts out economically and biologically stable (that is, people can still eat and reproduce at the rate of replacement despite the electric shocks, and there are no outside countries ready to invade). What do we expect to happen over the long run?

Well, obviously there’s a strong evolutionary pressure to become tolerant of electric shocks. People who tolerate the shocks well will, on average, do better than people who don’t. However, there’s another, more subtle pressure at play: the pressure to ensure you shock yourself. After all, if you forget to shock yourself, or choose not to, then you are immediately killed. So the people in this country will slowly evolve reward and motivational systems such that, from the inside, it feels like they want to shock themselves, in the same way (though maybe not to the same degree) that they want to eat. Shocking themselves every day becomes an intrinsic value to them. Eventually, it’s no longer a dystopia at all.
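To make that selection pressure concrete, here is a toy simulation; a minimal sketch, with every parameter invented for illustration (population size, skip rate, mutation noise; none of it is a claim about real evolutionary dynamics). Each citizen carries a heritable “shock drive”; the weaker the drive, the more likely they are to skip a day of shocks and be killed, so the mean drive ratchets upward generation after generation.

```python
import random

random.seed(0)

# All parameters below are invented for illustration.
POP_SIZE = 1000
GENERATIONS = 500
BASE_SKIP = 0.5   # chance that a citizen with zero drive skips a day (and dies)
MUTATION = 0.05   # noise added to each child's inherited drive

def next_generation(population):
    # Selection: the probability of fatally skipping falls as the drive rises.
    survivors = [d for d in population if random.random() > BASE_SKIP * (1 - d)]
    # Reproduction: survivors refill the population; each child inherits its
    # parent's drive plus a small mutation, clamped to [0, 1].
    return [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, MUTATION)))
        for _ in range(POP_SIZE)
    ]

population = [random.random() * 0.1 for _ in range(POP_SIZE)]  # weak drives at first
print(f"mean drive, generation 0: {sum(population) / POP_SIZE:.2f}")
for _ in range(GENERATIONS):
    population = next_generation(population)
print(f"mean drive, generation {GENERATIONS}: {sum(population) / POP_SIZE:.2f}")
```

The mean drive climbs steadily, because in every generation the citizens who least want to shock themselves are disproportionately the ones who get killed. That is the axiological treadmill in miniature: the value shifts to fit the incentive.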

They would be aghast at a society like ours, where Moloch has destroyed the value of receiving electrical shocks, all in the name of more perfect competition.

The Great Project

The great project of humanity, and in fact the great project of any group of self-aware creatures which value their own existence, is in three parts:

While survival is a fragile thing, and we invent new existential risks every day (e.g. global nuclear war), humanity is now the dominant living species in our sphere of existence. We are surviving.

Now, we must tackle Moloch.

A Brief Chat on World Government

[This is the transcript of a chat conversation I had with another member of my local rationalist meet-up, on the topics of Moloch, world government, and colonization. Lightly edited for clarity, spelling, etc. and shared with their permission.]

Me: Here are some thoughts on Moloch. Moloch basically guarantees that anybody who can figure out how to successfully convert other values into economic value will out-compete the rest. So in the end, we are the paperclip maximizers, except our paperclips are dollar bills.

Scott proposes that to defeat Moloch we install a gardener, specifically a super-intelligent AI. But if you don’t think that’s going to happen, a world government seems like the next best thing. However, if we escape Earth before that happens, speed-of-light limitations will forever fragment us into competing factions that are impossible to garden. Therefore we should forbid any attempts to colonize Mars or other planets until we have a world government and the technology to effectively manage such colonies under that government.

Them: The superorganisms in his parable only function because of… external competitive pressures. If cells didn’t need to band together to survive, they wouldn’t. If governments don’t have to fend off foreign governments they will accumulate corruption and dysfunctions.

Sort of related, I’m not persuaded by the conclusion to his parable. Won’t superintelligent AIs be subject to the same natural selective pressures as any other entity? What happens when our benevolent gardener encounters the expanding sphere of computronium from five galaxies over?

Me: Cells were surviving just fine without banding together. It was just that cells which banded together reproduced and consumed resources more effectively than those which didn’t. Similarly, I think a well-constructed world government could survive just fine without competitive pressure. We haven’t necessarily found the form of that government yet, but liberal democracy seems like a decent first step.

Regarding competitive pressure on AI, he deals with that offhand by assuming that accelerating self-improvement gives an unbreakable first-mover advantage. I don’t think that’s actually true, but then I’m much less bullish on super-intelligent AI in general.

Them: It would “survive,” but we don’t want a surviving government; we want a competent, benevolent one. My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.

Me: That produces a dismal view of the universe. We are given a Sophie’s Choice of either tiling the universe in economicium in order to compete and survive, or instantiating a global gardener which inherently tends towards dystopic dysfunction.

My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.

This is certainly mostly true, but I’m not yet convinced it’s necessarily true.

competitive pressures

I think this in particular is too narrow. Hunter-gatherer bands were organizations that stayed relatively “functional”, often not due to competitive pressures with other bands, but due to pure environmental survival pressures. We probably don’t want a government that stays functional due to environmental survival pressures either, but I’m generalizing to an intuition that there are other kinds of pressure.

Them: There are other kinds of pressure, but you better be damn sure you’ve got them figured out before you quash all rivals.

Me: 💯

Them: And to be precise, yeah, there’s a second thing keeping organizations intact, and that’s the floor imposed by “so incompetent they self-destruct.” But I think they degrade to the level of the floor, at which point they are no longer robust enough to survive two crises taking place at once, so they collapse anyway.

Me: Hmm, so it becomes impossible to instantiate a long-term stable gardener of any kind, and we’re stuck tiling the universe in economicium regardless.

Them: Well I think it might be possible (in the short term at least), but you have to be cognizant of the risks before you assume removing competition will make things better. So when I imagine a one-world-government, it’s more like a coordinating body above a collection of smaller states locked in fierce competition (hopefully just economic, cultural & athletic).

Me: At the risk of clarifying something which is already clear: I was never arguing that we are ready for world government now, or should work towards that soon; I was just saying there are some things we shouldn’t do until we have a good world government. We should make sure we can garden what we have before we go buying more land.

Them: Hmm, okay, I think that’s some important nuance I was overlooking.

Me: Though perhaps that is an inherently useless suggestion, since the coordination required to not buy more land is… a global gardener. Otherwise there’s competitive advantage in getting to more land first.

Them: So it’s a fair point. I assume that any pan-global body will not be well-designed, since it won’t be subject to competitive pressures. But it’s true that you might want to solve that problem before you start propagating your social structures through the universe.

Me: I’m now imagining the parallel argument playing out in Europe just post-Columbus. “We shouldn’t colonize North America until we have a well-gardened Europe”. That highlights the absurdity of it rather well.

Changes in Reality

[Some short thoughts I just wanted to get out of my brain; bullet-points instead of well-structured prose. This is entirely random speculation.]

  • Social systems (laws, customs, memes) are subject to evolutionary pressure from the dynamics of reality; when reality changes, existing social systems are typically no longer in equilibrium and have to evolve, or collapse and be rebuilt. Consider for example the invention of the birth control pill and the resulting impact on family structure, gender relations, etc. Pre-pill social customs around marriage and family were no longer in equilibrium in a world with reliable female birth control, and so society shifted to a new set of customs.
  • “Change in reality” largely means economic and technological change. New wealth and new capabilities.
  • “Change in reality” has been accelerating for a long time as new technologies and discoveries unlock new economic prosperity which enables more discoveries, in an explosive feedback loop. Some argue that technology/science have slowed down a lot recently, but I think that’s mostly because our best and brightest are too busy extracting economic value from our recent innovations (computers and, separately, the internet). Once that bounty has been consumed, more general technological progress will resume its previous course.
  • There is a natural limit on how fast social systems can evolve. Humans can adapt to living under radically different memeplexes, but not instantly, and somebody has to invent those memes first. When reality changes slowly this is fine, as it leaves plenty of time for a multiplicity of experimental memetic shifts in different groups, letting the best adaptation dominate with high probability.
  • At some point in the future (possibly soon?) reality will start changing faster than our social systems can adapt. Our existing laws, customs, memes, and government will be out of equilibrium, but we will not have enough time to converge on a new social system before reality changes again. Society will fragment and human culture will undergo an intense period of adaptive radiation.
  • The countervailing force is technology’s ability to connect us (the “global village”) and equivalently the law of cultural proximity.

A Policy for Biting Bullets

I.

The CFAR Handbook has a really interesting chapter on policy-level decision-making (pages 170-173). It’s excellent, grounds much of this post, and comes with some classic Calvin & Hobbes comics; I recommend it. If you’re too lazy for that, I’ll summarize with a question: What should you do when you’ve made a plan, and then there’s a delay, and you really don’t want to do the thing you’ve planned anymore? The handbook starts with two basic perspectives:

“Look,” says the first perspective. “You’ve got to have follow-through. You’ve got to be able to keep promises to yourself. If a little thing like a few hours’ delay is enough to throw you off your game, there’s practically no point in making plans at all. Sometimes, you have to let past you have the steering wheel, even when you don’t feel like it anymore, because otherwise you’ll never finish anything that takes sustained effort or motivation or attention.”

“Look,” says the second perspective. “There’s nothing to be gained from locking yourself in boxes. Present you has the most information and context; past you was just guessing at what you would want in this moment. Forcing yourself to do stuff out of some misguided sense of consistency or guilt or whatever is how people end up halfway through a law degree they never actually wanted. You have to be able to update on new information and adapt to new circumstances.”

Policy-level decision-making is the handbook’s suggested way to thread this needle:

[W]hat policy, if I followed it every time I had to make a decision like this, would strike the right balance? How do I want to trade off between follow-through and following my feelings, or between staying safe and seizing rare opportunities?

It’s obviously more work to come up with a policy than to just make the decision in the moment, but for those cases when you feel torn between the two basic perspectives, policy-level decision-making seems like a good way to resolve the tension.

II.

There is a peculiar manoeuvre in philosophy, as in life, called “biting the bullet”. Biting the bullet in life means accepting and then doing something painful or unpleasant because you don’t think you have a better way to get the thing you want. Want to swim in the ocean, but it’s the middle of winter? You’re going to have to “bite the bullet” and get in even though the water will be freezing cold.

Biting the bullet in philosophy is analogous; it means accepting weird, unpleasant, and frequently counter-intuitive implications of a theory or argument because the theory or argument is otherwise valuable or believed to be true. If you think that simple utilitarianism is the correct ethical theory, then you have to deal with the transplant problem, where you have the option to kill one random healthy person and use their organs to save five others. Really basic utilitarianism suggests this is a moral necessity, because five lives are more valuable than one life. One way to deal with this apparently appalling conclusion is to “bite the bullet”: to accept, and actually argue, that we should kill people for their organs.

Bringing this back to policy-level decision-making: I realized recently that I don’t have a policy for biting bullets, in philosophy or in life.

In life, a policy for biting bullets is probably useful, and I’m sure there’s an important blog post to be written there, but at least personally I don’t feel the lack of a policy too painfully. If there’s a thing I want and something in the way, then it’s a pretty standard (though frequently subconscious) cost-benefit analysis based on how much I want the thing and how much pain or work is in the way. If the analysis comes out right, I’ll “bite the bullet” and do the thing.

Philosophy, however, is a different matter. Not only have I realized that I bite philosophical bullets somewhat inconsistently, I’ve also noticed that this inconsistency lies behind many of the occasions on which I’ve agonized at length over an argument or philosophical point. I think a policy for biting philosophical bullets would help me be more consistent in my philosophy, and also save me a bit of sanity on occasion.

III.

So what’s a good policy for biting philosophical bullets? As a starting point, let’s copy the handbook and articulate the most basic (and extreme) perspectives:

“Look,” says the first perspective. “Philosophy is fundamentally grounded in our intuitions. You’ve got to be consistent with those, in the same way that any theory of physics has to be consistent with our empirical observations. If a philosophical theory asks you to deny an intuition, then that theory can’t be ultimately true; it might still be a useful approximation, but nothing more. And anyway it’s a slippery slope; if you accept biting bullets as a valid epistemic move, then every theory becomes equally valid because every objection can be ‘bitten’ away.”

“Look,” says the second perspective. “Our intuitions are basically garbage; you can’t expect them to be internally consistent, let alone universally correct. Humans are flawed, complicated creatures mostly built on hard-wired heuristics derived from a million years living on the savanna. A philosophical theory should be free to get rid of as many of these outdated intuitions as it needs to. After all, this is one of the ways we grow as people, by replacing our moral intuitions when persuaded by good arguments.”

Obviously both of these positions are somewhat exaggerated, but they do raise strong points. We don’t want a policy that lets us bite any old bullet, since that would significantly weaken our epistemology, but at the same time we do want to be able to bite some bullets or else we end up held captive by our often-flawed intuitions. But then how do we decide which bullets to bite?

IV.

Instinctively, there are two sides to the question of biting any particular philosophical bullet: the argument, and the intuition. In a sense, the stronger of the two wins; a strong argument countered by a weak intuition suggests biting the bullet (the argument wins), whereas a weak argument faced with a strong intuition suggests the opposite (the intuition wins). This is a nice model, but only succeeds in pushing the question down a layer: what do we mean by “strong” and “weak”, and how do we compare strengths between such disparate objects as arguments and intuitions? What I really want is Google’s unit conversion feature to be able to tell me “your intuition for atheism is worth 3.547 teleological arguments”. Alas, real life is somewhat messier than that.
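To see how little the model buys us, here is a deliberately naive sketch of the “stronger side wins” rule. The numeric strengths are pure invention, and the model gives no way to measure them, which is exactly the problem.

```python
def should_bite_bullet(argument_strength: float, intuition_strength: float) -> bool:
    """The 'stronger side wins' model: bite iff the argument outweighs the intuition."""
    return argument_strength > intuition_strength

# With invented numbers, the rule is trivial to apply...
print(should_bite_bullet(argument_strength=0.9, intuition_strength=0.2))   # True: bite
# ...but the agonizing cases are precisely the ones near the boundary,
# where nothing tells us whether 0.51 or 0.49 is the right assignment.
print(should_bite_bullet(argument_strength=0.51, intuition_strength=0.49)) # True, barely
```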

“Strong” and “weak” for an intuition may be hard to pin down precisely in language, but at the very least I have a clear felt sense of what it means for an intuition to be strong or weak, and I suspect this is common. Somewhat surprisingly, it is “strong” and “weak” as applied to arguments that seem to give more trouble. Assuming of course that the argument is logically valid (and that the empirical facts are well-specified), what makes a philosophical argument “stronger” seems to boil all the way down to intuitions again: a stronger philosophical argument is backed by more and/or stronger intuitions.

But if it’s true that argument strength is ultimately just intuition strength, then our policy for biting bullets can be summarized as “choose whichever side has the stronger intuitions”. This defeats the whole purpose of the exercise, since the times I’ve found myself agonizing over biting a bullet were precisely those where the intuitions on both sides were well-balanced; if there had been a clear winner, I wouldn’t have had to work so hard to choose.

Perhaps this is a fundamental truth, that choosing to bite a bullet (or not) has to be a hard choice by definition. Or perhaps there is some other clever policy for biting bullets that I just haven’t managed to think of today. I’m certainly open to new suggestions.

V.

All of this talk of biting hard things has reminded me of a poem, so I’ll leave you with these two stanzas from Lewis Carroll’s You Are Old, Father William:

“You are old,” said the youth, “and your jaws are too weak
    For anything tougher than suet;
Yet you finished the goose, with the bones and the beak—
    Pray, how did you manage to do it?”

“In my youth,” said his father, “I took to the law,
    And argued each case with my wife;
And the muscular strength, which it gave to my jaw,
    Has lasted the rest of my life.”

An Exercise in Pessimism and Paranoia

When I consider the world at large, there are three interrelated futures which terrify me.

Fear #1 is the culture war. Per the law of cultural proximity and musical outgroups, I expect this conflict to get worse in the near future as battle lines are more firmly drawn, and neutrality becomes increasingly impossible (I gave a few possibilities at the end of Musical Outgroups, but I’m now leaning more firmly towards “the smaller tribes being squeezed out of existence between dominant Blue and Red cultural forces”). We can already see this happening in recent events like protesters threatening passersby they assume are neutral.

Fear #2 is The Second American Civil War. David Shor makes a compelling case that post-Biden, the Republican party will end up with a multi-term lock on the presidency and the Senate. A government consistently elected by a violently-hated minority (see fear #1) seems like a recipe for disaster. We’ve only had one term of Trump so far, and have already witnessed renewed talk of Californian secession, and protester-led self-governing zones springing up and then fading away.

Fear #3 is World War 3. (Huh; fear #2 is the second civil war and fear #3 is the third world war. That’s numerically convenient.) With America in turmoil, and civil war as a possible future, the age of Pax Americana (general world peace through American military dominance) has started to draw to a close. China and Russia are already starting to flex their muscles by snipping off bits of territory. They’re currently relying more on America being distracted than on an actual power shift, but that could change very rapidly if America descends into a genuine constitutional crisis or civil war.

I sincerely hope this is just my imagination running away with me, and none of this comes true.