[This is the transcript of a chat conversation I had with another member of my local rationalist meet-up, on the topics of Moloch, world government, and colonization. Lightly edited for clarity, spelling, etc. and shared with their permission.]
Me: Here are some thoughts on Moloch. Moloch basically guarantees that anybody who can figure out how to successfully convert other values into economic value will out-compete the rest. So in the end, we are the paperclip maximizers, except our paperclips are dollar bills.
Scott proposes that to defeat Moloch we install a gardener, specifically a super-intelligent AI. But if you don’t think that’s going to happen, a world government seems like the next best thing. However, if we escape Earth before that happens, speed-of-light limitations will forever fragment us into competing factions that are impossible to garden. Therefore we should forbid any attempts to colonize Mars or other planets until we have a world government and the technology to effectively manage such colonies under that government.
Them: The superorganisms in his parable only function because of… external competitive pressures. If cells didn’t need to band together to survive, they wouldn’t. If governments don’t have to fend off foreign governments, they will accumulate corruption and dysfunction.
Sort of related, I’m not persuaded by the conclusion to his parable. Won’t superintelligent AIs be subject to the same natural selective pressures as any other entity? What happens when our benevolent gardener encounters the expanding sphere of computronium from five galaxies over?
Me: Cells were surviving just fine without banding together. It was just that cells which banded together reproduced and consumed resources more effectively than those which didn’t. Similarly, I think a well-constructed world government could survive just fine without competitive pressure. We haven’t necessarily found the form of that government yet, but liberal democracy seems like a decent first step.
Regarding competitive pressure on AI, he deals with that offhand by assuming that accelerating self-improvement gives an unbreakable first-mover advantage. I don’t think that’s actually true, but then I’m much less bullish on super-intelligent AI in general.
Them: It would “survive,” but we don’t want a surviving government, we want a competent, benevolent one. My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.
Me: That produces a dismal view of the universe. We are given a Sophie’s Choice of either tiling the universe in economicium in order to compete and survive, or instantiating a global gardener which inherently tends towards dystopic dysfunction.
“My read on large organizations in general is that they naturally tend towards dysfunction, and it’s only competitive pressures that keep them functional.”
This is certainly mostly true, but I’m not yet convinced it’s necessarily true.
I think this in particular is too narrow. Hunter-gatherer bands were organizations that stayed relatively “functional”, often not due to competitive pressures with other bands, but due to pure environmental survival pressures. We probably don’t want a government that stays functional due to environmental survival pressures either, but I’m generalizing to an intuition that there are other kinds of pressure.
Them: There are other kinds of pressure, but you better be damn sure you’ve got them figured out before you quash all rivals.
Them: And to be precise, yeah, there’s a second thing keeping organizations intact, and that’s the floor imposed by “so incompetent they self-destruct.” But I think they degrade to the level of the floor, at which point they are no longer robust enough to survive two crises taking place at once, so they collapse anyway.
Me: Hmm, so it becomes impossible to instantiate a long-term stable gardener of any kind, and we’re stuck tiling the universe in economicium regardless.
Them: Well I think it might be possible (in the short term at least), but you have to be cognizant of the risks before you assume removing competition will make things better. So when I imagine a one-world-government, it’s more like a coordinating body above a collection of smaller states locked in fierce competition (hopefully just economic, cultural & athletic).
Me: At the risk of clarifying something which is already clear: I was never arguing that we are ready for world government now, or should work towards that soon; I was just saying there are some things we shouldn’t do until we have a good world government. We should make sure we can garden what we have before we go buying more land.
Them: Hmm, okay, I think that’s some important nuance I was overlooking.
Me: Though perhaps that is an inherently useless suggestion, since the coordination required to not buy more land is… a global gardener. Otherwise there’s competitive advantage in getting to more land first.
Them: So it’s a fair point. I assume that any pan-global body will not be well-designed, since it won’t be subject to competitive pressures. But it’s true that you might want to solve that problem before you start propagating your social structures through the universe.
Me: I’m now imagining the parallel argument playing out in Europe just post-Columbus. “We shouldn’t colonize North America until we have a well-gardened Europe”. That highlights the absurdity of it rather well.