Where Do We Go From Here?

Previously: Frankenstein Delenda Est | Roll for Sanity | The Great Project
Or perhaps: Brexit, Trump, and Capital in the 21st Century | An Exercise in Pessimism and Paranoia



Joe Biden will in all likelihood be the next American President come January. You can practically feel left-leaning elites the world over unclench their weary paranoia. Much remains to be dealt with as the last few votes are counted and legal challenges get addressed by the courts, but the climax is past. The denouement, of course, will last decades. It always does.


The pandemic rages on. More than a million people are dead, and thousands more are dying every day. Imagine a medium-sized city, slowly crumbling as every last inhabitant dies over the course of a single year. Only the bones remain. And yet we adapt: the number of deaths per thousand cases continues to fall. The world’s most populous country, more than a billion people, has reported only a single COVID-19 death since April. The rest of the world is slowly learning, through trial and terrible error, which parts of the economy can be reopened or adapted, and which cannot.


Technology rushes on, heedless of the rest of the world. Competitive satellite internet. Virtual reality. Self-driving cars. Newer, faster, stronger, better. The pope is praying for friendly AI.

I just finished re-reading A Canticle for Leibowitz. [spoilers follow] A monastic order tries to preserve human knowledge for hundreds of years after a nuclear apocalypse. They succeed, civilization is reborn, and their reward is a second nuclear apocalypse, far worse. Humanity is not to be trusted with great power, nor great responsibility.



America is increasingly divided. Biden may have won, but those hoping for a repudiation of Trump must be bitterly disappointed. The long future is violently uncertain. Healing? Secession? Civil war?

Beliefs are not isolated things. The human brain is remarkably adept at ignoring inconsistencies, but doesn’t necessarily resolve them correctly, even when forced to. Whichever side you take in this battle, your enemies believe in gravity. They believe in food, and air, and airplanes which fly. Their epistemology is intact.

Where people on both sides go wrong is that their priors create a self-reinforcing web of false beliefs that are too far removed from immediate empirical evidence to be emotionally falsified. The media could report that hotdogs cause COVID-19. Whether you believe that hotdogs cause COVID-19 has nothing to do with your beliefs about hotdogs, or COVID-19. The only way those beliefs could actually be related in your mind is if you see someone eat a hotdog, and then become deathly ill moments later. Instead, whether you believe in hotdogs-causing-COVID-19 has everything to do with whether you think the media lies. Fake news?

The next time the media reports something, you remember. Didn’t they run that hotdogs-cause-COVID-19 story? Your priors have shifted. The gyre widens.
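This prior-shifting is just Bayes’ rule applied to trust in the source rather than to the claim itself. A toy sketch (every number here is invented for illustration):

```python
def update_trust(prior_honest, p_story_if_honest, p_story_if_dishonest):
    """Bayes' rule: revise the probability that a source reports honestly,
    given one story. All probabilities here are invented for illustration."""
    p_story = (prior_honest * p_story_if_honest
               + (1 - prior_honest) * p_story_if_dishonest)
    return prior_honest * p_story_if_honest / p_story

trust = 0.9  # start out mostly trusting the media
for _ in range(5):  # five stories that look unlikely if the source is honest
    trust = update_trust(trust, p_story_if_honest=0.2, p_story_if_dishonest=0.8)
# After a handful of dubious-seeming stories, trust has collapsed --
# and every future report gets filtered through that collapsed prior.
```

Note that nothing in the loop checks whether any story was actually false; seeming unlikely is enough, which is exactly the widening gyre.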

Rationalists believe that demanding consistency from your beliefs raises the sanity waterline. I believe that consistency, on balance, would give us more false beliefs, not fewer. Too much of the world we believe in is disconnected from direct observation. Science, democracy, journalism… all merely webs of hearsay becoming webs of heresy.


The ocean depths are a horrible place with little light, few resources, and various horrible organisms dedicated to eating or parasitizing one another. But every so often, a whale carcass falls to the bottom of the sea. More food than the organisms that find it could ever possibly want. There’s a brief period of miraculous plenty, while the couple of creatures that first encounter the whale feed like kings. Eventually more animals discover the carcass, the faster-breeding animals in the carcass multiply, the whale is gradually consumed, and everyone sighs and goes back to living in a Malthusian death-trap.

Scott Alexander, Meditations on Moloch

The printing press. The New World. Machines. Electricity. Computers. Internet. We feast on these whales, and for a time, we flourish. But whales are finite beings, and their corpses must eventually return to the dust.

Some Republicans view Donald Trump as a sign of the apocalypse, and long for the return of Mitt Romney. I view Mitt Romney as the unwitting sign of a far worse apocalypse; not because he is a Republican, but because he is a Mormon. Low first-world birth rates in the last fifty years are a blip, a memetic aberration. The Church of Jesus Christ of Latter-day Saints has, by all indications, a decisive first-mover advantage among the next wave of Malthusian population-promoting memeplexes. The birth rate will recover when America is mostly Mormon. Evolution requires it.

We can hunt new whales, new technologies, new vistas, and prolong our civilization one flickering light at a time. This is a worthy goal. But we are Ahab, and the whales we hunt will surely destroy us in time.

We can coordinate, cooperate, and create a world where consuming the whale does not consume the world. This is also a worthy goal. It is, in fact, part of the goal; step two of The Great Project.

But we cannot compromise with our enemies, and so our enemies must become our friends.


And yet? And yet nothing. There is no one else. If civilization is to flourish, it must survive, and it must grow so mighty that even Moloch trembles before it.

So go, live your life, dream your dreams, hunt your whales. But to save civilization we must unite it, and to unite it we need a shared belief that unity is important.

Spread the good word.

What is a “Good” Prediction?

Zvi’s post on Evaluating Predictions in Hindsight is a great walk through some practical, concrete methods of evaluating predictions. This post aims to be a somewhat more theoretical/philosophical take on the related idea of what makes a prediction “good”.

Intuitively, when we ask whether some past prediction was “good” or not, we tend to look at what actually happened. If I predicted that the sun would rise with very high probability, and the sun actually rose, that was a good prediction, right? There is an instrumental sense in which this is true, but also an epistemic sense in which it is not. If the sun was extremely unlikely to rise, then in a sense my prediction was wrong – I just got lucky instead. We can formalize this distinction as follows:

  • Instrumentally, a prediction was good if believing it guided us to better behaviour. Usually this means it assigned a majority probability to the thing that actually happened regardless of how likely it really was.
  • Epistemically, a prediction was good only if it matched the underlying true probability of the event in question.

But what do we mean by “true probability”? If you believe the universe has fundamental randomness in it then this idea of “true probability” is probably pretty intuitive. There is some probability of an event happening baked into the underlying reality, and like any knowledge, our prediction is good if it matches that underlying reality. If this feels weird because you have a more deterministic bent, then I would remind you that every system seems random from the inside.

For a more concrete example, consider betting on a sports match between two teams. From a theoretical, instrumental perspective there is one optimal bet: 100% on the team that actually wins. But in reality, it is impossible to perfectly predict who will win; either that information literally doesn’t exist, or it exists in a way which we cannot access. So we have to treat reality itself as having a spread: there is some metaphysically real probability that team A will win, and some metaphysically real probability that team B will win. The bet with the best expected outcome is the one that matches those real probabilities.
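A small sketch makes the point. Among candidate predictions, the one matching the true probability has the best expected log score, even though a more extreme prediction looks better on the nights the favourite actually wins. (The true probability of 0.7 is, of course, invented for the example.)

```python
import math

TRUE_P = 0.7  # invented "metaphysically real" probability that team A wins

def expected_log_score(prediction: float, true_p: float) -> float:
    """Expected log score of `prediction`, averaged over outcomes weighted
    by the true probability. Higher (less negative) is better."""
    return true_p * math.log(prediction) + (1 - true_p) * math.log(1 - prediction)

candidates = [0.5, 0.6, 0.7, 0.8, 0.99]
best = max(candidates, key=lambda p: expected_log_score(p, TRUE_P))
# best == 0.7: the epistemically good prediction wins in expectation,
# even though 0.99 scores far better whenever team A actually does win.
```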

While this definition of an “epistemically good prediction” is the most theoretically pure, and is a good ideal to strive for, it is usually impractical for actually evaluating predictions (thus Zvi’s post). Even after the fact, we often don’t have a good idea what the underlying “true probability” was. This is important to note, because it’s an easy mistake to make: what actually happened does not tell us the true probability. It’s useful information in that direction, but cannot be conclusive and often isn’t even that significant. It only feels conclusive sometimes because we tend to default to thinking about the world deterministically.

Eliezer has an essay arguing that Probability is in the Mind. While in a literal sense I am contradicting that thesis, I don’t consider my argument here to be incompatible with what he’s written. Probability is in the mind, and that’s what is usually more useful to us. But unless you consider the world to be fully deterministic, probability must also be in the world – it’s just important to distinguish which one you’re talking about.

Winning vs Truth – Infohazard Trade-Offs

This post on the credibility of the CDC has sparked a great deal of discussion on the ethics of posts like it. Some people claim that the post itself is harmful, arguing that anything which reduces trust in the CDC will likely kill people as they ignore or reject important advice for dealing with SARS-CoV-2 and (in the long-run) other issues like vaccination. This argument has been met with two very different responses.

One response has been to argue that the CDC’s advice is so bad that reducing trust in it will actually have a net positive effect in the long run. This is an ultimately empirical question which somebody should probably address, but I do not have the skills or interest to attempt that.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we prefer: truth, or winning?

Some people will just decide to value truth more than winning as an axiom of their value system. But for most of us, ultimately I think this also boils down to an empirical question of just how bad “not winning” will end up being. It’s easy to see that for sufficiently severe cases, natural selection takes over: any meme/person/thing that prefers truth over winning in those cases will die out, to be replaced by memes/people/things that choose to win. I personally will prefer winning in those cases. It’s also true that most of the time, truth actually helps you win in the long run. We should probably reject untrue claims even if they provide a small amount of extra short-term winning, since in the long run having an untrue belief is likely to prevent us from winning in ways we can’t predict.

Figuring out where the cut-over point lies between truth and winning seems non-trivial. Based on my examples above we can derive two simple heuristics to start off:

  • Prefer truth over winning by default.
  • Prefer winning over truth if the cost of not winning is destruction of yourself or your community. (It’s interesting to note that this heuristic arguably already applies to SARS-CoV-2, at least for some people in at-risk demographics.)

What other heuristics do other people use for this question? How do they come out on the CDC post and SARS-CoV-2?

An Atheist’s Flowchart, Part 3: Proof of God and Russell’s Teapot

In the first part of this series we covered the difference between axiomatic and derived beliefs, and Occam’s razor. In the second part I made an argument that belief in any sort of traditional god cannot be axiomatic. In this post I will argue that belief in god cannot be derived either; the conclusion, following from both points, is that one cannot and should not believe in god. This will complete my first angle of atheist approach, the one I called “via epistemology”.

In order for a belief in god to be derived, it must be naturally supported by some other beliefs which may themselves be derived or axiomatic. Either way, if you follow the chain of beliefs-supporting-beliefs back far enough you must end at an axiomatic belief at some point. Let us then consider the ways we might go about proving the existence of god.


The first and most obvious way to prove the existence of god is via empiricism: if there were observable, empirical evidence whose only reasonable explanation was the existence of god, then that would be sufficient. However, there is none. God does not regularly perform otherwise-inexplicable miracles on live television; there is no scientific experiment which suggests that god exists; no claims to see god, or hear his voice, or sense his presence, have ever been substantiated.

As an empiricist I must be consistent: if such evidence were ever to appear then I would happily change my mind on this whole point and consider myself to be mistaken. Until that point, the absence of evidence is, in fact, evidence of absence.

Russell’s Teapot

I’m now going to take a brief sidebar to elaborate on that last point, since the burden of proof in this situation seems to be a common source of confusion. Succinctly put, the burden of proof in this case does in fact fall on the person making the argument for the existence of god (i.e. not on me). This can be seen most easily via a common analogy known as Russell’s Teapot. More formally, claiming that something is true because it has not been proven false is a fallacy: the argument from ignorance.

Of course, the opposite is also a fallacy: I cannot claim something is false simply because it has not yet been proven true. However this does not prevent absence of evidence from being evidence of absence in all cases. Per Irving Copi:

In some circumstances it can be safely assumed that if a certain event had occurred, evidence of it could be discovered by qualified investigators. In such circumstances it is perfectly reasonable to take the absence of proof of its occurrence as positive proof of its non-occurrence.


The other common approaches to proving the existence of god are via logic, the most popular of which are the many different ontological arguments. It would be counter-productive to try to enumerate and disprove all the various formulations of these arguments; suffice it to say that all of the more popular ones have been specifically debunked by philosophers and logicians at some point already. But more importantly, all of these arguments start with additional axioms beyond the core set. Even the full set of nine axioms in which I believe does not provide for any of them.

As with the empirical approach, I must be consistent: if a logical argument were presented to me for the existence of god, whose only axioms were the nine in which I believe, then I would change my mind. But I do not believe that is likely to happen.

In fact, if you take a broad enough view, these two points are equivalent: since empiricism is effectively built into my axioms, my rejections of both the empirical and logical attempts to prove god are the same: none of the arguments presented are sufficiently supported based on my axioms.

An Atheist’s Flowchart, Part 1: Occam’s Razor and Axiomatic Beliefs

The first pillar of my atheistic treatise is the one I called “via epistemology”. Of the three, it is probably both the strongest and the most applicable to the real religious beliefs commonly held by real people. It is certainly the most logically rigorous, if that means anything.

Axiom vs Derivation

The starting claim for this argument is that every belief we have must fall into one of two categories: it must be either axiomatic, or derived. Axiomatic beliefs are unsupported by anything else, they are effectively taken on faith. Without axiomatic beliefs in which to root our worldview, we end up in a circular trap of nihilistic doubt. Conversely, derived beliefs are not taken on faith; they are instead supported by some other beliefs we already hold. Those beliefs are in turn supported by other beliefs down the chain until you end up either at an axiomatic belief, or a loop.

Of course, the vast majority of day-to-day beliefs are derived: my belief that I will get wet if I go outside is derived from two other beliefs:

  • my belief that it is raining outside, and that it will keep raining for the near future
  • my belief that things, when rained on, get wet

In fact, there are only a handful of common beliefs which need to be axiomatic. These include belief in the existence of reality, causality, and your own senses, and the reliability of your mind and memory. You may notice that this list looks an awful lot like the core set of axioms with which I started this blog; that is not a coincidence.

We now have two possible branches we can follow: someone’s belief in God may fall into either of these two categories. Let us explore both.

God as Axiom, and Occam’s Razor

The first path we will explore is when belief in god is taken on faith, as an axiom in itself. This is probably the path applicable to the most people’s real beliefs, and it is certainly one of the easiest to articulate: it feels deceptively simple and makes an easy fallback whenever a theist is challenged to prove their beliefs.

Unfortunately that simplicity is very deceptive, and simplicity is important.

The number of axioms we accept must be limited or else we can believe in anything, from flying spaghetti monsters to inter-galactic teapots to invisible dragons. Don’t feel like arguing for something? Just claim it as an axiom and you’re done! To avoid this, we put a limiting law on our axioms known as “Occam’s razor”, which goes roughly as “when all other considerations are equal, choose the simplest solution”.

It is important to note here that the simplest solution is not necessarily the one with the fewest axioms. In information-theoretical terms the simplest solution is actually the one encoding the fewest bits of information. Otherwise you could still take as many axioms as you want and glue them together into a single sentence via a lot of “and”s.
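As a toy illustration of that point, here is a sketch using compressed size as a crude stand-in for information content (true minimum description length is uncomputable, so this is only suggestive); gluing axioms together with “and”s does not make the result simple:

```python
import zlib

def rough_complexity(claim: str) -> int:
    """Crude proxy for information content: compressed size in bytes.
    Real minimum description length is uncomputable; this is illustrative."""
    return len(zlib.compress(claim.encode("utf-8")))

one_axiom = "Reality exists."
glued = " and ".join([
    "Reality exists.", "Causality holds.",
    "My senses provide access to reality.",
    "My memory is partially reliable.", "Logic is valid.",
])
# One sentence, but nowhere near one axiom's worth of information:
assert rough_complexity(glued) > rough_complexity(one_axiom)
```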

Sneak Peek

We’ve covered a lot of ground already in this post and haven’t even really gotten to the core of the argument yet, so I’ll sketch it out now and flesh it out properly next week. In broad strokes:

  • There is a core set of axioms which everybody accepts (regardless of religion) and everybody must accept in order to meaningfully participate in the world.
  • This core set is almost or completely sufficient on its own.
  • The existence of god is massively complex, as axioms go.
  • Even if the core set is insufficient on its own, there are better and simpler alternative axioms which complete it.

Therefore, by Occam’s razor, the existence of god cannot be an axiomatic belief.

The Scientific Method

All of this wandering around thinking about rationality, empiricism, truth and knowledge now finally culminates in the scientific method. This is, of course, not the be-all and end-all of truth or knowledge, but it is one of the most useful tools we have for probabilistically determining the shape of reality.

If you’re not familiar with the scientific method (really?) it goes something like this:

  1. Have a question.
  2. Take a guess.
  3. Find something that your guess predicts.
  4. Measure it.
  5. If your measurement doesn’t match your prediction, go back to 2. If it does match your prediction, go back to 3.
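The loop above can be sketched in a few lines of code. (The hidden “law” and the candidate guesses are invented for illustration; a real guess would be a model generating many different predictions, not a single number.)

```python
def measure() -> float:
    """Our toy reality: things fall at 9.8 m/s^2."""
    return 9.8

def scientific_method(candidate_guesses, max_rounds=100):
    """Sketch of the loop above. The real method never halts;
    we cap the number of rounds so the sketch terminates."""
    guesses = iter(candidate_guesses)
    guess = next(guesses)              # step 2: take a guess
    for _ in range(max_rounds):
        prediction = guess             # step 3: what the guess predicts
        if measure() != prediction:    # steps 4-5: measure and compare
            guess = next(guesses)      # mismatch: back to step 2
        # match: keep testing (back to step 3)
    return guess

surviving = scientific_method([1.6, 24.8, 9.8])  # the survivor: 9.8
```

Note that even the surviving guess is never “proven”; it has merely not yet failed, which is the point of the next paragraph.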

You will note, of course, that the scientific method never stops. It is an endless loop of trying out new ideas and finding ways to verify or disprove them. Every successfully predicted measurement is another step on the way towards statistical certainty, but of course like any empirical practice we can never get all the way there.

It is also relatively important that your guess (step 2) must be capable of making predictions (step 3). Every so often someone will claim to have solved a great complicated scientific problem by the means of some mysterious new substance. When asked about this substance, they are incapable of predicting its properties other than that it makes their theory magically work. These ideas do not fit in the scientific method – if it doesn’t generate predictions, it’s not science.

Truth and Knowledge


While empiricism gives us only probabilistic, mediated access to reality, our senses are at least frequently consistent with each other. However, we have no way of knowing whether this input represents what is, or merely some high-level abstract interpretation of a transformed version of what is. In normal speech either of these could be called “true”, so I will borrow the Buddhist doctrine of Two Truths. In very brief summary:

Absolute Truth: The underlying reality of what is.

Relative Truth: What we consistently and reliably interpret of what we sense.

Absolute truth might, for example, be nothing more than atoms interacting according to physical laws. Regardless, it is exactly the reality accepted in axiom #3. Relative truth is the world that we construct from empiricism, with tables and chairs and clouds and cats and dogs. Even if, fundamentally, there is nothing that is a dog in the underlying reality, the existence of dogs is still a relative truth, a practical concept useful for navigating the world.


The definition of knowledge is another big problem in philosophy, and is more-or-less the defining question of epistemology. Based on our axioms so far and the previous two posts, I use the following three definitions of knowledge:

True Knowledge: This is the definition of knowledge that pedants like to trot out when arguing for epistemic scepticism. “True” knowledge is knowledge of absolute truth (see above) which is impossible because of the circular trap. We have no way of knowing that the axioms we have taken are correct, thus no way of knowing anything else which we might derive from them. However, this meaning is almost never used in non-philosophical debate.

Strong Knowledge: Strong knowledge is knowledge based on “inviolate empiricism”; facts like gravity(1) that are consistent across a truly enormous number of observations (all relative truths, of course). Logical derivations from axioms (and from inviolate empiricism) also qualify, of course: knowledge of algebra is strong knowledge. This usage occurs outside of philosophy debates, but mostly in scientific and other formal contexts.

Weak Knowledge: Weak knowledge is knowledge based on probabilistic empiricism. It is reasonable, for example, for me to say that I “know” certain things about Shakespeare, but the chain of actual facts and observations between myself and him is quite long and tenuous. Nonetheless, I say I know these facts because they are still far and away the most probable explanation of all the various things I have experienced. This is the most common usage of knowledge in informal conversation.

(1) Yes, I know, quantum theory and relativity etc etc. What goes up must still come down.

Empiricism and Probability

We have a set of tools now for operating on truths, but we lack the raw materials to operate on. Fortunately there is another common epistemic view.

Empiricism is the view that knowledge of reality comes from the senses. As I mentioned in my brief discussion of axiom #6, we are not assuming quite this much, though we are taking at least a partially empirical view. Our senses provide some sort of access to reality, although that access may be mediated, transformed, inconsistent, etc. Due to this caveat, we cannot simply make knowledge-claims based on our senses: “I see _ therefore I know that _.” could be perfectly wrong.

However, we can say “I see _ therefore I know that something in reality is causing me to see _”. This is somewhat more accurate though less useful. More importantly, we can appeal to the consistency of reality (axiom #3), the partial consistency of memory (axiom #7), and the validity of logic (axiom #8) to make probabilistic empirical claims, such as the following:

I see _, and I have many memories of seeing the same _, therefore it is probable that I will continue to see _.

Of course sight is not the only possible sense here; one can make similar claims using hearing, smell, touch, etc. By adding an appeal to causality (axiom #5) one can also make probabilistic claims about correlation:

I see X and have a memory of just seeing Y. I have many memories of seeing X shortly after seeing Y, and no memories of not seeing X after seeing Y. Therefore it is probable that if I again see Y, I will shortly see X.

(I use X and Y instead of _ above because I have two blanks to fill in that I wish to distinguish between). Just like any probabilistic claim, the more samples you have (in this case memories) the stronger the claim.
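One simple way to turn such memories into a number is Laplace’s rule of succession (my choice of estimator for illustration, not something the argument depends on):

```python
def probability_recurs(memories):
    """Estimate the probability of seeing X after Y again, from a list of
    remembered episodes (True = saw X after Y, False = did not), using
    Laplace's rule of succession: (successes + 1) / (trials + 2)."""
    return (sum(memories) + 1) / (len(memories) + 2)

few_memories = [True] * 3      # 3 consistent memories -> estimate 0.8
many_memories = [True] * 300   # 300 consistent memories -> estimate ~0.997

# More consistent samples make a stronger claim:
assert probability_recurs(many_memories) > probability_recurs(few_memories)
```

The estimate never reaches 1 no matter how many memories agree, which matches the caveat below: strong probabilities, never certainty.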

It is these probabilistic claims that we can use as raw materials, feeding them into our rational tools to produce an understanding of reality. Of course we cannot use this to achieve whatever constitutes “real” truth with any certainty, but strong probabilities are better than nothing. With rationalism and empiricism in hand we will use the next post to delve more deeply into the concepts of truth and knowledge.