The Manual Economy

[An attempt at fiction in the style of Scott Alexander. With bits of Lewis Carroll and Douglas Adams thrown in for good measure.]

The hallucination started out so normally, I completely forgot that I was tripping.

I was at the dentist, and I had just had my teeth cleaned. You know the drill, the hygienist goes through your teeth with this little spray nozzle that gets into all the cracks and cavities you pretend don’t exist when you’ve got a brush in there. Then they make you hold some disgusting not-quite-mint not-quite-water in your mouth, and swish, and spit. And spit. And spit. And after about the third blessed mouthful of real water, you can vaguely taste something other than not-quite-mint, until your salivary glands give up the ghost entirely and your mouth turns into the Sahara desert.

As I said, it was weirdly normal for a trip. I’d been expecting unicorns, or aliens, or a sky made up of funky colours and mystical cactus people who could factor large numbers. But I was at the dentist. If I’d wanted a trip to the dentist, I would have just gone to the dentist. It would have been cheaper, and probably better for my teeth.

The entire dental experience was so totally normal I completely forgot I was tripping until I went to pay, and I couldn’t find my credit card. Or any cash. My wallet had a driver’s license and various other identification cards, but no payment at all. The receptionist smiled at me politely.

“Is everything alright? Can I help you?”

I winced. “I’m sorry, I seem to have misplaced all my money. I’m not going to be able to pay my bill today.”

There was a confused pause. A giant hand walked past waving an umbrella and whistling show tunes. The receptionist winked at me with both eyes at once. I suddenly knew, somehow, that I didn’t need to pay, so I turned and walked out the door. Across the street was a bank, so I floated forward until I was inside.

The bank, like the dentist, seemed totally normal. There were no lines, but that was expected for mid-afternoon on a Tuesday. I rolled over to one of the tellers.

“Excuse me,” I said, “I seem to have lost my credit card. Can you help me?”

There was another confused pause. The bank teller turned into a giant hand and flew away. The entire bank building sort of dissolved as the buildings on either side squeezed together to fill up the space. I ended up on the sidewalk outside a Starbucks.

I didn’t even like Starbucks.

Sitting outside the Starbucks was a homeless person whose baseball cap kept flickering as if it couldn’t make up its mind. First it was on their head, but then *pop*, it was on the sidewalk in front of them with a few coins in it, and then *pop*, it was gone entirely. And then it was suddenly on their head again. After a few seconds of this my own head started to hurt, so I stared at the sidewalk extra hard until the homeless person turned into a giant hand, and the baseball cap was arrested for multiplying entities beyond necessity.

The hand spoke to me. “Now look what you’ve done! It’s hard enough to coordinate this economy without some yokel trying to physically instantiate all of the mechanisms!”

There was a third confused pause, but this time the hand just sat there looking disgruntled until I finally echoed its statement back as a question. “You… coordinate the entire economy?”

“Yes, of course I do,” the hand replied. “Somebody has to do it, or this whole place would fall apart. How else does food get to everyone who needs it, let alone all the other goods and services?”

I blinked. “So, you’re, literally, the invisible hand of the market?”

“Well, I was,” the hand said waspishly. “But do I look invisible to you?”

“Oh, sorry about that,” I apologized. “So my money and credit card, and the bank and everything? They all disappeared because they’re… you? Or manifestations of you, or something?”

The hand glared at me. “I’m a hand,” it said, waving at itself sarcastically. “It seems awfully rude of you to talk to me about manifestations. Until you came along, I had no need of them at all!” It huffed. “Now here I am, trying to coordinate an economy the size of a planet, and instead of being a magical omniscient force I’m trapped in a giant disembodied appendage. What am I even supposed to do with all of these fingers?”

I giggled. “I dunno, you could say that the economy just went… digital.”

The hand rolled its eyes, but I had a lot more ready.

“Oh come on, you’ve got to hand me that one. No? You’re not going to clap back? Well come on then, let me give you a hand coming up with a response. I’m pretty handy with this sort of thing, in fact…”

Ten minutes later, I finally ran out of steam with a complicated pun about greased palms and coconut oil that even I admitted was a stretch. At this point, the hand had finally had enough.

“Look,” it said, “maybe in your universe the economy is coordinated by these magical distributed pieces of paper and electronic numbers, and nobody has ultimate responsibility for the economy. But in this universe, none of that exists; the buck stops with me. I’ve been listening to you make hand puns for ten minutes, and in that time the entire economy has ground to a halt because I haven’t been there to ensure the right transactions occur at the right time. In some sense I don’t just coordinate the economy – I am the economy.”

I shook my head. “That can’t be right,” I said. “The economy isn’t made up of pieces of paper and numbers; the economy is all of the real things that get moved around because of that coordination. Just because you took a few minutes off to…” I giggled again, “…manifest as a giant hand, farms are still growing food, factories are still producing goods, the economy is still going! Transport truck drivers didn’t all go on strike because you took a small break!”

“That’s exactly my point!” said the hand. “Truck drivers were on strike when you started your little game, but that strike required coordinated action, which I provided. When I started slacking off, all those truck drivers got bored and left the picket line to follow their individual inclinations, and now it’s chaos!”

At this point I could feel the drugs starting to wear off, but the hand was still going.

“They’re not striking, or trucking, or anything useful at all! The entire economy is crumbling like the twin towers after that so-called plane crash!”

The bank reappeared beside the Starbucks, and the entire row of buildings shifted down to accommodate.

“It’s all the government’s fault, them and their secret mind-control beams, out to steal your thoughts!”

The not-invisible giant hand shrank in size until it was a normal hand, attached to a normal homeless person, still talking about the implications of omniscient economic coordination and various other conspiracy theories. My teeth started to hurt.

I’ve written this trip report in an attempt to jog my own memory. Something that the hand said during our conversation really resonated with me, and I just know the next Nobel prize in economics is mine if I can only remember what it was…

I just can’t put my finger on it.

Roll for Sanity

[This is very much a personal-diary type post, but it ends up touching on predictive processing and other aspects of how our brains work. Feels related to Choosing the Zero Point.]

I. Looking for Trouble

In the card game Munchkin, there is a mechanic called “Looking for Trouble”, whereby if you haven’t yet fought a monster on your turn, you can play a monster from your hand and fight that. You don’t have to do this – it’s optional, and can carry stiff penalties if the monster ends up defeating you – but since killing monsters is one of the key ways to win at Munchkin, it’s an important mechanic.

Obviously you don’t want to fight a monster if you think that you’re going to lose. A brand new munchkin “Looking for Trouble” with a level 20 Plutonium Dragon is literally… looking for trouble.¹ And even if you think you might win, it’s often a good idea to wait a turn or two in order to try and collect more spells, stronger weapons, etc. It would be a pretty terrible Munchkin strategy to go looking for trouble on every possible turn, regardless of your equipment or which monsters you actually have in your hand.

And yet… this terrible strategy feels like a metaphor for my life recently.

Between work, personal relationships, and the chaos caused by the pandemic, I’ve been dealing with a pretty big set of stressors (monsters) already in my life. But like an incompetent Munchkin, every time I’m not dealing with an immediate personal problem, I find myself Looking for Trouble. And the internet makes this soooooo easy.

Instead of taking a break, relaxing, and recharging my mental and emotional batteries, I find myself checking the latest coronavirus stats, seeing which of my favourite pieces of media have been cancelled, reading hot takes on the death of democracy, or just plain “doomscrolling” on social media. Unsurprisingly, I have not been at my best the last little while.

As best I can tell, this unfortunate behavioural pattern is a classic instance of predictive processing gone awry. In other words, so much has gone wrong recently that my brain has decided the world must always be on fire, and that’s just the way things are. My subconscious is predicting disaster so strongly that when there’s no evidence of a new disaster, my brain assumes that I’m just not looking hard enough, and I end up on the internet finding new horrors in order to prove myself right. And all the recent stories about doomscrolling make me suspect I’m not alone.

II. Moral Implications

Now obviously predictive processing gone awry is not the only explanation for everyone’s bad-news obsession. Even if it’s a plausible explanation for me personally (which I think it is), it might not be the cause of the general doomscrolling trend. Things actually are unusually bad in many parts of the world, and people always tend to pay more attention to bad news than to good. Maybe feeling kind of terrible is just a natural response to things being unusually terrible.

If feeling terrible is in some sense a “reasonable” response to the state of the world, then I worry that my attempts to feel less terrible are morally wrong, since they avoid the problem instead of solving it. Am I just doing the global equivalent of pretending not to see the homeless person on the corner? Is the moral thing instead to face the world’s troubles head-on, acknowledge its pain, and try to help?

But this doesn’t seem quite fair; while I might plausibly be able to help a single homeless person, I am largely helpless in the face of the vast issues facing America and the world (at least, in the short term). I’m a private citizen in a relatively small, stable country; most of the time nobody pays us any attention, for good reason. Feeling stress and anxiety truly proportional to the level of suffering in the world seems in some sense correct; scope insensitivity is still an irrational bias. But in practice it would be a mistake, like an airline passenger refusing to put on their own mask before helping others. Being insensitive to the scope of suffering beyond a certain point is an adaptive coping mechanism that keeps us sane in the face of a vast and uncaring world. As long as we use our sanity to do good in the long run, ignoring pain in the short run seems ok.

III. Reducing the Area of Concern

Given that ignoring global problems in order to conserve our own sanity seems ok, at least in the short term, how do we do that? By embracing scope insensitivity, and reducing our area of concern.

The human nervous system, grossly simplified, contains a slider switch that runs from “fight or flight” on one end (the sympathetic nervous system) to “rest and digest” on the other (the parasympathetic nervous system). A happy, productive life requires both components; you obviously need to spend some time resting and digesting, but equally you need your sympathetic nervous system to deal with challenges and to accomplish difficult tasks. In other words, it’s almost certainly unhealthy to be stuck at either extreme for any length of time.

Unfortunately, “fight or flight” isn’t just something that your brain does when facing an immediate, concrete threat. Stress, anxiety, and fear all show up whenever there’s a possible threat within some ill-defined “area of concern”. Another war on the other side of the planet? Not a big deal. But heaven forbid there’s been a string of burglaries in your neighbourhood recently. Even if you never see a burglar yourself, just hearing about it on the news is enough to cause some sleepless nights.

Given that mere bad news can cause a fight or flight response if your brain judges it “in scope”, and that the world is absolutely full of bad news on a regular basis… if you start to think of the entire world as “in scope” then you’re going to have a bad time of it. The internet, news, politics… they’re all global arenas now, and it’s incredibly difficult to engage with them in a way that doesn’t increase your area of concern. Engage too much, and you end up permanently stuck in “fight or flight”, killing yourself with stress.

In recognition of where my slider switch has been sitting recently, and in order to metaphorically “put my own mask on first”, I’ve been trying to reduce my scope of concern. I’ve blocked a bunch of sites from my work laptop. I’ve uninstalled a few apps from my phone. I’ve tried to spend less time reading the news, and more time reading things that I find valuable and relaxing. If I’m helpless in the face of things anyways, then it doesn’t serve me to know about them at all, does it?

Early results are promising, but early. I suspect the hardest part will be sticking to it, and finding other sources of stimulus since much of my local life is still in pandemic-induced lockdown. If my immediate scope of concern is utterly static, and the global scope of concern is a panic-inducing nightmare, is there an intermediate scope? With the internet at our fingertips, I’m not sure that there is.


  1. Yes, technically a Plutonium Dragon won’t pursue anyone of level 5 or below, so you’d be able to run away… but still.

Frankenstein Delenda Est

I.

I am terrified by the idea that one day, I will look back on my life, and realize that I helped create a monster. That my actions and my decisions pushed humanity a little further along the path to suffering and ruin. I step back sometimes from the gears of the technology I am creating, from the web of ideas I am promoting, and from the vision of the future that I am chasing, and I wonder if any of it is good.

Of course I play only the tiniest of roles in the world, and there will be no great reckoning for me. I am a drop in the ocean of the many, many others who are also trying to build the future. But still, I am here. I push, and the levers of the world move, however fractionally. Gears turn. Webs are spun. If I push in a different direction, then the future will be different. I must believe that my actions have meaning, because otherwise they have nothing at all.

No, I do not doubt my ability to shape the future; I doubt my ability to choose it well. The world is dense with opportunity, and we sit at the controls of a society with immense potential and awful power. We have at our disposal a library full of blueprints, each one claiming to be better than the last. I would scream, but in this library I simply whisper, to the blueprints: how do you know? How do you know that the future you propose has been authored by The Goddess of Everything Else, and is not another tendril of Moloch sneaking into our world?

Many people claim to know, to have ascended the mountain and to be pronouncing upon their return the commandments of the one true future: There is a way. Where we are going today, that is not the way. But there is a way. Believe in the way.

I hear these people speak and I am overcome with doubt. I think of butterflies, whose flapping wings seed cascades as unfathomable as any hurricane. I think of fungi, whose simple mushrooms can hide a thousand acres of interwoven mycelium. I think of the human brain, a few pounds of soggy meat whose spark eludes us. The weight of complexity is crushing, and any claim to understanding must be counterbalanced by the collected humility of a thousand generations of ignorance.

And on this complexity, we build our civilization. Synthesizing bold new chemicals, organizing the world’s information, and shaping the future through a patchwork mess of incentives, choices, and paths of least resistance. Visions of the future coalesce around politics of the moment, but there is no vision of the future that can account for our own radical invention. Do not doubt that Russell Marker and Bob Taylor did as much to shape today as any president or dictator. The levers we pull are slow, and their lengths are hidden, but some of them will certainly move the world.

And on these levers, we build our civilization. Invisible hands pull the levers that turn the gears that spin the webs that hold us fast, and those invisible hands belong to us. We pronounce our visions of a gleamingly efficient future, accumulating power in our bid to challenge Moloch, never asking whether Moloch is, simply, us. The institutions of the American experiment were shaped by the wisdom of centuries of political philosophy. That they have so readily crumbled is not an indictment of their authors, but of the radical societal changes none of those authors could foresee. Our new society is being thrown together slapdash by a bare handful of engineers more interested in optimizing behaviour than in guiding it, and the resulting institutions are as sociologically destructive as they are economically productive.

And on these institutions, we build our civilization.

II.

Sometimes, I believe that with a little work and a lot of care, humanity might be able to engineer its way out of its current rough patch and forward, into a stable equilibrium of happy society. Sometimes, if we just run a little faster and work a little harder, we might reach utopia.

There is a pleasant circularity to this dream. Sure, technology has forced disparate parts of our society together in a way that creates polarized echo chambers and threatens to tear society apart. But if we just dream a little bigger we can create new technology to solve that problem. And honestly, we probably can do just that. But a butterfly flaps its wings, and the gears turn, and whatever new technical solution we create will generate a hurricane in some other part of society. Any claims that it won’t must be counterbalanced by the collected humility of a thousand generations of mistakes.

Sometimes, I believe that the future is lying in plain sight, waiting to swallow us when we finally fall. If we just let things take their natural course, then the Amish and the Mennonites and (to a lesser extent) the Mormons will be there with their moral capital and their technological Luddism and their ultimately functional societies to pick up the pieces left by our self-destruction. Natural selection can be awful if you’re on the wrong end of it, but it still ultimately works.

Or maybe, sometimes, it’s all a wash and we’ll stumble along to weirder and weirder futures with their own fractal echoes of our current problems, as in Greg Daniels’s Upload. But I think of the complexity of this path, and I am overcome with doubt.

III.

I am terrified by the idea that one day, I will look back on my life, and realize that I helped create a monster. Not a grand, societal-collapse kind of monster or an elder-god-sucking-the-good-out-of-everything kind of monster. Just a prosaic, every-day, run-of-the-mill, Frankensteinian monster. I step back sometimes from the gears of the technology I am creating, from the web of ideas I am promoting, and from the vision of the future that I am chasing, and I wonder if it’s the right one.

From the grand library of societal blueprints, I have chosen a set. I have spent my life building the gears to make it go, and spinning the webs that hold it together. But I look up from my labour and I see other people building on other blueprints entirely. I see protests, and essays, and argument, and conflict. I am confident in my epistemology, but epistemology brings me only a method of transportation, not a destination.

I am terrified that it is hubris to claim one blueprint as my own. That I am no better than anyone else, coming down from the mountaintop, proclaiming the way. That society will destroy my monster of a future with pitchforks, or that worse, my monster will grow to devour what would have otherwise been a beautiful idyll.

Frankenstein was not the monster; Frankenstein created the monster.

Abstractions on Inconsistent Data

[I’m not sure this makes any sense – it is mostly babble, as an attempt to express something that doesn’t want to be expressed. The ideas here may themselves be an abstraction on inconsistent data. Posting anyway because that’s what this blog is for.]

i. Abterpretations

Abstractions are (or at least are very closely related to) patterns, compression, and Shannon entropy. We take something that isn’t entirely random, and we use that predictability (lack of randomness) to find a smaller representation which we can reason about, and predict. Abstractions frequently lose information – the map does not capture every detail of the territory – but are still generally useful. There is a sense in which some things cannot be abstracted without loss – purely random data cannot be compressed by definition. There is another sense in which everything can be abstracted without loss, since even purely random data can be represented as the bit-string of itself. Pure randomness is in this sense somehow analogous to primeness – there is only one satisfactory function, and it is the identity.
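As a rough, concrete illustration of that asymmetry, a general-purpose compressor shrinks patterned data dramatically but can do almost nothing with pseudo-random bytes (a sketch, not a proof; zlib is just one convenient stand-in for “compression”):

```python
import random
import zlib

patterned = b"abc" * 1000  # highly predictable: 3000 bytes of repetition
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(3000))  # pseudo-random bytes

# The pattern compresses to a tiny fraction of its original size...
print(len(zlib.compress(patterned)))  # a few dozen bytes

# ...while the noise stays essentially incompressible,
# and may even grow slightly from format overhead.
print(len(zlib.compress(noise)))
```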

A separate idea, heading in the same direction: data cannot, in itself, be inconsistent – it can only be inconsistent with (or within) a given interpretation. Data alone is a string of bits with no interpretation whatsoever. The bitstring 01000001 is commonly interpreted both as the number 65 and as the character ‘A’, but that interpretation is not inherent to the bits; I could just as easily interpret it as the number 190, or as anything else. Sense data that I interpret as “my total life so far, and then an apple falling upwards” is inconsistent with the laws of gravity. But the apple falling up is not inconsistent with my total life so far – it’s only inconsistent with gravity, as my interpretation of that data.
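That interpretation-dependence fits in a few lines of Python; the 190 here comes from one arbitrary mapping among many (reading every bit flipped):

```python
bits = 0b01000001  # the eight bits from the text

print(bits)         # 65  – interpreted as an unsigned integer
print(chr(bits))    # 'A' – interpreted as an ASCII character
print(bits ^ 0xFF)  # 190 – a different, equally arbitrary interpretation
                    #       (the same bits, each one flipped)
```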

There is a sense in which some data cannot be consistently interpreted – purely random data cannot be consistently mapped onto anything useful. There is another sense in which everything can be consistently interpreted, since even purely random data can be consistently mapped onto itself: the territory is the territory. Primeness as an analogue, again.

Abstraction and interpretation are both functions, mapping data onto other data. There is a sense in which they are the same function. There is another sense in which they are inverses. Both senses are true.

ii. Errplanations

Assuming no errors, then one piece of inconsistent data is enough to invalidate an entire interpretation. In practice, errors abound. We don’t throw out all of physics every time a grad student does too much LSD.

Sometimes locating the error is easy. The apple falling up is a hallucination, because you did LSD.

Sometimes locating the error is harder. I feel repulsion at the naive utilitarian idea of killing one healthy patient to save five. Is that an error in my feelings, and I should bite the bullet? Is that a true inconsistency, and I should throw out utilitarianism? Or is that an error in the framing of the question, and No True Utilitarian endorses that action?

Locating the error is meaningless without explaining the error. You hallucinated the apple because LSD does things to your brain. Your model of the world now includes the error. The error is predictable.

Locating the error without explaining it is attributing the error to phlogiston, or epicycles. There may be an error in my feelings about the transplant case, but it is not yet predictable. I cannot distinguish between a missing errplanation and a true inconsistency.

iii. Intuitions

If ethical frameworks are abterpretations of our moral intuitions, then there is a sense in which no ethical framework can be generally true – our moral intuitions do not always satisfy the axioms of preference, and cannot be consistently interpreted.

There is another sense in which there is a generally true ethical framework for any possible set of moral intuitions: there is always one satisfactory function, and it is the identity.

Primeness as an analogue.

The Stopped Clock Problem

[Unusually for me, I actually wrote this and published it on Less Wrong first. I’ve never reverse-cross-posted something to my blog before.]

When a low-probability, high-impact event occurs, and the world “got it wrong”, it is tempting to look for the people who did successfully predict it in advance in order to discover their secret, or at least see what else they’ve predicted. Unfortunately, as Wei Dai discovered recently, this tends to backfire.

It may feel a bit counterintuitive, but this is actually fairly predictable: the math backs it up on some reasonable assumptions. First, let’s assume that the topic required unusual levels of clarity of thought not to be sucked into the prevailing (wrong) consensus: say a mere 0.001% of people accomplished this. These people are worth finding, and listening to.

But we must also note that a good chunk of the population are just pessimists. Let’s say, very conservatively, that 0.01% of people predicted the same disaster just because they always predict the most obvious possible disaster. Suddenly the odds are pretty good that anybody you find who successfully predicted the disaster is a crank. The mere fact that they correctly predicted the disaster becomes evidence only of extreme reasoning, but is insufficient to tell whether that reasoning was extremely good, or extremely bad. And on balance, most of the time, it’s extremely bad.
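The arithmetic behind that claim is just Bayes’ rule applied to the made-up rates above:

```python
# Made-up rates from the text:
p_clear = 0.00001  # 0.001% predicted the disaster via unusually clear thinking
p_crank = 0.0001   # 0.01% predicted it because they always predict disaster

# Both groups made the correct prediction, so among successful
# predictors, the fraction who are cranks rather than clear thinkers:
p_crank_given_correct = p_crank / (p_crank + p_clear)
print(round(p_crank_given_correct, 3))  # 0.909 – about ten to one
```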

Unfortunately, the problem here is not just that the good predictors are buried in a mountain of random others; it’s that the good predictors are buried in a mountain of extremely poor predictors. The result is that the mean prediction of that group is going to be noticeably worse than the prevailing consensus on most questions, not better.


Obviously the 0.001% and 0.01% numbers above are made up. I spent some time looking for real statistics and couldn’t find anything useful; this article claims roughly 1% of Americans are “preppers”, which might be a good indication, except it provides no source and could equally well just be the lizardman constant. Regardless, my point relies mainly on the second group being an order of magnitude or more larger than the first, which seems (to me) fairly intuitively likely to be true. If anybody has real statistics to prove or disprove this, they would be much appreciated.

Extracting Value from Inadequate Equilibria

[Much expanded from my comment here. Pure speculation, but I’m confident that the bones of this make sense, even if it ends up being unrealistic in practice.]

A lot of problems are coordination problems. An easy example that comes to mind is scientific publishing: everybody knows that some journal publishers are charging ridiculous prices relative to what they actually provide, but those journals have momentum. It’s too costly for any individual scientist or university to buck the trend; what we need is coordinated action.

Eliezer Yudkowsky talks about these problems in his sequence Inadequate Equilibria, and proposes off-hand the idea of a Kickstarter for Coordinated Action. While Kickstarter is a great metaphor for understanding the basic principle of “timed-collective-action-threshold-conditional-commitment”, I think it’s ultimately led the discussion of this idea down a less fruitful path because Kickstarter is focused on individuals, and most high-value coordination problems happen at the level of institutions.
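The individual-scale version of that “threshold-conditional-commitment” mechanic is simple enough to sketch (all names here are illustrative, not from any real system):

```python
from dataclasses import dataclass, field


@dataclass
class ConditionalCommitment:
    """Pledges become binding only if enough arrive before the deadline."""

    threshold: int                          # pledges needed to trigger action
    pledgers: set = field(default_factory=set)

    def pledge(self, who: str) -> None:
        self.pledgers.add(who)

    def resolve(self) -> bool:
        """At the deadline: True binds everyone; False releases them all."""
        return len(self.pledgers) >= self.threshold


campaign = ConditionalCommitment(threshold=3)
campaign.pledge("alice")
campaign.pledge("bob")
print(campaign.resolve())  # False – too few pledges, nobody is committed
campaign.pledge("carol")
print(campaign.resolve())  # True – threshold met, the switch happens
```

The institutional version argued for below would replace `resolve()` with contracts, lawyers, and a trusted third party, but the underlying conditional structure is the same.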

Consider journal publishing again. Certainly a sufficient mass of individual scientists could coordinate to switch publishers all at once. But no matter what individual scientists agree to, this is not a complete or perfect solution:

  • Almost no individual scientists are paying directly for these subscriptions – their universities are, often via long-term bulk contracts.
  • University hiring decisions involve people in the HR and finance departments of a university who have no interest in a coordinated “stop publishing in predatory journals” action. They only care about the prestige and credentials of the people they hire. Publications in those journals would still be a strong signal for them.
  • Tenure decisions involve more peer scientists than hiring, but would suffer at least partly from the same issue as hiring.

What’s needed for an action like this isn’t a Kickstarter-style website for scientists to sign up on – it’s coordinated action between universities at an institutional level. Many of the other examples discussed in Inadequate Equilibria fit the same pattern: the problems with healthcare in the U.S. aren’t caused by insufficient coordination between individual doctors, they’re caused by institutional coordination problems between hospitals, the FDA, and government.

(Speaking of government, there’s a whole host of other coordination problems [climate change comes to mind] that would be eminently more solvable if we had a good mechanism for coordinating the various institutions of government between countries. The United Nations is better than nothing, but doesn’t have enough trust or verification/enforcement power to be truly effective.)


The problem with the Kickstarter model is that institutions qua institutions are never going to sign up for an impersonal website and pledge $25 over a 60-day campaign to switch publishing models. The time scale is wrong, the monetary scale is wrong, the commitment level is wrong, the interface is wrong… that’s just not how institutions do business. Universities and hospitals prefer to do business via contracts, and lawyers, and board meetings. Luckily, there’s still value to be extracted here, which means that it should be possible to make a startup out of this anyway; it just won’t look anything like Kickstarter.

Our hypothetical business would employ a small cadre of lawyers, accountants, and domain experts. It would identify opportunities (e.g. journal publishing) and proactively approach the relevant institutions through the proper channels. These institutions would sign carefully crafted, non-trivial contracts bound to the success of the endeavour. The business would provide fulfillment verification and all of the other necessary components, and would act as a trusted third party. The existence of proper contracts custom-written by dedicated lawyers would let the existing legal system act as an enforcement mechanism. Since the successful execution of these contracts would provide each institution with significant long-term value, the business could fund itself over the long haul by taking a percentage of these savings off the top, just like Kickstarter.

This idea has a lot of obvious problems as well (the required upfront investment, the business implications of having its income depend on one or two major projects each year, the incentives it would have to manufacture problems, etc.), but with a proper long-term-focused investor on board it seems like this could turn into something quite useful to humanity as a whole. Implementing it is well outside of my current skillset, but I would love to see what some well-funded entrepreneur with the right legal chops could make of something like this.

Thoughts?

Going Full Walden

[A couple of years ago I was feeling pretty misanthropic and sketched out some ideas for a post which has sat in my drafts folder ever since. It’s suddenly kinda relevant because of the pandemic, so I found the motivation to dust it off and finish it. Enjoy?

No, of course I don’t actually believe any of this. Sheesh.

I feel like this maybe needs a further disclaimer: this is an idea which should not be taken seriously. Treat it as a writing exercise instead. Caveat lector.]

Other people suck. A lot.

Not you of course. You, dear reader, are the exception that proves the rule. But you know who I’m talking about – all those other people you know who are lazy, or inconsiderate, or rude. The ones who promise they’ll do something and then… don’t. The so-called “friends” who are anything but. The people who lie, or cheat, or steal. The “everybody else” in the world you just can’t trust.

It’s enough to make you want to escape civilization entirely, go off on your own in the woods. To be like Thoreau, and find your own personal Walden. After all, we don’t actually need other people, do we? Sure, our lives right now depend on supply chains and infrastructure and all that jazz, but robots can do most of that now, and yelling at the delivery guy to leave it on the porch doesn’t really count as human interaction. Or something like that.

But enough with the moping about, let’s take an actual look at what it would be like to… oh wait. That’s what we’re already all doing right now, more or less. Social distancing, social isolation, po-tay-to, po-tah-to. Hmm…

Next question then: what are the pros and cons of human interaction in the modern world? Obviously, historically, we really did need each other in a concrete way. Tribes provided food, and shelter, and protection. Going it alone had really bad odds, and it wasn’t typically possible to convince a tribe to support you without you supporting them back in some way. Whether you wanted to or not, you were pretty much forced into taking the bad of the tribe along with the good.

Today, however, we’ve abstracted a lot of that messy need away, behind money, and economics, and the internet. I can make money on Mechanical Turk without ever interacting with a person, and I can spend that money on food (UberEats), shelter (AirBnB), and protection (taxes) the same way. We can truly be homo solitarius. So what would it take to convince you that really, the benefits of other people don’t outweigh the costs? That, from a utilitarian perspective, we should all go Full Walden?

Well to start, other people suck. A lot.

I feel like I’m repeating myself, so let’s skip forward. Even when other people don’t actively suck, they’re still really messy. Human relationships are constantly shifting arenas of politics, dominance hierarchies (insert obligatory ironic lobster metaphor), and game theory, and trying to stay on top of all of that can be exhausting. This may seem counter-intuitive, but if you’re working on a project that will really help other people, then imagine how much more time and energy you’ll have for that project when you don’t have other people in your life anymore!

Now, maybe you’re willing to put up with that mess because you think that people, and human relationships, have some intrinsic value. Fine. But people are weird about that. In surveys, Americans consistently rate family (which typically consists of the other people we’re closest to) as the most important source of meaning in their lives. And yet revealed preferences tell a different story. Americans are working more than ever. Every day they spend eight hours working, three hours on social media, and a measly 37 minutes with their family. Maybe we say other people are valuable, but the way we spend our time doesn’t back that up.

If other people really aren’t that valuable to us, as our revealed preferences would attest, and their suckiness costs us a non-trivial amount of energy and creates risk, then the default position should be that other people are threats. They’re unpredictable, might seriously hurt us, and probably won’t help us much if at all… sounds like the description of a rabid dog, not our ideal of a human being. Going Full Walden starts to seem like a good deal. In this world, we should assume until proven otherwise that interacting with another person will be a net negative. People are dangerous and not useful, and so avoiding them is just a practical way to optimize our time and our lives.

The counter-argument, of course, is that we’re not quite that advanced yet. Sure you can kinda make it work with Mechanical Turk and UberEats and all the rest, but as soon as you have to call a plumber or a doctor, you’re back to dealing with other people. You can get remarkably far with no human contact, but you still can’t get all the way, and if you try then you’ll be woefully underprepared when you do have to enter the real world again. Even Thoreau didn’t spend his two years at Walden completely alone.

And besides, even if it is temporarily optimal to go full Walden, it’s not clear what the psychological implications would be. For better or for worse we seem to have evolved to live in social communities, and total isolation seems to drive people crazy. Weird.

Anywho, this is kinda rambly, seems like a good place to stop.

What is a “Good” Prediction?

Zvi’s post on Evaluating Predictions in Hindsight is a great walk through some practical, concrete methods of evaluating predictions. This post aims to be a somewhat more theoretical/philosophical take on the related idea of what makes a prediction “good”.

Intuitively, when we ask whether some past prediction was “good” or not, we tend to look at what actually happened. If I predicted that the sun would rise with very high probability, and the sun actually rose, that was a good prediction, right? There is an instrumental sense in which this is true, but also an epistemic sense in which it is not. If the sun was extremely unlikely to rise, then in a sense my prediction was wrong – I just got lucky. We can formalize the distinction as follows:

  • Instrumentally, a prediction was good if believing it guided us to better behaviour. Usually this means it assigned a majority probability to the thing that actually happened, regardless of how likely it really was.
  • Epistemically, a prediction was good only if it matched the underlying true probability of the event in question.
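The gap between these two senses can be illustrated with a quick log-scoring sketch (the probabilities here are invented for illustration): an overconfident forecast can look great in hindsight on a single lucky outcome, while still being worse in expectation than a forecast that matches the true probability.

```python
import math

def log_score(q, happened):
    """Score a forecast q by the log of the probability it assigned
    to what actually happened (higher is better)."""
    return math.log(q if happened else 1 - q)

def expected_log_score(p_true, q):
    """Average log score of forecast q when the event's true probability is p_true."""
    return p_true * math.log(q) + (1 - p_true) * math.log(1 - q)

p_true = 0.2            # the event is actually unlikely
lucky, honest = 0.9, 0.2

# Instrumentally: if the unlikely event happens anyway, the overconfident
# forecast scores better on that one outcome.
assert log_score(lucky, True) > log_score(honest, True)

# Epistemically: averaged over the true probability, the honest forecast
# wins -- and a grid search shows no forecast beats q = p_true.
assert expected_log_score(p_true, honest) > expected_log_score(p_true, lucky)
best_q = max((q / 100 for q in range(1, 100)),
             key=lambda q: expected_log_score(p_true, q))
print(best_q)  # 0.2
```

This is just the standard fact that the log score is a proper scoring rule: its expectation is maximized by reporting the true probability.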

But what do we mean by “true probability”? If you believe the universe has fundamental randomness in it then this idea of “true probability” is probably pretty intuitive. There is some probability of an event happening baked into the underlying reality, and like any knowledge, our prediction is good if it matches that underlying reality. If this feels weird because you have a more deterministic bent, then I would remind you that every system seems random from the inside.

For a more concrete example, consider betting on a sports match between two teams. From a theoretical, instrumental perspective there is one optimal bet: 100% on the team that actually wins. But in reality, it is impossible to perfectly predict who will win; either that information literally doesn’t exist, or it exists in a way which we cannot access. So we have to treat reality itself as having a spread: there is some metaphysically real probability that team A will win, and some metaphysically real probability that team B will win. The bet with the best expected long-run outcome is the one that matches those real probabilities.
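One way to make that last claim precise is the Kelly criterion: if you repeatedly split your wealth between the two teams at fair odds, your expected log growth is maximized exactly when your betting fractions match the true probabilities. A minimal sketch, with a made-up true probability:

```python
import math

def expected_log_growth(p, b):
    """Expected log wealth growth per round when team A truly wins with
    probability p, you put fraction b on A and 1-b on B, and the odds
    are fair (a winning unit on A pays 1/p, on B pays 1/(1-p))."""
    return p * math.log(b / p) + (1 - p) * math.log((1 - b) / (1 - p))

p_true = 0.7

# Search over betting fractions: the optimum lands on b = p_true, where the
# growth rate is 0; any other split loses log-wealth in expectation.
best_b = max((b / 100 for b in range(1, 100)),
             key=lambda b: expected_log_growth(p_true, b))
print(best_b)                                         # 0.7
print(round(expected_log_growth(p_true, best_b), 6))  # 0.0
```

The optimum at b = p follows from Gibbs’ inequality; at unfair odds the optimal fractions shift, but the moral stands: the best repeated-betting strategy is built on the real probabilities, not on all-in hindsight.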

While this definition of an “epistemically good prediction” is the most theoretically pure, and is a good ideal to strive for, it is usually impractical for actually evaluating predictions (thus Zvi’s post). Even after the fact, we often don’t have a good idea what the underlying “true probability” was. This is important to note, because it’s an easy mistake to make: what actually happened does not tell us the true probability. It’s useful information in that direction, but cannot be conclusive and often isn’t even that significant. It only feels conclusive sometimes because we tend to default to thinking about the world deterministically.


Eliezer has an essay arguing that Probability is in the Mind. While in a literal sense I am contradicting that thesis, I don’t consider my argument here to be incompatible with what he’s written. Probability is in the mind, and that’s what is usually more useful to us. But unless you consider the world to be fully deterministic, probability must also be in the world – it’s just important to distinguish which one you’re talking about.

The FRACTAL Model

I was thinking about relationships and playing around with silly acronyms and came up with the following. It is by no means true, or useful, but I thought I’d share. One could say that a good relationship is fractal, meaning that it is built on:

Fun
Respect
Alignment
Care
Trust
Arousal
Limerence

Don’t read anything into the order, fractal was just a much better word than… cratfal. Or catrafl. A cat-raffle, now there’s an idea.

Pop quiz! What would you say the fractal model misses?

It’s Not About The Nail

[This is hardly original; I’m documenting for my own sake since it took so long for me to understand.]

There’s an old saw that when a woman complains she wants sympathy, but when a man hears a complaint, he tries to solve the problem. The viral YouTube video “It’s Not About the Nail” captures it perfectly.

Of course it’s not strictly divided by gender; that’s just the stereotype. And the underlying psychological details are fairly meaty; this article captures a lot of them pretty well for me.

I’ve known about all this for a long time now, and it’s always made sense at a sort of descriptive level of how people behave and what people need. But despite reading that article (and a good reddit thread) I’ve never really understood the “why”. What is the actual value of listening and “emotional support” in these kinds of scenarios? Why do people need that? Well, I finally had it happen to me recently when I was aware enough to notice the meta, and thus to write this post.

I now find it easiest to think about in terms of the second-order psychological effects of bad things happening. When a bad thing happens to you, that has direct, obvious bad effects on you. But it also has secondary effects on your model of the world. Your mind (consciously or subconsciously) now has new information that the world is slightly less safe or slightly less predictable than it thought before. And of course the direct, obvious bad effects make you vulnerable (not just “feel” vulnerable, although normally that too – they make you actually vulnerable because you’ve just taken damage, so further damage becomes increasingly dangerous).

Obviously sometimes, and depending on the scenario, the first-order effect dominates and you really should just solve that problem directly. This is what makes the video so absurd – having a nail in your head is hard to beat in terms of the first-order effects dominating. But in real versions of these cases, sometimes the second-order effects are more significant, or more urgent, or at the least more easily addressable. In these cases it’s natural to want to address the second-order effects first. And the best way to do that is talking about it.

Talking about a problem to somebody you have a close relationship with addresses these second-order effects in a pretty concrete way: it reaffirms the reliability of your relationship in a way that makes the world feel more safe and predictable, and it informs an ally of your damage so that they can protect you while you’re vulnerable and healing. But of course you don’t accomplish this by talking directly about the second-order problem. The conversation is still, at the object level, about the first-order problem, which is why it’s so easy to misinterpret. To make it worse, the second-order problems are largely internal, and thus invisible, so it’s easy for whoever you’re talking to to assume they’re “not that bad” and that the first-order problem dominates, even when it doesn’t.

Working through this has given me some ideas to try the next time this happens to me. At a guess, the best way to handle it is to open the conversation with something like “I need you to make me feel safe” before you get into the actual first-order problem, but I guess we’ll see.