An Open Critique of Common Thought

[I was going through a bunch of old files and found this gem of an essay. If the timestamp on the file is accurate it’s from February 2010, which means it’s almost exactly ten years old and predates this blog by about three years. Past me was very weird, so enjoy!]

I am writing this essay as a critique of a fundamental and unsolvable problem in philosophy today. Our greatest minds refuse to acknowledge this problem, so I have humbly taken it upon myself to explore more fully this hidden paradox.

Amongst all of the different philosophies, religions, and world-views, there is one common theme, so utterly pervasive that it has never before been questioned, yet so utterly false upon deeper inspection that it boggles the mind. It is my hope that this short essay will act as a call to arms for the oppressed masses in the field of higher thought, and prompt them to action demanding an end to this conspiracy.

The problem, ladies and gentlemen, in long and in short, is that of existence. Every thought, every idea, every concept that humankind has ever had rests on the central pillar, the core belief, that we exist. Not content, of course, with this simpler sophistry, humankind has embarked on an even more heinous error of logic – we assume not only that we exist, but that other things exist as well.

It is at this point, of course, that your conditioning takes over – “Of course we exist”, you say, “how could it be otherwise”? This is the knee-jerk reaction typical of an oppressed thinker today, and the prevalence of this mindless assertion – calling it a failure of an argument would be too kind – worries me more than I can say about the future of our society. Beyond the obvious lack of critical thinking evidenced by such lemming-like idiocies, this simple error is the cause of deeper, more dangerous problems as well.

But I digress. I will leave the deeper analysis of this crisis to the historians who survive it, and turn my own meagre talents to the task of alerting the public of this travesty. It is with heart-felt distress that I type my final plea to you, the thinking public – “Do you believe”?

Milk as a Metaphor for Existential Risk

[I don’t believe this nearly as strongly as I argue for it, but I started to pull on the thread and wanted to see how far I could take it]

The majority of milk sold in North America is advertised as both “homogenized” and “filtered”. This is actually a metaphor created by the dairy industry to spread awareness of existential risk.

There has been a lot of chatter over the last few years on the topic of political polarization, and how the western political system is becoming more fragile as opinions drift farther apart and people become more content to simply demonize their enemies. A lot of causes have been thrown around to explain the situation, including millennials, boomers, free trade, protectionism, liberals, conservatives, economic inequality, and the internet… There’s a smorgasbord to choose from. I’ve come to believe that the primary root cause is, in fact, the internet, but the corollary to this is far more frightening than simple cultural collapse. Like milk, humanity’s current trend toward homogenization will eventually result in our filtration.

The Law of Cultural Proximity

Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States have Overton windows that tend to overlap on some policies (the punishment of theft) but not on others (social welfare).

Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a fairly real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran, China, or North Korea. In the other direction, there are distinct cultures with different norms around things like gun control, entirely within the U.S. Like all categorizations, the lines are blurry at times.

The key factor in drawing cultural lines is interactional proximity. This is easiest to see in a historical setting because it becomes effectively identical to geographic proximity. Two neolithic tribes on opposite ends of a continent are clearly and unambiguously distinct, whereas two tribes that inhabit opposite banks of a local river are much more closely linked in every aspect: geographically, economically, and of course culturally. Because the two local tribes interact so much on a regular basis, it is functionally necessary that they share the same cultural norms in broad strokes. There is still room for minor differences, but if one tribe believes in ritual murder and the other does not, that’s a short path to disagreement and conflict.

Of course, neolithic tribes sometimes migrated, and so you could very well end up with an actual case of two tribes coming into close contact while holding very different cultural norms. This would invariably result in conflict until one of the tribes either migrated far enough away that contact became infrequent, became absorbed into the culture of the other tribe, or was wiped out entirely. You can invent additional scenarios with different tribes and different cultures in different geographies and economic situations, but the general rule that pops out of this is as follows: in the long run, the similarity between two cultures is proportional to the frequency with which they interact.

The Great Connecting

Hopefully the law of cultural proximity is fairly self-evident in the simplified world of neolithic tribes. But now consider how it applies to the rise of trade and technology over the last several millennia. The neolithic world was simple because interactions between cultures were heavily mediated by simple geographic proximity, but the advent of long-distance trade started to wear away at that principle. Traders would travel to distant lands, and wouldn’t just carry goods back and forth; they would carry snippets of culture too. Suddenly cultures separated by great distances could interact more directly, even if only infrequently. Innovations in transportation (roads, ship design, etc.) made travel easier and further increased the level of interaction.

This gradual connecting of the world led to a substantial number of conflicts between distant cultures that wouldn’t even have known about each other in a previous age. The victors of these conflicts formed empires, developed new technologies, and expanded their reach even farther afield.

Now fast-forward to the modern day and take note of the technical innovations of the last two centuries: the telegraph, the airplane, the radio, the television, the internet. While the prior millennia had seen a gradual connecting of the world’s cultures, the last two hundred years have seen a massive step change: the great connecting. On my computer today, I could easily interact with people from thirty different countries around the globe. Past technologies metaphorically shrank the physical distance between cultures; the internet eliminates that distance entirely.

But now remember the law of cultural proximity: the similarity between two cultures is proportional to the frequency with which they interact. This law still holds over the long run. However, the internet is new, and the long run is long. We are currently living in a world where wildly different cultures are interacting on an incredibly regular basis via the internet. Unsurprisingly, this has led to a lot of cultural conflict. One might even call it cultural war.

Existential Risk

In modern times, the “culture war” has come to refer to the conflict between the left/liberal/urban and right/conservative/rural in North American politics. But this is just the most locally obvious example of different cultures with different norms being forced into regular interaction through the combination of technology and the economic realities that technology creates. The current tensions between the U.S. and China around trade and intellectual property are another aspect of the same beast. So are the tensions within Europe around immigration, and within Britain around Brexit. So was the Arab Spring. The world is being squished together into a cultural dimension that really only has room for one set of norms. All wars are culture wars.

So far, this doesn’t seem obviously bad. It’s weird, maybe, to think of a world with a single unified culture (unless you’re used to sci-fi stories where the unit of “culture” is in fact the planet or even the solar system – the law of cultural proximity strikes again!) but it doesn’t seem actively harmful as long as we can reach that unified state without undue armed conflict. But if we reframe the problem in biological and evolutionary terms then it becomes much more alarming. Species with no genetic diversity can’t adapt to changing conditions, and tend to go extinct. Species with no cultural diversity…

Granted, the simplest story of “conditions change, our one global culture is not a fit, game over humanity” does seem overly pessimistic. Unlike genetics, culture can change incredibly rapidly, and the internet does have an advantage in that it can propagate new memes quite quickly. However, there are other issues. A single global culture only works as long as that culture is suitable for all the geographic and economic realities in which people are living. If the internet forces us into a unified global culture, but the resulting culture is only adaptive for people living far from the equator… at best that creates a permanent underclass. At worst it results in humanity abandoning large swaths of the planet, which again looks a lot like putting all our eggs in one basket.

Now that I’ve gotten this far, I do conclude that the existential risk angle was maybe a bit overblown, but I am still suspicious that our eventual cultural homogeneity is going to cause us a lot more problems than we suspect. I don’t know how to stop it, but if there were a way to maintain cultural diversity within a realm of instant worldwide communication, that seems like a goal worth pursuing.

Bonus: I struggled to come up with a way to work yogurt (it’s just milk with extra “culture”!) into the metaphor joke, but couldn’t. Five internet points to anybody who figures out how to make that one work.

Success over Victory: Some Thoughts on Conflict Resolution

One afternoon several years ago, I was busy coding away at my software job when I noticed a disagreement spiralling out of control on our internal chat system. Conflict is stressful, and this one had nothing to do with me, so it would have been easy to ignore. But I’m a nosy do-gooder at heart, so instead of ignoring it I did what I always do: I made resolving it my business, much to the surprise of the initial participants (note: I had already earned enough trust to insert myself into the conversation without ruffling too many feathers; this isn’t always advisable).

After the dust had settled, a junior developer on my team approached me to ask how I had accomplished the minor miracle of getting everyone back together pulling in the same direction. This turned into an extended conversation about conflict resolution, during which I was forced to organize my many thoughts on that topic into words (and more than one whiteboard diagram) for the first time. I am forever grateful to that person for pushing me to work through my thoughts and express myself.

By the end of the conversation, I had a substantial amount of material in my head, and I promised to write a blog post explaining it all. Several years later (oops), this is that post.


We encounter conflict every day. Perhaps you’re having an unavoidable Thanksgiving-dinner conversation about politics, or maybe you’re chatting with your neighbour when you realize that you have very different views on a recent change by the local sports team. For me, as for many people, a lot of these conflicts tend to arise at work: dealing with unreasonable customers, unreasonable coworkers, or unreasonable managers is just part of the job. Whether your work is blue-collar, white-collar, retail, or even raising chickens, conflict happens whenever two people want or believe different things, and that isn’t exactly rare.

With so many conflicts in our day-to-day lives, resolving them becomes an important life skill. Typically, we do this using communication; it’s thankfully rare that minor conflicts devolve into violence. And yet, communicating well is one of the hardest parts of modern life. Technology has created a number of new ways to communicate our ideas, those ideas grow more complex every day, we spend less and less time face to face, and partisan political bias seems to be driving us further and further apart. Even so, communication is still our main approach for resolving most of the conflicts we encounter.

Given its importance, it shouldn’t be surprising that conflict resolution is a topic already rich in conventional wisdom, academic studies, and self-help books; practically speaking I don’t have much that is new to contribute. However, at the heart of many great innovations is the combination of multiple ideas in different fields, and that’s what I’m going to try and do here. I’ll be mixing together insights collected from a number of places, including the epistemological debates that I went through as part of my religious journey, the online “rationalist” community (a group with a particular focus on tools and processes for more effectively seeking the truth), a brief but interesting career as a manager of people, and of course several self-help books which touch on conflict resolution in some fashion. Anchoring all of these is my unusually intense dislike of interpersonal conflict. It’s just a part of who I am, so I’ve spent a lot more time resolving conflicts and thinking about this in my own life than I think is normal, or probably healthy.

I debated leaving out the parts that are truly unoriginal to focus on “the good stuff”, but I think that would be doing a disservice to the topic. It’s all important, and just because some of it has been covered elsewhere doesn’t mean it’s not required to be successful. I’ve broken the material into four sections which I call Attitude, Communication, Comprehension, and Resolution, and if you’re starting fresh I recommend reading them all, in that order. That said, most of the material that is in any way “new” is in the last section on Resolution.

Finally, I want to note three things up front. First, that all of this is focused on conflict resolution via communication. If a conflict has reached the point of physical violence then the rules are very different; some of what’s in here might still be applicable, but I make no warranty to that effect. Second, that this is partly written from the perspective of a neutral third-party moderator. Everything here is just as applicable if you’re involved in the conflict, but it becomes harder to use effectively. And third, that this essay is entirely focused on resolving conflicts, not making decisions. Effective decision-making and consensus-building in the context of an unresolved (or unresolvable) disagreement is a whole other problem deserving of its own essay.

OK, here we go…


War is merely the continuation of politics by other means.

Carl von Clausewitz

The tools of conflict resolution bear a striking resemblance to the tools of conflict. In practice, this means that one of the most important parts of successful conflict resolution is your attitude. Otherwise you’re liable to misuse the tools you have available. A good attitude will naturally guide you to the right decisions, smooth out minor miscommunications, and build trust. It’s the foundation on which all the other parts of this essay are built. But what does “a good attitude” actually mean? While people have many different attitudes toward different parts of their life, there are four specific ones which I think are important for conflict resolution.

The first attitude has to do with what you’re aiming for. Human beings have an unfortunate tendency to try and “win” arguments in a purely social sense (e.g. via ad hominem attacks), but victory of that kind is typically not the success you’re interested in if you’re reading this essay. Sometimes, the success we seek is truly as simple as “resolving the conflict”, though usually it’s not. More often, “success” really means uncovering the truth, or finding a solution where everyone gets what they want, or clearing up some underlying miscommunication. Know what it is you’re really aiming for, and set your attitude accordingly. Aim for success, not for victory.

The second attitude is far simpler since it already has a name: humility. Accept that sometimes you just don’t have all the information. Accept that sometimes you make mistakes. Accept that sometimes you honestly just change your mind (or have it changed by someone else). Our ego doesn’t like admitting to these things, but they do happen, and digging in your heels to protect your ego is the fastest way to unnecessarily prolong a conflict.

The third attitude is an attitude toward others. Just like we have an unfortunate tendency to try and “win” arguments, we also have an unfortunate tendency to view other people in a conflict as “enemies”. Instead, it is far better to respect and trust your conversational partners, and always assume they are operating in good faith. That one is so important I’m just going to repeat it: assume good faith.

People often object that this is a naive or dangerous assumption, and in some settings it certainly can be. Past experience with a particular person certainly trumps any possible generalized advice. But I would argue that true bad faith is far, far rarer than most people realize. I see numerous conflicts every year which could have been trivially resolved if everyone involved had assumed good faith instead of jumping to “you’re a terrible person”.

Finally, on a somewhat different tack from the others, I want to talk a bit about emotions. There is often an attitude (especially among programmers and other more analytically-inclined folks) that emotions are somehow irrelevant to a debate and should be ignored. I’m certainly guilty of this belief myself sometimes. But most of the time in practice I find this to be both false, and quite unhelpful in dealing with conflict. This is probably worth an entire post to itself, but I’ll keep it brief: your emotions carry real, valuable information about what you believe and what you value. You shouldn’t let them rule you, and quite often they’re incorrect or haven’t caught up to the moment yet, but they are still both important and useful. Pay them their due.

In this section I’ve covered four key attitudes which I find helpful and which are foundational in how I deal with conflict resolution. I hope they’re as useful to you as they are to me:

  1. Aim for success, not victory.
  2. Be humble.
  3. Assume good faith.
  4. Pay attention to your emotions (but don’t let them rule you).


The medium is the message.

Marshall McLuhan

Attitudes are general things, and if you’re anything like me you crave more specific advice. Say this. Say it in this way. Don’t say that. Don’t use words that are longer than 10 letters when translated into Brazilian Portuguese. That sort of thing. But instead of focusing on what or what not to say, the most useful specific advice I can give is actually to focus on where you communicate.

Marshall McLuhan coined the famous phrase “the medium is the message”, and oh boy was he ever right. Every medium has different characteristics which impact how we communicate, and how conflict will spread or resolve. Here are just a few of the characteristics that matter:

  • Speed of communication, aka bandwidth. Most people can speak much faster than they can type.
  • Speed of response, aka latency. This can be anywhere from snail mail, which takes days per message, to instant messaging, which is usually real-time.
  • Ability to absorb cross-talk. Laggy video-conferencing is particularly bad at this.
  • Audience size. Compare an in-person conversation to an email list with hundreds of subscribers.
  • Participant size. Are those hundred subscribers just reading, or can they add their own opinions to the mix?
  • Available side-channels. In-person communication gives you a whole bunch of important extra communication channels like tone of voice, facial expression and posture.
  • Norms. Most media are bound to specific codes of behaviour, either explicitly or implicitly.

With all of these variables, it’s no surprise that picking the right venue for your conflict is hugely valuable. Of course, you often don’t get to choose where a conflict starts; it just happens. But you always have the opportunity to move it, and that’s usually pretty easy when everyone involved is operating in good faith. Just say “hey, this would be easier to talk about in person, do you mind if I swing by your desk?” and you’d be amazed at how smoothly things go. Changing to a better venue is often both the easiest and the most effective thing you can do to resolve a conflict.

That said, with so many possible characteristics to consider it can be pretty daunting to figure out which one to suggest. Fortunately there’s an easy rule of thumb: in-person trumps everything, always. If in-person isn’t possible because people are physically too far apart, video-conferencing can be a decent substitute as long as it isn’t too laggy. If neither of those are realistic, I’ve had pretty good luck aiming for whatever venue has the highest bandwidth, lowest latency, and smallest audience.

There is one major caveat to the in-person rule, however: the number of participants. If you have more than six people involved, the value of in-person conversation falls off pretty sharply, and you might be better off with a venue that handles larger groups. Of course, it’s pretty rare that more than six people really need to be there; usually you can pick representatives from each group or otherwise cut the participants down to a reasonable size.

This section is short enough it doesn’t necessarily need a summary, but I wrote one anyway:

  1. Use the best available venue or communication medium.
  2. In-person trumps everything.
  3. Keep the number of participants small.


Seek first to understand, then to be understood.

Stephen Covey

A good attitude and a good venue will carry you a surprisingly long way, but of course they’re not always sufficient on their own. The next thing I try is to temporarily ignore whatever I believe and work to understand both sides of the argument equally. I called this section “comprehension”, but it could equally just be called “listening”, or maybe more precisely “active listening”. Honestly, Stephen Covey has already said most of this far better than I can in his book “The 7 Habits of Highly Effective People”. That book is of course the source of the quote that opens this section; “seek first to understand” is habit number five.

The value of truly understanding both sides of a conflict cannot be overstated. Even when I’ve nominally resolved a conflict, I get antsy if I still don’t really grok one side or the other. More than once, trying to scratch that itch “after the fact” has turned up a hidden requirement or pain point which would have just caused more grief down the road. Remember, success is rarely as simple as just making the conflict go away; you can’t know if you’ve truly found success (not just victory) unless you properly understand both sides.

But understanding both sides isn’t just an after-the-fact thing. It also has concrete value in guiding the resolution of a conflict when you’re caught in the middle of it, because it allows you to properly apply the principle of charity. The principle of charity says that you should try and find the best possible interpretation for people’s arguments, even when they aren’t always clear or coherent. It goes back to assuming good faith; maybe an argument sounds crazy, but it makes sense to the person saying it. The only way to apply the principle of charity in many cases is to start by understanding the argument, and the person making it.

Understanding both sides is also a key part of something called “steelmanning”, which is the process of actively finding better versions of another person’s arguments. This may seem like an odd thing to do in a conflict, but only if you’ve accidentally slipped back into the habit of aiming for victory instead of success. Assume good faith, and work with both sides to fully develop the points they’re trying to make. Doing this brings clarity to the discussion which can often illuminate the crux of the conflict.

Of course sometimes being charitable is hard. People may make arguments which just seem… wrong. Crazy. Even harmful. (The topic of whether an argument can be harmful in and of itself is a fascinating one I don’t have space for here. Whatever you believe, it isn’t relevant to the point I’m trying to make). A lot of people would suggest that trying to understand or improve an argument like that is a waste of time, or even ethically wrong. I disagree. I believe that truly understanding both sides of a conflict is fundamentally valuable, no matter what that conflict is. It clarifies. It builds empathy. It expands your knowledge of the world. And even if by the end you still deeply disagree, understanding the argument will let you articulate a better response.

The principle is all well and good, but getting to that level of understanding in practice can also be really hard. It’s a skill that gets easier with repetition, so I would encourage you to practice it as much as possible, even for small conflicts where it might not seem necessary. Build that habit when it’s easy, and you’ll find that it becomes automatic even when it’s hard. Still, if you’re trying and you’re really stuck, I’ve got a trick which helps me when I just can’t seem to connect with what somebody is saying.

To better understand a different perspective, try splitting an argument up into the separate pieces of a problem and a solution. A lot of arguments fit into this pattern quite naturally, and I often find that while I couldn’t quite grasp the argument as a whole, I both understand and even agree with the problem; it’s the solution that’s causing me issues. Even then, having the problem separated out and well-defined can lead me to understanding the solution too, because it frequently highlights some unstated premise which I wasn’t aware of. This is also a great way to practice steelmanning, since making implied premises explicit is a great way to improve an argument; people are pretty bad at this by default.

I should also note that if this trick kind of works for a situation, but doesn’t quite, you should try making the problem even more general. For example, if the argument is “Mexicans are taking our jobs, so we should stop immigration from Mexico”, it’s tempting to define the problem as just “Mexicans are taking our jobs”, but it’s probably more productive to define it as something like “something is taking our jobs” or even “our economic prospects suck”. This pulls out an implied premise (that the cause is Mexican immigrants) which may be the real point of disagreement, but even apart from that, finding a problem which you can be sympathetic to is worth its weight in gold. With this kind of problem in hand, you can reframe the conflict as a cooperative mission, working together to find the best solution to the problem. You can start to look for success, not victory.

It’s often said that the real acid test for truly understanding somebody’s argument is the ability to explain it back to them in a way they will agree with. This is good, and you should definitely aim for this (trying to explain it back is also a useful trick for conflict resolution in general), but sometimes I find it useful to use a slightly higher standard. I consider myself to really properly understand an argument when I can not only explain it to the person who made it, but can also explain (to myself, not to them) how they came to believe it. Both sides of the conflict are part of the universe, so to understand the universe you have to know how both sides came to be.

This may seem like an esoteric or excessively demanding standard, and it isn’t necessary all the time. But there are interesting and practical sources of conflict where this is a really useful approach that provides a lot of insight. Religion is my favourite example of this; most theistic worldviews can pretty naturally explain the existence of atheists, but a lot of atheists have a hard time explaining the existence of theists. “People are dumb” may be emotionally satisfying, but doing the work of constructing a real explanation builds a lot of empathy and ends up sharpening the resulting argument.

I’ve covered a lot of different ground in this section, but I think I can boil it down to four key points to take away:

  1. Seek always to understand.
  2. Actively look for the best version of everyone’s arguments.
  3. Separate the problem and the solution.
  4. To truly understand, you must explain how both sides came to be.


Now, finally, we get to the meat of this post. You’ve got the right attitude, you’re in a good venue or communication medium, and you think you’ve got a pretty good grasp of what both sides are saying. How do you actually get to a successful resolution? For me, it all boils down to understanding the building blocks of how we argue, and how we disagree.

Philosophers and linguists have spent millennia studying the nature of logic, rhetoric, and argument, all the way from Aristotle through to predicate logic and beyond (the Wikipedia articles are unfortunately technical, but this Stanford site seems to have a more accessible introduction). This body of work is another great tool that can be helpful in the previous section on understanding both sides of an argument.

While rhetoric and disagreement are obviously related, the nature of disagreement is much less studied. The rationalist community has recently started to dig into it, coming up with some interesting ideas like double-cruxing, but I don’t know of any comprehensive theory from that group.

In a very brief post in 2017 (my first failed attempt at what would become this post) I sketched out a basic categorization of disagreements with almost no explanation. Two years later, my core model remains almost the same. While there can be many forms of valid argument and many kinds of propositions to slot into those arguments, there are in fact only four kinds of atomic disagreement: fact, value, meaning, and “empty”. As far as I can tell every disagreement must belong to one of these categories, or be a complex combination of smaller disagreements. I’ll tackle them one at a time, including tips for resolving each type, and then talk about how to understand and break down more complex combinations.

Disagreements of Fact

Disagreements of fact are disagreements over how the world was, is, or will be. They are fundamentally empirical in nature: if I believe that there are only ten chickens on the planet and you believe that there are more, that’s something we can physically check; we just have to go out and count enough chickens. Disagreements about historical facts are often harder to resolve (we can’t just count the chickens alive in the year 1500 to see how many there were) but the factual nature of the disagreement remains; there is a single right answer, and we just have to find it.

Resolving disagreements of fact is the specialty of science and the scientific method. When a disagreement of fact is not directly resolvable through empirical observation, hunt for places where the core disagreement results in differing predictions about something that is directly observable. Maybe if there were as many chickens as you believe, the nutrient content of human skeletons from that era will back you up (I really don’t know, historical chicken population is not my specialty and this example is getting out of hand).

Of course, some disagreements of fact may not be perfectly resolvable with the technology we have available to us. The nutrient content of skeletons may give some indication of chicken population, but it’s not going to give us a precise count. In these cases, it’s best to fall back on reasoning based on Bayesian statistics. What are your prior confidence levels, and how do the various pieces of evidence affect them? What else can you easily empirically check which will impact those confidence levels?
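To make that Bayesian fallback concrete, here's a tiny numerical sketch. The prior and likelihood numbers are entirely made up for illustration; only the update rule itself is standard:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# Hypothesis H: "the historical chicken population was large".
# Evidence E: "skeletons from that era show high protein intake".

prior = 0.30            # initial confidence that H is true (invented number)
p_e_given_h = 0.80      # chance of seeing this evidence if H is true
p_e_given_not_h = 0.20  # chance of seeing it anyway if H is false

# Total probability of the evidence across both hypotheses.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.632: this evidence should raise our confidence
```

Each new piece of evidence just repeats the same update, with the previous posterior as the new prior.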

Even then, there are some cases where there just don’t seem to be any checkable predictions that come out of a disagreement of fact (the various debates around string theory were like this for a while). The nice thing is that when you hit a disagreement like this, it somehow stops mattering. If there are no differences in the predictions that can be tested with current technology, then until that technology exists, the two possible worlds are by definition indistinguishable.

Finally, for cases about the future, it’s important to distinguish between disagreements about how the world will be (for example whether there will be more or fewer chickens tomorrow), and disagreements about how the world should be (for example whether we ought to breed more chickens). Disagreements about how the world will be can sometimes be resolved like historical facts, by looking for more immediately checkable predictions. They can also be resolved just by waiting until the future comes to pass. On the other hand, disagreements about how the world should be take us into our next type of disagreement: disagreements of value.

Disagreements of Value

Disagreements of value are disagreements over what we ought to value. This tends to play out more concretely in disagreements over how the world ought to be, and what we ought to do to get there. For example, if I believe that we should value chickens’ lives as much as human lives and you believe we should value them less, that is obviously a disagreement over value. There’s no checkable fact or testable prediction, now or in the future; the disagreement is fundamentally about what is important. Of course in practice you’re unlikely to see a direct disagreement over the value of chicken lives; you’re more likely to see a disagreement over whether humans should eat chickens or not, but it’s often the same thing.

Disagreements of value are difficult to deal with. This is often because there is actually a complex multi-part disagreement masquerading as a simple value disagreement (for example a disagreement over whether we “ought” to be vegetarian may be about environmental factors as much as it is about the value of a chicken’s life). The key thing to pay attention to is whether the values under debate are instrumental or terminal.

If the values under debate are instrumental (for example vegetarianism as a means to value chicken life), then things are by definition complex, as there are at least two possible underlying disagreements. The root cause could be a disagreement over the terminal value (whether a chicken’s life should be valued) or a disagreement over the best way to achieve that terminal value (our consumption of chicken has caused a great increase in the total number of chickens, which might be a more effective way to value their lives). When you see a debate over an instrumental value, apply Hume’s guillotine to slice apart the pieces and find the more fundamental disagreement. Keep in mind that there’s nothing to stop both pieces from being sources of disagreement at once, in which case you should at least try and take them one at a time.

Recognizing instrumental value debates can be tricky, as can breaking them down into their constituent parts. In practice, one of the best ways to do both of these things is to simply keep asking the question “Why does that matter?”, and not accept “it just does” as an answer. When pressed, most people will be able to articulate that, for example, they actually value vegetarianism because they value the lives of animals.

The other way to recognize many instrumental value debates is to look for two apparently-unrelated values being traded off against one another. Imagine we’re building a coop for all of these chickens; if one person thinks we should prioritize security against foxes, while the other thinks we should prioritize the number of chickens it can hold, it might seem like they’re at an impasse. But this is actually an instrumental value debate that can easily be resolved; all we have to do is “normalize” the units under debate. Fox-security and number-of-chickens are not directly comparable values, but in practice they’re probably both backed by the same terminal value: maximizing the number of eggs we can collect per day. By normalizing the two sides into a single terminal value unit, we’re left with a simple disagreement of fact which can be resolved via experimentation: which approach results in more eggs?
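The normalization step above can be sketched as a toy calculation. Every number here is invented purely for illustration; the point is that once both designs are scored in the shared terminal unit (expected eggs per day), the argument becomes a testable disagreement of fact about the inputs:

```python
# Two hypothetical coop designs, each championed in its own "native" unit.
# Design A prioritizes fox security; design B prioritizes capacity.
designs = {
    "fortified": {"capacity": 20, "survival_rate": 0.99},  # fox-proof, smaller
    "spacious":  {"capacity": 30, "survival_rate": 0.90},  # roomier, riskier
}

EGGS_PER_CHICKEN_PER_DAY = 0.8  # assumed average laying rate

def expected_eggs(design):
    # Normalize both native units into the shared terminal unit: eggs per day.
    return design["capacity"] * design["survival_rate"] * EGGS_PER_CHICKEN_PER_DAY

for name, design in designs.items():
    print(name, round(expected_eggs(design), 1))
# fortified -> 15.8 eggs/day, spacious -> 21.6 eggs/day: the remaining
# argument is over the input estimates, which can be settled empirically.
```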

Unfortunately, if the values under debate are truly terminal (back to whether chickens’ lives should be valued as human lives) then there isn’t a good way to resolve this conflict. The conflict will exist until somebody changes their core values, and that’s incredibly hard to do. The best “hack” I’ve found is to come up with an unrelated value or problem which both participants agree is more important, and thus makes the current conflict either irrelevant or at least not worth arguing over. Whether a chicken’s life is worth a human life tends to take a backseat when the human’s house is on fire.

(note: I am not advocating arson as a means of avoiding debates about vegetarianism)

Disagreements of Meaning

The third kind of disagreement is a disagreement over meaning. This is best understood by examining the classic question: if a chicken tree falls in the forest and nobody hears it, does it make a sound? While on the surface a disagreement on this point may seem to be a disagreement of fact, it’s almost always instead a disagreement of meaning.

Most reasonable people will agree to the same core facts of what happens when a tree falls in the forest. First, they’ll agree that it produces vibrations in the air, also known as sound waves. Second, they’ll agree that those sound waves dissipate before reaching anybody’s ears, as stipulated in the question. These two points actually cover all of the questions of fact relevant to the disagreement; the conflict is really over the meaning of the word “sound”. Does it refer to the simple production of sound waves (in which case the tree makes a sound), or does it refer to the sensation created by sound waves heard by a person (in which case it does not)?

The nice thing about disagreements of meaning is that they almost never matter. Language is socially negotiated, and at the end of the day word meanings are entirely arbitrary. The only thing you need to do to resolve a conflict like this is be very clear about your definitions, and the conflict magically evaporates. Replacing problem words with new nonsense words that have clear definitions is a great trick for this (borrowed from this Less Wrong post on the same topic).

The one case where the meaning of words does legitimately matter is in law. As a friend of mine so nicely put it, “laws are stored in words”, and interpreting the meaning of those words can impact how the law is applied, who goes to jail, etc. Ultimately though, word definitions are still arbitrary and will even shift over time, meaning that these disagreements are not resolvable without getting really deep into the philosophy of law (the question of literal meaning vs author’s intent, just to start). Fortunately we have a standard method for making these decisions anyway: judges and juries. The result is that the law evolves over time, just like the people that interpret it, and the language that stores it.

The other case where people like to argue that word meanings matter is when certain words are offensive, disrespectful, or even harmful (if that’s a thing you believe words can be). Fortunately this one is a bit more clear-cut: the use of these words is a thing people can disagree about, but it’s not a disagreement of meaning. It actually has two parts, tying together an instrumental or potentially terminal value (we should not offend or harm people) with a factual claim (some proportion or group of people are offended or harmed by a given word). The meaning of the word no longer matters at all.

Empty Disagreements

Empty disagreements are a late addition to this essay, and are quite different from the other three types. In a certain sense they are not real disagreements at all, and are merely what happens when disagreement becomes disconnected from any tangible point. But in practice they are fairly common, and my goal with this essay is ultimately a practical one.

Empty disagreement happens when there is no fundamental disagreement of fact, value, or meaning between two parties, but something in the situation causes them to start or continue a conflict regardless. This is usually related either to social status (when someone knows they’re wrong but won’t back down to avoid losing face), or to internal emotional state (when someone is caught up in the heat of the moment). In both cases, it is ideas from the prior sections of this essay that are the key to a successful resolution.

Status-based conflicts are frequently best solved by changing venue, usually to one with a smaller audience. In most cases people are happy to resolve the conflict themselves once doing so would no longer cost them status. Things become trickier if this isn’t possible, or if the status issue is actually between the two people involved in the conflict. You can try to build enough trust to overcome the status issue, or compensate for it by making an unrelated concession, but ultimately you’ll have to resolve the status issue to resolve the conflict.

Similarly, heat-of-the-moment conflicts are usually best solved by committing more strongly to the four attitudes I described in the first section of this essay. Breathe deep, and aim for success instead of victory. Use humility to build the trust necessary to reach that point, and never lose sight of the fact that both sides are operating in good faith (mistakes in the heat of the moment are still fundamentally different from malice). If necessary, suggest taking a five-minute break to go to the washroom or get a drink of water; time away is often all that is really needed for people to cool down.

Complex Disagreements

As we’ve gone through the four atomic types, we’ve seen a couple of examples of complex disagreements masquerading as simpler forms of disagreement. This is typically how they show up in practice, since if the complexity is obvious the participants will break it apart themselves without thinking about it. The fact that instrumental values show up frequently in this way is also not a coincidence; the combination of a value with a fact to produce an instrumental value is one of the easiest signs of a complex disagreement that needs to be split up.

The other major sign of a complex disagreement is the appearance of the argument forms of propositional and predicate logic (another great reason to study those topics). Forms like modus ponens are how complex arguments get built up, and thus naturally how complex disagreements can be broken down. Of course, people rarely phrase their arguments in pure logical form, so you’ll probably have to do some steelmanning along the way, but if you’re lucky somebody will make their arguments in roughly the right shape.

As mentioned in the section on comprehension, regular practice is the best way to build these skills. Even when an argument is really trivial (for example “A five ounce bird could not carry a one pound coconut!” while talking about the carrying capacity of swallows), it can be worth breaking down. In its pure logical form, that example becomes something like:

  • P1: If a bird weighs five ounces, it cannot carry a coconut.
  • P2: Swallows weigh five ounces.
  • C: Swallows cannot carry coconuts.

Just like with instrumental values, we now have two different pieces (P1 and P2) where either could be the source of disagreement. By narrowing in on the root cause, or at least taking them one at a time, you’ve made the conflict smaller and more focused. Once you’ve gone down a few layers you’ll usually end up either at a testable disagreement of fact or a shared terminal value, and will be able to resolve it appropriately. The goal with a complex disagreement is always to break it down and deal with the pieces, not to swallow it whole.
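As a toy sketch of taking the premises one at a time, the syllogism above can be encoded directly. The truth values swept here are arbitrary stand-ins for whatever each participant actually grants:

```python
from itertools import product

def conclusion_follows(p1, p2):
    # Modus-ponens chain: "swallows cannot carry coconuts" only follows
    # if BOTH premises (light birds can't carry; swallows are light) hold.
    return p1 and p2

# Sweep granted/disputed combinations to see which disputes actually
# matter: disputing either premise alone is enough to block the conclusion.
for p1, p2 in product([True, False], repeat=2):
    print(f"P1 granted={p1!s:5} P2 granted={p2!s:5} "
          f"-> conclusion follows: {conclusion_follows(p1, p2)}")
```

The same decomposition works on real arguments; the only hard part is extracting the premises in the first place.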


Wow. What started as a quick blog post has turned into a six-thousand-word essay, and I still feel like there’s more I could say. Since I like bullet points, I’ll try and summarize all of my recommendations into a nice little list to leave you with.

  • Aim for success, not victory.
    • Be humble.
    • Assume good faith.
    • Pay attention to your emotions (but don’t let them rule you).
  • Use the best available venue or communication medium.
    • In-person trumps everything.
    • Keep the number of participants small.
  • Seek always to understand.
    • Actively look for the best version of everyone’s arguments.
    • Separate the problem and the solution.
    • To truly understand, you must explain how both sides came to be.
  • Use the right tool for the right conflict.
    • Use science and Bayesian statistics to resolve disagreements of fact.
    • Use overriding values to avoid disagreements of terminal value (but watch out for values that are actually instrumental).
    • Use clear definitions to resolve disagreements of meaning.
    • Use trust and communication to resolve empty disagreements.
    • Use logic to break down complex disagreements into simpler parts.

I hope reading this essay proves as helpful to you as writing it was for me. I want to once again thank the person who prompted me to write it, as well as all the other people who read early drafts and provided invaluable feedback. You make me better.

The Efficient Meeting Hypothesis

This is a minor departure from my typical topics, but was something I wrote for work and wanted to share more widely.

Meeting efficiency drops off sharply as the number of people in attendance climbs. A meeting with two or three people is almost always a good use of everyone’s time. If it’s not, the people involved simply stop meeting. Meetings with 4-6 people are worse, but are still generally OK. Meetings with more than 6 people in attendance (counting the organizer) are almost universally awful.

Why are meetings inefficient?

People do not exchange opinions the way machines exchange information. As the number of people grows, so does the number of different opinions, the number of social status games being played (consciously or not), the number of potential side conversations, etc. Achieving consensus gets harder.

In my experience, six people is the limit for anything resembling a useful caucus-style meeting. Above six people, it’s less likely that a given topic (at a given level of abstraction) is of sufficient interest to everyone present. Tangential topics drift so far that by the time everyone has had their say it’s hard to get back on track. Side-conversations start to occur regularly. People who naturally think and speak slowly simply won’t get to speak at all since there will always be somebody else who speaks first.
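One rough way to quantify that drop-off (a back-of-the-envelope sketch of my own framing, assuming each pair of attendees is a potential channel for side conversations and status games):

```python
def pairwise_channels(n):
    # Number of distinct pairs among n attendees: n choose 2.
    return n * (n - 1) // 2

for n in [2, 3, 6, 10, 20]:
    print(f"{n:2d} people -> {pairwise_channels(n):3d} pairwise channels")
# 3 people means 3 channels; 6 means 15; 20 means 190. Each added
# attendee costs more than the last, which matches the sharp decline.
```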

Why don’t people exit useless meetings?

People mainly stay in useless meetings for two reasons:

  • a variation of the bystander effect where everyone assumes that somebody else must be getting value from the meeting, and nobody wants to be the first to break rank
  • a fear of missing out, because the topics discussed at useless meetings are often so variable (due to tangents and side conversations) it’s hard to know if maybe this will be the moment where something relevant is discussed

How to run an efficient meeting

Keep it as small as possible, and always under 6 people.

How to run an efficient meeting with more than 6 people

You can’t. But if you really think you *have* to…

Give your meeting a rigid structure. Note that this does not just mean “have an agenda document that people can add to ahead of time”. At the minimum you need:

  • A moderator whose only job in the meeting is to moderate (either the meeting organizer or somebody explicitly appointed by them).
  • A talking stick or some digital equivalent. Basically: an explicit process for deciding who gets to speak, and when. A good moderator can manage this entirely verbally for medium-sized groups, but it’s hard. Something explicit is much better.
  • A formal meeting structure and topic, set in advance.

Again, a structure does not just mean “an agenda” or “a slide deck” but some common conversational rules. Here is a list (definitely not exhaustive) of common or useful meeting structures:

  • Stand-Up: each person in turn gets a fixed amount of time (enforced by the moderator) to present to the group.
  • Presentation: one person presents for the majority of the meeting, and then (optionally) holds a question/answer session afterwards.
  • Ask-Me-Anything: the moderator works through a list asking pre-curated questions to specific people.
  • Parliamentary Procedure: this would typically be Robert’s Rules of Order.

Some common pitfalls:

  • Never try to make consensus-based decisions in a meeting with more than 6 people. If a decision has to be made then you must either:
    • Have a smaller meeting. OR
    • Appoint one person the decision-maker in advance, in which case the meeting is actually about presenting and arguing to that person, not the actual making of the decision. OR
    • Use a majority-rules process (typically a vote), in combination with a more parliamentary structure (Robert’s Rules of Order or others).
  • The moderator absolutely cannot talk about anything other than the meta-level (moderating) unless they also hold the talking stick. Ideally the moderator has no stake in the actual topic of the meeting to begin with.
  • The moderator cannot be “nice”. Shut down tangents and off-topic discussions aggressively.
  • Avoid automatically-recurring large meetings like the plague. They shouldn’t be frequent enough to bother auto-booking them to begin with, and the manual process will make it much easier to stop holding them when they are no longer useful.

Optimizing for the Apocalypse

If you’ve read many of my past posts, you’ll know that I have sometimes struggled with an internal conflict between what I would basically characterize as conservative or right-wing intuitions, and a fairly liberal or left-wing set of concrete beliefs. It’s also one of the things that I mentioned in my initial brain-dump of a post after reading Jonathan Haidt’s The Righteous Mind. I guess this is technically a continuation of the posts spawned by that book, but it pulls in enough other things that I’m not going to number it anymore.

Haidt’s book doesn’t really address my internal conflict directly; what it does do is talk about liberal and conservative moral intuitions in a way that I found really clarified for me what the conflict was about. Conveniently, in the way that the universe sometimes works, shortly after thinking about that topic a bunch I then read A Thrive/Survive Theory of the Political Spectrum. This post by Scott Alexander has nothing to do with Haidt, except that it ends up doing for the “why” of the question what Haidt did for the “what”. And so I now have a pretty nicely packaged understanding of what’s going on in that section of my brain.

Moral Foundations Theory

Let’s start with Haidt’s Moral Foundations Theory. According to Haidt there are six “moral foundations”: care, fairness, loyalty, authority, sanctity, and liberty. Each of us has moral intuitions on roughly these six axes, and the amount of weight we put on each axis can vary between people, cultures, etc. Conveniently according to Haidt, the amount of weight we put on each axis tracks really nicely as part of the right/left political divide present in the Western world. Libertarians (sometimes called “classical liberals”) strongly value liberty; liberals (the left) put much more emphasis on care and fairness while mostly ignoring the others; conservatives (the right) value all of them roughly equally, thus leaving them as the effective champions of loyalty, authority, and sanctity.

This is already a very helpful labelling system for me, since it lets me be clearer when I talk about my conflicts. I tend to believe in a lot of political ideas that are associated with the left, like a robust social safety net. But I believe that loyalty, authority, and sanctity have real moral value, and are generally undervalued by the modern left. This isn’t a direct logical conflict (there’s nothing about loyalty that is fundamentally incompatible with a robust social safety net) but it does put me in a sometimes awkward spot between the two “tribes”, especially as the left and right become increasingly polarized in modern politics.

Thriving and Surviving

So Haidt’s system has already been pretty helpful in giving me a better understanding of what exactly the conflict is. But it doesn’t really explain why the conflict is: why I came to hold liberal views despite conservative intuitions. I imagine most people with my intuitions naturally grow up to hold fairly conservative political views as well; it’s the path of least internal resistance. This is where thrive/survive theory comes in. Alexander summarizes it like this:

My hypothesis is that rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.

This is conveniently similar to behaviour observed in the wild among, for example, slime molds:

When all is well, the slime mold thrives as a single-celled organism, but when food is scarce, it combines forces with its brethren, and grows. 

This combined slime mold expends a great deal of energy, and ends up sacrificing itself in order to spore and give the mold a chance to start a new life somewhere else. It’s the slime mold equivalent of Gandalf facing the Balrog, spending his own life to ensure the survival of his friends.

And, it also conveniently aligns with Haidt’s moral foundations: of the six foundations, there are three that are fundamentally important for the survival of the group in an unsafe environment: loyalty, authority, and sanctity. The other three (care, fairness, and liberty) are important, but are much more likely to be sacrificed for “the greater good” in extreme situations.

This all ties together really nicely. I grew up in a stable, prosperous family in a stable, prosperous country that is still, despite some recent wobbles, doing really really well on most measures. The fact is that my environment is extremely safe, and I’m a sucker for facts combined with rational argument. But twin studies have generally shown that while political specifics are mostly social and not genetic (nurture, not nature), there is a pretty strong genetic component to ideology and related personality traits which, I would hypothesize, partly boil down to Haidt’s moral foundations.

In summary then, the explanation is that I inherited a fairly “conservative” set of intuitions optimized for surviving in an unsafe environment. But, since my actual environment is eminently safe, my rational mind has dragged my actual specific views towards the more practically correct solutions. I wonder if this makes me a genetic dead end?

In other words: I want to optimize for the apocalypse, but fortunately the apocalypse seems very far away.

When is it Wrong to Click on a Cow?

Three Stories

Imagine, for a moment, three young adults recently embarked on the same promising career path. The first comes home from work each day, and spends their evenings practising and playing a musical instrument. The second comes home from work each day, and spends their evenings practising and playing a video game. The third comes home from work each day, and spends their evenings hooked up to a machine which directly stimulates the pleasure and reward centres of their brain.

How do these people make you feel?

For some people with more libertarian, utilitarian, or hedonistic perspectives, all three people are equally positive. They harm no-one, and are spending their time on activities they enjoy and freely chose. We can ask nothing more of them.

And yet this perspective does not line up with my intuitions. For me, and I suspect for many people, the musician’s choice of hobby is laudable, the gamer’s is relatively neutral, and the “stimmer”’s (the person with the brain-stimulating machine) is distinctly repugnant in a way that feels vaguely ethics-related. It may be difficult to actually draw that repugnance out in clear moral language – after all, no-one is being harmed – but still… they’re not the kind of person you’d want your children to marry.

The Good and The Bad

Untangling the “why” of these intuitions is quite an interesting problem. Technically all three hobbies rely on hijacking the reward centres of the brain, whose original evolutionary advantages were more to do with food, sex, and other survival-related tasks. There’s a fairly short path from arguing that the stimmer’s behaviour is repugnant to arguing that all three cases are repugnant; after all none of them result in food or anything truly “productive”. But this tack also seems to go a bit against our intuitions.

Fortunately, the world has a lot of different video games, and we can use that range to draw out some more concrete differences. At the low end are games like Cow Clicker and Cookie Clicker, which are so basic as to be little more than indirect versions of the reward-centre-stimulating machine. More complex games seem to intuitively fare a little better, as do games with a non-trivial social element. Games that directly attempt to train us in some way also seem to do a little better, whether they actually work or not.

Generalizing slightly, it seems like the things we care about to make video games more “positive” are roughly: transferable skills, personal growth, and social contact. But this model doesn’t seem to fit so well when applied to learning an instrument. You could argue that it includes transferable skills, but the obvious candidates only transfer to other instruments and forms of musicianship, not to anything strictly “practical”. Similarly, social contact is a positive, but it’s not a required component of learning an instrument. Playing in a group seems distinctly better than learning it by yourself, but learning it on your own still seems like a net positive. Our final option of “personal growth” now seems very wishy-washy. Yes, learning an instrument seems to be a clear case of personal growth, but… what does that mean exactly? How is it useful, if it doesn’t include transferable skills or social contact?

There are a few possible explanations that I’m not going to explore fully in this essay, since it would take us a bit far afield from the point I originally wanted to address. For one, perhaps music is seen as more of a shared or public good, one that naturally increases social cohesion. It seems plausible that maybe our intuitions just can’t account for somebody learning music entirely in private, with no social benefits.

Another approach would be to lean on Jonathan Haidt’s The Righteous Mind and its Moral Foundations Theory. Certainly none of the three people are causing harm with their actions, but perhaps they are triggering one of our weirder loyalty or sanctity intuitions?

Thirdly, perhaps the issue with the third hobby is less that it’s not useful and more that it’s actively dangerous. We know from experiments on rats (and a few unethical ones on humans) that such machines can lead to addictive behaviour and a very dangerous disregard for food and other critical needs. Perhaps as video games become more indirect, they become less addictive and simply less dangerous.

Moral Obligations

Really though, these questions are being unpacked in order to answer the more interesting one in this essay’s title: when is it wrong to click on a cow? Or slightly less metaphorically: what moral obligations do we have around how we spend our leisure time? Should I feel bad about reading a book if it doesn’t teach me anything? Should I feel bad about going out to see a show if it’s not some deep philosophical exploration of the human spirit? What about the widely-shat-upon genre of reality television?

Even more disturbingly, what are the implications for just hanging out with your friends? Surely that’s still a good thing?

If I generalize my intuitions well past my ability to back them up with reason, it seems we have some weak moral obligation to spend our time in a way that benefits our group, either through direct development of publicly beneficial skills like music, or through more general self-improvement in one form or another, or through socializing and social play and the resulting group bonding. Anything that we do entirely without benefit to others is onanistic and probably wrong.

The final question is then: what if that isn’t what I find enjoyable? How much room is there in life for reading trashy novels and watching the Kardashians? The moral absolutist in me suggests that there is none; that we must do our best to optimize what little time we have as effectively as possible. But that’s a topic for another post.

Where the Narrative Stops

Back in February, I talked about the scripts and narratives that guide our life, with a specific focus on the cognitive dissonance that happens when we try and “disobey” them. Today instead I’m going to talk about the way in which I believe those narratives are getting weaker and less meaningful. It’s also probably going to borrow a bunch from my series on Jonathan Haidt’s The Righteous Mind (1, 2, 3, 4), because shared scripts and narratives are clearly a core component of Haidt’s “moral capital”.

In fact the more I draft this post, the more I realize I should throw in one more previous essay reference: Nostalgia For Ye Olde Days also talks around this issue a little bit. In hindsight though I think that post committed exactly the sin I want to talk about today, of focusing too much on things in common (look at the examples I used, of yoga, and veganism, and video games) instead of narratives in common.

My thesis is this: people today (and particularly younger people) have increasingly uncertain and unclear visions of where and what and how they want their life to be, due largely to the erosion of binding social narratives and the equivalent moral capital. This is leading to an increase in chronic existential unhappiness, and various other issues.

In middle-class post-war America, there was a single, nearly universal narrative that existed in the cultural zeitgeist: you grew up, got a career (as distinct from just a job, and only if you were male), got married, had kids, raised your kids. Rinse, repeat. People who grew up with this narrative could rest assured that if they followed it, they were “living a good life”, or something along those lines. Every life is different, and some people who followed this path were genuinely terrible, but at some sort of existential level the promise was that you would be alright. It was just How Things Are.

Of course this narrative is very restrictive if it’s not quite what you want for yourself. The “free love” and hippie rebellion of the following generation were largely reactions against this narrative, even though in practice most of the rebels eventually settled down and lived just that life. And it’s also true that this narrative still exists in pockets today; the Mormons, for example, seem to have it down pat at this point. It’s just not nearly as pervasive.

But if that narrative is increasingly dying out in the general population, what narrative is replacing it? It’s easy to point to specific examples (social activism comes to mind) but for a lot of people I would argue there isn’t anything replacing it. We grow up, finish high school, (potentially) finish university, and then… the narrative stops. We want to give people the freedom to pursue their life’s passion, to not get stuck in the “rat race”, to love who they love, and build the world they want to see. But in giving too much freedom we also give an overwhelming selection of choices. If you know the “next step” in your life is to get a career, then suddenly you have something to work toward. It doesn’t matter if your career has some ultimate fulfilling purpose; it’s just What You Do.

Today, it’s really easy to spend a lot of your twenties (and soon, your thirties) just kinda wandering around. Working, usually, because you need money to pay the bills, but working jobs, not careers. Looking, waiting, for something that you can do that will give you that purpose, that sense of fulfillment. And even if you see it, even if you know deep down “that thing over there is what I want to do with my life”, it’s too easy to dismiss it as too hard, unachievable, and end up settling for nothing at all. Purpose is what we make of it, and I’ll settle for somebody else’s narrative any day over no purpose at all.

It would be nice if there were some way to create a good “default” cultural narrative for people to fall back on without restricting their personal freedom at all. Unfortunately it doesn’t seem to work that way; a key part of Haidt’s definition of moral capital is that it does constrain individualism in favour of cooperation. I’m going to think more on this.

Where the Magic Happens

A quick follow-up Q&A to some comments received (both publicly and directly) on this post. The comments and questions have been heavily paraphrased.

But what actually is moral capital? That doesn’t seem to be what those words mean.

I’m using it per Haidt, and I agree the definition he gives isn’t quite in line with what you’d maybe intuit based on the words “moral” and “capital”. In The Righteous Mind he defines it fairly precisely but also fairly technically. I won’t quote it here, but this link has the relevant pages. Better yet, the New York Times has a decent paraphrase: “norms, practices and institutions, like religion and family values, that facilitate cooperation by constraining individualism”. Between the two of them those links do a pretty decent job sketching out the full idea.

But is it really true that societies with more moral capital are healthier, happier, more efficient etc? What specific claims are you making?

I am unfortunately running off of intuition and some half-remembered bits of Haidt’s book (now returned to the library), but I can at least gesture in the right direction. There’s lots of work showing that belonging to a tightly-knit social community is good for happiness and mental health. Think religious communities, or very small towns; the most stereotypical examples in my mind (combining both religion and small town) are an Israeli kibbutz, or an Amish village. If I remember correctly, Lost Connections by Johann Hari has a good summary of a bunch of this research and related arguments.

Similarly, there’s a lot of anecdotal evidence in the business world (it’s a more recent phenomenon there so I don’t know if it’s been formally studied yet) that the most competitive and efficient businesses are the ones that can foster this kind of belonging in their employees. It’s certainly working for Netflix and Shopify.

Being highly aligned and high in moral capital doesn’t prevent conflict or “bad politics” though?

It definitely doesn’t prevent conflict. It definitely does help prevent bad politics. In a high-moral-capital political environment, the conflicts that arise will be about means, not ends. It might be instructive to look at, for example, progressive and conservative opinions on safe injection sites. Progressives tend to believe in reducing harm. As such, two progressives debating safe injection sites will be able to have a well-reasoned and fairly trust-based debate about whether safe injection sites, or harsher penalties for possession, or this, or that, will best reduce harm. They have different means, but the same end, so they ultimately feel like they’re on the same side.

Conservatives, on the other hand, are worried not just about the individual harm of drug use, but also its effect on moral capital. To a conservative, safe injection sites are likely a non-starter because while they do reduce harm, they have the net effect of enabling drug use and the concomitant erosion of moral capital. A conservative and a progressive debating safe injection sites are looking for fundamentally different things, a gap which is much harder to bridge with social trust.

Isn’t there a middle ground between a perfectly aligned but un-free society, and one that devolves into anarchy?

Of course there is, and I didn’t mean to imply otherwise. We are, quite literally, living it. But since I was writing for a primarily progressive audience who wants to move towards more personal freedom, I tried to emphasize the conservative side of the argument more. There are dangers in too much personal freedom, and advantages in requiring some conformity from a group.

How exactly is this a utilitarian argument for conservative politics? Your argument missed a step somewhere.

Yup, sorry, I over-summarized. To be a bit more explicit:

  • Societies with more moral capital tend to be happier, healthier, more efficient, etc. than their counterparts with less. This is what utilitarians want.
  • Conservative policies tend to focus on creating moral capital, at the expense of personal freedom and harm prevention.
  • Progressive policies tend to focus on personal freedom and preventing harm, at the cost of eroding moral capital.

(Obviously utilitarians tend to want to boost personal freedom and prevent harm too. As I mentioned in the previous post, it’s a matter more of priorities than of absolute preference.)

Progressives want as few people to suffer as possible even if it inconveniences the majority, while Conservatives want to promote sameness and fairness as much as possible even if some people slip through the cracks.

Not actually a question, but a really good paraphrase of part of the argument I’m presenting here, and part of the argument Haidt makes in his book. It misses some dimensions (e.g. weighing personal freedom of choice into the mix for progressives, not just the avoidance of suffering), but very broadly Haidt is pointing out this distinction and then saying roughly “either side is terrible when taken to its ultimate extreme; we must find a balance”.

The Needs of the Many

This post is the third of what will likely be a series growing out of my thoughts on Jonathan Haidt’s “The Righteous Mind”. Here are the first and the second.

Spock: The needs of the many outweigh the needs of the few.

Kirk: Or the one.

– Star Trek II: The Wrath of Khan (1982)

Ah, Star Trek. Remember when Star Trek used to be considered progressive? I do, or at least the tail end of that era. Nowadays quotes like this feel oddly conservative in certain contexts. Today’s progressive viewpoint is all about the tyranny of the majority, breaking down power structures, and ensuring that everybody is free and valued equally in all of their diversity.

Most days, without thinking too hard, I manage to believe in both of these viewpoints. I believe in fighting for a world where people are treated equally without regard for their race, their gender, their religion, their culture. And I believe that when given no other choice, the needs of the many outweigh the needs of the few. If one must suffer to save the village, then so be it.

But there is a conflict here.

It’s one thing to believe in the needs of the many from a personal perspective, and to freely make that personal sacrifice for the greater good. It is quite another to believe in it absolutely, and to therefore bless the tyranny of the majority as a net utilitarian positive. It’s actually kinda funny, since I tend to think of progressives as the more utilitarian, while conservatives are more deontological, but in this case it’s the progressive camp that clings to the right of personal freedom and the conservatives arguing for utilitarianism. Further proof, I suppose, of Haidt’s claim that neither moral theory is particularly well-aligned with human moral instincts.

In contrast with the quote from Star Trek, here’s a quote from a modern progressive TV show:

Tan: What’s wrong with wanting something that you just want, not that you need?

Joey: The way I grew up, I got it in the back of my head that that was selfish, you know, and so maybe that’s something I need to unlearn.

– Queer Eye: Season 3 Episode 2 (2019)

Thirty-seven years later, the progressive viewpoint is no longer “for the greater good”. Instead it’s become “for the personal good”. I want to be clear here that regardless of politics, basically nobody regards “the greater good” or “the personal good” as fundamentally bad. It’s just a matter of priorities: where before the greater good was seen as more important than the personal (when they even conflicted), now it is the reverse.

This raises another more interesting point though: when do the greater good and the personal good conflict in real life? Opponents of utilitarianism have lots of thought experiments they like to trot out at this point (for example, killing one healthy person against their will in order to harvest their organs and save five others). But these scenarios are oddly empty of the practical, day-to-day moral decisions that people tend to make in real life.

One of Haidt’s principal goals in The Righteous Mind is to clearly articulate the value systems of both progressives and conservatives in a way that is, if not precisely “objective”, at least fair and understandable to both sides of that debate. It is this articulation which brings him to the idea of a society’s “moral capital”, which is itself the linchpin of this conflict between the greater and the personal good. Interestingly I accidentally hit upon a very rough definition of “moral capital” myself in an off-hand comment a few years ago, so here’s me quoting myself:

[S]ocio-cultural conformance is a powerful force multiplier because it builds trust and lets people work towards implicit common goals. Society can afford and absorb some people who break the mold, but eventually the system decoheres.

Another way this sometimes gets talked about is through the phrase “Highly Aligned and Loosely Coupled”, which (I believe?) started out in Netflix’s culture document and has now made its way into a bunch of other corporate cultures. A group of people, whether a tribe or a company or a country, who are closely aligned on their long-term goals as a group, can afford much less internal communication and “bad politics”, and end up both more efficient and happier. Now, “alignment” and “conformance” have fairly different connotations in terms of amount of freedom, but practically they end up meaning the same thing: everybody believes the same thing and has the same shared vision of the future.

I admit to wandering around between a couple of different concepts so far, but here’s where we tie it all together. Haidt’s “moral capital” is in a very real sense “the greater good”. A highly aligned, highly conformant society is generally happier, healthier, and more efficient than one in which every social interaction has to start from first principles and deal with the risk of the unknown. The cost of this greater good is, of course, the personal good: a highly conformant society sucks for people who don’t want to conform, either because they have a specific different set of values or just because they’re generally non-conformist. Conversely though, a totally free society where personal good is king becomes anarchy, which ends up being bad for everybody. It’s a very weird kind of prisoner’s dilemma game we’re playing with each other.

At its heart this whole essay has been a strong utilitarian argument for conservative politics. Since I have a lot of friends who are both utilitarian and fairly progressive, I’m curious to see the hot water this gets me in 🙂

P.S. I realize this never really tied back into culturism like I promised in that post. It’ll bubble to the top of my brain again, I think.

This post sparked a bunch of confusion and good questions; a follow-up post addressing some of that is here.

On Culturism

This post is the second of what will likely be a series growing out of my thoughts on Jonathan Haidt’s “The Righteous Mind”. The first is here.

Also, this post was extracted from a longer essay that’s still in the works. It’s meant to be foundational more than earth-shattering.

I want to promote a word that I just don’t hear a lot these days: culturism. Analogous to racism, sexism, etc., “culturism” can be roughly defined a couple of different (not necessarily exclusive or exhaustive) ways:

  • discrimination against someone on the basis of their different culture
  • the belief that one culture is superior to others
  • cultural prejudice + power

I want to promote this word, because I want to make a much stronger claim. I believe that all of the different *-isms (racism, sexism, etc) are just second-order mental shortcuts for culturism. And just like everyone’s a little bit racist, everyone’s a little bit culturist.

Now I’ve used “culturism” and “culture” in that claim, but really “behaviourism” might have been a better choice of word if it wasn’t already taken to mean something entirely different. Culture and behaviour are all tied together though, so I’m just going to stick with culturism and note a few places where my usage might not match the intuitive definition.

The easiest way to see how racism is just a shortcut for culturism is to ask an old-school racist what they hate about black people. The answers they give you don’t vary much: lazy, dirty, and rude are all words that pop up. But note that none of those things are actually about skin colour! For the most part, old-school racists don’t actually hate people with black skin per se; they hate people with undesirable behaviours. Does anybody actually want people to be lazy, dirty, and rude? The racist has just incorrectly associated those behaviours with skin colour (“dirty” isn’t technically a behaviour, but hygiene and grooming are both cultural-behavioural).

President Obama is a great example of how this plays out. He was black, but he conformed to the cultural and behavioural stereotype of an upper-class white man. He was not culturally black in any negative way, either in the old-school-racism meaning or in the more modern sense (inner-city gangs, etc). While he still received some negative attention from true racists, in this case the exception proves the rule: people reify their mental shortcuts all the time. It shouldn’t be surprising that if people grow up associating black skin with all these negative qualities, then some of them will forget the original association and just react negatively to black skin. Likewise it shouldn’t be surprising that if a scientist grows up in an environment where that prejudice is normalized, they’ll go looking for explanations and come up with weird ideas like craniometry.

Sexism is a similar story, with the only catch being that it feels weird to talk about men and women having “different cultures”. However, gender roles mean that at least historically, there were different expectations around how men and women would behave. This is all we need to connect the dots. What were the arguments for why women shouldn’t work? Because they were seen as emotional and weak, and those were undesirable qualities for someone who worked. It wasn’t about womanhood per se, it was about a false association between womanhood and undesirable behaviours and properties (women are still, on average, physically weaker than men, but we’ve learned to look at the individual for properties now, which is a whole other essay).

Now if I’ve done my job you’re likely nodding along, or at least willing to accept my premise for the sake of argument. But you may not really see why this would be important. Racism is still racism is still wrong, whatever the exact mechanism.

Here’s a hint at the kicker: even though we’re mostly not racist anymore, we’re still really really culturist. We are still prejudiced against people who are lazy, dirty, and rude. We’re no longer biased against emotional people, but only because being emotionally attuned has become a desirable quality; instead we bias ourselves against people who close off their emotions and act coldly.

This will all tie back into Haidt and his concept of “moral capital” as soon as I finish that essay, I promise!