Postel’s Principle as Moral Aphorism

[All the usual disclaimers. Wanders dangerously close to moral relativism.]

I.

Postel’s Principle (also known as the Robustness Principle) is an obscure little guideline somewhat popular among computer programmers, particularly those working on network protocols. The original goes like this:

Be conservative in what you do, be liberal in what you accept from others.

My parents were both computer programmers, as am I, and my first job as a programmer was working on network protocols, so it shouldn’t be too surprising that I ran across this principle a long, long time ago. I suspect I heard it while still a teenager, before finishing high school, but I honestly don’t remember. Suffice to say that it’s been kicking around my brain for a long time.

As a rule of thumb in computer programming, Postel’s Principle has some basic advantages. You should be conservative in what you do because producing output that isn’t strictly compliant with the specification risks other programs being unable to read your data. Conversely, you should be liberal in what you accept because other programs might occasionally produce non-compliant data, and ideally your program should be robust and keep working in the face of data that isn’t quite 100% right.
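For the programmers in the audience, a toy sketch might make this concrete. This is entirely hypothetical (not from any real protocol): a parser that tolerates sloppy header lines from peers, paired with an emitter that only ever produces one strict canonical form.

```python
def parse_header_line(line: str) -> tuple[str, str]:
    """Liberal in what we accept: tolerate stray whitespace and
    mixed-case field names that a strict parser would reject."""
    name, _, value = line.partition(":")
    return name.strip().lower(), value.strip()

def emit_header_line(name: str, value: str) -> str:
    """Conservative in what we do: exactly one canonical output form."""
    return f"{name.strip().lower()}: {value.strip()}"

# A sloppy peer might send this...
name, value = parse_header_line("  Content-TYPE :text/html  ")
# ...but we always re-emit it strictly.
assert emit_header_line(name, value) == "content-type: text/html"
```

The asymmetry is the whole point: the parser's tolerance never leaks back out into what we send.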

While in recent years the long-term effects of Postel’s Principle on software ecosystems have led to some pushback, I’m more interested in the fact that Postel’s Principle seems to apply just as well as a moral aphorism as it does in programming. Context matters a lot when reading, so here’s a list of other aphorisms and popular moral phrases to get your brain in the right frame:

  • What would Jesus do?
  • Actions speak louder than words.
  • If you can’t say something nice, don’t say anything at all.
  • Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.
  • Be conservative in what you do, and liberal in what you accept from others.

II.

I am, by nature, a fairly conservative person. I’m also, whether by nature or past experience, somewhat socially subordinate; I’m usually much happier in a secondary position than in any role of real authority, and my self-image tends to be fairly fragile. The manosphere would happily write me off as a “beta male”, and I’m sure Jordan Peterson would have something weird to say about lobsters and serotonin.

This combination of personality traits makes Postel’s Principle a natural fit for defining my own behaviour. Rather than trying to seriously enforce my own worldview or argue aggressively for my own preferences, I endeavour not to make waves. The more people who like me, the more secure my situation, and the surest way to get people to like me is to follow Postel’s Principle: be conservative in my own actions (or else I might do something they disapprove of or dislike), and be liberal in what I accept from others (being judgemental is a sure way to lose friends).

[People who know me IRL will point out that in fact I am pretty judgemental a lot of the time. But I try to restrict my judginess (judgmentality? judgementalism?) to matters of objective efficiency, where empirical reality will back me up, and avoid any kind of value-based judgement. E.g. I will judge you for being an ineffective, inconsistent feminist, but never for holding or not holding feminist values.]

Unfortunately, of course, the world is a mind-bogglingly huge place with an annoyingly large number of people, each of whom has their own slightly different set of moral intuitions. There is clearly no set of behaviours I could perform that will satisfy all of them, so I focus on applying Postel’s Principle to the much smaller set of people who are in my “social bubble” (in the pre-COVID sense). If I’m not likely to interact with you soon, or on a regular basis, then I’m relatively free to ignore your opinion.

Talking about the “set” of people to whom Postel’s Principle applies provides a nice segue into the formal definitions implicit in the English aphorism. For my own behaviour, it makes sense to think of it as the intersection operation in set theory, or the universal quantifier in predicate logic: something is morally permissible for me only if it is permissible for all of the people I am likely to interact with regularly. Conversely, of course, the values I must accept without judgement are the union of the values of the people I know: something is acceptable if it is permissible for any of the people I am likely to interact with regularly.
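Since the aphorism really does reduce to set operations, the formal version is short enough to write down directly. The names and “moral codes” here are, of course, invented for illustration:

```python
# Hypothetical permitted-action sets for three people in my social circle.
alice = {"drink wine", "eat meat", "swear"}
bob   = {"eat meat", "swear", "gamble"}
carol = {"drink wine", "eat meat", "gamble"}
circle = [alice, bob, carol]

# Conservative in what I do: only actions everyone permits (intersection).
permissible_for_me = set.intersection(*circle)

# Liberal in what I accept: anything anyone permits (union).
must_accept = set.union(*circle)

assert permissible_for_me == {"eat meat"}
assert must_accept == {"drink wine", "eat meat", "swear", "gamble"}
```

Note how lopsided the two sets are even with just three people: the intersection shrinks with every new acquaintance while the union only ever grows, which is exactly the dynamic the next section worries about.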

III.

Since the set of actions considered morally permissible for me is defined effectively by my social circle, it becomes important to manage that social circle intentionally. It would be untenable to make such different friends and colleagues that the intersection of their acceptable actions shrinks to nothing. In that situation I would be forced to make a choice (since inaction is of course its own kind of action) and jettison one group of friends in order to open up behavioural manoeuvring space again.

Unfortunately, it sometimes happens that people change their moral stances, especially when under pressure from other people who I may not be interacting with directly. Even if I have a stable social circle and behavioural manoeuvring space today, tomorrow one of my friends could decide they’re suddenly a radical Islamist and present me with a choice. While in some sense “difficult”, many of these choices end up being rather easy; I have no interest in radical Islam, and so how close I was to this friend relative to the rest of my social circle matters only in the very extreme case where they were literally my only acquaintance worth speaking of.

Again unfortunately, it sometimes happens that large groups of people change their moral stances all at once. Memes spread incredibly fast, and a small undercurrent of change can rapidly become a torrent when one person in a position of power or status chooses a side. This sort of situation also forces a choice on me, and often a much more difficult one. Apart from the necessity of weighing and balancing friend groups against each other, there’s also a predictive aspect. If I expect a given moral meme to become dominant over the next decade, it seems prudent to be “on the right side of history” regardless of the present impact on my social circle.

Being forced to choose between two social groups with incompatible moral stances is, unsurprisingly, stressful. Social alienation is a painful process, as any Amish person who has been shunned can attest. However, what may be worse than any clean break is the moment just before: trying to walk the knife edge of barely-overlapping morals in the desperate hope that the centre can hold.

IV. (PostScript)

I wrote this focused mostly on myself. Having finished, I cannot help but wonder how much an approximation of Postel’s Principle guides the moral principles of most people, whether they would acknowledge it or not. Even people who claim to derive their morality from first principles often end up with something surprisingly close to their local social consensus.

Abstractions on Inconsistent Data

[I’m not sure this makes any sense – it is mostly babble, as an attempt to express something that doesn’t want to be expressed. The ideas here may themselves be an abstraction on inconsistent data. Posting anyway because that’s what this blog is for.]

i. Abterpretations

Abstractions are (or at least are very closely related to) patterns, compression, and Shannon entropy. We take something that isn’t entirely random, and we use that predictability (lack of randomness) to find a smaller representation which we can reason about, and predict. Abstractions frequently lose information – the map does not capture every detail of the territory – but are still generally useful. There is a sense in which some things cannot be abstracted without loss – purely random data cannot be compressed by definition. There is another sense in which everything can be abstracted without loss, since even purely random data can be represented as the bit-string of itself. Pure randomness is in this sense somehow analogous to primeness – there is only one satisfactory function, and it is the identity.
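The compression claim above is easy to check empirically. Here’s a quick Python sketch using the standard zlib module; the exact byte counts are illustrative properties of the implementation, not guarantees from any spec:

```python
import os
import zlib

patterned = b"abc" * 1000      # highly predictable: 3000 bytes of repetition
random_ish = os.urandom(3000)  # 3000 bytes of (effectively) pure randomness

# The pattern compresses to a tiny fraction of its original size...
assert len(zlib.compress(patterned)) < 100

# ...while random data cannot shrink at all; the stream format
# actually adds a few bytes of framing overhead on top.
assert len(zlib.compress(random_ish)) >= 3000
```

The map really is smaller than the territory only when the territory has structure to exploit.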

A separate idea, heading in the same direction: Data cannot, in itself, be inconsistent – it can only be inconsistent with (or within) a given interpretation. Data alone is a string of bits with no interpretation whatsoever. The bitstring 01000001 is commonly interpreted both as the number 65, and as the character ‘A’, but that interpretation is not inherent to the bits; I could just as easily interpret it as the number 190, or as anything else. Sense data that I interpret as “my total life so far, and then an apple falling upwards”, is inconsistent with the laws of gravity. But the apple falling up is not inconsistent with my total life so far – it’s only inconsistent with gravity, as my interpretation of that data.
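In Python terms, the same eight bits yield 65, ‘A’, or 190 depending entirely on which (equally arbitrary) reading convention you pick; here 190 comes from flipping every bit, which is just one convention among many:

```python
bits = "01000001"

value = int(bits, 2)       # interpret as an unsigned binary integer
assert value == 65

assert chr(value) == "A"   # interpret the same integer as an ASCII character

# Read with every bit inverted and you get a different number entirely.
assert value ^ 0xFF == 190
```

None of these readings is “in” the bits; each is a function we bring to them.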

There is a sense in which some data cannot be consistently interpreted – purely random data cannot be consistently mapped onto anything useful. There is another sense in which everything can be consistently interpreted, since even purely random data can be consistently mapped onto itself: the territory is the territory. Primeness as an analogue, again.

Abstraction and interpretation are both functions, mapping data onto other data. There is a sense in which they are the same function. There is another sense in which they are inverses. Both senses are true.

ii. Errplanations

Assuming no errors, a single piece of inconsistent data is enough to invalidate an entire interpretation. In practice, errors abound. We don’t throw out all of physics every time a grad student does too much LSD.

Sometimes locating the error is easy. The apple falling up is a hallucination, because you did LSD.

Sometimes locating the error is harder. I feel repulsion at the naive utilitarian idea of killing one healthy patient to save five. Is that an error in my feelings, and I should bite the bullet? Is that a true inconsistency, and I should throw out utilitarianism? Or is that an error in the framing of the question, and No True Utilitarian endorses that action?

Locating the error is meaningless without explaining the error. You hallucinated the apple because LSD does things to your brain. Your model of the world now includes the error. The error is predictable.

Locating the error without explaining it is attributing the error to phlogiston, or epicycles. There may be an error in my feelings about the transplant case, but it is not yet predictable. I cannot distinguish between a missing errplanation and a true inconsistency.

iii. Intuitions

If ethical frameworks are abterpretations of our moral intuitions, then there is a sense in which no ethical framework can be generally true – our moral intuitions do not always satisfy the axioms of preference, and cannot be consistently interpreted.

There is another sense in which there is a generally true ethical framework for any possible set of moral intuitions: there is always one satisfactory function, and it is the identity.

Primeness as an analogue.

Atemporal Ethical Obligations

[All the trigger warnings, especially for the links out. I’m trying to understand and find the strongest version of an argument I heard recently. I’m not sure if I believe this or not.]

Edit: This was partly a hidden argument ad absurdum. I thought it was weird enough to make that obvious, but I forgot that this is the internet (and that I actually have people reading my blog who don’t know me IRL).

It is no longer enough just to be a “good person” today. Even if you study the leading edge of contemporary morality and do everything right according to that philosophy, you are not doing enough. The future is coming, and it will judge you for your failures. We must do better.

This may sound extreme, but it is self-evidently true in hindsight. Pick any historical figure you want. No matter their moral stature during their lifetime, today we find something to judge. George Washington owned slaves. Abraham Lincoln, despite abolishing slavery in the United States, opposed black suffrage and inter-racial marriage. Mary Wollstonecraft arguably invented much of modern feminism, and still managed to write such cringe-worthy phrases as “men seem to be designed by Providence to attain a greater degree of virtue [than women]”. Gandhi was racist. Martin Luther King Jr abetted rape. The list goes on.

At an object level, this shouldn’t be too surprising. Society has made and continues to make a great deal of moral progress over time. It’s almost natural that somebody who lived long ago would violate our present day ethical standards. But from the moral perspective, this is an explanation, not an excuse; these people are still responsible for the harm their actions caused. They are not to be counted as “good people”.

It’s tempting to believe that today is different; that if you are sufficiently ethical, sufficiently good, sufficiently “woke” by today’s standards, that you have reached some kind of moral acceptability. But there is no reason to believe this is true. The trend of moral progress has been accelerating, and shows no signs of slowing down. It took hundreds of years after his death before Washington became persona non grata. MLK took about fifty. JK Rowling isn’t even dead yet, and beliefs that would have put her at the liberal edge of the feminist movement thirty years ago are now earning widespread condemnation. Moral progress doesn’t just stop because it’s 2020. This trend will keep accelerating.

All of this means that looking at the bleeding edge of today’s moral thought and saying “I’m living my life this way, I must be doing OK” is not enough. Anybody who does this will be left behind; in a few decades, your actions today will be recognized as unethical. The fact that you lived according to today’s ethical views will explain your failings, but not excuse them. Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.

Anything else is not enough.

Optimizing for the Apocalypse

If you’ve read many of my past posts, you’ll know that I have sometimes struggled with an internal conflict between what I would basically characterize as conservative or right-wing intuitions, and a fairly liberal or left-wing set of concrete beliefs. It’s also one of the things that I mentioned in my initial brain-dump of a post after reading Jonathan Haidt’s The Righteous Mind. I guess this is technically a continuation of the posts spawned by that book, but it pulls in enough other things that I’m not going to number it anymore.

Haidt’s book doesn’t really address my internal conflict directly; what it does do is talk about liberal and conservative moral intuitions in a way that I found really clarified for me what the conflict was about. Conveniently, in the way that the universe sometimes works, shortly after thinking about that topic a bunch I then read A Thrive/Survive Theory of the Political Spectrum. This post by Scott Alexander has nothing to do with Haidt, except that it ends up doing for the “why” of the question what Haidt did for the “what”. And so I now have a pretty nicely packaged understanding of what’s going on in that section of my brain.

Moral Foundations Theory

Let’s start with Haidt’s Moral Foundations Theory. According to Haidt there are six “moral foundations”: care, fairness, loyalty, authority, sanctity, and liberty. Each of us has moral intuitions on roughly these six axes, and the amount of weight we put on each axis can vary between people, cultures, etc. Conveniently, according to Haidt, the amount of weight we put on each axis tracks really nicely with the right/left political divide present in the Western world. Libertarians (sometimes called “classical liberals”) strongly value liberty; liberals (the left) put much more emphasis on care and fairness while mostly ignoring the others; conservatives (the right) value all of them roughly equally, thus leaving them as the effective champions of loyalty, authority, and sanctity.

This is already a very helpful labelling system for me, since it lets me be clearer when I talk about my conflicts. I tend to believe in a lot of political ideas associated with the left, like a robust social safety net. But, I believe that loyalty, authority, and sanctity have real moral value, and are generally undervalued by the modern left. This isn’t a direct logical conflict (there’s nothing about loyalty that is fundamentally incompatible with a robust social safety net) but it does put me in a sometimes awkward spot between the two “tribes”, especially as the left and right become increasingly polarized in modern politics.

Thriving and Surviving

So Haidt’s system has already been pretty helpful in giving me a better understanding of what exactly the conflict is. But it doesn’t really explain why the conflict is: why I came to hold liberal views despite conservative intuitions. I imagine most people with my intuitions naturally grow up to hold fairly conservative political views as well; it’s the path of least internal resistance. This is where thrive/survive theory comes in. Alexander summarizes it like this:

My hypothesis is that rightism is what happens when you’re optimizing for surviving an unsafe environment, leftism is what happens when you’re optimized for thriving in a safe environment.

This is conveniently similar to behaviour observed in the wild among, for example, slime molds:

When all is well, the slime mold thrives as a single-celled organism, but when food is scarce, it combines forces with its brethren, and grows. 

This combined slime mold expends a great deal of energy, and ends up sacrificing itself in order to spore and give the mold a chance to start a new life somewhere else. It’s the slime mold equivalent of Gandalf facing the Balrog, spending his own life to ensure the survival of his friends.

And, it also conveniently aligns with Haidt’s moral foundations: of the six foundations, there are three that are fundamentally important for the survival of the group in an unsafe environment: loyalty, authority, and sanctity. The other three (care, fairness, and liberty) are important, but are much more likely to be sacrificed for “the greater good” in extreme situations.

This all ties together really nicely. I grew up in a stable, prosperous family in a stable, prosperous country that is still, despite some recent wobbles, doing really really well on most measures. The fact is that my environment is extremely safe, and I’m a sucker for facts combined with rational argument. But twin studies have generally shown that while political specifics are mostly social and not genetic (nurture, not nature), there is a pretty strong genetic component to ideology and related personality traits which, I would hypothesize, boil down in one aspect to Haidt’s moral foundations.

In summary then, the explanation is that I inherited a fairly “conservative” set of intuitions optimized for surviving in an unsafe environment. But, since my actual environment is eminently safe, my rational mind has dragged my actual specific views towards the more practically correct solutions. I wonder if this makes me a genetic dead end?

In other words: I want to optimize for the apocalypse, but fortunately the apocalypse seems very far away.

When is it Wrong to Click on a Cow?

Three Stories

Imagine, for a moment, three young adults recently embarked on the same promising career path. The first comes home from work each day, and spends their evenings practising and playing a musical instrument. The second comes home from work each day, and spends their evenings practising and playing a video game. The third comes home from work each day, and spends their evenings hooked up to a machine which directly stimulates the pleasure and reward centres of their brain.

How do these people make you feel?

For some people with more libertarian, utilitarian, or hedonistic perspectives, all three people are doing equally well. They harm no-one, and are spending their time on activities they enjoy and freely chose. We can ask nothing more of them.

And yet this perspective does not line up with my intuitions. For me, and I suspect for many people, the musician’s choice of hobby is laudable, the gamer’s is relatively neutral, and the “stimmer”’s (the person with the brain-stimulating machine) is distinctly repugnant in a way that feels vaguely ethics-related. It may be difficult to actually draw that repugnance out in clear moral language – after all, no-one is being harmed – but still… they’re not the kind of person you’d want your children to marry.

The Good and The Bad

Untangling the “why” of these intuitions is quite an interesting problem. Technically all three hobbies rely on hijacking the reward centres of the brain, whose original evolutionary advantages were more to do with food, sex, and other survival-related tasks. There’s a fairly short path from arguing that the stimmer’s behaviour is repugnant to arguing that all three cases are repugnant; after all none of them result in food or anything truly “productive”. But this tack also seems to go a bit against our intuitions.

Fortunately, the world has a lot of different video games, and we can use that range to draw out some more concrete differences. At the low end are games like Cow Clicker and Cookie Clicker, which are so basic as to be little more than indirect versions of the reward-centre-stimulating machine. More complex games seem to intuitively fare a little better, as do games with a non-trivial social element. Games that directly attempt to train us in some way also seem to do a little better, whether they actually work or not.

Generalizing slightly, it seems like the things we care about to make video games more “positive” are roughly: transferable skills, personal growth, and social contact. But this model doesn’t seem to fit so well when applied to learning an instrument. You could argue that it includes transferable skills, but the obvious candidates only transfer to other instruments and forms of musicianship, not to anything strictly “practical”. Similarly, social contact is a positive, but it’s not a required component of learning an instrument. Playing in a group seems distinctly better than learning it by yourself, but learning it on your own still seems like a net positive. Our final option of “personal growth” now seems very wishy-washy. Yes, learning an instrument seems to be a clear case of personal growth, but… what does that mean exactly? How is it useful, if it doesn’t include transferable skills or social contact?

There are a few possible explanations that I’m not going to explore fully in this essay, since it would take us a bit far afield from the point I originally wanted to address. For one, perhaps music is seen as more of a shared or public good, one that naturally increases social cohesion. It seems plausible that maybe our intuitions just can’t account for somebody learning music entirely in private, with no social benefits.

Another approach would be to lean on Jonathan Haidt’s A Righteous Mind and its Moral Foundations Theory. Certainly none of the three people are causing harm with their actions, but perhaps they are triggering one of our weirder loyalty or sanctity intuitions?

Thirdly, perhaps the issue with the third hobby is less that it’s not useful and more that it’s actively dangerous. We know from experiments on rats (and a few unethical ones on humans) that such machines can lead to addictive behaviour and a very dangerous disregard for food and other critical needs. Perhaps as video games become more indirect, they become less addictive and simply less dangerous.

Moral Obligations

Really though, these questions are being unpacked in order to answer the more interesting one in this essay’s title: when is it wrong to click on a cow? Or slightly less metaphorically: what moral obligations do we have around how we spend our leisure time? Should I feel bad about reading a book if it doesn’t teach me anything? Should I feel bad about going out to see a show if it’s not some deep philosophical exploration of the human spirit? What about the widely-shat-upon genre of reality television?

Even more disturbingly, what are the implications for just hanging out with your friends? Surely that’s still a good thing?

If I generalize my intuitions well past my ability to back them up with reason, we have some weak moral obligation to spend our time in a way that benefits our group, either through direct development of publicly beneficial skills like music, or through more general self-improvement in one form or another, or through socializing and social play and the resulting group bonding. Anything that we do entirely without benefit to others is onanistic and probably wrong.

The final question is then: what if that isn’t what I find enjoyable? How much room is there in life for reading trashy novels and watching the Kardashians? The moral absolutist in me suggests that there is none; that we must do our best to optimize what little time we have as effectively as possible. But that’s a topic for another post.

Where the Magic Happens

A quick follow-up Q&A to some comments received (both publicly and directly) on this post. The comments and questions have been heavily paraphrased.

But what actually is moral capital? That doesn’t seem to be what those words mean.

I’m using it per Haidt, and I agree the definition he gives isn’t quite in line with what you’d maybe intuit based on the words “moral” and “capital”. In The Righteous Mind he defines it fairly precisely but also fairly technically. I won’t quote it here, but this link has the relevant pages. Better yet, the New York Times has a decent paraphrase: “norms, practices and institutions, like religion and family values, that facilitate cooperation by constraining individualism”. Between the two of them those links do a pretty decent job sketching out the full idea.

But is it really true that societies with more moral capital are healthier, happier, more efficient etc? What specific claims are you making?

I am unfortunately running off of intuition and some half-remembered bits of Haidt’s book (now returned to the library), but I can at least gesture in the right direction. There’s lots of work showing that belonging to a tightly-knit social community is good for happiness and mental health. Think religious communities, or very small towns; the most stereotypical examples in my mind (combining both religion and small town) are an Israeli kibbutz, or an Amish village. If I remember correctly, Lost Connections by Johann Hari has a good summary of a bunch of this research and related arguments.

Similarly, there’s a lot of anecdotal evidence in the business world (it’s a more recent phenomenon there so I don’t know if it’s been formally studied yet) that the most competitive and efficient businesses are the ones that can foster this kind of belonging in their employees. It’s certainly working for Netflix and Shopify.

Being highly aligned and high in moral capital doesn’t prevent conflict or “bad politics” though?

It definitely doesn’t prevent conflict. It definitely does help prevent bad politics. In a high-moral-capital political environment, the conflicts that arise will be about means, not ends. It might be instructive to look at, for example, progressive and conservative opinions on safe injection sites. Progressives tend to believe in reducing harm. As such, two progressives debating safe injection sites will be able to have a well-reasoned and fairly trust-based debate about whether safe injection sites, or harsher penalties for possession, or this, or that, will have the best effect of reducing harm. They have different means, but the same end, so they ultimately feel like they’re on the same side.

Conservatives, on the other hand, are worried not just about the individual harm of drug use, but also its effect on moral capital. To a conservative, safe injection sites are likely a non-starter because while they do reduce harm, they have the net effect of enabling drug use and the concomitant erosion of moral capital. A conservative and a progressive debating safe injection sites are looking for fundamentally different things, a gap which is much harder to bridge with social trust.

Isn’t there a middle ground between a perfectly aligned but un-free society, and one that devolves into anarchy?

Of course there is, and I didn’t mean to imply otherwise. We are, quite literally, living it. But since I was writing for a primarily progressive audience who wants to move towards more personal freedom, I tried to emphasize the conservative side of the argument more. There are dangers in too much personal freedom, and advantages in requiring some conformity from a group.

How exactly is this a utilitarian argument for conservative politics? Your argument missed a step somewhere.

Yup, sorry, I over-summarized. To be a bit more explicit:

  • Societies with more moral capital tend to be happier, healthier, more efficient, etc. than their counterparts with less. This is what utilitarians want.
  • Conservative policies tend to focus on creating moral capital, at the expense of personal freedoms and preventing harm.
  • Progressive policies tend to focus on personal freedoms and preventing harm, at the cost of destroying moral capital.

(Obviously utilitarians tend to want to boost personal freedom and prevent harm too. As I mentioned in the previous post, it’s a matter more of priorities than of absolute preference.)

Progressives want as few people to suffer as possible even if it inconveniences the majority, while Conservatives want to promote sameness and fairness as much as possible even if some people slip through the cracks.

Not actually a question, but a really good paraphrase of part of the argument I’m presenting here, and part of the argument Haidt makes in his book. It misses some dimensions (e.g. weighing personal freedom of choice into the mix for progressives, not just the avoidance of suffering), but very broadly Haidt is pointing out this distinction and then saying roughly “either side is terrible when taken to its ultimate extreme; we must find a balance”.

On Culturism

This post is the second of what will likely be a series growing out of my thoughts on Jonathan Haidt’s “The Righteous Mind”. The first is here.

Also, this post was extracted from a longer essay that’s still in the works. It’s meant to be foundational more than earth-shattering.

I want to promote a word that I just don’t hear a lot these days: culturism. Analogous to racism, sexism, etc., “culturism” can be roughly defined a couple of different (not necessarily exclusive or exhaustive) ways:

  • discrimination against someone on the basis of their different culture
  • the belief that one culture is superior to others
  • cultural prejudice + power

I want to promote this word, because I want to make a much stronger claim. I believe that all of the different *-isms (racism, sexism, etc) are just second-order mental shortcuts for culturism. And just like everyone’s a little bit racist, everyone’s a little bit culturist.

Now I’ve used “culturism” and “culture” in that claim, but really “behaviourism” might have been a better choice of word if it wasn’t already taken to mean something entirely different. Culture and behaviour are all tied together though, so I’m just going to stick with culturism and note a few places where my usage might not match the intuitive definition.

The easiest way to see how racism is just a shortcut for culturism is to ask an old-school racist what they hate about black people. The answers they give you don’t vary much: lazy, dirty, and rude are all words that pop up. But note that none of those things are actually about skin colour! For the most part, old-school racists don’t actually hate people with black skin per se; they hate people with undesirable behaviours. Does anybody actually want people to be lazy, dirty, and rude? The racist has just incorrectly associated those behaviours with skin colour (“dirty” isn’t technically a behaviour, but hygiene and grooming are both cultural-behavioural).

President Obama is a great example of how this plays out. He was black, but he conformed to the cultural and behavioural stereotype of an upper-class white man. He was not culturally black in any negative way, either in the old-school-racism meaning or in the more modern sense (inner-city gangs, etc). While he still received some negative attention from true racists, in this case the exception proves the rule: people reify their mental shortcuts all the time. It shouldn’t be surprising that if people grow up associating black skin with all these negative qualities, then some of them will forget the original association and just react negatively to black skin. Likewise it shouldn’t be surprising that if a scientist grows up in an environment where that prejudice is normalized, they’ll go looking for explanations and come up with weird ideas like craniometry.

Sexism is a similar story, with the only catch being that it feels weird to talk about men and women having “different cultures”. However, gender roles mean that at least historically, there were different expectations around how men and women would behave. This is all we need to connect the dots. What were the arguments for why women shouldn’t work? Because they were seen as emotional and weak, and those were undesirable qualities for someone who worked. It wasn’t about womanhood per se, it was about a false association between womanhood and undesirable behaviours and properties (women are still, on average, physically weaker than men, but we’ve learned to look at the individual for properties now, which is a whole other essay).

Now if I’ve done my job you’re likely nodding along, or at least willing to accept my premise for the sake of argument. But you may not really see why this would be important. Racism is still racism is still wrong, whatever the exact mechanism.

Here’s a hint at the kicker: even though we’re mostly not racist anymore, we’re still really really culturist. We are still prejudiced against people who are lazy, dirty, and rude. We’re not biased against emotional people only because being emotionally attuned has now become a desirable quality; instead we bias ourselves against people who close off their emotions and act coldly.

This will all tie back into Haidt and his concept of “moral capital” as soon as I finish that essay, I promise!