“I Was Wrong” on Reopening Ottawa

Roughly two weeks ago I wrote a post Against Reopening Ottawa. Since then, my predictions have turned out to be mostly wrong.

First, here’s the chart I published at the time, showing a moving average of new cases in Ottawa:

And now here’s the same chart updated as of the data available today from Ottawa Public Health:

At first glance I sort of feel vindicated, since I predicted cases would keep going up, and they definitely did. But pretty much all of the particulars of my predictions were wrong. In relation to Ottawa cases, my three predictions were:

  • Based on the original data I inferred a steady doubling time of two weeks (a sketch of how such an estimate can be computed follows this list); actual new cases went up way faster than that, then flattened completely, and now seem to be decreasing again.
  • I predicted a brief slowdown in new cases starting around July 21st, due to Ottawa’s mandatory mask bylaw going into effect on July 7th. If you squint a little it kinda looks like that part might have been right, but on closer inspection it isn’t. The timing is off (the graph flattens well before masks would have an impact, and starts decreasing well after) and the sharp drop in new cases is so recent that it’s liable to disappear in the next few days anyway (Ottawa reports cases by first symptom where possible, so each new day’s data tends to backfill more cases over the last week or so).
  • I predicted that starting August 3rd or shortly after, cases would spike again due to stage 3 of reopening. This one isn’t right or wrong yet as we haven’t gotten there, but I’d no longer make the same prediction today.
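As referenced in the first bullet, here’s a minimal sketch of how a doubling time can be backed out of a case series (the counts below are placeholders, not the real Ottawa Public Health data):

```python
import numpy as np

# Placeholder daily new-case counts; the real series comes from
# Ottawa Public Health's published data.
new_cases = np.array([4, 5, 4, 6, 7, 8, 9, 11, 10, 13, 15, 16, 19, 22])

# 7-day moving average to smooth out day-to-day reporting noise.
smoothed = np.convolve(new_cases, np.ones(7) / 7, mode="valid")

# Fit a line to log(cases): the slope is the daily exponential growth rate.
days = np.arange(len(smoothed))
growth_rate = np.polyfit(days, np.log(smoothed), 1)[0]

# Doubling time follows directly from the growth rate.
print(f"doubling time: {np.log(2) / growth_rate:.1f} days")
```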

I also made some less rigorous province-level predictions that turned out to be (mostly) wrong:

  • Ontario did see a brief, more general increase in cases after my post, but it has since stopped, and if you zoom out, cases still seem to be trending clearly downward everywhere but Ottawa. I’ll give myself a win for predicting that Peel and the other previous hot spots were mostly under control at that point, but Ottawa is now one of the regions with the most reported daily cases, making it an outlier, not a pack leader.
  • British Columbia cases have continued to pick up again, but apparently in the same ways as Ontario; they have one region that’s become a hot spot and everywhere else is well under control. The main difference is that BC’s infections are so low to begin with that their hot spot is swamping their general numbers and making it look like the whole province is in trouble.
  • Alberta, I admit, I don’t really understand. Its data seems to have done the same thing as Ottawa’s: a sharp increase followed by an immediate flattening. But the province has only two major population centres (Edmonton and Calgary), and neither city on its own shows this pattern. If anybody has a better understanding, please leave a comment.

Finally, there are a few other miscellaneous points I wanted to make:

  • As I mentioned parenthetically above, Ottawa reports cases by date of first symptom where possible (or even date of infection, if the source is known), which means that new cases reported on a given day are almost always backdated by a few days and up to two weeks. Thus we should expect the last week or so of numbers to show up lower than they will be in the final tally. This is why I’m not convinced by the magnitude of the recent “dip” in new cases in my chart above, and it explains why what was a gradual uptick in my previous post turned into a sudden spike. It also partly vindicates me against the charge of “reading too much into the graph”, although since I didn’t realize any of this at the time, I can’t take credit for it. (See the toy illustration after this list.)
  • Ottawa Public Health addressed the recent spike shortly after I published my last post, claiming that it was unrelated to reopening and more related to private parties and lack of distancing in private spaces (in particular among younger age groups). This seems like a generally plausible explanation, and I don’t have any better guesses that explain the weird shape of the data. There’s been so much garbage floating around from the WHO and the CDC, it’s nice to get evidence that my local health authorities actually know more than I do about COVID.
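Here’s the toy illustration promised above: a simulation of how reporting by onset date makes the most recent days look artificially low. The delay distribution is invented for illustration.

```python
import random
from collections import Counter

random.seed(0)

# Suppose exactly 10 true cases begin on each of days 0..29, but each
# case is only reported 2-14 days after symptom onset, then backdated
# to the onset date (as Ottawa Public Health does where possible).
visible = Counter()
for onset_day in range(30):
    for _ in range(10):
        report_day = onset_day + random.randint(2, 14)
        if report_day <= 30:            # what the chart shows as of "today"
            visible[onset_day] += 1

# True incidence is flat at 10/day, yet the tail of the chart droops;
# later reports will gradually backfill it.
for day in range(22, 30):
    print(f"day {day}: {visible[day]}/10 cases visible so far")
```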

Musical Outgroups

[Content warning: Politics. Something I will regret writing.]

A lot of this extends from Scott Alexander’s I Can Tolerate Anything Except the Outgroup, but if you don’t want to read the whole thing I’ll quote a few key definitions up front. Specifically:

The Red Tribe is most classically typified by conservative political beliefs, strong evangelical religious beliefs, creationism, opposing gay marriage, owning guns, eating steak, drinking Coca-Cola, driving SUVs, watching lots of TV, enjoying American football, getting conspicuously upset about terrorists and commies, marrying early, divorcing early, shouting “USA IS NUMBER ONE!!!”, and listening to country music.

The Blue Tribe is most classically typified by liberal political beliefs, vague agnosticism, supporting gay rights, thinking guns are barbaric, eating arugula, drinking fancy bottled water, driving Priuses, reading lots of books, being highly educated, mocking American football, feeling vaguely like they should like soccer but never really being able to get into it, getting conspicuously upset about sexists and bigots, marrying later, constantly pointing out how much more civilized European countries are than America, and listening to “everything except country”.

(There is a partly-formed attempt to spin off a Grey Tribe typified by libertarian political beliefs, Dawkins-style atheism, vague annoyance that the question of gay rights even comes up, eating paleo, drinking Soylent, calling in rides on Uber, reading lots of blogs, calling American football “sportsball”, getting conspicuously upset about the War on Drugs and the NSA, and listening to filk – but for our current purposes this is a distraction and they can safely be considered part of the Blue Tribe most of the time)

And then the kicker:

And my hypothesis, stated plainly, is that if you’re part of the Blue Tribe, then your outgroup isn’t al-Qaeda, or Muslims, or blacks, or gays, or transpeople, or Jews, or atheists – it’s the Red Tribe.

Scott’s post was written in 2014, which now feels like a very different time. I’m not good at fancy metaphors and stories the way Scott is, so instead of gently guiding you to my point I’m just going to say it: I don’t think the definitions of these tribes, or the description of the Red Tribe as the Blue Tribe’s outgroup, are correct anymore. Things are different here in 2020.

After four years of Trump as president, the Red Tribe has changed in a couple of important ways: it’s gotten smaller, and it’s gotten weirder. A lot of moderate Republicans and previously-Red-Tribe folks have been disgusted by Trump, and while it might be a stretch to say they’ve completely crossed the floor, it’s hard to call them Red Tribe anymore. As a result, the folks that remain in the Red Tribe have consolidated around increasingly explicit anti-science beliefs and other strongly polarized and “fringe-feeling” positions.

That combination of being both smaller, and weirder or more “fringe-feeling”, is really important, because all of a sudden the Red Tribe doesn’t make a good outgroup for the Blue Tribe: it’s not close enough, and it’s not dangerous enough. To quote Scott again:

Freud spoke of the narcissism of small differences, saying that “it is precisely communities with adjoining territories, and related to each other in other ways as well, who are engaged in constant feuds and ridiculing each other”. Nazis and German Jews. Northern Irish Protestants and Northern Irish Catholics. Hutus and Tutsis. South African whites and South African blacks. Israeli Jews and Israeli Arabs. Anyone in the former Yugoslavia and anyone else in the former Yugoslavia.

So what makes an outgroup? Proximity plus small differences. If you want to know who someone in former Yugoslavia hates, don’t look at the Indonesians or the Zulus or the Tibetans or anyone else distant and exotic. Find the Yugoslavian ethnicity that lives closely intermingled with them and is most conspicuously similar to them, and chances are you’ll find the one who they have eight hundred years of seething hatred toward.

(Tangential note that this is mostly what I was trying to express with my Law of Cultural Proximity.)

Now clearly this process isn’t finished yet, and it may still reverse: the Red Tribe is still a decently large percentage of the American population, and remains a strong political force capable of opposing Blue Tribe values. But from the position of somebody already living in a Blue Tribe bubble, the Red Tribe suddenly starts to feel too “distant and exotic” to be a proper outgroup. A game of Musical Outgroups begins: the Blue Tribe needs to find a new outgroup.

Again following the narcissism of small differences, the obvious candidate for a new Blue Tribe outgroup is of course the still-half-formed Grey Tribe. But unfortunately the Grey Tribe really is only half-formed, and until recently there was a pretty healthy spread of people across the Blue-Grey spectrum. Categories are fundamentally human constructions (see e.g. The Categories Were Made for Man, Not Man For The Categories), so the Blue Tribe isn’t interested in picking out only the folks who satisfy some platonic ideal of Grey-Tribe-ness as their outgroup; they’re just going to slap a line somewhere in the middle of the Blue-Grey spectrum and call it a day. Besides the obvious Silicon Valley Grey-Tribe tech-bros, who else is on the far side of that line? Neoliberals.

In the new American order, the tribal landscape is more fragmented than before. The Red and Blue Tribes have both become smaller and more politically extreme versions of their 2017 selves, and three tribes are being forcefully ejected into the wilderness as a result. The Red Tribe is purging itself of compassionate conservatives and the not-explicitly-anti-science (think Ross Douthat and Mitt Romney); I’ll call these the Pink Tribe. The Blue Tribe is purging itself of the previously-defined Grey Tribe, as well as a moderately large contingent of non-Grey neoliberals typified by people like Hillary Clinton; I’ll call them the Purple Tribe.

What comes next is hard to predict. Pink and Purple seem like natural allies, and I can see the Grey Tribe joining that alliance for pragmatic “enemy-of-my-enemy” reasons. But the two-party system throws a real wrench into things. Perhaps, if the Red Tribe continues to shrink and lose cultural relevance, the two-party divide will pivot (as it has before) to be a Blue-Tribe vs Pink-Purple-Grey-Tribe division. On the other hand, if the Red Tribe begins to recover post-Trump, or if Pink, Purple, and Grey can’t find enough common ground, then I can see the smaller tribes being squeezed out of existence between dominant Blue and Red cultural forces.

Against Reopening Ottawa

I.

This is not the COVID post I thought I would be writing a few weeks ago. I honestly didn’t think I’d be writing a COVID post at all.

A few weeks ago, most of Canada seemed to be under control. There were a few hot spots left in e.g. Southern Ontario, but almost any other graph you looked at showed a nice, clean downward trend. How quickly things change. I live in Ottawa, so I’m going to focus there. This data comes directly from Ottawa Public Health (though the graph is mine):

You can see things getting consistently better in May, staying quite steady throughout June, and then starting to creep back up around the beginning of July.

This is a huge problem.

It may seem like I’m exaggerating slightly – after all, Ottawa (a city of 1 million people) went from roughly 4 new cases a day to roughly 8 new cases a day over the span of two weeks. That hardly seems comparable to the huge surges being seen in the southern United States or other problem areas. Ottawa’s health care system and hospitals have plenty of capacity. Our testing turnaround time remains under 48 hours, and our testing throughput remains high. We have a mandatory mask bylaw in place. There seem to be a lot of things going right.

In point of fact there are a lot of things going right – I’d still rather be in Ottawa than in any part of the US. But just because some things are going well doesn’t mean we’re not still in trouble. Going from 4 to 8 new cases a day is an increase, and any increase is really bad news.

II.

At the risk of rehashing a topic everybody is sick of at this point: virus spread is modelled exponentially, and the growth is governed by the reproduction number (the “R0” everybody keeps talking about): the average number of new people each infected person goes on to infect. R0 isn’t a magical fixed value; when something changes (e.g. people start wearing masks), the effective R0 for the virus changes too. When R0 is less than one, the virus gradually fades out of the population, as was happening in Ottawa in May. When R0 is exactly one, the number of new cases stays flat, as was happening in June. When R0 is greater than one, the virus starts picking up steam and spreading again.
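As a minimal mathematical sketch (assuming, purely for illustration, a fixed delay $\tau$ between successive generations of infection), daily new cases $N(t)$ evolve roughly as

$$N(t) \approx N_0 \, R_0^{\,t/\tau},$$

so $R_0 < 1$ gives exponential decay, $R_0 = 1$ a flat line, and $R_0 > 1$ exponential growth, with a doubling time of $T_d = \tau \ln 2 / \ln R_0$.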

R0 in Ottawa is, clearly, greater than one at this point, and has been since roughly the beginning of July (or maybe a little later, depending on how much you’re willing to smooth the graph). This isn’t a big deal for today – we can still handle 8 new cases a day, or 10, or 12. But the thing about positive exponential growth is that it keeps going up, faster and faster and faster. As a very rough approximation, let’s assume that the last two weeks are representative of R0 in Ottawa right now, and that our new cases continue to double every two weeks. That would mean that by August 1st we’d be dealing with 16 new cases per day. Not too bad. By the end of August, 64 new cases a day – bad (until very recently we had fewer than 60 active cases total), but still manageable. By the end of September though… 250 cases a day, which is probably more than we can handle. And so on. If the trend continued through to Christmas (which is, granted, very unlikely) then we’d be looking at roughly 16,000 new cases a day.
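Those numbers come from simply counting doublings. Here’s a minimal sketch that reproduces them (the July 18th baseline of 8 cases a day is my rough read of the chart, not an official figure):

```python
from datetime import date, timedelta

cases = 8                      # rough new-cases-per-day baseline
day = date(2020, 7, 18)        # approximate date of writing
doubling_time = timedelta(weeks=2)

# Double the daily case count every two weeks through Christmas.
while day <= date(2020, 12, 25):
    print(f"{day}: ~{cases} new cases/day")
    cases *= 2
    day += doubling_time
```

The last line printed lands on December 19th at 16,384 cases a day – the “roughly 16,000” figure above.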

Now there’s a lot of reasons we’re very unlikely to hit 250 cases a day, let alone 16000. If new cases increased that much I assume the city would re-institute some kind of lockdown, and at that kind of load other factors like herd immunity would start kicking in as well. But when cases are already increasing it seems like a really bad time to start reopening even more. And yet…

III.

The general consensus seems to be that there’s a minimum of two weeks of lag between the development of actual cases and their reporting. This time accounts for how long it takes somebody who’s been exposed to incubate the virus, develop symptoms, and go get tested. Of course in some places where testing is overloaded the delay can be much more than that, but Ottawa is not overloaded, so let’s assume two weeks.

Ottawa officially entered “phase 2” of our reopening on June 12th, though in practice most businesses were not ready on the day of, and reopened piecemeal over the following week; let’s take an average reopening date of June 15th. Two weeks after that brings us to June 29th, which is (surprise!) right at the beginning of our uptick in cases. This is a bit of evidence that our “phase 2” reopening was in fact too much; R0 is now back above one, and the virus is spreading.

In worse news, despite phase 2 already being too much, Ottawa officially entered “phase 3” of our reopening yesterday (July 17th). Not only does this seem like a bad idea in general given that R0 is already back above one, but phase 3 includes a huge swath of very risky activities whose impact on R0 will almost certainly be far greater than the impact of phase 2: indoor service at restaurants and bars, movie theatres, museums, etc. As with phase 2, a lot of businesses weren’t ready day of; if we take an actual phase 3 reopening date of July 20th, and add two weeks, it’s easy to see an even bigger spike of cases coming down the pipe, starting around August 3rd. (August 3rd is a statutory holiday here, so in practice I expect the data might not show up until a few days later.)

The one bright spot in all this is that Ottawa made masks mandatory while indoors, starting July 7th. That will presumably have a big impact on transmission rates, but since it happened less than two weeks ago, we won’t see it in the data yet. Hopefully new cases start to drop again around July 21st (two weeks after mandatory masks), and if we’re very lucky then that decrease will entirely counteract the increase from both the phase 2 and phase 3 reopenings. But that seems like a lot to ask, especially since many people were already voluntarily wearing masks even before the mandate. Only time will tell.

Ultimately, I predict continued increases in Ottawa until around July 21st, at which point the trend will reverse due to mandatory masks, and we’ll see decreases again until August 3rd or shortly afterwards. Then we’ll see the impact of the phase 3 reopening, and I can only imagine that it’s going to be bad. I suspect by late August it will be clear that phase 3 is unsustainable and will have to be rolled back. I only hope we don’t have to learn that lesson the hard way.

IV.

[This section is more of an appendix of other little things I didn’t fit in the main post.]

Somebody I discussed this with argued that I’m reading too much into the graph I presented. The data from the end of June up to July 9th actually looks well in line with the rest of June, and the growth after that point could very well just be random variance, as was likely the case with the brief spike from June 7th to 13th. I suppose this is possible, though it feels unlikely to me. Time will tell whether cases continue to rise or not.

Alberta and BC (two other provinces) are also seeing recent spikes in cases after reopening, though oddly Ontario (the province that Ottawa is actually in) has not. I haven’t dug into the regional data to back this up, but I imagine it’s because the Peel region and the other “hot spots” I referenced earlier are finally under control and decreasing rapidly, which is balancing out the gradual increase in Ottawa and other places. Again, time will tell. I expect we’re already at the bottom of that particular wave, so Ontario-wide cases should start ticking up again (slowly) this week.

Atemporal Ethical Obligations

[All the trigger warnings, especially for the links out. I’m trying to understand and find the strongest version of an argument I heard recently. I’m not sure if I believe this or not.]

Edit: This was partly a hidden reductio ad absurdum. I thought it was weird enough to make that obvious, but I forgot that this is the internet (and that I actually have people reading my blog who don’t know me IRL).

It is no longer enough just to be a “good person” today. Even if you study the leading edge of contemporary morality and do everything right according to that philosophy, you are not doing enough. The future is coming, and it will judge you for your failures. We must do better.

This may sound extreme, but it is self-evidently true in hindsight. Pick any historical figure you want. No matter their moral stature during their lifetime, today we find something to judge. George Washington owned slaves. Abraham Lincoln, despite abolishing slavery in the United States, opposed black suffrage and inter-racial marriage. Mary Wollstonecraft arguably invented much of modern feminism, and still managed to write such cringe-worthy phrases as “men seem to be designed by Providence to attain a greater degree of virtue [than women]”. Gandhi was racist. Martin Luther King Jr abetted rape. The list goes on.

At an object level, this shouldn’t be too surprising. Society has made and continues to make a great deal of moral progress over time. It’s almost natural that somebody who lived long ago would violate our present day ethical standards. But from the moral perspective, this is an explanation, not an excuse; these people are still responsible for the harm their actions caused. They are not to be counted as “good people”.

It’s tempting to believe that today is different; that if you are sufficiently ethical, sufficiently good, sufficiently “woke” by today’s standards, that you have reached some kind of moral acceptability. But there is no reason to believe this is true. The trend of moral progress has been accelerating, and shows no signs of slowing down. It took hundreds of years after his death before Washington became persona non grata. MLK took about fifty. JK Rowling isn’t even dead yet, and beliefs that would have put her at the liberal edge of the feminist movement thirty years ago are now earning widespread condemnation. Moral progress doesn’t just stop because it’s 2020. This trend will keep accelerating.

All of this means that looking at the bleeding edge of today’s moral thought and saying “I’m living my life this way, I must be doing OK” is not enough. Anybody who does this will be left behind; in a few decades, your actions today will be recognized as unethical. The fact that you lived according to today’s ethical views will explain your failings, but not excuse them. Thus, in order to be truly good people, we must take an active role, predict the future of moral progress, and live by tomorrow’s rules, today.

Anything else is not enough.

Self-Predicting Markets

The story of Hertz has fundamentally changed how I view the stock market. This isn’t a novel revelation – now that I understand it, I’ve seen the core insight mentioned elsewhere – but it took a concrete example to really drive the point home.

The short version of the Hertz story is that the company went bankrupt. They have nearly $20 billion in debt, and as far as anybody can tell, no path to recovery; they’re bankrupt because their business model has been losing money for years despite several attempts to turn things around. The twist? Their stock is still trading, and at time of writing they have a market cap of $900 million.

I notice I am confused.

On any intuitive understanding of the market this shouldn’t be possible. The company is literally worthless. Or really, worse – it’s less than worthless given its debt load. People are paying positive money to own negative money. On a naive view this is another nail in the coffin of the Efficient Market Hypothesis.

After noticing that I was confused, I tried to generate hypotheses to explain this story:

  • Maybe the EMH really is wrong and the markets are nonsense.
  • Maybe bankruptcy laws are so complex and tangled that the expected value of the company really is positive after all is said and done.
  • Maybe the markets expect Hertz to get a government bailout for some reason.

Some of these are plausible (in particular the second), but none of them were particularly satisfying, so I tried asking myself why I, in a hypothetical world, would buy Hertz stock in this situation. I gave myself the standard answer: because I expected the stock to go up in value in the future. Then I realized that this answer has nothing to do with the value of the company.

I had been making the mistake of viewing the stock market as a predictor of company value over the short-to-medium term, but this isn’t true. The stock market is a predictor of itself over the short-to-medium term. If people think the stock will go up tomorrow, then the stock will go up today – it doesn’t matter what the value of the company does at all. The company can be literally worthless, and as the Hertz story proves, people will still buy as long as they think the stock will go up tomorrow.

Now in practice, there are a bunch of traders in the market who trade based on the expected value of the company. As long as these people hold a majority, or at least a plurality, everybody else is destined to follow their lead: if the expected value of the company goes up, then the expected value of the stock goes up, as long as enough people are trading based on company value. But in cases like Hertz, the expected value of the company is nothing, so value-based traders exit the market entirely. This leaves only the “shallow” stock-based traders, whose rational move is now to trade based on an expected value of the stock that is completely divorced from reality.
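Here’s a toy simulation of that dynamic (everything in it – the update rule, the weights, the noise – is invented for illustration, not a real market model):

```python
import random

def simulate(days, value_weight, fundamental=0.0, start_price=1.0):
    """Each day the price drifts toward a blend of the fundamental value
    (pushed by value-based traders) and a momentum extrapolation of the
    price's own recent trend (pushed by stock-based traders)."""
    prices = [start_price, start_price * 1.05]  # seed an initial uptrend
    for _ in range(days):
        momentum_target = prices[-1] + (prices[-1] - prices[-2])
        target = value_weight * fundamental + (1 - value_weight) * momentum_target
        noise = random.gauss(0, 0.01)
        prices.append(max(0.0, prices[-1] + 0.5 * (target - prices[-1]) + noise))
    return prices

random.seed(42)
# With value traders dominant, a worthless company's price decays toward zero:
print(f"value traders present: {simulate(60, value_weight=0.7)[-1]:.2f}")
# With value traders gone (the Hertz case), the price just follows itself:
print(f"value traders absent:  {simulate(60, value_weight=0.0)[-1]:.2f}")
```

The point isn’t the specific numbers; it’s that nothing in the second run ever references the company’s value at all.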

The market is really weird.

The Law of Cultural Proximity

[Not my area of expertise, but I would be surprised if the core thesis was wrong in a significant way. Probably not as original as I think it is. Based on a previous blog post of mine that went in a very different/weird direction.]

Introduction

Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States currently have Overton windows that tend to overlap on some policies (the punishment of theft) but perhaps not on others (social welfare).

Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a very real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran or North Korea. In the other direction, there are many distinct cultures entirely within the U.S. with different norms around things like gun control. The categories were made for man, not man for the categories.

However blurry these lines are, it might be tempting to assume that they get drawn roughly according to geography; that assumption is certainly reflected in our language (note my use of “western culture” already in this post). But this isn’t quite right: the key factor is actually interactional proximity; it’s just that in a historical setting, geographical and interactional proximity were the same thing. Time for an example.

Ooms and Looms

Back in the neolithic era, the tribe of Oom and the tribe of Loom occupied opposite banks of their local river. These two tribes were closely linked in every aspect: geographically, linguistically, economically, and of course, culturally. Because the Ooms and the Looms were forced into interaction on such a regular basis, it was functionally necessary that they shared the same cultural norms in broad strokes. There was still room for minor differences of course, but if one tribe started believing in ritual murder and the other didn’t, that was a short path to disagreement and conflict.

Of course, neolithic tribes sometimes migrated, which is what happened a short time later when the tribe of Pa moved into the region from a distant valley. Compared to the Ooms and the Looms, the Pas were practically alien: they had different customs, different beliefs, and spoke a different language altogether. Unsurprisingly, a great deal of conflict resulted. One day an amorous Oomite threw a walnut towards a Pa, which was of course a common courting ritual among both the Ooms and the Looms. Unfortunately, the Pa saw it as an act of aggression. War quickly followed.

Ultimately, the poor Pa were outnumbered and mostly wiped out. The remaining Pa were assimilated into the culture of their new neighbours, though a few Pa words stuck around in the local dialect. Neolithic life went on.

In this kind of setting, you could predict cultural similarity between two people or groups purely based on geographic proximity. It was possible to have two very different peoples living side by side, but this was ultimately unstable. In the long run, such things resulted in conflict, assimilation, or at best a gradual homogenization as memes were exchanged and selected. But imagine an only-slightly-different world where the river between the Ooms and the Looms was uncrossable; we would have no reason to believe that Oom culture and Loom culture would look anything alike in this case. The law that describes this is the law of cultural proximity:

In the long run, the similarity between two cultures is proportional to the frequency with which they interact.

More First Contact

Hopefully the law of cultural proximity was fairly self-evident in the original world of neolithic tribes. But over time, trade and technology started playing an increasing role in people’s lives. The neolithic world was simple because interactions between cultures were heavily mediated by geographic proximity, but the advent of long-distance trade started to wear away at that principle. Ooms would travel to distant lands, and they wouldn’t just carry home goods; they would carry snippets of culture too. Suddenly cultures separated by great distances could interact more directly, even if only infrequently. Innovations in transportation (roads, ship design, etc) made travel easier and further increased the level of interaction.

This gradual connecting of the world led to a substantial number of conflicts between distant cultures that wouldn’t even have known about each other in a previous age. The Ooms and the Looms eventually ran into their neighbours the Dooms, who conquered and assimilated them both in order to control their supply of cocoa. Victorious in successive conflicts, the Dooms formed an empire, developed new technologies, and expanded their reach even farther afield. On the other side of a once-uncrossable sea, the Dooms met the Petys; they interacted infrequently at first, but over time their cultures homogenized until they were basically indistinguishable from each other.

The Great Connecting

Now fast-forward to the modern day and take note of the technical innovations of the last two centuries: the telegraph, the airplane, the radio, the television, the internet. While the prior millennia saw a gradual connecting of the world’s cultures, the last two hundred years have seen a massive step change: the great connecting. On any computer or phone today, I can easily interact with people from a hundred different countries around the globe. Past technologies metaphorically shrank the physical distance between cultures; the internet eliminates that distance entirely.

But now remember the law of cultural proximity: the similarity between two cultures is proportional to the frequency with which they interact. This law still holds over the long run. However, the internet is new, and the long run is long. We are currently living in a world where wildly different cultures are interacting on an incredibly regular basis via the internet. The result should not be a surprise.

The Future

[This section is much more speculative and less confident than the rest.]

The implications for the future are… interesting. It seems inevitable that in a couple of generations the world will have a much smaller range of cultures than it does today. The process of getting there will be difficult, and sometimes violent, but the result will be a more peaceful planet with fewer international disagreements or “culture wars”. A unified world culture also seems likely to make a unified world government possible. Whether the UN or some other body takes on this role, I expect something in that space to grow increasingly powerful.

While a stable world government seems like it would be nice, homogeneity has its pitfalls. There’s a reason we care about ecological diversity so much.

Of course in the farther future, human culture will fragment again as it spreads into space. The speed of light is a hard limit, and while our first Martian colony will likely stay fairly connected to Earth, our first extra-solar colony will be isolated by sheer distance and will be able to forge its own cultural path. Beyond that, only time will tell.

Narrative Direction and Rebellion

This is the fourth post in what has been a kind of accidental series on life narratives. Previously: Narrative Dissonance, Where the Narrative Stops, and Narrative Distress and Reinvention.

In Where the Narrative Stops I briefly mentioned the hippie revolution as a rebellion against the standard narrative of the time. This idea combined in my brain a while ago with a few other ideas that had been floating around, and now I’m finally getting around to writing about it. So let’s talk about narrative rebellions.

I’ve previously defined narratives as roughly “the stories we tell about ourselves and others that help us make sense of the world”. As explored previously in the series, these stories provide us with two things critical for our lives and happiness: a sense of purposeful direction, and a set of default templates for making decisions. So what happens when an individual or a demographic group chooses to rebel against the narrative of the day? It depends.

Rebellions are naturally framed in the negative: you rebel against something. With a little work you can manage to frame them positively, as in “fighting for a cause”, but the negative framing comes more naturally because it’s more reflective of reality. While some rebellions are kicked off by a positive vision, the vast majority are reactionary: the current system doesn’t work, so let’s destroy it. Even when there is a nominally positive vision (as in the Russian Revolution, which could be framed as a “positive” rebellion towards communism) there is usually also a negative aspect intermingled (the existing Russian army was already primed to mutiny against Russia’s participation in the First World War), and it can be difficult to disentangle the different causes.

In this way, narrative and socio-cultural rebellions are not that different from militaristic and geo-political ones. You can sometimes attach a positive framing, but the negative framing is both the default and usually the dominant one.

We’ll come back to that. For the moment let’s take a quick side-trip to Stephen Covey’s Principle-centered Leadership. One of the metaphors he uses in that book (which I didn’t actually include in my post about it, unfortunately) is the idea of a compass and a map. Maps can be a great tool to help you navigate, but Covey really hammers on the fact that it’s better to have a compass. Maps can be badly misleading if the mapmaker left off a particular piece of information you’re interested in; they can also simply go stale as the landscape shifts over time. A compass on the other hand (meaning your principles, in Covey’s metaphor), always points due North, and is a far more reliable navigational tool.

This navigational metaphor is really useful when extended for talking about narratives and rebellions. One of the most important things a narrative gives us is that “sense of purposeful direction” which carries us through life. Without it, as in Where the Narrative Stops, narratives tend to peter out after a while or even stop abruptly on a final event (the way a “student” narrative may stop on graduation if you don’t know what you actually want to do with the degree).

The problem is that rebelling against a narrative doesn’t automatically generate a fully-defined counter-narrative (roughly analogous to how reversed stupidity isn’t intelligence). If you don’t like the direction things are going, you can turn around and walk the other way. But there’s no guarantee the other way actually goes anywhere, and in fact it usually doesn’t; a random walk through idea-space is very unlikely to generate a coherent story. Even when you have a specific counter-narrative in mind, there are good odds it still doesn’t actually work. See again the Russian Revolution for an example; the revolutionaries ended up with a strong positive vision for communism, but that vision ultimately collapsed under the weight of economic and political realities.

This lack of destination seems to me the likely reason the hippie rebellion petered out. The hippies had a strong disagreement with the status quo, and chose to walk in the direction of “free love” and similar principles instead. But this new direction mostly failed to translate into a coherent positive vision, and even when it did, that vision didn’t work. Most stories I’ve been able to find of concrete hippie-narrative experiments end up sounding a lot like the Russian Revolution: they ultimately collapse under the weight of reality.

Given the high cost of a rebellion, be it individual or societal, militaristic or narrative, it seems prudent to set yourself up for success as much as possible beforehand. In practice, this seems to mean having a concrete positive vision backed by strong evidence that it will actually work in reality. Otherwise, tearing down the system will just leave you with rubble.

What We Owe to Ourselves

“You can never make the same mistake twice because the second time you make it, it’s not a mistake, it’s a choice.”

Steven Denn

Something that has been kicking around my mind for the last little while is the relationship between responsibility and self-compassion. A couple of people recently made some very pointed observations about my lack of self-compassion, and it provoked a strange sadness in me. Sadness because while I know that their point is true – I am often very hard on myself, to the detriment of my happiness – those thought patterns seem so philosophically necessary that I have been unable to change them. This post is my attempt to unpack and understand that philosophical necessity.

Our ability to feel compassion is intimately tied to our judgement of responsibility in a situation. If you get unexpectedly laid off from your job, that’s terrible luck and most people will express compassion. However, if you were an awful employee who showed up late and did your job poorly, then most people aren’t going to be as sympathetic when you finally get fired. As the saying goes: you’ve made your bed, now lie in it. More abstractly, we tend to feel less compassion for someone if we think that they’re responsible for their own misfortune. This all tracks with my lack of self-compassion, as I have also been told that I have an overdeveloped sense of personal responsibility. If I feel responsible for something, I’m not going to be very compassionate towards myself when it goes wrong; I’m going to feel guilt or shame instead.

Of course, this raises the question of what we’re fundamentally responsible for; the question of compassion is less relevant if I actually am responsible for the things that are causing me grief. People largely assume that we’re responsible for our own actions, and this seems like a reasonable place to start. It makes sense, because our own actions are where we have the clearest sense of control. While we can control parts of the outside world, that process is less direct and less exact. Our control over ourselves is typically much clearer, though still not always perfect.

If we assume that we have control over ourselves and our actions, this means that we also have responsibility for ourselves and our actions. If we avoid or ignore that responsibility and we’re not happy with the consequences, we don’t deserve much, if any, compassion: it’s our own damn fault. This all seems… normal and fairly standard, I think, but it’s an important foundation to build on.

Freedom, and Responsibility over Time

Now let’s explore what it means to be responsible for our actions, because that can be quite subtle. Sometimes our choices are limited, or we take an action under duress. Even ignoring those obvious edge cases, our narrative dictates the majority of our day-to-day decisions. What responsibility do we bear in all of these cases? Ultimately I believe in a fairly Sartrean version of freedom, where we have a near-limitless range of possible actions every day, and are responsible for which actions we take. Obviously some things are physically impossible (I can’t walk to Vancouver in a day), but there are a lot of things that make no sense in the current framework of my life that are still theoretically options for me. If nothing else, I could start walking to Vancouver today.

Assuming that we’re responsible for all of our actions in this fairly direct way, we also end up responsible for the consequences of actions not taken on a given day. There is a sense in which I am responsible for not walking to Vancouver today, because I chose to write this essay instead. I am responsible for my decision to write instead of walk, and thus for the consequence of not being on my way to Vancouver. This feels kind of weird and a bit irrelevant, so let’s recast it into a more useful example.

A few hours from now when I finish this essay, I’ll be hungry to the point of lightheadedness because I won’t have eaten since breakfast. Am I responsible for my future hunger? There’s a certain existential perspective in which I’m not, since it’s a biological process that happens whether I will it or not. But it’s equally true that I could have stopped writing several hours earlier, put the essay on hold, and had lunch at a reasonable time. I am definitely responsible for my decision to keep writing instead of eating lunch, and so there is a pretty concrete way in which I am at least partly responsible for my hunger.

This isn’t to say that I’m necessarily going to be unhappy with that decision; even in hindsight I may believe that finishing the essay in one fell swoop was worth a little discomfort. But it does mean that I can’t avoid taking some responsibility for that discomfort. And, since I’m responsible for it, it’s not something I can feel much self-compassion over; if I decide in hindsight that it was a terrible decision, then it was still a decision that I freely made. If I experience a predictable consequence of that decision, it’s my own damn fault.

This conclusion still feels pretty reasonable to me, so let’s take a weirder concrete example, and imagine that in three months I get attacked on the street.

Predictability in Hindsight

I don’t have a lot of experience with physical violence, so if I were to get attacked in my current state then I would likely lose, and be badly hurt, even imagining my attacker does not have a weapon. To what degree am I responsible for this pain? Intuitively, not at all, but again, the attack happens three months from now. I could very well decide to spend the next three months focused on an aggressive fitness and self-defence regimen (let’s assume that this training would be effective, and that I would not get hurt in this case). Today, I made the decision to write this essay instead of embarking on such a regimen; in the moment today, this decision to write is clearly one that I am responsible for. Does this mean I’m responsible for the future pain that I experience? I’d much rather avoid that pain than finish this essay, so maybe I should stop writing and start training!

The flaw in this argument, of course, is that I don’t know that I’m going to get attacked in three months. In fact, it seems like something that’s very unlikely. In choosing not to train today, I shouldn’t accept responsibility for the full cost of that future pain; I should only take responsibility for the small amount that is left when that cost is weighted by the small probability the attack will actually occur. If I did somehow know with perfect accuracy that I was going to be attacked, then the situation would be different: I would feel responsible for not preparing appropriately, in the same way I would feel responsible for not preparing for a boxing match that I’d registered for.
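To make that weighting concrete (the numbers here are invented for illustration): if the attack has probability $p$ and would cost me $C$ units of suffering, then the responsibility I take on by not training corresponds only to the expected cost,

$$\mathbb{E}[\text{cost}] = p \cdot C,$$

so at $p = 0.001$, skipping training leaves me on the hook for a mere $0.001\,C$, while with certain foreknowledge ($p = 1$) the full $C$ lands on me.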

All of this seems to work pretty well when looking forward in time. We make predictions about the future, weight them by their probability, and take action based on the result. If an action leads to bad results with high probability and we do it anyway, then we are responsible for that and don’t deserve much sympathy. We rarely go through the explicit predictions and calculations, but this seems to be the general way our subconscious works.

But what about looking backward in time? Let’s say I decide not to train because I think it is unlikely I will be attacked, and then I get attacked anyway. Was this purely bad luck, or was my prediction wrong? How can we tell the difference?

Depending on the perspective you take you can get pretty different answers. Random street violence is quite rare in my city, so from that perspective my prediction was correct. Hindsight is a funny thing though, because in hindsight I know that I was attacked; in hindsight probabilities tend to collapse to either 0 or 1. And knowing that I was attacked, I can start to look for clues that it was predictable. Maybe I realize that, while street violence is rare in the city overall, I live in a particularly bad neighbourhood. Or maybe I learn that while it’s rare most of the time, it spikes on Tuesdays, the day when I normally go for a stroll. If I’d known these things initially, then I would have predicted a much higher probability of being attacked. Perhaps in that case I would have decided to train, or even take other mitigating steps like moving to a different neighbourhood. Who knows.

What this ultimately means is that I can only possibly be responsible for being hurt in the attack if I’m somehow responsible for failing to predict that attack. I’m only responsible if, for some reason, I “should have known better”.

Taking Responsibility for our Predictions

[I should take this opportunity to remind people that this is all hypothetical. I was not attacked. I’m just too lazy to keep filling the language with extra conditional clauses.]

At this point I’ve already diverged slightly from the orthodox position. A large number of people would argue that my attacker is solely responsible for attacking me, and that I should accept no blame in this scenario. This certainly seems true from the perspective of law and justice. But in this essay I’m focused on outcomes, not on justice; ultimately I experienced significant pain, pain that I could have avoided had I made better predictions and thus taken better actions.

Let’s return to the issue of whether or not I can be held responsible for failing to predict the attack. There is a continuum here. In some scenarios, the danger I was in should have been obvious, for example if my real-estate agent warned me explicitly about the neighbourhood when I moved in. In these scenarios, it seems reasonable to assign some blame to me for making a bad prediction. In other scenarios, there was really no signal, no warning; the attack was truly a stroke of random bad luck and even in hindsight I don’t see a way that I could have done better. In these scenarios, I take no responsibility; my prediction was reasonable, and I would make the same prediction again.

As with most things, practical experience tells me that real-world situations tend to fall somewhere in the middle. Nobody’s hitting you over the head with a warning, but neither is the danger utterly unpredictable; if you look closely enough, there are always signs. It is in these scenarios that my intuition fundamentally deviates from the norm. When something bad happens to me, I default to taking responsibility for it, and I think that’s the correct thing to do.

Of course, there are times when whatever it is is my fault in an uncontroversial way, but I’m not talking about those. I’m talking about things like getting attacked on the street, or being unable to finish a work project on time because somebody unexpectedly quit. These are the kind of things that I expect most people would say are “not my fault”, and I do understand this position. However, I think that denying our responsibility for these failures is bad, because it causes us to stop learning. Every time we wave away a problem as “not our fault” we stop looking for the thing we could have done better. We stop growing our knowledge, our skills, our model of the world. We stagnate.

This sounds really negative, but we can frame it in a much more positive way: there’s always something to learn, and we should always try to be better than we were. What I think gets lost for a lot of people is that this is not a casual use of “always”. Even in failures that are not directly our fault, there is still something to learn, and we should still use it as an opportunity to grow. Unless we perfectly predicted the failure and did everything we could to avoid or mitigate it, there is still something we could have done better. Denying our responsibility for our bad predictions is abdicating our ability to grow, change, or progress in life. Where does this leave us?

Any good Stoic will tell you that when something goes wrong, the only thing we have control over is our reaction, and this applies as much to how we assign responsibility as to anything else. We are responsible for the fate of our future self, and the only way to discharge that responsibility is to learn, grow, and constantly get better. The world is full of challenges. If we do not strive to meet them, we have no-one to blame but ourselves.

Winning vs Truth – Infohazard Trade-Offs

This post on the credibility of the CDC has sparked a great deal of discussion on the ethics of posts like it. Some people claim that the post itself is harmful, arguing that anything which reduces trust in the CDC will likely kill people as they ignore or reject important advice for dealing with SARS-CoV-2 and (in the long-run) other issues like vaccination. This argument has been met with two very different responses.

One response has been to argue that the CDC’s advice is so bad that reducing trust in it will actually have a net positive effect in the long run. This is an ultimately empirical question which somebody should probably address, but I do not have the skills or interest to attempt that.

The other response is much more interesting, arguing that appeals to consequences are generally bad, and that meta-level considerations mean we should generally speak the truth even if the immediate consequences are bad. I find this really interesting because it is ultimately about infohazards: those rare cases where there is a conflict between epistemic and instrumental rationality. Typically, we believe that having more truth (via epistemic rationality) is a positive trait that allows you to “win” more (thus aligning with instrumental rationality). But when more truth becomes harmful, which do we privilege: truth, or winning?

Some people will just decide to value truth more than winning as an axiom of their value system. But for the rest of us, I think this ultimately boils down to an empirical question of just how bad “not winning” will end up being. It’s easy to see that for sufficiently severe cases, natural selection takes over: any meme/person/thing that prefers truth over winning in those cases will die out, to be replaced by memes/people/things that choose to win. I personally will prefer winning in those cases. It’s also true that most of the time, truth actually helps you win in the long run. We should probably reject untrue claims even if they provide a small amount of extra short-term winning, since in the long run having an untrue belief is likely to prevent us from winning in ways we can’t predict.

Figuring out where the cut-over point lies between truth and winning seems non-trivial. Based on my examples above we can derive two simple heuristics to start off:

  • Prefer truth over winning by default.
  • Prefer winning over truth if the cost of not winning is the destruction of yourself or your community. (It’s interesting to note that this heuristic arguably already applies to SARS-CoV-2, at least for some people in at-risk demographics.)

What other heuristics do other people use for this question? How do they come out on the CDC post and SARS-CoV-2?

An Open Critique of Common Thought

[I was going through a bunch of old files and found this gem of an essay. If the timestamp on the file is accurate it’s from February 2010, which means it’s almost exactly ten years old and predates this blog by about three years. Past me was very weird, so enjoy!]

I am writing this essay as a critique of a fundamental and unsolvable problem in philosophy today. Our greatest minds refuse to acknowledge this problem, so I have humbly taken it upon myself to explore more fully this hidden paradox.

Amongst all of the different philosophies, religions, and world-views, there is one common theme, so utterly pervasive that it has never before been questioned, yet so utterly false upon deeper inspection that it boggles the mind. It is my hope that this short essay will act as a call to arms for the oppressed masses in the field of higher thought, and prompt them to action demanding an end to this conspiracy.

The problem, ladies and gentlemen, in long and in short, is that of existence. Every thought, every idea, every concept that humankind has ever had rests on the central pillar, the core belief, that we exist. Not content, of course, with this simpler sophistry, humankind has embarked on an even more heinous error of logic – we assume not only that we exist, but that other things exist as well.

It is at this point, of course, that your conditioning takes over – “Of course we exist”, you say, “how could it be otherwise”? This is the knee-jerk reaction typical of an oppressed thinker today, and the prevalence of this mindless assertion – calling it a failure of an argument would be too kind – worries me more than I can say about the future of our society. Beyond the obvious lack of critical thinking evidenced by such lemming-like idiocies, this simple error is the cause of deeper, more dangerous problems as well.

But I digress. I will leave the deeper analysis of this crisis to the historians who survive it, and turn my own meagre talents to the task of alerting the public of this travesty. It is with heart-felt distress that I type my final plea to you, the thinking public – “Do you believe”?