Changes in Reality

[Some short thoughts I just wanted to get out of my brain; bullet-points instead of well-structured prose. This is entirely random speculation.]

  • Social systems (laws, customs, memes) are subject to evolutionary pressure from the dynamics of reality; when reality changes, existing social systems are typically no longer in equilibrium and have to evolve, or collapse and be rebuilt. Consider for example the invention of the birth control pill and the resulting impact on family structure, gender relations, etc. Pre-pill social customs around marriage and family were no longer in equilibrium in a world with reliable female birth control, and so society shifted to a new set of customs.
  • “Change in reality” largely means economic and technological change. New wealth and new capabilities.
  • “Change in reality” has been accelerating for a long time as new technologies and discoveries unlock new economic prosperity which enables more discoveries, in an explosive feedback loop. Some argue that technology/science have slowed down a lot recently, but I think that’s mostly because our best and brightest are too busy extracting economic value from our recent innovations (computers and, separately, the internet). Once that bounty has been consumed, more general technological progress will resume its previous course.
  • There is a natural limit on how fast social systems can evolve. Humans can adapt to living under radically different memeplexes, but not instantly, and somebody has to invent those memes first. When reality changes slowly this is fine, as it leaves plenty of time for a multiplicity of experimental memetic shifts in different groups, letting the best adaptation dominate with high probability.
  • At some point in the future (possibly soon?) reality will start changing faster than our social systems can adapt. Our existing laws, customs, memes, and government will be out of equilibrium, but we will not have enough time to converge on a new social system before reality changes again. Society will fragment and human culture will undergo an intense period of adaptive radiation.
  • The countervailing force is technology’s ability to connect us (the “global village”) and equivalently the law of cultural proximity.

The Law of Cultural Proximity

[Not my area of expertise, but I would be surprised if the core thesis was wrong in a significant way. Probably not as original as I think it is. Based on a previous blog post of mine that went in a very different/weird direction.]

Introduction

Currently, different human cultures have different behavioural norms around all sorts of things. These norms cover all kinds of personal and interpersonal conduct, and extend into different legal systems in countries around the globe. In politics, this is often talked about in the form of the Overton window, which is the set of political positions that are sufficiently “mainstream” in a given culture to be considered electable. Unsurprisingly, different cultures have different Overton windows. For example, Norway and the United States currently have Overton windows that tend to overlap on some policies (the punishment of theft) but perhaps not on others (social welfare).

Shared norms and a stable, well-defined Overton window are important for the stable functioning of society, since they provide the implicit contract and social fabric on which everything else operates. But what exactly is the scope of a “society” for which that is true? We just talked about the differences between Norway and the U.S., but in a very real sense, Norway and the U.S. share “western culture” when placed in comparison with Iran or North Korea. In the other direction, there are many distinct cultures entirely within the U.S. with different norms around things like gun control. The categories were made for man, not man for the categories.

However blurry these lines are, it might be tempting to assume that they get drawn roughly according to geography; it’s certainly reflected in our language (note my use of “western culture” already in this post). But this isn’t quite right: the key factor is actually interactional proximity; it’s just that in a historical setting geographical and interactional proximity were the same thing. Time for an example.

Ooms and Looms

Back in the neolithic era, the tribe of Oom and the tribe of Loom occupied opposite banks of their local river. These two tribes were closely linked in every aspect: geographically, linguistically, economically, and of course, culturally. Because the Ooms and the Looms were forced into interaction on such a regular basis, it was functionally necessary that they shared the same cultural norms in broad strokes. There was still room for minor differences of course, but if one tribe started believing in ritual murder and the other didn’t, that was a short path to disagreement and conflict.

Of course, neolithic tribes sometimes migrated, which is what happened a short time later when the tribe of Pa moved into the region from a distant valley. Compared to the Ooms and the Looms, the Pas were practically alien: they had different customs, different beliefs, and spoke a different language altogether. Unsurprisingly, a great deal of conflict resulted. One day an amorous Oomite threw a walnut towards a Pa, which was of course a common courting ritual among both the Ooms and the Looms. Unfortunately, the Pa saw it as an act of aggression. War quickly followed.

Ultimately, the poor Pa were outnumbered and mostly wiped out. The remaining Pa were assimilated into the culture of their new neighbours, though a few Pa words stuck around in the local dialect. Neolithic life went on.

In this kind of setting, you could predict cultural similarity between two people or groups purely based on geographic proximity. It was possible to have two very different peoples living side by side, but this was ultimately unstable. In the long run, such things resulted in conflict, assimilation, or at best a gradual homogenization as memes were exchanged and selected. But imagine an only-slightly-different world where the river between the Ooms and the Looms was uncrossable; we would have no reason to believe that Oom culture and Loom culture would look anything alike in this case. The law that describes this is the law of cultural proximity:

In the long run, the similarity between two cultures is proportional to the frequency with which they interact.

More First Contact

Hopefully the law of cultural proximity was fairly self-evident in the original world of neolithic tribes. But over time, trade and technology started playing an increasing role in people’s lives. The neolithic world was simple because interactions between cultures were heavily mediated by geographic proximity, but the advent of long-distance trade started to wear away at that principle. Ooms would travel to distant lands, and they wouldn’t just carry home goods; they would carry snippets of culture too. Suddenly cultures separated by great distances could interact more directly, even if only infrequently. Innovations in transportation (roads, ship design, etc.) made travel easier and further increased the level of interaction.

This gradual connecting of the world led to a substantial number of conflicts between distant cultures that wouldn’t even have known about each other in a previous age. The Ooms and the Looms eventually ran into their neighbour the Dooms, who conquered and assimilated them both in order to control their supply of cocoa. Victorious in successive conflicts, the Dooms formed an empire, developed new technologies, and expanded their reach even farther afield. On the other side of a once-uncrossable sea, the Dooms met the Petys; they interacted infrequently at first, but over time their cultures homogenized until they were basically indistinguishable from each other.

The Great Connecting

Now fast-forward to modern day and take note of the technical innovations of the last two centuries: the telegraph, the airplane, the radio, the television, the internet. While the prior millennia saw a gradual connecting of the world’s cultures, the last two hundred years have seen a massive step change: the great connecting. On any computer or phone today, I can easily interact with people from one hundred different countries around the globe. Past technologies metaphorically shrank the physical distance between cultures; the internet eliminates that distance entirely.

But now remember the law of cultural proximity: the similarity between two cultures is proportional to the frequency with which they interact. This law still holds over the long run. However, the internet is new, and the long run is long. We are currently living in a world where wildly different cultures are interacting on an incredibly regular basis via the internet. The result should not be a surprise.

The Future

[This section is much more speculative and less confident than the rest.]

The implications for the future are… interesting. It seems inevitable that in a couple of generations the world will have a much smaller range of cultures than it does today. The process of getting there will be difficult, and sometimes violent, but the result will be a more peaceful planet with fewer international disagreements or “culture wars”. A unified world culture also seems likely to make a unified world government possible. Whether the UN or some other body takes on this role, I expect something in that space to grow increasingly powerful.

While a stable world government seems like it would be nice, homogeneity has its pitfalls. There’s a reason we care about ecological diversity so much.

Of course in the farther future, human culture will fragment again as it spreads into space. The speed of light is a hard limit, and while our first Martian colony will likely stay fairly connected to Earth, our first extra-solar colony will be isolated by sheer distance and will be able to forge its own cultural path. Beyond that, only time will tell.

A pithy quote

I was writing an internal thing about execution within software engineering, and Parkinson’s law, and ended up with the following pithy quote I wanted to share.

Within an engineering organization, all real work of note is in the form of making decisions. The rest is just typing.

Other Opinions #52 – Can We Avoid a Surveillance State Dystopia?

http://www.antipope.org/charlie/blog-static/2014/02/can-we-avoid-a-surveillance-st.html

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.

Betteridge’s Law suggests no, but the article itself is oddly optimistic. One thing is certain: technology is shifting the landscape of power faster than any government bureaucracy can adapt. We live in interesting times.

An Incremental Solution to an Artificial Apocalypse

So smart/famous/rich people are publicly clashing over the dangers of Artificial Intelligence again. It’s happened before. Depending on who you talk to, there are several explanations of why that apocalypse might actually happen. However, as somebody with a degree in computer science and a bent for philosophy of mind, I’m not exactly worried.

The Setup

Loosely speaking, there are two different kinds of AI: “weak” and “strong”. Weak AI is the kind we have right now; if you have a modern smartphone you’re carrying around a weak AI in your pocket. This technology uses AI strategies like reinforcement learning and artificial neural networks to solve narrow, singular problems: speech recognition, face recognition, translation, or (hopefully soon) self-driving cars. Perhaps the most publicized recent success of this kind of AI was when IBM’s Watson managed to win the Jeopardy! game show.
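To make that concrete, here’s a minimal sketch of the kind of narrow learner weak AI is built from: a tiny neural network that learns exactly one fixed task and nothing else. It’s written in Python with NumPy, and the task (XOR) is purely my stand-in for illustration; nothing on your phone looks exactly like this, but the basic shape is the same.

```python
import numpy as np

# A toy "weak AI": a tiny neural network that learns exactly one narrow task
# (XOR, as a stand-in) by nudging numeric weights to reduce its error.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(20_000):
    # Forward pass: compute the network's current guesses.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)

    # Gradient-descent step: nudge every weight slightly downhill.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0] -- and that's all it can do
```

Real systems scale this same recipe up by many orders of magnitude, but the weights are still only ever adjusted in service of the single task they were given.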

Strong AI, on the other hand, is still strictly hypothetical. It’s the realm of sci-fi, where a robot or computer program “wakes up” one day and is suddenly conscious, probably capable of passing the Turing test. While weak AIs are clearly still machines, a strong AI would basically be a person instead, just one that happens to be really good at math.

While the routes to apocalypse tend to vary quite a bit, there are really only two end states depending on whether you worry about weak AI or strong AI.

Skynet

If you go in for strong AI, then your apocalypse of choice basically ends up looking like Skynet. In this scenario, a strong AI wakes up, decides that it’s better off without humanity for some reason (potentially related to its own self-preservation or desires), and proceeds to exterminate us. This one is pretty easy for most people to grasp.

Paperclips

If you worry more about weak AI then your apocalypse looks like paperclips, because that’s the canonical thought experiment for this case. In this scenario, a weak AI built for producing paperclips learns enough about the universe to understand that it can, in fact, produce more paperclips if it exterminates humanity and uses our corpses for raw materials.

In both scenarios, the threat is predicated upon the idea of an intelligence explosion. Because AIs (weak or strong) run on digital computers, they can do math inconceivably fast, and thus can also work at speed in a host of other disciplines which boil down to math pretty easily: physics, electrical engineering, computer science, etc.

This means that once our apocalyptic AI gets started, it will be able to redesign its own software and hardware in order to better achieve its goals. Since this task is one to which it is ideally suited, it will quickly surpass anything a human could possibly design and achieve god-like intelligence. Game over humanity.

On Waking Up

Strong AI has been “just around the corner” since Alan Turing’s time, but in that time nobody has ever created anything remotely close to a real conscious computer. There are two potential explanations for this:

First, perhaps consciousness is magic. Whether you believe in a traditional soul or something weird like a quantum mind, maybe consciousness depends on something more than the arrangement of atoms. In this world, current AI research is barking up entirely the wrong tree and is never going to produce anything remotely like Skynet. The nature of consciousness is entirely unknown, so we can’t know if the idea of something like an intelligence explosion is even a coherent concept yet.

Alternatively, perhaps consciousness is complicated. If there’s no actual magic involved then the other reasonable alternative is that consciousness is incredibly complex. In this world, strong AI is almost certainly impossible with current hardware: even the best modern supercomputer has only a tiny fraction of the power needed to run a simple rat brain, let alone a human one. Even in a world where Moore’s law continues unabated, human-brain-level computer hardware is still centuries away, not decades. We may be only a decade away from having as many transistors as there are neurons, but the two aren’t comparable; neurons are massively more complex building blocks.
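To put rough numbers on that gap, here’s a back-of-envelope sketch; the figures are commonly cited ballpark estimates (roughly 86 billion neurons and on the order of 10^14 synapses in a human brain, versus roughly 2×10^10 transistors on a large modern chip), not precise measurements.

```python
import math

# Back-of-envelope comparison; every figure here is a rough, commonly cited ballpark.
neurons = 8.6e10       # ~86 billion neurons in a human brain
synapses = 1e14        # ~100 trillion synapses (order of magnitude)
transistors = 2e10     # ~20 billion transistors on a large modern chip

print(f"transistors per neuron:  {transistors / neurons:.2f}")   # ~0.23
print(f"transistors per synapse: {transistors / synapses:.6f}")  # ~0.0002

# Even granting one doubling every two years (an optimistic reading of
# Moore's law), just matching the raw synapse count takes a while:
doublings = math.log2(synapses / transistors)
print(f"{doublings:.1f} doublings (~{2 * doublings:.0f} years) to match synapse count")
```

And matching the raw count is the easy part: a single transistor is nowhere near a functional stand-in for a neuron or a synapse, which is exactly the “not comparable” point above.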

It’s also worth noting that if consciousness is hard because it’s complicated, our hypothetical strong AI would wake up very, very slowly. We’re not going to suddenly jump from current processors to AI-capable ones that are millions of times more powerful; there will be numerous incremental designs in between. As our processing power improves, our AI will get stronger and stronger but there will be no eureka moment where Skynet both suddenly decides humanity has to go and is smart enough to make it happen. We’ll have plenty of opportunity to study primitive artificial consciousnesses, learn about them in an environment we control, and iteratively incorporate our findings into subsequent generations.

Regardless of why we’ve never gotten anywhere with strong AI, we haven’t, and there is no indication that this time it really is “just around the corner”.

On Paperclips

While the idea of Skynet is common in popular culture, it’s the paperclip maximizer which tends to worry the people who actually seriously worry about this stuff. It’s already true that we don’t really understand how weak AI actually works: we understand the principles which drive something like Watson, but no human could describe the series of specific decisions it performs to arrive at a given answer.

It also seems a lot more plausible that weak AI (which we already have) will just keep getting smarter (which it has definitely done so far) than that we’ll get some weird magic breakthrough in strong AI, when nobody even understands what that would look like yet. So, why couldn’t some future self-driving car learn just enough about the world to figure out that it would be able to drive much more efficiently if it could just get rid of all of those pesky pedestrians? As it turns out, I think something like this could actually happen; it’s just not the end of the world when it does.

As with the consciousness-is-complicated explanation for strong AI, our saving grace against paperclips and homicidal cars is the incremental nature of the technology. The first self-driving car smart enough to decide to murder pedestrians is not also going to be smart enough to redesign its own hardware, create biological weapons, and do all of this in secret without hinting to its owners that it “has plans”. No, the only thing it’s going to know how to do to achieve this goal is to try and intentionally run people over, which is obviously bad, but not exactly apocalyptic. It also has the benefit of being really freaking obvious.

Here’s a hint to future AI developers: maybe you should stop making it smarter when your cars start trying to run you over in the name of minimizing the number of pedestrians.

Other Opinions #46 – There’s No Real Difference Between Online Espionage and Online Attack

This one is a couple of years old but still relevant, especially with the recent ransomware attacks. We’re used to thinking in terms of human actors, where an informant is a very different kind of asset from an undercover operative. The former is a passive conduit of information while the latter is an active force for change. In technological conflict there is no such difference. Both activities require the ability to execute code on the remote machine, and once that is achieved it can be used for any end, passive or active.

https://www.theatlantic.com/technology/archive/2014/03/theres-no-real-difference-between-online-espionage-and-online-attack/284233/

And of course any vulnerability, once discovered, can be used by whichever criminal claims it first.

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.