Culture, Memetics, and Lamarckian Inheritance

Having covered the brain and the mind, we now take a second sharp turn and head in the other direction, in a manner roughly paralleling our previous discussion of biology. This time, however, we will be discussing culture.

We start with the concept of a meme, analogous to the biological concept of a gene. The precise definition of a meme is rather controversial, but the definition suggested by Wikipedia will do well enough for us. In fact, on skimming that article, I found it makes almost all of the points I wanted to make here. Go read it.

The other topic I wanted to cover here is Lamarckian inheritance. Although it is no longer supported in biological genetics, it is a useful concept to have, since it is in some part the method by which memes are transmitted.

Layering Abstractions: More on Intelligence

As I discussed on Monday, I believe the fundamental underlying characteristic of intelligence is the ability to perceive and discover patterns, but I want to go a bit deeper into the layering of abstractions that results from this. We all use many layers of abstraction every day.

The first, obvious layer of abstraction sits on top of the underlying reality of fundamental particles (electrons and quarks and leptons, etc.). This lets our brain recognize physical materials like water, plastic, hair, wood, metal, etc. These materials are just abstractions; the same underlying particles could, in fact, be arranged into totally different materials, or even be plasma. Arguably this isn’t a mental abstraction so much as one forced on us by the methods we have with which to observe the world, but the net result is the same.

On top of this abstraction of materials, we have an abstraction of objects. Wood formed in a particular pattern isn’t just wood, but also forms a table, or a chair, or a door, or any of a hundred other things. A particular quantity of water in a particular location isn’t just water; it’s a lake, or a river, or an ocean. We could describe what I’m typing on right now as just a complex combination of plastic, metal, silicon, and various other minerals, but typically we just abstract away that detail and call it a computer keyboard.
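
As a rough illustration of this layering (the labels and structure here are entirely invented, not a claim about how the brain stores anything), here is a tiny Python sketch in which the same lower-level “stuff” gets re-described at each level, and everyday reasoning only ever touches the top layer:

```python
# A toy illustration (invented labels, nothing physical) of stacking
# abstraction layers: the same lower-level stuff gets re-described at each
# level, and everyday reasoning only touches the top layer.

# Layer 0: "fundamental particles" -- here just an opaque token.
particles = "quarks+electrons"

# Layer 1: materials are abstractions over particle arrangements.
materials = {
    "wood":  {"made_of": particles, "arrangement": "cellulose fibres"},
    "water": {"made_of": particles, "arrangement": "H2O molecules"},
}

# Layer 2: objects are abstractions over shaped quantities of material.
objects = {
    "table": {"material": "wood",  "shape": "flat top, four legs"},
    "lake":  {"material": "water", "shape": "large, still, in a basin"},
}

# Everyday reasoning happens at the top layer; the lower layers are implied
# but almost never consulted.
print("A table is", objects["table"]["material"], "which is",
      materials[objects["table"]["material"]]["made_of"])
```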

And there are yet more layers of abstraction. There are abstractions we impose explicitly when laying out the rules of a sport or a game, and there are abstractions we impose implicitly when laying out the rules of behaviour in polite society. There are even, perhaps, abstractions we construct of each other. The ideas of “village idiot” or “class nerd” are themselves abstractions of the social roles we play, and can have substantial impacts on how we socialize.

Abstractions are all around us, layer upon layer, whether we acknowledge them or not.

Matching Patterns: The Nature of Intelligence

From the nature of the brain, through the nature of the mind, we now move on to the last of this particular triumvirate: the nature of intelligence.

A good definition of intelligence follows relatively cleanly from my previous two posts. Since the brain is a modelling subsystem of reality, it follows that some brains simply have more information-theoretic power than others. However, I believe that this is not the whole story. Certainly a strictly bigger brain will be able to store more complex abstractions (as a computer with more memory can do bigger computations), but the actual physical size of human brains is not strongly correlated with our individual intelligence (however you measure it).

Instead I posit the following: intelligence, roughly speaking, is related to the brain’s ability to match new patterns and derive new abstractions. This is information-theoretic compression, in a sense. The more abstract and compact the ideas that one is able to reason with, the more powerful the models one is able to use.

The actual root of this ability is almost certainly structural, somewhere in the brain, but the exact mechanics are irrelevant here. It is more important to note that the resulting stronger abstractions are not the cause of raw intelligence so much as an effect: the cause is the ability to take disparate data and factor out all the patterns, reducing it down to as close to raw Shannon entropy as possible.
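
To make the compression framing concrete, here is a small Python sketch (illustrative only; per-byte entropy is a crude proxy, and the strings are invented): data with a discoverable pattern can be reduced far below its raw size, while patternless data is already close to its Shannon limit.

```python
# A minimal sketch illustrating the compression framing: data with a
# discoverable pattern compresses far below its raw size, while patternless
# data is already near its Shannon limit.
import math
import os
import zlib
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    """Empirical Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

patterned = b"abcabcabc" * 1000   # highly regular: the abstraction "repeat 'abc'" captures it
random_ish = os.urandom(9000)     # no pattern to factor out

for name, data in [("patterned", patterned), ("random", random_ish)]:
    compressed = zlib.compress(data, level=9)
    print(f"{name:9s}  entropy={shannon_entropy_bits(data):.2f} bits/byte  "
          f"raw={len(data)}  compressed={len(compressed)}")
```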

The Ghost in the Machine: The Nature of the Mind

Having just covered in summary the nature of the brain, we now turn to the much knottier issue of what constitutes the mind. Specifically I want to turn to the nature of self-awareness and true intelligence. Advances in modern computing have left most people with little doubt that we can simulate behavioural intelligence to within certain limits. But there still seems to be that missing spark that separates even the best computer from an actual human being.

That spark, I believe, boils down to recursive predictive self-modelling. The brain, as seen on Monday, can be viewed as a modelling subsystem of reality. But why should it be limited to modelling other parts of reality? Since from an information-theoretic perspective it must already be dealing in abstractions in order to model as much of reality as it can, there is nothing at all to prevent it from building an abstraction of itself and modelling that as well. Recursively, ad nauseam, until the resolution (in number of bits) of the abstraction no longer permits.

This self-modelling provides, in a very literal way, a sense of self. It also lets us make sense of certain idioms of speech, such as “I surprised myself”. On most theories of the mind, that notion of surprising oneself can only be a figure of speech, but self-modelling can actually make sense of it: your brain’s model of itself made a false prediction; the abstraction broke down.
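
Here is a deliberately silly Python sketch of the idea (the classes, bit budgets, and cake scenario are all invented, not a model of any real brain): each model carries a coarser model of itself, recursing until the bit budget runs out, and “surprising yourself” shows up as the self-model mispredicting what the full system actually does.

```python
# A toy sketch (hypothetical names and numbers) of recursive self-modelling:
# each model keeps a coarser model of itself, recursing until the bit budget
# runs out, and "surprising yourself" is the self-model's prediction
# disagreeing with what the system actually does.

class Model:
    def __init__(self, bits: int):
        self.bits = bits
        # Each layer of self-modelling gets a lower-resolution budget; stop
        # when the abstraction no longer has room to represent anything.
        self.self_model = Model(bits // 2) if bits >= 2 else None

    def act(self, situation: str) -> str:
        # The "real" behaviour: richer models can act on finer distinctions.
        if situation == "offered cake" and self.bits > 8:
            return "decline"          # a subtle consideration the coarse model misses
        return "accept"

    def predict_own_action(self, situation: str) -> str:
        # Ask the coarser self-model what "I" would do.
        if self.self_model is None:
            return "accept"           # too coarse to predict anything else
        return self.self_model.act(situation)

me = Model(bits=16)
situation = "offered cake"
predicted = me.predict_own_action(situation)
actual = me.act(situation)
if predicted != actual:
    print(f"I surprised myself: predicted {predicted!r}, actually did {actual!r}")
```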

The Nature of the Brain

Our little subsection on biology and genetics has covered the core points I wanted to mention, so now we take a sharp left turn and head back to an application of systems theory. Specifically, the next couple of posts will deal with philosophy’s classic mind-body problem. If you haven’t already, I suggest you skim through my systems-theory posts, in particular “Reality as a System”. They really set the stage for what’s coming here.

As suggested in my last systems-theory post, if we view reality as a system then we can draw some interesting information-theoretic conclusions about our brains. Specifically, our brains must be seen as open (i.e. not closed), recursively modelling subsystems of reality.

Simply by being part of reality, the brain must be a subsystem therein. Because it interacts with other parts of reality, it is open, not closed. The claim that it provides a recursive model of (part of) reality is perhaps less obvious, but should still be intuitive on reflection. When we imagine what it would be like to make some decision, what else is our brain doing but simulating that part of reality? Obviously it is not simulating the actual underlying reality (atoms or molecules or whatever), but it is simulating some relevant abstraction of it.
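
As a toy illustration of what that kind of simulation might look like (the options and numbers are made up, and a real brain obviously does nothing this tidy), consider deciding how to get to work by rolling a coarse abstract model forward for each option:

```python
# A minimal illustration (invented states and numbers) of deciding by
# simulating an abstraction of reality rather than the underlying physics:
# roll a coarse model of the world forward for each option and compare
# the imagined outcomes. No molecules anywhere, only relevant abstractions.

world_model = {
    "cycle to work": {"outcome": "arrive early, slightly tired", "utility": 7},
    "take the bus":  {"outcome": "arrive on time, well rested",  "utility": 6},
    "walk":          {"outcome": "arrive late",                  "utility": 2},
}

def imagine(option: str) -> int:
    """Simulate the abstracted consequence of an option and score it."""
    prediction = world_model[option]
    print(f"If I {option}: I predict I {prediction['outcome']}.")
    return prediction["utility"]

best = max(world_model, key=imagine)
print(f"Decision: {best}")
```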

In fact, I will argue later that this is effectively all our brains do: they recursively model an abstraction of reality. But this is obviously a more contentious claim, so I will leave it for another day.

Balancing Altruism (The “Selfish” Gene, continued)

It was originally only supposed to be a single post, and this one makes three. Now I know why Dawkins originally wrote it as a book! This should (hopefully) be my last post on the selfish gene for now; next week we’ll move on to other stuff.

Given my previous points, one might realistically wonder why people aren’t simply altruistic all the time. If altruism leads to better overall genetic survival, why are people (sometimes) selfish?

Like a lot of things, the actual result is a bit of a balancing act. While human beings share a huge portion of genetic material simply by being human, nobody’s genes are exactly the same. As such, there is still some competition between different human genomes for survival.

Especially in developed society, where the human population is large and stable, and the loss of an individual is unlikely to risk the loss of a species, people are more selfish because they can afford to be. Being selfish in that environment increases the probability that your specific genes will survive, but does not realistically decrease the probability that human genes in general will survive.

The genes themselves are not doing these probability calculations of course; it is simply the case that those genes whose expressed behaviour most closely matched the actual probabilities involved were the most likely to survive. It’s all one marvellous self-balancing system of feedback.

The “Selfish” Gene (Again)

My previous post on The Selfish Gene didn’t quite cram in all of the ideas I wanted to touch on. Or, more precisely, I didn’t articulate some of them very well (if at all). So let’s revisit them a bit more explicitly before moving on.

A common complaint against “survival of the fittest” is that on the surface it seems incompatible with altruistic behaviour. If the fittest really do survive, why do people ever make sacrifices for the greater good? Don’t those sacrifices make them less likely to survive, thus weeding out such behaviour over time? Should we not, if “survival of the fittest” were true, be seeing nearly perfectly selfish people, each aiming for their own survival at the expense of everyone else?

This is the primary complaint that Dawkins was answering in his book, and the title (though otherwise a bit misleading) does in some sense encapsulate his answer to those questions. The key point to remember is that the basic unit of survival, the thing on which “survival of the fittest” actually operates, is not the individual animal. The things that are surviving according to their fitness are, in fact, our genes.

Since human beings share a substantial portion of genetic code simply by belonging to the same species, this view makes altruism much more coherent; we may sacrifice a bit of our own personal good, but if the increase in general good means many more people survive to reproduce, this is good for our genes overall (thus why Dawkins calls them “selfish”).

This also explains why we tend to be more altruistic towards close family: they share a larger percentage of our genes than some other random person.
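
This gene’s-eye reasoning is standardly formalized in evolutionary biology as Hamilton’s rule: an altruistic act is favoured when rB > C, where r is the genetic relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor. A quick Python sketch with purely illustrative numbers:

```python
# Hamilton's rule: an altruistic act is favoured when r*B > C, where r is
# genetic relatedness, B the benefit to the recipient, and C the cost to the
# actor. The benefit and cost values below are purely illustrative.

def altruism_favoured(relatedness: float, benefit: float, cost: float) -> bool:
    """Help when the relatedness-weighted benefit exceeds the cost."""
    return relatedness * benefit > cost

cost = 1.0  # cost to the altruist, in units of expected offspring
benefit = 3.0  # the act lets the recipient raise three extra offspring
for recipient, r in [("identical twin", 1.0), ("sibling", 0.5),
                     ("cousin", 0.125), ("stranger", 0.0)]:
    print(f"{recipient:15s} r={r:<5}  favoured: {altruism_favoured(r, benefit, cost)}")
```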

The Selfish Gene

Yes, the title of this post is a direct reference to the book of the same title by Richard Dawkins. Whatever you may think of Dawkins himself, his science has ended up being extremely influential.

The title is, by the author’s own admission, rather misleading. The idea is not to think of genes as agents with purpose or moral capacity (they’re just chemical strings after all). Instead, consider the following scenario:

A woman and her husband stand before the queen. The woman is pregnant, just starting to show. The man is putting on a brave face, as his wife has just killed a man. The punishment is death.

The man steps forward, shaking. “My queen”, he says, “I confess”. His wife lets out a whimper. “I am guilty of this crime, not she”. He pauses as the weight of what he has done sinks in, then continues. “I accept the consequences of my crime”.

It is a natural and obvious connection to draw that, given survival of the fittest and basic genetics, the genes that survive will be ones that make their respective animals survive. But on this understanding, the above scenario makes no sense. Why would the man confess to a crime he did not commit, when it leads to his almost certain death? Does not survival of the fittest imply that such behaviour would be weeded out over time? We could argue that this altruistic behaviour is not representative and in fact will be weeded out, but such behaviour has been recorded again and again throughout history.

Instead, we must notice that while the man will certainly die, his genes will not. In fact, half his genes are at that moment present in his unborn child, who has a full and long life ahead if the man makes this sacrifice. Humanity tends to see this sacrifice as noble and good in some sense, but it is really much simpler than that. The man is not doing what is best for himself; he is doing what is best for his genes.

Diversity, Competition, and Stable Strategies

Having covered a couple of conceptual building-blocks, we can start putting them together and seeing what effects they have.

Through the combination of random variation and inheritance, we know that sometimes children will have new or different genes from those of their parents, but that most of the time they will have very similar genes. Since genes are connected to actual properties of living things, this means that sometimes children will be born with new, different or unusual properties not shared by their parents. Over grand time scales, this leads to diversity, even if the starting population is relatively homogeneous. Some people will end up with blue eyes, some with brown; some people will end up with red hair, some with black hair.

Now note that in general, living beings are in competition with each other for resources (human beings count here too, though the competition is much more subtle in modern society; I will deal with this point more in later posts). Survival of the fittest comes into play here, and we know that genetics has an impact on physical properties. Together, this means (for example) that a giraffe with a gene for extra tallness may be able to eat from taller trees that other giraffes can’t reach, thus surviving and passing on that gene.

Putting those two points together, this leads to an interesting situation. Random variation provides natural diversity, and survival of the fittest trims that diversity so that only the best genetic variants survive. The result tends statistically toward what are called “stable strategies”. After some period of time, a combination of genes naturally occurs which produces properties that make the animals particularly well-suited to their environment. They don’t just survive, they begin to thrive. Their offspring may have random variations on this set of genes, but effectively all major variations end up being worse than the original. As such, the same set of near-optimal genes gets passed down stably, generation after generation.
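
A toy simulation makes the loop concrete (the heights, fitness function, and mutation sizes below are all arbitrary): heights vary a little each generation, individuals closer to some optimal browsing height leave more offspring, and the population climbs toward that optimum and then holds there.

```python
# A toy simulation (invented numbers) of the variation-plus-selection loop:
# heights mutate a little each generation, giraffes nearer an arbitrary
# optimal browsing height leave more offspring, and the population settles
# toward that height and then stays there.
import math
import random

random.seed(0)
OPTIMAL_HEIGHT = 5.0   # arbitrary "best" browsing height for this toy example

def fitness(height: float) -> float:
    # Highest at the optimum, falling off smoothly on either side.
    return math.exp(-((height - OPTIMAL_HEIGHT) ** 2) / (2 * 0.5 ** 2))

# Start the population well short of the optimum.
population = [random.uniform(3.0, 4.0) for _ in range(200)]

for generation in range(101):
    if generation % 25 == 0:
        mean = sum(population) / len(population)
        print(f"generation {generation:3d}: mean height {mean:.2f} m")
    # Parents reproduce in proportion to fitness ("survival of the fittest");
    # offspring inherit the parent's height plus a small random variation.
    parents = random.choices(population, weights=[fitness(h) for h in population], k=200)
    population = [h + random.gauss(0, 0.1) for h in parents]
```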

Lethal Genes and Time Bombs

Now for a few additional definitions on top of the previous concepts.

A lethal gene is a gene that results in the death of its carrier. Genes do exist which cause diseases (Huntington’s disease, for example) rather than harmless changes such as blue eyes. Typically these genes kill their “host” before that host can have children, and so the gene dies with them.

Genes which kill their host but only later in life (again, Huntington’s is a good example) are called “time bomb” genes, because they wait a substantial amount of time before causing problems.
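
The difference matters for inheritance: a gene that kills its carrier before reproduction vanishes along with its carriers, while a time bomb gene can already have been passed on by the time it does its damage. A toy Python comparison, with invented family sizes and inheritance odds:

```python
# A toy comparison (invented probabilities) of the two definitions above: a
# gene that kills its carrier before reproduction disappears with its carrier,
# while a "time bomb" gene that only acts later in life can already have been
# passed on by the time it does its damage.
import random

random.seed(1)

def carriers_after_generations(kills_before_reproduction: bool, generations: int = 10) -> int:
    carriers = 100
    for _ in range(generations):
        if kills_before_reproduction:
            carriers = 0                      # the gene dies with its hosts
        else:
            # Each carrier has two children first (each child has a 50% chance
            # of inheriting the gene), and only dies of the gene afterwards.
            carriers = sum(1 for _ in range(carriers * 2) if random.random() < 0.5)
    return carriers

print("early-acting lethal gene carriers:", carriers_after_generations(True))
print("time-bomb gene carriers:          ", carriers_after_generations(False))
```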