Would the Real Economy Please Stand Up

More-or-less an exegesis of The Manual Economy. Warning: I am not an economist; this is speculative.

I.

What does “the economy” mean to you? What do those words point to in the world? If I say instead “the banana”, that’s a pretty well-defined concrete object. If I say “the marriage”, that’s way less concrete but still has pretty well-defined boundaries in a lot of ways. But when I say “the economy”… well. Isn’t everything kinda part of the economy? It starts to involve a lot of hand-waving and ambiguity.

Just as the idea of “marriage” is made up of roughly three distinct pieces in reality (a legal piece, a social piece, and a now-optional religious piece), I like to think of the “economy” as really made up of a few fairly distinct pieces. Like two kids in a trench-coat, every economy is really two economies trying to sneak into a movie theatre together: a concrete economy made of goods and services, and a virtual economic coordination mechanism. These two economies usually stay approximately isomorphic to each other; when I buy a banana there are two parallel exchanges, one in money and one in bananas. Think of it like double-entry bookkeeping.
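The double-entry analogy can be made literal with a toy sketch (my own illustration, not a real accounting system): buying the banana posts one entry in the money ledger and a mirrored entry in the goods ledger, one leg per economy.

```python
# Toy double-entry view of buying a banana. The money ledger is the
# coordination economy; the banana ledger is the concrete economy.
money = {"me": 10.00, "store": 0.00}
bananas = {"me": 0, "store": 1}

def buy_banana(price):
    # Money leg: the coordination mechanism moves.
    money["me"] -= price
    money["store"] += price
    # Goods leg: the concrete economy moves in parallel.
    bananas["store"] -= 1
    bananas["me"] += 1

buy_banana(0.50)
print(money)    # the money moved one way...
print(bananas)  # ...and the banana moved the other
```

When the two ledgers stop mirroring each other, that is exactly the divergence the later sections are about.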

To draw out this distinction, imagine two alternative worlds. Imagine first the world vaguely hinted at in The Manual Economy, the world where money and all other coordination mechanisms have completely disappeared, but life carries on completely normally. In this world people still work, and receive goods. There are still “rich” and “poor” lifestyles, and poverty traps, and all the rest. It’s just that there’s no terrestrial coordination mechanism anymore; the economy stays coordinated because some twisted god wills it to be so. In this world you don’t go buy a banana. You go take a banana from the store without giving anything in return. The twisted god controlling us all merely wills it so that poor people don’t go take all the bananas.

For our second imaginary world, take the mirror image of the first. In this world money, and banks, and mortgages and all the rest still exist. People get paycheques, and try to make their rent each month, and get punished when they’re late paying their bills. But also it’s basically The Matrix; everybody eats and sleeps and lives in a permanent tube with no actual goods or services being exchanged. In our first imaginary world, we had a concrete economy running somehow with no coordination. In our second, we have a coordination mechanism running somehow with no concrete economy to coordinate.

Neither of these imaginary worlds is intended to be a realistic or “stable” reality in any sense. They’re designed to provide a sense of what these two economies look like when completely isolated from each other. In reality, they depend on each other to stay standing, just as both kids depend on each other to successfully purchase that movie ticket.

II.

I said before that these two economies, the concrete and the coordination, are generally isomorphic. But they aren’t always isomorphic to each other, and unsurprisingly, all the interesting stuff (bubbles, crashes, government stimulus, most white-collar crime, etc.) happens in the gaps where they diverge.

Let’s start with the dual-economy view of an economic crash. In fact, like “the economy” itself, the dual-economy view implies that there are actually two main types of economic crash, depending on which piece of the economy has actually crashed. The easy one to talk about is a concrete economic crash, which can happen when there’s a shock to actual production somewhere. Say that a new disease destroys all of the food crops in North America in the span of a few weeks; this is a real economic crash because all of a sudden the real economic capacity of the continent has drastically shrunk. The problem isn’t one of coordination; even if you coordinate everything perfectly, there just isn’t enough food.

This sounds bad, but the other kind of crash is in some ways worse. When the coordination mechanism crashes instead, there’s generally still enough real economic capacity for everybody to get what they want, it’s just wildly misallocated. This is what happens when for example speculation drives up the price of a particular good until the bubble finally bursts. That can happen even though production and consumption of the good in question stayed perfectly even throughout the entire run. There were no shifts in real supply or demand to justify the bubble. The bubble wasn’t created by a shortage, and didn’t burst because of a surplus in the market. The coordination mechanism simply failed. This kind of crash is what we’ve seen the most of recently: entire neighbourhoods evicted from their subprime mortgages while the homeless crowd the streets; people going hungry while food rots in a warehouse a few blocks away.

The virtual economy can crash because of a bug in the “software” of the coordination mechanism, without affecting the fundamentals of there being enough real capacity to produce all the food, shelter, goods, services, etc. that people actually want. Conversely, the concrete economy can crash (say because of a global pandemic that forces most real economic activity to grind to a halt) but as long as the software is resilient the coordinating mechanism will keep chugging along, coordinating whatever economic activity is left. It’s arguable whether this is a bug or a feature.

III.

Another interesting place where this dual-economy view comes in handy is in making sense of why national debts are so weird. As far as I can tell, the consensus among professional economists is something along the lines of “you don’t want it to get too high relative to GDP, but an increasing absolute national debt isn’t a problem the way it would be for an individual”. This has always struck me as counter-intuitive, and I think a dual-economy model makes it a little clearer.

Debt is an abstract concept, which puts it firmly into the coordination side of the economy. It’s a promise of real economic activity in the future, which allows economic coordination across time. Importantly, there is no real economic “lack” behind debt; you can’t have a physically negative number of bananas. Debt is always just a number on a balance sheet. Just like personal debt can be a useful tool in the form of a credit card – assuming you hold up “both sides” of the temporal bargain by paying it off promptly in the future – so too can national debt. But whereas a personal debt is tied to your personal ability to pay it off, which is finite and limited, national debt is tied to the theoretically-infinite national future.

Now obviously we don’t expect any country to last forever. There’s the heat death of the universe to worry about after all. But the timescales are so wildly different that it’s hard to reason about. It seems plausibly reasonable to sell a twenty-year mortgage to a thirty-year-old human, even if they’re already in debt, but rather less reasonable to sell it to somebody already pushing one hundred. The earnings potential and lifespan just don’t seem to be there to support the future end of the bargain. But it’s entirely reasonable to believe that major economic powers might continue to exist in some form for hundreds more years, and even continue to grow economically through that time. Today’s national debt is an attempt to coordinate real economic activity across time with our much larger national economic future.

Interestingly, a similar kind of analysis can be applied to fractional-reserve banking (with deposits viewed as individual loans to the bank).
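To sketch that analysis (a toy model of my own, assuming a fixed reserve ratio and full re-lending of every deposit, not a claim about how real banks operate): repeated re-lending converges to the classic geometric-series money multiplier, another case of the coordination economy holding a much larger number than any concrete pile of cash.

```python
def total_deposits(initial_deposit, reserve_ratio, rounds):
    """Sum the deposits created as each deposit is re-lent minus reserves."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the bank keeps the reserve, lends the rest
    return total

# With a 10% reserve ratio, $100 of base money supports
# total deposits approaching 100 / 0.10 = $1000.
print(total_deposits(100, 0.10, 1000))
```

Each deposit here is playing the same role as the national debt above: a promise of future repayment standing in for concrete goods that don’t currently exist anywhere.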

IV.

I am not very confident in the above analysis, but even if it’s mostly garbage I think that treating the virtual and the concrete economies as separate entities has helped me. “The economy” has always seemed like a big ball of spaghetti with a bunch of arbitrary rules, and I feel like this is a good step towards drawing out a gears-level understanding, even if a lot of the details are wrong.

Hopefully it helps you, too.

Abstractions on Inconsistent Data

[I’m not sure this makes any sense – it is mostly babble, as an attempt to express something that doesn’t want to be expressed. The ideas here may themselves be an abstraction on inconsistent data. Posting anyway because that’s what this blog is for.]

i. Abterpretations

Abstractions are (or at least are very closely related to) patterns, compression, and Shannon entropy. We take something that isn’t entirely random, and we use that predictability (lack of randomness) to find a smaller representation which we can reason about, and predict. Abstractions frequently lose information – the map does not capture every detail of the territory – but are still generally useful. There is a sense in which some things cannot be abstracted without loss – purely random data cannot be compressed by definition. There is another sense in which everything can be abstracted without loss, since even purely random data can be represented as the bit-string of itself. Pure randomness is in this sense somehow analogous to primeness – there is only one satisfactory function, and it is the identity.
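As a small concrete illustration of the compression point (my example, using Python’s standard zlib; the particular byte strings are arbitrary): predictable data shrinks dramatically, while pseudorandom data barely shrinks at all.

```python
import random
import zlib

# Patterned data: highly predictable, so it compresses well.
patterned = b"banana" * 200  # 1200 bytes of pure repetition

# "Random" data: deterministic PRNG output, which looks random to zlib.
rng = random.Random(0)
noise = bytes(rng.getrandbits(8) for _ in range(1200))

compressed_patterned = zlib.compress(patterned)
compressed_noise = zlib.compress(noise)

print(len(patterned), len(compressed_patterned))  # the pattern shrinks a lot
print(len(noise), len(compressed_noise))          # the noise barely shrinks, if at all
```

Note that the compression is lossless here (`zlib.decompress` recovers every byte); the lossy abstractions discussed above trade even more size for even less fidelity.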

A separate idea, heading in the same direction: Data cannot, in itself, be inconsistent – it can only be inconsistent with (or within) a given interpretation. Data alone is a string of bits with no interpretation whatsoever. The bitstring 01000001 is commonly interpreted both as the number 65, and as the character ‘A’, but that interpretation is not inherent to the bits; I could just as easily interpret it as the number 190, or as anything else. Sense data that I interpret as “my total life so far, and then an apple falling upwards”, is inconsistent with the laws of gravity. But the apple falling up is not inconsistent with my total life so far – it’s only inconsistent with gravity, as my interpretation of that data.
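A quick sketch of the interpretation point (incidentally, 190 happens to be what you get by flipping every bit of 01000001, one arbitrary alternative reading among infinitely many):

```python
bits = "01000001"

value = int(bits, 2)  # interpret the bits as an unsigned integer
print(value)          # 65
print(chr(value))     # 'A' -- the same bits under an ASCII interpretation
print(value ^ 0xFF)   # 190 -- the same bits read through a bit-flipping interpretation
```

None of these readings is inherent to the bitstring; each is a function applied to it from outside.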

There is a sense in which some data cannot be consistently interpreted – purely random data cannot be consistently mapped onto anything useful. There is another sense in which everything can be consistently interpreted, since even purely random data can be consistently mapped onto itself: the territory is the territory. Primeness as an analogue, again.

Abstraction and interpretation are both functions, mapping data onto other data. There is a sense in which they are the same function. There is another sense in which they are inverses. Both senses are true.

ii. Errplanations

Assuming no errors, then one piece of inconsistent data is enough to invalidate an entire interpretation. In practice, errors abound. We don’t throw out all of physics every time a grad student does too much LSD.

Sometimes locating the error is easy. The apple falling up is a hallucination, because you did LSD.

Sometimes locating the error is harder. I feel repulsion at the naive utilitarian idea of killing one healthy patient to save five. Is that an error in my feelings, and I should bite the bullet? Is that a true inconsistency, and I should throw out utilitarianism? Or is that an error in the framing of the question, and No True Utilitarian endorses that action?

Locating the error is meaningless without explaining the error. You hallucinated the apple because LSD does things to your brain. Your model of the world now includes the error. The error is predictable.

Locating the error without explaining it is attributing the error to phlogiston, or epicycles. There may be an error in my feelings about the transplant case, but it is not yet predictable. I cannot distinguish between a missing errplanation and a true inconsistency.

iii. Intuitions

If ethical frameworks are abterpretations of our moral intuitions, then there is a sense in which no ethical framework can be generally true – our moral intuitions do not always satisfy the axioms of preference, and cannot be consistently interpreted.

There is another sense in which there is a generally true ethical framework for any possible set of moral intuitions: there is always one satisfactory function, and it is the identity.

Primeness as an analogue.

Other Opinions #54 – The Problem With ‘Privilege’

http://www.realcleareducation.com/articles/2017/08/28/the_problem_with_privilege_110195.html

Disclaimer: I don’t necessarily agree with or endorse everything that I link to. I link to things that are interesting and/or thought-provoking. Caveat lector.

I agree with more of this one than I thought I might, given the title. Worth reading for liberals and conservatives alike.

Power is a very messy abstraction.

I’m Back, I Swear

My previous post started with

Whoops, it’s been over a month since I finished my last post

and ended with

Hopefully the next update comes sooner!

Well that’s depressing. At least I managed to keep the gap down to under a year. Barely.

As it turns out, indulging in outrageous philosophical hand-waving has not proven a particularly motivating way to write. So let’s mostly ignore my “brief detour” on constructing the mind, and go back to the original question, which basically boiled down to answering the Cartesian challenge to Hume. Frankly, I don’t have an answer. Self-awareness is one of those things that I just don’t even know where to start with. So I’m going to ignore it (for now) and sketch out the rest of my solution in broad strokes anyways.

First a refresher: I’m still pretty sure the brain is an open, recursively modelling subsystem of reality. It does this by dealing in patterns and abstractions. If we ignore self-awareness, then a fairly solipsistic view presents itself: the concept of a person (in particular other people) is just a really handy abstraction we use to refer to a particular pattern that shows up in the world around us: biological matter arranged in the shape of a hominid with complex-to-the-point-of-unpredictable energy inputs and outputs.

Of course what exactly constitutes a person is subject to constant social negotiation (see, recently, the abortion debate). And identity is the same way. Social theorists (in particular feminists) have recognized for a while that gender is in effect a social construct. And while some broad strokes of identity may be genetically determined, it’s pretty obvious that a lot of the details are also social constructs. I call you by a certain name because that’s the name everybody calls you, not because it’s some intrinsic property of the abstraction I think of as you.

Taking this back to personhood and identity, the concept of self and self-identity falls neatly out of analogy with what we’ve just discussed. The body in which my brain is located has all the same properties that abstract as person in the 3rd-party. This body must be a person too, and must by analogy also have an identity. That is me.

Throw in proprioception and other sensory input, and somehow that gives you self-awareness. Don’t ask me how.



My original post actually started with reference to Parfit and his teleportation cases, so for completeness’s sake I’ll spell out those answers here as well: as with previous problems of abstraction, there is never any debate about what happens to the underlying reality in all those weird cases. The only debate is over what we call the resulting abstractions, and that is both arbitrary and subject to social negotiation.

Until next time!

edit: I realized after posting that the bit about Parfit at the end didn’t really spell out as much as I wanted to. To be perfectly blunt: identity is a socially negotiated abstraction. In the case that a teleporter mistakenly duplicates you, which one of the resulting people is really you will end up determined by which one people treat as you. There’s still no debate about the underlying atoms.

Abstract Identity through Social Interaction

Identity is a complicated subject, made more confusing by the numerous different meanings in numerous different fields where we use the term. In mathematics, the term identity already takes on several different uses, but fortunately those uses are already rigorously defined and relatively uncontroversial. In the social sciences (including psychology, etc.) identity is something entirely different, and the subject of ongoing debate and research. In philosophy, identity refers to yet a third concept. While all of these meanings bear some relation to one another, it’s not at all obvious that they’re actually identical, so the whole thing is a bit of a mess. (See what I did there with the word “identical”? Common usage is a whole other barrel of monkeys, as it usually is.) Fortunately, the Stanford Encyclopedia has an excellent and thorough overview of the subject. I strongly suggest you go read at least the introduction before continuing.

Initially I will limit myself to questions of personal identity, paying particular attention to that concept applied over time, and to the interesting cloning and teleportation cases raised by Derek Parfit. If you’ve read and understood my previous posts, you will likely be able to predict my approach to this problem: it involves applying my theories of abstraction and social negotiation. In this case the end result is very close to that of David Hume, and my primary contribution is to provide a coherent and intuitive way of arriving at what is an apparently absurd conclusion.

The first and most important question is what, exactly, is personal identity? If we can answer this question in a thorough and satisfying way, then the vast majority of the related questions should be answerable relatively trivially. Hume argued that there is basically no such thing — we are just a bundle of sensations from one moment to the next, without any real existing thing to call the self. This view has been widely dismissed (as much as anything written by Hume can be, at any rate) as generally counter-intuitive. There seems quite obviously to be some thing that I can refer to as myself; the fact that nobody can agree whether that thing is my mind, my soul, my body, or some other thing is irrelevant; there’s clearly something.

Fortunately, viewing the world through the lens of abstractions provides a simple way around this confusion. As with basically everything else, the self is an abstraction on top of the lower-level things that make up reality. This is still, unfortunately, relatively counter-intuitive. At the very least it has to be able to answer the challenge of Descartes’ Cogito ergo sum (roughly “I think therefore I am”). If the self is purely an abstraction, then what is doing the thinking about the abstraction? It does not seem reasonable that an abstraction is itself capable of thought — after all, an abstraction is just a mental construct to help us reason, it doesn’t actually exist in the necessary way to be capable of thought.



I wrote the above prelude about three weeks ago, then sat down to work through my solution again and got bogged down in numerous complexities and details (my initial response to the Cartesian challenge was a bit of a cheat, and it took me a while to recognize that). I think I finally have a coherent solution, but it’s no longer as simple as I’d like and is still frankly a bit half-baked, even for me. I ended up drawing a lot on artificial intelligence as an analogy.

So, uh, *cough*, that leaves us in a bit of an interesting situation with respect to this blog, since it’s the first time I get to depart from my “planned” topics which I’d already more-or-less worked out in advance, and start throwing about wild ideas to see what sticks. This topic is already long, so it’s definitely going to be split across multiple posts. For now, I’ll leave you with an explicit statement of my conclusion, which hasn’t changed much: living beings, like all other macroscopic objects, are abstractions. This includes oneself. The experiential property (that sense of being there “watching” things happen) is an emergent property due to the complex reflexive interactions of various conscious and subconscious components of the brain. Identity (as much as it is distinct from consciousness proper) is something we apply to others first via social negotiation and then develop for ourselves via analogy with the identities we have for others.

I realize that’s kinda messy, but this exploratory guesswork is the best part of philosophy. Onwards!

Head in the Clouds: The Problem of Many

Now that we have seen the problem of Material Constitution and how it is, in effect, a problem of abstraction, we shall turn to the Problem of Many. Coincidentally, the problem of many is another one with which Peter Unger is closely involved — you may recall that I borrowed parts of his eliminativist solution to the problem of material constitution in my previous post.

In the problem of many we are asked to consider a cloud. From the ground, a cloud may appear to have clean, sharply delineated borders, but of course this is an illusion. When we look more closely, we realize that the cloud is made of many water droplets, and that what appeared at first to be a sharp border is in fact a fuzzy “trailing off” as the water droplets become gradually less dense.

The question then becomes, what is a cloud? If we simply define it as a collection of water droplets in the air then we have two problems. First, our definition is clearly too broad as effectively all air in our atmosphere contains some moisture. Second, and more troubling, is that we also seem to have an enormous number of clouds where there appears to be only one. After all, if I take only the left half of our cloud, that is itself a collection of water droplets, and thus a cloud in its own right. But this trick can be used to create a “cloud” for every possible subset of water droplets, which defies our understanding of clouds. Should we say that the main cloud is composed of millions of overlapping little clouds of every possible shape and size? That seems absurd.

As with material constitution, Unger’s solution is one from which I draw inspiration but do eventually deviate. Unger’s move was to claim that for certain conceptual reasons involving inconsistent definitions, there are no clouds. This, as the Stanford Encyclopedia article notes, is counter-intuitive, and I find Unger’s reasons for his conclusion rather confusing. At this point, my approach in the next few paragraphs should be obvious if you’ve read my previous posts (especially the most recent one on material constitution).

Once again, we are faced with a disagreement not about the nature of the underlying reality (which, for our purposes, can be talked about in terms of water droplets) but about how to define, delineate, conceptualize and refer to that reality. Unger is, in a sense, correct: there are no clouds in the fundamental underlying reality (just as there were no ships in our discussion of the Ship of Theseus). However, when I point in the sky and say “Look, a cloud” I am not talking nonsense; I am referring to the shared abstraction of a cloud, constructed via sociolinguistic negotiation.

In this view, there is one cloud and not millions simply because we all agree that there is only one cloud. If there were two cloud-shaped groups of water droplets very close to one another, it is entirely possible that one person would call them “two clouds” and another person would call them “one cloud with a very thin spot”. It would be wrong to assume that in this situation one of those people would be wrong and one would be right; wrong and right in that sense are a property only of facts about reality, not about our abstractions on top.

As long as you are consistent in your abstractions, you can have as many clouds as you want.

Material Constitution: A Problem of Abstraction

The first philosophical problem we will tackle is known as the problem of material constitution. The online Stanford Encyclopedia of Philosophy contains a wealth of information on all sorts of interesting philosophical problems, so expect to see it linked a lot in this section. Its article on material constitution is well worth reading: http://plato.stanford.edu/entries/material-constitution/

The problem of material constitution can be demonstrated in a few different ways: one of my favourites is the story of the Ship of Theseus. In this story, the famous wooden Ship of Theseus is preserved in a museum. Over time, its boards wear out and are replaced until eventually not a single original board remains in the ship. Is it still the Ship of Theseus?

Taking the problem a step further, suppose someone has been collecting the worn-out boards, and when all the boards have been replaced, they construct a (substantially worn-down) ship out of the original boards. We now have two complete ships, both of which have some claim to the original identity of the Ship of Theseus. Which is the real one?

Skimming the Stanford entry, there are two popular solutions which seem to somehow fit within the framework I’ve laid out so far: Unger’s Eliminativism (section 4), and Carnap’s Deflationism (section 7). In effect, my solution is a synthesis of these two approaches.

Recall for a moment my third axiom: “There is some underlying consistent reality that is made up of things”. Also note the scientific reality of molecules and atoms etc. My fundamental claim follows naturally from these and is quite similar to Unger’s: properly speaking there is no such thing as a ship, there is only some collection of things in the underlying reality (the correct word is “atoms”, but it’s already been used for something else by physics!) that are arranged in a pattern we think of as a ship. To quote the Stanford article, “the most common reaction to this claim is an incredulous stare”, and here is where I am able to draw on Carnap’s Deflationism and my own work to go beyond Unger and provide a coherent answer.

When you say “but of course there are such things as ships” and I say “there is no such thing as a ship”, strictly speaking we are both right — we are using different meanings of the word “is”. The problem, as Carnap would argue, is only linguistic.

For this to make sense, recall way back to my post on Truth and Knowledge. We have now covered enough to realize that what I originally referred to as “Relative Truth” is nothing more than the set of abstractions we work with in our day-to-day life. Using these tools, we can see that when you say “but of course there are such things as ships” you are referring to the abstraction of a ship, the relative truth of the fact. The “ship” abstraction is one we both presumably share, so I am happy to grant your claim. However, when I say “there is no such thing as a ship” I am referring to the fundamental being of a ship, the absolute truth of the fact. Since ships are made up of molecules and atoms, there is no such thing that is, in itself, the ship, and so my claim is also correct.

To conclude, both Unger and Carnap were right, as far as they went; they each simply missed half the picture. In the Ship of Theseus, there is no conflict about what happens to the underlying reality (whether it consists of particles or something more exotic). The only question lies in what we call the resulting abstractions, and this is an issue because here there is no absolute truth to refer to; they are only abstractions, and abstractions are perpetually subject to the process of social negotiation.

Secret Goals

First off, apologies for the long absence; life has a habit of getting in the way of philosophy. Back to decision-making and game theory.

Now, obviously whenever you make a decision you must have certain goals in mind, and you are trying to make a decision to best fit those goals. If you’re looking at a menu, your goals may include picking something tasty, but not too expensive, and so on. You can have multiple goals, and they can sometimes conflict, in which case you have to compromise or prioritize. This is all pretty basic stuff.

But what people tend not to realize (or at least, not to think about too much) is that frequently our “goals” are not, in themselves, things we value; we value them because they let us achieve bigger, better goals. And those goals may be in the service of even higher goals. What this means is that all of these intermediate layers of “goals” are really just means that we use so frequently we have abstracted them into something that we can think of as inherently valuable. This saves us the mental work of traversing all the way back to the root wellspring of value each time we want to pick food off a menu. The result is these layers of abstract “goals”. Yet another set of layers of abstractions!

So what are these root goals we tend not to think about? Are they so-called “life goals” such as raising a family or eventually running your own company? No. Those are still just intermediate abstractions. The real goals are still one more step away, and are almost universally biological in nature. The survival and reproduction of our genetic code, whether through ourselves, our offspring, or our relations. These are our “secret goals”.

So how does this help us understand decision-making? It seems intuitively impossible to understand somebody’s decisions if we don’t understand the goal of that decision. But when we think exclusively in terms of our shorter-term, abstract “goals”, these are things that change, that we can abandon or reshape to suit our current situation. Thinking of these instead as methods of satisfying our underlying goals (which do not change) provides a much more consistent picture of human decision-making. This consistent picture is one to which we might even be able to apply game theory.

Layering Abstractions: More on Intelligence

As I discussed on Monday, I believe the fundamental underlying characteristic of intelligence is the ability to perceive and discover patterns, but I want to go a bit deeper into the layering of abstractions that results from this. We all use many layers of abstractions every day.

The first, obvious layer of abstractions is on top of the underlying reality of fundamental particles (electrons and quarks and leptons etc). This lets our brain recognize physical materials like water, plastic, hair, wood, metal, etc. These materials are just abstractions; the same underlying particles could, in fact, be arranged into totally different materials, or even exist as plasma. Arguably this isn’t a mental abstraction so much as one forced on us by the methods we have with which to observe the world, but the net result is the same.

On top of this abstraction of materials, we have an abstraction of objects. Wood formed in a particular pattern isn’t just wood, but also forms a table, or a chair, or a door, or any of another hundred things. A particular quantity of water in a particular location isn’t just water; it’s a lake, or a river, or an ocean. We could describe what I’m typing on right now as just a complex combination of plastic, silicon, metal, and various other minerals, but typically we just abstract away that detail and call it a computer keyboard.

And there are yet even more layers of abstractions. There are abstractions we impose explicitly when laying out the rules of a sport or a game, and there are abstractions we impose implicitly when laying out the rules of behaviour in polite society. There are even, perhaps, abstractions we construct of each other. The ideas of “village idiot”, or “class nerd” are themselves abstractions on the social roles we play, and can have substantial impacts on how we socialize.

Abstractions are all around us, layer upon layer, whether we acknowledge them or not.

Recursive Abstractions and Approximate Models

Given my previous definition of system simulation (aka modelling) it seems intuitive that a finite system cannot model itself except insofar as it is itself. Even more obviously, no proper subsystem of a system could simulate its “parent”. A proper subsystem by definition has a smaller size than the enclosing system, but needs to be at least as big in order to model it.

(An infinite subsystem of an infinite system is not a case I care to think too hard about, though in theory it could violate this rule? Although some infinities are bigger than others, so… ask a set theorist.)

However, an abstraction of a system can be substantially smaller (i.e. require fewer bits of information) than the underlying system. This means that a system can have subsystems which recursively model abstractions of their parents. Going back to our game-of-life/glider example, this means that you could have a section of a game of life which computationally modelled the behaviour of gliders in that very same system. The model cannot be perfect (that would require the subsystem to be at least as large as its “parent”), so the abstraction must of necessity be incomplete, but as we saw in that example being incomplete doesn’t make it useless.
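The glider point can be checked directly. Below is a minimal sketch of my own (a standard Game of Life step function, not code from the earlier post) showing the abstraction at work: instead of simulating every cell, the “glider” model just predicts that the whole pattern translates by (+1, +1) every four generations.

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; cells are (x, y) tuples on an infinite grid."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 neighbours,
    # or with 2 neighbours if it was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The standard glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# The "glider" abstraction: rather than tracking every cell, predict that
# the whole pattern simply translates by (+1, +1) every four generations.
predicted = {(x + 1, y + 1) for x, y in glider}
print(state == predicted)  # True
```

The abstraction is tiny (a position and a period) compared to the full cell-by-cell simulation, which is exactly why a subsystem can afford to run it. It breaks down, of course, the moment the glider collides with something: incomplete, but not useless.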