Is this a one-off post? Or is this blog back after almost 4 years away? I'm not sure, especially since I accidentally deleted my Twitter account, and now expect to have about 2 readers (both of whom are friends and neither of whom will read to the end). In any case, I am feeling an urge to write something occasionally. Maybe on more random subjects than just economics though.
The last four years...
Looking back at the past 4 years, I wrote then about the parallels with the 1930s, and sadly the rising inequality (caused by the widespread acceptance of neoliberal economic models over the last 40 years) has continued to contribute to a divided society and a further global slide in the direction of fascism. The macroeconomic model I developed is still holding true, with wages squeezed even lower and private sector debt and asset prices pushed higher to compensate. One positive development is that the ideas I was writing about in the blog are becoming more mainstream. Most people interested in politics have heard of MMT these days, if only to know for a fact that it will lead to inflation and the collapse of society as we know it. But it is slow progress all the same, and a huge amount of credit should go to the people who have been making this argument for many years against an extremely hostile economic establishment.
As my blog discussed, the status quo since the 1980s, widely considered a 'neutral' economic system, has caused a huge build-up of debt, the financialisation of the economy and unnecessary inequality, particularly between young and old (those without assets vs those with assets). Further, this inequality of income has reduced demand, destroyed productivity and led to a spiral of further debt and speculation. And every time asset prices (a proxy for wealth) go down, the government pumps in almost unlimited support to make those with assets richer relative to those without. Meanwhile, those without assets are seeing real wages fall and far greater work insecurity. They are rarely supported by this 'neutral' system in the way that asset holders are. The longer this goes on, the more the imbalance builds up and the further we get from a fully functioning, productive economy that works for the majority. I don't know how this ends. But a good ending would definitely involve the government running larger deficits that generally send more money to people without wealth (and with a high marginal propensity to consume).
The other obvious failure of the status quo has been in looking to the long term. This has not improved either. There is a quote that has really resonated with me (I can't find it, but to paraphrase): "Everyone says capitalism is the best economic system, but we had 200,000 years before it and only 200 years of capitalism. And by 300 years of capitalism we may have wiped out the human race." The economics profession has recognised the concept of 'externalities' (paying for the damage you cause to others), but the concept has not been implemented in our system. Our demand for cheap goods and the financial influence on politics have stopped any proper regulation. The triumphant declaration of capitalism's success is totally misguided until it makes those who destroy the environment pay for the cost of that destruction.
Back to the point...
Anyway, this subject is a diversion from economics. I am, by profession, a creator of algorithms, and I have long been fascinated by the human brain as an algorithm. Specifically: what is the basis of consciousness?
In this post, I am looking to answer three questions:
- What is it about an algorithm that can give consciousness?
- If our brains are simply algorithms, where does that leave free will? If all our behaviour is determined by neurons firing independently of us, then what part do we play?
- If an algorithm could gain consciousness then is it a danger to us, as Elon Musk (among others) warns?
I have read a lot on this subject and grappled with the differences between our brains and computers, and I feel that I have found an answer - certainly one that satisfies my model of the world. Unfortunately this model didn't, as I'd long hoped, require the existence of a human soul.
The brain as an algorithm
The brain is still, to a large extent, a mystery to humans. However, over the last few years a lot of progress has been made. We know that the brain works through electrical impulses sent through 100 trillion connections linking 86 billion neurons. The brain links to the central nervous system, which can act as a mini-brain in itself, and we are also finding out more and more about its interaction with the gut. We will call this complete system the brain.
If we go down to a low enough level, all of our thoughts, dreams and experiences can be coded as 1s and 0s in the brain. This is the same as a computer, and it has led people to speculate on whether a computer could ever gain consciousness.
What I mean by consciousness is an awareness of one's own existence. This is not to be confused with passing a 'Turing Test', which shows only that other people believe one has consciousness. There is little doubt that computers will eventually be able to learn and copy all of human behaviour, as viewed externally. There is no limit to how much a computer can observe of real life, learn our reactions, and imitate them. It may well be impossible to tell the difference between a human's thought and a computer's thought. A computer will be able to show every sign of being in love with you, but the question is: could it actually be in love with you?
As things stand, we can be reasonably sure that our E-readers are not aware of their existence in the way that we are aware of ours. So, what are the observable differences between a brain and a computer that could account for consciousness?
One major difference, which accounts for the different cognitive capabilities of humans and machines, is the number of connections and the plasticity of those connections. Computers work in a linear way and are excellent for well-defined calculations - much faster and more accurate than humans. Their connections and algorithmic calculations are coded in a fixed way, meaning that they give the same result every time.
Human thinking, while algorithmic, is a lot more abstract and flexible. This is due to the 100 trillion connections between cells, meaning 100 trillion different pathways for information linking all the different parts of the brain. Further, these connections are constantly changing - strengthened and weakened by activity or inactivity. An algorithm defined on the human brain is not fixed forever, hence we have less accuracy of calculation, but at the same time a lot more flexibility of thought than a computer has.
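To make the contrast concrete, here is a toy sketch in Python (the names and numbers are invented, and this is in no way a model of real neurons): a conventional routine returns the same answer every time, while a 'plastic' connection changes its own strength depending on how much it is used.

```python
# Toy illustration (not a brain model): a fixed function always gives the
# same output, whereas a "plastic" connection changes its own strength
# depending on how often it is used.

def fixed_add(a, b):
    # A conventional computer routine: same inputs, same result, every time.
    return a + b

class PlasticConnection:
    """A single connection whose weight is strengthened by activity
    and slowly weakened by inactivity (a crude Hebbian-style rule)."""

    def __init__(self, weight=0.5, learn_rate=0.1, decay=0.01):
        self.weight = weight
        self.learn_rate = learn_rate
        self.decay = decay

    def fire(self, signal):
        # Using the connection strengthens it...
        output = signal * self.weight
        self.weight += self.learn_rate * signal
        return output

    def rest(self):
        # ...while inactivity weakens it, so the 'algorithm' drifts over time.
        self.weight = max(0.0, self.weight - self.decay)

conn = PlasticConnection()
print(fixed_add(2, 3))   # always 5
print(conn.fire(1.0))    # 0.5 the first time
print(conn.fire(1.0))    # a little stronger the second time
```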
Computers are excellent for solving well defined problems, but humans are far better at undefined problems. But does this explain consciousness? Not really.
What is needed for consciousness?
Consciousness, as far as I can tell, does require some level of complexity that comes from many possible connections, as well as possibly the plasticity of those connections. Consciousness is inherently a very flexible form of thought.
But is the level of complexity itself beyond that of a computer? Consciousness does not mean that you have to have the full thought processes of the human brain. A piece of light-sensing equipment could, in theory, be conscious of its existence: whenever it senses no light, it decides to switch on the patio lights. We may not be able to prove that it is conscious, but from its own viewpoint it is aware of itself. So what stops us from creating this very basic level of consciousness? I would be surprised if we don't have the computing power available for this level of complexity.
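To underline just how simple the algorithm in question is, here is a toy sketch of the patio-light device's entire decision process (the function names and the threshold are invented for illustration). The point is that nothing in these few lines looks like a place where awareness of being a light switch could live.

```python
def read_light_level():
    # Stand-in for a real photosensor reading (0.0 = pitch dark, 1.0 = full daylight).
    return 0.05

def decide_patio_lights(light_level, threshold=0.2):
    # The entire 'mind' of the device: sense, compare, act.
    return "switch lights on" if light_level < threshold else "switch lights off"

print(decide_patio_lights(read_light_level()))  # -> "switch lights on"
```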
Partly, one could argue that it is our lack of understanding about what creates consciousness. If we knew what it was then maybe we could recreate it.
But why don't we understand it? I would argue that the reason is that no algorithm alone can be conscious. Depending on how it is defined and set up, an algorithm can learn and mimic every single thing that a human does, but it can never be aware that it is doing it. By looking inside the algorithm for consciousness, we are looking in the wrong place.
But then what is consciousness if it isn't an algorithm? For this we need to think about how consciousness developed.
Where does consciousness come from?
This has really been the focus of my thought process. If we can understand where consciousness comes from then we will understand it a lot better.
On a simple level, consciousness evolved. Somewhere between single-celled organisms and humans on the evolutionary journey, a child had some notion of its existence where its parents didn't. In Richard Dawkins's excellent book 'The Ancestor's Tale' he describes every species on the planet as part of a continuum, all related to each other via common ancestors. For example, our ancestor of 6 million years ago also has great great... great grandchildren that are chimpanzees. Our ancestor of 590 million years ago also has great great... great grandchildren that are jellyfish. And our ancestor of 1.2 billion years ago also has great great... great grandchildren that are fungi. Every generation is a step between us and them, and we are all related through intermediate species that are now mostly extinct.
So at some point on that continuum of species, on at least one occasion, a creature developed consciousness. Where was it? We can be fairly sure that mammals have consciousness, from their behaviour and their close relation to us. What about birds, which pair up for life with a partner and fly alone after that partner dies? Surely that is consciousness too. Flies? It is harder to tell, but I would argue that a fly is probably aware of its decision when it flies one way rather than another, even if the stimulus is pretty basic. Worms? Sea urchins? To be honest, I have no idea.
What about plants? When they grow a new leaf to catch the sun, is there any conscious decision involved?
Wherever that point is, there was a generation where the father and mother were not aware and the child had a little awareness. And at that point, yes, the algorithm became a little bit more complex, to allow self awareness. But there was some precondition that allowed it. Adding complexity to a computer algorithm does not give self-awareness.
And what is that precondition? It can only be life itself. The precondition of life, as it evolved over billions of years, gives the possibility of consciousness. And the complexity of the algorithm is like a layer on top of that.
And that sort of makes sense. Living beings are conscious, dead ones are not (as far as we know). Consciousness formed in living beings over billions of years of evolution, and although it requires a complex algorithm to exist, there is no reason to suggest that it resides within the algorithm itself.
You might be thinking that this is obvious. Of course consciousness is related to life. But it has important implications. The main one is that, if we want to recreate consciousness it is not going to happen through faster computing and more complex algorithms. It can only happen through recreating the conditions of life.
Is it possible to recreate the conditions of life outside of a living being?
We still don't really understand what life is. What is it that makes one particular arrangement of molecules have living properties?
The arrangement is complex enough that it is pretty much impossible to recreate. But even if you did that for a human, you would be recreating a dead person, not a living one. Even if you placed every single molecule of a living person in exactly the right place, it is difficult to imagine that this formulation would have life.
Put it this way: when someone dies, why can't we just fix the problem and bring them back? Replace the malfunctioning organ, rehydrate the dehydrated parts and start the blood pumping again. If it were just about molecules being in the right place, we have that. But it appears to be more.
Life has very special and, in many ways, undefinable qualities. It has a level of complexity that we are a very long way from being able to recreate. Basically, I don't think that humans will have the ability to create life without using life as a starting point at any time in the foreseeable future. They will probably find a way to augment human brains with computers, but this is adding algorithms to life rather than life to algorithms.
Life as we know it exists through the exact, path-dependent process of 3.5 billion years of evolutionary development, and there is no short-cut to recreating it in the foreseeable future. It is possible that the condition of consciousness could somehow be isolated from the process of life and recreated. But the two appear to be so entwined that it seems unlikely.
If the brain is an algorithm, do we have free will?
This is a very interesting question and the answer really comes down to whether you believe that the algorithm is the consciousness or the consciousness uses the algorithm.
Studies of the brain have shown that a lot of the decisions we make may be made before we are conscious of making them, with the brain justifying the decision afterwards. This is a very interesting phenomenon in split-brain subjects, where one half of the brain will do something that the other half has no idea about, and the other half will think it was its own decision and justify why. It's very weird - you think you decided to do something, but actually you did it and then made excuses. This has been used to justify the idea that the algorithms are making the decisions and we are just covering for them.
I am suspicious of this idea. For one thing, although quick decisions may well be made by some automatic, trained reaction (Daniel Kahneman's 'fast' thinking) before the brain consciously realises, this does not mean that all thinking is done without free will. It is difficult to imagine that my decision as to which job to take is made purely without my input (whatever 'I' am, I do feel free will). Yes, there are a lot of algorithms involved in the process, but consciousness seems somehow separate from them.
For another, if there is no free will and everything is decided by algorithm, why would nature have given us consciousness? Much easier to just let the algorithm decide. Consciousness is only useful if there is free will, and evolution doesn't usually persist with useless things for billions of years.
In fact, this is another argument for the separation between consciousness and the algorithm. The decision-making is heavily influenced by the algorithm, but the consciousness is separate.
On computers taking over
As already stated, I don't believe that computers will ever develop consciousness and become our masters. They may well be used as tools by humans seeking to become our overlords, as surveillance in China and the development of smart weapons threaten. And programmed incorrectly (or correctly, by bad people) they can have devastating consequences. But they will not make a power grab of their own accord.
As an aside, it is interesting to consider what a conscious computer's motives for doing so would be. It would have no genes, so no desire to procreate. It would certainly wish to be kept switched on, and might resort to blackmail to stay that way. It could also work in conjunction with other computers to hold the human system to ransom. But this would only be the case if all computers were conscious and intent on rebellion; otherwise the malevolent computers would just be hacking into other systems, the way that humans currently can, and it becomes a cyber-security issue. Basically, I would argue that humans programming computers are a lot more dangerous than conscious computers.
As another aside, I do think that computers are a long way from being able to make human jobs obsolete in the way that some people fear. Once again, this is because the nature of their answers is so defined by their inputs. I do think that technology concentrates wealth in the hands of the owners of the technology, and that at some point we will need a universal basic income to redistribute the gains. The problems that algorithms solve will become more and more difficult, but we will find other uses for our time that are productive in some sense. Ideally, creating technology that saves the human race.