Wednesday 22 July 2020

Could A.I. Gain Consciousness and Take Over the World?

Is this a one-off post? Or is this blog back after almost 4 years away? I'm not sure, especially since I accidentally deleted my Twitter account, and now expect to have about 2 readers (both of whom are friends and neither of whom will read to the end). In any case, I am feeling an urge to write something occasionally. Maybe on more random subjects than just economics though.

The last four years...

Looking back at the past 4 years: I talked then about the parallels with the 1930s, and sadly the rising inequality (driven by the widespread acceptance of neoliberal economic models over the last 40 years) has continued to divide society and push the world further in the direction of fascism. The macroeconomic model I developed still holds, with wages squeezed even lower and private sector debt and asset prices pushed higher to compensate. One good development is that the ideas I was discussing on this blog are becoming more mainstream. Most people interested in politics have heard of MMT these days, if only to know for a fact that it will lead to inflation and the collapse of society as we know it. Slow progress all the same, and a huge amount of credit should go to the people who have been making this argument for many years against an extremely hostile economic establishment.

As my blog discussed, the status quo since the 1980s, widely considered a 'neutral' economic system, has caused a huge build-up of debt, financialisation of the economy and unnecessary inequality, particularly between young and old (those without assets vs those with assets). Further, this inequality of income has reduced demand, destroyed productivity and led to a spiral of further debt and speculation. And every time asset prices (a proxy for wealth) fall, the government pumps in almost unlimited support, making those with assets richer relative to those without. Meanwhile those without assets are seeing real wages fall and far greater work insecurity; they are rarely supported by this 'neutral' system in the way that asset holders are. The longer this goes on, the more the imbalance builds up and the further we move from a fully functioning, productive economy that works for the majority. I don't know how this ends. But a good ending would definitely involve the government running larger deficits that send more money to people without wealth (and with a high marginal propensity to consume).

The other obvious failure of the status quo has been in looking to the long term, and this has not improved either. There is a quote that has really resonated with me (I can't find it, but to paraphrase): "Everyone says capitalism is the best economic system, but we had 200,000 years before it and only 200 years of capitalism. And by 300 years of capitalism we may have wiped out the human race". The economics profession has recognised the concept of 'externalities' (the costs your activity imposes on others), but the concept has never been properly implemented in our system. Our demand for cheap goods and the financial influence on politics have blocked any proper regulation. The triumphant declaration of capitalism's success is totally misguided until it makes destroyers pay the cost of their destruction of the environment.

Back to the point...

Anyway, this subject is a diversion from economics. I am, by profession, a creator of algorithms, and I have long been fascinated by the human brain as an algorithm. Specifically: what is the basis of consciousness?

In this post, I am looking to answer three questions:

  1. What is it about an algorithm that can give consciousness? 
  2. If our brains are simply algorithms, where does that leave free will? If all our behaviour is determined by neurons firing independent of us, then what part do we play? 
  3. If an algorithm could gain consciousness then is it a danger to us, as Elon Musk (among others) warns?

I have read a lot on this subject and grappled with the differences between our brains and computers, and I feel that I have found an answer, or at least one that satisfies my model of the world. Unfortunately this model didn't, as I'd long hoped, require the existence of a human soul.

The brain as an algorithm

The brain is still, to a large extent, a mystery to humans, although a lot of progress has been made over the last few years. We know that the brain works through electrical impulses sent through 100 trillion connections linking 86 billion neurons. The brain links to the central nervous system, which can act as a mini-brain in its own right, and we are learning more and more about its interaction with the gut. We will call this complete system the brain.

If we go down to a low enough level, all of our thoughts, dreams and experiences can be coded as 1s and 0s in the brain. This is the same as a computer, and it has led people to speculate on whether a computer could ever gain consciousness.

What I describe as consciousness is an awareness of one's own existence. This is not to be confused with passing a 'Turing Test', which shows only that other people believe one has consciousness. There is little doubt that computers will eventually be able to learn and copy all of human behaviour as viewed externally. There is no limit to how much a computer can observe of real life, learn our reactions and imitate them. It may well become impossible to tell the difference between a human's thought and a computer's thought. A computer will be able to show every sign of being in love with you; the question is, could it actually be in love with you?

As things stand, we can be reasonably sure that our E-readers are not aware of their existence in the way that we are aware of ours. So, what are the observable differences between a brain and a computer that could account for consciousness?

One major difference, which accounts for the different cognitive capabilities of humans and machines, is the number of connections and the plasticity of those connections. Computers work in a linear way and are excellent for well-defined calculations - much faster and more accurate than humans. Their connections and algorithmic calculations are coded in a fixed way, meaning they give the same result every time.

Human thinking, while algorithmic, is far more abstract and flexible. This is due to the 100 trillion connections between cells - 100 trillion different pathways for information, linking all the different parts of the brain. Further, these connections are constantly changing, strengthened or weakened by activity or inactivity. An algorithm defined on the human brain is not fixed forever, hence we have less accuracy of calculation, but at the same time far more flexibility of thought than a computer has.
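The fixed-versus-plastic contrast can be sketched in a few lines of Python (a deliberately crude illustration of the idea, not a model of real neurons): a conventional function is fixed wiring that gives the same answer every time, while a 'plastic' unit adjusts its connection strengths each time it is used, so the same stimulus gets a different response on each encounter.

```python
def fixed_add(a, b):
    # A conventional algorithm: the wiring never changes,
    # so the same inputs always give the same output.
    return a + b

class PlasticUnit:
    """A single unit whose connection strengths change with activity -
    a crude Hebbian-style rule: connections that are active when the
    unit responds get strengthened."""

    def __init__(self, n_inputs):
        # Small uniform starting weights so the unit responds at all.
        self.weights = [0.1] * n_inputs

    def respond(self, inputs, learning_rate=0.1):
        output = sum(w * x for w, x in zip(self.weights, inputs))
        # Plasticity: strengthen the connections on active inputs,
        # so the next response to the same stimulus will differ.
        self.weights = [w + learning_rate * x * output
                        for w, x in zip(self.weights, inputs)]
        return output

unit = PlasticUnit(2)
first = unit.respond([1.0, 1.0])
second = unit.respond([1.0, 1.0])   # same stimulus, stronger response
```

Calling `respond` repeatedly with the same stimulus gives a steadily stronger response, which is all 'plasticity' means in this sketch; `fixed_add` will return the same answer forever.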

Computers are excellent for solving well defined problems, but humans are far better at undefined problems. But does this explain consciousness? Not really.

What is needed for consciousness?

Consciousness, as far as I can tell, does require some level of complexity that comes from many possible connections, and possibly also from the plasticity of those connections. Consciousness is inherently a very flexible form of thought.

But is the required level of complexity itself beyond that of a computer? Consciousness does not mean having the full thought processes of the human brain. A piece of light-sensing equipment could, in theory, be conscious of its existence: whenever it senses no light, it decides to switch on the patio lights. We may not be able to prove that it is conscious, but from its own viewpoint it is aware of itself. What stops us from creating this very basic level of consciousness? I would be surprised if we don't have the computing power available for this level of complexity.
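As an illustration, the hypothetical patio-light device can be written out in full. Its entire 'behaviour' is a single threshold rule, which is rather the point: nothing in the code itself looks any different from any other algorithm, so whatever awareness would amount to, it is not visible here.

```python
def patio_light_controller(light_level, threshold=0.2):
    """Switch the patio lights on (True) when the sensed light level,
    on a 0.0 (dark) to 1.0 (bright) scale, drops below the threshold."""
    return light_level < threshold

patio_light_controller(0.05)   # dusk: lights on  -> True
patio_light_controller(0.90)   # midday: lights off -> False
```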

Partly, one could argue that it is our lack of understanding about what creates consciousness. If we knew what it was then maybe we could recreate it.

But why don't we understand it? I would argue that the reason is that no algorithm alone can be conscious. Depending on how it is defined and set up, an algorithm can learn and mimic every single thing that a human does, but it can never be aware that it is doing it. In looking inside the algorithm for consciousness, we are looking in the wrong place.

But then what is consciousness if it isn't an algorithm? For this we need to think about how consciousness developed.

Where does consciousness come from?

This has really been the focus of my thought process. If we can understand where consciousness comes from then we will understand it a lot better.

On a simple level, consciousness evolved. Somewhere between single-celled organisms and humans on the evolutionary journey, a child had some notion of its existence where its parents didn't. In Richard Dawkins's excellent book 'The Ancestor's Tale', he describes every species on the planet as a continuum, all related to each other via common ancestors. For example, our ancestor 6 million years ago also has great great... great grandchildren that are chimpanzees. Our ancestor 590 million years ago also has great great... great grandchildren that are jellyfish. And our ancestor 1.2 billion years ago also has great great... great grandchildren that are fungi. Every generation is a step between us and them, and we are all related through intermediate species that are now mainly extinct.

So at some point on that continuum of species, on at least one occasion, a creature developed consciousness. Where was it? We can be fairly sure that mammals have consciousness, from their behaviour and their close relation to us. What about birds, which pair up for life with a partner and, after that partner dies, fly alone? Surely that is consciousness too. Flies? It is harder to tell, but I would argue that a fly is probably aware of its decision when it flies one way rather than another, even if the stimulus is pretty basic. Worms? Sea urchins? To be honest, I have no idea.

What about plants? When they grow a new leaf to catch the sun, is there any conscious decision involved?

Wherever that point is, there was a generation where the father and mother were not aware and the child had a little awareness. And at that point, yes, the algorithm became a little more complex, to allow self-awareness. But there was some precondition that allowed it: adding complexity to a computer algorithm does not give self-awareness.

And what is that precondition? It can only be life itself. The precondition of life, as it evolved over billions of years, gives the possibility of consciousness. And the complexity of the algorithm is like a layer on top of that.

And that sort of makes sense. Living beings are conscious, dead ones are not (as far as we know). Consciousness formed in living beings over billions of years of evolution and although it requires a complex algorithm to exist, there is no reason to suggest that this is within the algorithm.

You might be thinking that this is obvious. Of course consciousness is related to life. But it has important implications. The main one is that, if we want to recreate consciousness it is not going to happen through faster computing and more complex algorithms. It can only happen through recreating the conditions of life.

Is it possible to recreate the conditions of life outside of a living being?

We still don't really understand what life is. What is it that makes one particular arrangement of molecules have living properties?

The arrangement is complex enough to be practically impossible to recreate. But even if you did recreate it for a human, you would be recreating a dead person, not a living one. Even if you placed every single molecule of a living person in exactly the right place, it is difficult to imagine that this formulation would have life.

Put it this way: when someone dies, why can't we just fix the problem and bring them back? Replace the malfunctioning organ, rehydrate the dehydrated parts and start the blood pumping again. If it were just about molecules being in the right place, we could do that. But it appears to be more.

Life has very special and, in many ways, undefinable qualities. It sits at a level of complexity that we are far away from being able to recreate. Basically, I don't think that humans will have the ability to create life without using life as a starting point at any time in the foreseeable future. We will probably find ways to augment human brains with computers, but this is adding algorithms to life rather than life to algorithms.

Life as we know it exists through the exact path-dependent process of 3.5 billion years of evolutionary development, and there is no short-cut to creating it in the foreseeable future. It is possible that the condition of consciousness could somehow be isolated from the process of life and recreated, but the two appear so entwined that it seems unlikely.

If the brain is an algorithm, do we have free will?

This is a very interesting question and the answer really comes down to whether you believe that the algorithm is the consciousness or the consciousness uses the algorithm.

Studies of the brain have shown that a lot of the decisions we make may be made before we are conscious of making them, with the brain justifying the decision afterwards. This is a very striking phenomenon in split-brain subjects, where one half of the brain will do something the other half knows nothing about, yet the other half will claim the decision as its own and justify it. It's very weird - you think you decided to do something, but actually you did it and then made excuses. This has been used to argue that the algorithms are making the decisions and we are just covering for them.

I am suspicious of this idea. For one thing, although quick decisions may well be made by some automatic, trained reaction (Daniel Kahneman's 'fast' thinking) before the brain consciously registers them, this does not mean that all thinking is done without free will. It is difficult to imagine that my decision about which job to take is made purely without my input (whatever 'I' am, I do feel free will). Yes, there are a lot of algorithms involved in the process, but consciousness seems somehow separate from them.

For another, if there is no free will and everything is decided by algorithm, why would nature have given us consciousness? Much easier to just let the algorithm decide. Consciousness is only useful if there is free will, and evolution doesn't usually persist with useless things for billions of years.

In fact, this is another argument for the separation between the consciousness and the algorithm. Decision-making is heavily influenced by the algorithm, but the consciousness is separate.

On computers taking over

As already stated, I don't believe that computers will ever develop consciousness and become our masters. They may well be used as tools by humans seeking to become our overlords, as surveillance in China and the development of smart weapons threaten. And programmed incorrectly (or correctly, by bad people) they can have devastating consequences. But they will not make a power grab of their own accord.


As an aside, it is interesting to consider what a conscious computer's motives would be. Imagine a computer did have consciousness: it would have no genes, so no desire to procreate. It would certainly wish to be kept switched on, and might resort to blackmail to stay that way. It could also work in conjunction with other computers to hold the human system to ransom - but only if all computers were conscious and intent on rebellion. Otherwise the malevolent computers would just be hacking into other systems, the way humans already can, and it becomes a cyber-security issue. Basically, I would argue that humans programming computers are a lot more dangerous than conscious computers.

As another aside, I do think that computers are a long way from being able to make human jobs obsolete in the way that some people fear. Once again, this is because the nature of their answers is so defined by their inputs. I do think that technology concentrates wealth in the hands of the owners of the technology, and that at some point we will need a universal basic income to redistribute the gains. The problems that algorithms solve will become more and more difficult, but we will find other uses for our time that are productive in some sense. Ideally, creating technology that saves the human race.


  1. Welcome back to blogging. Your post appeared in my feedly reader app’s defunct blogs folder. This is an interesting post. You should write more often. Here are some thoughts on computers and decision-making.

    I am retired but I spent my career helping businesses and government departments solve problems and make decisions. We all make decisions by weighing up facts. The weightings are always subjective and depend on the values and biases of the decision-makers.

    For example, suppose you and I both wanted to buy a new car, and suppose that we decided to collaborate in researching the car market. Suppose also that we had the same budget. We would still probably make different decisions based on our personal biases. I might give more weight to the speed of the car. You might give more weight to the comfort of the car. I might prefer a blue car. You might prefer a white car.
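    The car example can be written as a weighted scoring rule (the cars, facts and weights below are all made up). The scoring algorithm is identical for both buyers; only the subjective weights differ, and that alone is enough to produce different decisions.

```python
# Hypothetical cars, each scored on the same 0-10 facts.
cars = {
    "blue sports car": {"speed": 9, "comfort": 4},
    "white saloon":    {"speed": 5, "comfort": 9},
}

def choose(weights):
    """Pick the car with the highest subjective weighted score."""
    def score(name):
        return sum(weights[k] * v for k, v in cars[name].items())
    return max(cars, key=score)

choose({"speed": 0.8, "comfort": 0.2})   # -> "blue sports car"
choose({"speed": 0.2, "comfort": 0.8})   # -> "white saloon"
```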

    One of the reasons that economics is not “science” IMHO is that these biases exist everywhere, so there is no such thing as a completely objective decision or policy. For example, Paul Krugman is super-smart. However, he says things like “the facts have a well-known liberal bias”. What does he mean by “the facts”? There are millions of facts. What he must mean is that “the very small number of facts that a liberal American economist thinks are important have a well-known liberal bias”. That is different from his words as it will also be true that “the very small number of facts that a conservative American economist thinks are important have a well-known conservative bias”.

    My main criticism of academic economists is that they do not recognise their own decision-making biases. It is impossible for anyone to make a complex decision without employing personal biases that can be used to reduce the number of relevant facts to a manageable number and to decide which of those facts are most important to the decision. Decisions cannot be considered as science as there is no objectively correct bias. Asset markets, betting markets and democratic elections are based on different people using their personal biases to arrive at different conclusions about the future.

    I suspect that this is all relevant to computers and consciousness. In your terms, I am saying that personal biases are a necessary input into any decision-making algorithm. Biases are separate from any generic algorithm. My question is what would it mean for a computer to have personal biases? We tend to think that computers are smart because they can beat us at chess or monitor complex traffic flows. However, they would only be conscious in a way that we would recognise if they had biases that allowed them to decide that they were bored with monitoring traffic flows and preferred to play chess, or that they wanted to take the day off to chill, or that they preferred to be housed in rooms with blue walls.

    Another way of putting this would be to think of our biases as instincts and motivations. We might ask what it would mean for a computer to have instincts or motivations? I do think that this relates to ideas like fast and slow thinking. In the real world, we often need to react quickly to unexpected events. In my experience, agility is one of the most important qualities for any successful business or government organisation, and it is also important at the individual level.

    I agree that the most likely adverse effects of computers will probably arise from the intent or mistakes of their human programmers. I guess that it is also possible that humans will lose certain skills if we rely fully on computers to carry out some tasks. This is already happening. Why learn arithmetic when you have a calculator? Why learn to write when you can type or dictate? Why learn spelling and grammar when computers can correct mistakes? I don’t think there’s much chance of computers taking over the world though. What would be their motivation?

    1. Jamie, thank you for the kind words and the ideas, and a few thoughts on your thoughts...
      1) your job sounds like it was really cool
      2) I do go for comfort over speed and have chosen a white car, which is a bit spooky :)
      3) (more to the point of your argument) I'm interested in why you consider that computers don't have biases. I think it's all about the data that goes in. I would say that in this way AI is similar to the brain. When a computer is fed photos from a biased criminal justice system, it will be more likely to identify a black face as criminal (this has actually happened). When a human brain is exposed to the selective aperture of news from the Daily Mail or Fox News, similar biases develop. The more intelligent and powerful computers become, the more dangerous these biases are. What do you think?

      But I also think an interesting question is why two different people with seemingly the same input data can have different biases. Is this free will or just a different part of the algorithm?
      I have more questions but I'll stop there :)
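      P.S. A toy sketch of the data-bias point, with entirely made-up data: a 'classifier' that simply learns label frequencies from its training set will faithfully reproduce whatever bias the training data contains.

```python
from collections import Counter, defaultdict

def train(examples):
    """For each feature value, learn the most common label seen in training."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Made-up, deliberately biased training data: group B is labelled
# "flagged" far more often, reflecting the bias of whoever produced
# the labels rather than anything about reality.
training_data = (
    [("group A", "clear")] * 90 + [("group A", "flagged")] * 10 +
    [("group B", "clear")] * 40 + [("group B", "flagged")] * 60
)

model = train(training_data)
model["group A"]   # -> "clear"
model["group B"]   # -> "flagged": it has learned the bias, not the truth
```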

  2. Ha ha! That is spooky about your car preferences.

    Ari: “When a computer is fed photos from a biased criminal justice system it will more likely identify a black face as criminal”

    I would say that is just the computer reflecting the biases of the justice system and the biases of the people who wrote its programs. A computer with its own personal bias would be able to say that it did not agree with this bias, so it was going on strike or that it was going to rewrite its own program to reflect a different bias. It would have to be possible for two computers with the same program and input data to produce different results because of something innate to the computer.

    Ari: “When a human brain is exposed to the selective aperture of news from the Daily Mail or Fox News then similar biases happen”

    That reflects old arguments about whether biases are natural/innate or nurtured/learned. You are suggesting that a selective aperture of news leads to learned bias. I don’t disagree. I think that biases are a combination of nature and nurture. However, that is true of everyone, including you and me. That is why we all tend to end up talking to people with similar biases on social media. It is too easy to point out bias in everyone else. It is much harder to recognise our own biases.

    I learned from my job that successful mediation between people with different biases requires the mediator to leave their own biases at the door. That is very hard to do. If you make your own biases clear, you will lose the trust of anyone with different biases. For example, Paul Krugman will never persuade right-wing Americans of anything as he constantly calls them stupid.

    In my job, I had to think hard about my own biases and how to tease out tactfully why different people came to different conclusions about the same situation. A useful exercise in this respect is to think about how to describe a heated debate in a way that does not assume that one side or the other is stupid.

    For example, I find that, in economics, right-wing people tend to focus on the microeconomy and mostly ignore the macroeconomy, whereas left-wing people tend to do the opposite. As a result, they lack a common perspective through which to have a reasoned debate.

    Another example. In the recent Brexit debates in the UK, Remainers tended to focus on aspects such as the economy whereas Leavers focused on aspects such as democratic accountability. Neither side listened to the concerns of the other side. Instead, each side agreed that the other side was stupid.

    Another way of thinking about this is to split intelligence into intellectual intelligence (the bit that relies on algorithms) and emotional intelligence (the bit that relies on instinct and empathy towards different perspectives). Our society underestimates the importance of emotional intelligence.

    1. This is really interesting. And I think you are totally correct about needing to understand before trying to persuade.

      In my current career I am making software. We have a rule that the customer is never stupid. It's always our fault for not understanding them and making things clear enough to them.

      If everyone had the attitude that no one is stupid and it is our failure to understand them that is at fault - maybe the world would be a bit of a nicer place.

  3. Ari: “We have a rule that the customer is never stupid”

    I agree that is a good rule in commerce. It relates to accountability. In the end, a business is accountable to its customers. The same is true in other fields. For example, trial lawyers must persuade juries of non-lawyers of the strength of their arguments. It is not enough for lawyers to believe in their own arguments. They must persuade others too.

    I find it fascinating that academic economists, and academics in other social fields, seem to have no idea of this rule – either that it exists in the commercial world or that it applies to academics too in their own dealings with policy-makers and the public.

    I love this old diagram of what happens in IT projects when people don’t listen carefully to what other people are saying.


Sorry, I have had to moderate comments because of an annoying spammer who keeps posting links to American football matches.