• News
  • Brain, Mind & Consciousness

What would it take for AI to achieve consciousness?

by Eva Voinigescu, Nov 9, 2017

Imagine you’re on a business trip, driving from your hotel to a meeting with a client when your fuel light goes on.

Your car has a GPS that can tell it where the nearest gas station is. But the fuel light and the GPS are unaware of each other. If instead your car had access to information from all its parts and it was self-aware – in other words, it knew what it knew – it could direct itself to the nearest gas station. If it could do this, it would be functionally conscious.

So argue CIFAR Fellows Stanislas Dehaene and Sid Kouider, members of CIFAR’s Azrieli Program in Brain, Mind & Consciousness, in a review paper published last month in Science.

The review looks at existing knowledge about the neuroscience of consciousness and proposes that consciousness results from specific types of information processing physically carried out by the hardware of the brain. Today’s artificial intelligence does not have these abilities, the researchers say.

In fact, the things that AIs are best at today are things that human brains tend to do without conscious thought, such as face and speech recognition.

To achieve a functional consciousness similar to that of humans, machines need to adopt two types of information processing already present in the brain, argue Kouider, Dehaene and their colleague Hakwan Lau.

…the things that AIs are best at today are things that human brains tend to do without conscious thought…

The first is “global availability,” the act of selecting and making a piece of information accessible for processing and decision-making by the whole system. Global availability highlights or draws attention to a thought or piece of information that was up until that moment unconscious.

Though the brain possesses a deep hierarchy of specialized modules that operate non-consciously and are dedicated to specific tasks like processing visual input or directing motion, it also possesses a “global neuronal workspace” where specific pieces of information are selected and shared across all modules. Whatever information is present in this global area at any given time is what we call “conscious.”
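The workspace idea can be sketched in a few lines of code. This is a toy illustration only – the module names, the salience-based selection rule, and the broadcast mechanism are my assumptions for the sake of the example, not details from the Science paper:

```python
# Toy "global workspace": specialized modules run independently, and
# one selected signal is broadcast to all of them, becoming "globally
# available" -- the functional analogue of a conscious content.

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []            # broadcasts received from the workspace

    def receive(self, message):
        self.inbox.append(message)

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules

    def broadcast(self, candidates):
        # Select the most salient candidate; only the winner is shared
        # with every module (all other signals stay "unconscious").
        winner = max(candidates, key=lambda c: c["salience"])
        for m in self.modules:
            m.receive(winner["content"])
        return winner["content"]

fuel = Module("fuel_gauge")
gps = Module("gps")
planner = Module("route_planner")
workspace = GlobalWorkspace([fuel, gps, planner])

# The fuel warning out-competes a routine GPS update, so the route
# planner now "knows" about the low tank -- the car in the opening
# example could head for a gas station.
selected = workspace.broadcast([
    {"content": "fuel low", "salience": 0.9},
    {"content": "position update", "salience": 0.2},
])
print(selected)        # fuel low
print(planner.inbox)   # ['fuel low']
```

In this sketch, "conscious" is simply whatever message won the competition and was made available to every module at once.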

In the paper the researchers explain that the prefrontal cortex appears to act as a central information sharing device. They add that the considerable expansion of the prefrontal cortex in the human lineage may have resulted in a greater capacity for integration across brain regions and functions.

For AI, implementing multiple processes in a single system and flexibly coordinating them, much as the global workspace does, remains a difficult problem, although some recent machine architectures are showing progress in this area.

But for a machine to act as though it were conscious, it is not enough for information to simply be globally accessible in this way. The researchers point to a second computational process that they believe is key to the emergence of consciousness in the brain – self-awareness.

“Humans do not just know things about the world, they actually know that they know or that they do not know,” the authors state in the paper. This self-awareness allows us, and would allow self-reflective machines, to regulate behavior, make confident decisions, and know when a mistake has been made. It also allows us to direct resources to acquiring more information where we lack it – in effect, self-awareness drives curiosity.

“Humans do not just know things about the world, they actually know that they know or that they do not know”

The authors point out that most current neural networks lack self-knowledge about the reliability and limits of what they have learned, though some Bayesian networks use probability to track whether they are likely to be correct. Other systems have been implemented in robots to direct their resources towards problems that maximize their learning.
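Confidence tracking of this kind can be illustrated with a minimal sketch: a predictor that reports not only an answer but how sure it is, abstaining when its estimated probability is too low. The softmax probability model and the 0.8 threshold are illustrative assumptions, not the paper's method:

```python
# Minimal sketch of metacognition as confidence tracking: the system
# "knows that it does not know" when its best estimate is too weak,
# and abstains instead of guessing.

import math

def predict_with_confidence(logits, threshold=0.8):
    # Softmax over raw class scores -> one probability per class.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=lambda i: probs[i])
    confidence = probs[best]
    if confidence < threshold:
        # Abstention is the point where a self-aware system would
        # seek more information -- the "curiosity" the article describes.
        return None, confidence
    return best, confidence

print(predict_with_confidence([4.0, 0.5, 0.1]))  # confident: class 0
print(predict_with_confidence([1.0, 0.9, 1.1]))  # ambiguous: abstains
```

A plain classifier would have returned its top class in both cases; the second output is where self-knowledge about reliability changes the behavior.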

An important aspect of knowing what you know involves knowing what is real and what is imagined. Humans both take in external information and generate their own imagined scenarios. For generative algorithms, adversarial learning is one method researchers are using to evaluate the authenticity of generated representations. Doing so could help AI function more like conscious humans.
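The adversarial idea can be reduced to a one-dimensional toy: a discriminator scores how "real" a sample looks, and a generator nudges its output to fool it. The distributions, the distance-based scoring rule, and the hill-climbing update are all illustrative assumptions, far simpler than real adversarial training:

```python
# Toy adversarial loop in 1-D. "Real" data clusters around 5.0; the
# generator starts far away and shifts its mean whenever the
# discriminator rates the shifted output as more realistic.

import random

random.seed(0)
REAL_MEAN = 5.0

def real_sample():
    return random.gauss(REAL_MEAN, 1.0)

def discriminator(x, estimated_mean):
    # Higher score = looks more like real data (closer to the
    # discriminator's estimate of where real samples live).
    return -abs(x - estimated_mean)

# Discriminator "learns" reality by averaging observed real samples.
disc_mean = sum(real_sample() for _ in range(200)) / 200

gen_mean = 0.0                     # generator starts far from reality
for _ in range(100):
    # Propose a small shift; keep it if it fools the discriminator more.
    candidate = gen_mean + random.uniform(-0.5, 0.5)
    if discriminator(candidate, disc_mean) > discriminator(gen_mean, disc_mean):
        gen_mean = candidate

print(round(gen_mean, 1))          # drifts toward ~5.0
```

The competition is the mechanism of interest: the generator's imagined samples are judged against reality, which is the kind of real-versus-imagined evaluation the article describes.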

As the authors point out in their review, some may object that this account misses the subjectivity of experience that most people associate with consciousness. The authors admit that their theory of consciousness differs from others in that it is entirely computational. However, they note that in the human brain, a loss of global availability and meta-cognition coincides with the loss of subjective experience.

“This is what we know about the only system that is unambiguously conscious, what we know from studying the human brain and human cognition, and those are ingredients that should be thought of when we are trying to build new [AI] implementations,” said Kouider.