This question seems to pivot on whether we can initially create a robot that is, or becomes, a sentient being capable of independent thought and self-replication. Can we create a machine that thinks, learns and behaves like a human and indeed in a manner superior to humans?
It’s a common theme in the science fiction genre, sure enough. A misunderstood, super-intelligent, kinda-handsome-in-a-nerdy-kind-of-way scientist creates an intelligent robot capable of independent thought and learning. Said robot becomes more intelligent than its maker, replicates itself and is about to conquer the world, only to be foiled at the eleventh hour…
Given the advances in technology over the past century, it is tempting to say that we will indeed be able to create such a robot in the not too distant future. We did, after all, leap from the first powered flight to space flight in only 66 years. Why couldn’t we create a truly intelligent robot in a similar time frame?
One argument against a ‘sentient’ robot is that human beings are somehow endowed with an intelligent soul that cannot be replicated in a machine. I’m not comfortable with this argument though, as it seems we are stepping ever closer to a world defined by physicalism. The dualistic (mind/body) notions of humanity and intelligence may become a thing of the past as we discover the Higgs boson (or ‘God particle’), find a unified theory of physics, comprehensively define the mental in terms of the physical and eschew traditional notions of the soul for more scientific alternatives.
So is there another natural limit? A factor that defines human thought and intelligence that cannot be reduced to an algorithm and placed in a machine?
I believe there are two such factors: (i) a pesky mathematical theorem known as Gödel’s Theorem, and (ii) qualia.
There is a strong logical argument to suggest that certain brain processes are not computational or algorithmic in nature. Roger Penrose (Shadows of the Mind) presents perhaps the strongest case against the artificial creation or simulation of true intelligence. He applies Gödel’s incompleteness theorem to human thought to argue that some thought processes are not computable. This is a complex argument and I won’t attempt a summary here, but for further reading see the above-mentioned text.
The second, perhaps more accessible, argument against sentient robots is that computers ‘think’ in numbers whereas humans don’t.
Robots are built from machines and computers, together with our best attempts to simulate the learning process using quantitative methods. Humans, on the other hand, appear to be able to compare qualitative information and still reach generally rational conclusions about different and similar qualities.
It’s as though robots can only think in numbers as primary units, whereas humans can think in colours, sounds, physical sensations, tastes, odours, and even more abstract concepts such as emotions, as primary units. We don’t reduce these memories of sensory experiences down into numbers, but instead process them in their original qualitative form, or ‘qualia’ (such as ‘red’), a notion Frank Jackson develops in his article “Epiphenomenal Qualia” (1982) and extends in “What Mary Didn’t Know” (1986).
Exactly how does the brain do this? Well, the brain appears to work more like a database of qualitative information than a mathematical algorithm.
It is adept at comparing and contrasting memories of these different sensory experiences, both with other memories and with a set of beliefs. The brain appears to perform ‘is generally like’, ‘is generally dissimilar to’ and several other qualitative calculations rather than stringent mathematical calculations (+, −, ×, /, =, >, < etc.). An entire new science of qualitative calculation might need to be developed before artificial intelligence could even begin to simulate human thought processes and the development of a set of beliefs.
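To make the contrast concrete, here is a minimal sketch, purely illustrative and in no way a working theory of mind, of what an ‘is generally like’ comparison might look like if a machine attempted it. All of the names here (Memory, generally_like, the threshold) are my own inventions, not anything from the literature. Notice that even this toy version must collapse the qualitative judgement into a number, a similarity score, which is precisely the problem.

```python
# A toy, purely illustrative attempt at the ‘is generally like’
# comparison described above. Memories are bundles of symbolic tags.

from dataclasses import dataclass

@dataclass(frozen=True)
class Memory:
    # Tags like "warm" or "milk" are labels standing in for qualia,
    # not the qualities themselves: exactly the limitation at issue.
    qualia: frozenset

def generally_like(a: Memory, b: Memory, threshold: float = 0.5) -> bool:
    """Crude ‘is generally like’ test: proportion of shared tags."""
    combined = a.qualia | b.qualia
    if not combined:
        return True  # two empty experiences are trivially alike
    overlap = len(a.qualia & b.qualia) / len(combined)
    # The qualitative judgement collapses into a number here.
    return overlap >= threshold

earlier = Memory(frozenset({"warm", "mother's voice", "milk", "safe"}))
now = Memory(frozenset({"warm", "mother's voice", "safe", "lullaby"}))

print(generally_like(earlier, now))  # True: the experiences ‘generally match’
```

The interesting thing about the sketch is what it cannot do: the tags are mere labels for qualia, and the comparison itself is ultimately arithmetic over those labels, not a comparison of the qualities themselves.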
Humans can certainly perform mental arithmetic, but only because we have been educated in these techniques as useful tools for understanding and harnessing the world around us. Babies do not count the number of fingers on each hand. But they do see their mother, hear her voice, smell her, touch her warm body and taste her milk. As they continue to experience similar interactions with their mother, they compare the new memories of sensory maternal interaction with older memories, compare these memories to a need for shelter, protection and food, and perhaps a growing belief that they ‘belong’ with her and enjoy her company. When these qualities match (mum is right here with me), the child experiences different emotions to when the qualities do not match (where has mum gone?). Neither children nor adults reduce these qualitative memories into numbers.
So could we eventually develop a science of qualitative calculation? Perhaps. But as a starting point, we must resolve to use qualia rather than numbers as the primary units in our calculations. Qualities such as the colour red are likely to be forever irreducible, regardless of the sophistication of our phenomenology.
Even if machines can use sensors to capture data on sight, sound and touch, compare this data to existing data, make qualitative decisions and refine a belief set (in other words, approximate an algorithm of human thought), they will still never experience these qualia in the same way humans do. If they cannot experience what we experience, then perhaps they can never learn the way we learn or think the way we think. Perhaps we will never build a sentient robot because such sentience is dependent on understanding what ‘red’ actually is.
I think we are safe.