A Science Fiction Writer Muses on Artificial Intelligence and Artificial Sentience

Artificial Sentience is a recurring theme in Science Fiction. Sometimes the computer sentience is friendly, as in Robert Heinlein’s The Moon Is a Harsh Mistress (one of my favorite SF books of all time); other times the artificial sentience is overtly inimical (the artificial intelligence in The Matrix, Skynet in the Terminator movies); and sometimes it is schizophrenic, as with the HAL 9000 computer in the 2001/2010 Odyssey movies.

I’m currently working on the third book in The Halcyon Cycle, which I have tentatively titled The Descent into Abaddon. In it, one of my lead characters, Al Gleeson, has a debate with his friend Floyd Linder about Artificial Intelligence. Gleeson makes the following statement:

“There’s no such thing as a smart computer, only smart programmers.”

[Figure: the Turing Test. Can C distinguish A from B?]

As an author, if I want to construct a realistic debate about this subject, I have to try to argue both sides as convincingly as I can. One way to counter Al’s statement is to appeal to the Turing Test (http://plato.stanford.edu/entries/turing-test/). In the test for machine intelligence proposed by Alan Turing in 1950 [Turing, A. (1950), “Computing Machinery and Intelligence,” Mind, 59 (236): 433–460], a human test subject interacts via keyboard (to make life as easy as possible for the machine) with two hidden respondents: a human and a machine. If, after five minutes of conversation, the test subject cannot identify the machine with better than 70% accuracy, the machine is deemed intelligent.

But think about what would happen if we put this into practice. After each failure (i.e., the human subject recognizes the machine), the programmers would analyze which bit of dialogue tripped up the machine and add code to “give the human answer.” With enough processing speed and a large enough memory, over time all of these written queries ought to be countered, and the machine should eventually pass the test. But is the machine really intelligent, or are the programmers simply very clever?
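
To make Al’s side of the argument concrete, here is a minimal sketch of that patch-after-each-failure loop in Python (the names and canned phrases are my own invention, purely for illustration). The machine’s apparent wit is nothing more than an ever-growing table of human-authored answers:

```python
# Hypothetical sketch: a "chatbot" that is only as smart as its programmers.
canned_answers = {
    "how do you feel today?": "A little tired, honestly.",
    "what is your favourite colour?": "Blue, I suppose. It reminds me of the sea.",
}

def reply(query: str) -> str:
    # Unseen queries get an evasive, human-sounding fallback.
    return canned_answers.get(query.lower().strip(),
                              "Hmm, I'd have to think about that one.")

def patch_after_failure(query: str, human_sounding_answer: str) -> None:
    # After the judge unmasks the machine, the programmers add the answer
    # a human would have given. The program never learns; its authors do.
    canned_answers[query.lower().strip()] = human_sounding_answer

print(reply("How do you feel today?"))   # canned, human-authored answer
print(reply("Do you dream?"))            # evasive fallback gives it away
patch_after_failure("Do you dream?", "Now and then. Mostly about deadlines.")
print(reply("Do you dream?"))            # this query now passes
```

However large the table grows, every “intelligent” reply in it was written by a person.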

There may be another problem with this test: the computer may be too capable. If I were the test subject, I would ask each respondent to recite an irrational number, say Euler’s number e, to 27 decimal places. Any calculator could do it; I would not expect a human respondent to manage it. I suppose the programmers could code the computer to lie, responding that it can only recite e to two decimal places, but if it answered truthfully, that would give it away. There are probably many such tests in which the computer can provide data of such precision that it would be “busted” in its deception.
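
To show how trivial the give-away is for the machine, here is a short sketch that produces e to 27 decimal places using only Python’s standard decimal module (the 40-term cutoff and guard precision are assumptions of mine, chosen with plenty of margin):

```python
from decimal import Decimal, getcontext

getcontext().prec = 35  # guard digits beyond the 27 places we report

def euler_e(terms: int = 40) -> Decimal:
    # e = sum over k of 1/k!; 40 terms is far more than 27 places need.
    total, factorial = Decimal(0), Decimal(1)
    for k in range(terms):
        total += 1 / factorial
        factorial *= k + 1
    return total

print(f"{euler_e():.27f}")  # 2.718281828459045235360287471...
```

A human respondent, of course, could not produce this answer unaided, and would have no reason to pretend otherwise.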

What do you think? Is it possible to design a sentient machine?

Thanks for reading,

Peter

Peter Kazmaier is the author of the science fiction novel The Halcyon Dislocation, available at Amazon.

About Peter Kazmaier

Lover of Fantasy and Science Fiction. Author of the SF series THE HALCYON CYCLE. I frequently re-read my favourite books. http://tinyurl.com/p46woa4

Posted on November 19, 2014, in Science Fiction. Bookmark the permalink. 7 Comments.

  1. It would be interesting to turn the experiment around: with hidden human interlocutors behind the keyboard, how often would they be identified as “machine”? What types of answers would prompt such an identification?

  2. John, that’s a very interesting idea, and it would check misidentification in both directions: machine for human and human for machine.

  3. kevinthekingsson

    There is a very important distinction between sentient behaviour and actual consciousness. I have often reflected on the hypothetical notion of a child reared and thoroughly indoctrinated in the theory of human evolution, but without ever being made aware that it was itself a human being. On being finally told that it was human, I can well imagine its response being something like, “Well, I can understand how these humans come to be, and why they behave the way they do. But there are umpteen billion of them, all supposedly like me; yet I cannot feel a single thing that any one of them is feeling. Where do I fit into all this?”

    Although we still have a long way to go, great strides are being made in the development of learning algorithms that allow computers to learn for themselves. (My eldest son has a degree in Cognitive Science.) I can’t see any a priori reason why it should not eventually be possible to construct robots that exhibit sentient behaviour that is as sophisticated as our own. However, whilst they could react to sensory stimuli just as we do, and engage in philosophical debate about the meaning of their existence, we would still have no way of actually knowing if they could really see colours or feel pain as we do. Faced with such a scenario, I do not see how we could reasonably refuse to grant them the equivalent of ‘human rights,’ or even acknowledge the possibility that they might have a soul or a spirit. But how could we actually tell?

    According to the hard-nosed atheist, the existence of ‘soul’ or ‘spirit’ is just a superstitious fantasy: whereas to the Christian it is our possession of an eternal spirit and capacity to have a relationship with God that places man above the natural creation. Ultimately robots may conclude that they are superior to humans and that the universe would be a better place without us. And if the atheists are right, why not?

  4. Hello Kevin,

    Thank you for your thoughtful and informative reply. I found your distinction between sentient behavior and actual consciousness very useful.

    Your information about cognitive science and learning algorithms was enlightening. I guess for me this points back to the idea that “there are no smart computers, only smart computer programmers.” It seems to me (and here my thinking could use some help) that the presence of a “learning algorithm” itself already puts the machine into a deterministic box. Learning, to the machine, will mean whatever the programmer said it would mean and follow whatever process the programmer has prescribed.

    As you pointed out, the real question is: are we humans also in a deterministic box? I think most materialists would say “yes,” and so, with that line of thinking, we are all run by algorithms, and machines can in principle become “human” (actually, they have made humans into machines through their philosophical presuppositions).

    When I look at human creativity, artistry, and innovation, I don’t see us as being in a box. Somehow we seem to be able to generate wholly new non-algorithm-driven thought (I can’t prove that but that’s how my observations stack up). Perhaps we have a dimension of the soul that breaks out of the bounds of simple cause and effect and that’s the source of true free will?

    Thank you for commenting.

    Peter

  5. kevinthekingsson

    The kind of learning algorithms I was referring to are what are known as ‘neural networks.’ These are arrays of relatively simple logical elements that seek to mimic the behaviour and organisation of neurons in the brain. Typically, each ‘neuron’ can accept a number of possible inputs and produce an output signal based upon the relative weighting applied to the different inputs. The outputs may then be fed into secondary arrays, fed back into the primary array, etc., with the final outputs leading to an output signal representing a decision, which may be a logical yes or no, or a more complex probability weighting.

    The network is then placed into ‘learn’ mode, where it is presented with a series of input parameters representing a specific problem, and is allowed to vary the weighting factors, by a process of trial and error, until most, if not all, of the results yield the desired response. It is really quite remarkable what sophisticated functions can be performed by even a simple neural net of this type, and subsequent analysis of the weighting factors discovered can actually reveal previously unrecognised relationships between the various input parameters. Technology of this type is now widely used in video and voice recognition systems, for example.
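
    A minimal Python sketch of this trial-and-error learning: a single artificial ‘neuron’ has its weighting factors randomly perturbed until every training example yields the desired response (the logical-AND task, step sizes and iteration cap are arbitrary choices, purely for illustration):

    ```python
    import random

    # Toy 'neuron': weighted sum of inputs, thresholded to a yes/no output.
    def fire(weights, bias, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    # Illustrative task: learn the logical AND of two inputs.
    training = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    def errors(w, b):
        return sum(fire(w, b, x) != y for x, y in training)

    random.seed(0)
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(10_000):              # cap the search, just in case
        if errors(weights, bias) == 0:
            break
        # 'Learn' mode: jiggle the weighting factors and keep the change
        # only if it does not make the results worse.
        candidate_w = [w + random.uniform(-0.5, 0.5) for w in weights]
        candidate_b = bias + random.uniform(-0.5, 0.5)
        if errors(candidate_w, candidate_b) <= errors(weights, bias):
            weights, bias = candidate_w, candidate_b

    print(weights, bias, errors(weights, bias))  # a program found by chance
    ```

    Note that the process is pseudo-random yet, as discussed below, entirely deterministic: rerun with the same seed, it finds exactly the same weights.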

    The challenging aspect of this is that, once the basic neural mechanism has been set up, the actual programming of the net is done by a pseudo-random process. (It could in principle be totally random: though for speed of learning, most practical applications do involve some ‘tweaking.’) Yet, of course, the whole procedure is totally deterministic. So where does this leave us in the sentience and free will debate?

    A neural net system might, if sufficiently complex, result in sentient behaviour: but we would still not know if it really was sentient. However, it raises some very interesting questions as to the potential relationship between the non-physical realm and what are termed ’emergent properties.’

    But the development of artificial intelligence highlights what I think is a major misunderstanding of the debate concerning free will. I have previously commented on this in the ‘Books and Writers’ group on LinkedIn, in the thread entitled, ‘You can argue the existence of free will?’ For ease of access, I’ll reproduce it here…

    I did my B.Sc. thesis on the subject, ‘Freewill and Determinism in Relation to Brain Science;’ so I have a strong personal interest in this issue.

    Two points then:

    Firstly, I think it is more appropriate to speak in terms of our having ‘limited autonomy.’ That is to say, we are able to make decisions that are independent of external influences: but only up to a point. Our ability to choose what we want can be limited both by the pressure of external circumstances and by our inability to foresee the outcome of our choices.

    Secondly, contrary to what we might intuitively suppose, our ability to exercise free choice is dependent upon the causality of our environment, not on freedom from it. Without causality we cannot predict the probable results of our choices, and so our choices become meaningless (even though they might still be ‘free’ in the same sense as the gambler who bases his choices on the roll of a die).

    During the time I was doing my thesis there was a debate raging on this issue in the New Scientist magazine. On the one hand, there was a professor of mathematics who was arguing that it should be possible, if one could but take a snapshot of a subject’s brain and all relevant factors in his environment, to perfectly predict the subject’s responses, no matter what action he might take to refute it. On the other hand was another eminent doctor whose counter-argument, as I recall, was that the subject was still free to believe or disbelieve the prediction.

    By the time I had got my own thoughts together on the issue the debate was over: but I sent the maths professor a private letter outlining a simple computer program that was guaranteed to invalidate any attempt made to predict its future state. I then asked whether, given that a clearly deterministic machine could so easily refute any prediction of its state, it was conceivable that a human being was incapable of doing the same. After several months I received a very gracious reply from the professor acknowledging that he had been unable to come up with any counter-argument.
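
    The letter’s original program is not reproduced here, but a minimal sketch along the lines described might look like the following: the predictor must publish its prediction as an input, and the program then deterministically falsifies it.

    ```python
    # Assumed reconstruction, not the original program: whatever state is
    # predicted for it, the program inspects the prediction and enters the
    # other state. It is fully deterministic, yet no published prediction
    # of its next state can ever be correct.
    def next_state(prediction: str) -> str:
        return "OFF" if prediction == "ON" else "ON"

    for predicted in ("ON", "OFF"):
        print(f"predicted={predicted}, actual={next_state(predicted)}")
    ```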

    The point is that our ability to make meaningful choices is dependent on our ability to predict outcomes. But a prediction made about others may be invalidated if it becomes known to them. This has interesting ramifications when it comes to issues such as prophecies, which may be self-fulfilling, self-destroying or unaffected, depending on the level of comprehension and conscious responses of those who hear them.

    The other point arising from this is that in order to have total freedom of choice one would have to know everything about everything and be capable of controlling all external influences. Ultimately, therefore, the only one with total autonomy would, by definition, have to be God.

    • Kevin, thank you for your excellent introduction to neural networks and their relationship to determinism versus free will. I have a question. C. S. Lewis, in chapter 3 of his book MIRACLES, entitled The Cardinal Difficulty of Naturalism, talks about two different kinds of thought: Cause and Effect thinking (which one readily recognizes in both man and beast) and Ground and Consequent thinking, which seems to be restricted to humans and indeed underlies the reasoning used most often in science because it deals with correlatives.

      My question, then, is: do neural networks give rise to both kinds of thinking, or only to Cause and Effect thinking? [I think you know what I mean and did not want to bore you with the definitions.] Any thoughts?

  6. kevinthekingsson

    Hi, Peter,

    I hadn’t actually read that particular one of C.S. Lewis’ books; so have been reading Chapter 3 and Jim S’s series of articles on the arguments for and against it on the Quodlibeta forum (http://bedejournal.blogspot.co.uk/2013/09/cs-lewiss-argument-against-naturalism.html) over the weekend.

    Typically, a neural net excels at Cause and Effect problems. In effect, an aggregated ‘memory’ of past events and their outcomes is being coded into the system via the stored weighting factors, which is then used to predict future outcomes. This is the type of mechanism that is commonly thought to underlie most of the animal kingdom’s ‘acquired learning’ behaviours.

    Ground and Consequent thinking is more akin to the ‘Boolean logic’ that forms the basis of conventional digital computing, where circuits performing very basic logical operations can be arranged and programmed to perform incredibly complex tasks. As you have previously observed, however, the genius here currently lies mainly in the minds of the circuit designers and programmers.

    But these distinctions are blurring. Conventional digital computers can simulate the operation of a neural network, and neural networks can learn to perform complex logical comparisons. Indeed, it is highly probable that our own logical thought processes, at the physical level, are heavily reliant on the skills learned by our own neural networks. Moreover, in science and research it is now increasingly necessary to make use of the vast computing power of super-computers to test the validity of scientific theories, either by simulating the consequences of a particular hypothesis, or else by using their immense number-crunching capabilities to analyse vast amounts of data, such as in the recent high-energy experiments searching for evidence of the Higgs boson.

    So, although it is probably beyond our current levels of computing ability to analyse the semantics of a complex logical argument, matching the alleged grounds and causes against the consequents and effects so as to deliver a proven/not-proven verdict, I would have to say that I can see no a priori reason why this should not be possible in the future.
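
    At the level of pure logic, at least, such a verdict is already mechanisable. A toy sketch (the encoding, plain Python functions over named propositions, is an assumption of mine: a brute-force truth table rather than a real theorem prover):

    ```python
    from itertools import product

    def verdict(premises, conclusion, variables):
        # Brute-force truth table: the argument is proven unless some
        # assignment makes every premise true and the conclusion false.
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            if all(p(env) for p in premises) and not conclusion(env):
                return "not proven"     # a counterexample exists
        return "proven"

    # Modus ponens: from (P implies Q) and P, conclude Q.
    implies_pq = lambda e: (not e["P"]) or e["Q"]
    print(verdict([implies_pq, lambda e: e["P"]],
                  lambda e: e["Q"], ["P", "Q"]))   # proven
    # Affirming the consequent: from (P implies Q) and Q, conclude P.
    print(verdict([implies_pq, lambda e: e["Q"]],
                  lambda e: e["P"], ["P", "Q"]))   # not proven
    ```

    The semantic difficulty lies in translating an informal argument into such a formal encoding, not in checking the encoding itself.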

    Which brings me back to C.S. Lewis’ argument. He correctly recognises that, in order for any ground-consequent relation to be validly believed, there needs to be a demonstrable cause-effect relationship between the ground and the consequence. Putting this another way: if our thought processes are not themselves deterministic – in the sense that, given knowledge of the ground conditions, we cannot come to any other logically valid conclusion as to the consequence – then we cannot regard the conclusion as being validly proven.

    But what, exactly, is it that Lewis is trying to assert by this ground-consequent distinction? He appears to be acknowledging that ground-consequent thinking must be logically deterministic.

    … One thought can cause another not by being, but by being seen to be, a ground for it.

    If you distrust the sensory metaphor in seen, you may substitute apprehended or grasped or simply known. It makes little difference for all these words recall us to what thinking really is. Acts of thinking are no doubt events; but they are a very special sort of events. They are ‘about’ something other than themselves and can be true or false.

    Yet he also appears to be saying that to recognise the truth of a belief is something distinct both from the cause-effect relationships, whose observation gave rise to the corresponding ground-consequent inference, and from the cause-effect relationships that are in play during the inferential process.

    The act of knowing has no doubt various conditions, without which it could not occur: attention, and the states of will and health which this presupposes. But its positive character must be determined by the truth it knows. If it were totally explicable from other sources it would cease to be knowledge … Any thing which professes to explain our reasoning fully without introducing an act of knowing thus solely determined by what is known, is really a theory that there is no reasoning.

    The real crux of the matter, I think, is what is meant by ‘truth’, ‘knowing’ and ‘reasoning’. In a computer, truth is simply a logical state: whereas to most humans it is something highly desirable and to some the ultimate goal for which they strive. It is analogous, I would suggest, to the relationship between sound and music.

    Music is a message which is codified as sound waves. But it may also be codified as electrical pulses, a series of pits in the surface of a CD, or an arrangement of pigments on a piece of paper. Computers can analyse music and even compress it by stripping out features that are beyond the capacity of our physical senses to discern. They can learn to recognise harmony, cadence, etc. Yet practically no-one would suggest that they experience music in the same way that we do. It may well be possible to create a computer or robot that was an ardent fan of Bach or electronica. I even suspect that it could be constructed in such a way as to be capable of holding its own with the most erudite of music critics. But I still can see no way it could ever be definitively shown that such behaviour did or did not involve any conscious perception and love of music.

    The same may be said of all our conscious senses. The functions and behaviours associated with our senses may be analysed and reproduced in various ways: but conscious perception is not needed to explain how they work and achieve their ends. And thought itself is similar, in that its mechanisms can be examined and emulated in various ways. Yet the thrill and satisfaction of that ‘Eureka!’ moment, and the settled sense of understanding that follows, is every bit as much a conscious experience as the delight of hearing a musical masterpiece or watching a glorious sunrise.

    Therefore it seems to me that the real argument is not about determinism, but consciousness.

    I could go on, as I haven’t even started on issues such as self-awareness, how predictive decision-making can open the door to freedom, the real nature of scientific discovery, the role of axiomatic truth in logical reasoning and the concept of emergent properties: but I’d better stop at this point.
