What Do Smart Machines Think of Us?
by J C Spender


Alan Turing, the British mathematician who did crucial work on WW2 German naval codes and on computing, has been much in the news. One reason is his theorizing about mathematical biology, morphology, and chaos theory: why, for instance, a zebra’s stripes are as they are. Another is his field-shaping thinking about artificial intelligence (AI) and its increasing impact on human affairs. Stephen Hawking, Bill Gates, and Elon Musk, for example, have sounded warnings that AI might ‘escape our control’ and ‘take over’. Ray Kurzweil has claimed the ‘singularity’ (the moment of takeover) is at hand. Given our total surveillance society, deepening embrace of ‘technology’, and the coming ‘Internet of things’, these concerns alarm us as they surface some of mankind’s ancient terrors.


We have various ideas about how humans and computers interact. ‘Soft AI’ sees computers as tools to aid us in our endeavors; as tireless and precise as an electric drill or a robotic medical sampler, obeying our instructions – but never more. ‘Hard AI’ emerges as we imagine these tools becoming (a) more complex than we are, and (b) able to ‘learn’, improving their performance and so moving beyond our limitations.


It is not easy to define the singularity. The movie “The Imitation Game” drew attention to Turing’s test. This challenges us to decide whether questions posed to a hidden respondent are being answered by a human or a computer – whether a computer can ‘fool’ us enough to throw the man–machine distinction into doubt. It upends the idea that our capabilities can be usefully compared against a computer’s on memory, speed, and so on. Instead, Turing’s test challenges our sense of consciousness. It suggests a computer might seem ‘more human’ than us as our sense of it being a ‘machine’ is displaced by our amazement at its human-like responses. Siri and the movie “Her” help show how readily we fool ourselves as we submerge what we have evidence for beneath what we desire.
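
To make the test’s set-up concrete, here is a minimal sketch in Python – entirely hypothetical, with a canned-answer stand-in for a real conversational program: the judge converses through text alone and must guess which kind of respondent answered.

```python
# Sketch of the imitation game's structure (illustrative only).
import random

def human_respondent(question):
    # A person typing at another terminal, hidden from the judge.
    return input(f"(hidden human) {question} > ")

def machine_respondent(question):
    # Hypothetical stand-in for a real conversational program.
    canned = {"How are you?": "Fine, thanks - a little tired today."}
    return canned.get(question, "I am not sure what you mean by that.")

def imitation_game(questions):
    # The judge sees only text, never the respondent.
    respondent = random.choice([human_respondent, machine_respondent])
    for q in questions:
        print("Q:", q)
        print("A:", respondent(q))
    guess = input("Judge, is the respondent human or machine? > ")
    actual = "human" if respondent is human_respondent else "machine"
    verdict = "right" if guess.strip() == actual else "fooled"
    print(f"The respondent was a {actual}; the judge was {verdict}.")

imitation_game(["How are you?", "What did you dream about last night?"])
```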


All of which suggests three views of the singularity. The first, and most naïve, is that as we create ever more complex systems there must come a point when they are ‘more complex’ than we are. Even then, amazed at the system’s capabilities, we know these have been ‘engineered in’. Calling such a system ‘intelligent’ abuses the meaning of the term, for its intelligence is only that of its human makers. Of course, the system might well ‘fool’ those who do not understand what is going on, just as people unfamiliar with modern medicine may be fooled by a doctor’s ‘magic’.


A second notion, proposed by the Americans Herbert Simon and Allen Newell, is of machine intelligence based on heuristics. Their insight was that humans’ non-logical work-rules, such as kitchen recipes or rules for finding dates, could be programmed into a machine that is entirely logical. Medical diagnosis was an early example; expert systems are now common. They are ‘intelligent’ to the extent the programmer transports rules humans see as intelligent into the system. Comparing the system’s results against intended outcomes can lead to feedback, a form of ‘machine learning’ in which its rules are adjusted by a ‘higher order’ but no less human-generated rule. Nevertheless, this learning must itself be programmed, so, as Simon and Newell reminded us, the system is displaying the programmer’s learning, not its own.
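
As a minimal, purely illustrative sketch in Python (the conditions, symptoms, and weights are all invented): both the diagnostic heuristics and the ‘higher order’ learning rule below are human-written, so the system ‘learns’ only in the sense Simon and Newell described.

```python
# Illustrative rules a programmer might transcribe from human experts.
RULES = {
    "flu":  {"fever", "aches", "cough"},
    "cold": {"sneezing", "runny nose", "cough"},
}

# Confidence weights the system will 'learn' to adjust.
weights = {"flu": 1.0, "cold": 1.0}

def diagnose(symptoms):
    # Score each condition by weighted overlap with the observed symptoms.
    scores = {
        name: weights[name] * len(required & symptoms) / len(required)
        for name, required in RULES.items()
    }
    return max(scores, key=scores.get)

def feedback(predicted, actual, rate=0.1):
    # The 'learning' is itself a human-written, higher-order rule:
    # nudge weights up on success, down on failure.
    if predicted == actual:
        weights[predicted] += rate
    else:
        weights[predicted] -= rate
        weights[actual] += rate

guess = diagnose({"fever", "cough"})    # -> "flu"
feedback(guess, actual="flu")           # adjusts weights, only as programmed
print(guess, weights)
```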


Turing offered a third argument, or speculation, that led to his test. He presumed humans think with rules but, being emotional and ethical as well as logical, apply them less precisely and speedily than computers do. His famous 1950 Mind article pointed out that, given machines can already ‘imitate’ us, we may eventually be unable to distinguish them from us. Lacking such a distinction, we must consider their intelligence fundamentally similar to ours. We need not understand our own thinking. The singularity follows. Note that ‘machine consciousness’ is not at issue; the emphasis is on our inability to distinguish the machine’s consciousness from our own. We might assume we have capabilities machines lack, but we cannot ever define or prove them. Turing simply proposed that machines could acquire the rules necessary to ‘imitate’ us and so become socially acceptable. What more is needed?


The three arguments’ differences are instructive and help us grasp big data’s potential. The important questions revolve around human imagination, our evident capacity to deal with our life’s uncertainties. There are three types of uncertainty: ignorance, indeterminacy, and incommensurability. Ignorance, the uncertainty that drives most research, is our not knowing what can be known – the real. Indeterminacy, which underlies game theory, arises because our knowing is both limited and various: unable to enter another’s mind, we cannot be certain of the result of interacting with them. Incommensurability underpins our inner doubts, in that our knowing is grounded on a variety of unprovable assumptions and so fragmented into parts; contradictions and paradoxes are incommensurabilities our imaginations cannot resolve. Crucially, our imagination is shaped but never bounded or determined by our experiences as we inhabit our social space. Even though computers ‘inhabit’ their own space, not ours, and do not live as we do, Turing presumed machines might learn to deal with our questions and so display ‘human imagination’. Given the coming ‘Internet of things’, they may also come to share our panoply of sensory equipment and display an adaptive capability we cannot distinguish from our own acts of imagination.


These stories help illuminate Tom Davenport’s three modes of big data – descriptive, predictive, and prescriptive. The first simply collects ‘data’, what is sensed. The second has a model or rule programmed in to link the computation to some human intention, predicting whether a chosen goal will be reached. The third has the capability to explore alternative models programmed in, finding the best fit between the data being gathered and the goal brought to the analysis. This technique has the potential to surprise us by surfacing unanticipated goal-relevant relationships – what we sometimes call pattern recognition. In short, big data can help us overcome ignorance about how best to reach our goals.
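
A minimal sketch of the three modes in Python – the sales figures, the goal, and the deliberately simple candidate models are all invented for illustration, not Davenport’s own formulation:

```python
import statistics

sales = [12, 15, 14, 18, 21, 24, 23, 27]   # invented weekly sales figures

# Descriptive: simply summarize what has been sensed.
print("mean:", statistics.mean(sales), "stdev:", round(statistics.stdev(sales), 2))

# Predictive: one human-chosen model (a linear trend) linked to an
# intention - will sales reach 30 units by week 12?
def linear_fit(ys):
    xs = range(len(ys))
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return lambda x: my + slope * (x - mx)

model = linear_fit(sales)
print("week 12 forecast:", round(model(12), 1), "goal met:", model(12) >= 30)

# Prescriptive: explore alternative models and keep the best fit to the data -
# the machine may 'surprise' us by finding a pattern we did not anticipate.
candidates = {
    "linear trend": model,
    "last value": lambda x: sales[-1],
    "overall mean": lambda x: statistics.mean(sales),
}

def sse(f):
    # Sum of squared errors of a model over the observed weeks.
    return sum((f(week) - y) ** 2 for week, y in enumerate(sales))

best = min(candidates, key=lambda name: sse(candidates[name]))
print("best-fitting model:", best)
```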


The possibility of any machine system’s dealing with indeterminacy and incommensurability, life’s other uncertainties, hinges on its being able to discover and successfully inhabit the multiple universes we inhabit. For us, forever bounded and unable to understand our situation in its entirety, the resulting pluralism of our knowing the lived world is held together by a dynamic and pragmatic sense of Self – of being a single competent intelligence, not schizophrenic. This arises from our unique reflexive capacity to both think and observe our thinking, to be both within and without our consciousness. Ultimately, Kurzweil’s singularity is about machine consciousness, and most agree this is far off, even if some form of reflexivity can be engineered. In which case a ‘reverse’ Turing test hovers in the background, challenging us to seem truly computer-like. Yet can a machine observe and learn to know itself as we do – become ‘conscious’? Could we ever recognize such consciousness when, absent the machine-dominated dystopias of science fiction, we can never enter the machine’s universe? Likewise, we cannot enter a zebra’s intelligence – it matters only when we train the animal to our purposes. There can be no singularity. Management’s task is always to bring the machine’s ‘consciousness’ to bear in our world and so answer, “What does it mean to us?”


About the author:

JC Spender trained first as a nuclear engineer, then in computing with IBM. He moved into academe as a strategy theorist, opening up a subjectivist/creative approach that complements mainstream rational-planning notions of strategizing. This 40-year project was brought to completion in Business Strategy: Managing Uncertainty, Opportunity, and Enterprise (OUP, 2014).
