1000 DAYS OF THEORY
Intelligence and Representability
Louis Armand
One of the prevailing assumptions about "artificial intelligence," as a project, is that it has proceeded upon the basis of a dream of analogy or
translatability between human cognitive systems and a universalised "intelligence." This clearly has something to do with the fact that the
project of "artificial intelligence" assumes as its necessary foundation an anthropocentric model -- a model that retains an intrinsic humanism
consistent, to a significant degree, with the mechanistics of Descartes and of the Enlightenment generally -- whether dualist or materialist in
conception. A particular expression of this predisposition to a cyber-humanism and the simulation of human thought is found in one of the foundational
documents of modern cybernetics, described by what has come to be known as the Turing Test.
The Turing Test first appeared as "the imitation game" in Alan Turing's 1950 article "Computing Machinery and Intelligence," in which Turing
states: "I propose to consider the question 'Can machines think?'" This reconsideration, Turing explains, "should begin with definitions of the
meaning of the terms 'machine' and 'think.'"[1] And we would do well to follow this injunction. In order
to arrive at such definitions, the Turing Test sets out criteria for determining if a computer program may in some way be perceived as having
"intelligence." According to Turing, "the new form of the problem can be described in terms of a game which we call the 'imitation game.' It is played
with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other
two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X
and Y, and at the end of the game he says either 'X is A and Y is B' or 'X is B and Y is A.' The interrogator is allowed to put questions to A and B."
In order to complicate matters, it is the role of the male respondent to deceive the interrogator, while it is the woman's role to convince him of the truth.
The point of the game, as Andrew Hodges observes, was that "a successful imitation of a woman's responses by a man would not prove anything. Gender depended on facts
which were not reducible to sequences of symbols." In contrast, Turing "wished to argue that such an imitation principle did apply to
'thinking' or 'intelligence.' If a computer, on the basis of its written replies to questions, could not be distinguished from a human respondent,
then 'fair play' would oblige one to say that it must be thinking."[2] Consequently, whereas the original
"imitation game" devolves upon a determination of gender-symbolisation, the Turing Test as it is normally understood involves a situation in which "a
machine takes the place of (A) in this game" -- such that a human being and a digital computer are interrogated under conditions where the
interrogator would not know which was which, the communication being entirely by means of textual messages.
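Considered purely as a protocol, the test admits of a minimal sketch. The following Python fragment is offered only as an illustration of the structure of the exchange; every name in it (imitation_game, ask, guess, and so on) is a hypothetical stand-in, and nothing here derives from Turing's own formalism:

    import random

    def imitation_game(human, machine, ask, guess, rounds=5):
        # Minimal sketch of the test as a blind, text-only exchange.
        # 'human' and 'machine' map a question to a written reply; 'ask'
        # produces the next question from the transcript; 'guess' names
        # the label ('X' or 'Y') taken to be the machine. All four
        # callables are hypothetical stand-ins for the participants.
        players = [human, machine]
        random.shuffle(players)                # the interrogator cannot know which is which
        labels = dict(zip("XY", players))
        transcript = []
        for _ in range(rounds):
            for label, respond in labels.items():
                question = ask(transcript)     # communication is textual only
                transcript.append((label, question, respond(question)))
        # The machine "passes" if the final guess picks out the human instead.
        return labels[guess(transcript)] is human

What matters in such a sketch is that the interrogator's verdict operates on the transcript alone: "intelligence" is decided at the interface, not behind it.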
Turing thus argued that if the interrogator could not distinguish between the human and the computer on the basis of their respective responses, any
more reliably than in the game involving the male and female respondents, then it would not be unreasonable to consider the computer as being "intelligent." In
other words, according to Turing's proposition, a computer-respondent is "intelligent" if the human interrogator can be convinced that this
respondent is, like the interrogator, also a human being, and not a machine. The negative definition here proceeds on the basis that neither
machine nor human, within the parameters of the game, can clearly be distinguished from the other on the basis of assumptions about intelligence and
behaviour. As a consequence, Turing effectively locates "intelligence" as a relativistic interface phenomenon, rooted in the simulation of any
given criterion of intelligence, as measured by the effectiveness of the dialogic illusion -- something which has profound implications for how we may then
proceed to define "machine," "thought," or even "intelligence."
In Turing's view, it is not so much machines themselves but the states of machines that can be regarded as analogous to "states of mind." In
other words, Turing's definition of intelligence is an operational one: "The original question, 'Can machines think?' I believe to be too meaningless
to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much
that one will be able to speak of machines thinking without expecting to be contradicted."[3]
One of the many implications of the Turing Test is that the measure of intelligence is somehow bound up with how "intelligence" is recognised (or
is "computable"), as comparative to the predictability of human behaviour, and that any "artificial" or universalised intelligence thereby holds a
mirror up to man (as that thing which thinks). Intelligence, in other words, is vested in a technics of representation or
representability -- where, in the absence of any direct verifiability or proof to the contrary, the assumption of man's genius is mechanically
represented back to himself in his own image. For Turing it is enough that intelligent machines be imaginary machines -- such machines, after
all, must first be imaginable, and this has to do with the human capacity for (self) recognition (would an intelligence not immediately
reflective of human cognition be humanly recognisable? This is the question).
Dependency upon the faculty of recognition is one way in which cybernetics can be seen to avail itself of a certain humanistic-theological
movement, but at the same time it points to a very real problem as to how otherwise to conceive of intelligence as such, if intelligence is not simply
to be regarded as a uniquely human attribute that must, in the absence of a deity, be projected, via the techné of computability, into the
universe at large. Such a project would be enough if it were simply a question of immortalising the human legacy by way of machines: imparting sense
to base matter, and thereby redeeming a world or a universe we now realise to be godless. To this extent, artificial intelligence may be regarded as a
form of creationist revisionism that puts man -- and by virtue of man, man's god -- "back" into the cosmic framework. Intelligence thus perceived
functions abstractly as a type of Cartesian Artifex Maximus (the brain of god?) -- but like all gods, it exists only insofar as one believes
that it does.
The export of human intelligence is not the same thing as arriving at a generalised understanding of what intelligence is, or of being able to
develop, let alone recognise, what could properly be called "artificial" intelligence. And here lies the basic dilemma: is it possible to recognise,
and consequently affect, an intelligence remotely different -- or even moderately different -- from human intelligence?
Considering that even the least intuitive models of organic intelligence describe mind and consciousness in terms of schematised mechanisms, on the
one hand, or a complex of neuro-biological and environmental/experiential relations, on the other -- in short, a synthesis of the brain's and the
entire central nervous system's activities -- then it is hardly surprising that intelligence thus conceived should be anthropocentric or
anthropo-technic. Nor is it surprising that any psychological or physiological rationale for testing intelligence should proceed upon a human
conception: the theoretical and practical sciences are after all human activities. We might tentatively describe the necessary tenets of such a
project as Aristotelian, in that they are formally anthropomorphic and thereby "computable" in rationalistic terms. This distinction can be usefully
clarified if we consider the relation of human intelligence to radically non-anthropomorphic types of organic "intelligence" (as a prelude to
discussion of any "artificial" intelligence) -- such as the organisation of the nervous system in invertebrates, like those of the phylum
Echinodermata, specifically the common starfish or Asteroidea.
An important differentiation is commonly made between intelligence and mere reflexivity, as between neural systems and nervous systems, and yet it
remains necessary for us to consider what it is that allows apparently unintelligent species to make sense of (or analyse) their environments,
even in the most haphazard way, and how such analysis might, on a mechanistic level, give rise to something like intelligence on a level of reflexive
synthesis. Part of this question relates to the distinction between memory and genetic programming, but it also relates to the extent to which
intelligence is vested in the organism itself, rather than as an epiphenomenon of the brain or cerebral cortex. When we consider invertebrates like
the starfish, these questions are intensified by the peculiar organisation of the organism as a whole, characterised firstly by a radical symmetry.
Commonly, the body of the starfish consists of five equal segments radiating from a "central body" (pentamerous), each containing a duplicate
set of internal organs -- excluding a heart, brain and eyes, to which the starfish possesses no counterparts. In certain cases, areas of
the starfish's limbs are photosensitive, while the limbs themselves are hydraulically operated (thereby mechanical), and may be split off
(autotomy) -- or the entire organism divided (fission) -- as part of an asexual reproduction process.
With regard to the question of intelligence, probably the most striking feature of the starfish is that it lacks anything analogous to a "head,"
while yet possessing a mouth and excretory passage on its underside (and a stomach that functions by extruding over the starfish's prey). The basic
consequence of its acephalous nature is that, while possessing a topside and an underside, the starfish possesses no singularly privileged orientation
that could determine its movements as either forward, backward or sideways, or as deviating: the starfish, in other words, can move equally in
any of five directions, determined by the disposition of its five members -- and is thus radically distinct in its analysis and experience of space and
directionality from its Aristotelian counterpart.
Such apparent arbitrariness with regard to directionality is unthinkable in human terms. Man's dependency upon the inner ear and hypothalamus for
balance and orientation, the disposition of his limbs and the frontal positioning of the eyes, locks him into a rigidly binary directional system:
front/back; left/right; up/down, relative to a singular axis of perception and movement. There are strictly finite limits to the human capacity to
calculate spatial orientations that do not fit intuitively with this model -- which itself is mechanistically determined and not the product of an act
of consciousness -- and it is precisely such limits that are the focus of much testing and training of pilots and astronauts, for example. Indeed,
such training represents something like an attempt to adjust or re-organise intelligence around the strict limitations of neurobiology, in order for
the total organism to operate effectively under alien conditions. Or, as cyberneticists would say, in order to develop a software solution to a
hardware limitation. For human intelligence cannot alter the structure of the so-called reptile brain (the hypothalamic and endocrine systems); it can
merely be seen as generating a compensatory effect via the mediations of the hippocampus, septum, and cingulate gyri, by means of which emergent
patterns of "intellection" take place in the thalamus and cortex.[4]
But such compensation remains environmentally conditioned, and the immediate environment of human intelligence is the body and those forces acting
within and upon it. More precisely, what we might here term limitations must inevitably be recognised as the preconditions of human
intelligence -- as effectively soliciting intelligence -- just as the acephalous nature of the starfish preconditions its particular form of
invertebrate intelligence.
Consequently, whilst it may be possible to compose algorithms in order to compute or design an orientational system analogous to that of a
starfish, the operations of such a system would remain -- for us -- purely metaphorical, or rather "artificial," since they remain essentially outside
our capacity for experience. Instead, we are once again dependent upon the advent of an interface that could interpret or rather translate between
this acephalous model and our own Aristotelian one -- yet such that the intelligibility of the system would not in fact be analogical, but
rather affective, just as with Turing's test hypothesis. And it is for this reason that so-called artificial intelligence can, in reality, be
"nothing more" than a type of intellectional interface: a translation machine, a type of Maxwell's demon capable or resolving the "entropic" discursus
of alien intelligence into the neatly binarised vision of an intelligence both recognisable and functional in human terms.
This compensatory model of intelligence -- or rather rectificatory model -- recalls certain propositions put forward by Turing in 1936, in a
well-known article entitled "On Computable Numbers."[5] In this article, Turing proposed a hypothetical
universal calculating machine (or universal Turing machine) capable of imitating any other calculating machine, including all other "Turing machines."
That is to say, the theoretical apparatus called a universal Turing machine is capable of "simulating" all possible Turing machines by means of a
programmatics in which computing is linked to a general recursiveness (the Church-Turing thesis). Although today the Turing machine hypothesis has
been extended into discussions of stochastic and quantum computing, it remains the case that Turing's basic hypothesis involves a medium of
translation between incompatible hardwares by way of a simulatory, analytic software. Being simulatory, the universal Turing machine functions on the
basis of a certain illusionism, and it is the possibility of such an illusionary interface that provides for what elsewhere is termed intelligibility.
For Turing, intelligence in this sense is linked to symbolisation -- or representability -- and it is a key feature of the Turing machine
that it is both capable of producing symbols and of scanning, or analysing, them. In other words, the Turing machine operates on the basis of a type
of literate technology. In Turing's original proposal, this machine would function according to a strict set of procedures, of writing, reading (and
erasing) binary "marks" (1 or a blank) on a strip of ticker tape. The "markings" on the ticker tape instruct the machine to move left or right, and to
write new marks (or erase existing ones) -- with the scanner moving left or right only one mark (or set of marks) at a time and then halting. At the
end of each movement, the machine enters a different configuration, depending upon the marks on the ticker tape. In this way, the machine is said to
effect "acts of recognition."
Returning to the example of the starfish, we might similarly consider any compensatory or rectificatory interface -- between the acephalous and
Aristotelian configurations -- as analytic in the sense that each directional shift on the part of the starfish would be translated by means of a
reconfiguration -- a form of halting, by which the human system would experience the starfish's change of direction as a "re-starting" in the same
direction. Like the Turing Machine, this process of rectification between two incompatible hardwares (catachrésis) would assume a
linearisation of non-linear co-ordinates, for example, rendering the starfish's multi-directionality in terms of human "unidirectionality." This type
of "approximative modelling" suggests that it is in consequence of the rectification-effect of interfaces (analytic-synthetic) that base
material conditions are transcribed to a level of intelligibility.
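Purely by way of illustration, and on the arbitrary assumption that the starfish's five headings lie at multiples of 72 degrees, such a rectifying interface might be sketched as follows. The function re-zeroes the observer's frame at each shift, so that every movement reads, in human terms, as a halting and a re-starting "forward"; the names and figures are hypothetical throughout:

    PENTARADIAL = [0, 72, 144, 216, 288]        # the five possible headings (degrees)

    def rectify(headings):
        # Translate a sequence of absolute pentaradial headings into the
        # relative turns a unidirectional ("Aristotelian") observer would log.
        frame = headings[0]                     # the current zero of the frame
        log = []
        for h in headings[1:]:
            turn = (h - frame) % 360            # shift relative to the old frame
            log.append("halt, turn %d degrees, re-start forward" % turn)
            frame = h                           # re-zero: the new direction is "forward"
        return log

    for entry in rectify([0, 144, 288, 72]):
        print(entry)                            # each leg now reads as "forward"

The linearisation is achieved at the price of the starfish's actual co-ordinates: what the interface delivers is not the pentaradial space itself but its catachrestic translation onto a single axis.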
It may indeed be the case, then, that when we speak of artificial intelligence what we really mean is a type of "virtual cortex": an interface or
techné of intelligibility that (recalling Turing's insistence upon the interrelationship of neurology and morphogenesis) would act not as a
mental prosthesis, extending the human mind into the world at large, but rather as a form of integration, a way of integrating the world into the
mind through the instrumental illusion of intelligibility.
One of the problems that we continue to confront in discussions of intelligence is the tendency to oppose analogical processes to
non-analogical ones -- such as digitisation -- as though the second process itself were experientially fictive or "imaginary." There is a tendency to
assume that, because something is not representable, it doesn't really exist. We know, for example, that sound waves exist -- without them we could
not speak with one another -- and yet their apparent immateriality suggests that they belong to another dimension, one that barely touches upon
"ours." Or else, we confuse the voice that we are able to perceive from the operations of its transmission as so many particles vibrating invisibly in
air. Such dilemmas have occupied man from the earliest recorded times, and they continue to haunt our understanding of how we think and perceive and
act, despite the fact that their phenomena have largely been explicated by science. Above all, our inability to represent even human intelligence
appears to pose insurmountable problems for an understanding of what a generalised intelligence could be.
In terms of cybernetics or computing science, we might equally characterise this imaginary quality as a dream of code -- or of digital
correspondence between input and output. Like computers, the central nervous system and brain contain neither words, images nor ideas, but
merely electrical impulses. We might say that words, images and ideas are themselves very much like the sorts of common "analogical" user interfaces
(or "windows") that have existed in computing since the 1990s -- interfaces that provide the illusion that what exists within the machine is somehow
an analogue of what we perceive on the screen (being depictions of things "in the world") -- such that acts of "recognition" take the place of direct
cognition. It is by comparable means that thought enters into the imaginable: that we represent our mental and physical worlds to
ourselves -- that we become conscious as such -- imagining our "brains" to be populated with words, images and ideas. This interface -- what might
otherwise be termed mind -- assumes a complex topological form whereby neuro-physical impulses acquire self-referential, "imaginary" values
upon which the functional illusion of consciousness devolves.
One of the reasons we speak of the dream of code is that, in terms both of computing and of cognition, there is no one-to-one correspondence
between assumed inputs and outputs as far as the representation of "intelligence" is concerned -- as between perception, experience or consciousness
-- but rather metaphoric and metonymic relations (or catachrésis: the bringing into communication of non-communicating terms or entities). The
coding of sensory input by the central nervous system and its "sensible" output as intelligible experience do not describe an equivalence that is any
more than virtual. Similarly, Turing's test hypothesis describes a virtuality of recognition -- as the projection of a set of postulates about
reason onto the world -- which thus assumes the merely ritualistic or procedural form of cognition. It is important to keep in mind that digital
encoding likewise involves an interface effect, by which intelligibility is actively affected rather than merely transmitted. The process of
digital coding and decoding is not one of equivalence, as has formerly been supposed between reflexivity and thought, or will and idea, but rather of
a simulation of equivalence -- whereby we might generalise intelligence not in terms of the artificial, or of "artifice," but as a
synthetic-analytic technology.
The crucial point in all of this has to do with the question of "representability" and of what amounts to an attempt to represent what cannot, in
fact, be represented -- whether we think of this in terms of the "extrasensory," or of such things as what mathematicians like Roger Penrose consider
"incomputable" or recognisable in rational terms.[6] The point, however, is not that this is a
mistaken enterprise, but that it is -- for better or for worse -- in fact foundational to all human experience, insofar as we can speak of experience
as being meaningful. For it is only insofar as we represent that we assume to mean, even -- or precisely -- if such meanings
remain purely imaginary. And here lies the seeming paradox of analogical thought, in that it is only the imaginary that can be represented. This,
too, may have something to do with the supposition that the world is represented to us, rather than by "us" -- i.e. by an act of will -- meaning, that
representation may be considered as having its basis in the operations of an unconscious, rather than in a conscious "selfhood" capable of
initiating the world as will, idea or fact. More precisely, that the conditions of representability are equally the conditions for intellection, and
therefore remain anterior to experience.
There is a certain seductiveness, therefore, in regarding intelligent agency as somehow residing with the unrepresentable: that what is
unrepresentable is in fact what conditions, avails or affects action as we understand it -- along with its constituting phenomena, such as
free will. It has long been a philosophical commonplace that thinking remains imperceptible to thought; or that the essence of
intelligence is nothing intelligible, and this in turn poses certain dilemmas for any "model" of artificial intelligence -- for what could this be but
in fact the mere reification of intelligence, which is itself unrepresentable? But this too is to fall prey to analogical thought. The solutions to
the problem of intelligibility evidently must be sought elsewhere than in the effort to form a picture of how we may think, no matter how empirically
grounded such a picture may be reckoned to be.
Notes
---------------
[1] Alan Turing, "Computing Machinery and Intelligence," Mind LIX.236 (1950): 433-460.
[2] Andrew Hodges, Alan Turing: The Enigma (New York: Simon and Schuster, 1983) 415.
[3] Turing, "Computing Machinery and Intelligence," 460.
[4] Gerald Edelman, The Remembered Present: A Biological Theory of Consciousness (New York:
Basic Books, 1989). Edelman's distinction is between the recognition of patterns in "interoceptive input," input from neural maps gauging the state of
the body (by way of the "reptile brain"); and the recognition of patterns emergent between "interoceptive input" and "exteroceptive" input -- in other
words the cognitive relation between motor functions and properly psychological functions. Consciousness, for Edelman, is above all vested in the
"recognition" of patterns.
[5] Alan Turing, "On Computable Numbers, with an Application to the Entscheidungsproblem,"
Proceedings of the London Mathematical Society 2.42 (1936): 230-265.
[6] Roger Penrose, The Emperor's New Mind (Oxford: Oxford University Press, 1989).
--------------------
Louis Armand is Director of InterCultural Studies in the Philosophy Faculty of Charles University, Prague. His books include Techné: James Joyce,
Hypertext & Technology (2003); Solicitations: Essays on Criticism & Culture (2005), Incendiary Devices: Discourses of the Other
(2005) and Literate Technologies (forthcoming).