
Theorizing 21c

Surveillance Never Sleeps

Arthur and Marilouise Kroker


5. Robots Trekking Across the Uncanny Valley

Blended Reality

In the new real, we are running with the robots. Industrial robots for seamlessly automated car manufacturing; medical robots for facilitating patient care in assisted living retirement communities; warrior robots engaged in materializing the imaginative game scenarios of cyber-warfare; toy robots that promise a happy first encounter between machines and the newest generation of humans; and, most of all, invisible robots circulating in the data clouds of social media as SocialBots. Perhaps more than we may suspect, ours is already a blended reality in which robots not only live among us as artificially programmed prosthetics equipped with articulated limbs and complex sensory arrays, but have also begun to live within us, quietly but insistently bending the trajectory of human perception, imagination, and desire in the direction of a future life of the mind that bears unmistakable signs of a robotic imaginary. Consider, for example, the following stories focusing on the complex intersection between human intelligibility and robots, both invisible and visible.


While the future of human encounters with robots has often been envisioned as an ominous struggle between fragile but immensely adaptive humans and powerful, although less creative, mega-robots, the real-world encounter has proven to be decidedly low-key, ubiquitous, and technologically subtle. Seemingly everywhere, the digital body has been swiftly delivered to its robotic future in the form of a pervasive network of invisible bots: socialbots swarming social media sites creating contagious flows of viral information, influencing individual perception, imitating human behavior; capitalist super-bots in the form of high-frequency trading algorithms that powerfully shape the ebbs and flows of stock transactions; psy-ops bots in the service of military intelligence that function to effectively influence political perception; and, of course, those other multiplicities of net bots–spiders, crawlers, and malware–that trawl the Internet, sometimes like proletarian worker robots performing routine web indexing functions, but at other times like futurist versions of the Cylons in Battlestar Galactica, quietly searching for critical weaknesses in websites, software programs, and Internet infrastructure itself. Consequently, to the question concerning future encounters between humans and robots, the answer is already not only well known, but pervasively experienced as the contemporary real-time environment of digital life. No longer content to remain at a safe, mechanical distance from their human creators, robots in the form of those lines of code that we call bots have already broken down the walls of human perception, inhabiting the world of social media as their cybernetic hive, attaching themselves to the human imagination in the seductive form of hashtags and tweets and, all the while, migrating the spearhead of robotic evolution itself from the mechanical to the neurological.

In the usual way of things, no one really anticipated that robots would faithfully follow the trajectory of technology itself, from high visibility to pervasive invisibility, travelling from the outside of the human body to the deepest interior of human subjectivity, quickly evolving from the mechanical to bots with very active cognition. When bots proliferate in the digital clouds that surround us, when they actually take up neurological residence in human perception, desire, and imagination, we can acknowledge with some confidence not only that we are already running with the robots, but something more uncanny; namely, that robots are already living among us and, most decidedly, living within us.

The meaning of this is fully enigmatic. When robots were something that we could see–for example the cute Japanese robot that played soccer with President Obama and concluded with a victory dance and cheer [1]–we could take the measure of the event in traditional humanist terms. But what happens when robots actually trek across the uncanny valley? Not uncanny in the usual sense of the term because they physically start to become indistinguishable from humans, but in the deeper sense that bots are perhaps already an indispensable dimension of posthuman subjectivity. We mean this literally. For example, it is reported that 30 percent of all Twitter content comes from bots: bots that reply to articles, bots that assume the names of friends in order to direct traffic to specific commercial products, bots for spying, for trading, for porn. In this case, have we become our own uncanny valley? Consider the following media report:

“I Flirt and Tweet. Follow Me at #Socialbot”

From the earliest days of the Internet, robotic programs, or bots, have been trying to pass themselves off as human. Chatbots greet users when they enter an online chat room, for example, or kick them out when they get obnoxious. More insidiously, spambots indiscriminately churn out e-mails advertising miracle stocks and unattended bank accounts in Nigeria. Bimbots deploy photos of gorgeous women to hawk work-from-home job ploys and illegal pharmaceuticals.

Now come socialbots. These automated charlatans are programmed to tweet and retweet. They have quirks, life histories and the gift of gab. Many of them have built-in databases of current events, so they can piece together phrases that seem relevant to their target audience. They have sleep-wake cycles so their fakery is more convincing, making them less prone to repetitive patterns that flag them as mere programs. Some have even been souped up by so-called persona management software, which makes them seem more real by adding matching Facebook, Reddit or Foursquare accounts, giving them an online footprint over time as they amass friends and like-minded followers.

Researchers say this new breed of bots is being designed not just with greater sophistication but also with grander goals: to sway elections, to influence the stock market, to attack governments, even to flirt with people and one another. [2]

The above report concludes by noting that of the 500 million Twitter accounts, “some researchers estimate that only 35 percent of the average Twitter user’s followers are real people,” that “more than half of Internet traffic already comes from nonhuman sources like bots or other types of algorithms,” and that in “two years, about 10 percent of the activity occurring on social online networks will be masquerading bots.” [3]

More than the sheer quantity of socialbots invading every dimension of digital life, what is significant about this report is something left undisclosed: that bots are integral to the question of social identity. Not simply in the sense of leveraging perceptions, desires, and imagination to move in certain directions, but integral in the fuller sense of the term–that, perhaps, we have already succeeded in moving beyond the point of real-time familiarity with the presence of bots to actually being part-human/part-bot. In this case, what may be truly uncanny is our own online subjectivity, occupying as it does an entirely unstable boundary between lines of code and lines of skin. When bots come inside us, pacing our existence with their artificial “sleep-wake cycles,” mirroring our moods with “persona management software,” and creating networks of their own consisting of “friends and like-minded followers,” we can recognize that we have become the first and best of all the posthuman subjects, breathing in lines of code as the real source of digital energy that allows us finally to come alive as the flesh and blood of socialbots.

More than half a century ago, the American psychologist B.F. Skinner correctly (and in fact enthusiastically) endorsed a future society based on a relatively primitive theory of “radical behaviorism.” Setting aside enduring questions concerning the origin and meaning of introspection and unconscious desires, Skinner suggested an alternative form of human subjectivity constructed on the strictly behavioral foundations of “operant conditioning.” For Skinner, what matters is the quantified self: the observable self that acts in and upon the world on the entirely predictable basis of social reinforcements–some negative (punishment), others positive (rewards), with yet still others more neutral in their role as reinforcements. Reducing the diverse spectrum of individual human experience–lingering desires, upstart passions of the heart, long-buried psychological repressions, mixed motives–to the observable behavior of a subject that is postulated as acting on the basis of a social protocol of rewards and punishments (i.e., avoiding that which hurts, privileging that which rewards), Skinner’s vision held that that which was true in the laboratory with respect to the behavior of rats and pigeons was equally true of social behavior in general. That is, human behavior could actually be modified by the application of the soft power of a token economy, providing actual, and sometimes symbolic, rewards as an inducement for certain privileged forms of social behavior, while gradually extinguishing undesirable behavior by the hard power of pain and punishment. Stated in its essential elements, Skinner’s vision of social behavior–“operant conditioning”–provided a way of transcending millennia of concern with that strange and definitely precarious mixture of animality, intellectuality, and emotion that is the nature of being human in favor of an ecstatic theory of remaking humans by the organized application of a radically new technology of human subjectivity–radical behaviorism. 
In this perhaps pragmatic and certainly deeply visionary theory of the human condition, there was always a twofold ontological assumption: first, that persistent concerns with supposed epiphenomena such as psychic blockages, unknown motives, and interior sensibility could, and should be, dismissed in favor of a technological vision of subjectivity open to its surrounding environment, deeply influenced by its actions and responding accordingly; and second, that the “self” of radical behaviorism could be socially modified, indeed socially engineered, by the methodical application of the principles of operant conditioning. Curiously, while at the intellectual level the technological utopia that Skinner envisioned in his books Walden Two and Beyond Freedom and Dignity was itself surpassed by theoretical debate about the rise and decline of all the referentials of truth, power, and sexuality, Skinner’s prophetic vision of a social self capable of being modified by the soft power of social reinforcements–particularly the “token economy” of radical behaviorism–has finally found its key public expression in the once and future society of socialbots. Not simply a new technology of communication perfectly fit for the age of social media, socialbots are, in their essence, something very different, namely a technology for modifying human subjectivity that is simultaneously political and neurological. Political because socialbots embody how the ideology of operant conditioning is inserted into the deepest recesses of the data mind–the externalized, circulating consciousness characteristic of the quantified self of social media. 
Neurological because socialbots are the primary cybernetic agents of “cognitive hacking,” that complex process whereby the key driver of the newly emergent attention economy–perceptual attention–is encouraged to turn in certain directions, sometimes by positive reinforcers operating in the language of seduction and, at other times, by negative reinforcers functioning in terms of fear and anxiety. When swarms of socialbots attach themselves to the data mind–flirting, chatting, spying, tracking–we can clearly recognize that we are already living in a society of soft power and modulated violence.

Indeed, one of B.F. Skinner’s most celebrated instruments for test-driving the theory of operant conditioning was the “Skinner Box,” a closed, programmable environment whereby test subjects–including laboratory rats and pigeons–could be probed, reinforced, and, if necessary, punished as a way of calibrating, and thus engineering, the protocols of effective social modification. Now, the fact that Skinner’s theory of operant conditioning–with its stripped-down assessment of human behavior, its studious attention to the best practices of a token economy, and its transcendent vision of behavioral modification guided by experts–was seemingly displaced by theoretical attention to the death of the subject, from poststructuralism and postmodernism to posthumanism and, most recently, by new materialist theories focused on the complexity of objects as life-forms, does not necessarily mean that operant conditioning, with its profoundly eschatological vision of behavioral modification, was lost to the world of emergent technologies. In one of those superb ironies of cultural reflection, the Skinner Box could be quickly left behind as so much detritus on the way to posthuman culture precisely because the theory of operant conditioning was always waiting patiently and persistently for its technological realization by a creative form of new media–in fact, social media–that could instantly and decisively translate the anticipatory vision of soft power, token economy, and reinforcement theory that was the Skinner Box into the generalized network of socialbots within which we find ourselves enmeshed today. 
In this case, when socialbots take active possession of social media, when complex patterns of human neurology expressed by the ablated consciousness of the data mind are gradually shaped, indeed modified, in their observable outcomes by bots that chat, make suggestions, anticipate connections, manifest seemingly total recall, and facilitate the attainment of desirable goals (better health, greater intelligence, early warnings), then, at that point, the Skinner Box is no longer an object outside ourselves but something else entirely–a technology of programmable subjectivity rendered part-flesh/part-data. Today, it is not so much that we are mingling with physical robots in ways anticipated by cinematic and science-fiction visions of the technological future, but that clear, discernible borders between immaterial (social) robots and ourselves have been eliminated, so that it is difficult to know with any certainty whether a friend or a commentator on social media is human or the sensitively attuned response of an artificial life-form–a socialbot–that can know us so intimately because, in daring to become fully digital–being social media–we may have inadvertently entered the long-anticipated world of B.F. Skinner redux. Replete with swarms of bots–socialbots, neurobots, spybots, junkbots, hackerbots–the ablated Skinner Box that is the universe of contemporary social media has this common feature: expert systems in the form of artificial life-forms function ceaselessly to modify, cajole, influence, and channel the privileged psychic targets of human perception and social attention in the token economy of network culture, with its powerful technologies of soft facilitation and its equally harsh technologies of command, including surveillance and tracking. 
Happily taking up neurological residence in the data mind, armies of neurobots, sometimes acting at the behest of corporate capitalism or perhaps under governmental supervision, are, in effect, the way in which power speaks today–otherwise invisible databases that seduce, inform, link, and recall as leading spearheads of evocative communication between robots and humans.

With the sheer invisibility of socialbots, the fact that the first, fateful encounter between robots and ourselves occurs in the innocuous, immaterial form of lines of code may intimate the elimination of the pervasive anxiety surrounding the “uncanny valley”–that psychic moment identified by robotics engineers when robots become effectively indistinguishable from human presences. In this case, the uncanny valley of robotics engineering lore may well constitute an ancient, psychological reinforcer supporting the pattern-maintenance of established boundary lines long viewed as necessary to the self-preservation of the human species. While lines of code never rise to the psychological prominence of increasingly human-like mechanical robots, they do enjoy an important technological attribute, namely encouraging the human species, individually and collectively, to drop its traditional psychological aversion to mixing robotic and human species-identity, which thus increases the vulnerability of the human species to quick insertions of the most fundamental elements of robotic consciousness–ambient awareness, distributive consciousness, circuits of fast connectivity, and a fully externalized nervous system–into the emergent infrastructure of the digital brain. Definitely not openly hegemonic and certainly not operating in the language of domination, the first encounter of neurobots and humans produces individuals who actually begin to see, think, and feel like the socialbots of their wildest dreams.

Psychic Robots

A recent BBC report, titled “Robotic Prison Wardens to Patrol South Korean Prison,” describes a prototype demonstration of prison guard robots that would monitor inmates for “risky behavior,” specifically suicidal tendencies and violent impulses:

Professor Lee Baik-Chu of Kyonggi University, who led the design process, said that robots would alert human guards if they discovered a problem:

“As we’re almost done with creating its key operating system, we are now working on refining its details to make it look more friendly to inmates,” the professor told the Yonhap news agency. [4]

Quickly migrating beyond the use of robots to physically guard prisoners, this prototype project represents that moment when robots first began to evolve beyond their purely mechanical function as prison guards to the more complex task of carrying out psychiatric assessments of the behavioral patterns of prison inmates. While it could be expected that robots would first enter prisons in the traditional roles of surveillance and control, the three robots involved in the demonstration project have a very different task: namely, to mingle among a captive population as only a five-foot robot can do and, while “looking more friendly to the inmates,” conduct an active search for signs of suicidal and violent behavior. Not so much, then, a demonstration concerning the feasibility of using robots in prison environments, but actually an experiment with very general applications for perfecting an operating system allowing robots to conduct complex psychiatric examinations of prisoners. At this point, we move beyond cinematic images of prisons of the future with robotic guards in towers carefully monitoring prison populations to that moment when technology actively penetrates the human psyche in search of “risky behavior.” Here, robots are no longer mechanical devices, but artificial psychiatrists equipped with 3D vision, motion detection, and programmed operating systems, all aimed at discerning visible signs of melancholy, rage, despair, desperation, fatigue, hopelessness.

While it is not evident from media reports how robots are to fulfill complex psychiatric examinations–other than the mention of the demonstration robots monitoring abrupt changes in the behavior of individual prisoners–the intention is clear: for prison guard robots to cross the boundary between surveillance from the outside of captive bodies to internal explorations of psychic behavior. Guided by a prescriptive doctrine concerning the parameters of “risky behavior,” what is really being tested here is robots as avatars of the new normal, conducting frequent visual examinations of a chosen, and necessarily captive, population in order to determine which bodies fall inside and outside of the normative intelligibility prescribed by an artificially determined ethics of “risky behavior.” In this case, it is the responsibility of those bodies placed under surveillance to provide no outward signs of either visible dissent (violence) or refusal of the state’s power over life (suicide). While at first glance it might seem that guard robots are not programmed with the levels of artificial intelligence and, perhaps, artificial affectivity necessary to detect otherwise invisible signs of powerful emotions internal to the psychic life of prisoners, what may be brought into political presence here is an entirely new conception concerning how power will operate in the robotic future. Not so much the great referentials of power over death or even power over life, but power over visible expressions of human affectivity–a form of robotic control that assumes that the psyche is not a form of internal being but a kind of external doing; that is, the psyche is not something we have but something that we do. 
In this scenario, what is important about the human psyche for purposes of the society of control is less the complexities of hidden intentions–the cultural acedia associated with feelings of melancholy, resentments that activate rage, total powerlessness that motivates despair–than those visible, outward manifestations of the rebellious psyche, that moment when the bodily psyche moves from the long, silent gestation of hidden intentionality to overt declarations of its intention to act, whether through violence or suicide. At that point, at least according to this prototype demonstration, robot guards will be waiting along the watchtower of the society of control, quickly targeting immanent signs of psychic rebellions against the order of normative intelligibility, relaying warnings to central command, all the while standing by for further instructions.

I, Robot Land

In September 1950, Incheon, Korea was the site of a daring, and justifiably famous, US invasion at the height of the Korean War, which aimed at capturing the capital city of Seoul and thereby decisively cutting off vital supply and communication lines to North Korean forces who were engaged in besieging UN forces further south within the Pusan perimeter. Identified as “Operation Chromite,” conceived by General Douglas MacArthur and carried out by the 1st and 5th Marine Divisions, the invasion force successfully shifted the momentum of hostilities, eventually resulting in the present-day demarcations of North and South Korea.

Possibly as an unconscious tribute to the above invasion, Incheon has been selected as the site of a second invasion, this time not by US Marines charging ashore, but by astral landing craft carrying robots from the past, present, and future. The invasion force consists of a multitude of creative robotic engineers, futurist designers, and marketing experts in entertainment spectacles, all aimed at successfully establishing, by 2016, a cutting-edge theme park called Robot Land, which will consist of robotics engineering displays, commercial applications, and futurist-oriented research facilities depicting the future of robot society as well as possibilities for “harmonious co-existence” among “robots, humans, and nature.” [5] Not so much a Disneyworld for robots, since that would entail focusing on a symbolically rich, but past-oriented, narrative of mass entertainment spectacles, Robot Land has a very different objective. Conceived of as a “history of the future,” Robot Land’s guiding ambition is to construct a theme park depicting a future robotic society that, while visually honoring the history of robotics engineering as well as visions of robotic society originating in science fiction and Hollywood cinema, actively and very directly engages in the project of designing the robotic future. Here, the robotic future anticipated by business, engineering, cinema, comic books, and literature will be paralleled by state-of-the-art research facilities aimed at both confirming and promoting Korea’s creative leadership in the areas of robotic design, fabrication, and engineering. Imagined as a gateway to the future rather than a spectacle of the past, Robot Land has chromite at its techno-visionary core, anticipating a hard-driving futurist invasion of global markets and perhaps of generalized cultural imagination as well by the Korean robotic imaginary. 
Part theme park (featuring a gigantic roller coaster that dangles off the arm of a gigantic robot before plunging into the water below; a robotic aquarium filled with robotic fish, including lobsters and jellyfish; and merry-go-rounds for riding robot animals), part futurist robot laboratory, and part “industrial promotion facility,” Robot Land takes seriously its mission of intensifying the “fun and fantasy” in the robotic future. There are, of course, necessary, indeed inevitable, exceptions as in any story concerning the unfolding (artificial) future. In the midst of this intended celebration of robotic fantasy, there are also plans underway to demonstrate “how robots may be used in 2030, particularly when it comes to assisting seniors with housework, medical check-ups and dementia prevention.” [6] There are also psychologically and economically regional geo-national sensibilities at play. In this case, no sooner had the Japanese constructed two colossal robot statues (Tetsujin in Kobe and Gundam in Odaiba) than Korea’s Robot Land was designed to trump Japan’s claim to supremacy in the area of gigantic robotic spectacles, with a strikingly colossal 364-foot statue of Taekwon V (Voltar the Invincible). [7] Here, persistent and longstanding tensions between Korea and Japan find their most recent manifestation in the twenty-first century in the delirious form of robotic fiction.

Considered as “a history of the future,” there is at least one significant, perhaps terminal challenge to the overall logic of the project that is hinted at by the very naming of the theme park–Robot Land. Possibly conceived as a Korean alternative to the “magic kingdom” of California’s Disneyland Park where “you can sail with pirates, explore exotic jungles, meet fairy-tale princesses, dive under the ocean and rocket through the stars–all in the same day,” [8] Robot Land offers its own alternative vision of a future distinct from the Disneyland prescription with its “eight extravagantly themed lands–Main Street, U.S.A., Tomorrowland, Fantasyland, Mickey’s Toontown, Frontierland, Critter Country, New Orleans Square and Adventureland.” [9] While Disneyland seduces by translating the phantasmatic ideology of the American dream into nostalgic spectacles, Robot Land delivers a harder message: that robots are here to stay, whether taking the form of lobsters and jellyfish, assuming the entertainment guise of robotic animals gathered together for a fun carousel ride, inflating to the gigantic proportions of apocalyptic cinema like the massive statue of Taekwon V, or, more prosaically (but pervasively), spreading out their established robotic hardware as the real working infrastructure of global automobile manufacturing or, for that matter, as futurist technological prosthetics for the sick, the aged, the demented.

While Jean Baudrillard might once have noted that the seduction of Disneyland is its convincing pretense that its fantastic simulations are an escape from the real world rather than what it really is–a perfect real-time model of soft power, modulated violence, and crowd-management–Robot Land is the technological order after the age of simulacra. Here, there are technologically enabled thrills–roller coasters dangling from the outstretched arms of massive robots–mesmerizing robotic spectacles, and spectacular feats of imagination, but no order of simulacra, no sense, that is, that the new order of robotics is anything other than what it really is: a key component of the Korean version of the power of the new real. With its mixture of entertainment spectacles, industrial promotions, and a graduate school in robotics, Robot Land is a place where fun illusions and delirious spectacles are always underwritten by a very visible undercurrent of dead-eyed economic seriousness of purpose and carefully orchestrated research visions of (certain) robotic futures. This is, of course, its biggest problem–that the future of robotics probably will have nothing to do with any territorial referent; certainly, it will not be a “land” in any physical or even symbolic meaning of the term, but will most definitely constitute a new order of time: robotic time. In this case, Robot Time, rather than Robot Land, would probably be a more accurate description of the new epoch ushered in by all futurist robotic designs, from mass entertainment spectacles to the complex artificial sensors working the assembly lines of the manufacturing world. 
When the future of robotics, one already anticipated by contemporary developments, turns away from its ready-to-hand terrestrial manifestations–artificial fish, mega-statues, humanoid machines–and enters the databases of globalized networked culture as their indispensable artificial intelligence and machine-to-machine and machine-to-human communication, then we will recognize that we are not following a technological pathway that leads to a certain place (Robot Land) but moving toward a certain (robotic) order of database time that is networked, communicative, neutral. As with all things having to do with theme parks, actually expressing such a fundamental eschatological rupture in the order of things–the displacement in importance of visible space by the invisibilities of (database) time–is challenging. Such a challenge is probably why, although it takes momentary refuge in the comfortable referential illusion of Robot Land, this is one theme park that will always be known for the hauntological traces of its essential missing element–the once and future epoch of Database Robot Time. There are definitely no “magical kingdoms,” no “fairy-tale princesses,” no pirates–just a theme park on the edge of the rising time of the East that announces, for all the psychic exuberance of its robotic fossils, from fish and statues to carousel animals, that this is one tomorrowland that will not be able to camouflage for much longer what is really taking place in this second invasion of Incheon: the newly emergent order of the time of the robots, with humans kept on standby as their necessary prosthetics.

Database Robots

What happens when the evolutionary destiny of robots suddenly splits into two paths, with one pathway continuing that which has long been anticipated by scientific visionaries, cinematic scenarios, and science fiction–namely, the triumphant rise of a new robotic epoch invested with technological inevitability as successors to a putatively declining human species–and an alternative pathway in which robots abruptly shed their mechanical skin, upgrade their artificial intelligence, and adopt the remote senses of network culture as their very own interface with the surrounding world of human flesh? What, then, is the future of robotics: sovereign technological automatons or database robots?

Projective thought focused on the first pathway has long been the subject matter of technological futurists. In his brilliant book, Mind Children, the technological futurist Hans Moravec establishes clear-cut timelines tracing the history of robots, from their first appearance as mechanical prosthetics servicing human needs to that quickly approaching singularity moment (approximately 2050) in which robots equipped with advanced artificial intelligence, articulated limbs, and full-array sensory data inputs are projected to become an autonomous species, not only thinking for themselves but, more importantly, making sovereign decisions concerning what needs to be done in the interests of the preservation of the (robotic) species. Anticipating that day of fatal reckoning in which robots, as the product of human imagination, just might be inclined to follow familiar (human) pathways of revenge-taking for the gift of (robotic) life which they can never pay back, Isaac Asimov, in his celebrated book I, Robot, is ethically preemptive in anticipating a future race of fully autonomous robots that are invested, outside their conscious awareness, with the guiding moral edict, first and foremost, to do no harm to human beings. That Asimov’s anticipatory ethics of robotic behavior would be quickly shrugged off by robots exhibiting all the behavioral, emotional, and moral traits of their human progenitors is, of course, the privileged focus of the science fiction writer, Bruce Sterling, who in his cult classic Crystal Express eloquently and passionately scripts a future war spanning many galaxies–a war in which a posthuman faction known as “shapers” and an opposing tribe identified as “mechanists” engage in protracted combat in which the key issues at stake are as profoundly ontological as they are fiercely political.

Culturally, we are already well aware of the history of the (robotic) future that will be traced by the first pathway. Like a form of generalized anticipatory consciousness, many years of cinematic history have provided dramatic images of the multiple permutations, internal and external, that will likely follow the sovereign regime of robotic logic. While most cinematic encounters between robots and humans are ultimately settled by spectacles of violent battles, a few actually hint at a “harmonious co-existence of robots, humans and nature” [10] with the remainder often concluding with unsettled paradoxes, unfinished narratives, and promises only made to be broken. For example, the final, anguished speech by Roy Batty, one of the pursued replicants in Blade Runner, powerfully and evocatively captures both the anguished human will to live and a courageous replicant’s pride in the star bursts he has witnessed, distant planets explored, and inexpressible awe before the vastness of deep space. When Albert Camus first articulated the sense of the absurd in The Myth of Sisyphus as consisting of an all-too-human demand for meaning to which the universe answers with indifferent silence, he probably did not have in mind a future time in which the hunted-down replicants of Blade Runner would be commonly haunted by an existential sense of the robotic absurd, that moment in which genuine anguish by replicants over their programmed termination dates is met with the silence of nature’s indifference.
The blending of technological dynamism, real power struggles, and stubborn, complex ethical entanglements that will probably constitute the material reality of the future life of the robotic mind is explored everywhere in the history of cinema, including such classics of visual imagination as 2001: A Space Odyssey, Alien, Metropolis, Westworld, The Day the Earth Stood Still, Star Wars, Star Trek, RoboCop, Terminator 2: Judgment Day, and, of course, that poignant narrative of human senescence and robotic ingenuity–WALL-E. Like a cinematically driven society eager to be haunted by its technological future, and certainly capable of quick ethical and political adaptations to the demands of the (robotic) day, we may have already war-gamed the future, played and replayed it, spliced and remixed the fractures, bifurcations, and liminalities likely to follow the Judgment Day of the technological future. In this case, it is as if the first pathway to the future–the often-told story of human hubris and cyber-power–has already taken place in our collective imagination, leaving us now to be fully absorbed in studying in advance the psychic entrails of that fateful collision of the human species with its emergent technological successor.

However, with robotic life, as much as with human life, only opposites are ever true. Consequently, if there can exist such a rich cinematic and literary vernacular surrounding the robotic future, that might be because that future may have already reached its furthest limit and already begun to move in reverse direction, not necessarily by way of a spectacular implosion but by a silent yet discernible shift in robotic intelligibility. Perhaps robots themselves have grown tired of their rehearsed cinematic portrayals, shifting direction away from the spectacle of powerful AI machines to the more prosaic, more pervasive, certainly more perverse, and genuinely more futurist enactment of the approaching world of database robots. That is what the opening stories in this narrative of robots trekking across the uncanny valley are about: not so much a predictable future of human/robot deep space encounters, but a more complex story of database robots expressed variously as neurobots, psychic robots, and avatars of robot time. In this case, robots have already fully penetrated the human sensorium, from hijacking the process of automated labor to relentlessly hacking the senses.

Beyond visions of technological apocalypse featuring predatory struggles between space-bearing robots and instinctually-driven humans, the migration of robots into the minutiae of social life has quickly evolved from multi-axis industrial robots–automatically controlled, multipurpose and functionally reprogrammable–specializing in the automation of labor to swarms of cyber-bots, fluid networks of AI agents privileging the automation of cognition. With a rapid increase in the world robot population (300,000 in 2000 to 18,000,000 in 2011), [11] industrial robots have swiftly been integrated into manufacturing processes, particularly those reprogrammable around automated labor that promises to deliver predictability and reliability, backed up by “high endurance, speed, and precision.” [12] An increasingly technical future, therefore, in which the compulsory labor of armies of specialized robots quickly displaces laboring human subjects in many work processes: welding, shipbuilding, painting, construction, assembly, packaging, and palletizing. Here, the overall trajectory follows the traditional path of economic development, this time with robots beginning in low-skill, sometimes dangerous jobs that can be done automatically and remotely, and thereafter moving up the skill-set ladder of career achievement to assume high-skill, hyper-cognitive positions in network culture. That, at least, is the overall technological ambition, marred sometimes by disquieting reports such as the following account concerning what happened when robots went wild in a General Motors factory built on a field of robotic dreams:

In the 1980s, the General Motors Corporation spent upwards of $40 billion on new technologies, many hundreds of millions on robots. Unfortunately, the company did not spend nearly enough on understanding the systems and processes that the robots were supposed to revolutionize or on the people who were to maintain and operate them. The GM plant in Hamtramck, Michigan, was supposed to be a showcase for the company. Instead, by 1988 it was the site of some of the worst in technological utopianism. Robots on the line painted each other rather than the car bodies passing by; robots occasionally went out of control and smashed into passing vehicles; a robot designed to install windshields was found systematically smashing them. Once, when a robot ceased working, technicians did not know how to fix it. A hurried call to the manufacturer brought a technician on the next plane. He looked at the robot, pushed the “Reset” button, and the machine was once again operational. [13]

While computer malfunctions in a manufacturing plant can sometimes be solved by simply pushing the reset button, what happens now when computer glitches affecting the core system logic of the externalized nervous system take down key areas of social life, including banking, health, identity, and warfare? Without sufficient evidence concerning the consequences of the wholesale transfer of the human sensorium to electronic databases controlled, for the most part, by machine-to-machine (M2M) communication and endlessly circuited by data robots serving as synapses of the ablated world of cognition, finance, medicine, politics, and defence, contemporary technological society has quickly rushed into outsourcing itself, literally parceling out human identity into data clouds, from digital storage of personal health information to complex networks for circulating financial data. When entire computer systems crash, sometimes as a result of overload stress and, at other times, for reasons enigmatic even to systems engineers, the result is no longer an unexpected disruption in assembly lines, but the sudden data eclipse of core areas of externalized human cognition. When data goes dark, it is as if the body has suddenly been divested of its key senses–it is the jettisoning of externalized memory, the disappearance of electronic profiles of the extruded financial self, the interruption of electronic information concerning medically tracked subjects, or the loss of those recombinant, digital orifices of the eye, ear, taste, smell, and touch in the age of the rapidly dematerialized body. While many cautionary notes have been struck concerning the inevitable fallout from a future populated by the fully ablated self, skinned with an externalized nervous system, and possessing an order of (digital) intelligibility modeled after extruded consciousness, only now is it actually possible to measure the consequential results of this basic rupture of human subjectivity.
Lost in clouds of data, communicatively overexposed, our identities outsourced to fast digital algorithms, our autobiographies uploaded as data streams always offshore to the vicissitudes of individual experience: the real world of technology, particularly robotic technology, reveals that we may have made a Faustian bargain with the will to technology. Whether through generalized cultural panic over the sheer speed of technological change, or perhaps an equally shared willingness to ride the whirlwind of a society based on the literal evacuation of human subjectivity, we have committed to a future of the split subject: one part a fatal remainder of effectively powerless human senses, and the other a digitally enabled universe of substitute senses. In the most elemental meaning of the term, the technological future that spreads out from this fatal split of human subjectivity cannot fail to be profoundly and decidedly uncanny. While robots, technically forearmed with indifference, coldness, and rationality, will probably at some point and in some measure successfully trek across the uncanny valley, the human response to the growing presence of the (technological) uncanny in contemporary affairs is far less certain. For example, consider the following reports from the uncanny valley that is daily life in the shadow of robots.

Uncanny Bodies

A recent newspaper report evocatively captured the feeling of the uncanny in the robotic future. Appropriately titled “SociBot: the ‘social robot’ that knows how you feel,” the report focused on the underlying element of uncertainty that is often a sure sign of the presence of the uncanny in human affairs:

If Skype and FaceTime aren’t giving you enough of the human touch, you could soon be talking face to rubbery face with your loved ones, thanks to SociBot, a creepy “social robot” that can imitate your friends.

“It’s like having a real presence in the room,” says Nic Carey, research coordinator at Engineered Arts, the Cornish company behind the device. “You simply upload a static photo of the face you want it to mimic and our software does the rest, animating the features down to the subtle twitches and eyes that follow you around the room.”

The company sees its potential in shopping centers and theme parks, airports and tourist information centers, “anywhere requiring personalized content delivered with a human touch,” as well as potential security applications, given that the SociBot can track up to 12 people simultaneously, even in a crowd.

“We are looking for platforms that can be really emotional, investigating how robots can interact with people on multiple levels.” [14]

In his classic essay “The ‘Uncanny’,” written in 1919 and perhaps itself deeply symptomatic of the profound uncertainties that gripped European culture post-WWI, Sigmund Freud approached the question of the uncanny on the basis of an immediate refusal. [15] For Freud, the uncanny–unheimlich–does not denote a kind of fright associated with the “new and unfamiliar,” but something else–still indeterminate, still multiple in its appearances and elusive in its origins. Far from being “new and unfamiliar,” the uncanny for Freud represented something more enduring in the human psyche, “something familiar and old-established in the mind and which has become alienated from it only through the process of repression”–namely, the continuing yet repressed presence of “animism, magic and sorcery” in the unfolding story of the psyche. For Freud, scenes that evoked the feeling of the uncanny were remarkably diverse: “dismembered limbs, a severed head, a hand cut off at the wrist, as in a fairy tale of Hauff’s”; “feet which dance by themselves as in the book by Schaeffer”; the story of “The Sand-Man” in Hoffmann’s Nachtstücken, with its tale of the Sand-Man who tears out children’s eyes and the doll Olympia, who occupies an unstable boundary between a dead automaton and a living erotic subject; the always enigmatic appearance of the double; the fear of being buried alive; and, of course, the constant fear of castration. For Freud, whatever the particular animus that evokes feelings of the uncanny, the origin remains the same–the return of that which has been repressed not only by prohibitions surrounding “animism, magic and sorcery,” but also by episodic fractures, unexpected breaks in the violence that human subjectivity does to itself to reduce to psychic invisibility the complexities of sexuality and desire.

Now that we live almost one hundred years after Freud’s initial interpretation of the origins of the uncanny, does the emergence of a new robotic technology such as SociBot have anything to tell us about the meaning of the uncanny in posthuman culture? At first glance, SociBot represents a psychic continuation of that which was alluded to by Freud–a contemporary technological manifestation of the feared figure of the double as “something familiar and old-established in the mind.” For Freud, what is truly uncanny about the figure of the double is not its apparent meaning as mimesis, but its dual signification as simultaneously being “an assurance of immortality” and an “uncanny harbinger of death.” That is, in fact, the essence of SociBot: an assurance of (digital) immortality, with its ability to transform a static photo into an animated face, complete with twitches, blushes, and possibly sighs; but also a fateful harbinger of death, with its equally uncanny ability to transform living human vision into what Paul Virilio once described as cold-eyed “machine vision”–machine-to-human communication with a perfectly animated software face tracking its human interlocutors, twelve test subjects at a time. In this case, like all robots, SociBot certainly gives off tangible hints of immortality–upload a photo of yourself, a friend, an acquaintance, and they are destined for eternal digital life. But, as with all visual representations come alive, it is also a possible harbinger of death, provoking feelings of human dispensability–the sense that tangible human presence can be quickly rendered fully precarious by its robotic simulacra. Interestingly, while Freud began his story of the uncanny with a reflection upon the psychic anxiety provoked by the figure of the Sandman, who robs children of their eyes, SociBot may well anticipate death in another way, this time the death of human vision and its substitution by a form of vivified robotic vision.
Here, SociBot could be viewed as providing, however unintentionally, perhaps the first preliminary glimpse of the psychic theatre of the Sandman in a twenty-first century digital device. With this addition: SociBot resembles the myth of the Sandman in a second important manner. Not only, like the Sandman, does this technology provoke enduring, though deeply subliminal, human anxieties over the death of vision, but it also draws into cultural presence, once again, that strange figure of the doll Olympia with its subtle equivocations between dead automaton and living erotic subject. In this case, the particular fascination of SociBot, with its almost magical and certainly (technologically) occult ability to animate “features down to the subtle twitches and eyes that follow you around the room,” does not solely reside in its animation of death, but in its manifestation of a world where objects come alive, with eyes that track you, with lips that speak, and facial features that perfectly mimic their human progenitors. Neither death by automaton nor life by doll-like construction, but something else: this is one robotic technology that derives its sense of the uncanny by always occupying an unstable boundary between life and death, software animation and real-life visual conversation and tracking. In essence, the uncanniness of SociBot may have to do with the fact that it is a brilliant example of the blended objects–part-simulacrum/part-database–that will increasingly come to occupy the posthuman imagination. Curiously, while it might be tempting to limit the story of SociBot, like the mythic tale of the Sandman before it, to stories of the death of human vision or even to the fully ambivalent nature of blended objects, from dolls to robots, there is possibly something even more uncanny at play here.
It might be recalled that Freud controversially concluded his interpretation of the uncanny with his own psychoanalytical insights concerning the unheimlich place as the uncanniness of “female genital organs”: “This unheimlich place, however, is the entrance to the former Heim (home) of all human beings, to the place where each one of us lived once upon a time and in the beginning.” [16] While making no prejudgment on the genital assignment of robotic technology, it might be said, however, that the story of SociBot carries a haunting and perhaps truly uncanny premonition of a greater technological homecoming in which we are, perhaps unwittingly and unwillingly, fully involved. In this interpretation, could the uncanniness of SociBot have to do with its suggestion that we are now in the presence of technologies representing, in their essence, possibilities for a second (digital) rebirth? The uncanniness of SociBot, therefore, may well inhere in its capacity to practically realize the once and future destiny of robots as born-again technologies.

Junk Robots in the Mojave Desert: Year 2040

What happens when no tech meets high tech deep in the desert of California?

Just up the road from Barstow and far away from the crowds of Joshua Tree, there’s a junkyard where robots go to die. It consists of one hundred or so cargo-sized steel containers packed tight with the decay of robotic remains. Everything is there: a once scary DARPA-era animal robot weighing in at 250 pounds looks forlorn bundled in a shroud of net; early cobots and autonomous robots can be seen huddled together in one of the containers waiting to be reimagined; broken-down industrial robots that have reached a point of total (mechanical) exhaustion from repetitive stress injuries; abandoned self-organizing drone hives left to slowly disassemble in the desert air; swarms of discarded mini-robots–butterflies, ants and bees; mech/cyb(ernetic) corpses of robots made in the images of attack dogs, cheetahs, and pack animals, all finally untethered from reality and left to rust in the Mojave desert. Most of the valuable sensors seem to be missing but what remains is the skeleton of our robotic past. The only sound heard is the rustle of scattered papers drifting here and there with scribbled lines of start-up algorithmic codes. The only visual is the striking contrast of the sharp-lined geometry of those steel compartments against the soft liquid flows of the desert, land, and sky. The overall aesthetic effect of this robot junkyard is a curious mixture of the desert sublime with the spectral mountains in the background and dusty scrublands close to the watching eye, mixed with a lingering sense of technological desolation.

What’s most interesting about this robot junkyard–interesting, that is, in addition to its lonely beauty as a tarnished symbol of (technical) dreams not realized and (robotic) hopes not achieved–is that it has quickly proven to be a magnetic force attractor for a growing compound of artists, writers, and disillusioned computer engineers. Like a GPS positional tag alert on full open, they come from seemingly everywhere. Certainly from off-grid art communities on the plains of East Texas, some transiting from corporate startups in Silicon Valley, a few drifting in from SF, probably attracted by the tangible scent of a new tech-culture scene; there are even reports of artists drifting in from around the global net–Korean robo-hackers, Japanese database sorcerers, Bulgarian anti-coders, and European networkers–taking up desert-style habitation rights in the midst of the robot junkyard. It’s a place that some have nicknamed RoVent–a site where heaps of robots can be retrieved, repurposed, reimagined and reinvented.

It is almost as if there is a bit of telepathy at play in this strange conjuration of the artistic imagination and robots in transit to rust. Instinctively breaking with the well-scripted trajectory of robotic engineers who have traditionally sought to make robots more and more human-like, these pioneers seem to prefer the exact opposite. Curiously, they commonly seem to want to release the spirit of the robots, junkyard or not, to find their own technological essence. What is the soul of a data hive? What is the spirit of an industrial drone? What is the essence of a junkyard robotic attack dog? What makes a beautiful–though now discarded–robotic butterfly such an evocative expression of vitalism? Strangely enough, it is as if something like a Japanese-inspired spirit of Shinto, where objects are held to possess animate qualities and vital spirits, has quietly descended on this robot junkyard with its detritus of technical waste and surplus of artistic imagination.

The results of this meeting of supposedly dead technology and quintessentially live artists are as inspiring as they are unexpected. For example, one artistic display consists simply of a quiet meditation space where some of the junkyard robots are gathered in a rough circle, similar to a traditional prayer circle or the spatial arrangement of an ancient dirge, all the better to find their inner moe, or, at minimum, to reflect on that elusive point in their individual robotic work histories where the mechanical suddenly becomes the AI, the vital, the controlling intelligence and then, just as quickly, slips on backwards into the pre-mechanical order of the junkyard burial site.

In the darkness of the desert night, there’s another artistic site that is organized as a funeral pyre for dead robots. Without much in the way of wood around for stoking the flames, these artists have paid a nocturnal visit to the ruins of the CIA-funded Project Suntan, close to a super-secret aviation project, where a barrel of abandoned liquid hydrogen has been retrieved for releasing the night-time spirits of (robot) mourning. The funeral pyre should be a somber place, but in reality it’s not at all. Maybe it is simply the visual, and thus emotional, impact of a full-frenzy funeral pyre, fueled by the remains of secret experiments in high-altitude aviation fuels, sparking up the desert air. Or, then again, perhaps it is something different, something more decidedly liminal and definitely elusive. In this case, when robots are stacked on a burning funeral pyre, it is very much like a ritualistic final consummation, that point where the visibly material melts down into the dreamy immaterial, and where even the scientifically contrived mechanical skins, electronic circuitry, and articulated limbs finally discover that their final destiny all along was an end-of-the-world return to the degree-zero of flickering ashes. The concluding ceremony for this newly invented Ash Wednesday for dead robots is always the same: a meticulous search by the gathered artists for the final material remains of the robots, which are then just as carefully buried in the dirt from which they originally emerged. Ironically, in the liturgy of the funeral pyre, there is a final fulfillment of the utopian–though perhaps misguided–aspiration common, it seems, to many robotic engineers, namely a haunting repetition, in robotic form, of the human life cycle of birth, growth, and senescence.

However, if the stoked inferno of the funeral pyre for abandoned robots sometimes assumes the moral hues of an anthropomorphic version of (robotic) imagination, the same is most definitely not the case at a third site where a feverish outburst of the artistic imagination–splicers, mixers, recombinants, recoders–plies its trade anew by remaking this treasure-load of robot technologies. Here, strange new configurations emerge from creative remixes of self-organized drone hives and fluttering robot butterflies. When (dis)articulated robot pack animals, some missing a leg or two, are repaired with extra legs culled from leftover parts of robot dogs or now only two-legged robot cheetahs, the result is often spectacular. It is just as if in this act of robotic reinvention that the drudge-like life-trajectory of many robots, previously valued only for invulnerability to boredom, to boredom with things (repetition) or boredom with human beings (routinization), is suddenly discarded. What’s left is this genuinely fun scene of robots, heretofore consigned to compulsory labor, untethered from their AI leashes, finally free to be what they were never designed to be: robotic cheetahs moving at the speed of a just-reassembled pack animal; robotic attack dogs, now equipped with reengineered robotic butterflies for better visual sensing, suddenly sidling away from high-testosterone attack mode in favor of startling, but ungainly, emulations of those exceedingly life-like Japanese theatrical robots. In this artistic scene, it is no longer the animating spirit of Shinto at work, but something else: the splice, the mix, the creative recombination of robotic parts into a menagerie of creative assemblages. Or maybe not. Some of the most fascinating projects involving this group of recombinant artists were those by descendants of Survival Research Laboratories.
Their renderings quickly brushed aside the aesthetics of creative assemblages in favor of a kind of seductive violence that is, it appears, autochthonous to the American imagination. In this scenario, it is all about riding the robots. Robots as monster dogs, cheetahs, wildcats, sleek panthers, and large-winged earthbound birds waging war against one another or at other times left untethered to roam the nighttime desert, whether as sentries, mech watchdogs, or perhaps free-fire zone attack creatures burning with the ecstasy of random violence.

Designs for the Robotic Future

A Cheetah, an Android Actress, and the (AI) Cockroach

Intimations of the robotic future are often provided by the design of robots presently being assembled in engineering research labs in the USA, Japan, and the European Union. Here, the robotic future is not visualized as fully predictable, determined, or, for that matter, capable of being understood as embodying an overall telic destiny, but, much like the human condition before it, as something that will likely be contingent, multiple and complex. Indeed, if robots of the future–presently being designed on the basis of advanced research in sensor technologies, articulated limbs, and artificial intelligence–provide a glimpse of that robotic future, then it may well be that traditional patterns of human behavior notable for their complex interplay of issues related to power, affectivity, and intelligence may be well on their way to recapitulation at the level of an emerging society of future robots. Consequently, while the ultimate destiny of the robotic future remains unclear, its possible trajectories can already be discerned in the very different objectives of remarkably creative robotic research. Building on traditional differences in approaches to technology in which the United States generally excels in software, Japan in hardware and Europe in wetware (the soft interface among technology, culture, and consciousness), new advances in robotics design inform us, sometimes years in advance, concerning how robots of the future will effectively realize questions of (soft) power, (machine) affectivity, and (artificial) intelligence. For example, consider the following three examples of contemporary robotic designs, none of which fully discloses the future but all of which, taken together, may provide a preliminary glimpse of a newly emergent future in which human-robotic interactions will often turn on questions of power, emotion, and consciousness.

Robots of Power

In the cutting-edge research laboratories of Boston Dynamics, there are brilliant breakthroughs underway (mostly funded by DARPA) in designing robots that embody a tangible sense of power, robots with astonishing capabilities in moving quickly over a variety of unexpected terrains. For example, the Cheetah robot is described as “the fastest legged robot in the world,” with “an articulated back that flexes back and forth on each step, increasing its stride and running speed, much like the animal does.” Its robotic successor, the Wildcat, has already been released from the tethers of Cheetah’s “off-board hydraulic device” and “boom-like device to keep it running in the center of the treadmill,” [17] in order to explore potentially dangerous territories on behalf of the US Army. The Cheetah and the Wildcat are perfect robotic signs of forms of power likely to be ascendant in the twenty-first century: remotely controlled, fast, mobile, predatory. That Google has recently purchased Boston Dynamics (possibly as a way of acquiring proprietary rights to its unique sensory software) may indicate that important innovations in software development are themselves always sensitive to the question of power, seeking out, in this case, to ride Google into the robotic future, at least metaphorically, on the “articulated back” and fast legs of Cheetah and Wildcat.

Robots of Affectivity

In Japan, it’s a very different robotic future. Here, unlike the will to power that seems to be so integral to the design of American versions of the (militarized) robotic future–whether terrestrially bound or space-roving robots like Curiosity on Mars–Japanese robots often privilege designs that establish emotional connection with humans. Japanese robots, that is, as the newest of all the “companion species.” Here, focusing on robots specializing in therapeutic purposes (assisting autistic children, augmenting health care, helping the elderly cope with dementia), or for straightforward cultural consumption (androids as pop entertainment icons, robotic media newscasters), the aim has been to cross the uncanny valley, that zone in which humans begin to find robots “creepy” when they are too human-like in appearance and behavior. Psychological barriers against crossing the supposed uncanny valley have not stopped one of Japan’s foremost android designers, Professor Hiroshi Ishiguro, who, working in collaboration with Osaka University, has created a series of famous robots, including the android actress Geminoid F, described as “an ultra-realistic humanlike android” (who smiles, frowns, and talks) and, in a perfect act of simulational art, an android copy of himself. While Boston Dynamics’ Cheetah and Wildcat may provide a way of riding power into the future, Geminoid F and Professor Ishiguro’s android simulacrum do precisely the opposite by making the meaning of robots fully proximate to the question of human identity itself. If there can be such fascination with android actresses and AI replicants, that is probably because Japan’s version of the robotic future already anticipates a new future of robot-human affectivity, one in which questions of strangeness and the uncanny are rendered into indispensable dimensions of the new normal of the robotic future.
In this sense, what is brought to surface by the specifically Japanese realization of the full complexity of robot-human interactions is the very shape and direction of individual and cultural psychology in the future. While Geminoid F, the android actress, will probably never really challenge boundaries between the human and the robotic, since it only represents a direct extension of the theatre of simulation that is mass media today, an android replicant is something very different. When the alter ego finally receives physical embodiment in the form of an android replicant, the question may arise whether in fact an android “selfie” might potentially be perceived in the soon-to-be realized robotic future as the very best self of all.

Robots of (Integrated) Intelligence

While American approaches to designing the robotic future often focus on questions involving the projection of power, and Japanese robotics research explores subtle psycho-technologies involved in robot-human interactions, European versions of the robotic future often privilege the complicated wetware interface involved when swarms of robots intrude into what, from a robot’s perspective, are alien spaces, whether the industrialized workplace, human domestic dwellings, or animal, plant, and insect life. Much like the European Union itself, where the value of integration is the leading social ideal, EU-funded robotic research has quickly attained global leadership in its creative studies of the bifurcations, fractures, and complex fissures involved with the extrusion of robots into the alternative environments of humans, plants, objects, and animals. For example, a press release titled “Robots can influence insects’ behavior” publicizes advances in robotic research under the sign of the European (AI) cockroach:

Scientists have developed robot cockroaches that behave so realistically they can fool the real thing. They were created as part of an EU-funded study for testing theories of collective behavior in insects, using groups of cockroaches as a model. Researchers working as part of the LEURRE project introduced the devices into a group of insects and studied their interactions. A report in the journal Science showed that the cockroaches’ self-organisational patterns of behavior and decision-making could be influenced and controlled by the tiny robots, once they had been socially integrated.

Little larger than a thumbnail, the cube-shaped “insbots” were developed under the EU-funded ‘Future and Emerging Technologies’ (FET) initiative of the Fifth Framework Programme. They were equipped with two motors, wheels, a rechargeable battery, computer processors, a light-sensing camera and an array of infrared proximity sensors. When placed among cockroaches, the machines were able to quickly adapt their behavior by mimicking the creatures’ movements. Coated in pheromones taken from cockroaches, the insbots were able to fool the insects into thinking they were the genuine article.

Coordinator Dr. Jean-Louis Deneubourg from the Université Libre de Bruxelles said, ‘In our project, the autonomous insbots call on specially developed algorithms to react to signals and responses from individual insects.’ The journal Science reported that once the robots were accepted into the group, they began to take part in and influence the group decision-making process. For instance, the darkness-loving creatures followed the insbots towards bright beams of light and congregated there. [18]

The report concludes by noting that the next stage of development for autonomous devices will involve building “groups of artificial systems and animals that will be able to cooperate to solve problems. So the machine is listening to and perceiving what the animals are doing and the animals are in turn perceiving and understanding what the machines are telling them.” [19] In other words, this is not so much a study of AI robot cockroaches drenched with pheromones (all the better to attract the attention of unsuspecting naturalized cockroaches) as a brilliant futurist probe into the new order of robotic communications: that point where robots learn to communicate with insects and, by extension, with plants, objects, and humans, while those very same plants, objects, and humans undergo a quick robotic evolution of their own by finally learning what it means to “perceive,” “understand,” and perhaps even “influence” the otherwise autonomous actions of robots. In this evolutionary scenario, a fundamental transformation in the order of communications, beginning with insects and then rapidly propagating through other species (plants, animals, and humans), anticipates a future in which the integrationist ideals of the European Union are inscribed, unconsciously, unintentionally, but certainly wholesale, on a newly emergent Robotic Union. The lowly cockroach, once coated with pheromones, stands as perhaps a fateful talisman of a possible future in which autonomous robots learn to “mimic” behavior with such uncanny accuracy that humans, like those “darkness-loving creatures” before them, follow the multiple robotic insbots of the future towards the light and begin to congregate there. In this situation, the only question remaining has to do with the meaning and direction of the “light” of the robotic future towards which we are congregating, certainly in the future as much as now.
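The collective-choice dynamic described in the press release can be sketched as a toy agent-based model: cockroach agents move between a dark and a light shelter, are more likely to stay where companions are present, and robot agents parked in the light shelter inflate its apparent occupancy until the group’s preference tips. All parameters, and the two-shelter setup itself, are our illustrative assumptions, not details taken from the LEURRE study.

```python
import random

def simulate(n_roaches=30, n_robots=0, steps=500, seed=1):
    """Toy aggregation model loosely inspired by the LEURRE experiment.

    Each cockroach occupies one of two shelters, "dark" or "light".
    Per step it may leave its shelter; the probability of leaving falls
    as local occupancy rises (social attraction) and is lower in the
    dark shelter (photophobia). Robot agents sit permanently in the
    light shelter, inflating its apparent occupancy. Returns the mean
    number of cockroaches in the light shelter over the final 100 steps.
    """
    rng = random.Random(seed)
    site = ["dark" if rng.random() < 0.5 else "light" for _ in range(n_roaches)]
    tally = []
    for t in range(steps):
        counts = {"dark": site.count("dark"),
                  "light": site.count("light") + n_robots}  # robots counted as peers
        for i, s in enumerate(site):
            # Base tendency to leave is higher in the light shelter,
            # and is damped by the number of (apparent) companions present.
            base = 0.15 if s == "dark" else 0.35
            p_leave = base / (1 + 0.5 * counts[s])
            if rng.random() < p_leave:
                site[i] = "light" if s == "dark" else "dark"
        if t >= steps - 100:
            tally.append(site.count("light"))
    return sum(tally) / len(tally)

without_robots = simulate(n_robots=0)
with_robots = simulate(n_robots=15)
print(round(without_robots, 1), round(with_robots, 1))
```

Run with and without robot agents, the model reproduces the qualitative finding: on their own, the photophobic agents settle in the dark shelter, but a sufficiently large cohort of light-dwelling robots, “socially integrated” simply by being counted as companions, pulls the group towards the light.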

The Psycho-Ontology of Future Robots

The future of robotics remains unclear, still clouded by essentially transhuman visions projected onto the design of robots, still not willing or able to reveal its ultimate destiny: that point when robotic intelligibility takes command and, in doing so, finally begins to trace its own trajectories in the electronic sky. Yet, for all that, there is much to be learned from reflecting upon contemporary robotics design, lessons not only about robotic technology and creative engineering but about that strange universe signified by the complex encounters between robots and humans that take place in otherwise relentlessly scientific labs around the world, from Japan and the United States to Europe. If the future will be robotic, at least in key sectors of the economy as well as network infrastructure, it is worth noting that the overall direction of that robotic trajectory already bears discernible traces of human presence, whether in terms of conflicting perspectives on robotic design or of what might be prohibited, excluded, and disappeared from our successor species. Not really that long ago, an equally strange new phenomenon, the human “self,” was launched into history on the basis of key ontological conditions, some visible (the complex learning process associated with negotiating the human senses) and some invisible (the order of internalized psychic repression). In the same way, contemporary society witnesses, sometimes in mega-mechanical robotic expression and, at other times, in specifically neurological form, the technological launching of a robotic species that, while it may eventually possess its own unique phylogenetic and ontogenetic properties, will probably always bear the enduring sign of the human.
Not necessarily in any particularly prescriptive way, but in the more enduring sense that the trajectory of the robotic future hinted at in the creative designs of robotics engineering may well culminate in investing future robots with a complex history of internally programmed psychic traumas that will powerfully shape their species-identity, both visibly and invisibly. In this case, contemporary fascination with robots may have its origins in a more general human willingness, if not eagerness, to displace unresolved anxieties, unacknowledged traumas, and, perhaps, grief over the death of the human onto identified prosthetics, namely robots. Could the future of robotics represent, in the end, the ethical ablation of the human condition, including the sinister and the creative, the compassionate and the cruel, in purely prosthetic form? If that is the case, are robots, like humans before them, born owing a gift–the gift of (artificial) life–that they can never repay? In this case, what is the future psycho-ontology of robots: unrelieved resentment directed against their human inventors for a gift of life organized around “compulsory servitude” or the supposed joy of (robotic) existence?


[1] “Raw: Obama Plays Soccer with Japanese Robot,” video (April 24, 2014), (accessed May 15, 2014).

[2] Ian Urbina, “I Flirt and Tweet: Follow Me at #Social Bot,” The New York Times Sunday Review (August 11, 2013), (accessed April 17, 2014).

[3] Ibid.

[4] “Robotic prison wardens to patrol South Korean prison,” BBC News Online (November 25, 2011), (accessed April 20, 2014).

[5] For a full description of the Robot Land project, see (accessed July 28, 2014).

[6] “World’s First Robot Theme Park to Open in South Korea,” CTV News (February 10, 2014), (accessed May 23, 2014).

[7] Keane Ng, “South Korea’s Giant Robot Statues to Dwarf Japan’s,” The Escapist (September 8, 2009), (accessed May 11, 2014).

[8] See (accessed May 11, 2014).

[9] Ibid.

[10] See

[11] “World Robot Population, 2000-2011,” International Federation of Robotics, (accessed July 24, 2014).

[12] “Industrial Robot Statistics,” Statistic Brain, (accessed July 24, 2014).

[13] William S. Pretzer, “How Products are Made, Vol. 2: Industrial Robots,” (accessed May 12, 2014).

[14] Oliver Wainwright, “SociBot: the ‘social robot’ that knows how you feel,” Guardian Online (April 11, 2014), (accessed May 15, 2014).

[15] Sigmund Freud, “The Uncanny,” (accessed July 28, 2014).

[16] Ibid., 15.

[17] See (accessed May 14, 2014).

[18] “Robots can influence insects’ behavior,” European Commission: Research and Innovation, (accessed May 19, 2014).

[19] Ibid.