Theory Beyond the Codes

Eradicated Alphabets and Radical Algorithms

Script Reform, Secularism, and Algorithmic Revolution

Introduction

In 1928, the parliament of the newly established, secular Turkish Republic legislated a nationwide shift from Arabic to Latin alphabetic characters. In the wake of this shift, Arabic writing became criminal in Turkey in all but one public space — the mosque. [1] The implicit link that Republican social engineers forged between script reform and state-managed secularism has fascinated commentators for decades. Some have agreed with early Turkish legislators that a change in alphabet can foster a change in religious belief or expression, [2] and some have questioned both the ethics and logic of such an association. [3]

The Turkish Alphabet Revolution, as it came to be called, is thus ordinarily presented as an interaction among humans concerned with the secular qualities of the state. The Revolution, so the story goes, fed on human linguists, [4] human ideologues, [5] human social engineers, [6] literate human readers now left illiterate, [7] illiterate humans about to become literate, [8] human reactionaries to progressive governance, [9] and human victims of the Republic’s radicalism [10] — all communicating with one another. The nonhuman elements in this story — in particular, the alphabet and God — have remained outside of the conversation.

But what might it mean to reintroduce these nonhuman elements back into the story of script and religious space? What might it mean to take the Alphabet Revolution seriously as a revolution of letters or symbols? First, it would mean recognizing that alphabetic characters continue to exist even when they are not being read — that strings of letters or symbols can be as much operational or technical as expressive or representational. Second, it would mean re-thinking the character of Turkish secularism. If Turkish secularism can be produced so easily via the shifting and replacement of alphabetic symbols, then it too seems to possess algorithmic qualities.

This essay is therefore part of an effort to make the alphabet a more central figure in Turkey’s story of script reform. It is an attempt to position Turkey’s Alphabet Revolution as one example among many of a nonhuman, algorithmic politics at work in the world. Indeed, a case might be made that linking a shift in alphabetic characters to something called secularism makes sense only if we recognize the simultaneously political and computational qualities of these characters. Turkey’s Alphabet Revolution works as a secular revolution, in other words, only if it also works as a function. God will thus make a reappearance at the end of the essay — alongside an invitation to consider the potential of an algorithmic, rather than subjective, theory of secularism.

Algorithmic Politics

Intuitively, the term “algorithmic politics” might seem redundant. If political speech is simultaneously an act, or — to draw on the work of J.L. Austin — if political speaking is an effect in and of itself, then it already operates in the realm of the algorithm. [11] An algorithm is a set of rules that is, to the machine running it, at the same time a set of activities. In addition to being a function, however, an algorithm is also a relationship among numbers or symbols. And this numeric or symbolic aspect of the algorithm is arguably as important as its operational or performative aspects when we consider its political or revolutionary potential. Recognizing the algorithm as a numerical or symbolic operation rather than as a spatial or subjective expression, in fact, is key to evaluating its political work.

As Shintaro Miyazaki has written in an article on “Algorythmics,” the Hindu-Arabic numeral system that became the basis for modern algorithms and computing differed significantly from the Greek and Roman numeral systems that influenced medieval and early modern European mathematics. The “Greeks calculated mostly with shapes, lines, surfaces and things,” notes Miyazaki; they did not calculate “with numbers.” [12] As a result, “geometry,” for the Greeks, “was more important than arithmetic,” whereas in the Hindu-Arabic algorithmic system, “shifting, deleting, and adding positional numerals became the principal operations of calculation.” [13] Miyazaki continues by highlighting the materiality of this algorithmic mode of calculation [14] — its “technicity” as opposed to its expressivity, [15] and its relationship, again, to contemporary computing. [16] If we were to derive a politics from this algorithmic logic, therefore, we might similarly describe it as a politics concerned with shifting symbols rather than with describing spaces or shapes. Numbers and symbols in such a political realm would not be a handy way to express a more real or true spatial universe. On the contrary, the operation of numbers would constitute a material, if not spatial or subjective, universe.
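Rendered in deliberately anachronistic terms, this contrast can be sketched in code. The toy Python function below (invented here for illustration, not drawn from Miyazaki) performs addition purely by shifting, deleting, and adding digit symbols in a positional series; no shape, line, or surface is ever consulted:

```python
def add_positional(a: str, b: str) -> str:
    """Add two base-10 numerals purely by manipulating digit symbols.

    Digits are deleted from the right, new digits are added to the
    result, and a carry symbol is shifted one position leftward at
    each step -- calculation as symbol-shifting rather than geometry.
    """
    digits_a, digits_b = list(a), list(b)
    result, carry = [], 0
    while digits_a or digits_b or carry:
        da = int(digits_a.pop()) if digits_a else 0  # delete a digit
        db = int(digits_b.pop()) if digits_b else 0  # delete a digit
        total = da + db + carry
        result.append(str(total % 10))               # add a digit
        carry = total // 10                          # shift the carry
    return "".join(reversed(result))

print(add_positional("478", "593"))  # -> 1071
```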

Moreover, once we recognized the possibility of such a material and symbolic universe, free of subjects and spaces, we could also reconsider what might carry political significance in this universe. The accident, the glitch, or the bug, for example, takes on enormous political salience when our political logic becomes algorithmic rather than spatial or subjective. In mathematical or political systems tied to space or subjectivity, an “accident” is nothing more than a spatial or subjective relationship that has not yet been effectively expressed or described. An accident is thus not really an accident at all. It is a call for a better-developed or more nuanced mode of mathematical or political expression. It is a demand that we find a more effective way of describing some geometric or subjective ideal or reality.

In algorithmic systems tied to numbers, however, an accident is very much an accident. It can be nothing else when numbers operate rather than describe or communicate. An accident is not an invitation to better (or more just) descriptive efforts, but rather a route toward alternative functionality. And an accident therefore has political potential in algorithmic politics that it cannot have in spatial or subjective politics. Things or moments that seem unjust, unproductive, illogical, or unethical in a politics wed to subjects or spaces become productive, logical, and functional — and arguably also just and ethical — when algorithmic politics are in play. Political accidents no longer lead inevitably to renewed descriptive efforts — to reform or to re-education; they lead to new functionalities.

The accident also opens up a mode of purely computational thinking in algorithmic politics that is inconceivable in spatial or subjective politics. Luciana Parisi and Stamatia Portanova, for instance, have made a compelling argument that “the aesthetic of the digital accident … implies that codes are modes of thought.” [17] “With their capacity to disrupt the ‘cold’ automated linearity of formal languages,” digital accidents indeed “encourage us to perceive a dimension where algorithms have almost managed to ‘come to life.’” [18] But is this algorithmic life also political life? Pushing Parisi and Portanova’s analysis in a different direction, the answer to this question would seem to be “yes.” As Parisi and Portanova conclude this part of their discussion, algorithms “are not only actions or pragmatic functions but also … suspensions of action or forms of contemplation.” [19] When algorithms make mistakes, in other words — when they reach “the limit of computation” — they do not think of numbers cognitively; rather, they begin to intuit as they process “infinite quantities of data” and “incomputable information.” [20]

Through the ostensibly useless labor that happens at the moment of the accident, therefore — through the seemingly meaningless doing and processing that characterizes endless, accidental iteration — algorithms are not only alive but political, and, in quite classical terms, free. Their productivity does not happen in aid of a specific, determined end, but as a mode of contemplation of the infinite. Their being and operation are for their own sake. Algorithmic politics thus assumes that the algorithm exists, experiences, and thinks on its own terms.

Or, as Parisi and Portanova put it, the algorithm is a symbolic — and, for our purposes, political — operation that is neither a “subject thinking” nor a “thought object.” [21] Drawing on the work of Alfred North Whitehead, they show that numbers are not merely expressions of human thought, not merely symbolic tools that help humans or subjective minds work through mathematical problems, but themselves thinking machines. Paraphrasing Whitehead, Parisi and Portanova write that “symbolic operations do the thinking for us.” [22] And algorithms are mechanical techniques that exist apart from any specific human need for them.

Hence symbols in an algorithmic political context might best be described as mechanical as well as material and living — as machines that both think and are thought. They might be best addressed as material modes of thinking that operate apart from any human subjective or syntactical communication or expression. Indeed, separate as they are from the concerns of human cognition, the material fields “in and through” which the algorithm operates are neither “blind” [23] nor “an extended software thought running on material bodies.” [24] Rather, algorithmic contemplation, as it is described by Parisi and Portanova, is “a form of thought emerging from the iterative patterns and calculations of matter,” a type of thought “that does not describe but produces nature as it builds a constrained, ordered, objectified world.” [25]

The algorithm as a political figure, then, first thinks and acts as it processes numbers as matter. Second, the algorithm’s life plays out in the accidents that move it either to zero or to infinity. And third, this algorithmic mode of thinking, acting, and living is unique to the algorithm — unique to a figure or technique whose sphere of politics is non-spatial but nonetheless shifting, and whose contemplation is operational rather than expressive.

Or, put differently, algorithmic politics leaves to the side the oppositions that have framed debates in spatial or subjective politics. The algorithmic political environment is simultaneously productive and contemplative (indeed, productive as contemplative), simultaneously symbolic and mechanical (indeed, symbolic as mechanical). Defining thinking as a type of operation, and symbols as a type of matter or machine, algorithmic politics takes to a logical conclusion the deeply material character of political existence writ large. It makes unavoidable the realization that thinking, acting, and living are always in the world, if not necessarily in Cartesian space. [26]

Alphabets as Algorithms

But what does this largely metaphysical reinterpretation of politics and political existence have to do with Turkish alphabets or with Turkish Republican social engineering? One answer to this question is “everything” — provided, once again, we remain open to the idea that the alphabetic characters at the center of the reform contained within them some computational seed. But perhaps we need not keep an open mind. Indeed, the notion that letters are in some way inherently computational already underlies a number of scholarly analyses of the Turkish Alphabet Revolution. In particular, commentators have suggested with some frequency that the new Turkish alphabet, with its Latin characters, is more “computational” than Ottoman Turkish, with its Arabic characters, could ever have been.

Geoffrey Lewis, for instance, whose account of the language reform by no means relies on any explicitly computational analysis, begins his discussion of the Alphabet Revolution by pointing to the political and cultural underpinnings of the reforms:

Turkish writers on dil devrimi (language reform) do not usually deal with the change of alphabet, which for them is a separate topic, harf devrimi (letter reform). A brief account of it is given here for the sake of completeness, since the two reforms are obviously linked, arising as they did from the same frame of mind. The purpose of the change of alphabet was to break Turkey’s ties with the Islamic east and to facilitate communication domestically as well as with the Western world. One may imagine the difficulty of applying the Morse Code to telegraphing in Ottoman. [27]

A few pages later, Lewis continues by quoting Nicholas Negroponte and expanding on the idea that modern Turkish lends itself more easily to various types of code, that “‘at the word level, Turkish [written in Latin script] is a dream come true for a computer speech synthesizer.'” [28]

Over the course of a single paragraph, Lewis thus reinforces, repeatedly, the computational qualities of alphabetic characters. First, he notes, historians of the early Republic usually treat symbols (the alphabet) and communication or expression (language) separately. Second, he argues that the shift to Latin script weakened religious belief (“ties with the Islamic east”) while facilitating telegraphic code. And third, he mentions the extraordinary suitability of Turkish written in Latin script to nonhuman speech synthesis. For these reasons, New Turkish written in Latin characters is, for Lewis, fundamentally computational. Its history is embedded in the history of computation and code; it is separated from subjective problems like religious belief; and its greatest ally — indeed, the thing that dreams of it [29] — is the computer speech synthesizer.

There is, however, arguably more going on in this unexpectedly computational history of the Turkish Alphabet Revolution than the implicit connection Lewis forges between what seems to be the hyper-rational world of computation and what seems to be the hyper-rational world of Latin script. Yes, historians of the Alphabet Revolution have set up a divide between symbolic characters (the alphabet) and communication (language) — and these historians have thereby hinted that alphabetic execution or operation supersedes alphabetic message transmission. In addition, these historians have been tempted to invoke code, computation, and computational synthesis in their discussions. Telegraphing in Ottoman was hard work, apparently. More than this, however, the history of the shift in alphabets has also become a history of the algorithmic quality of writing — broadly defined — in the early Turkish Republic.

Narratives of the Alphabet Revolution, for example, frequently portray it as an overnight success. One morning, in the late fall of 1928, Turks woke up to discover all traces of Arabic script gone. Mustafa Kemal Atatürk, the Turkish President, then toured Anatolia with a blackboard, teaching bureaucrats and villagers alike the value of the new letters. [30] Soon, especially as Latin script facilitated widespread literacy in Turkey, no one mourned the lost Ottoman letters — or, if they did, they were quiet about it. [31]

As Hale Yılmaz has demonstrated, this popular story of the Alphabet Revolution contains some flaws. In fact, the change in Turkish script occurred gradually and sometimes haphazardly — while both the act of Parliament that brought new written Turkish into being and the implementation of this act reflected a complicated social and political reality. [32] According to the November 1928 Alphabet Law, for instance, different types of Arabic script would have different life spans as the Revolution went into effect. By the beginning of December 1928, all newspapers, advertisements, subtitles, and signs would shift from Arabic to Latin characters; by the beginning of January 1929, all books, and government, bureaucratic, or other institutional communications would use the new letters; and by the beginning of June 1929, members of the public would have to use Latin characters in their transactions with government or private institutions. [33] “June 1930,” Yılmaz notes, was “the absolute deadline for all public and private transactions, including all printed matter such as laws and circulars to be in the new letters.” [34]

This gradual transition from Arabic to Latin script makes social and bureaucratic sense, of course. If the change had been as sudden as later narratives have suggested it was, the Revolution would likely have met with bewilderment at best and failure at worst. It is reasonable, as Yılmaz implies, that Turkish social engineers would have tried to make their policies palatable to as great a portion of the population as they could. But this step-by-step transition toward Latin script also lends itself to alternative analyses. It hints, for example, at the possibility that Turkish alphabets — whether Arabic or Latin — were already part of an algorithmic political logic, and that their operational rather than their expressive, and their numeric rather than their communicative, qualities lent them value.

Consider, for instance, the first set of scripts — the most influential — that would shift from Arabic to Latin: street signs, advertisements, movie subtitles, and newspapers. Each of these scripts is characterized by action rather than by communication. Indeed, signs, advertisements, and subtitles have been described, specifically, as sets of letters that act on their own or that, to return to Parisi and Portanova’s paraphrase of Whitehead, “do the thinking for us.” The street sign, for example, thinks through traffic patterns. It makes traffic operate, function, or fall apart. Prominent and visible as it is, the alphabetic symbol on the street sign has almost nothing to do with expression or communication. It exists as part of a material, environmental system — as part of an extended, symbolic, machine.

But advertisements and subtitles, too, belong more to the realm of letters that work than they do to the realm of letters that communicate. Subtitles, after all, are nothing less than machines of translation in the same way that equations are machines of computation. Taken together, a system of subtitles “translates” — “carries across” — certain symbols into other fields. Subtitles literally shift symbols — they translate, or “carry,” letters and words rather than producing or communicating messages or meaning. Like street signs, subtitles are in this way operational (they carry), mechanical (they are machines that carry), and symbolic (they are machines that carry symbols).
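To caricature the claim in code: a subtitle machine carries symbols from one field into another without ever producing or evaluating a message. The mapping below is invented purely for the sake of illustration:

```python
# A toy "subtitle machine": it carries symbols across fields by pure
# substitution. It neither produces nor judges meaning; it only shifts.
CARRY = {"a": "α", "b": "β", "g": "γ", "d": "δ", "e": "ε"}

def subtitle(line: str) -> str:
    # Carry each symbol into the other field; pass unknowns through.
    return "".join(CARRY.get(ch, ch) for ch in line)

print(subtitle("badge"))  # -> "βαδγε"
```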

Finally, advertisements, such as are found on posters and billboards, are notoriously devoid of communication. They exist to do work, to shift matter, or to move and persuade. Indeed, even the most conventional interpretations of advertising recognize it — and the letters of which it is composed — to be a variation on Austin’s illocutionary speech. Once again, therefore, to the extent that letters appear in an advertisement, they do so not to facilitate communication or expression but — if perniciously so [35] — to do the thinking for us.

So, even before the Alphabet Revolution concerned itself with private or public correspondence (which would shift only a year and a half after the promulgation of the Alphabet Law), and even before it concerned itself with bureaucratic transactions or political or economic exchange, it concerned itself with letters that had no meaning. Script that operated, that shifted, that did its own thinking — that worked as a machine in or through symbols — was script ripe for revolution. Script that communicated was an afterthought. It is true that communicative writing eventually shifted too. But its expressive character disqualified it from participation in the initial revolutionary enthusiasm. Key to the Alphabet Revolution, at least according to the timeline the Turkish Parliament set out for it, were algorithmic, rather than expressive, letters.

But what about the one seeming exception to the rule that working letters would shift first? What about the fact that newspapers and magazines were to move to Latin characters at the same time that street signs, subtitles, and advertisements were? One way to answer this question is to return to Yılmaz’s analysis of the gradual, haphazard implementation of the reform writ large. Yılmaz notes, for instance, that newspapers were largely unsuccessful at making the case for the new alphabet to their readers — and that indeed, “readership dropped sharply when they began printing completely in the new alphabet.” [36] “Some newspapers and magazines,” she continues, even “shut down due to the ensuing financial crisis.” [37]

Yılmaz’s purpose in highlighting the fate of newspapers and magazines after the Alphabet Revolution, again, is to complicate and add nuance to what is ordinarily an uncomplicated story of political, legal, and social triumph. But her reference point — the drop in newspaper circulation — also suggests an alternative interpretation of this outcome. Namely, if newspapers’ value is determined above all by their ability to circulate — by their ability to do the same thing that, say, automotive traffic does — then perhaps newspapers also belong more properly to the sphere of active letters. Perhaps newspapers are part of the same mechanical-symbolic infrastructure that street signs are. If newspapers die because they fail to move rather than because they fail to communicate, then, arguably, newspapers are no different from machines. Under such circumstances, they meet their fate not because their letters have lost their meaning but because their letters cannot process.

Symbols as Machines

A significant question that arose in the rhetoric surrounding the Alphabet Revolution was what, under these revolutionary conditions, letters were then supposed to do. If letters on street signs were essential to the Revolution in a way that letters in books or correspondence were not, what then were these revolutionary letters doing and what role would they play in the future? Or, put differently, if the operational, rather than communicative, quality of letters dominated debates over whether and how to shift the alphabet, then how, in practice, would these letters operate? Would they be, themselves, machines and matter, or would they do nothing more than represent some other mechanical or material process?

A cursory reading of revolutionary rhetoric suggests that revolutionary letters could play only the latter, derivative role. Turkish linguists were adamant that the ideal, simple, rational, and revolutionary letter would be a letter that existed in a one-to-one correspondence with a single sound produced by a Turk communicating in Turkish. [38] A proper, revolutionary alphabet, for them, was a phonetic alphabet that would do nothing more nor less than aid in vocalized human interaction. These linguists, true, may have devalued the capacity of letters to join together and produce content or meaning — street signs still trumped books. But this did not mean that they imagined letters as anything other than tools of human communication.

If we look more carefully at how linguists imagined this phonetic alphabet coming about, however — and if we consider the logic of shift, change, and eradication that motivated alphabetic rationality in Turkey — we can find hints of a nonetheless distinctively algorithmic process at work. After all, in order for a set of letters to participate in a revolution in the name of phonetics, the phonetic quality of these letters cannot yet exist. In the revolutionary present, there are no representational or expressive letters — only letters that will move toward, or work in the name of, a future phonetic rationality. The combination of phonetic logic and revolutionary logic, in other words, transforms these letters almost inevitably into algorithmic strings.

The symbol that must accompany a shift in alphabets, and that can bring about phonetic simplicity, that is to say, is not an algebraic “equals” sign — it is not a sign of representation. Rather, it is an algorithmic arrow. [39] In revolutionary Turkey, there could be no existing one-to-one correspondence between letter and sound — no equality — because the revolution could assume only a potential or future correspondence between sound and symbol. Equality was a limit point that could come into being only as the letters changed.
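Miyazaki's contrast between the algebraic statement and the algorithmic assignment [39] can be miniaturized in a few lines of code, offered here only as an illustration:

```python
# An algebraic equation is a reversible statement of relation:
# a + b = c can always be rearranged as a = c - b.
# An algorithmic assignment is an irreversible event in time.
a, b = 3, 4
c = a + b   # the arrow, not the equals sign: a + b => c
print(c)    # 7
c = 0       # c is overwritten; its prior value is gone
print(c)    # 0 -- no rearrangement can recover what was erased
```

The equation describes; the assignment shifts. And once the shift has occurred, the symbolic series cannot simply be read backward.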

However much the supposed simplicity of phonetic alphabets suggests that the operations accompanying script reform will be finite — that a result will eventually be reached (there are only so many sounds a speaking human can make) — the capacity for the glitch always looms large. One of the problems that linguists as well as revolutionaries were constantly trying to solve, in fact, was the problem of the phonetic alphabet’s iterative qualities — the extent to which alphabetic operations could and did sometimes continue processing toward infinity or zero. Linguists and revolutionaries in this way had to grapple with the same question that Parisi and Portanova have raised: what happens when the dysfunctional algorithm (here, the alphabet) begins to process or contemplate on its own terms? What happens when it acts and produces without end?
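The stakes of this iterative capacity can be put in similarly anachronistic terms. The toy iteration below, again an illustration rather than anything found in the revolutionary sources, shows a single rule that, depending on its input, either contracts toward zero or runs away toward infinity. The function and the glitch share one mechanism:

```python
def iterate(x: float, rate: float, steps: int = 50) -> float:
    """Apply the single rule x -> rate * x over and over.

    With |rate| < 1 the series contracts toward zero; with |rate| > 1
    it diverges toward infinity.
    """
    for _ in range(steps):
        x = rate * x
    return x

print(iterate(1.0, 0.5))  # ~8.9e-16 : processing toward zero
print(iterate(1.0, 1.5))  # ~6.4e+08 : processing toward infinity
```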

The linguist C.R. Lepsius, for example — an influential forerunner to the Turkish linguists who facilitated the Alphabet Revolution [40] — devoted, if implicitly, much of his work to addressing this problem. Lepsius is most often read as an advocate of letters that are simple, that do representational work, and that are part of finite — if large — sets. Indeed, although most of his contemporaries deemed his proposed Standard Alphabet too unwieldy for practical application, [41] his defense of phonetic simplicity remained central to the logic of not just the Turkish alphabet reform, but to shifts toward simple alphabets in other states as well. [42] Letters, for Lepsius, were inert signs, and nothing more than signs. They were derivatives of spoken human language. [43]

Throughout his scholarly career, Lepsius advocated more than this simple and seemingly self-evident truism. Presenting to readers nothing more than a system of symbolic representation was not his primary goal, and he wrote repeatedly that he would demonstrate not only the superiority of a universal, phonetic alphabet, but also the need to replace all existing (or, for that matter, nonexistent) alphabets with this universal template. [44] Lepsius thus insists in his work on a symbolic process, not on a mode of symbolic expression. His proposal, again, suggests an algorithm, not an equation. And indeed, Lepsius makes clear that even his own alphabet can grow and shrink, simplify or enlarge, depending on the work that it is doing. [45] His alphabet is not just a set of symbols. It is a set of operations — it changes when its input changes. And as such, Lepsius’s alphabet suggests, at least, the possibility of other, less functional alphabets that might not only change with evolving input, but trend toward a series of glitches.

One such dysfunctional alphabet, for example, was the nineteenth-century Hebrew alphabet. Hebrew, Lepsius argued, had degenerated over the centuries from its functional, syllabic, and ancient form into its contemporary consonantal form — absent symbols to represent vowels. The ancient Hebrew alphabet, Lepsius continued, simply could not have been consonantal because consonantal alphabets trend toward dysfunction, and ancient scholars of Hebrew would not have allowed for such a trend. A “consonantal alphabet,” writes Lepsius, “would presuppose by far too abstract a phonic doctrine on the part of its inventors” — and, in any case, “intelligible” writing requires that there be “written signs for the principal and most expressive vowels.” [46] For Lepsius, therefore, the early Hebrew alphabet had to have started as a syllabic alphabet — and only later degenerated, or processed badly — because a consonantal alphabet is by definition dysfunctional. [47]

Obviously Lepsius’s attempt to restore the purity or “intelligibility” of a decayed or decadent ancient Asian system places him squarely within the realm of nineteenth-century imperial or colonial scholarship. Leaving aside its imperial connotations, however, Lepsius’s argument also plays up the algorithmic problems that, he implies, are inherent in all alphabets, including his own. First of all, regardless of whether one characterizes the presumed change in the Hebrew alphabet as decay, Lepsius writes under the assumption that alphabets are not static. They are constantly shifting and changing, even if this change is slow or gradual.

Second, Lepsius’s primary criticism of the consonantal alphabet is that such an alphabet rests on a phonic doctrine that is “too abstract.” His criticism, in other words, is that the symbols in such an alphabet are too dissociated from specific instances of human speech. Consonantal alphabets are a problem for Lepsius because they contain symbols that are prone to doing other, non-phonic work — and consonantal alphabets thus threaten to dwindle, as a set, even as human speech sounds proliferate. In short, the one-to-one correspondence between human sound and symbol that characterizes the functional alphabetic process in Lepsius’s universe is replaced in this example of alphabetic dysfunction by an inverse relationship between sound and symbol. As sounds are made, symbols move toward zero.

And unsurprisingly, this problem that Lepsius attributes to a shifting, or dwindling, Hebrew alphabet is also the problem that — according to observers as well as Turkish revolutionaries — plagued Turkish written in Arabic script. As Birol Caymaz and Emmanuel Szurek have noted, late nineteenth- and early twentieth-century commentators particularly criticized Ottoman Turkish because it relied on an alphabet that was, like Hebrew, poor in vowels and rich in consonants to express a language that was poor in consonants but rich in vowels. [48] Such an apparent mismatching of sounds and symbols in turn created a situation in which — in the words of one nineteenth-century commentator — Ottoman spelling became a matter of “whim,” while printing, which required manipulating “more than 480 typographic characters,” faced huge technical and financial obstacles. [49]

Or, as Frances Trix has stated in her comparison of the Albanian and Turkish alphabet reforms, the “multiple equivalencies” between sound and letter that characterized Turkish written in Arabic script “constituted a main criticism of the Ottoman Turkish alphabet.” [50] While challenging the idea that phonetic simplicity was the sole engine driving alphabet reform, [51] Trix points out that “a single letter could represent multiple phonemes” in the Ottoman writing system while, at the same time, “multiple letters could stand for a single phoneme.” [52] Once again, the fundamental problem with Turkish written in Arabic script — according to nineteenth- and early twentieth-century commentators — was that it left the “equals” sign meaningless or useless. There was no equality in such an alphabetic system, and there was little representation. Instead, the letters acted on their own. Whether they were obliterating orthography by moving all over the place at whim, or forcing typesetting machines to grind to a halt — proliferating beyond any reasonable mechanical bounds — these letters did work on their own, and for their own purposes.

Having recognized the problems inherent in pre-revolutionary Ottoman Turkish writing, however, it is important to recall that the algorithmic politics characteristic of the Republican era did not result from the apparent phonetic irrationality of written Ottoman Turkish. Rather, the shift in script — the move from Arabic to Latin characters — allowed these symbols to begin thinking, working, and contemplating on what became truly their own terms. Even as Turkish written in Arabic script demonstrated the danger of the algorithmic glitch, that is to say, it was the shift in script itself that created what might be called the fatal glitch. In essence, it did not matter whether the starting point of the Alphabet Revolution was Arabic or Latin. It did not matter if the move was from the phonetically multivalent to the phonetically unambiguous. Rather, what mattered — as far as the algorithmic nature of Turkish politics was concerned — was the process of shifting.

Algorithmic and Alphabetic Accidents

Consider, for example, the introduction to one of the first “spelling dictionaries” produced by the Turkish Republican Language Council in 1928. In describing its logic in compiling the dictionary, the Council states that although it drew on the work of earlier advocates of alphabet reform, it also eliminated from the dictionary unnecessary words and letter combinations that had no place in popular language practices. The Alphabet Revolution, the Council emphasized, had as a first priority the expression of the language as Turks actually used it. [53]

Presented in this way, the shift in script seems to reflect, above all, a moment of radical, modernist nationalism — even Turkish letters had to conform to Republican fantasies about the Turkish “folk.” In addition, however, and especially if we look at the specific play of letters that this phonetic logic produced, we can find evidence of a distinctly algorithmic politics at work. One of the decisions that the Council had to make, for example, was when and where to separate combinations of letters and when to leave these letters as a single string. In addressing this problem, the Council writes, first, that in the past, terms that had expressed two concepts, or that had been composed of two separate ethno-linguistic roots (such as “head scribe — başkâtip” or “head/prime minister — başvekil“), had been written separately. Second, the Council continues, the new script would depart from this practice and combine these terms into single strings of letters. Finally, the Council defends its position by noting that even in the old Arabic script, terms of this sort were sometimes written as a single block — and that longer strings of letters are more logical or “intelligible” in any case. [54]

So why, then, would the Council decide to join the letters together in this way? And why would it support its position by invoking both logic or intelligibility and the orthographic irregularity of the Arabic script from which it was trying to distance itself? One way to think about these questions is to ask what unifying the shorter sets of letters does to meaning or communication. Keeping the ethno-linguistic roots or the root meanings of compound words separate creates a situation in which these singular, if compound, concepts — i.e., prime (in Turkish) minister (in Arabic) — retain hints of their composite ethno-linguistic or content-based parts. Readers and writers consider — even if only for a fraction of a second — where the separate parts of each of these concepts derived from. By running the roots and concepts together, contrarily, the Council could create a more uniform final product — a product that would not so insistently prompt readers and writers to consider the multilayered Arabic, Farsi, and Turkish foundations of what was supposed to be a radically homogeneous alphabetic and linguistic system.

This interpretation is no doubt reasonable — reflecting as it does the modernist nationalism of Republican social engineering. But if we set aside meaning and content for a moment, and if we consider the justifications for the change, a second explanation for privileging longer over shorter strings of alphabetic symbols also presents itself. Indeed, a case might be made that the Council’s invocations of both intelligibility and the now obsolete Arabic script echo Lepsius’s references to alphabetic intelligibility and to a now obsolete Hebrew script. In both cases, the foe of alphabetic intelligibility is abstraction. And in both cases, this abstraction is linked to an alphabet related to the proper or pure one — but decayed and displaced in time.

Recall, for example, that Lepsius’s story is fundamentally a tale of a shift from what had once been a logical, syllabic Hebrew alphabet — where alphabetic symbols represented both consonants and vowels — to a decayed, abstract alphabet, where these symbols represented only consonants. The Council tells a similar story of a shift from an illogical, consonantal Arabic alphabet to an intelligible, phonic Latin one. The progress narratives, it is true, are reversed. The first is a story of decadence and the second is a story of restoration. In the first, the earlier alphabet was intelligible and phonic whereas the later alphabet was unintelligible and abstract — and in the second, for the Council, the earlier alphabet was unintelligible and abstract whereas the later alphabet was intelligible and phonic. But in each, the alphabets shifted toward or away from abstraction and toward or away from intelligibility.

Worth considering, then, is how each addressed the problem of abstraction more generally. For Lepsius, once again, “abstraction” — the absence of a correspondence between sound and symbol — resided in the absence of symbols to represent vowels. For the Council, however, this issue of the consonantal alphabet had been resolved — Arabic had already been abandoned. But the specter of abstraction still clearly haunted their work — and the Council members still worried about how to create a complete correspondence between sound and symbol. In particular, they worried about how to resolve the absence of such a correspondence in the spaces between strings of otherwise related letters.

For the Turkish Language Council, that is to say, it was not the absence of a symbol to represent a vowel sound, but the presence of a space between strings of letters that raised the problem of abstraction and illogic. This question of what to do about spaces between short strings of words was thus by no means a simple problem of uniform orthography — one that could be resolved either way. Rather, the spaces between short words caused the same problems for the Council that the consonantal alphabet did for Lepsius: the space was a place in a series that obliterated any correspondence or concrete link between a sound being made and a symbol appearing. It was a pause — perhaps — in thought, but certainly not in speech. It was where phonetic rationality broke down.

Even as they valorized this phonetic rationality, therefore, the members of the Turkish Language Council invoked the same infinite, unreachable limit point that Lepsius did. Just as Lepsius insisted that intelligible alphabets would do away as much as possible with moments of non-correspondence, so too did the Turkish Language Council. But just as in Lepsius’s analysis, this limit point was impossible to meet — creating, essentially, an infinitely recurring alphabetic algorithm — so too was it impossible for the Turkish Language Council to meet. Filling up all of the spaces, seeing to it that the alphabet was purely concrete, was something that could happen only in the future. Like Lepsius’s Standard Alphabet, the Turkish Language Council’s alphabet could never cease operating. It was always algorithmic rather than representational. It always contained within it the potential not just for a finite, functional endpoint, but for an infinite, dysfunctional accident.

Once again, therefore, by the Language Council’s own logic, Turkish written in Arabic script was not actually the danger. Yes, Ottoman Turkish made more apparent the likelihood that letters could and would act on their own, detached from phonic rationality or relevance. But the true moment of algorithmic politics occurred in the shift from Arabic to Latin script. What had been a problem of mismatched sets became a problem instead of infinite iteration. The process of shifting from a multivalent system to a system that — at some limit point — would reach pure, singular equality created a context in which symbols could do their own work, detached from human linguistic sounds. It made possible an environment in which symbols could and did contemplate. In short, the Alphabet Revolution itself allowed strings of letters to act, symbolically, as both matter and machine. It also became a place where these letters could begin to contemplate zero and infinity as their production went out of control.

Conclusion: Letters and God?

But what does any of this have to do with the fact that Arabic script remains a feature of mosques, but no longer of street signs, in Turkey? Yes, alphabetic symbols may be inherently computational. And yes, an apprehension that alphabetic symbols can operate as symbolic machines — as algorithms — seems to underlie the politics and legislation advocated and enacted by Turkish Republican social engineers. It may even be the case that when phonetic logic meets revolutionary logic, as it does when alphabets like Ottoman Turkish are eradicated, the potential for the glitch — for the moment of infinite algorithmic contemplation — that exists in all alphabets is realized or released. Perhaps once an Alphabet Revolution starts it must, by definition, continue indefinitely. But if we can tell the story of the Alphabet Revolution from the perspective of one set of nonhuman figures (the letters), can we also tell it from the perspective of another nonhuman element (God)? These final few paragraphs are the beginning of an answer to this question.

That there is some relationship — even if specious — between alphabets and religious belief becomes clear not only in the ongoing association between script reform and secular reform in Turkish history and historiography, but also in the fact that European linguists like Lepsius were usually speaking to Christian missionaries, [55] and in the sectarian nature of debates surrounding official or legislated script in other modern nation-states. [56] Perhaps more interesting than the link between alphabets and religion, however, is the link between computation and religious belief that has also appeared in recent scholarship. This relationship has played out in discussions of the similarities (if, again, specious [57]) between “cyberspace” and “sacred space,” [58] as well as in analyses of the dissolution of human versus nonhuman (here, human versus God) dichotomies in the face of digital existence. [59] In each of these scenarios, computation explodes the conventional, human-centered frameworks that have made liberal humanist political engagement — be this engagement spatial or subjective — possible. And, given these associations, it perhaps makes sense that we should consider the algorithmic or computational qualities of the Alphabet Revolution as the qualities that lend it its supposed secularizing force.

If we narrow our focus to the symbols themselves, however, we can also begin to appreciate some of the more specific defining characteristics of the secularism that might derive from script reform. Religious letters, after all, are much like algorithmic symbols: their value is in their function rather than in their expression, they do work, and ideally they — themselves — contemplate or intuit the infinite without any external human input. Revealed by God, these letters are not supposed to be tools of human communication or human dialogue. Indeed, the inexpressive quality of the letters revealed by God has appeared not only in academic writing on religion, but also in early Islamic commentary on revelation, and in both mystical and orthodox Ottoman religious practice. The idea that revealed letters cannot be reduced to instruments for human use — and certainly not for human communicative use — is a theme that reappears throughout the rhetoric surrounding divine script.

Eugenio Trías, for instance, has characterized the divinely revealed symbol as a “unity that presupposes a break.” [60] As an activity, event, or process rather than an inert object, Trías continues, the religious symbol first suggests a “relation between a presence of some kind that reveals itself and its recognition by a particular witness,” and second, the inability of the witness to comprehend the meaning of this revelation. “Every inquiry into meaning,” Trías writes, “seems to be annulled. This is why the symbol … always retains a mystical quality; its lot is precisely to show how the symbol remains structurally bound to a secret, occult and holy substrate.” [61]

Although the symbol may contain the trappings of human communication, therefore, these trappings have little to do with the symbol’s function in revelation. The symbol exists or lives, according to Trías, in order to work through the sacred — regardless of the response or comprehension of human witnesses. Like the algorithm that processes indefinitely — and that may in fact (if irrelevantly) carry some importance to human users or operators — the religious symbol exists for its own sake. Witnesses may very well experience a communicative event upon regarding the symbol, but the symbol itself cannot be reduced to a tool of communication, and such an event is by definition a failure. The symbol has its own occult work to do.

Within the early Islamic tradition, many mystics developed similar interpretations of the religious textual symbol as an inexpressive and non-instrumental artifact that nonetheless did work. The heterodox Sufi Ibn ‘Arabī, for example, devoted much time to exploring what one writer has called the “language-defeating reality of God.” [62] Moreover, according to Ibn ‘Arabī, “bewilderment” was perhaps “the best way the believer has of escaping the metaphysical trap of his own perspectiveness.” [63] Writing a century earlier, and on the other side of the Muslim world, another Sufi scholar, Kharkūshī, compiled a dream interpretation manual in which he argued, in a related manner, that the difference between prophets and messengers is that “the former receive revelation in dreams,” whereas “the latter [do so] from the tongue of an angel, while in a waking state.” [64]

Once again, humans are by no means absent from these meditations on the communicative impasse of the revealed symbol. But they are unimportant to it. Reading as a human, all that Ibn ‘Arabī can hope for in his relation to revealed letters is confusion. There is no escape from his own “perspectiveness,” and he is aware that his perspective has little to do with God’s revealed signs or symbols. Similarly, although God (as an exception) is clearly capable of reducing symbols to tools of human communication — via “the tongue of an angel” — Kharkūshī defines a prophet as a receptacle for, rather than as an interlocutor with, letters that do not communicate. Letters are revealed to a prophet, in their entirety, in a non-linguistic dream. Prophets do not seek to communicate via letters because, for prophets, communication is not the point. In both cases, again, the symbols are doing their own work, across their own environments. Or, as Ian Richard Netton — whose writing on revelation, it is true, does ordinarily emphasize human comprehension — has put it, symbols in the form of both letters and events “constitute … further examples of those signs which God has promised to show on the horizons and in themselves.” [65] But horizons, of course, can never be reached — any more than the endpoint to infinite algorithmic iteration can be reached. Both are limits, and neither has anything to do with the finite transmission of a comprehensible message.

Ottoman commentators on revelation — both Sufi and orthodox — reached similar conclusions about the non-communicative quality of revealed letters. [66] Moreover, the physical work that religious symbols could do was a well-established theme in mystical commentary and practice in the Ottoman Empire well into the twentieth century. [67] When early twentieth-century Republican reformers left the letters revealed by God out of their Alphabet Revolution, therefore, they were — given this tradition of revelation — by no means also leaving these characters inert, inactive, or irrelevant. On the contrary, by introducing a symbolic process that already existed in the mosque into other public spaces — by turning all alphabetic characters into symbols that did work or that could potentially iterate infinitely — these reformers were turning the non-subjective, non-syntactical, non-cognitive, non-communicative, and operational processes that happened only in the mosque into processes that happened everywhere. They were mapping the Republic onto the mosque.

But this does not mean that the reform was secretly a religious reform, or that it was somehow non-secular. Rather, it means that the character of Republican Turkish secularism is perhaps not what many commentators have assumed it to be. As Talal Asad has influentially argued, presupposing an opposition between the religious and the secular is not the most useful starting point for analyses of secularism. [68] And in Turkey, too, it is difficult to make the case that there was a secular break — in culture, civilization, subjectivity, or space — when secularism became the basis of Republican nationalism.

Secularism in Turkey had instead become a symbolic operation. It had become a means of distinguishing between the humans who used letters as a tool for communication and algorithmic alphabets or God — things irrelevant to the problems of both instrumentality and communication. The Alphabet Revolution was a secular revolution because it bracketed the human, not God, and because it thereby removed the human from any religious concerns. Human communication became a sideshow in the story of symbolic operation and divine non-communication.

There is nonetheless a key difference between eradicated Ottoman Turkish as a secular dream come true for a computer speech synthesizer and revealed writing as a religious dream come true for a prophet. Although each occurs in a symbolic environment given over to work rather than to expression, and although neither values communication above contemplation, divine revelation, unlike algorithmic processing, allows for repeated (if failed) acts of human witnessing. Divine revelation rests on the idea that potentially, at least, revealed letters will spell out words — and in fact the supernaturally rational word of God. Secular algorithmic politics, contrarily, leaves no space at all for human participation. Its norm is machines that think and are thought. Whereas Ibn ‘Arabī’s God could tolerate confused humans, the Republican secularism of the Alphabet Revolution emphatically could not.

To the extent that a shift in script can represent a movement from one mode of existence to another, then, this movement will never be cultural or civilizational, however one might define those words. It will never suggest, for instance, a transition from a self-contained “Islamic East” to a self-contained “secular West” — from one inert set of letters with certain associations to another inert set of letters with alternative associations. Rather, such a movement will always be just one part of an infinite series of symbolic moves — a series with a limit, it is true, but without any obvious endpoint. It will be a movement, in short, of algorithmic radicalism.

Notes

[1] Birol Caymaz and Emmanuel Szurek, “La révolution au pied de la lettre. L’invention de ‘l’alphabet turc,'” European Journal of Turkish Studies 6 (2007): http://ejts.revues.org/1363 (accessed on October 23, 2012), para 3.

[2] Geoffrey Lewis, The Turkish Language Reform: A Catastrophic Success (New York: Oxford University Press, 1999), 27.

[3] Caymaz and Szurek, para 3.

[4] Frances Trix, “The Stamboul Alphabet of Shemseddin Sami Bey: Precursor to Turkish Script Reform,” International Journal of Middle East Studies 31 (1999): 255-72, 255.

[5] Hale Yılmaz, “Learning to Read (Again): The Social Experiences of Turkey’s 1928 Alphabet Reform,” International Journal of Middle East Studies 43 (2011): 677-97, 680.

[6] John Perry, “Language Reform in Turkey and Iran,” International Journal of Middle East Studies 17 (1985): 295-311, 309; Lewis, 27.

[7] Caymaz and Szurek, para 41.

[8] Yılmaz, 678.

[9] Caymaz and Szurek, para 27 (paraphrasing Mustafa Kemal Atatürk).

[10] Ibid, para 43.

[11] J.L. Austin, How to Do Things With Words (Cambridge: Harvard University Press, 1975), 109-14. N. Katherine Hayles influentially argues that machine code conforms to Austin’s understanding of words that “do things” much more closely than human or “natural” language does in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (Chicago: University of Chicago Press, 1999), 275.

[12] Shintaro Miyazaki, “Algorythmics: Understanding Microtemporality in Computational Cultures,” Computational Culture 2 (September 2012): http://computationalculture.net/article/soft-thought (accessed on October 9, 2012), 2.

[13] Ibid, 9.

[14] “For a long time ‘algorism’ meant the specific step-by-step method of performing written elementary arithmetic inscribed on materials such as sand, wax and paper.” Ibid, 2.

[15] “The principal technicity of these Arabic ‘algorisms’ as operations of shifting, deleting, adding, is evidenced in the ⇒ sign by Konrad Zuse and was later incorporated in the syntax of higher level programming languages such as Algol 60, as well as inscribed on the level of microprogramming in today’s CPU.” Ibid, 9.

[16] Ibid, 2.

[17] Luciana Parisi and Stamatia Portanova, “Soft Thought (in architecture and choreography),” Computational Culture 1 (November 2011): http://computationalculture.net/article/soft-thought (accessed on October 9, 2012), 1.

[18] Ibid, 2.

[19] Ibid, 17.

[20] Ibid.

[21] Ibid, 8.

[22] Ibid.

[23] Ibid, 9.

[24] Ibid, 12.

[25] Ibid.

[26] Ibid, 16.

[27] Lewis, 27.

[28] Ibid, 37.

[29] For more on the relationship between computation and dreaming, see Ruth A. Miller, Seven Stories of Threatening Speech: Women’s Suffrage Meets Machine Code (Ann Arbor: University of Michigan Press, 2011), 179.

[30] Caymaz and Szurek, paras 20-21; Yılmaz, 691.

[31] Caymaz and Szurek, para 27.

[32] Yılmaz, 677, 680.

[33] Ibid, 680.

[34] Ibid.

[35] Consider, for instance, Caren Kaplan’s discussion of “targeted” advertising in “Precision Targets: GPS and the Militarization of U.S. Consumer Identity,” American Quarterly 58.3 (September 2006): 693-713.

[36] Yılmaz, 685.

[37] Ibid.

[38] İmlâ Lûgati (İstanbul: Devlet Matbaası, 1928) [prepared by the Dil Encümeni], vi.

[39] “While in algebraic notation a + b = c is merely a symbolic abstract statement which is not bound to any form of concrete execution, and thus can be changed in to a = c – b or b = c – a. The execution of the assignment a + b ⇒ c is non reversible. Only once the values of a + b have been transferred and assigned to c can it get overwritten, reassigned to another variable or deleted.” Miyazaki, 3.

[40] Trix, 255, 258.

[41] Ibid, 258.

[42] Ibid.

[43] For example, “the first principle [is] that the orthography of any language should never use the same letter for different sounds, nor different letters for the same sound.” C.R. Lepsius, Standard Alphabet for Reducing Unwritten Languages and Foreign Graphic Systems to a Uniform Orthography in European Letters (London: Williams and Norgate, 1863), 31 (italics in original).

[44] Ibid, 39.

[45] Ibid, 79.

[46] Ibid, 175.

[47] Ibid.

[48] Caymaz and Szurek, para 7.

[49] Ibid.

[50] Trix, 260.

[51] Ibid, 258.

[52] Ibid, 260.

[53] İmlâ Lûgati, vi.

[54] Ibid, xiii-xiv.

[55] Lepsius, 23.

[56] Trix, 256-57.

[57] Elaine Graham, Representations of the Post/Human: Monsters, Aliens, and Others in Popular Culture (Manchester: Manchester University Press, 2002), 230.

[58] Ibid, 232.

[59] Ibid, 217.

[60] Eugenio Trías, “Thinking Religion: the Symbol and the Sacred,” in Jacques Derrida and Gianni Vattimo, eds., Religion (Stanford: Stanford University Press, 1996): 95-110, 103.

[61] Ibid, 103 (italics in original).

[62] Ian Almond, “The Honesty of the Perplexed: Derrida and Ibn ‘Arabī on ‘Bewilderment,'” Journal of the American Academy of Religion 70.3 (September 2002): 515-37, 516.

[63] Ibid, 516, 525-26.

[64] John C. Lamoreaux, The Early Muslim Tradition of Dream Interpretation (Albany: SUNY Press, 2002), 65.

[65] Ian Richard Netton, “Towards a Modern Tafsīr of Sūrat al-Kahf: Structure and Semiotics,” Journal of Qur’anic Studies 2.1 (2000): 67-87, 79-80.

[66] Nabil al-Tikriti, “Kalam in the Service of State: Apostasy and the Defining of Ottoman Islamic Identity,” in Hakan Karateke and Maurus Reinkowski, eds. Legitimizing the Order: The Ottoman Rhetoric of State Power (Leiden: Brill, 2005): 131-50, 141.

[67] Hakan T. Karateke, “Opium for the Subjects? Religiosity as a Legitimizing Factor for the Ottoman Sultan,” in Hakan Karateke and Maurus Reinkowski, eds., 111-30, 114-15.

[68] Talal Asad, Formations of the Secular: Christianity, Islam, Modernity (Stanford: Stanford University Press, 2003), 25.

Ruth Miller is Professor of History at the University of Massachusetts-Boston. She has published a number of books and essays in the fields of Islamic law, feminist theory, and political theory. Her most recent book, Snarl: In Defense of Stalled Traffic and Faulty Networks, is forthcoming with the University of Michigan Press.