Guest Column: Power Tools

By Michael Kowalski

   The drive to create acorporal intelligence stems from what would have been known before the Enlightenment as a religious impulse. This is what distinguishes acorporal intelligence from all the prostheses invented through the ages to extend human physical power. The carpenter who trades his handsaw for an electrically powered circular saw undergoes no crisis of faith. Quite the contrary: the power of the carpenter and the power of the saw are conflated in his mind. The same dynamic of human empowerment has applied to most of the history of mental tool development, albeit with some cultural pushback. Writing, the invention of the abacus, the calculus, mechanical and electronic calculators: all of these tools appeared at first to be passive extensions of biological intelligence. The simultaneous emergence of atomic weaponry and general-purpose programmable computing machines changed the terms of the debate. A cosmic psychiatrist putting the human race on the couch might well have asked, “Why do you persist in developing tools that end up using you?” I suggest that the answer springs at least in part from the human drive to seek, and to invent if necessary, transcendence—or, failing that, to disguise the absence of it. The Israelites needed their Golden Calf for psychic relief after generations of homelessness. Today we need a miraculous quick fix to avoid being fricasseed by Mother Earth. Any old deus ex machina might help us get through tomorrow.

The syndrome has its lighter side. The case of a famous early twentieth-century horse and its folkloric antecedents sheds some light on the human drive to attribute intelligence to the merely uncanny. The psychologist’s “Clever Hans effect” is based upon the history of Wilhelm von Osten’s eponymous horse, which was supposed to be able to do arithmetic by tapping its foot until the correct answer to a question was reached. While tapping, the horse turned out to have an extraordinary sensitivity to the shifting demeanor of the person who had asked it the question. Amazing as the empathy of the horse might have been—and notwithstanding how well the story serves to illuminate the behavior of humans possessed of a superior talent for pleasing their audience—it’s the antecedents of the history of “Clever Hans” that tell us the most about ourselves. The ironically named “Clever” Hans in the story as collected by the Brothers Grimm[1] actually ends up rather badly, but Hans’s literary cousin—the inept, ridiculed dunce who improbably shows up his betters by making good at the end of the fable—is a staple of European folklore. It is this originary Clever Hans, the stock figure of folk tales who succeeds in showing up his learned “betters”—and not just the genuinely sensitive horse trained by von Osten—who has something to teach us: namely, that we are apt to find transcendence where we want it, if necessary by imputing competence. From the Golden Calf, to Pygmalion’s stony lover, to Wolfgang von Kempelen’s chess-playing Mechanical Turk in the court of Maria Theresa, to the “patients” of Joseph Weizenbaum’s ELIZA program, humans have often found it very hard to resist ascribing sentience, intentionality, and delightfully uncanny competence to the curious creations of other humans. But it does help if the marvelous creation appears to have come from afar. Clever Hans shouldn’t be stabled next door.

ELIZA, Weizenbaum’s emulation of the conversational style of a clinical psychologist, may be the exception that tests the latter rule. It was written down the corridor, so to speak, at M.I.T. in 1966.[2] It is exceedingly primitive in comparison to the latest syntax-learning machines designed around advanced theories of pattern matching, neural networks, and generative adversarial networks. Back in the 1970s Professor Weizenbaum was dismayed to find his computer-literate colleagues locking themselves in rooms for ersatz therapy sessions with his code. It turns out that yielding to the temptation to ascribe sentience, intention, and even empathy to the latest chatbox is nothing new. Rather than chatting for another hour with the latest Klatschbox, today’s undiscriminating web-tool addict would do better to read a few chapters of Computer Power and Human Reason, Weizenbaum’s brilliantly non-technical description of computer programs as superfast token-shuffling systems that remain forever ignorant of semantics. The book appeared in 1976, which is to say sufficiently long ago for it to be written off now as quaint. But—except for the introduction of parallel processing—the fundamentals of how computers do what they do haven’t changed much since then, and neither, apparently, has end users’ recurrent need to believe that it’s the device, and not they themselves, that is doing most of the work supposedly performed by some species of ghost in the machine. A profoundly pathetic Post-Postmodern irony lies buried in all of this. While the death of the individual human author was decreed long ago by both critics and writers who would prefer to emphasize the work performed by active readers in creating both the aura of a literary work and the notion of its author, chatbox users seem to see nothing ironic about reversing the terms of the authorial contract yet again. Now, instead of holding on to some of the credit for having elevated a machine-generated text to the status of at least proto-literature, chatbox readers appear more than happy to grant acorporal intelligence the kind of sole authorship that they’d rather deny to the hapless biological agent.
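To make the token shuffling concrete, here is a minimal sketch in Python of the kind of pattern-matching substitution that ELIZA performed. The rules and canned responses below are invented for illustration and are far cruder than Weizenbaum’s actual script, which also reflected pronouns (“my” becoming “your”), itself just more token shuffling:

```python
import re
import random

# A toy ELIZA: each rule pairs a regular expression with response
# templates. The program matches surface patterns and echoes the
# user's own tokens back; at no point does it represent meaning.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?"]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def respond(utterance: str) -> str:
    """Return a 'therapeutic' reply by shuffling the user's tokens."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel ignored by my mother"))
    # e.g. "Why do you feel ignored by my mother?" -- the ungrammatical
    # echo ("my" where "your" belongs) betrays that no meaning is involved.
```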

But the relationship of humans to their tools plays out within an infinity of situations, and the majority of tools are not linguistic. Tool use is common among mammals. Many tools don’t require much expertise—let alone a metaphysical or aesthetic infusion from the user—in order to amplify human agency. Not only is this true for the vast majority of physical labor-saving devices, it’s also been true of mental labor-saving devices from the cuneiform clay tablet all the way down to your favorite spreadsheet. But when a tool begins to facilitate, or even obviate, decisions formerly made by the tool user, then there are social implications. Misunderstandings, suspicions, and unexpected flickers of resistance can flare up amongst the peasants. This is the flip side of the human drive to fabricate transcendence: the fear of losing control in the face of the idol, of losing even the opportunity to contribute, of being denied credit for work done, or perhaps even worse—being the angst-ridden creatures that we are—of being given credit or taking the blame for work that we didn’t do.

We’re all familiar with the casual epithet “Luddite,” but discomfort with new tools need not end up in political grumbling, let alone machine-smashing. It can simply manifest itself in barely disguised embarrassment, as in the “I don’t deserve this award” acceptance speeches that are so common at the Oscars and Grammys. Consider the history of Western musical performance. At the turn of the nineteenth century, Beethoven could improvise the technically flashy and musically intricate cadenzas that marked the concluding sections of his piano concertos. He wrote them down only for other pianists. By the end of the nineteenth century writing them down was standard practice: cadenzas were written out to sound improvised.[3] A hundred years later pianists could splice together the best of forty takes in a recording studio to create the illusion of superhuman technique. As magical as the aura surrounding modern performance may be, it’s also true that the performer’s job has grown progressively less heroic over the last two hundred years thanks to a vastly expanded toolkit. The extent of a musician’s embarrassment about taking too much credit for what’s happening onstage can be calibrated by the number of times he or she looks up to the ceiling—typically during a safe slow passage—for divine inspiration. The reductio ad absurdum of this syndrome might be the pop keyboard player’s choreography while backing up a highly amplified vocalist. Gyrating and grimacing, the keyboardist pretends to be contributing mightily to the sound of the string section that s/he conjured up by pressing a button. Should anyone be tempted to write off such an anecdote as trivial, I’d point out that Chelsea Manning paid a very steep price for casting an analogous light on the infinitely more serious angst of the button-pushing drone pilot.

Embedded as we are in capitalism, both economics and ethics tend to boil down to questions of where credit is due, how much, and who pays. Defining, let alone determining, credit when smart tools are involved is anything but simple. Imagine this by-no-means improbable career trajectory spanning fifty years in the recent evolution of electronic music tools:

John (ca. 1970) writes a program from scratch on the Physics Department computer to generate a sound that successfully emulates a jazz guitar. He gets credit, but chiefly for being a nerd. John (ca. 1974) writes another application that uses the guitar emulator to play a blues tune that he wrote. He gets more credit—maybe even a fellowship in some graduate music program—and he still owns the rights to his tune. John (ca. 1978) incorporates his emulator into a third program that automatically generates a hundred blues tunes at a pop, all reflecting his musical taste and experience. He’s invited to work for a large manufacturer of electronic keyboards, but it’s unclear who owns the rights to the songs that his algorithm “writes” from now on. John’s friend (ca. 1982) uses a state-of-the-art synthesizer incorporating John’s software to create the signature sound of his New Wave band. Can John’s friend take any credit for exploiting algorithmic output that was based originally on John’s musicianship and programming skills? Both John and his friend (2020), now with grandchildren in college, go out and buy the kids the latest commercial incarnation of John’s old blues factory, one that enables them to generate, curate, and package a flood of new 1970s-style blues “performances” by just setting a couple of parameters. The resulting podcasts attract some attention on public radio. Are there royalties or streaming rights to consider?

The courts, working with legal codes that lag behind technical developments by at least a decade and a half, aren’t the place to answer such questions. We hapless humans, whose ability to calculate the consequences of our actions doesn’t seem to be matched by a comparable ability to actually imagine them, have lately conjured for ourselves a package of dilemmas that’s no less pernicious for having been assembled mostly in innocence:

 

• How to discriminate among the many flavors of acorporal intelligence.

• How to justify creating tools whose very use threatens to recklessly redefine the agency of the user in ways that we have yet to understand.

• How to discourage humans from taking credit for work that was done by a tool while also being careful not to overlook the extent to which a tool user’s skill and knowledge make the tool’s efficacy—to say nothing of its aura—possible in the first place.

• And finally, how to understand the notion of mind such that it encompasses both biological intelligence and acorporal intelligence[4]—but without obliterating their distinctiveness. The golf champion and the jazz virtuoso are brilliant in different ways, and we cheat ourselves by asking that they play each other’s game.

 

An examination of the semantic remains of the buzzword formerly known as “algorithm” should shed some light on the first and last problems.

   

The perils of dashboarding

 

The tipping point in my fable of the synthetic blues guitar came in 1978 with the introduction of the tune generator. Up to that point John had been borrowing or writing mathematical models that imitated the sound of a guitar. Once the decision to use this or that acoustical analysis had been made, there were no more problems to be solved beyond making the program as efficient and reliable as possible. The vast majority of digital applications—whether advertised as “smart” or not—fall into this mundane category of passive tools. The source of all of the subsequent confusion concerning credit was the introduction of a programmatic decision-making machine, namely, John’s blues algorithm. John’s application now began to control itself—within limits—instead of merely doing what it was told. Our future understanding of the intelligence-tool-agency-responsibility-credit language game[5] will need to be grounded in an acute perception of such limits, in an awareness of who sets and who controls them, as well as in an understanding of the logic encapsulated in the decision-making machine. Unfortunately, these factors are infinitely variable and utterly resistant to any simplistic analysis that’s colored by a sentimental preoccupation with the competition between biological and acorporal intelligence.
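The distinction can be put in code. What follows is a minimal sketch, in present-day Python rather than anything John could have run in 1978, contrasting a passive tool, which only does what it is told, with a decision-making machine, which chooses within limits that its programmer set. The function names and the toy scale are hypothetical:

```python
import math
import random

# Passive tool: deterministic synthesis. Same input, same output;
# every decision is made by the person who calls it.
def pluck(frequency_hz: float, duration_s: float,
          sample_rate: int = 44100) -> list[float]:
    """Render a decaying sine wave: a crude stand-in for a string model."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * frequency_hz * i / sample_rate)
            * math.exp(-3.0 * i / n) for i in range(n)]

# Decision-making machine: the program now chooses notes itself, but only
# within limits the programmer encoded (the scale, the bar count, the seed).
BLUES_SCALE_A = [110.00, 130.81, 146.83, 155.56, 164.81, 196.00]  # Hz

def generate_blues(bars: int = 12, seed: int | None = None) -> list[float]:
    """Pick one scale tone per beat: John's taste, frozen into a repertoire."""
    rng = random.Random(seed)
    return [rng.choice(BLUES_SCALE_A) for _beat in range(bars * 4)]

# The notes below were 'chosen' by the program; the limits were John's.
print(generate_blues(bars=1, seed=42))
```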

A great deal of the despondency of diehard advocates for biological intelligence, as well as the devil-may-care enthusiasm of recent converts to chatbox culture, stems from their mutual technical illiteracy. What I like to characterize as web dashboarding is at least as dangerous as catching a wave at Huntington Beach. Overconfidence in either type of surfing, whether on the water or on the web, can lead to wipeouts just as easily as can a novice’s fear or lack of experience. Becoming an expert in the frontends of hundreds of the latest apps without understanding the rudimentary materials of computing can create the illusion of tech-savviness—and with it a gullibility that only the most scrupulous owner of the next power tool kit could resist exploiting.

The fundamental elements of the design of any acorporal intelligence agent are still much the same as they were in the 1960s: information, methods of accessing that information, i.e., indices, and models in which the information is used to create more information. Let’s start with the notion of information, the basic material of everything computable. To begin with, if a datum cannot be reduced either to the assignment and cross-referencing of descriptive tokens or to the recording of quantifiable physical measurements, then it does not—cannot—exist for a digital system. Such a datum—for instance, your current feeling about your ex-lover—might qualify as information to us, but it will remain outside the purview of any algorithm. Note that I referred here to the assignment and cross-referencing of tokens, as opposed to Saussure’s linguistic signs, because I want to rule out any suggestion that the binding of a sign to its signification[6] has already been achieved algorithmically. Rather than signification having been achieved, its appearances continue to be simulated. All meaning in acorporal intelligence agents is simulated meaning. The algorithm functions as if it “knew” something. All of the information that an acorporal intelligence agent furnishes to its constituent algorithms consists of arithmetic data, specimens of language, and connections based upon identifying patterns of recurrence. The language specimens consist of reports whose ultimate antecedents are undefinable in the system except by reference to other language specimens which are in turn equally lacking in any correspondence to lived experience. A measurement of the number of such contextless specimens in a data system, or of the rate at which they can reliably be exchanged among systems, is what is still—following the pioneering work of Claude Shannon at Bell Labs in the late 1940s and its popularization by Warren Weaver—often referred to as information. When the acorporal intelligence researcher wants to impress you, she may refer to some astronomical figure approaching the number of atoms in the universe that purportedly corresponds to the information content of her database. The fact that such totals routinely include traces of the calculations and token shuffling produced during the program’s operation—the system’s observations about itself, in other words—is conveniently overlooked. It’s as if all the hemming and hawing I indulged in this afternoon while deciding what to have for dinner were important to the rest of mankind. It’s small wonder that systems which produce second- and third-order information—observations about their own observations—seem so intimidating: they are literally full of themselves. The anthropologist-cybernetician Gregory Bateson had it right: information is not any old datum; it’s a “difference that makes a difference.” Primarily to us, and only secondarily to our tools.
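For readers who want to see what the Shannon-style measure actually counts, here is a minimal sketch assuming nothing beyond the textbook formula H = −Σ p·log2(p) applied to token frequencies; the toy sentence is my own:

```python
import math
from collections import Counter

def shannon_entropy_bits(tokens: list[str]) -> float:
    """Average information per token, in bits: H = -sum(p * log2(p)).

    The measure registers only frequencies of recurrence. It assigns
    the same value to a love letter and to its shuffled tokens --
    Bateson's 'difference that makes a difference' is nowhere in
    the arithmetic.
    """
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

text = "the cat sat on the mat".split()
print(shannon_entropy_bits(text))        # ~2.25 bits per token
print(shannon_entropy_bits(text[::-1]))  # identical: order is invisible here
```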

Whenever raw statistics about the size of some acorporal intelligence agent’s database are cited as proof of its transcendental authority, the layperson would be well-advised to redirect the conversation to the problem of access. How was the database built? By whom? How does the acorporal intelligence agent manage data access and sharing among its many constituent algorithms? How is the information indexed? Does the index resemble an old phone directory? The Library of Congress classifications? Of course not. How old-fashioned! The databases of acorporal intelligence agents create their own access maps. The problem is that creating a map typically requires skills and perspectives that don’t inhere in the data itself. Remember Bateson’s dictum that information is a difference that makes a difference to us. Context is key, and not everything that’s needed to create a meaningful context may exist in the computer’s own data warehouse. This is why modern internet search engines make money: they somehow manage to infer a context for the searches that you request. Of course they don’t tell you how they do it. That would not only entail revealing a trade secret on which their advertising revenue depends; it would also constitute an admission of inadequacy that no modern bureaucracy is likely ever to make: that in response to the stupefying number of mental search strategies which any person would employ to create an interpretive map for understanding the mundane situations that come up in the course of surviving just a single day, a search engine typically will employ only a few models to generate its own radically simplified analogies. This is the Achilles heel of all algorithms, and not just of search engines: out of a theoretical infinity of solutions to any representational problem, they encapsulate and deploy at best only a few. This implicit endorsement of a handful of representational models to the exclusion of all other plausible alternatives—in other words, what amounts to the consecration of a highly constrained simulated view as the effective definition of the real—is what the philosopher of maps Rasmus Winther has characterized as a pernicious reification. I suggest that the term goes a long way toward explaining the power of many if not most of the acorporal intelligence agents that have been making such a big splash in our chatbox era.
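To see how completely one representational model can be consecrated as the effective definition of the real, consider a minimal sketch of the bag-of-words inverted index that underlies classical text search. The documents and the scoring are invented for illustration, and real engines layer many refinements on top, but the basic reduction is the same: word order, tone, and situation vanish, and whatever the index cannot represent does not exist for the query:

```python
from collections import defaultdict

# One model out of a theoretical infinity: a document becomes nothing
# but the set of words it contains (the 'bag of words').
DOCS = {
    "d1": "the old man warns the fleet",
    "d2": "the fleet sails for india",
    "d3": "an old horse taps its hoof",
}

index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in DOCS.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query: str) -> list[str]:
    """Rank documents by how many query words they share -- nothing more."""
    scores: dict[str, int] = defaultdict(int)
    for word in query.split():
        for doc_id in index.get(word, set()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

# d1 ranks first (it shares both words); the tie between d2 and d3,
# like everything else the model cannot see, is settled arbitrarily.
print(search("old fleet"))
```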

But such perniciousness isn’t really an attribute of acorporal intelligence per se. It’s more a description of us, the fans of Clever Hans and ELIZA, clumsily rechanneling religious impulses that somehow refuse to die. Meanwhile, algorithms don’t have cares, worries, or bad habits any more than a proof in Euclidean geometry does. Of course the possibility can’t be ruled out that a massively complex and hierarchical assembly of algorithms might someday yield a true self-sustaining acorporal intelligence that doesn’t have to rely on a fixed repertoire of models and subsist on a diet of cultural vampirism. But for now the best we can do is not to lose sight of the fundamental limitation of acorporal intelligence, namely: with the exception of any environmental measurements that might be fed directly into a “smart” app through physical sensors, the models and algorithms employed by an app, and hence the app itself, can only be said to “live” vicariously. Their qualitative, as opposed to quantitative, knowledge consists of second-hand reports couched in a language that they can parse but not understand.

None of this disqualifies acorporal intelligence as a species of intelligence tout court, particularly not in cases where the math in an algorithm doesn’t denude information of its context. Forecasting the weather is a feat of gathering and synthesizing data that clearly surpasses the limits of biological intelligence. Instead of fretting about the presence or absence of semantic competence in some bundle of algorithms, we’d be better served by turning our imaginative energy to the possibility of a more creative understanding of that capacious term, “mind.” For starters, our understanding of the biological mind can be extended beyond the contents of the cranium to include the entire nervous and muscular systems, interactions with other intelligences, and even aspects of the natural environment. The same can be said, mutatis mutandis, for acorporal intelligence. The philosophers Mark Rowlands (The New Science of the Mind) and W. Teed Rockwell (Neither Brain nor Ghost) have joined Bateson in considering the implications of just such an enrichment of the concept of mind. Their sketches of environmental feedback loops point the way to an expansion of the idea of mind that encompasses not only most forms of RNA-DNA-based life but also the many variants of acorporal intelligence. That said, the questions which they raise should also lead us to consider whether profound differences in environmental embeddedness might render any mutually advantageous meeting of the biological and acorporal minds very difficult to manage, or at the very least, limited in ways that we haven’t even begun to imagine.

   

A colonialism of an entirely different sort

 

These brief analyses and anecdotes are offered as a navigational guide. Many varieties of nontrivial collaboration between biological and acorporal agents will have to be negotiated if our species is to survive. But the enclosure of the cultural commons brought about by the attempt to capture every trace of human activity in a single digitized space is a radical move without precedent in the history of tool use, and it will surely render those collaborative negotiations difficult. Before heedlessly acceding to further acts of conceptual cannibalism it would be wise to remember a bittersweet warning from the Renaissance Portuguese poet Luís Vaz de Camões. In the heart of his epic celebration of Da Gama’s voyage that opened India and the Spice Islands to European exploitation, the poet’s anonymous “Old Man” utters the following lament as the fleet sets sail from Lisbon:

 

How much better for us, Prometheus,

And how much less hurtful to the world,

If your illustrious creation hadn’t been moved

To act on its burning ambition.

 

   — Os Lusíadas, Canto IV[7]

 

Vaz de Camões was writing a little less than a hundred years after Da Gama’s triumphal voyage of discovery, by which time Portugal’s initial euphoria had dissolved in cutthroat competition with larger European powers to colonize the hitherto unreachable. Of course Zeus eventually pardoned Prometheus for sharing the twin fires of consciousness and will, but only after eons of Prometheus serving as a tasty lunchtime snack for the birds.

But who cares? Who reads Renaissance epics nowadays? Facile memes continue to go viral—but in near total ignorance of the history of thought about thought—on the heels of every new commercial release of an unexpectedly cute app. Meanwhile a new strain of colonialism threatens to grow out of control. It’s no longer just land, mineral riches, or even whole peoples that are being exploited in nascent algorithmic colonialism, but the biological experience of experience itself. Just as unfettered neoliberal capitalism tends to redefine human experience as a matrix of financial transactions, the naïve deployment of acorporal intelligence agents threatens to redefine human experience solely in terms of its representability in an n-dimensional data space that was designed for the convenience of the tool instead of the tool user.

The challenge facing us now is how to enter a radically new game without abandoning completely the old. What should be clear is that neither mind, intelligence, perception, intentionality, agency, nor the very notion of tool can be understood without addressing the recursive tissue of understanding into which they are all woven. Picking too much at any single strand—say, intelligence qua intelligence—will destroy the fabric.

Whether it’s meaningless, hopeless, or merely excruciatingly difficult, the balancing act between classical and twenty-first-century notions of mind-intelligence-perception-intentionality-agency-tool promises to be a high-stakes game—and perhaps not a particularly friendly one. How we come to terms with today’s apotheosis of automated positivism will determine whether the next generation of tools empowers the weak or merely buttresses the powerful.

   

Notes

 

  1. «Der gescheidte Hans» is number thirty-two in the first volume of Grimm’s fairy tales.

  2. Perhaps this does qualify as “from afar” for most of us, after all.

  3. Yes, modern musical notation is a multipurpose tool and not just a memory aid. The complexity of contemporary pop and jazz arrangements, to say nothing of symphonic music, would be impossible without it.

  4. A starter list might include the superfluous (your AC thermostat, for example, which doesn’t need digital circuitry at all); the optional (your sound recording device, which you’d be better off adjusting manually if you know what you’re doing); the mission-critical (for example, severe weather tracking systems); the merely distracting (such as the ubiquitous apps that allow unskilled doodlers to generate endless mashups of other people’s work); and the inadvertently antisocial (for example, any app that automates the counterfeiting and undetectable modification of images, sounds, or text).

  5. The notion of a language game as a set of terms all of whose definitions depend upon one another was proposed by Wittgenstein in his Philosophical Investigations and adapted by Thomas Kuhn fifty years later in refinements to his theory of scientific revolutions.

  6. Let alone to the mosh pit known as connotation.

  7. «Quanto melhor nos fora, Prometeu, / E quanto para o mundo menos dano, / Que a tua estátua ilustre não tivera / Fogo de altos desejos, que a movera!» My translation.