This is Chapter 25 of The Future Does Not Compute: Transcending the Machines in Our Midst, by Stephen L. Talbott. Copyright 1995 O'Reilly & Associates. All rights reserved. You may freely redistribute this chapter in its entirety for noncommercial purposes. For information about the author's online newsletter, NETFUTURE: Technology and Human Responsibility, see http://www.netfuture.org/.
Technology offers us many obscure reasons for worry, not least among them the threat of nuclear terrorism. Yet what I fear most in the technological cornucopia is the computer's gentle reason.
A bomb's fury exhausts itself in a moment, and the poisonous aftermath can at least be identified and tracked. The computer, burrowing painlessly into every social institution, infecting every human gesture, leaves us dangerously unalarmed. The first post-Hiroshima nuclear detonation in a major city -- however deep the gash it rips through human lives and world structures -- may clarify the minds of world leaders marvelously, jolting them, we can hope, to wise and sensible action. The computer, meanwhile, will continue quietly altering what it means to be wise or sensible.
New technologies often exert their early influences reassuringly, by merely imitating older, more comfortable tools. We still use the computer primarily for electronic mail, routine word processing, and text storage. Its distinctive powers remain nascent, rumored endlessly back and forth among silently blinking LEDs, whose precise meditations upon the future betray little to casual observation. We must try, nevertheless, to spy on that future.
It is, after all, one thing to introduce the computer into the classroom as a fascinating curiosity, but quite another when the curiosity becomes the classroom -- and teacher and curriculum as well.
It is one thing to transmit text across a computer network, but quite another when machines are employed to read the text and interpret it, or to translate it into different languages -- or to compose the text in the first place.
A computer very likely supplies your doctor with a database of diagnostic information and a means for recordkeeping. But it can act directly as diagnostician, prescriber, and even surgeon.
Most of us currently interact with our computers via keyboard and mouse. But we could interact by attaching electrodes to our heads and learning to manipulate our own brain waves./1/
What I fear in all this is not the wild, exaggerated claim about tomorrow's technology -- the android nonsense, for example -- but rather the myriad, daily, unsurveyable, incomprehensible, mutually reinforcing, minor advances on a thousand different fronts. And it is not so much the advances themselves that disturb me as their apparent inevitability and global logic. We seem to have no choice about our future, and yet that future appears to be taking shape according to a coherent “plan.”
Jacques Ellul says much the same thing when he points to the “Great Innovation” that has occurred over the past decade or two. The conflict between technology and the broader values of society has ceased to exist. Or, rather, it has ceased to matter. No longer do we tackle the issues head on. No longer do we force people to adapt to their machines. We neither fight against technology nor consciously adapt ourselves to it. Everything happens naturally, “by force of circumstances,”
because the proliferation of techniques, mediated by the media, by communications, by the universalization of images, by changed human discourse, has outflanked prior obstacles and integrated them progressively into the process. It has encircled points of resistance, which then tend to dissolve. It has done all this without any hostile reaction or refusal .... Insinuation or encirclement does not involve any program of necessary adaptation to new techniques. Everything takes place as in a show, offered freely to a happy crowd that has no problems./2/
It is not that society and culture are managing to assimilate technology. Rather, technology is swallowing culture. Three factors, according to Ellul, prevent us from mastering technology:
First, “rational autonomy is less and less referred to those who command machines and corporations and is more and more dependent on the self-regulation of technical networks.” We cannot help deferring to the reason that seems so powerfully operative in our machines and in their patterns of connection. “The system requires it” has become an accepted and often unavoidable justification of human behavior.
Second, the technological thrust has become too diverse, too ubiquitous, too polydimensional for us to guide or pass value judgments on it. If we cannot see clearly in what direction things are going, then we cannot orient ourselves in the manner required for mastery.
Third, the dizzying acceleration of change ensures that any attempt at mastery is always too late. We are finally beginning to acknowledge more freely the monstrous perversities of television. But can we now, for example, disentangle politics, sports, or commerce from these perversities?
The underlying sense of helplessness provoked by our inability to master technology may account for the widespread, if vague, hope now vested in “emergent order,” which many take to be the natural outcome of burgeoning complexity. Oddly, what will emerge is commonly expected to be benign.
If, as Ellul's remarks suggest, runaway technology has achieved a stranglehold over society, it seems at first glance a strangely schizoid grip. Just consider this set of paradoxes -- or apparent paradoxes -- relating to computers and the Net:
The networked computer, we are frequently told, decentralizes and democratizes. By placing knowledge and universal tools of communication in the hands of Everyman, it subverts oppressive authority. Hierarchical structures, focusing power upon a narrow circle at the top, give way to distributed, participative networks. No one can control these networks: “the Net treats censorship like a malfunction, and routes around it.”/3/ Some observers even find support for a principle of anarchy here.
And yet, many who have celebrated most loudly the Net's liberating powers are now sounding the shrillest warnings about the dangers of totalitarian oppression by intrusive governments or money-grabbing corporations. The fearsome means of centralized, hierarchical control? Networked computers! In general, arguments for the centralizing and decentralizing cleverness of the computer continue to flourish on roughly equal terms.
The Net: an instrument of rationalization erected upon an inconceivably complex foundation of computerized logic -- an inexhaustible fount of lucid “emergent order.” Or, the Net: madhouse, bizarre Underground, scene of flame wars and psychopathological acting out, universal red-light district. “You need a chapter on sex,” I was told by one reviewer of an early draft of this book; and he was right, for pornography and erotica drive a substantial percentage of Net traffic.
The Net: a nearly infinite repository of human experience converted into objective data and information -- a universal database supporting all future advances in knowledge and economic productivity. Or, the Net: perfected gossip mill; means for spreading rumors with lightning rapidity; universal source of meanings reduced to a lowest common denominator; ocean of dubious information from which I filter my own, tiny and arbitrary rivulet, mostly of unknown origin.
Computerized technology gives us the power and the freedom to accomplish almost anything. We can explore space, or alter the genetic terms of a human destiny. We can make the individual atom dance to our own tune, or coordinate scores of thousands of employees around the globe in gargantuan, multinational organizations.
Yet all the while we find ourselves driven willy-nilly by the technological nisus toward a future whose broad shape we can neither foresee nor alter. The future is something that happens to us. The possibilities of our freedom, it seems, vanish into the necessities imposed by the tools of our freedom./4/
Television, video game, and computer put the world at a distance from me. I experience everything at one or more removes. All the marvels from all the depths of cyberspace are funneled onto a square foot or two of “user interface,” where they are re-presented as surface features of my screen. I can survey everything on this screen in a perfectly safe and insulated manner, interacting with it even as I remain passively detached from the world lying behind it.
Yet this same, inert screen easily cripples my ability to behold the world from a reflective distance, or from a self-possessed stance. It has been shown that viewers of the evening news typically remember few details of what they saw and heard, and cannot judge accurately what little they do recall./5/ Programmers and readers of the Net news find themselves yielding to obsessive, yet poorly focused and semiautomatic habits. The video game player's conscious functioning retreats into the reflexes of hands and fingers. In general, the distance required for contemplation gives way, through the intervening screen, to a kind of abstract distance across which half-conscious, automatic mechanisms function most easily.
One veteran Net user, apparently welcoming these tendencies, puts the matter this way:
In [Stanley Kubrick's film] 2001, the astronauts are shown poring over structural diagrams of their spacecraft. In the movie, they examine, manipulate, and reject several high-definition maps each second. I recall thinking at the time that this was absurd. Since then, I now routinely skim and reject a screen-full sized message in about a second, and with our video game trained children, perhaps Kubrick's vision will be accurate./6/
The efficient distance from which such a user interacts with the person and meaning behind the text can hardly be a reflective distance. It is more like a reflexive and uninvolved immediacy.
Given such paradoxical juxtapositions, one might wish to say, “Ellul is wrong. Technology does not have us in a stranglehold. Everything depends on how we use technology -- and we can use it in drastically different ways.”
But this is too simple. These juxtapositions, it turns out, are not alternative paths we are free to choose. The apparently contradictory tendencies belong together. They are complex and pernicious unities signaling a threatened loss of freedom.
We need to look again at the paradoxes, but only after first acknowledging the intimate nature of our mutual embrace with computers. Several features of this embrace stand out:
The computer took shape in the human mind before it was realized in the world.
The computer was thought before it was built. It is not that the inventors of the computer either considered or understood all the consequences of their handiwork -- far from it. Subsequent developments have no doubt caught them as unprepared as the rest of us. But, still, they had to conceive the essential nature of the computer before they could build it. That nature therefore lives both in the conceived machine and the conceiving mind, and the consequences likewise flow from both sources. This remains true even when we find ourselves lacking sufficient depth of awareness to foresee those consequences.
In other words, it is not at all clear that the “computational paradigm” belongs more to the computer than to the human mind. Moreover, while this line of thought applies preeminently to the computer, which is a kind of dulled, mechanical reflection of the human intellect, it has a much wider bearing than is commonly thought. Every cultural artifact approaching us from the outside also has an “inside,” and this inside is at the same time our inside.
The long, cultural process leading to the automobile's invention illustrates this well. The mind capable of imagining an early automobile was a mind already relating to physical materials, speed, conspicuous consumption, noise, pollution, mechanical artifacts, time, space, and the esthetics of machines in a manner characteristic of the modern era. It is hard to imagine any subsequent effects of the automobile not already at least implicit in this mindset, even if it is true -- as it surely is -- that the automobile played its own, forceful role in magnifying the preexistent movements of the Western psyche.
It is not evident, then, how one is justified in speaking unambiguously of the automobile's effects, as opposed to the consequences of our own inner life -- of which the automobile itself is another result. More concretely: how could the town's conversion to a spread-out, impersonal, rationalized, streetlight-controlled, machine-adapted metropolis have been unpremeditated, when a prototype of this future was carefully laid out on the floor of the first assembly-line factory?
To reckon with the inside of technology is to discover continuity. This is as true of the automobile assembly line as it is of the automobile itself. Speaking of Henry Ford's manufacturing innovations, MIT social scientist Charles Sabel remarks that “it was as if the Ford engineers, putting in place the crucial pieces of a giant jigsaw puzzle, suddenly made intelligible the major themes of a century of industrialization.”/7/
Even the “freedom and independence” given by the automobile were prefigured on the factory floor. We are, of course, free to go wherever we like in our isolating, gasoline-powered bubbles. But a culture of isolation means that there is no there to get to, and in any case we find ourselves overwhelmingly constrained by the manifold necessities of the system that gave us our freedom in the first place -- costs of car, maintenance, and insurance, crowded highways, incessant noise and vibration, physical enervation, frustrating expenditures of time sitting behind a wheel.
Here again, however, the early automobile worker experienced this same “liberation,” because he and his employer already participated in a mindset much like ours. Freed from meaningful interaction with others, and given a nice, rational, private task of his own, he found himself now bound to the relentless logic and constraints of the assembly line and the overall production system.
In sum, the social effects of the automobile have not entirely blindsided us. They are at least in part the fulfillment of our own visions and long-standing habits of thought. And what is true of the automobile is even more true of the computer, which, you might say, is the direct crystallization and representation of our habits of thought. This points to the second feature of our computational embrace:
What we embed in the computer is the inert and empty shadow, or abstract reflection, of the past operation of our own intelligence.
Our experiment with the computer consists of the effort to discover every possible aspect of the human mind that can be expressed or echoed mechanically -- simulated by a machine -- and then to reconstruct society around those aspects. In other words, after a few centuries of philosophical reductionism, we are now venturing into a new, practical reductionism: we are determined to find out how much of our mental functioning we can in fact delegate to the computer.
There are two primary routes of delegation. We can, in the first place, impart our own words to the computer in the form of program data or the contents of databases. As taken up and “respoken” by the computer, these are always old words, dictionary words, shorn of “speaker's meaning” -- that is, stripped of the specific, creative meanings a fully conscious speaker acting in the present moment always breathes through his words. That is why, for example, efforts at machine translation fail badly whenever the computer must deal with highly meaningful language not explainable strictly in terms of dictionary definitions.
To be employed effectively by the computer -- or shared between computers -- words must represent a generalizable, abstract, “least common denominator” of the already known expressive possibilities. These words always speak out of the past. They represent no present, cognitive activity. The computer manipulates the corpses of a once-living thinking.
Secondly, as programmers we abstract from the products of our thinking a logical structure, which we then engrave upon the computer's logic circuits. This structure is empty -- much as the bare record of a poem's meter, abstracted from the poem itself, is empty. Nevertheless, the logical structure was abstracted from a once-living sequence of thoughts, and it is mapped to an active, electronic mechanism. So the mechanism, at a level of pure abstraction, reflects the emptied, logical structure of the thoughts.
We can now use this programmatic mechanism to reanimate the aforementioned word-corpses. That is, the corpses begin to dance through their silicon graveyard, their stiffened meanings activated rather like dead frog legs jerking convulsively to the imperatives of an electrical beat. In this way, the abstract ghost of past human thinking takes external hold upon the embalmed word-shells of past meanings and, like a poltergeist, sets them in motion.
All this accounts for certain characteristic traits of software -- in particular, the notorious brittleness, the user-unfriendliness, and the penchant for occasional, baffling responses from left field. At the same time, given our willingness to base more and more of society's functioning upon the computer, there is a natural adaptation of human behavior and institutions to these rigidities.
This may not seem onerous, however, for we still recognize the words and logic of the computer as somehow our own, even if they are only distant and brittle reflections of what is most human. Indeed, the willingness of users to personify computers based on the crudest mechanical manipulation of a few words is well attested./8/ It is reasonable to expect that, as we adapt to the computer's requirements, the rigidities may no longer seem quite so rigid. We should keep in mind, therefore, that the slowly improving “user-friendliness” of computers may put on display not only the ingenuity of human-computer interface experts, but also the evolving “computer-friendliness” of users.
Automated telephone answering systems, made possible by computers, illustrate all this at an extremely simple level. They stitch fragments of recorded human speech onto an (often woefully spare) skeleton of logic. No one is present behind the manipulated voice; the caller confronts echoes from the past, calculated in advance to match “all possible situations.”
Whatever her task (telephone answering system or otherwise), the programmer must build up a logical structure abstracted from the task, then map it to the computer's internal structure and hang appropriate words (or data) upon it. In attempting to make her program more flexible, she finds herself elaborating and refining its logic in an ever more complex fashion. The effort is to build upward from the simple, on-off “logic gates” constituting the computer's fundamental capability, until an intricate logical structure is derived that corresponds to the structure of the target task.
What is not so often noticed is that, in carrying out her work, the programmer herself moves along a path opposite to the one she lays out for the computer. Where the computer would ascend from logic to meaning, she must descend from meaning to logic. That is, the computer, following a set of logical tracings, invokes certain (second-hand) meanings by re-presenting text that the programmer has carefully associated with those tracings. She, on the other hand, must grasp the form and meaning of a human function before she can abstract its logical structure. If she has no direct experience, no ability to apprehend meanings, no understanding of things, then she has nothing from which to abstract.
There is, however, no symmetry in these opposite movements. A sense for logical structure emerges only from prior understanding, and is not its basis. As a starting point in its own right, logic leads nowhere at all -- it can never bring us to content. Content always arises from a preexistent word, a preexistent meaning, for whose genesis the computer itself cannot claim responsibility.
Despite what I have called the “brittleness” of computation, we should not underestimate the programmer's ability to pursue her logical analysis with ever greater subtlety; the motions she imparts to the corpses in the graveyard can be made to look more and more lifelike. Certainly the automated answering system can be ramified almost infinitely, gaining cleverness beyond current imagination. But the stitched-together voice will nevertheless remain a voice from the past (no one is really there), and the system's behavior will remain the restricted expression of the programmer's previous analysis./9/
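The skeleton-of-logic character of such a system can be suggested in a few lines of code. This is only an illustrative sketch, not any real answering system; the menu states and prompts are hypothetical. Every word the "voice" speaks was fixed in advance, and any keypress the programmer did not anticipate simply dead-ends:

```python
# A sketch of prerecorded fragments hung on a fixed branching structure.
# Each state pairs a canned prompt (a "voice from the past") with the
# keypresses the programmer anticipated. All names here are hypothetical.

MENU = {
    "start":   ("Press 1 for hours, 2 for billing.", {"1": "hours", "2": "billing"}),
    "hours":   ("We are open 9 to 5.", {}),
    "billing": ("Press 1 to hear your balance.", {"1": "balance"}),
    "balance": ("Your balance is unavailable.", {}),
}

def answer(keys):
    """Replay canned prompts for a sequence of caller keypresses."""
    state, heard = "start", []
    for key in keys:
        prompt, branches = MENU[state]
        heard.append(prompt)
        if key in branches:
            state = branches[key]
        else:
            # Anything outside the anticipated branches falls through:
            # no one is present to understand the unexpected.
            heard.append("Sorry, that is not a valid option.")
            state = "start"
    heard.append(MENU[state][0])
    return heard
```

However elaborately such a table is ramified, the caller can only ever traverse branches laid down by the programmer's previous analysis.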
The computer gains a certain autonomy -- runs by itself -- on the strength of its embedded reflection of human intelligence.
We are thus confronted from the world by the active powers of our own, most mechanistic mental functioning.
The computer is the embodiment of all those aspects of our thinking that are automatic, deterministic, algorithmic -- all those aspects that can, on the strength of the past, run by themselves, without any need for conscious presence.
It was, of course, already our strong tendency in the Industrial Age to embed intelligence in mechanisms, which thereby gained increasing ability to run by themselves. Take the intricate, intelligent articulations of a modern loom, add electrical power, and you have an ability to operate independently far exceeding the reach of a simple hand loom.
Every active mechanism of this sort -- whether simple or complex -- creates a local nexus of cause and effect, of determinism, to which the worker must assimilate himself. Computerization universalizes this nexus. Everywhere is “here,” and everyone -- or every machine -- must therefore speak the appropriate languages, follow the necessary protocols. Distributed intelligence is a means for tying things together with the aid of everything in human consciousness that operates at a mechanistic level. Things that run by themselves can now embrace the whole of society.
Many have claimed that computers free us from an older determination by machines. This is false. What is true is that, within a certain sphere, computers give us greater flexibility. This, however, is not a new principle; from the very beginning the increasing complexity of machines has led to improved flexibility. Computers simply raise these powers of flexibility to a dramatically higher level.
What advanced machines give us is a more refined ability to do things. It is a purely instrumental gain, remaining within the deterministic realm of material cause and effect. Freedom, on the other hand, has to do with our ability to act out of the future -- a creatively envisioned future -- so as to realize conditions not already implicit in past causes.
The automobile gives me an immediate gain in outward freedom, enabling me to go places I could not go before. But freedom is not outward, and it turns out that a culture built around the automobile (along with numerous other helpful machines) may actually constrain my ability to act out of the future. It is true that these machines vastly broaden my physical, instrumental options. But if I begin to think about my life's purposes (something all the distracting machinery of my life may discourage), and if I come to feel that something is amiss -- that I have somehow been betraying my own meanings -- I then discover myself bound to a matrix of nearly inescapable necessities imposed by all the mechanisms to which my life must conform. I have little room, in that context, to act out of conviction, to express my own most compelling values, to choose a different kind of life. That is, I have great difficulty allowing my life to flow toward me from the creative future. Instead, I am likely to coast along, carried by circumstances.
This is why the salient social problems of our day seem to have gone systemic. No one is responsible. No one can do anything. Point to any challenge, from the drug culture to environmental pollution to regional famine, and the individual can scarcely feel more than a helpless frustration before the impersonal and never-quite-graspable causes. The problems are “in the nature of things.” It is as if a diffuse and malignant intelligence -- a Djinn -- had escaped human control and burrowed into the entire machinery of our existence.
Intelligence -- even in its most mechanical aspect -- is not an evil. But it needs to be placed in the service of something higher. Intelligence that functions from below upward -- as in a machine -- creates a global, self-sustaining mechanism that binds me, unless I meet it with a still stronger power. The operation of my intelligence should descend from that place in me where I am present and free. This is the challenge for today. It is a challenge, not just with respect to the computer, but with respect to the free and unfree activities of my own consciousness.
Having reconceived my own interior as computation, and having then embedded a reflection of this interior in the computer, I compulsively seek fulfillment -- the completion of myself -- through the “interface.”
Machines bearing our reflections are a powerful invitation for psychological projection. Such projection requires a translation from inner to outer -- from interior awareness to exterior activity. This translation is exactly what public discourse about computers and the Net presents to us on every hand: text instead of the word; text processing or program execution instead of thinking; information instead of meaning; connectivity instead of community; algorithmic procedure instead of willed human behavior; derived images instead of immediate experience.
Psychologists tell us that the outward projection of inner contents typically signifies an alienation from those contents. It also provokes an unconscious, misguided, and potentially dangerous effort to recover “out there” what actually has been lost “in here.” That is, despite the projection, our need to intensify and deepen our inner life remains primary; driven by that need, we may seek misguided fulfillment out where we have unwittingly “thrown” ourselves -- behind the interface. On the Net.
The symptoms of this sadly misdirected striving are all around us. They are evident, for example, in the obsessive programmer, who feels the undiscovered logical flaw in his program to be a personal affront -- an intolerable rift in his own soul. Conversely, the thought of executing the program and finding everything suddenly “working” holds out the prospect of ultimate happiness. The ultimacy, however, proves a mirage, for the next day (or the next hour) he is working just as compulsively on the new, improved version.
All this recalls the infatuated lover, who projects his soul upon the beloved. Thirsting passionately for the one, crucial gesture that will spell his eternal bliss, he receives it only to find, after the briefest interlude, that his thirst has been redoubled rather than quenched.
A similar compulsion seems to drive the recent, unhinged mania for the “information superhighway.” The widespread, almost palpable fear of being left out, of missing the party, is one telltale symptom. Others include the lotterylike hope of discovering “great finds” out on the Net; the investment of huge amounts of time in largely unfocused, undisciplined, semianonymous, and randomly forming, randomly dispersing electronic discussion groups; the entrenched conviction that whatever's “happening” on the Net (we're never quite sure just what it is) surely represents our future -- it is us; and, most obviously, the sudden, unpredictable, and obsessive nature of society's preoccupation with the Net.
But perhaps the most poignant symptom of the projection of a lost interiority lies in the new electronic mysticism. Images of a global, electronically mediated collective consciousness, of Teilhard de Chardin's omega point, and of machines crossing over into a new and superior form of personhood are rife on the Net. Channelers channel onto the Net. Pagans conduct rituals in cyberspace. Most of this is unbearably silly, but as a widespread phenomenon it is difficult to dismiss.
The New Age, it appears, will be won with surprising ease. The wondrously adept principle of “emergence” accounts for everything. It will materialize delightful new organs of higher awareness, not cancerous tumors. As one Net contributor enthuses:
The nature of the organism resulting is the only question. ...Strangely, I think this organizing into a spiritual whole will occur without much effort. When human spirits gather in a common purpose, something happens./10/
Indeed, something happens. But what grows upward from mechanism, easily, automatically, running by itself, is not human freedom. Freedom is always a struggle. What happens easily, on the other hand, is whatever we have already set in motion, having woven it into the dead but effective logic of our external circumstances, and into the sleepwalking strata of our own minds. It is not so easy to call down and materialize a freely chosen future.
The belief that the Net is ushering us toward altogether new and redemptive social forms is a patently dangerous one. It amounts to a projection of human responsibility itself onto our machines. The consequences of this can only be unsavory. What I project unconsciously into the world -- whether as mystical hope or more mundane expectation -- is never my highest and most responsible mental functioning. It is, rather, my least developed, most primitive side. “This way lie dragons.”
Here, then, are select features of the human-computer liaison:
The computer took shape in the human mind before it was realized in the world.
What we embed in the computer is the inert and empty shadow, or abstract reflection, of the past operation of our own intelligence.
The computer gains a certain autonomy -- runs by itself -- on the strength of its embedded reflection of human intelligence. We are thus confronted from the world by the active powers of our own, most mechanistic mental functioning.
Having reconceived my own interior as computation, and having then embedded a reflection of this interior in the computer, I compulsively seek fulfillment -- the completion of myself -- through the interface.
We now have a more satisfactory perspective upon the paradoxes discussed toward the beginning of this chapter. It is not really odd, for example, that a medium of logic, objective information, and artificial intelligence should also be a medium noted for inducing pathologies and subjectivity. Nor is it odd that a medium of calculated distances should also invite visceral immediacy. Both juxtapositions are inevitable whenever the highest elements in human thinking give way to what is mechanical and automatic.
A comparison between television and computers will help to clarify these juxtapositions. Television presents me with a sequence of images in which I cannot intervene as in normal life, and so the images become mere passive stimulation. Not even my eyes perform their real-world work; instead they rest upon a flat surface lacking all depth (despite the contrary illusion). It is not surprising, then, if I lapse into a near-hypnotic state, my higher faculties quiescent.
But the computer poses far greater risks. I can intervene in its processes -- in fact, it seems to encourage this with an ever greater energy and flexibility -- and I thereby gain a sense of meaningful activity. But the computational processes in which I intervene reflect and actively elicit only the machinelike dimensions of my own nature. That this is nevertheless something of my nature makes it all the more natural to anthropomorphize the machine, and to find even the most primitive interaction with it significant. Yet all the while I may be doing nothing more than yielding to the machine's (and my own) automatisms. It is exactly in these lowest strata of my psyche that I give expression to pathologies and the worst excesses of subjectivity.
Some things are, I think, just too close to us for simple recognition. One of these things is the computer screen I am now looking at. Keystroke by keystroke, click by click, command by command, it presents at the first level of approach a completely deterministic field of action -- one I personally happen to live with for much of the day. This mediation of life by mechanism is unique in all of history. I might usefully compare it to my daily “interface” with my family. My wife's reactions, for example, are not always predictable! I must always expect a surprising response from her -- one not fully specifiable in terms of my past knowledge of “the way she works.” She is forever re-making herself, which in turn requires me to re-make myself.
With the computer, on the other hand, it is rather as if I were enclosed in a large box, surrounded by panels full of levers, cranks, gearwheels, and shuttles, through which all my interaction with the world was channeled -- except that the computer's superiority in efficiency and flexibility is so great, and my adaptation to its mechanisms so complete, that I now scarcely notice the underlying terms of my employment contract.
Such an encompassing, deterministic field of action was inconceivable not so long ago -- even for the worker in an industrial-era factory. Not everything was assimilated to his machine. But my screen -- as a front for all the coordinated, computational processes running behind it -- now overspreads all that I do with the dulling glaze of mechanism. And my own condition is such that I feel no burden in this. No factory worker ever stood so wholly enchanted before his machine as I do. Only by piercing through this external, mediating glaze -- only by learning to interact with a human interior somewhere “out there” -- do I transcend the determining mesh. Nor is this so easy. All those computational processes offer previously unheard of opportunities for merely reacting without ever having entertained in any true sense another human being.
It is not just the other person I risk losing; it is myself as well. Even more than television, the Net robs me of mastery over my own thoughts -- or, at least, will do so unless I can set against it wakeful inner resources more powerful than the automatisms it both embodies and so easily triggers. The capacity for concentration, for mature judgment, for the purposeful direction of an inquiry, for deep and sustained reflection -- these dissipate all too readily into the blur of random bytes, images, and computational processes assaulting my nervous system. Habits of association -- which are the lower organism's version of hypertext, and are the antithesis of self-mastery -- tighten their hold over my mental life. Association, unlike true thinking, is automatic, physically “infected,” less than fully conscious. It inevitably expresses a pathology.
The screen's distance is the peculiar sort of distance imposed by abstraction, which is hardly the basis for penetrating contemplation. When a person becomes an abstraction, I can respond to him in an immediate, dismissive, gut-driven manner. I need no longer attend to his inner being. And yet, the habit of such attention is the only thing between me and brutalization.
In sum, cold distance and hard logic do not really stand in contradiction to visceral immediacy and pathology. Rather, in their very one-sidedness they imply these results. We already see a hint of this in television advertising. On the one hand, I may scarcely pay attention to the advertisement -- and when I do note it, I may be content to scorn it unthinkingly. It scarcely seems to touch me at my “objective distance.” Still, my buying choices are affected as I proceed to act in an automatic and unconscious way.
Whatever passes into me while circumventing my highest conscious functioning, sinks down to where it works beyond my control. Where are we mathematically (statistically) predictable? The advertising executive can answer: wherever we act least consciously, from our “chemistry.” And through our interactions with computers the link between “mathematical rigor” and raw, unreflective, animal-like behavior can be extended far beyond the world of advertising.
The paradox of power and powerlessness must likewise be seen as a consistent unity, for the two effects operate on different planes. I can experience manipulative power even as I find myself unable to alter things meaningfully. In fact, a one-sided focus on manipulation all too naturally occludes meaning; for example, when I effectively control others, I lose any meaningful inner connection to them. I then discover that all things that really count have moved beyond my sphere of influence.
But the question of power is woven together with the paradox of centralization and decentralization, which will carry us closer to the heart of them all.
Langdon Winner observes that “dreams of instant liberation from centralized control have accompanied virtually every important new technological system introduced during the past century and a half.” He quotes Joseph K. Hart, a professor of education writing in 1924:
Centralization has claimed everything for a century: the results are apparent on every hand. But the reign of steam approaches its end: a new stage in the industrial revolution comes on. Electric power, breaking away from its servitude to steam, is becoming independent. Electricity is a decentralizing form of power: it runs out over distributing lines and subdivides to all the minutiae of life and need. Working with it, men may feel the thrill of control and freedom once again.
What Hart failed to notice, according to Winner, was that electricity is “centrally generated and centrally controlled by utilities, firms destined to have enormous social power” far exceeding what was seen in the days of steam./11/
Ellul makes a similar point when he notes how computers have made an obvious decentralization possible in banking (just consider all the ATMs), “but it goes hand in hand with a national centralization of accounting.”/12/
I do not dispute the important truth in these remarks. But I believe we can also look beyond them to a more horrifying truth.
The totalitarian spirit, many have assumed, always rules from a distinct, physically identifiable locus of authority, and propagates its powers outwardly from that locus along discrete, recognizable channels. That is, it always requires a despotic center and a hierarchical chain of command.
But there is another possibility. Every totalitarianism attempts to coerce and subjugate the human spirit, thereby restricting the range of free expression so important for the development of both individual and community. Who or what does the coercing is hardly the main point. Whether it is a national dictator, oligarchy, parliament, robber baron, international agency, mafia, tyrannical parent, or no one at all -- it doesn't really matter. And what the computational society begins to show us is how totalitarianism can be a despotism enforced by no one at all.
To understand this we need to recognize that the computer, due to its reflected, frozen intelligence, is both universal and one-sided. It is universal because the logic of intelligence is by nature universal, linking one thing unambiguously to another and thereby forming a coherent grid of relations extending over the entire surface of every domain it addresses. But at the same time whatever is “off-grid” is ignored. The computer pretends with extraordinary flexibility to do “everything,” and therefore what is not covered by its peculiar sort of “everything” drops unnoticed from the picture./13/
Moreover, the intelligence we're speaking of is an embedded intelligence, operating in the machinery of our daily existence. Where this intelligence links one thing to another according to a universal “hard logic,” so also does the physically constraining machinery. And yet, the whole basis of the computer's power derives from the fact that no one -- no one present -- is in control. The logic and the machinery are, at the level of their own operation, both self-sufficient and without any necessary center. If the sequence of mathematical statements in a strictly logical demonstration possesses no center (and it does not), neither does the elaborated, computational mechanism onto which the sequence is impressed.
Ellul's reference to “a national centralization of accounting” is not inconsistent with this decentering. An accounting database and its associated software must exist somewhere, and we can easily imagine that someone controls access to it. But this is less and less true today. Databases can be distributed, and in any case, direct or indirect access to them may be spread widely throughout all the different, interrelated activities and functions of a business. It is then truer to say that the system as a whole determines access than that any particular person or group does. In fact, anyone who arbitrarily intervenes (even if it is the person “in charge”) is likely to face a lawsuit for having disrupted an entire business and thousands of lives.
Technologies of embedded intelligence inevitably tend toward interdependence, universalization, and rationalization. No clique of conspiring power-brokers is responsible for the current pressures to bring the telephone, television, cable, and computing industries into “harmony.” No monopolistic or centralized power, in any conventional sense, decrees that suppliers connect to the computer networks and databases of the retail chains -- a requirement nevertheless now threatening the existence of small, technically unsophisticated supply operations. (Most of them will doubtless adapt.) Nor does anyone in particular create the pressure for digitization, by which the makers of cameras and photocopiers are waking up to find they are computer manufacturers. And as the amount of software in consumer products doubles each year (currently up to two kilobytes in an electric shaver),/14/ no one will require that the various appliances speak common, standardized languages.
Again, it is no accident that the introduction of robots -- difficult in existing factories -- leads to a more radical redesign of manufacturing than one might first have thought:
A robot requires a whole series of related equipment, that is, another conception of the business. Its place is really in a new factory with automation and computerization. This new factory relates machines to machines without interference./15/
In this relation of machine to machine there is much to be gained. But the gain is always on an instrumental level, where we manipulate the physical stuff of the world. By brilliantly focusing a kind of externalized intelligence at that level we may, if we are not careful, eventually lose our humanity.
Allow me a brief tangential maneuver.
Zoologist Hermann Poppelbaum comments that the human hand, so often called the perfect tool, is in an important sense not that at all. Look to the animals if you want limbs that are perfect tools:
The human hand is not a tool in the way the extremities of animals are. As a tool it lacks perfection. In the animal kingdom we find tools for running, climbing, swimming, flying etc. but, if man wants to use his hand to perform similar feats, he has to invent his tools. For this he turns to an invisible treasure chest of capacities he bears within himself of which the remarkable shape of the human hand itself appears to be a manifestation. A being with lesser capacities than man's would be helpless with such a hand-configuration./16/
Endowed with nothing very “special” physically, we contain the archetypes of all tools within ourselves. The tools we make may be rigid in their own right, but they remain flexible in our hands. We are able to rule them, bending them to our purposes. The question today, however, is whether we are entrusting that rule itself to the realm over which we should be ruling.
Insects offer a disturbing analogy. It is Poppelbaum again who remarks on the absence of clear physical links to explain the order of an ant heap or beehive. The amazingly intricate unity seems to arise from nowhere. “Even the queen bee cannot be regarded as the visible guardian and guarantor of the totality, for if she dies, the hive, instead of disintegrating, creates a new queen.”/17/
As a picture, this suggests something of our danger. Where is the “totalitarian center” of the hive? There is none, and yet the logic of the whole remains coherent and uncompromising. It is an external logic in the sense that it is not wakeful, not self-aware, not consciously assenting; it moves the individual as if from without. A recent book title, whether intentionally or not, captures the idea: Hive Mind.
It is into just such an automatic and external logic that we appear willing to convert our “invisible treasure chest of capacities.” But if our inner mastery over tools is itself allowed to harden into a mere tool, then we should not be surprised when the coarsened reflection of this mastery begins reacting upon us from the world. An important slogan of the day, as we saw earlier, bows to the truth: “what we have made, makes us.”/18/
The slogan is not hard to appreciate. Living in homes whose convenience and mechanical sophistication outstrip the most elaborate factory of a century ago, most of us -- if left to our own devices within a natural environment once considered unusually hospitable -- would be at risk of rapid death. In this sense, our system of domestic conveniences amounts to a life-support system for a badly incapacitated organism. It is as if a pervasive, externalized logic, as it progressively encases our society, bestows upon us something like the extraordinary, specialized competence of the social insects, along with their matching rigidities. What ought to be our distinctive, human flexibility is sacrificed.
From another angle: where you or I would once have sought help quite naturally from our neighbors, thereby entering the most troublesome -- but also the highest -- dimensions of human relationship, we now apply for a bank loan or for insurance. Not even a personal interview is necessary. It is a “transaction,” captured by transaction processing software and based solely upon standard, online data. Everything that once followed from the qualities of a personal encounter -- everything that could make for an exceptional case -- has now disappeared from the picture. The applicant is wholly sketched when the data of his past have been subjected to automatic logic. Any hopeful glimmer, filtering toward the sympathetic eye of a supportive fellow human from a future only now struggling toward birth, is lost in the darkness between bits of data. Nor, in attempting to transcend the current system, can either the insurance employee or the applicant easily step outside it and respond neighbor-to-neighbor. The entire procedure has all the remarkable efficiency and necessity of the hive.
So the paradoxes of power and powerlessness, of centralization and decentralization, are not really paradoxical after all. We can, if we wish, seek instrumental power in place of the freedom to achieve a distinctively human future. We can, if we wish, abdicate our present responsibility to act, deferring to an automatic intelligence dispersed throughout the hardware surrounding us. It scarcely matters whether that intelligence issues from a “center” or not. What matters is how it binds us.
Whatever binds us may always seem as if it came from a center. Whether to call the automatic logic of the whole a center may be academic. The real question for the future is no longer whether power issues from many, dispersed groups of people or from few, centralized groups. It is, rather, whether power issues from within the fully awake individual or acts upon him from the dark, obscure places where he still sleeps. If it is exercised wakefully, then it is not really power we're talking about, but freedom and responsibility. But if it is not exercised wakefully, then centralization and decentralization will increasingly become the same phenomenon: constraining mechanism that controls us as if from without.
For the third time, then: does technology exert a stranglehold over us? And for the third time I will wage a minor delaying action, since this question is inseparable from another, more general one to which we must briefly nod: is technology nonneutral or neutral? That is, do technological artifacts, once deployed, coerce our social, political, and psychological development in directions over which we then have little control (nonneutrality)? Or do the major consequences depend largely upon how we freely choose to use those artifacts (neutrality)?
But to pose the question in this way is already to have lost the possibility of profound answer. For it is to posit that we are (or are not) subject to determining influences acting upon us from outside. The problem is that our artifacts do not really exist outside us in the first place. How can I speak of technological neutrality or nonneutrality when
what is “out there,” working on me from without, is an abstract distillate of the intelligence working “in here”; and what is “in here,” guiding my use of technology, is a habit of mind that has long been tending toward the machinelike and external?
When my own interior is one with the interior of the machine, who is unambiguously influencing whom? The only answer I know to give to the question of technological neutrality is twofold: of course technology is neutral, for in the long run everything does depend upon how we relate to artifacts. And of course technology is not neutral, for what works independently in the artifact is our already established habits of thought and use.
These answers still remain too stiff, however, for they speak of the artifact as if, behind our pattern of interaction with it, there stood a simple given. But there is no artifact at all apart from the interior we share with it. This is true of all tools, although most obviously of the computer, in which we can place radically differing abstracts of our consciousness. We call those abstracts “programs.” The program clearly determines what sort of machine we're dealing with.
All this will be enough, I hope, to justify an unusual tack in what follows. I have spoken at length about the abstraction from human consciousness of a shadow-intelligence for which we find a mechanical expression, and of the countereffect this expression has upon the originating mind. It remains, however, to focus upon what cannot be abstracted. If we are to avoid strangulation by technology -- indeed, if we are to avoid simply becoming robots ourselves -- it is to the distinctively human that we must turn.
We can find our first toehold here by considering the human being as learner.
Every educational reformer (and nearly every educator) reminds us that “the student should not be treated as an empty receptacle for facts.” As in most cases of widespread clamor for some popular value, however, the problem runs much deeper than the rhetoric. For example, the respectable formula -- “don't teach facts, but teach the student how to acquire knowledge on his own” -- easily merges with the supposedly rejected point of view. The student is still a receptacle for facts -- it's just that he must learn to stuff himself, instead of being stuffed by someone else. I'm not sure there's much difference between the equally constipated outcomes of these two approaches.
To state the matter as provocatively as possible: what a student learns, considered solely as “objective content,” never matters. The only thing that counts is how vividly, how intensely, how accurately and intentionally, and with what muscular ability to shape and transform itself, his consciousness lays hold of a thing. The qualitative fullness of a thought, the form of the student's own, inner activity -- and not the simplistic abstraction the activity is said to be “about” (that is, not its “information content”) -- is everything.
The reason for this is simply that human consciousness does not “hold” facts about the world; it is itself the inside of the world. It does not lay hold of things, but rather becomes the things. Our need is not to “know the facts” about the truth; it is to become true. The discipline of consciousness is not a preparation for truth-gathering; it is the increasingly harmonious resonance between the laws of the world and the laws of our own thinking activity, which are the same laws expressed outwardly and inwardly.
There was a time when such distinctions would have been obvious -- a time when man experienced himself as a microcosm within the macrocosm, and when all knowledge was rightly felt to be knowledge of man. But this is exactly what a machine-adapted mind can only find perplexing or maddening. Such a mind wants facts contained in a database. It wants knowledge about things rather than a sculpting and strengthening of the knowing gesture. It wants truth rather than meaning -- and its notion of truth has degenerated into mere logical consistency.
For such a mind, the important thing about the statement, “Abraham Lincoln was killed in 1866,” is that it is false. And, as a univocal proposition whose task is solely to identify a coordinate on a timeline (that is, as a few bits of information), so it is. But no human being is capable of speaking such a proposition. So long as we remain conscious, we cannot utter or even conceive a statement wholly devoid of meaning and truth. The hapless student who says “1866” will to one degree or another still appreciate what it meant to be Abraham Lincoln, what it meant to preside over the Civil War, and what it meant to die at the war's conclusion. The question is how fully he will know these things. Perhaps very fully (if he has deepened his own character and learned to know himself well), or perhaps not very much at all (if he has just been taught “the facts”).
The teachers we remember for changing our lives did not convey some decisive fact. Rather, we saw in them a stance we could emulate, a way of knowing and being we could desire for ourselves. Their teaching instructed us, and therefore remains with us, whereas the things taught (if one insists on thinking in such terms) may well have lost their validity. This is why computers have so little to offer either teacher or student. If the student's greatest hope is to learn from his teacher what it can mean to be a human being facing a particular aspect of life, then the implications of wholesale reliance upon computer-mediated instruction are grave indeed.
What can it mean to confront today's computerized technology as a human being? On the one hand, we find ourselves locked in an intimate liaison with our machines, and with the machinelike tendencies in ourselves. We are invited into a world of programmed responses and of database-receptacles collecting information. On the other hand ... what?
The answer, I think, is given in the question itself. That is: on the other hand, the possibility of asking questions, of changing, of transforming ourselves -- the freedom to act out of the future rather than the past.
Freedom, however, is always ambiguous. Paradoxically -- and this is a genuine paradox -- the experience of freedom can never be anything but a movement toward freedom or away from it. If we had perfect freedom, we could not know what freedom meant, for there would be no resistance against which to feel free. And if we were wholly determined, we could not know what freedom meant, for we would possess no answering experience of our own through which to recognize it./19/
As things are, the healthy consciousness does have an experience of freedom -- whether acknowledged or not -- because it knows both possibility and constraint. Its struggle is always to accept responsibility and grow toward greater freedom.
Surely, however, none of us will claim a perfectly healthy consciousness. We always must reckon with impulses running contrary to freedom and responsibility. We are, in the end, free to use our freedom in order to abandon freedom. In fact, the entire effort to raise up machines in our image can also be read as our own willing descent toward the image of the deterministic machine.
I will try to make this idea of a descent toward the machine a little more concrete. A great deal hinges upon how we choose to wrestle with the descent. The machine provides the necessary resistance against which we may -- or may not -- discover our freedom.
It is not particularly controversial to say: we find ourselves immersed ever more deeply in a kind of creeping virtual reality -- television, electronic games, video conferencing, online shopping malls, escapist entertainment, virtual workplaces, images of frogs for online “dissection,” geography recast as graphical data, community reduced to the mutual exchange of text -- all of which slowly deprives us of a fuller world.
But every discussion of “virtual” and “real” quickly becomes problematic. Lines grow fuzzy. As the fighter plane's cockpit encapsulates the pilot ever more securely within an instrument-mediated environment, when does the preliminary training in a cockpit simulator cease to be mere simulation, becoming instead a recapitulation of the real thing?
Noting astronaut John Glenn's reported inability to feel an “appropriate awe” while circling the earth, Langdon Winner comments that “synthetic conditions generated in the training center had begun to seem more ‘real’ than the actual experience.”/20/ But he might just as well have said: synthetic conditions were the actual experience. Glenn's immediate environment while floating above the earth was not much different from his training environment.
Whatever else we may think about the line separating the real world from simulated worlds, the important point for the moment is that this line can be nudged from either side. Certainly, rising levels of technical sophistication can lead to ever more realistic simulators. But we should not forget that the human experience we're simulating can become ever more abstract and mechanical -- more simulable.
Furthermore, it is undeniable that this latter transformation has been intensively under way for the past several hundred years. Beginning in the Renaissance, Western man found himself cast out from something like a “spatial plenum of wisdom” into an abstract container-space well described by the mathematical techniques of linear perspective. The plenum could never have been simulated with technology; on the other hand, container-space is already a simulation of sorts, and the increasingly abstract, computational processes of the human mind are the means of simulation.
My point here -- actually, everything I have been saying -- will be lost unless we can bring such distinctions as the one between plenum and container-space to some sort of qualitative awareness. We saw earlier how Owen Barfield, who has described the transition from medieval to modern consciousness with delicate and penetrating sensitivity, once remarked that “before the scientific revolution the world was more like a garment men wore about them than a stage on which they moved.” He also referred to the Middle Ages as a time when the individual “was rather less like an island, rather more like an embryo, than we are.”/21/ What could this mean?
Our contemporary minds can, perhaps, gain a faint suggestion of the change Barfield is referring to. Try looking toward an object in the middle distance without actually focusing on it, but instead looking through it. (Don't simply focus on another object behind it. If necessary, choose a spot on a wall, so that no background object substitutes for the spot.) Most likely, you will find this difficult at first. The object in front of you demands attention, instantly drawing your focused gaze and throwing your visual system into a kind of flat, perspective fixation, with your line of sight centered on a nullity -- the vanishing point.
This “standard” sort of vision is reinforced by every one of the thousands of images we see daily -- photographs, drawings, movies -- all conveyed through the technique of linear perspective. The art of linear perspective codifies the view of a single, fixed eye (or camera lens). It first became a theoretically disciplined art in fifteenth-century Italy, when a few enterprising artisans were able to reconceive scenes in the world as sets of coordinates and vectors (lines of sight). That reconception has moved ever closer to the heart of what it means for modern man to see.
There is, however, another way of seeing. Tom Brown, Jr., the wilderness tracker, urges the critical importance of what he calls relaxed, or wide-angle, vision. To accomplish this -- and it requires practice -- you must allow your eyes to relax “out of focus,” so that the objects directly in front of you no longer claim sole attention on the visual stage. Instead, allow your alert attention (free of sharp, visual focus) to scan all the objects in your field of vision. To prevent your gaze from hardening into an uncomprehending stare, you should continually move your eyes as well, taking care only to prevent their focusing on particular objects as they move. Your field of vision will now possess less clarity, but your breadth of awareness and ability to perceive what is going on around you will be greatly increased.
Focused vision gives us only a few degrees of clarity around the line of sight. Relaxed vision removes this clear center and with it the dominance of the object in front of us. The single, narrow beam of our attention becomes instead a broad, peripheral awareness that may mediate remarkable discoveries.
Brown claims that a habit of relaxed vision (combined, of course, with periodic, brief reversions to focused vision for purposes of clear identification) is essential for any profound penetration of nature, and is the usual style of vision for peoples who live close to nature. He reports, for example, that when one is stalking an animal, a sudden shift from relaxed vision to the more aggressive, I-versus-object focus will very likely startle the animal into flight -- the “spell” integrating the stalker seamlessly into his natural surroundings is broken./22/
Try the experiment sometime. If, say, you walk slowly and attentively through the woods using relaxed vision, you may well experience yourself within the landscape rather differently from when you employ your more customary habits. You may even be able to imagine ever so slightly what it might have been like to wear the world as a garment instead of moving about in an abstract container-space -- to be living so deeply in the world that you cannot look at it in anything like the modern sense.
It is worth relating this sort of practical experience to our earlier discussion of linear perspective (see Chapter 22). Why, for example, did I refer above to our “flat, perspective fixation,” since perspective, applied to art, yields what we take to be remarkably realistic, lifelike, in-depth images?
To say that our normal vision today is flat is not to say that it has no three-dimensional feel of spatial depth, for that feel is exactly what we today mean by depth, and what the pre-Renaissance painting, for example, did not have. There is no disputing that we sometimes think we could almost step into the three-dimensional image. But Barfield is telling us, as we have seen in Chapter 22, that pre-Renaissance man was more likely to experience himself already in a work of art. And what is meant by “in” differs greatly between these two cases. This difference may be suggested by the contrast between focused and relaxed vision in the experience of the alert outdoorsman.
The point is that depth is now represented for us by a highly abstract, mathematically conceived container-space full of objects related only by their shared coordinate system (within which we, too, kick about) -- an “extensive” depth -- whereas once the world's depth was more “intensive,” with every object bound meaningfully to every other (and to us) in a way that our perspective vision renders nearly impossible to experience. It is our vision that is flat, abstract, shallow, governed by surfaces without true insides. It lacks nothing in quantifiable information, but lacks nearly everything in weight or qualitative significance.
The photograph and video screen are undeniably flat. The depth, as I have already mentioned, is illusory, our eyes being forced to remain focused on a flat surface. But this is how we have learned to see the world as well, focusing always on particular, flat surfaces and experiencing even depth as an external relation of surfaces. Today -- it has not always been so -- the world itself arrives on the stage of our understanding “in mathematical perspective.”
I have conducted this little diversion on perspective for several reasons. It gives substance, in the first place, to the question whether computer graphics has been conquering the realities of our experience or our experience has been descending toward computer graphics. At the very least, a little historical awareness should convince us that the latter movement has actually occurred, whatever we make of the former one.
I suspect, however, that both movements really amount to the same thing. The technical advances are real enough, but they can represent an approach toward human experience only to the extent that human experience is being altered toward the mechanical. Advancing technology is born of an increasingly technological mindset, which in turn lends itself to being “conquered” by technology.
Second, our discussion also illustrates an earlier point about learning: the things we learn about do not count for very much. The real learning takes place just so far as our organs of cognition are themselves instructed and changed. After all, nearly everything that matters in the movement from Middle Ages to modernity lies in the different ways of thinking and seeing -- in the qualities of these activities. How can we understand these different qualities without gaining the flexibility of mind that enables us to become, so to speak, different human beings?
You might almost say that what distinguishes modern thinking is the fact that it takes itself to be nothing but aboutness -- it has lost experience of its own qualities. One result is a seeing that has been substantially reduced to an awareness of coordinates and of relative position on an electromagnetic spectrum, with little feel for what once gripped the human being through space and color. For the most part, it is only the artist today who still resonates with what Goethe called the “deeds and sufferings of light,” and who finds an objective, revelatory depth in Goethe's characterization of the feeling-qualities of the various colors. The promiscuous and purely stimulative use of “dramatic” colors (not to mention geometric forms) in so much of computer graphics is well calculated to dull any sensitivity to the profound lawfulness of color and form in nature.
Another result of our “aboutness” knowledge is the drive toward artificial intelligence, where self-experience simply never enters the picture. When nothing is left of the knower except the abstractions he supposedly knows, then the machine can become a knower -- for it, too, can “contain” these abstractions.
In the third place, the story of linear perspective offers a nice little morality tale. It might be tempting to dismiss Tom Brown's “relaxed vision” as a kind of irresponsible dreaminess. No doubt it may on occasion be that -- but only when one's alert, ever-scanning attention dies away and the blindness of a vacant stare replaces it. Only, that is, when we cease our own, inner activity as knowers. But this blindness, interestingly, is much more a threatening limitation of the fixed and focused eye of perspective vision.
Try looking at a point several feet away, without moving your eyes at all. This is very difficult -- nearly impossible, in fact, since the eyes will involuntarily shift ever so slightly, and you will need to blink. Nevertheless, you can readily observe that, just so far as you hold an unmoving gaze, the entire field of sight begins to blank out. In the laboratory, an image held constant by attachment to a contact lens fades to gray or black within two or three seconds. An unchanging pattern of stimulation on the retina blinds us.
If, then, a perspective image codifies the outlook of a single, fixed eye, it codifies a kind of blindness. Our eyes must continually strive toward a new view, and only by changing our view do we see. Just as there is no true knowing that is not a continual transformation of the knower, so too -- even at the most literal level -- there is no true seeing that is not a continual transformation of the seer.
All the pieces are in place. If you have followed along sympathetically to this point, you may now expect some sort of resolution. “Surely he will tell us how he thinks we can remain masters of the computer! Exactly what transformation is he asking of us?”
But this is exactly what I must not attempt. There can be no program, no algorithm, for mastering technology. Our automatic resort to algorithmic thinking is an indication of technology's mastery of us. It shows how the limitations and tendencies of the computer have become at the same time limitations and tendencies of our minds. So the only “program” I can offer is the advice that we deprogram ourselves.
It is curious how, amidst the all-but-insoluble problems of our day, it remains so easy to think (however vaguely) that every real problem must yield to some sort of straightforward, external doing -- if we could just hit upon the right formula. At the very least, we should go out and march. Write our congressmen. Protest against a war or a polluter.
I do not wish to denigrate the activist, but virtually every problem we ever tackle turns out to be vastly more complicated than the placards and chants (or, for that matter, the email campaigns) allow. In fact, I would venture the surmise that no genuine social problem has ever been solved by a program of action. Or even that no problem has ever been solved at all. As we slowly change, we eventually transcend old problems, simply leaving them behind in order to face new ones. Or, you might say, old problems simply assume new forms.
It scarcely needs adding that no hint of this fundamental intractability of our most pressing social problems can be admitted in political discourse. Yet the truth is that every major social problem is too much for us. That's why it's a problem. What we always need is, unsurprisingly, exactly what we have lost sight of. We need a new view of things.
So we seem to be caught in a vicious circle. If what is “out there” reflects our inner landscape (as I have argued it does), how can we ever achieve a new view?
The conviction that we needn't try (because technology is already saving us) or can't succeed (because what we have made, makes us) underlies many responses to technology. Perhaps more common is no response at all. A large majority of U.S. citizens claims to believe that television contributes heavily to a prevailing social malaise, and yet the broad patterns of television use do not seem to change in hopeful ways. Few seem to find a way to act meaningfully upon their misgivings -- and this, too, certifies the viciousness of the circle.
But I would rather think now of a dead end than a vicious circle. Where a circle leaves us treading round and round forever upon our own footsteps -- thinking we're getting somewhere -- a dead end snuffs out all hope. Therefore it offers the only hope. It does so, that is, if we not only know we have reached a dead end, but also suffer it. The more clearly we realize -- even as we fervently desire deliverance -- that there is no way out, no program to save ourselves, no hope in any of our known resources -- the more open we will be to an altogether different level of response. The more open we will be to what is not yet known within us. We will look to a future that can never arrive through a program, but only through the potential for inner self-transformation. Here alone we stand wholly apart from our machines.
For the last time, does technology have us in a stranglehold? Ellul's answer is instructive. If we have any chance of escape, he tells us, then
above all things we must avoid the mistake of thinking that we are free. If we launch out into the skies convinced that we have infinite resources and that in the last resort we are free to choose our destiny, to choose between good and evil, to choose among the many possibilities that our thousands of technical gadgets make available, to invent an antidote to all that we have seen, to colonize space in order to make a fresh beginning, etc.; if we imagine all the many possibilities that are open to us in our sovereign freedom; if we believe all that, then we are truly lost, for the only way to find a narrow passage in this enormous world of deceptions (expressing real forces) as I have attempted to describe it is to have enough awareness and self-criticism to see that for a century we have been descending step by step the ladder of absolute necessity, of destiny, of fate.
Ellul would not have us enjoy our illusions. “Are we then shut up, blocked, and chained by the inevitability of the technical system which is making us march like obedient automatons ... ?” Unrelenting, he answers himself with an apparent abandonment of hope: “Yes, we are radically determined. We are caught up continuously in the system if we think even the least little bit that we can master the machinery, prepare for the year 2000, and plan everything.”
And yet, “not really,” he hedges in the end, for the very system that runs away with us is liable to provoke disasters, which in turn bring unpredictable possibilities for the future. “Not really,”
if we know how little room there is to maneuver and therefore, not by one's high position or by power, but always after the model of development from a source and by the sole aptitude for astonishment, we profit from the existence of little cracks of freedom and install in them a trembling freedom which is not attributed to or mediated by machines or politics, but which is truly effective, so that we may truly invent the new thing for which humanity is waiting./23/
I cannot recommend Ellul's book too highly for anyone enchanted by the glossy allure of what he calls the technical system. And yet, if I were Ellul, I would not have waited until the last paragraph of a four-hundred-page book to make my first and only admission that a few cracks of freedom may exist. After all, no matter how minuscule those cracks may be, all the known universe lies within them. Nothing -- not even the determinations of the technical system -- can be understood outside of them.
For without freedom, there is physical cause and effect, but no understanding and truth. The cause-and-effect mechanism can never recognize and describe its own activity at the very highest level, and then transform itself to an even higher level. But that is exactly what we do when we understand; all understanding is self-understanding and self-transformation, another name for which is freedom.
And in those same cracks of freedom our entire future lies -- the only future possessed of human meaning, the only future free of the machine's increasingly universal determinations, the only future not eternally frozen to the shape of our own past natures.
Cracks cannot exist except as fissures breaking through a resistant material, and in this sense our technological achievements may turn out to have provided the necessary resistance against which we can establish a human future. If, for example, we are now learning to manipulate our own genetic inheritances, this technical ability must lead all but the hopelessly somnolent to a sense of desperation: “What sort of men should we make ourselves?” It is the same question we see reflected back at us from the uncomprehending face of the cleverest robot. There is no technological answer.
How might we find an answer? Only by looking within ourselves. Not at what moves in lockstep with all the external machinery of our lives, but, to begin with, at the silent places. They are like the sanctuary we find ourselves passing through for a few moments upon waking in the morning. Just before we come fully to ourselves -- recollecting who we were, bowing beneath all the necessities of the preceding days -- we may feel ourselves ever so briefly on the threshold of new possibilities, remembering whispered hopes borne upon distant breezes. We know at such moments the freedom -- yes, just the tiniest cracks of freedom, for Ellul was, after all, right -- to awaken a different self. “Must I be fully determined by the crushing burdens of the past? Or is something new being asked of me -- some slight shift in my own stance that, in the end, may transform all the surrounding machinery of my existence, like the stuff of so many dreams?”
Man is he who knows and transforms himself -- and the world -- from within. He is the future speaking.
1. Not to be confused with the transmission of thoughts.
2. Ellul, 1990: 18, 157.
4. Ellul, 1990: 217-20.
5. Milburn and McGrail, 1992: 613.
7. Quoted in Howard, 1985: 24.
9. A fairly common, if lamentable, belief among cognitive scientists today has it that, given sufficient complexity of the right sort in the answering system, the voice would become the expression of a genuinely present, thinking consciousness.
11. Winner, 1986: 95-96.
12. Ellul, 1990: 111.
13. This is related to what Joseph Weizenbaum called the “imperialism of instrumental reason.” The chapter called “Against the Imperialism of Instrumental Reason” in his Computer Power and Human Reason deals wonderfully with a number of the themes discussed here.
14. Gibbs, 1994.
15. Ellul, 1990: 316.
16. Poppelbaum, 1993: 127-28.
17. Poppelbaum, 1961: 167.
18. Krueger, foreword to Heim, 1993.
19. Kuehlewind, 1984: 76; Ellul, 1990: 217-20.
20. Winner, 1986: 3. (Winner is responding to reportage in Tom Wolfe's The Right Stuff.)
21. Barfield, 1965a: 78, 94-95.
22. Tom Brown, Jr., has written numerous books and nature guides. For autobiographical accounts, see Brown, 1982; Brown and Watkins, 1979.
23. Ellul, 1990: 411-12.