Organs without Bodies - Gilles Deleuze
3. Becoming-machine

Slavoj Zizek


Perhaps the core of Deleuze's concept of repetition is the idea that, in contrast to the mechanical (not machinic!) repetition of linear causality, in a proper instance of repetition, the repeated event is recreated in a radical sense: it (re)emerges every time as New (say, to "repeat" Kant is to rediscover the radical novelty of his breakthrough, of his problematic, not to repeat the statements which provide his solutions). One is tempted to establish here a link with Chesterton's Christian ontology, in which repetition of the same is the greatest miracle: there is nothing "mechanical" in the fact that the sun rises again every morning; this fact, on the contrary, displays the highest miracle of God's creativity. [1] What Deleuze calls "desiring machines" concerns something wholly different from the mechanical: the "becoming-machine." In what does this becoming consist? For many an obsessional neurotic, the fear of flying has a very concrete image: one is haunted by the thought of how many parts of such an immensely complicated machine as a modern plane have to function smoothly for the plane to remain in the air - one small lever breaks somewhere, and the plane may well spiral downwards... One often relates in the same way to one's own body: how many small things have to run smoothly for me to stay alive? - a tiny clot of blood in a vein, and I die. When one starts to think how many things can go wrong, one cannot but experience total and overwhelming panic. The Deleuzian "schizo," on the other hand, merrily identifies with this infinitely complex machine which is our body: he experiences this impersonal machine as his highest assertion, rejoicing in its constant tickling. As Deleuze emphasizes, what we get here is not the relationship of metaphor (the old boring topic of "machines replacing humans") but that of metamorphosis, of the "becoming-machine" of man. It is here that the "reductionist" project goes wrong: the problem is not how to reduce mind to neuronal "material" processes (to replace the language of mind with the language of brain processes, to translate the former into the latter) but, rather, to grasp how mind can emerge only through being embedded in a network of social relations and material supplements. In other words, the true problem is not "How, if at all, could machines IMITATE the human mind?" but "How does the very identity of the human mind rely on external mechanical supplements? How does it incorporate machines?"

Instead of bemoaning how the progressive externalization of our mental capacities in "objective" instruments (from writing on paper to relying on a computer) deprives us of human potentials, one should therefore focus on the liberating dimension of this externalization: the more our capacities are transposed onto external machines, the more we emerge as "pure" subjects, since this emptying equals the rise of substanceless subjectivity. It is only when we are able to rely fully on "thinking machines" that we will be confronted with the void of subjectivity. In March 2002, the media reported that Kevin Warwick, the British professor of cybernetics, had become the first cyberman: in a hospital in Oxford, his neuronal system was directly connected to a computer network; he is thus the first man to whom data will be fed directly, bypassing the five senses. THIS is the future: the combination of the human mind with the computer (rather than the replacement of the former by the latter).

We got another taste of this future in May 2002, when it was reported that scientists at New York University had attached to a rat's brain a computer chip able to receive signals directly, so that one can control the rat (determine the direction in which it will run) by means of a steering mechanism (in the same way one runs a remote-controlled toy car). This is not the first case of a direct link between a brain and a computer network: there already are such links which enable blind people to have elementary visual information about their surroundings fed directly into their brain, bypassing the apparatus of visual perception (eyes, etc.). What is new in the case of the rat is that, for the first time, the "will" of a living animal agent, its "spontaneous" decisions about the movements it will make, are taken over by an external machine. Of course, the big philosophical question here is: how did the unfortunate rat "experience" its movement, which was effectively decided from outside? Did it continue to "experience" it as something spontaneous (i.e., was it totally unaware that its movements were steered?), or was it aware that "something is wrong," that another, external power was deciding its movements? Even more crucial is to apply the same reasoning to an identical experiment performed with humans (which, ethical questions notwithstanding, shouldn't be much more complicated, technically speaking, than in the case of the rat). In the case of the rat, one can argue that one should not apply to it the human category of "experience," while, in the case of a human being, one should ask this question. So, again, will a steered human being continue to "experience" his movements as something spontaneous? Will he remain totally unaware that his movements are steered, or will he become aware that "something is wrong," that another, external power is deciding his movements? And how, precisely, will this "external power" appear - as something "inside me," an unstoppable inner drive, or as a simple external coercion? [2] Perhaps the situation will be the one described in Benjamin Libet's famous experiment: [3] the steered human being will continue to experience the urge to move as his "spontaneous" decision, but - due to the famous half-a-second delay - he/she will retain the minimal freedom to BLOCK this decision. It is also interesting which applications of this mechanism were mentioned by the scientists and the reporting journalists: the first details mentioned concerned the coupling of humanitarian aid and the anti-terrorist campaign (one could use the steered rats or other animals to contact victims of an earthquake under the rubble, as well as to approach terrorists without risking human lives).

And the crucial thing one has to bear in mind here is that this uncanny experience of the human mind directly integrated into a machine is not the vision of a future or of something new, but the insight into something which is always-already going on, which was here from the very beginning, since it is co-substantial with the symbolic order. What changes is that, confronted with the direct materialization of the machine, its direct integration into the neuronal network, one can no longer sustain the illusion of the autonomy of personhood. It is well known that patients who need dialysis at first experience a shattering feeling of helplessness: it is difficult to accept that one's very survival hinges on a mechanical device one sees out there in front of oneself. Yet the same goes for all of us: to put it in somewhat exaggerated terms, we are all in need of a mental-symbolic apparatus of dialysis.

The trend in the development of computers is towards their invisibility: the large humming machines with mysterious blinking lights will be more and more replaced by tiny bits fitting imperceptibly into our "normal" environs, enabling them to function more smoothly. Computers will become so small that they will be invisible, everywhere and nowhere - so powerful that they will disappear from view. One need only recall today's car, in which many functions run smoothly because of small computers of which we are mostly unaware (opening windows, heating...). In the near future, we will have computerized kitchens or even dresses, glasses, and shoes. Far from being a matter for the distant future, this invisibility is already here: Philips plans soon to put on the market a phone and music player interwoven into the texture of a jacket to such an extent that it will be possible not only to wear the jacket in an ordinary way (without worrying about what will happen to the digital machinery), but even to launder it without damaging the electronic hardware. This disappearance from the field of our sensual (visual) experience is not as innocent as it may appear: the very feature which will make the Philips jacket easy to deal with (no longer a cumbersome and fragile machine, but a quasi-organic prosthesis to our body) will confer on it the phantom-like character of an all-powerful, invisible Master. The machinic prosthesis will be less an external apparatus with which we interact, and more part of our direct self-experience as a living organism - thus decentering us from within. For this reason, the parallel between computers' growing invisibility and the well-known fact that, when people learn something sufficiently well, they cease to be aware of it, is misleading. The sign that we have learned a language is that we no longer need to focus on its rules: not only do we speak it "spontaneously," an active focus on its rules even prevents us from speaking it fluently. However, in the case of language, we previously had to learn it (we "have it in our mind"), while invisible computers in our environs are out there, not acting "spontaneously" but simply blindly.

One should take a step further here: Bo Dahlbom is right when, in his critique of Dennett, [4] he insists on the SOCIAL character of "mind." Not only are theories of mind obviously conditioned by their historical, social context (does Dennett's theory of competing multiple drafts not display its roots in "postindustrial" late capitalism, with its motifs of competition, decentralization, etc.? - a point also made by Fredric Jameson, who proposed a reading of Consciousness Explained as an allegory of today's capitalism). Much more importantly, Dennett's insistence on how tools - externalized intelligence on which humans rely - are an inherent part of human identity (it is meaningless to imagine a human being as a biological entity WITHOUT the complex network of his/her tools - such a notion would be like, say, a goose without its feathers) opens up a path which should be taken much further than Dennett himself goes. Since, to put it in good old Marxist terms, man is the totality of his social relations, why does Dennett not take the next logical step and directly analyze this network of social relations? This domain of "externalized intelligence," from tools to, especially, language itself, forms a domain of its own, that of what Hegel called "objective spirit," the domain of artificial substance as opposed to natural substance. The formula proposed by Dahlbom is thus: from the "Society of Minds" (the notion developed by Minsky, Dennett, and others) to the "Minds of Society" (i.e., the human mind as something which can only emerge and function within a complex network of social relations and artificial mechanical supplements which "objectivize" intelligence).

NOTES

[1] G. K. Chesterton, Orthodoxy, San Francisco: Ignatius Press, 1995, p. 65.

[2] Cognitivists often advise us to rely on commonsense evidence: of course we can indulge in speculations about how we are not the causal agents of our acts, of how our bodily movements are steered by a mysterious evil spirit, so that it just appears that we freely decide what movements to make. In the absence of good reasons, such skepticism is nonetheless simply unwarranted. However, does the experiment with the steered rat not provide a pertinent reason for entertaining such hypotheses?

[3] Benjamin Libet, "Unconscious Cerebral Initiative and the Role of Conscious Will in Voluntary Action," in The Behavioral and Brain Sciences, 1985, Vol. 8, pp. 529-539, and Benjamin Libet, "Do We Have Free Will?", in Journal of Consciousness Studies, 1999, Vol. 1, pp. 47-57.

[4] Bo Dahlbom, "Mind is Artificial," in Dennett and His Critics, ed. by Bo Dahlbom, Oxford: Blackwell 1993.



