Thursday, 15 July 2010

DREAM-LOGIC, THE INTERNET AND ARTIFICIAL THOUGHT

What does it mean to think? Can machines think, or only humans? These questions have obsessed computer science since the 1950s, and grow more important every day as the internet canopy closes over our heads, leaving us in the pregnant half-light of the cybersphere. Taken as a whole, the net is a startlingly complex collection of computers (like brain cells) that are densely interconnected (as brain cells are). And the net grows at many million points simultaneously, like a living (or more-than-living?) organism. It's only natural to wonder whether the internet will one day start to think for itself.

(Or is it thinking already?)

These questions are important not only to the internet but to each individual computer. Computers grow more powerful all the time. Today, programs that are guided not just by calculations but by good guesses are important throughout the software landscape. They are examples of applied artificial intelligence — and the ultimate goal of artificial intelligence is to build a mind out of software, a thinking computer — a machine with human-like (or super-human) intelligence.

In a way these possibilities are frightening, or at least thought-provoking. But after all, human intelligence is the most valuable stuff in the cosmos, and we are always running short. A computer-created increase in the world-wide intelligence supply would be welcome, to say the least.

It's also reasonable to expect computers to help clean up the mess they have made. They dump huge quantities of information into the cybersphere every day. Can they also help us evaluate this information intelligently? Or are they mere uncapped oil wells pumping out cyber-pollution — which is today just a distraction but might slowly, gradually paralyze us as our choices and information channels proliferate out of control, and each of us is surrounded by a growing crowd of computer-paparazzi, all shouting questions and waving data simultaneously, with no security guards anywhere?

Here is an unfortunate truth: today's mainstream ideas about human and artificial thought lead nowhere.

We are trapped by assumptions that unravel as soon as we think about them: "we" meaning not only laymen but many philosophers and scientists. Here are three important wrong assumptions.

Many people believe that "thinking" is basically the same as "reasoning."

But when you stop work for a moment, look out the window and let your mind wander, you are still thinking. Your mind is still at work. This sort of free-association is an important part of human thought. No computer will be able to think like a man unless it can free-associate.

Many people believe that reality is one thing and your thoughts are something else. Reality is on the outside; the mental landscape created by your thoughts is inside your head, within your mind. (Assuming that you're sane.)

Yet we each hallucinate every day, when we fall asleep and dream. And when you hallucinate, your own mind redefines reality for you; "real" reality, outside reality, disappears. No computer will be able to think like a man unless it can hallucinate.

Many people believe that the thinker and the thought are separate. For many people, "thinking" means (in effect) viewing a stream of thoughts as if it were a PowerPoint presentation: the thinker watches the stream of his thoughts. This idea is important to artificial intelligence and the computationalist view of the mind. If the thinker and his thought-stream are separate, we can replace the human thinker by a computer thinker without stopping the show. The man tiptoes out of the theater. The computer slips into the empty seat. The PowerPoint presentation continues.

But when a person is dreaming, hallucinating — when he is inside a mind-made fantasy landscape — the thinker and his thought-stream are not separate. They are blended together. The thinker inhabits his thoughts. No computer will be able to think like a man unless it, too, can inhabit its thoughts; can disappear into its own mind.

What does this mean for the internet: will the internet ever think? Will an individual computer ever think?

We need to see, first, that in approaching the topic of human thought, we usually stop half-way through. In fact, the human mind moves back and forth along a spectrum defined by ordinary logic at one end and "dream logic" at the other. "Dream logic" makes just as much sense as ordinary "day logic"; it simply follows different rules. But most philosophers and cognitive scientists see only day logic and ignore dream logic — which is like imagining the earth with a north pole but no south pole.

Imagine a simple, common-sense view of thought. Philosophers often disparage such ideas as "folk psychology." Naturally our ultimate goal is scientific explanation. But the first goal of science is to explain (or explain away) common sense.

We begin with "focus" or "attention" or "alertness." Our alertness varies. We are alert when we are rested and wide-awake. As we grow tired, our focus or alertness declines. From this simple observation grows an entire intuitive, even self-evident view of thought — which is nonetheless different from any mainstream view.

We think differently when we are alert on the one hand, and not alert (or sleepy) on the other. To solve analytical or mathematical problems, to think acutely or logically, we must be alert.

On the other hand, low-alertness plays its own important role: in this state, our thoughts tend to move by themselves with no conscious direction from us. As you come near to falling asleep, you will find thoughts flowing through your mind without conscious guidance. (Shelley: "The everlasting universe of things/ Flows through the mind, and rolls its rapid waves…")

In this state of free-association, each new thought resembles or overlaps or somehow connects to the previous thought. As our alertness continues to fall — as we continue to grow more tired — we lose contact with external reality. "The sweetness/ of the gentle world you had made for him dissolving beneath/ his drowsy eyelids, into the foretaste of sleep — ." (Rilke, transl. Stephen Mitchell.) Eventually we sleep and dream.

It follows that your level of "focus" or "alertness" is basic to human thought. We can imagine focus as a physiological value, like heart rate or temperature. Each person's focus moves during the day between maximum and minimum. Your focus is maximum when you are wide-awake. It sinks lower as you become tired, and reaches a minimum when you are asleep. (In fact it oscillates up and down several times over a day.)
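
A minimal sketch of this picture, assuming (purely for illustration) that focus can be scored between 0 and 1 and that its daily oscillation can be caricatured as a two-cycle sine curve; the shape, period and phase are my assumptions, not physiology:

```python
import math

def focus_level(hour: float) -> float:
    """Illustrative focus value in [0, 1] at a given hour of the day.

    A crude model of the claim that focus "oscillates up and down
    several times over a day". The two-cycle sine shape, period and
    phase are assumptions made for the sketch.
    """
    return 0.5 + 0.5 * math.sin(2 * math.pi * (hour - 4.0) / 12.0)

for h in (3, 9, 14, 20):
    print(f"{h:02d}:00  focus = {focus_level(h):.2f}")
```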

This continuum of thought-styles is the "cognitive spectrum," the basic fact of human thought.

Now, why does reality-loss happen as you fall asleep and dream? How does it work? Your mind stores memories; some are remembered scenes or experiences.

Each remembered experience is, potentially, an alternate reality. Remembering such experiences in the ordinary sense — remembering "the beach last summer" — means, in effect, to inspect the memory from outside. But there is another kind of remembering too: sometimes remembering "the beach last summer" means re-entering the experience, re-experiencing the beach last summer: seeing the water, hearing the waves, feeling the sunlight and sand; making real the potential reality trapped in the memory.

(An analogy: we store potential energy in an object by moving it upwards against gravity. We store potential reality in our minds by creating a memory.)

Just as thinking works differently at the top and bottom of the cognitive spectrum, remembering works differently too. At the high-focus end, remembering means ordinary remembering; "recalling" the beach. At the low-focus end, remembering means re-experiencing the beach. (We can re-experience a memory on purpose, in a limited way: you can imagine the look and fragrance of a red rose. But when focus is low, you have no choice. When you remember something, you must re-experience it.)
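
In software terms, one might render the two kinds of remembering as a single operation whose behavior is gated by focus. A sketch, assuming a hard 0.5 threshold and a toy Memory record (both are my assumptions; the essay describes a continuum, not a switch):

```python
from dataclasses import dataclass

@dataclass
class Memory:
    label: str           # e.g. "the beach last summer"
    sensory_detail: str  # what re-entering the memory feels like

def remember(memory: Memory, focus: float) -> str:
    """Recall from outside at high focus; re-experience at low focus."""
    if focus > 0.5:
        return f"I recall {memory.label}."  # inspecting the memory from outside
    return memory.sensory_detail            # re-entering the scene itself

beach = Memory("the beach last summer",
               "warm sand underfoot, glare on the water, the sound of waves")
print(remember(beach, focus=0.9))  # ordinary recall
print(remember(beach, focus=0.2))  # re-experiencing
```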

We re-experience (or re-enter) memories when we dream. The memories we re-enter are sometimes distorted, or incomplete, or have other memories added to them; "dream-logic" governs the process by which memories are re-experienced in dreams. (Dream-logic connects memories together, sometimes one on top of another, using the powerful glue of shared emotional content. As focus falls, memories grow sticky.)
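
The "sticky memories" idea suggests a simple mechanism one could prototype: encode each memory's emotional content as a vector, link memories whose emotions are similar, and lower the required similarity as focus falls. The vector encoding, the cosine measure and the use of focus itself as the threshold are all illustrative assumptions:

```python
import math

def emotional_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two emotion vectors (an assumed encoding)."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.hypot(*a) * math.hypot(*b)
    return dot / norms if norms else 0.0

def next_dream_memory(current_emotion, memories, focus):
    """Pick the memory whose emotion best matches the current one,
    accepting weaker matches as focus falls: "memories grow sticky"."""
    sim, best = max(((emotional_similarity(current_emotion, m["emotion"]), m)
                     for m in memories), key=lambda pair: pair[0])
    return best if sim >= focus else None

memories = [
    {"label": "first warm day of spring", "emotion": [0.9, 0.1, 0.3]},
    {"label": "exam morning",             "emotion": [0.1, 0.8, 0.6]},
]
# At low focus almost anything sticks; at high focus nothing does.
print(next_dream_memory([0.8, 0.2, 0.4], memories, focus=0.2))
print(next_dream_memory([0.8, 0.2, 0.4], memories, focus=0.99))
```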

When your focus is high, you control your thoughts. You observe and consider logically; you confront problems and solve them rationally. As your focus-level falls, you begin to lose control of your thinking. Your mind wanders; one thought leads to another. When you look out a window and let your mind drift, your thoughts take their own course — but you can still resume control (and get back to work) when you choose.

As focus-level falls still lower, your thought-stream moves completely beyond conscious control. And when you fall asleep, your dreams seem to happen without conscious guidance. You experience dreams in nearly the same way you experience external reality.

Losing control of your thought-stream equals losing reality. Partway down the spectrum, as you look out that window and your thoughts wander, you have not yet lost reality; you are still aware of your environment. But you are day-dreaming, distracted, less aware of reality. As your focus drifts still lower and you approach sleep, loss of thought-control and loss of reality progress. ("Dissolving," lösend, writes Rilke, describes your sense of reality as you approach sleep.) When you sleep and dream, your thoughts are beyond ordinary conscious control — dreams make themselves; and reality is gone.


"On ne doit pas dire, je pense," wrote Rimbaud, "mais, on me pense." "One should say not `I think' but `I am thought.'" Like nearly all poets, he frequented the mental neighborhood between wakefulness and sleep: just beyond conscious control, just before sleep and dreams. "What is time? When is the present?" asks Rilke in a letter after completing the Duino Elegies. He was a master of low-focus thought; he watched his own mind carefully as he descended the long, stout rope of the cognitive spectrum into mental regions where external reality fades and imaginary reality brightens; where thoughts flow freely and strange new analogies emerge. (Rilke himself uses the image of mental descent: "I have descended into my work farther than ever before.")

Poets and madmen haunt this mental neighborhood. Coleridge was intrigued by the state of "half-awake & half-asleep"; he composed Kubla Khan (he called it "a psychological curiosity") in an opium haze. Keats: "Fled is that music. Do I wake or sleep?" Büchner's view of the slightly-insane Lenz: "If I could only decide, whether I am dreaming or awake…"

Some prophets and poets experience this twilight consciousness as a region of visions. A friend of William Blake wrote, "Of the faculty of Vision he spoke as One he had had from early infancy — He thinks all men partake of it — but it is lost by not being cultivated." Blake believed, in other words, that seeing visions is normal; "all men partake of it." And all men do indeed move down the cognitive spectrum every day.

But some people are more alive to this experience than others. T.S. Eliot comments on the medieval sensibility of Dante: his is "a visual imagination…. It is visual in the sense that he lived in an age in which men still saw visions. It was a psychological habit, the trick of which we have forgotten." We each pass through this zone of visions every day on our way to sleep; but the journey makes no impression on most of us, and we have no desire to linger.

The daily oscillation of human thought is like an ocean tide. Let's pursue this analogy: as the tide (or your focus-level) falls, large stretches of sandy sea-bottom are exposed — and you can see bottom-thoughts that are ordinarily hidden. As you lose control over your thinking, you can no longer consciously avoid bizarre or unpleasant thoughts (although, as Freud points out, you might still unconsciously avoid them).

Creativity has always been fascinating. Cognitive psychologists generally agree that creativity happens when a new analogy is invented. When your mind connects two things that aren't usually connected — an infant bird's first flight and a crack in a tea cup, to use a Rilke example — you have a new analogy, and a basis for seeing the world in a new light. (Rilke draws a sort of conclusion from his new analogy: "So the bat/ quivers across the porcelain of evening." (Transl. Stephen Mitchell.) Of all great lyric poets, perhaps only Keats had a more fertile mind for imagery.)

Most new analogies lead nowhere, but occasionally they reveal something important. Creativity doesn't operate when your focus is high; only when your thoughts have started to drift is creativity possible. We find creative solutions to a problem when it lingers at the back of our minds, not when it monopolizes attention by standing at the front. You can't make yourself fall asleep; nor can you make yourself have a creative inspiration (in the way you can make yourself solve an arithmetic problem). Sleep and creativity happen only when your thoughts drift beyond your control.

Which leads to a final observation. How do we invent new analogies? This is a major unsolved problem of cognitive science. Often, remembered and re-experienced emotions are the key to novel, unexpected analogies. Emotion summarizes experience. If the subtle emotion you happen to feel on the first warm, bright day of spring (an emotion that has no name) is similar to the emotion you felt the first time you took a girl to the movies, this particular emotion might connect the two events; and next year's first warm spring day might cause you to remember the girl and the movie.

No computer will be creative unless it can simulate all the nuances of human emotion.

We tend to think of emotions in a few primary colors: happy, sad, angry…. But our real emotional states are almost always far more subtle and complex. How do you feel when you've hit a tennis ball hard and well, or driven a nail into a plank with two perfect hammer blows? When you first re-enter, as an adult, the school you attended as a child? When you spot the spires of Chartres on the horizon, or your son's girlfriend reminds you of a girl you once knew? Or the day turns suddenly dark and a storm threatens, or your best friend is about to make a big mistake but you can't tell him?

Emotion is the music, the score or soundtrack, that accompanies life; emotions are as distinctive as musical phrases. Just as a snatch of music might bring to mind some long-ago scene, a re-experienced emotion can make us remember a different time and place.

But here the analogy breaks down. A song or phrase might be associated purely by accident with a certain experience. But an emotion is caused by the experience, and summarizes in one feeling an entire, complex scene. An emotion encodes an experience.


We can't understand literature properly unless we know that different works are composed at different "focus levels" (as magnetic tapes are recorded at different speeds). We must read at the correct focus level or "tape speed." Kafka is a famous case. His intention, he said, was to write about his "dreamlike inner life." He meant it literally; we can understand his works only as examples of the dream as a literary form or genre. Kafka's transcription of dream-thought is so accurate that we can use his work as a guide to the structure and logic of dreams. Louis Begley writes in a recent study (2008) that Kafka's great invention was "the nonchalant treatment of events in his fiction that every reader knows are implausible … or outright impossible." But it's better to say that his great invention was a modern version of the dream as a literary form.

Low-focus genres are especially important to ancient literature. Jacob's all-night struggle in Genesis 32 can best be understood as an ancient example of the dream genre (as the medieval philosopher Maimonides knew). Exodus 4:24-26 — perhaps the most difficult passage in the whole Bible — can only be understood as an example of the nightmare genre, which Kafka revived.

Epic, tragedy and romance are literary forms with their own typical structures; so are prophecy, dream, nightmare.

In all this, we have kept to the straight and simple path of common sense. Now we can describe, in rough and simple terms — "folk psychology" terms — the operation of human thought.

Imagine two entities, Consciousness and Memory. Each corresponds to certain physical structures in the human body. But we are interested in the piano sonata, not the piano. The sonata's structure is real, although it is not physical. (In modern terminology we might call it a "virtual structure.") The piano has its own structure. Our topic is the sonata of thought, not the grand piano of the brain.

We can picture the tidal process of human thought in terms of Consciousness and Memory. Imagine a small circle inside a bigger one: at maximum focus, Memory (the small circle) is wholly contained within Consciousness (the large one); and Consciousness is surrounded in turn by external reality. You are conscious of memory within you and reality outside you. You are in conscious control of your thinking and remembering.

At minimum focus, Consciousness is the small circle, wholly surrounded by Memory. Memory comes between consciousness and external reality; consciousness is shut off like a castle by its moat. You are conscious only of internal, imaginary reality.

As focus-level falls, the two circles gradually trade places.

And this is the daily, tidal rhythm of the human mind.

(These pictures might sound abstract, but they can be rough blueprints for software.)
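
Taken as a blueprint, the picture might begin as something like the following sketch, which maps a focus level onto the two-circle configuration. The discrete cut-offs are my assumptions; the essay describes a gradual exchange of places, not steps:

```python
def mental_configuration(focus: float) -> str:
    """Map a focus level in [0, 1] onto the Consciousness/Memory picture."""
    if focus > 0.66:
        return ("Memory inside Consciousness: aware of memory within and "
                "external reality without; thought under conscious control.")
    if focus > 0.33:
        return ("The circles trading places: thoughts drift, reality dims, "
                "but control can still be resumed.")
    return ("Consciousness inside Memory: shut off from external reality, "
            "conscious only of the internal, imaginary world.")

for f in (0.9, 0.5, 0.1):
    print(f"focus {f}: {mental_configuration(f)}")
```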


What does the cognitive spectrum have to do with the intelligence of the internet, or artificial thought in general?

First: as the philosopher Paul Ziff insisted, intelligence can only mean human or human-like intelligence. (We assume that an animal's mind is human-like to the extent that the animal itself seems human-like.) Some people believe that the internet will develop an entirely new form of intelligence. But this is meaningless (to put it differently, it is nonsense). It's like saying that you have discovered a new flavor of chocolate. But the flavor called chocolate is exactly what we say it is; there is no other definition. If your "new flavor" tastes like chocolate, it isn't new; if it doesn't, it isn't chocolate. If your new form of intelligence is human-like, it's not new. If it isn't human-like, it's not intelligence.

Could human-like intelligence emerge on the internet? No. First, the raw materials are wrong. Human beings and animals are conscious and, as the philosopher John Searle has argued (in effect), a scientist must assume that consciousness results from a certain chemical, physical structure — just as photosynthesis results from the chemistry of plants. You can't program your laptop or cellphone to transform carbon dioxide into sugar; computers are made of the wrong stuff for photosynthesis — and the wrong stuff for consciousness.

You can instruct one computer and one man to imagine a rose and then describe it. You might get two similar descriptions, and be unable to tell which is the man's and which the computer's. But there is an important difference: the man actually sees and senses a rose in his mind; he can imagine its color, feel and fragrance. For the computer, no imaginary rose exists and there is no inner mental world; there is only a blank. (In philosophical terms, this is the "absent qualia" problem.)

Furthermore, human consciousness and thought emerged from a mechanism (genetic mutation) that allowed endless, nuanced variations to be tested — under the uncompromising pressure of survival or death. Neither condition holds on the internet as we know it. Expecting intelligence to emerge on the internet is like expecting a car to move when you floor the accelerator, even though it has no motor.

As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.

The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."

But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).

The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)
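
That parenthetical suggestion maps naturally onto supervised learning: collect (situation, reported emotion) pairs from one subject, then predict the emotion for a new situation. A hedged sketch, using nearest-neighbour lookup as a stand-in for whatever training method a real system would use; the feature encodings are invented for illustration:

```python
import math

# Hypothetical training pairs for one human subject:
# (situation features, reported emotion vector).
training = [
    ([1.0, 0.0, 0.2], [0.9, 0.1]),  # bright spring morning -> mostly joy
    ([0.1, 0.9, 0.7], [0.2, 0.8]),  # sudden dark storm     -> mostly unease
]

def predict_emotion(situation: list[float]) -> list[float]:
    """Return the emotion the subject reported in the most similar
    recorded situation (1-nearest-neighbour)."""
    _, emotion = min(((math.dist(situation, s), e) for s, e in training),
                     key=lambda pair: pair[0])
    return emotion

print(predict_emotion([0.9, 0.1, 0.3]))  # -> [0.9, 0.1]
```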

One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.
And then what?
