June 2002

A Scientist's Notebook
by Gregory Benford

Real Robots

This last Christmas season the big hit toy was the line of doggie robots. Ranging in price from $50 to $1500, they became a fun addition to the gadget-prone household.

It seems likely that robots are poised to be the next major consumer good that will alter the way we live. Robots are about to become far more interactive, coupling sensors to actuators in real time, as responsive as living tissue. This is a critical transition, because once people find that robots are not any longer, well, robotic, they will treat them differently--as servants, even as pets. We will see the advent of whole product lines: Here Comes the Robo-[Fill In Your Need Here].

But a house filled with humanoid robo-butlers and maids is still far in the future. We will meet our future servants as small devices engineered for narrow tasks. Robo-lawn mowers, golf caddies, and vacuum cleaners will use MEMS (microelectromechanical systems) to navigate their world without getting hung up on obstacles or running us over. Within a decade, robot security guards, at-home agents, and helpers will be common among the same economic classes that first adopted computers. Remember when Atari and others pushed the personal computer as a game-playing device? That opening wedge gave us within a decade the spread of computers that did real tasks, becoming indispensable.

If the same is in the offing for robots, what can we expect? Will nearly a century of fictional thinking about them be relevant?

Though it will surely be decades before robots think with any subtlety, our attention quickly focuses on the problem of how much like us they could become--a symptom of a profound anxiety, I suspect. For this column, let's concede that the problem will eventually arise, though not immediately. How useful is the vast lore of 20th Century thinking in dealing with it?

The first attempts to think constructively about how to deal with forms that were quite different from us--cyborgs, androids, and robots--came from science fiction. (For a current summary, a handy reference is Mind Matters: Exploring the World of Artificial Intelligence by James P. Hogan, Ballantine, 1999.)

Robots present the most extreme case of this, with no fleshy components, so they attracted the vivid imaginations of such early thinkers as Isaac Asimov. Used to thinking systematically because he was a trained biochemist with a PhD from Columbia University, Asimov wrote a groundbreaking series of stories built around his fundamental Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These Laws shaped much thinking, both in science fiction and in robotics, for decades. Only now, over half a century since they were worked out in an ingenious series of stories which tested each phrase of the Laws, can we see them as projections of the anxieties and assumptions of their time. Rather than universal Laws, they are really rules for behavior. They center around several implicit attitudes, ones which bear on any partially artificial being we may encounter, such as androids or cyborgs.
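
To see how literal-minded a reading the Laws invite, here is a toy sketch of them as a strict precedence check. It is purely my own illustration: the class names, the flags, and the whole idea of reducing "harm" and "orders" to simple booleans are assumptions for the example, not anything Asimov or any real robot builder specifies.

    # A purely hypothetical rendering of the Three Laws as a priority-ordered test.
    # "Harm," "orders," and "self-risk" are collapsed into simple flags here--exactly
    # the kind of oversimplification the stories themselves kept probing.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False          # would this action injure a human?
        prevents_human_harm: bool = False  # does it avert harm to a human?
        ordered_by_human: bool = False     # did a human order it?
        endangers_self: bool = False       # does it risk the robot's own existence?

    def permitted(action: Action, inaction_harms_human: bool) -> bool:
        # First Law: never injure a human, nor allow harm through inaction.
        if action.harms_human:
            return False
        if inaction_harms_human and not action.prevents_human_harm:
            return False
        # Second Law: obey human orders unless they conflict with the First Law.
        if action.ordered_by_human:
            return True
        # Third Law: protect your own existence unless that conflicts with the above.
        if action.endangers_self:
            return False
        return True

    # A human order to hurt a bystander is refused: the First Law outranks the Second.
    print(permitted(Action("push bystander", harms_human=True, ordered_by_human=True),
                    inaction_harms_human=False))   # -> False

The neatness is deceptive, of course; everything interesting hides inside deciding what counts as "harm" in the first place.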

The message of these laws is that avoiding evil robot impulses is crucial, as though they would naturally arise among any thinking entities. That assumption glares out from the 1940s, when certainly recent history seemed to give ample proof of seemingly inherent human evil. What's more, the animal kingdom—"nature red in tooth and claw," as the poet has it—echoes with the cries of those who came in second place in the struggle to survive.

Asimov noted that as seemingly basic an instinct as self-preservation would need to be introduced into a mind that one was building from scratch. Nature gives animals the savvy to stay alive, but machines must have this inserted. Generally they don't; in the decades since the Three Laws were worked out, we have built "smart bombs" and cruise missiles that happily commit suicide while damaging our enemies. Asimov presciently patched this up in the Third Law. The Second Law is actually needed to enforce the First Law--otherwise, how would a robot know that it must obey? Robots ordering other robots must not override human commands. (But with the advent of cyborgs, how is a robot to know a true human? This is an interesting channel for future stories. One can imagine a cyborg demoted to less-than-human status because robots refuse to recognize him or her.)

The Three Laws of Robotics are in fact moral principles disguised as instructions. Compare them to the Ten Commandments, which are much more specific: Honor thy father and mother. Do not kill. Of course working out how to use these (when is it okay to kill in warfare? and who?) demands more interpretation.

But then, so do the Three Laws. In fact, rather than guides to how to build robots and program them, they are better seen as what they originally were: a neat way to frame a continuing series of ingenious stories, each testing the boundaries of the Laws. Real robots need much more specific engines to tell them how to work. How to do this is still unknown. Humans know the law and obey it if they choose. We want robots to obey the law always, since with superior strength, endurance, and ruggedness they could be terribly dangerous.

But we do not know how to force such compliance in a machine that must still have some measure of autonomy. Perhaps we never will. There could be an inherent tension between independence of mind and obedience to law. In this regard, robots would be much like us. We may have to accept some danger as a trade-off for some degree of robot autonomy.

*     *     *

A half-century of artificial intelligence research has now made us realize that the tough problem is how to instill motivations in other minds at all. Getting robots to obey our laws is, at bottom, part of the larger problem of getting them to do anything whatever.

For example, survival is not one task but a suite of ever-alert programs which have to interact with the ever-shifting environment. So are other, milder motivations like empathy and cooperation.

This realization, that our commonplace urges are really quite complex, has made us see that many supposedly simple human tasks are very complicated. Take, say, picking up a cup of tea and then not sipping from it, instead blowing across the top because we can tell it is too hot. This is in fact a feat of agility, sensing, and judgment that no machine can presently perform nearly as well. (Nor does a machine store the memory of burnt lips from a decade ago, which pops up as a warning when we reach for a cup.) Indeed, if one computer could do all that, it would have to be specifically designed for that job, and could do nothing else. Yet we can sip tea and read the newspaper, half-listen to our mate's breakfast conversation, and keep breathing.

Such intricacy is built in at the foundations of our minds and bodies. Life is tough; we must do several things at once, or more versatile creatures will do us in.

But do we perform such adroit tasks as (to echo an old joke) simultaneously walking and chewing gum, all by following rules? This is the contrast between knowing how and knowing that, as Keith Devlin of Stanford University puts it. This fundamentally Cartesian emphasis on following rules to order our actions raises a central question: can we work that way? Do we?

It seems logical; just follow the directions. But in 1670 Blaise Pascal, the mathematical philosopher, saw the flaw: "Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous...the mind...does it tacitly, naturally, and without technical rules."

Indeed, we do not ride a bicycle by following serial rules; we take in many inputs in parallel and respond in ways not yet understood. Here is where the entire agenda of rules-run intelligence runs into a deep problem.

If tasks are done sequentially, they run the risk of not getting done fast enough--so the only answer is to speed up the computer. This can lead to huge problems, because neither humans nor computers can simply run up their speed to meet any problem. Our brains evolved parallel processing (solving problems by running separate programs simultaneously) to keep up with the real world's speeds. They can't just add new lobes for new problems, except on an evolutionary time scale of millions of years.
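
For a concrete, if cartoonish, illustration of the difference, here is a small sketch of my own; the "sensors" and their timings are invented, and real nervous systems do nothing so tidy.

    # Toy contrast between serial and parallel handling of time-consuming checks.
    # Each "sensor" just sleeps to stand in for real work; all names are invented.
    import time
    from concurrent.futures import ThreadPoolExecutor

    def check_sensor(name: str, seconds: float) -> str:
        time.sleep(seconds)   # stand-in for reading and interpreting a sensor
        return f"{name} ok"

    tasks = [("vision", 0.3), ("balance", 0.3), ("hearing", 0.3)]

    # Serial: total time is the sum of the individual checks (about 0.9 s here).
    start = time.perf_counter()
    serial = [check_sensor(name, secs) for name, secs in tasks]
    print("serial:  ", serial, f"{time.perf_counter() - start:.2f}s")

    # Parallel: total time is roughly the single slowest check (about 0.3 s).
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        parallel = list(pool.map(lambda task: check_sensor(*task), tasks))
    print("parallel:", parallel, f"{time.perf_counter() - start:.2f}s")

The serial version gets slower with every task added; the parallel one does not, which is the brain's trick, bought with lots of dedicated hardware rather than raw speed.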

This also relates to the top-down approach to artificial intelligence. Perhaps that approach is fundamentally limited because a rules-based mind would be too hobbled and slow. The alternative, starting with systems that learn in small ways and build up a concept of the world from direct experience (as in mobile robots), may work better. Nobody knows as yet.

One can imagine a robot brought to trial for some misdeed, perhaps injuring a human. Since it is allowed freedom of movement, it is held liable for its acts. Such a trial would then define robots as having human status, held accountable to human law. Persons. Very Asimovian.

*     *     *

A standard plot device of much fiction, and television, can be expressed as a simple question: Will robots feel?

The robots we meet in the next few decades will not look like mechanical men, the classic science fictional image. There were good storytelling reasons to make robots humanoid, to get the audience to identify with them. Some current robot builders betray the same need to make their machines look or move like humans (as with the MIT facial robot, Kismet). We have experience in dealing with humanlike others, after all. But perhaps the essence of robots will be that they are not like us, and we should not think of them that way, however appealing that might be.

We are trained by life and by society to assume a great deal about others, without evidence. Bluntly put, nobody knows for sure that anyone else has emotions. There are about six billion electrochemical systems walking around this planet, each apparently sensing an operatic mix of feelings, sensations, myriad delights--but we only infer this, since we directly experience only our own.

Society would be impossible to run without our assumption that other people share our inner mental states, of which emotions are the most powerful. Without assuming that, we could anticipate very little of what others might do.

Robots bring this question to the foreground. How could we tell what a robot would do? We could install Asimov's Three Laws, and pile on maybe dozens more—but working out what will happen next is like treating life as an elaborate exercise with an instruction manual. Nobody thinks that way, and robots would be rule-bound catatonics if they had to function like that.

Should we want a robot to take up the task of acting so that we could always predict how it would feel? That ability is available already, without the bother of soldering components together in a factory--it only takes two people and nine months, plus a decade or two of socialization. Surely we do not want robots to just act like us, if they are to be anything more than simple slaves.

Suppose as good Darwinians we define emotions as electrical signals that are apt to make us repeat certain behaviors, because those increase our chances of reproducing ourselves. Then feelings look almost like computerized instructions, overriding commands--and machines can readily experience these. Suppose that in a decade your personal DeskSec comes built into your office, and from the first day quickly learns your preferences in background music, favorite phone numbers, office humidity, the word-processing typeface on your computer monitor--the works. A great simplifier.
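
On that Darwinian reading, an emotion is not far from a reinforcement signal, and machines can certainly run those. Here is a toy sketch--the DeskSec itself is hypothetical, and the options, rewards, and learning rate below are invented purely for illustration--of how a preference could be "learned" from nothing more than repeated feedback.

    # Toy preference learner in the spirit of the hypothetical DeskSec.
    # Positive feedback nudges an option's score up, making it more likely
    # to be repeated--a crude stand-in for an emotion-like reinforcement signal.
    import random

    class PreferenceLearner:
        def __init__(self, options, learning_rate=0.2):
            self.scores = {option: 0.0 for option in options}
            self.learning_rate = learning_rate

        def suggest(self) -> str:
            # Usually pick the best-scoring option; occasionally try another.
            if random.random() < 0.1:
                return random.choice(list(self.scores))
            return max(self.scores, key=self.scores.get)

        def feedback(self, option: str, reward: float) -> None:
            # Move the option's score a little way toward the latest reward.
            self.scores[option] += self.learning_rate * (reward - self.scores[option])

    music = PreferenceLearner(["baroque", "jazz", "silence"])
    for _ in range(50):
        choice = music.suggest()
        # Pretend the user smiles at jazz and frowns at everything else.
        music.feedback(choice, reward=1.0 if choice == "jazz" else -0.5)
    print(music.scores)   # "jazz" ends up with the highest score

Nothing in that loop "feels" anything, of course--which is precisely the question at issue.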

Then this Girl Friday might get quite irked when things don't go right, letting you know by tightening "her" voice, talking faster, maybe even fidgeting with the office hardware. Does the DeskSec have emotions as we understand them? To answer that, we must assess how realistically the DeskSec does its acting job.

That is, is it acting? All such questions arise from the tension between internal definitions of mental states and external clues to them. We imagine that other people feel joy or pain because they express it in ways that echo our expressions. Of course, we have a lot of help. We know that we share with other humans a lot of common experience, from the anxieties and joys of growing up to the simple pain of stubbing a toe. So with plenty of clues, we can confidently believe we understand others. This is the task met by good actors—how to render those signs, verbal and physical, that tell an audience, "See, I feel this, too."

Even so, juries notoriously cannot tell when witnesses are lying. We can't use our sense of connection to others to get reliable information about them, because people know how to fake signals. Evidence accumulates that even our nearest surviving relations, the chimpanzees, do not readily ascribe to their fellows (or to us!) an inner consciousness. We would say that they have little social awareness, beyond the easy signals of dominance hierarchy.

Chimps can build a model including human awareness, though. This is explored in experiments with them, which test whether they really do register where human attention is directed. They quickly work this out, if there is a reward in store. (This may come from trying to guess what the Alpha Male of the tribe is going to do next.) But they do not naturally carry this sense into their ordinary lives. Taught to realize that humans facing away from them can be looking over a shoulder, they respond to this fact to get bananas. But a year later they have forgotten this skill; it's not part of their consciousness tool kit.

These are the sorts of tests we should apply to any robots who petition to be regarded as human. Such tests are rigorous; perhaps only the dolphins can pass them now.

When and if robots can compose symphonies, then we'll be on the verge of asking serious questions about the inner experiences of machines. If we decide that robots have a supple model of us, we may have to ascribe human-like selfhood to them. Aside from myriad legal implications, this means we will inevitably be led to accept, in machines, emotions as well as abstractions.

Not that this will be an unalloyed plus. Who wants robots who get short-tempered, or fall in love with us?

Inevitably, robots that mimic emotions will elicit from us the urge to treat them as humans. But we should use "mimic," because that is all we will ever know of their true internal states. We could even build robots who behave like electronic Zen masters, rendering services with an acute sense of our human condition, and a desire to lessen our anguish. But we will not know that they are spiritual machines.

Probably someone will strive to perfect just such robots. After all, why should robotic emotions not be the very best we can muster, instead of, say, our temper tantrums and envy?

Emotions, as I have argued elsewhere, are a vital part of our psyche. We have no idea how an intelligent mind of any subtlety would work without emotions. Humans with disabled emotional centers do things that seem rational to them, but that, in their lack of foresight and insight into other people, seem absurd or even suicidal to the rest of us. That is the implicit threat many feel about intelligent, emotionless robots—that they would be beyond our understanding, and so eventually beyond our control.

We may be forced, then, to include some emotional superstructure in any advanced robot "psyche." Perhaps the inevitable answer to Will robots feel? is "They'll have to—we'll demand it."

*     *     *

How will they act? All along, philosophers and computer mathematicians have told us that our uniquely human skill at juggling symbols, particularly words and numbers, defines us. Small surprise that they happen to be good at this themselves and, believing these abilities define the pinnacle of creation, think that they have captured consciousness. This belief is comforting, and goes back to Plato and to Marcus Aurelius, who commanded, "Use animals and other things and objects freely; but behave in a social spirit toward human beings, because they can reason."

But other, simpler definitions can illuminate how robots may behave. Humans are not just symbol-movers. One of our least noticed traits is that we fall unconscious every day for many hours, while many animals do not.

Is sleep important?

Living on a planet with a single sun, and a pronounced day-night cycle, has shaped the biology and ecology of almost all animals. One must say 'almost' because the deep sea is uniformly dark, and yet it sustains a surprisingly complex ecology—witness the thermal vent communities.

As day-living, light-adapted creatures, we are most familiar with the other day inhabitants, but at night, in the ocean as well as on land, a whole new suite of animals emerges. Among them, owls replace hawks, moths replace butterflies, bats fly instead of most birds, flying squirrels replace almost identical day-living ones. On coral reefs all manner of creatures emerge from sheltered recesses when night falls.

Animals without backbones, and the slower, cold-blooded chordates, do not indulge in sleep as we do. They hide and rest for a few hours, but display little change in neural activity while they do so. This fits with the idea that it is smart to stay out of the way of predators for a while, and that some rest is good for any organism, but these periods among the simpler orders of life are brief, a few hours, and carry no mental signatures of diminished brain activity. Quite probably, the defenses are still running, ears pricked for suspicious sounds, nose twitching at the unfamiliar scent.

Even among vertebrates, only mammals and birds have a characteristic shift from fast waves to slow ones in the forebrain, the typical signature of deep sleep. Probably this is due to the great development of these two groups' cerebral hemispheres. The simpler brains could not display the advanced signs of sleep, because they do not have a cerebral cortex, and do not shift wave rhythms.

Indeed, sleep is risky. Like consciousness, it demands time and body energy. Nature does not allow such investments to persist without payoffs, so both traits must have conferred survival capability far back in antiquity. On the face of it, lying around in a deep torpor, exposed to attack, does not sound like a smart move. Yet we and other mammals cannot do without our sleep. Deprived of it, we get edgy, then irritable, then have fainting spells, hallucinations, and finally we collapse or even die. Sleep can't be a simple conservation move, either, because we save only about 120 calories during a full eight hours of lying insensate. Even for the warm-bloods, that's not a big gain; it equals the calories in a can of Pepsi.

It's also unlikely that nature enforces true sleep solely to keep us from wandering around in the dark, when we are more vulnerable. If the day-night cycle imposed by the planet was the primary reason for an enforced downtime (an ecological reason), it seems likely that evolution would've taken advantage of it for purely biological reasons. For example, plants undergo dark time chemical reactions that ultimately trigger flowering at a precise time of the year. Animals, too, would've 'invented' things to do during an imposed rest period. So which came first--the ecological or the biological reason for sleep? It's a chicken-and-egg kind of argument that science hasn't answered with finality.

In any case, large animals and birds must sleep, even when they have no ready shelter, or prospect of any, as in the African veldt. Horses sleep only three hours a day, with only about 20 minutes lying down, but they would be safer if evolution let them stay awake all the time.

Sea otters, air-breathing mammals living precariously in the ocean waves, tie themselves to giant kelp and sleep half a brain at a time. One hemisphere sleeps while the other literally keeps a watchful eye out for danger.

Sleep seems basic. We process memories while dead to the world, throwing out many and storing away far fewer for later use. We arise refreshed, probably because sleep has tidied up and repaired some sort of damage that consciousness does to our brains. Take that processing and neatening-up away and we work less reliably and get sick more often.

This correspondence between sleep and consciousness suggests that animals slumber because they have some need of repair work, just like we do. Plausibly, the daily waking state of mind among animals that must sleep resembles our mental frame of the world, the modeling we call consciousness. This seems a sensible explanation for our intuition that our mammal pets have some kind of consciousness, interpreting their world in ways we understand automatically--as, say, when a dog tugs on his leash as he nears a favorite running spot, giving all the signs of joy and anticipation.

Since consciousness has evolutionary utility, and sleep cleans up after consciousness has messed up our minds a bit, we must see these as parallel abilities, each making its contribution to our survival.

A natural conclusion, then, is that conscious robots will have to sleep. They will not be tireless workers like the present automatons in car factories, riveting doors to frames around the clock. "Useless" sleep hours must be budgeted into their lives.

The same then holds true for Artificial Intelligences. Mathematicians have long seen these as complex devices for carrying out programs, called algorithms. But if robots must be refreshed, sleep is probably only one of the necessities. We do not keep people trapped in rooms, laboring incessantly when they are not catching their zzzzzzs. Not only would they protest, they would get dull, listless, and inefficient.

So robots and even computer minds will probably have to have regular outings, vacations, time off to recreate themselves. This will make them seem far more human-like to us, of course, because they will be exactly so. As philosopher Matt Cartmill notes, "If we ever succeed in creating an artificial intelligence, it's going to have to be something more than just an algorithm machine." How much more, no one knows as yet. Probably it will be much more like ordinary workers, needing time to laze around, be amused, distracted, and relaxed.

So they will resemble us rather more than we might like. They will need down time, and probably have vexing emotions. Whether they are worth such liabilities will be a matter of taste.

===THE END===


copyright © 2002 Abbenford Associates

Comments on this column are welcome at gbenford@uci.edu, or Physics Dept., Univ. Calif., Irvine, CA 92717. This column was based in part on the PBS TV show and book Beyond Human by Gregory Benford and Elisabeth Malartre.

