
Robots and humans
an essay (1997)
Colin Fahey

1. Foreword

The purpose of the following essay is to inspire people to think about the interesting possibilities for the future of robots and humans.  The arguments presented here are informal, and the topics are ambitious and controversial: reality, the mind, consciousness, and humanity.  Philosophy students may find the reasoning exceedingly naive, and maybe a little arrogant, but even if this work does not represent the latest advances in epistemology and metaphysics, it is hoped that many readers will find the conclusions plausible, if not absolutely convincing.  The general strategy here is to appeal to what might be called our practical sensibilities, rather than indulge the consummate skeptic. 

 
2. Introduction


An automaton is a machine that is capable of operating for a significant amount of time without external guidance or assistance.  A robot is an automaton that resembles a human.  With each advance in technology we learn that it is possible to imitate the efficiency and agility of our bodies, and the complexity and power of our intellect, with greater success.  Our society enthusiastically supports the efforts of scientists and engineers to build faster, smaller, and more sophisticated machines.  Our culture adapts to the new machines; a process made easier by the fact that many of the new machines are simply improvements upon older designs, or are functionally equivalent to devices or systems we once used.  The concept of making a machine that looks and behaves like a human was certainly discussed centuries ago, and confidence in our ability to make an artificial man that can not be distinguished from a member of our species is increasing.  

Perhaps we have been fortunate that the problems facing those who attempt to build better robots have been so challenging that the rate of progress has not overtaken our ability to anticipate and accept changes in our lives.  However, we are nonetheless heading toward a time when the relationship between humans and robots will be very different than it is today, and humans will be faced with an identity crisis that may lead us to redefine ourselves as more than organic creatures with particular habits.  

This essay discusses the future of robots and humans.  After establishing a model of reality, the idea of consciousness is introduced and refined, leading to the conclusion that robots can come arbitrarily close to behaving as we do, and that such robots would not simply imitate us, but would be independent creatures that made their own decisions and experienced the world as any of us would.  Various social issues concerning the interaction between robots and humans will be presented.  Finally, the question of whether or not robots will replace us, after competing with our species, is asked with the intent of determining what it means to be human. 

 
3. Model of reality


The following assumptions define the model of reality upon which the arguments in this essay are based: 
(1) Nothing other than the physical world exists.  

(2) The physical laws as we understand them today will continue to successfully describe phenomena from galaxies to subatomic particles, and future discoveries will only refine these laws. 
Few people are willing to accept these assumptions, because a host of unpleasant implications result.  However, neither of the two assumptions can be refuted by logic or current evidence.  Furthermore, it does not seem especially arbitrary for us to assume these two things when we receive reassuring feedback from the world when we act according to these assumptions. Although argument can do little more than suggest, rather than prove, the validity of these assumptions, it is worth some discussion.  

First, we can accept these assumptions even if we are really suffering from massive deception about the true nature of reality, or if we are really alone in the Universe and dreaming up everything we experience.  If the illusion continues in the same way, then we can accept the assumptions as a true description of the illusion in which we are trapped.  Of course anything can happen if the entity thus far maintaining the illusion should change its mind about how your reality should work, but if you are willing to seriously consider this possibility then you have concerns more serious than the subject at hand.  For those less speculative, but worried that the physical laws may change someday, it would be worthwhile to embrace the assumptions in much the same spirit; using them until they don't work.  

For those who believe that gods, spirits, or psychic forces have an effect on the physical world, the only way to salvage the arguments that follow is to accept that these metaphysical entities do not interact with the physical world often enough to sabotage the general applicability of the physical laws.  If God observes for the most part, and occasionally diverts a bullet or falling tree, and only influences our decisions once in a while, then perhaps the assumptions can be accepted with the caveat that the cross-country trek that your lost puppy took to find you, or winning the lottery just in time to prevent the demolition of the orphanage, might not be explained by physics alone.  Scientists who believe in God are likely to rationalize their faith in this way. 

 
4. Simulating the human brain


If we accept the assumptions about reality presented above, then we come to the conclusion that the human brain is a physical object that obeys physical laws.  A brief physics review will provide the context for arguing that the brain can be simulated completely, if not actually duplicated as a physical object.  

Computers and human brains, among other material objects, are made up of molecules.  A molecule is a collection of atoms bound together.  Atoms are made up of protons, neutrons, and electrons.  The electron is thought to be a fundamental particle, but the proton and neutron are considered composite particles, made up of still smaller particles called quarks.  Quantum Mechanics is an area of physics that describes how these sub-atomic particles behave.  It turns out that quantum mechanics works nicely on larger scale systems, which are collections of small scale objects.  In fact, the mathematical formulae of quantum mechanics reduce to the classical mechanics form, such as Newton's Laws of Motion, for so-called macroscopic objects, like billiard balls and comets.  

One of the most controversial aspects of quantum mechanics is that the theory is not able to make definite predictions.  At the very foundation of quantum mechanics is the concept of a "wave function", which describes the "state" of a particle or system.  Although the wave function itself can never be directly observed, its "amplitude" gives the probability of finding the particle or system in a particular state.  Quantum mechanics can only make predictions in the form of probabilities, rather than certainties.  Not being able to predict a definite outcome of an interaction would seem to render a theory useless to science, but it turns out that quantum mechanics is enormously successful at explaining the experimental data of almost all experiments conducted to date, where the few exceptions are assumed to be due to human error or incomplete application of the theory.  

Unfortunately, the "Uncertainty Principle" and other counter-intuitive aspects of quantum mechanics have captured the imagination of the public, leading to debate and speculation that hurt the credibility of science.  A commonly held belief is that quantum mechanics puts an end to determinism.  Probably (forgive the pun), but not nearly as completely as the public is inclined to think.  It turns out that the behavior of a collection of many particles, like an atom, is somewhat more predictable than that of its parts.  And a small cluster of atoms, in a crystal perhaps, has a group behavior that is substantially more predictable.  The individual particles are still as unpredictable as ever, but the overall state of the group is fairly definite.  As an analogy, consider the average height of humans.  A person selected at random is likely to have a height that is quite different from the average.  However, selecting two people at random and averaging their heights is likely to yield a number that is closer to the global average.  Averaging heights for larger groups of people selected at random will lead to numbers that are likely to converge on the global average.  It takes perhaps only a few thousand people to yield a number which differs from the global average by an amount smaller than a ruler can reliably measure (tiny differences due to irregularities in scalp, posture, or the skin of the foot).  In the same way, groups of atoms can have a group characteristic that is very well defined and certain, despite the tremendous uncertainty and fluctuations that occur with each individual atom in the group.  
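The convergence of sample averages described above is the law of large numbers, and a small simulation makes it concrete.  The population mean and standard deviation below are illustrative assumptions, not measured values:

```python
import random

def average_height(sample_size, trials=1000, mean=170.0, sd=10.0):
    """Average absolute deviation of a sample mean from the
    population mean, estimated over many random samples."""
    deviations = []
    for _ in range(trials):
        sample = [random.gauss(mean, sd) for _ in range(sample_size)]
        deviations.append(abs(sum(sample) / sample_size - mean))
    return sum(deviations) / trials

# Larger samples stray less from the population average;
# the deviation shrinks roughly as 1/sqrt(n).
for n in (1, 10, 100, 1000):
    print(n, round(average_height(n), 2))
```

Running this shows the deviation falling by roughly a factor of three each time the sample grows tenfold, mirroring how group properties of atoms become sharp despite individual fluctuations.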

Computers rely on the immunity of certain properties of groups of atoms to the random fluctuations of the individual atoms which comprise the groups.  Millions of tiny transistors are crammed onto the small silicon chips of a computer's microprocessor.  Each transistor is capable of making transitions between active and passive states hundreds of millions of times every second.  If even one transistor does not make the transition in a completely deterministic way, then the microprocessor has "malfunctioned".  Microprocessors are used in digital watches, microwave ovens, calculators, cellular phones, video cassette recorders, stereo systems, pagers, musical greeting cards, answering machines, automobiles, televisions, and even toys that talk.  A malfunction in the microprocessor of any of these devices is likely to result in an obvious performance problem.  Yet under normal operating conditions these devices can function continuously for years without problems.  If a problem does occur, it can usually be attributed to conditions outside the microprocessor.  Therefore, assuming conservatively that each of these devices executes a million instructions each second, and making the pessimistic assumption that each microprocessor malfunctions once a year due to quantum fluctuations, then each device makes only one non-deterministic calculation for every 31,500,000,000,000 deterministic calculations.  If the microprocessor were any less deterministic then you can bet that there wouldn't be over a billion microprocessor-based devices in use today.  
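The ratio just quoted is easy to reproduce; the instruction rate and the one-failure-per-year figure are the essay's own assumptions:

```python
instructions_per_second = 1_000_000        # conservative rate assumed in the text
seconds_per_year = 365 * 24 * 60 * 60      # 31,536,000 seconds
instructions_per_year = instructions_per_second * seconds_per_year

# Assuming one quantum-induced malfunction per year, each device makes
# one non-deterministic calculation per ~31.5 trillion deterministic ones.
print(f"{instructions_per_year:,}")        # → 31,536,000,000,000
```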

Another apparent threat to the ability to predict how a system will behave involves Chaos Theory, or Complexity Theory, both of which are in vogue today.  Basically, chaos theory is an attempt to predict the probability of a system being in a certain state as time progresses, despite how sensitive the system is to its initial conditions.  For example, we may know a planet's position only to a precision of a few thousand miles, and after a few million years we will not be able to specify its position on its orbit at all; but the exciting thing is to note that we can predict that it will be somewhere on its orbit, which is still an informative result.  An example that is familiar to the public is the "Butterfly in Beijing", in which the tiny wing flapping of a butterfly changes the outcome of the global weather patterns because the physics of air molecules is sensitive to the precise initial configuration of the system, which the butterfly disturbs;  however, regardless of our inability to predict the individual raindrops or gusts of wind that we observe, we can still make reliable generalizations about the overall behavior of the atmosphere.  It may not seem like much, but we can confidently predict average temperatures and precipitation for a given location and day of the year.  We can also predict the behavior of large air masses (hundreds of miles wide) for relatively short spans (days or hours) with some success.  Of course the weatherman is a despised figure in our culture, which is a testament to the crude state of meteorology.  
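Sensitivity to initial conditions is easy to demonstrate numerically.  The logistic map is a standard textbook example (not one the essay mentions); two trajectories starting a millionth apart separate rapidly:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, which is chaotic for r = 4."""
    return r * x * (1.0 - x)

a, b = 0.200000, 0.200001    # nearly identical initial conditions
for step in range(30):
    a, b = logistic(a), logistic(b)
    print(step, abs(a - b))  # the gap grows until the trajectories are unrelated
```

Like the butterfly's wing flap, the millionth-sized difference is soon amplified until the two histories have nothing to do with each other, even though every step is perfectly deterministic.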

Complexity Theory is similar to Chaos Theory in that it highlights the difficulty of predicting the future state of a deterministic system.  In the case of complexity theory it is not so much the sensitivity to initial conditions that renders a system unpredictable, but just how complicated the system is.  Put one person in a room and you have a boring situation, but put twenty people in a room and you have an interesting conversation, a party, or perhaps a brawl, which are phenomena that can only arise in groups of people, not in individuals.  Complexity theory is all about the saying that systems are "greater than the sum of their parts".  Flocks of birds, schools of fish, swarms of insects, colonies of ants, and even computer networks exhibit complex behavior.  One of the most interesting aspects of complexity theory is "self-organizing systems", in which individuals behave according to the needs of a system which arises when the individuals come together as a group.  No individual has a plan for the structure of the group, but the identical design of each of the group members is sufficient to result in the group structure.  

The brain is a collection of billions of nerve cells, neurons, that are highly interconnected in networks of various configurations.  Individual neurons can be connected to thousands of other neurons, with some connections stretching from one end of the brain to the other, while most connections are localized to the neighborhood of the neuron.  The neuron is a complicated living thing.  It has regular inputs and inhibitory inputs, and an output.  Each of its regular and inhibitory inputs has a sensitivity level that can be adjusted over time, and it is this mechanism that is the basis of learning.  When the sum of the input signals, with their sensitivity levels taken into account, exceeds the sum of the inhibitory input signals by a certain amount known as the threshold, the neuron sends out its own signals in the form of pulses.  Factors that complicate this model include the fact that nerve cells require time to recover the chemical potential to "fire" other pulses, and that nerve cells exist in a chemical environment that may inhibit or promote signal transmission, irrespective of the input signals.  Alcohol, sleeping pills, LSD, cocaine, Prozac, Xanax, nitrous oxide, marijuana, and a host of other drugs, herbs, and medications, can drastically affect the way in which our networks of brain cells function.  While all signals in the brain rely on chemicals leaving one place and arriving at receptors at another place, as across the tiny gap between the axons and dendrites of different neurons, not all of these chemical signals are between "connected" neurons.  In some cases the chemical signals are meant to be global, affecting the performance of all neurons of the brain.  Other chemical signals target specific areas of the brain, but still on a scale that involves large masses of neurons that are not directly "connected" to the source of the chemical signals.  
Therefore, drugs that enter and permeate all parts of the brain may only have an effect on small parts of the brain, but may also have a global effect.  Obviously the brain relies on the different chemical signals for normal operation, but drugs or faulty glands can lead to imbalances that cause sleep or sleeplessness, anxiety, loss of short-term memory, hallucinations, seizures, and irreversible modification to the network itself that can result in unusually strong memories, flashbacks, and disorganized thoughts.  
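Ignoring recovery time and the chemical environment, the threshold behavior described above can be sketched in a few lines.  All of the weights, inputs, and the threshold value below are arbitrary illustrations:

```python
def neuron_fires(inputs, weights, inhibitory, inhibitory_weights, threshold):
    """Simplified threshold neuron: fire when the weighted excitation
    exceeds the weighted inhibition by at least the threshold."""
    excitation = sum(x * w for x, w in zip(inputs, weights))
    inhibition = sum(x * w for x, w in zip(inhibitory, inhibitory_weights))
    return excitation - inhibition >= threshold

# Two excitatory inputs and one inhibitory input; values are arbitrary.
# Excitation 0.6 + 0.5 = 1.1, inhibition 0.3, net 0.8 >= threshold 0.7.
print(neuron_fires([1.0, 1.0], [0.6, 0.5], [1.0], [0.3], 0.7))  # → True
```

Adjusting the weights over time, as the essay notes, is what corresponds to learning in this kind of model.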

The question of whether or not the brain can be simulated, if not physically duplicated, is relevant to our discussion.  First, we note that quantum mechanics tells us that each brain cell, and its chemical environment, is subject to random fluctuations.  However, I assert without proof that brain cells are sufficiently large so that the quantum uncertainties associated with each atom making up the cell do not affect the overall behavior of the cell.  Therefore, each brain cell is essentially deterministic, and if a brain cell were isolated and tested billions of times by a computer that was programmed to compare the cell's response to various stimuli with those predicted by a simple model, the correlation would be extremely high.  Complexity theory suggests that the brain as a whole may be too difficult to simulate, despite highly deterministic brain cells.  However, the nature of the interaction between neurons is rather limited:  neurons are either in a passive state or in a state of sending signals, and these signals only reach a few thousand other neurons at most.  Although the brain has billions of cells, and perhaps a thousand times that number of connections between those cells, it seems feasible to construct a computer capable of calculating the behavior of such a system using a particular brain cell model and an initial network configuration that resembles that of a functioning human brain.  Of course some provision will need to be made for the global and more localized chemical signals, and the entire brain simulation will require input from sensory organs, and perhaps even outputs to a simulated body, to make sure that the overall simulation involves "normal" circumstances.  
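A drastically scaled-down version of such a simulation, using a hypothetical three-cell network and a simplified fire-or-rest neuron, shows how deterministic cells yield a deterministic network.  The connection weights and layout are arbitrary:

```python
def step(active, weights, threshold=0.5):
    """One synchronous time step: a cell fires on the next step if the
    summed weight of signals arriving from currently firing cells
    reaches the threshold."""
    incoming = {}
    for src in active:
        for tgt, w in weights[src].items():
            incoming[tgt] = incoming.get(tgt, 0.0) + w
    return {cell for cell, total in incoming.items() if total >= threshold}

# Three cells wired in a loop: 0 -> 1 -> 2 -> 0.
weights = {0: {1: 0.8}, 1: {2: 0.9}, 2: {0: 0.6}}
state = {0}
for t in range(4):
    print(t, sorted(state))    # the pulse circulates deterministically
    state = step(state, weights)
```

Run it twice from the same starting state and the trace is identical: determinism of the parts gives determinism of the whole, which is the point at issue.  A brain-scale version differs in the number of cells and connections, not in kind.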

These arguments support the possibility that the human brain is a substantially deterministic system, and can therefore be simulated by another deterministic system, like a computer. 

 
5. Consciousness


If the activity of the human brain can be simulated by a computer, then it would seem possible to construct a robot which resembles a human so completely that only a physical exam would reveal a difference.  Even if an identical, organic copy of a human were made, and lived its life among us, we might be inclined to regard it as sub-human, artificial, and expendable.  There is an aura, a quality, to living things, which is hard to dismiss.  Our culture attaches a significance to life, and the thought of creating or duplicating life through technology can worry us because it emphasizes the fact that living things are just particular patterns of chemical activity, and this puts humans on the same level as rusting metal, atmospheric ozone, and crystal formation.  Some might argue that humans, and perhaps several types of animals, have a characteristic that can not be duplicated or simulated by machines, or organic copies: consciousness.  Whether or not this is a meaningful concept is worth discussing.  

What does it mean to be aware of one's existence, thoughts, and feelings?  Humans are convinced that they are conscious.  It seems impossible to escape the feeling that one exists as an individual, with a continuity that is only mildly modulated by periods of sleep.  Plants and rocks, the sentiment tells us, are not capable of having this sensation.  

Just how aware can someone be of their existence?  

While the feeling of consciousness may be reinforced by the very basic belief that the senses are telling us about a world outside of our bodies, consciousness does not rely on the senses.  If one's arms and legs were tranquilized, so that he could not sense their configuration, their warmth, or even that they belonged to his body, would that person be less conscious?  What if one entered a sensory deprivation tank, in lieu of tranquilizing nerves leading to the brain, so that he could not see, hear, smell, taste, or feel anything?  If a person was born with no functioning senses, and continued into adulthood never having sensed his body or the outside world, then consciousness would still seem possible in principle, even though this person would never realize that a physical world existed and that he was not the only entity in the universe.  A person could also be deceived from birth, via special machines that stimulated the eyes, ears, tongue, and skin, that he or she was a dolphin, a cat, or even a blue hexagon, whatever life such a polygon might lead.  When we ponder these situations we are inclined to conclude that consciousness has nothing to do with the senses, or an "accurate" knowledge of one's own body and the physical world.  

Being aware of one's existence must be a feeling that originates from the brain itself.  So having the thought that other thoughts are occurring in the mind is the specific thought that seems to be the key to self-awareness.  Most other thoughts are in a different category, on a lower level: sensory "thoughts", memories, or a combination of the two that has been processed in a manner that might be called "logical".  

How is it possible to notice a thought?  What does it mean?  How can we distinguish between noticing our own thinking, and just thinking the thought "noticing our own thinking"?  Although the idea of stepping outside one's own thinking process to gain a larger perspective is romantic, and certainly makes defining consciousness easier, it is paradoxical.  The observer can not move outside himself or itself, at least in the physical world.  Even in the abstract world of thoughts and ideas it would seem impossible for an observer, which we may regard as a collection of thoughts and as a potential for more thoughts, to study its own thoughts.  However, there is no difficulty when an observer splits himself into various parts and has one part observe another part.  

An analogy in the physical world may clarify the paradox.  Suppose that engineers wanted to make a camera aware of itself.  They might first design lenses that bent light from the camera's surface into the camera body, so that an image of the surface would be captured on the film.  But then they add mirrors and lenses to the interior to attempt to capture an image of the film itself.  But now more lenses and mirrors must be added to allow images of the previously added mirrors and lenses to be captured on the film.  Obviously each additional set of mirrors and lenses permits more information about the camera to be recorded on the film, but the film is not "aware" of the additional lenses and mirrors in the camera that made the introspection possible.  So, although the process can be repeated indefinitely, each step will result in successfully capturing the old version of the camera, at the expense of making the camera twice as complicated!  

Obviously the mind is capable of the thoughts: "thinking of thinking", "thinking of 'thinking of thinking'", and even "thinking of 'thinking of "thinking of thinking"'".  However, perhaps in some sense these thoughts are never fully perceived or appreciated.  We can picture a man with a fist supporting his chin, ignoring the outside world, and then declare that we are "thinking about thinking".  Perhaps we can even convince ourselves that we can "think about thinking" without the imagery.  But thinking of thinking of thinking seems incomprehensible, even if we can understand that there is a chain of thinking going on.  We also understand the concepts of infinity, eternity, and the fourth dimension, but it would be pretentious to claim to understand these concepts beyond formal definitions.  In the same way, the mind can not fully form the thought of thinking.  The mental camera will never capture the layer of complexity required to make introspection possible on its film of perception.  However, like the camera, the mind can nonetheless get a very subtle image of itself, even if far from being complete.  It is enough to give us consciousness.  

Maybe it strains the camera analogy of introspection to the breaking point, but it is interesting to consider the possibility that consciousness is not merely an all or nothing characteristic, but a quantity on a continuous scale.  Two humans, both fully awake, might have a different consciousness rating, just as two cameras might have a different number of mirrors and lenses to capture their interior on film.  

We have been assuming that only the physical world exists, and thus the human brain is a physical object.  Furthermore, thoughts can only exist as physical states of the brain.  Clearly, then, an artificial brain can be constructed.  We can imagine designing it so that it has introspective ability, recognizing the limited nature of our own introspective ability.  A giant computer could check on its own data processing.  In the same way, computers today also check on their internal states.  If we accept the variable consciousness idea, then all computer chips are conscious, but each to a different degree.  The global applicability of the term "consciousness" does not render it a useless term any more than the term "length" or "mass" is made useless by the fact that it applies to any object in the Universe. 

 
6. Functional equivalence


"If it walks like a duck, quacks like a duck, and flies like a duck, etc, then it must be a duck!"  No doubt this well-known saying is a call to the more speculative folk to be more practical, and to resist the temptation to question everything and to assume nothing.  

Even if the arguments concerning consciousness and the physical nature of the brain are not convincing, we might be willing to accept the possibility that robots will one day imitate humans so well that nobody can make a distinction.  These robots are functionally equivalent to humans.  

Obviously functional equivalence can be regarded as a continuum, with partial equivalence situations.  For example, computers today can read handwriting, speak, recognize faces, play games, solve equations, and control motorized limbs to pick up objects or play ping-pong;  these mimic things that humans can do, but no single machine can do them all, and we can do much more.  Thus, computers today can be regarded as partially functionally equivalent...very partially. 

 
7. What is human?


Perhaps the strongest motivation in our lives comes from our sense of identity.  Each of us has a name, a personal history, and most of us know members of our biological families.  Only in the past few centuries has it been possible for very large numbers of people to travel, communicate, and mate with people in other lands;  thus we live in a world with different races, in which, for better or for worse, most of us can be categorized.  We are born in a particular country, whose citizens speak a certain language, in an age with its own ideas and culture, and frequently to parents who raise us with their principles and whatever lifestyle their financial situation permits.  In our adult lives we have careers and reputations, established after years of decisions and chance events.  But the most personal aspect of our identity is, by definition, personality; and people tend to remember and judge each other according to this trait.  However, none of these characteristics help us to find an identity for our species.  The fact that we have different names, histories, families, races, countries, languages, cultures, principles, finances, lifestyles, careers, reputations, or personalities, shows that these attributes do little or nothing to describe what being human is all about.  Furthermore, we might imagine raising a newborn baby in an isolated environment where those numerous distinctions do not exist, emphasizing how easy it is to detach humans from a largely arbitrary, invented, meaningless legacy that thousands of years of people have established.  The story of Tarzan, a baby lost in the jungle and raised by friendly animals to be a man who has the ability to speak with tigers, elephants, and even birds, intrigues us because it is so difficult for us to imagine life without the influence of civilization.  
No doubt there are many people who believe that part of the human identity necessarily includes our history and our culture;  however, as interesting as our past may be, it can be forgotten, and evidence destroyed, and it is strictly a matter of opinion of whether or not this would be a tragedy or an emancipation. 

If the identity of our species has nothing to do with our past, and nothing to do with information that we have learned, then perhaps we can consider the design of the human body as our defining characteristic.  Of course the differences between males and females would have to be a fundamental part of such a definition.  Also, the differences between races, and variations in height and weight, require that the physical definition of human be somewhat flexible.  Defining human as a particular arrangement of bones, muscles, organs, and brain cells, would seem satisfactory, but technology is forcing us to reconsider even this conservative definition.  

Prosthetics, artificial body parts, are nothing new;  ancient civilizations had their share of fake teeth and wooden legs.  However, today we are able to do much more than replace a missing body part by an imitation that only simulates the appearance of the original;  we can make a replacement that functions like the original.  

If one loses an arm, then it is possible to replace it with an artificial arm with a rubber skin that can sense both physical contact and warmth.  We can even attach sensors to the stump of the person's arm that detect nerve signals, and thus control motors in the artificial arm to permit extending and grabbing motions.  It is safe to assume that one day the design of the artificial arm will reach the point where even a new user will not experience diminished mobility, sensation, or performance.  Indeed, people may even elect to replace their organic limbs with artificial counterparts.  

It doesn't end there.  Doctors can surgically implant an artificial cochlea that restores hearing to some deaf people.  Medical researchers have also been able to restore limited sight to blind people, temporarily, and in laboratory conditions, through the use of video cameras and direct electrical connection to the brain.  The artificial heart has been used in numerous cases with some success.  Although not very portable, kidney dialysis machines can prolong life while patients await kidney transplants.  Researchers have also developed synthetic blood capable of replacing human blood, although the lack of white blood cells leaves a person vulnerable to infections.  Machines have been developed that can "smell", build arbitrary DNA molecules, cause paralyzed legs to walk, control heartbeat, and regulate blood-sugar levels with a computerized implant that releases insulin.  All of these advances suggest that there will come a time when every part of the human body can be replaced.  

Most people would not question the identity of a person who has had an arm or a leg replaced, or even all limbs, both eyes and ears, nose, heart, blood, lungs, kidneys, and bones -- but our society regards the brain as the core of our being.  Where would we draw the line, however, if it became possible to replace some or all of the brain, in such a way that the personality and knowledge of a person could be preserved with no discontinuity?  Just as we can replace sensory organs with electronics, or a limb with a motorized one, how would we regard a person who adds one or more capabilities to his or her body that were not part of the original human design, like a third arm, or a computer that preprocessed sights and sounds such that foreign languages were translated before even reaching the person's brain?  It is worth pointing out that there isn't much difference between, for example, wearing an electronic hearing aid and having one surgically implanted;  in the same way, any other augmentation to the human body could be considered external, regardless of how deeply hidden within the body such devices are placed.  

One way to salvage our physical definition of a human, after considering the numerous ways in which we can alter and even enhance our bodies, is to insist that any deviation from our original "flesh and blood" physical definition means that an individual is less human.  Any device added to the body would simply be a device in the body, a foreign object.  

Suppose, however, that some day it becomes quite normal for an average person to have a full-body prosthesis operation at an early age.  With all of the cosmetic surgery that goes on today, the number of hearing aids and Prozac patients, and a public that is willing to accept increasingly experimental methods to prolong and improve the quality of life, it seems certain that people will elect to trade in their human bodies for sporty, durable, powerful, replaceable, mechanical frames.  Instead of doctors we will have body mechanics.  Exercise will be unnecessary, and consumers can shop around for faster, stronger, more efficient arms and legs with better warranties.  

Science and technology will continue to provide us with new opportunities and motives for replacing, and some might say escaping, our original human design.  There is no doubt, given current trends, that our species has no particular loyalty to the human body, or to the life such a body seems destined to lead in nature (disease, handicap, aging).  Given these facts, we have to wonder what is happening to humanity.  

For the sake of argument, let us briefly entertain the hypothesis that humanity can continue even after the last natural birth, or the last organic brain is grown, despite the implications of the arguments above.  What characteristic might establish the continuity of our species?  Or is the gap too wide a gulf, like the difference between Homo erectus and Homo sapiens?  Could we take an organic human of today, and the highly mechanized person of tomorrow, and find a fundamental similarity that might be called human?  Obviously the physical bodies can not be compared favorably.  Mechanized people can customize their bodies to suit their needs and personalities, much as people today modify their automobiles to perform in certain ways and to impress other people with their unique appearance.  So differences between mechanized people are likely to increase to a point where no generalization about these people will be satisfactory.  Therefore, the term "human" would become meaningless if we attempted to expand it to include the mechanical beings we will eventually give rise to.  

In conclusion, it appears that our species is headed for voluntary extinction, if we rule out global disasters, natural or technological.  Although genetic engineering will no doubt result in many changes in human life in the next century, people will clearly want to migrate to mechanical bodies as soon as possible because they are less fragile than organic bodies, and machines can be modified far more quickly than a living organism can.  Virtual Reality, which promises a new kind of existence provided by powerful computers, is also likely to play a significant role in human life over the next few decades.  However, as attractive as virtual reality may become, allowing us to live out dreams that physical reality has denied us, we will still be trapped in human bodies unless we take action.  The lure of virtual reality may be irresistible, and humans may decide to dispense with their physical bodies, and choose to confine their brains in small rooms with life-support systems.  In any case, there will come a day when humans will no longer be around, and in our place there will be machines. 

 
8. Setting the stage for robot conquest of the human race

8.1 We are ignorant

Very few people fully understand the internal mechanisms of the machines they use.  Even the technically inclined usually view a machine from a certain abstract level, or layer, that involves simplifying assumptions about the fundamental laws governing the machine's operation.  

The computer is an excellent example of a machine that is too complicated for any individual human to comprehend.  Let us descend the hierarchy of abstraction, starting from the top.  We begin with the computer user, who employs "software" to write papers, draw pictures, and play games.  Next we meet the computer programmer, who has a model of computer operation called the "system", which he or she uses to create software.  The system is a symbolic representation of the complicated electronic circuitry that makes up the physical computer "hardware", which professionals in digital design understand.  Each microchip found in the hardware is made up of thousands or millions of tiny transistors on small silicon wafers, and such chips are designed by experts in Very Large Scale Integration (VLSI).  Each transistor in a chip operates according to the physics of semiconductors, which condensed-matter researchers study.  Finally, we arrive at the very bottom of things, and the only people around are quantum field theorists, and they pause between drinks of coffee, surrounded by cigarette smoke, to scratch their facial hair and complain about something called "renormalization".  To illustrate the peculiar nature of the compartmentalized, hierarchical models of the computer, consider the fact that the quantum field theorist is also likely to be a computer end-user.  Thus, it seems that it is possible to "know" a computer from many different perspectives, each being a self-contained paradigm, but nonetheless incomplete or hopelessly impractical when it comes to explaining all computer behavior.  You don't want to work out trillions of quantum field theory equations to figure out how to get the computer to change the name of your word processing document;  instead you think entirely within the paradigm of the word processor software.  
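The layered paradigms described above can be caricatured in a few lines of Python.  This is only an illustrative sketch -- all class names are invented for the example, and each layer is absurdly simplified -- but it shows the essential point: each level exposes a small vocabulary and hides the level beneath it, so the "user" at the top never touches a transistor.

```python
class Transistor:
    """Bottom layer: a switch whose behavior is ultimately semiconductor physics."""
    def switch(self, on: bool) -> int:
        return 1 if on else 0

class LogicGate:
    """Digital-design layer: logic built out of transistors."""
    def __init__(self):
        self.transistor = Transistor()
    def nand(self, a: int, b: int) -> int:
        # NAND is 0 only when both inputs are 1.
        return self.transistor.switch(not (a and b))

class System:
    """Programmer's layer: operations built out of gates (NOT via NAND)."""
    def __init__(self):
        self.gate = LogicGate()
    def invert_bit(self, bit: int) -> int:
        return self.gate.nand(bit, bit)

class WordProcessor:
    """End-user layer: renaming a document, with no visible physics at all."""
    def __init__(self, name: str):
        self.name = name
        self.system = System()   # present, but irrelevant to the user's paradigm
    def rename(self, new_name: str) -> str:
        self.name = new_name
        return self.name
```

Renaming a document with `WordProcessor("draft").rename("thesis")` involves no reasoning about gates or transistors, even though the lower layers are what make it possible -- which is precisely the point of the paragraph above.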

Basically, our inability, as individual humans, to comprehend the method of operation and capabilities of machines, means that even collectively we may never fully appreciate the significance of each advance in robotics.  Even individual humans can get themselves into situations that they themselves can not get out of, like walking until getting lost, or swimming until becoming tired and drowning.  Think how much worse these situations can get if whole industries work together.  Just as a government sometimes causes many individual humans personal suffering, like being forced to fight in a war, or enduring hardships due to the country's financial depression, it is possible for intellectual cooperation between humans to lead to a technology which causes us as individual humans to suffer.  The key is that we do not understand complicated systems, like domestic economics, global politics, or robot brains, well enough to avoid situations that can be catastrophic.  Furthermore, once the disaster strikes, individual humans are not equipped to make much sense of the situation;  instead each is carried along with the powerful tides of society.  A hermit can avoid situations, but some situations can spill out and affect even neutral, isolated people, as in the case of the total annihilation of life on our planet that would result if a significant number of atomic bombs and missiles were used in a nuclear war;  radioactive fallout and climate changes would quickly eliminate any survivors of the initial bomb blasts.  

If robots become advanced enough to become self-sustaining, even as a "species" (i.e., they can not only build copies of themselves using parts, but can collect all energy and materials to fabricate those parts, without any human intervention), then it would take a collective effort on the part of humans to stop their progress.  It is not inconceivable that such robots will advance to the point where we can not even comprehend their logic, or their agenda.  Just as a clever human can outwit or manipulate another, less intelligent human, extremely complicated machines might be able to discover a sequence of statements that allows them to lead our minds to any arbitrary mental destination.  A clever robot could construct an argument, tailored for the specific human listener, designed to make the human believe that he or she was born to serve robots as a slave, or maybe that suicide was necessary to make the world a better place for robots. 

8.2 We are becoming dependent

A substantial fraction of the global population works with machines on a daily basis.  Whether it is an automobile, digital watch, personal computer, telephone, or even a soda vending machine, we are interacting with automata.  In as much as we use these machines, and in a sense cooperate with them, we have given up some of our direct control over our lives.  

Two significant examples of our loss of control of machines are: computer processing of financial transactions, and police records stored on computer systems.  If a computer error causes an inaccurate bill or debit on our account, then we will certainly face an enormous amount of resistance when we argue with the bill collectors or bank tellers.  With the high volume of financial transactions taking place every day, it is quite easy for any evidence of an error to be lost in the system.  Credit cards, phone cards, Automated Teller Machine (ATM) cards, on-line banking and stock trading, and electronic fund transfers are all in widespread use today.  Imagine the havoc that would ensue following a computer network collapse or a breach in network security.  Needless to say, any errors in the computer-based police, FBI, or CIA records, could lead to significant grief for any victims involved.  Whether earned by a verified history of accurate performance, or forced upon us by overwhelming numbers of willing followers, trust in computers is almost necessary to fully participate in our society.  Ultimately, our trust really rests on other human beings;  we trust that most of the people responsible for establishing and maintaining the computer records are honest, and that any errors or tampering will be detected and corrected.  However, if problems develop that can not be addressed through any people currently in place to manage the system, then it may be impossible for an individual to fight the system.  
For example, it is a common experience today to have a problem with a product or service and not be able to fix the problem simply because there doesn't appear to be any particular contact point in the organization that can comprehend the problem;  you call the bill collector and they can only say that it appears that you owe the money;  you call the technical support hotline, and you listen to half an hour of telephone music until you hang up in frustration;  you call some sort of manager, and he tells you that he really has nothing to do with your problem, or maybe defends the product, and refers you to other people whom you have already talked to with no satisfactory results.  There is nowhere to turn, and nobody appears to be at fault;  the system has a problem, but it would take enormous public influence to change the situation.  So, if a trusted computer says this or that, then we are at its mercy; it is defended by the people who surround it.  

Even though human beings walked into this situation with their eyes open, it is not clear that any of us really sees the trend.  We voluntarily submit to the organizations we create, like governments and computer banking.  We trust that these man-made structures have our sense of justice and compassion somehow embedded in their architecture.  But civil wars, rioting, economic depression, and mass starvation have resulted from governments malfunctioning.  Computer stock trading has led to major market collapse on more than one occasion, and reliance on little-understood financial instruments known as "derivatives" played a role in Orange County's $1 billion loss and bankruptcy several years ago.  

The point is that our dependency on machines, which for now is limited because we can always drop everything and move to a rural farm, can increase if we continue to install them into our social power structure.  If we go too far, it is possible that a machine glitch, or machine self-interest, will result in disaster for us.  We may put computers in charge of certain things, thinking that we can always adjust them when things go wrong, but if machines have more than a critical level of control, then we expose ourselves to the possibility of sudden, irreversible calamity. 

8.3 Ultimate control

Since we can not change the laws of physics, we are not ultimately in control of anything.  In a deterministic world there would be nothing more to say.  However, let us explore the idea of management, at the risk of discussing a concept that may not have any real meaning.  

One party can manage another party, imposing restrictions and strongly encouraging certain behaviors.  While internal mechanisms may lead a party to act in ways beyond the ability of a manager to change in the short term, presumably the manager earned its status by demonstrating its ability to enforce its rules and guide the managed party.  For example, a driver can reasonably be called the manager of an automobile, directing it to turn and accelerate, even though there may be occasional disobedience due to engine trouble.  The driver can overcome even engine trouble by preparing for it in advance.  

Machines such as traffic signals and clocks are given some management power.  We obey them, but it's really because we wish to cooperate with other humans, not the machines themselves.  It's strictly voluntary, because the machines can not enforce their restrictions on our behavior.  Indeed, the flashing lights and numbers have no intrinsic meaning, and so these machines depend upon our culture to even communicate their suggestions to us.  

Today we are resourceful enough to stop any machine from functioning; we can "pull the plug", so to speak.  Even if a machine is powerful, like a giant robot on the assembly line of an automobile manufacturing plant, we have designed the robot to stop whenever a person pushes a large red panic button, for example.  

Suppose someone were to build a giant robot that carried nuclear missiles, other weapons, and all kinds of communication hardware.  This giant robot could destroy numerous cities from a distance.  It could monitor the media and its surroundings to sense any threatening human activity.  Finally, it could scoop up a few human slaves to do its thinking, telling each human on board that it will torture or kill them if they do not cooperate.  Such a robot would exploit our desire to survive to gain control of us.  It could plan ways of strengthening its grip on humanity, until it eventually finds some way of surviving without us.  

Society can avoid losing control to robots; indeed, robots only exist because we created them, and any control they have is control we willingly gave away.  However, once one party gives another party an amount of control that exceeds a certain threshold, then the overall control shifts to the other party.  Sure, we have given machines "control" over our banking and traffic signals, but we can take this control away; we are ultimately in control.  The difficulty for our species is in determining how much control we can afford to give to robots without jeopardizing our ultimate control, while maximizing our convenience, pleasure, and quality of life.  

Coexistence of robots and humans is possible, but robots will not stop progressing, and our biological brains will not significantly evolve from generation to generation without genetic engineering intervention.  So, robots could far surpass us in intelligence, and whether or not they coexisted with us would strictly be up to them.  We choose to coexist with many animals, but consider all animals expendable and sometimes even edible, and you can bet that the intelligence gap between humans and animals helps us rationalize our treatment of them.  Similarly, imagine robot minds having quadruple our intelligence.  These robots might collect humans as trainable pets, or perform experiments on us.  Such robots could never see any value in our limited reasoning ability;  our behaviors, although relatively complex, would be transparent to them -- mere reflexes of sorts. 

 
9. Conclusion


Humanity is heading for philosophical crises that stagger the imagination, all due to technology.  The abortion pill raised questions about the rights of a mother and her fetus.  Cloning and genetic engineering force us to address the issue of our right to customize our offspring, or discrimination against people on the basis of, say, the presence of an alcoholism gene.  Computers and the Internet have both necessitated new ways of thinking about society and the law; information became a commodity, and hackers needed to be punished.  Perhaps we could not have anticipated the abortion pill, cloning, or the Internet, and so it was inevitable that we would have to deal with the issues as they emerged.  However, there is no question that artificial intelligence, artificial neural networks, and robotics in general, are rapidly progressing fields that have had remarkable success so far in imitating various aspects of the human mind and body.  And this, extrapolated to the creation of a complete artificial man, is clearly the biggest, most personal, philosophical problem that we have ever pushed ourselves into.  Sure, just being alive makes us contemplate death, and just existing and thinking compels us to wonder about reality, and these rank higher than any other conceivable philosophical issue we could possibly encounter.  But we presumably had no control over becoming alive, or existing; whereas we can avoid dilemmas associated with technology.  However, in the case of machines there seems to be no stopping us; we are attracted to the idea of creating robots "in our own image".  

Whether or not we consider robots to be the next form of mankind, despite the rather odd, non-biological connection to our race, originating in our minds and not our bodies, we will have to reckon with them as an independent species at some point.  Someday a robot will not simply represent any human or groups of humans, operating on another's behalf, but will be responsible for its actions.  If such a robot committed a crime, then it, and not any master, would receive the punishment.  As "pure" humans find themselves unable to compete against prosthetically enhanced humans, or perhaps robots, our species will vanish.  It could happen even more suddenly if a powerful robot were to go out of control and decide to wipe us out, just as a determined hunter could locate and kill all tigers or pandas.  

Of one thing you can be sure: the future of robots and humans is going to be an increasingly popular topic of discussion in the years to come.  During the very month this essay was written, the world's greatest human chess player, Garry Kasparov, lost a match to a powerful IBM computer named "Deep Blue".  Journalists quipped that computers still couldn't drink beer, fix the perfect martini, surf, or comprehend the words "I love you".  Many people today panic when a computer beeps, or when the photocopier stops working.  Fear of technology is appropriate; our very survival depends on how well we understand our environment, and robots surround us and are getting smarter! 
