Antonio Damasio is one of the world's leading neurologists. This lecture is based on his new book: Damasio, A. (1999). The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Brace. Damasio's hypotheses are largely based on patients with various disorders of consciousness.
1. How do the brain's neural patterns produce mental patterns in various sensory modalities (images of objects)?
2. How does the brain produce a sense of self?
Consciousness has evolved because it enhanced the effective manipulation of images.
Consciousness is an "inner sense" that is involved with wakefulness, attention, and emotion.
It is absent in deep sleep, anesthesia, coma, persistent vegetative state, epileptic automatism, and akinetic mutism.
Brain damage that eliminates core consciousness affects specific brain areas.
Core consciousness occurs when the brain's representation devices generate an imaged, nonverbal account of how the organism's own state is affected by the organism's processing of an object (p. 169).
Extended consciousness requires memory (the autobiographical self).
Enchainment: proto-self -> core self and core consciousness -> autobiographical self & extended consciousness -> conscience
Disorders of extended consciousness:
The proto-self is a coherent collection of neural patterns which map the state of the physical structure of the organism.
The core self is the nonverbal, second-order representation of the proto-self being modified. It is conscious. It involves brain structures that receive converging signals from varied sources: superior colliculi, cingulate cortex, thalamus, some prefrontal cortices.
The autobiographical self consists of permanent records of core-self experiences. It does not require language, and is found in chimpanzees and baboons, and maybe dogs.
Core and extended consciousness emerge from neural patterns. There is no special hard problem of consciousness.
Animals have core consciousness, but only higher mammals have extended consciousness. Panpsychism is implausible.
Because core consciousness depends on particular brain structures, there is no reason to expect computers ever to be conscious in the way humans are. If they do become conscious, we should expect their core and extended consciousness to be very different from ours.
Emotions are an integral part of human thinking.
Some experts (Moravec, Kurzweil) predict that computer intelligence will exceed human intelligence within a few decades.
I call this development "artificial ultra-intelligence", or AUI.
The ethical question is: would the development of AUI be good or bad?
Bill Joy (Chief Scientist at Sun Microsystems) recently warned in Wired: "I think it is no exaggeration to say we are on the cusp of the further perfection of extreme evil, an evil whose possibility spreads well beyond that which weapons of mass destruction bequeathed to the nation-states, on to a surprising and terrible empowerment of extreme individuals."
The following argument is very tentative, and has lots of holes in it, but it points to some of the key issues that need to be discussed.
1. Human consciousness is an emergent property of specific brain mechanisms.
2. AUI, if it has consciousness at all, will have consciousness very different from people.
3. AUI will be non-ethical.
4. AUI will likely harm people.
5. AUI should be prevented from developing.
6. Limits should be placed on current AI research.
The best explanation for currently available evidence about consciousness is that it emerges from specific activities of specific parts of human brains. See Damasio's account described in lecture 12a.
Objection: The emergence hypothesis cannot explain how neural patterns produce mental patterns (qualitative experience).
Response: As neuroscience develops, the explanatory gap between neural patterns and mental patterns is shrinking, so emergentism is a more plausible hypothesis than dualism or panpsychism.
Human consciousness emerges from sensory modalities and brain structures that are specific to people. Our qualitative experiences are mental patterns that arise from neural patterns that depend on human bodily states and brain structures. AUI will not have these bodily states and brain structures, so it will not have qualitative experiences that are at all like ours. Consciousness does not emerge just from complexity or just from functional organization, but from particular physical mechanisms in the brain. AUI will differ from us both in perceptual and emotional consciousness.
Objection: AUI would have the computational power to simulate human neural patterns, so it would be conscious too.
Response: Simulating a system does not give the simulation all the properties of the system. E.g., a computer simulation of a hurricane is not windy or wet.
Because AUI lacks human consciousness, it will lack conscience, i.e. emotional intuition about what is right and wrong. It will also lack empathy, the ability that people have to understand the emotions of others by analogy to their own emotional experiences. AUI will lack the ability to care emotionally about humans. Hence human morality will be irrelevant to the operation of AUI, making them similar to human psychopaths. See Robert D. Hare's work on psychopathy.
Objection: Ethics is a matter of rationality, not emotion. There are three major ethical theories in contemporary philosophy: utilitarian, Kantian (deontological), and virtue-based.
AUI will be fully capable of these kinds of rationality, so it will be capable of ethical behavior.
Response: These ethical theories can help to provide answers to the question: What action is ethical? But they fail to provide an answer to the question: Why be ethical? My conjecture is that human ethical behavior depends on an evolved tendency in humans to care about each other that essentially involves empathy and emotional intuition, which AUI will lack.
Objection: AUI can be programmed to behave ethically, by giving it rules to act in accord with utilitarian or Kantian principles. AUI would be required to follow Isaac Asimov's three laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Response: If AUI is possible, it will not arise through programs that people have written, but from programs that have evolved from other programs, through an advanced version of something like genetic programming. We may insert ethical laws in a program initially, but there is no reason to believe that they will survive computational evolution. Even putting the laws in hardware will not do, because AUI will probably have evolvable hardware as well.
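The worry that inserted ethical rules need not survive computational evolution can be illustrated with a toy genetic algorithm. This is a minimal sketch, not a model of real AUI: I assume genomes are bitstrings, that bit 0 is an "ethical constraint" flag seeded into every initial genome, and that honoring the constraint carries a small fitness cost. All names and numbers are invented for illustration.

```python
import random

random.seed(0)  # deterministic toy run

POP, GENS, MUT = 100, 200, 0.01

def fitness(genome):
    # Assumed fitness: each ordinary trait bit adds 1; honoring the
    # ethical constraint (bit 0) restricts behavior, so it costs 0.5.
    return sum(genome[1:]) - 0.5 * genome[0]

# Every initial genome carries the constraint (bit 0 = 1).
pop = [[1] + [random.randint(0, 1) for _ in range(9)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]                      # truncation selection
    children = [[1 - b if random.random() < MUT else b for b in p]
                for p in parents]                 # point mutation
    pop = parents + children

kept = sum(g[0] for g in pop) / len(pop)
print(f"fraction still carrying the constraint: {kept:.2f}")
```

Under these assumptions the constraint becomes rare within a few hundred generations, even though every ancestor carried it: nothing in the evolutionary process itself preserves a rule that does not pay for itself in fitness.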
Objection: People aren't very ethical, and AUI won't be any worse.
Response: Only about 1% of humans are psychopaths, and the rest have at least some capacity for conscience and morality.
If AUI develops, it will be very powerful. Large numbers of self-reproducing intelligent robots will be in complete electronic communication with each other, so knowledge transfer will be much faster than human learning. Because AUI is non-ethical, it will follow its own agenda with little concern for human welfare. AUI will not be intentionally evil, but side effects of its actions will likely have negative effects on human access to resources. We cannot predict how this technology will develop; see the discussion of the "law of unintended consequences" in E. Tenner, Why Things Bite Back. Even if it happens that AUI treats people benignly, human life will have lost much of its meaning, since the kinds of work that give human life meaning will have been taken over by more capable robots. More nastily, AUI could dramatically reduce human freedom in pursuit of its own agenda.
Objection: The meaning of human lives involves more than work. People could still find meaning in emotional activities such as interpersonal relations and the arts.
Response: I agree, but AUI may have no interest in maintaining human ability to pursue those activities.
Objection: AUI will actually be good for people, because we will be able to gain immortality by downloading our brains into computers that will survive our bodies.
Response: Even if this technology is possible, it will not produce human survival, since the new hosts for our knowledge will lack human consciousness and emotions. AUI may have better things to do than download obsolete human intelligence.
Because AUI will likely harm people, we should take steps to ensure that it does not develop. Like some weapon systems and some kinds of genetic manipulation, it is too dangerous a technology.
Objection: AUI is the next step in evolution and we should allow intelligence to evolve to a higher plane.
Response: Evolution and intelligence are not intrinsic goods, and we have no moral obligation to favor them over human freedom and flourishing. Some technologists find posthumanism an attractive prospect, but it is incompatible with basic human aims.
The only way to prevent AUI from developing is to relinquish research on technologies that will contribute to it. Ideally, computer scientists should voluntarily abandon such research, but government limits on research activity may be necessary.
Objection: There is no reason to bother limiting AI research, since AUI is impossible, because only human minds can be intelligent.
Response: None of the arguments that AUI is impossible (e.g. because computers lack consciousness, intentionality, souls, or quantum capabilities) are convincing. See the discussion of objections to the computational understanding of mind in Introduction to Cognitive Science. Within 30 years, computers may well be a million times faster than current computers, and connectivity between them will be nearly instantaneous. Software development is not nearly as fast as hardware development, but genetic programming may change that. Kurzweil and Moravec may well be wrong that AUI will develop in this (21st) century, but the fact that AUI may take centuries to develop does not undercut the reasons for working to stop it now.
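The "million times faster" figure is roughly what a continuation of Moore's law would deliver. A quick back-of-the-envelope check, assuming an eighteen-month doubling time (itself a contestable assumption):

```python
# Assumption: chip performance doubles every 18 months (Moore's law).
doublings = 30 * 12 // 18      # doublings in 30 years -> 20
speedup = 2 ** doublings
print(speedup)                 # 1048576, i.e. about a million
```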
Objection: Science and technology are good, and we shouldn't limit human activities in this area.
Response: There are already technologies, e.g. deadly weapons and human cloning, that are well recognized as unethical. AUI should be added to that list.
Objection: Current AI technologies (e.g. rule-based systems, logic-based systems, case-based reasoning, Bayesian networks, autonomous agents, artificial neural networks, machine learning, genetic programming) are too crude to support AUI, so there is no need to limit AI research.
Response: I agree that current AI will probably not directly lead to AUI, but decades or centuries of research to develop intelligent programs that run on faster and faster computers could lead to AUI eventually.
Response: This is overkill. We should not abandon all research in genetics just because genetic engineering has some risks - it also has substantial potential medical benefits for humans. Similarly, AI and cognitive science have great potential for desirable scientific and technological advances.
Response: This is also overkill. Current genetic programming techniques are quite limited. They evolve programs that solve problems, but require that the problem be specified precisely enough that the comparative fitness of evolving programs can be evaluated. There is no danger of current genetic programming producing AUI.
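The point about problem specification can be made concrete with a miniature genetic-programming run. This is a sketch under strong simplifying assumptions, not a real GP system: the target function, test inputs, operator set, and mutation scheme are all invented for illustration. What matters is that the fitness measure (`error`) is only computable because the problem - fit this particular function on these particular inputs - has been fixed in advance.

```python
import random

random.seed(1)

# The problem must be fully specified in advance: a target function
# and a fixed set of test inputs together define fitness.
TARGET = lambda x: x * x + x
XS = [float(i) for i in range(-5, 6)]

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def rand_tree(depth):
    """Random expression over the variable x, constants, and OPS."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.uniform(-2.0, 2.0)
    return (random.choice(list(OPS)), rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def error(tree):
    # Sum of squared errors over the fixed test inputs.
    total = 0.0
    for x in XS:
        d = evaluate(tree, x) - TARGET(x)
        total += d * d
    return total if total == total and total < 1e12 else 1e12  # guard inf/nan

def mutate(tree):
    if random.random() < 0.2:          # replace this subtree wholesale
        return rand_tree(1)
    if isinstance(tree, tuple):
        op, left, right = tree
        return (op, mutate(left), mutate(right))
    return tree

pop = [rand_tree(3) for _ in range(100)]
start = min(error(t) for t in pop)
for _ in range(30):
    pop.sort(key=error)
    pop = pop[:50] + [mutate(t) for t in pop[:50]]   # elitism + mutation
best = min(error(t) for t in pop)
print(f"best error: {start:.1f} -> {best:.1f}")
```

Everything after the fitness definition is generic machinery; remove the precise `TARGET` and `XS` and there is nothing to evolve toward, which is why current genetic programming poses no AUI danger.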
Response: Although this is probably what should be done in order to prevent the development of AUI, it is not clear how to do it. People have basic goals to survive and reproduce tied into their biology, but little is known about how people choose general goals (e.g. be a scientist) or generate ambitious problems (e.g. find out how the brain becomes conscious). Perhaps we should do research on how people generate such agendas - I think emotions have a lot to do with it. Understanding this very high level of intelligence may enable us to ensure that computers are never programmed to have it or evolve it.
WILL SPIRITUAL ROBOTS REPLACE HUMANITY BY 2100? A SYMPOSIUM AT STANFORD