Dualist: No, consciousness requires a non-physical soul.
Eliminativist: No, consciousness does not really exist.
Mysterian: Don't know; consciousness is too complicated for us to figure out.
Reductionist: Yes, it is a computational process, or at least a biological one.
Emergentist: Yes, it is an emergent property of a biocomputational process.
McDermott, D. (2001). Mind and mechanism. Cambridge, MA: MIT Press.
P. 94. "If there is ever such a thing as an intelligent robot, then it will have to exhibit something very much like consciousness." He means not just awareness or self-awareness, but phenomenal consciousness, i.e. experiences or qualia.
P. 98. "A system has free will if and only if it makes decisions based on causal models in which the symbols denoting itself are marked as exempt from causality." A system has free will if it believes it is exempt from causal laws. Humans have this illusory belief, and so could robots.
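A rough sketch of how such a self-exempting causal model might be programmed (illustrative only; the class and symbol names are assumptions, not McDermott's code):

```python
# Illustrative sketch: a decision maker whose causal model covers external
# events but marks the symbol denoting itself as exempt from causal explanation.

class CausalModel:
    def __init__(self):
        self.causes = {}             # maps an event symbol to its assigned causes
        self.exempt = set()          # symbols treated as uncaused causes

    def add_cause(self, effect, cause):
        self.causes.setdefault(effect, []).append(cause)

    def mark_exempt(self, symbol):
        self.exempt.add(symbol)

    def explain(self, event):
        if event in self.exempt:
            return f"{event} is an uncaused origin of action (free will)"
        return f"{event} is caused by {self.causes.get(event, ['unknown factors'])}"


model = CausalModel()
model.add_cause("window breaks", "rock hits window")
model.add_cause("rock hits window", "SELF throws rock")
model.mark_exempt("SELF throws rock")   # the agent exempts its own decisions

print(model.explain("window breaks"))
print(model.explain("SELF throws rock"))
```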
Robots are able to make sensory discriminations, e.g. between different colors, and to have preferences that assign different values to different outcomes. Robots could believe that they have these abilities. Hence robots could believe they have qualia.
Like people, a robot could have a self-model that it uses to predict its own behavior. This self-model would include the belief that it has qualia. Hence, like people, robots could have qualia.
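A rough sketch of a self-model that ascribes qualia to itself on the basis of its discrimination and preference abilities (illustrative assumptions throughout, not McDermott's code):

```python
# Illustrative sketch: a robot self-model that ascribes qualia to itself
# because it can discriminate inputs and assign preferences to outcomes.

def discriminate(wavelength_nm):
    """Crude color discrimination from wavelength."""
    return "red" if wavelength_nm > 600 else "blue"

PREFERENCES = {"red": 0.9, "blue": 0.2}   # values assigned to outcomes

class SelfModel:
    def __init__(self):
        self.beliefs = set()

    def observe_own_abilities(self):
        if discriminate(650) != discriminate(470):
            self.beliefs.add("I can discriminate colors")
        if PREFERENCES["red"] != PREFERENCES["blue"]:
            self.beliefs.add("I prefer some outcomes to others")
        if {"I can discriminate colors",
            "I prefer some outcomes to others"} <= self.beliefs:
            # McDermott's move: belief in qualia arises from self-modeling
            self.beliefs.add("I have qualia")

robot = SelfModel()
robot.observe_own_abilities()
print(robot.beliefs)
```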
P. 135. "There is simply no place, and no need, for qualia in an ordinary computational system. The quale is brought into being solely by the process of self-modeling."
Free will may be an illusion, but qualia like visual experiences and emotions are not.
There is much more to qualia than making discriminations and having preferences. Thermostats and cars can do that.
Self-modeling is not necessary for simple consciousness, i.e. Damasio's core consciousness.
Sensory and emotional consciousness more plausibly emerge from far more complex processes than the ones described by McDermott. So it is not clear that a robot would have consciousness of a non-illusory sort.
We need a much more elaborate explanation of why minds are conscious.
Explanation of why a type of system has a property consists of: describing the parts of the system, the interactions among the parts, and how the property results from those interactions.
Why is water liquid? Water consists of H2O molecules, which are not connected by strong bonds that would make water a solid, but which weakly attract one another through electrostatic forces (hydrogen bonds and van der Waals attractions). Liquidity is a property of a collection of molecules.
A cell consists of molecules, which are not alive, but which participate in chemical reactions.
Cell processes include mitosis (division), apoptosis (death), fueling, locomotion, tissue formation, and signaling.
The life of a cell results from ongoing molecular reactions that support mitosis and other processes. A cell dies when these reactions stop.
Why do people get a disease? Many diseases involve complex interactions of genes, organs, and environments.
Why do magnets attract?
A brain consists of neurons and other cells (glia). Individually, no brain cell is conscious.
Brain processes include low-level neuronal operations, but also high-level cortical dynamics relating different areas.
Conjecture: consciousness emerges from high-level cortical dynamics involving multiple brain areas, sometimes including self-representations. These are computational processes, but are also biochemical.
A computational model of consciousness would consist of interrelated cortical areas computing together. E.g. emotion: nucleus accumbens integrating information from hippocampus, neocortex, and amygdala. No need for self-modeling or higher-level representation.
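A toy sketch of interrelated areas computing together (the connection weights and update rule are illustrative assumptions, not a serious neural model):

```python
import numpy as np

# Toy sketch of interrelated "areas" computing together: a nucleus-accumbens
# node integrates activation from hippocampus, neocortex, and amygdala nodes.

areas = ["hippocampus", "neocortex", "amygdala", "nucleus_accumbens"]
activation = {a: 0.0 for a in areas}
activation["hippocampus"] = 0.8      # e.g. a retrieved memory
activation["amygdala"] = 0.6         # e.g. an emotional reaction

weights = {                          # (source, target): weight (assumed values)
    ("hippocampus", "nucleus_accumbens"): 0.5,
    ("neocortex", "nucleus_accumbens"): 0.3,
    ("amygdala", "nucleus_accumbens"): 0.7,
    ("nucleus_accumbens", "neocortex"): 0.4,   # feedback to cortex
}

for step in range(10):               # let the areas settle together
    new = dict(activation)
    for (src, tgt), w in weights.items():
        new[tgt] = np.tanh(new[tgt] + w * activation[src])
    activation = new

print({a: round(v, 2) for a, v in activation.items()})
```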
LeDoux, J. (2002). The synaptic self. New York: Viking.
Prefrontal cortex receives convergent inputs from sensory, memory, emotional and motivational circuits. It performs working memory and executive functions.
This information convergence, possibly combined with neuronal synchrony, accounts for conscious experience.
Emotional consciousness is the result of interactions among prefrontal cortex, sensory thalamus, sensory cortex, medial temporal lobe memory system, and the amygdala.
Different emotional feelings are caused by different patterns of neuronal processing.
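A toy sketch of convergence plus synchrony (the signal frequencies, phases, and synchrony threshold are illustrative assumptions, not LeDoux's model):

```python
import numpy as np

# Toy sketch: sensory, memory, and emotional signals converge on a
# "prefrontal" integrator, and pairwise correlation of their oscillations
# stands in for neuronal synchrony.

t = np.linspace(0, 1, 1000)
signals = {
    "sensory_cortex": np.sin(2 * np.pi * 40 * t),              # 40 Hz oscillation
    "medial_temporal_memory": np.sin(2 * np.pi * 40 * t + 0.1),
    "amygdala": np.sin(2 * np.pi * 40 * t + 0.2),
}

prefrontal = sum(signals.values()) / len(signals)               # convergence

pairs = [("sensory_cortex", "medial_temporal_memory"),
         ("sensory_cortex", "amygdala"),
         ("medial_temporal_memory", "amygdala")]
synchrony = np.mean([np.corrcoef(signals[a], signals[b])[0, 1]
                     for a, b in pairs])

print(f"prefrontal integrator peak amplitude: {prefrontal.max():.2f}")
print(f"mean pairwise synchrony: {synchrony:.2f}")
print("conscious-experience marker:", synchrony > 0.9)          # assumed threshold
```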
How does the mind turn sounds into words (phonology)?
How does the mind turn words into sentences (syntax)?
How does the mind understand words and sentences (semantics)?
How does the mind understand discourse (semantics, pragmatics)?
How does the mind generate discourse?
How does the mind translate between languages?
How does the mind acquire the capacities just described?
To what extent is knowledge of language innate?
Linguistic knowledge consists largely of rules that govern phonological and syntactic processing.
The computational procedures involved in understanding and generating language are largely rule-based.
Language learning is learning of rules.
Many of these rules are innate.
The leading proponent of this general view has been Noam Chomsky.
Rule-based models of language comprehension and generation have been developed in the SOAR system and within other frameworks. See week 4.
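A minimal illustration of rule-based syntactic processing, using a toy context-free grammar (not SOAR; the grammar and lexicon are assumptions):

```python
# Minimal sketch of rule-based parsing: a toy context-free grammar and a
# recursive parser that applies rules top-down.

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"], ["V"]],
}
LEXICON = {"Det": {"the", "a"}, "N": {"dog", "cat"}, "V": {"chased", "slept"}}

def parse(symbol, words):
    """Return list of (parse_tree, remaining_words) for symbol over words."""
    if symbol in LEXICON:
        if words and words[0] in LEXICON[symbol]:
            return [((symbol, words[0]), words[1:])]
        return []
    results = []
    for rule in GRAMMAR[symbol]:
        partials = [([], words)]
        for part in rule:                      # apply each constituent in turn
            partials = [(trees + [t], rest2)
                        for trees, rest in partials
                        for t, rest2 in parse(part, rest)]
        results += [((symbol, trees), rest) for trees, rest in partials]
    return results

trees = [t for t, rest in parse("S", "the dog chased a cat".split()) if not rest]
print(trees[0])
```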
Linguistic knowledge consists largely of statistical constraints that are less general than rules and are encoded in neural networks.
The computational procedures involved in understanding and generating language are largely parallel constraint satisfaction.
Language learning is adjusting constraints, i.e. excitatory and inhibitory links.
The innate component of language is limited. Genes do not code for synaptic connectivity.
Connectionist (artificial neural network) models of various stages of language comprehension have been developed, using techniques discussed in week 7 and week 8.
The leading proponent of the connectionist approach to language is Jeff Elman.
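A minimal illustration of parallel constraint satisfaction for word-sense disambiguation (the units and weights are assumptions, not Elman's models):

```python
import numpy as np

# Tiny parallel constraint satisfaction sketch: units for two senses of "bank"
# plus two context units, linked by excitatory (positive) and inhibitory
# (negative) weights, settle by iterative updating.

units = ["bank_river", "bank_money", "ctx_water", "ctx_loan"]
W = np.array([                        # symmetric constraint weights (assumed)
    [ 0.0, -0.8,  0.6, -0.2],         # bank_river
    [-0.8,  0.0, -0.2,  0.6],         # bank_money
    [ 0.6, -0.2,  0.0,  0.0],         # ctx_water
    [-0.2,  0.6,  0.0,  0.0],         # ctx_loan
])

a = np.zeros(4)
a[units.index("ctx_water")] = 1.0     # the sentence mentions water

for _ in range(50):                   # settle into a stable interpretation
    net = W @ a
    a = np.clip(a + 0.1 * net, 0.0, 1.0)
    a[units.index("ctx_water")] = 1.0 # keep the contextual evidence clamped

print(dict(zip(units, np.round(a, 2))))
```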
Perhaps use of language depends on statistical methods different from those being studied with artificial neural networks, e.g. latent semantic analysis.
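A minimal sketch of latent semantic analysis: reduce a word-by-document count matrix by singular value decomposition and compare words in the reduced space (the toy corpus and number of dimensions are assumptions):

```python
import numpy as np

# Minimal latent semantic analysis sketch: a word-by-document count matrix is
# reduced by truncated SVD, and word similarity is cosine similarity in the
# reduced space.

docs = ["dog chased cat", "cat chased mouse", "stocks rose sharply",
        "markets and stocks fell"]
vocab = sorted({w for d in docs for w in d.split()})
counts = np.array([[d.split().count(w) for d in docs] for w in vocab])

U, S, Vt = np.linalg.svd(counts.astype(float), full_matrices=False)
k = 2                                         # keep 2 latent dimensions
word_vecs = U[:, :k] * S[:k]                  # word coordinates in latent space

def similarity(w1, w2):
    v1, v2 = word_vecs[vocab.index(w1)], word_vecs[vocab.index(w2)]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))

print("dog ~ cat:   ", round(similarity("dog", "cat"), 2))
print("dog ~ stocks:", round(similarity("dog", "stocks"), 2))
```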
Psychological experiments to determine linguistic capabilities.
Computational models to see whether neural network or rule-based models most generally explain our linguistic capabilities.
Hybrid hypothesis: the linguistic mind is a rule-based AND a constraint-based system.