Rationality and Science
Paul Thagard
Philosophy Department
University of Waterloo
pthagard@uwaterloo.ca
Thagard, P. (2004). Rationality and science. In A. Mele & P. Rawling (Eds.), The Oxford handbook of rationality (pp. 363-379). Oxford: Oxford University Press.
Introduction
Are scientists rational? What would constitute scientific rationality?
In the philosophy of science, these questions are usually discussed
in the context of theory choice: What are the appropriate standards
for evaluating scientific theories, and do scientists follow them?
But there are many kinds of scientific reasoning besides theory
choice, such as analyzing experimental data. Moreover, reasoning
in science is sometimes practical, for example when scientists
decide what research programs to pursue and what experiments to
perform. Scientific rationality involves groups as well as individuals,
for we can ask whether scientific communities are rational in
their collective pursuit of the aims of science.
This chapter provides a review and assessment of central aspects
of rationality in science. It deals first with the traditional
question: What is the nature of the reasoning by which individual
scientists accept and reject conflicting hypotheses? I will
also discuss the nature of practical reason in science, and then
turn to the question of the nature of group rationality in science.
The remainder of the chapter considers whether scientists are
in fact rational, that is, whether they conform to normative standards
of individual and group rationality. I consider various psychological
and sociological factors that have been taken to undermine the
rationality of science.
What is Science For?
First, however, it is necessary to deal with a prior issue:
What are the goals of science? In general, rationality requires
reasoning strategies that are effective for accomplishing goals,
so discussion of the rationality of science must consider what
science is supposed to accomplish. To begin, we can distinguish
between the epistemic and the practical goals of science. Possible
epistemic goals include truth, explanation and empirical adequacy.
Possible practical goals include increasing human welfare through
technological advances. My view is that science has all of these
goals, but let us consider some more extreme views.
Some philosophers have advocated the view that the primary epistemic
aim of science is the achievement of truth and the avoidance of
error (Goldman, 1999). On this view, science is rational to the
extent that the beliefs that it accumulates are true, and scientific
reasoning is rational to the extent that it tends to produce true
beliefs. The philosophical position of scientific realism
maintains that science aims for true theories and to some extent
accomplishes this aim, producing some theories that are at least
approximately true. In contrast, the position of anti-realism
is that truth is not a concern of science. One of the most
prominent anti-realists is Bas van Fraassen (1980), who argues
that science aims only for empirical adequacy: scientific theories
should make predictions about observable phenomena, but should
not be construed as true or false. The anti-realist view, however,
is at odds with the practice and success of science (see Psillos,
1999, for a systematic defense of realism). Most scientists talk and act
as if they are trying to figure out how the world actually works,
not just attempting to make accurate predictions. Moreover, the
impressive technological successes of science are utterly mysterious
unless the scientific theories that made them possible are at
least approximately true. For example, my computer would not
be processing this chapter unless there really are electrons moving
through its silicon chips.
But truth is not the only goal of science. The most impressive
accomplishments of science are not individual facts or even general
laws, but broad theories that explain a great variety of phenomena.
For example, in physics the theory of relativity and quantum
theory each provide understanding of many phenomena, and in biology
the theory of evolution and genetic theory have very broad application.
Thus a major part of what scientists strive to do is to generate
explanations that tie together many facts that would individually
not be very interesting. A scientist who aimed only to accumulate
truths and avoid errors would be awash in trivialities. Hence
science aims for explanation as well as truth. These two goals
subsume the goal of empirical adequacy, because for most scientists
the point of describing and predicting observed phenomena is to
find out what is true about them and to explain them.
But there are also practical goals that science accomplishes.
Nineteenth-century physicists such as Faraday and Maxwell were
primarily driven by epistemic goals of understanding electrical
and magnetic phenomena, but their work made possible the electronic
technologies that now pervade human life. Research on topics
such as superconductivity and lasers has operated with both scientific
and technological aims. Molecular biology is also a field
that began with primarily epistemic aims but that has increasingly
been motivated by potential practical applications in medicine
and agriculture. Similarly, the major focus of the cognitive
sciences such as psychology and neuroscience has been understanding
the basic mechanisms of thinking, but there have also been practical
motivations such as improving education and the treatment of mental
illnesses. It is clear, therefore, that one aim of the scientific
enterprise is the improvement of human welfare through technological
applications. This is not to say that each scientist must have
that aim, since many scientists work far from areas of immediate
application, but science as a whole has made and should continue
to make technological contributions.
There are also more critical views of the practical aims of science.
It has been claimed that science functions largely to help maintain
the hegemony of dominant political and economic forces by providing
ideologies and technologies that forestall the uprising of oppressed
peoples. This claim is a gross exaggeration, but there is no
question that the products of scientific research can have adverse
effects, for example the use of dubious theories of racial superiority
to justify social policies, and the use of advanced technology
to produce devastating weapons. But saying that the aims of
science are truth, explanation, and human welfare does not imply
that these aims are always accomplished, only that these are the
aims that science generally does and should have. We can now
address the question of what strategies of rational thinking best
serve the accomplishment of these aims.
Models of Individual Rationality
Consider a recent example of scientific reasoning, the collision
theory of dinosaur extinction. Since the discovery of dinosaur
fossils in the nineteenth century, scientists have pondered why
the dinosaurs became extinct. Dozens of different explanations
have been proposed, but in the past two decades one hypothesis
has come to be widely accepted: dinosaurs became extinct around
65 million years ago because a large asteroid collided with the
earth. Evidence for the collision hypothesis includes the
discovery of a layer of iridium (a substance more common in asteroids
than on earth) in geological formations laid down around the same
time that the dinosaurs became extinct. What is the nature of
the reasoning that led most paleontologists and geologists to
accept the collision hypothesis and reject its competitors?
I shall consider three main answers to this question, derived
from confirmation theory, Bayesian probability theory, and the
theory of explanatory coherence. In each case, I will describe
a kind of ideal epistemic agent, and consider whether scientists
are in fact agents of the specified kind.
Confirmation and Falsification
Much work in the philosophy of science has presumed that scientists
are confirmation agents that operate roughly as follows
(see, for example, Hempel, 1965). Scientists start with hypotheses
that they use to make predictions about observable phenomena.
If experiments or other observations show that the predictions
are true, then the hypotheses are said to be confirmed. A hypothesis
that has received substantial empirical confirmation can be accepted
as true, or at least as empirically adequate. For example, the
hypothesis that dinosaurs became extinct because of an asteroid
collision should be accepted if it has been confirmed by successful
predictions.
Popper (1959) argued that scientists should not aim for confirmation,
but should operate as the following sort of falsification agents.
Scientists use hypotheses to make predictions, but their
primary aim should be to find evidence that contradicts the predicted
results, leading to the rejection of hypotheses rather than their
acceptance. Hypotheses that have survived severe attempts to
falsify them are said to be corroborated. On this view, the
proponents of the collision theory of dinosaur extinction should
attempt to falsify their theory by stringent tests, and only then
consider it corroborated, though still not accepted as true.
Although hypotheses are often used to make predictions, the process
of science is much too complex for scientists to function generally
as either confirmation agents or falsification agents. In particular,
it is exceedingly rare for scientists to set out to refute their
own hypotheses, and, given the difficulty of performing complex
experiments, it is fortunate that they aim for confirmations
rather than refutations. There are many reasons why an experimental
prediction might fail, ranging from problems with instruments
or personnel to failure to control for key variables. A falsification
agent would frequently end up throwing away good hypotheses.
But scientists are not just confirmation agents either, since
hypotheses often get support, not just from new predictions, but
from explaining data already obtained. Moreover, it often
happens in science that there are conflicting hypotheses that
are to some extent confirmed by empirical data. As Lakatos (1970)
argued, the task then is not just to determine what hypotheses
are confirmed, but rather what hypotheses are better confirmed
than their competitors. Hypothesis assessment is rarely a matter
of evaluating a hypothesis with respect to its predictions, but
rather requires evaluating competing hypotheses, with the best
to be accepted and the others to be rejected. There are both
probabilistic and explanatory approaches to such comparative assessment.
Probabilities
Carnap and numerous other philosophers of science have attempted
to use the resources of probability theory to illuminate scientific
reasoning (Carnap, 1950; Howson and Urbach, 1989; Maher, 1993).
Probabilistic agents operate as follows. They assess hypotheses
by considering the probability of a hypothesis given the evidence,
expressed as the conditional probability P(H/E). The standard
tool for calculating such probabilities is Bayes's Theorem, one
form of which is:
P(H/E) = P(H) * P(E/H) / P(E).
This says that the posterior probability of the hypothesis H given
the evidence E is calculated by multiplying the prior probability
of the hypothesis by the probability of the evidence given the
hypothesis, all divided by the probability of the evidence.
Intuitively, the theorem is very appealing, with a hypothesis
becoming more probable to the extent that it makes improbable
evidence more probable. Probabilistic agents look at all the
relevant evidence, calculate values for P(E) and P(E/H), take
into account some prior value of P(H), and then calculate P(H/E).
Of two incompatible hypotheses, probabilistic agents prefer
the one with the highest posterior probability. A probabilistic
agent would accept the collision theory of dinosaur extinction
if its probability given the evidence is higher than the probability
of competing theories.
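To make the probabilistic-agent model concrete, here is a minimal sketch in Python of how such an agent might compare two rival hypotheses using Bayes's theorem. The hypothesis names, priors, and likelihoods are invented for illustration; as discussed below, it is far from clear that real scientific debates supply such numbers.

```python
# Minimal sketch of a probabilistic agent comparing two rival hypotheses.
# All numbers are invented for illustration; they are not drawn from any
# actual analysis of the dinosaur-extinction debate.

def posterior(prior_h, likelihood_e_given_h, prob_e):
    """Bayes's theorem: P(H/E) = P(H) * P(E/H) / P(E)."""
    return prior_h * likelihood_e_given_h / prob_e

p_e = 0.05  # P(E): assumed probability of the iridium-layer evidence

collision = posterior(prior_h=0.1, likelihood_e_given_h=0.4, prob_e=p_e)
volcanism = posterior(prior_h=0.2, likelihood_e_given_h=0.05, prob_e=p_e)

# A probabilistic agent prefers the hypothesis with the higher posterior.
print(f"P(collision/E) = {collision:.2f}")  # 0.80
print(f"P(volcanism/E) = {volcanism:.2f}")  # 0.20
```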
Unfortunately, it is not so easy as it sounds for a scientist
to be a probabilistic agent. Various philosophers, e.g. Glymour
(1980) and Earman (1992), have discussed technical problems with
applying probability theory to scientific reasoning, but I will
mention only what I consider to be the three biggest roadblocks.
First, what is the interpretation of probability in P(H/E)?
Probability has its clearest interpretation as frequencies in
populations of observable events; for example, the probability
that a die will turn up a 3 is 1/6, meaning that in a large number
of trials there will tend to be 1 event in 6 that turns up a 3.
But what meaning can we attach to the probability of dinosaur
extinction being caused by an asteroid collision? There is
no obvious way to interpret the probability of such causal hypotheses
in terms of objective frequencies in specifiable populations.
The alternative interpretation is that such probabilities are
degrees of belief, but there is substantial evidence that people's
thinking does not conform to probability theory (e.g. Kahneman,
Slovic, and Tversky, 1982; Tversky and Koehler, 1994). One might say that
the probability of a hypothesis is an idealized degree of belief,
but it is not clear what this means. Degree of belief is sometimes
cashed out in terms of betting behavior, but what would it mean
to bet on the truth of various theories of dinosaur extinction?
The second difficulty in viewing scientists as probabilistic agents
is that there are computational problems in calculating probabilities
in accord with Bayes's theorem. In general, the problem of
calculating probabilities is computationally intractable in the
sense that the number of conditional probabilities required increases
exponentially with the number of propositions. However, powerful
and efficient algorithms have been developed for calculating probabilities
in Bayesian networks that make simplifying assumptions about the
mutual independence of different propositions (Pearl, 1988).
No one, however, has yet used Bayesian networks to simulate
a complex case of scientific reasoning such as debates about dinosaur
extinction. In contrast, the next section discusses a computationally
feasible account of scientific inference based on explanatory
coherence.
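The intractability point can be illustrated with a simple parameter count. The sketch below, using an invented network structure, contrasts the number of probabilities needed to specify a full joint distribution over n binary propositions with the much smaller number needed when each proposition depends directly on only a few others, as in a Bayesian network.

```python
# Sketch of the parameter explosion that motivates Bayesian networks.
# The network structure (how many parents each proposition has) is invented.

def full_joint_parameters(n):
    """A joint distribution over n binary propositions needs 2**n - 1 numbers."""
    return 2 ** n - 1

def network_parameters(parent_counts):
    """With binary variables, each node needs a conditional probability table
    with 2**(number of parents) entries."""
    return sum(2 ** k for k in parent_counts)

n = 20
parents = [0, 1, 2] * 6 + [2, 2]  # 20 nodes, each with at most two parents

print(full_joint_parameters(n))     # 1048575 numbers for the full joint
print(network_parameters(parents))  # 50 numbers under the independence assumptions
```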
The third difficulty with probabilistic agents is that they may
ignore qualitative factors affecting theory choice. Scientists'
arguments suggest that they care not only about how much evidence there
is for a theory, but also about the variety of the evidence, the
simplicity of the theory that accounts for it, and analogies between
proposed explanations and other established ones. Perhaps
simplicity and analogy could be accounted for in terms of prior
probabilities: a simpler theory or one offering analogous explanations
would get a higher value for P(H) to be fed into the calculation,
via Bayes's theorem, of the posterior probability P(H/E). But
the view of probability as subjective degree of belief leaves
it mysterious how people do or should arrive at prior probabilities.
Explanatory Coherence
If scientists are not confirmation, falsification, or probabilistic
agents, what are they? One answer, which goes back to two nineteenth-century
philosophers of science, William Whewell and Charles Peirce, is
that they are explanation agents. On this view, what scientists
do in theoretical inference is to generate explanations of observed
phenomena, and a theory is to be preferred to its competitors
if it provides a better explanation of the evidence. Theories
are accepted on the basis of an inference to the best explanation.
Such inferences are not merely a matter of counting which of
competing theories explains more pieces of evidence,
but require assessment in terms of the overall explanatory coherence
of each hypothesis with respect to a scientist's whole belief
system. Factors that go into this assessment for a particular
hypothesis include the evidence that it explains, its explanation
by higher-level hypotheses, its consistency with background information,
its simplicity, and analogies between the explanations it offers
and explanations offered by established hypotheses
(Harman, 1986; Lipton, 1991; Thagard, 1988).
The major difficulty with the conception of scientists as explanation
agents is the vagueness of concepts such as explanation, inference
to the best explanation, and explanatory coherence. Historically,
explanation has been conceptualized as a deductive relation, a
probabilistic relation, and a causal relation. The deductive
conceptualization of explanation fits well with the confirmation
and falsification view of agents: a hypothesis explains a piece
of evidence if a description of the evidence follows deductively
from the hypothesis. Similarly, the probabilistic conceptualization
of explanation fits well with the probabilistic view of agents:
a hypothesis explains a piece of evidence if the probability
of the evidence given the hypothesis is higher than the probability
of the evidence without the hypothesis. Like Salmon (1984) and
others, I prefer a conceptualization of explanation as the provision
of causes: a hypothesis explains a piece of evidence if it provides
a cause of what the evidence describes. The causal conceptualization
must face the problem of saying what causes are and how causal
relations are distinct from deductive and probabilistic ones (see
Thagard, 1999, ch. 7).
Assuming we know what an explanation is, how can we characterize
inference to the best explanation? I have shown how a precise
and easily computable notion of explanatory coherence can be applied
to many central cases in the history of science (Thagard, 1992).
For example, we can understand why the collision theory of dinosaur
extinction has been accepted by many scientists but rejected by
others by assessing its explanatory coherence with respect to
the evidence available to different scientists (see Thagard, 1991,
for computer simulations of the dinosaur debate using the program
ECHO).
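The following sketch gives a rough sense of how a coherence computation of this kind can be carried out. It is loosely in the spirit of ECHO's connectionist algorithm but greatly simplified: hypotheses and evidence are units whose activations are adjusted iteratively, with explanatory relations acting as positive constraints and competition as a negative constraint. The units, weights, and update rule are illustrative assumptions, not a reproduction of the actual ECHO simulations.

```python
# Much-simplified sketch of coherence maximization by iterative activation
# updating, loosely in the spirit of ECHO. The units, link weights, decay
# factor, and number of iterations are all invented for illustration.

units = ["collision", "volcanism", "iridium_layer", "crater"]
evidence = {"iridium_layer", "crater"}          # evidence units stay clamped at 1.0
activation = {u: (1.0 if u in evidence else 0.01) for u in units}

# Positive (excitatory) links for explanation, negative (inhibitory) for competition.
links = {
    ("collision", "iridium_layer"): 0.3,
    ("collision", "crater"): 0.3,
    ("volcanism", "iridium_layer"): 0.1,
    ("collision", "volcanism"): -0.2,
}

def neighbors(unit):
    """Yield (other unit, weight) for every link touching the given unit."""
    for (a, b), w in links.items():
        if a == unit:
            yield b, w
        elif b == unit:
            yield a, w

for _ in range(100):                            # let the network settle
    updated = {}
    for u in units:
        if u in evidence:
            updated[u] = 1.0
            continue
        net_input = sum(w * activation[v] for v, w in neighbors(u))
        value = activation[u] * 0.95 + net_input  # decay plus weighted input
        updated[u] = max(-1.0, min(1.0, value))
    activation = updated

print({u: round(a, 2) for u, a in activation.items()})
```

In this toy network the collision hypothesis settles at a high positive activation (acceptance) while its competitor is driven negative (rejection), because the collision hypothesis explains more of the clamped evidence.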
I prefer to view scientists as explanation agents rather than
as confirmation, falsification, or probabilistic agents because
this view fits better with the historical practice of scientists
as evident in their writings, as well as with psychological theories
that are skeptical about the applicability of deductive and probabilistic
reasoning in human thinking. But I acknowledge that the probabilistic
agent view is probably the most popular one in contemporary philosophy
of science; it has largely absorbed the confirmation agent view
by the plausible principle that evidence confirms a hypothesis
if and only if the evidence makes the hypothesis more probable,
i.e. P(H/E) > P(H). It is also possible that scientists
are not rational agents of any of these types, but rather are
reasoners of a very different sort. For example, Mayo (1996)
develops a view of scientists as modeling patterns of experimental
results that are useful for distinguishing errors. Solomon (2001)
describes scientists as reaching conclusions based on a wide variety
of "decision vectors," ranging from empirical factors
such as salience of data to non-empirical factors such as ideology.
Practical Reason
As mentioned in this chapter's introduction, there is much
more to scientific rationality than accepting and rejecting hypotheses.
Here are some of the important decisions that scientists make
in the course of their careers:
1. What general field of study should I enter, e.g. should I
become a paleontologist or a geologist?
2. Where and with whom should I study?
3. What research topics should I pursue?
4. What experiments should I do?
5. With whom should I collaborate?
When scientists make these decisions, they are obviously acting
for more than epistemic reasons, entering a field for more reasons
than that it would maximize their stock of truths and explanations.
Scientists have personal aims as well as epistemic ones, such
as having fun, being successful, living well, becoming famous,
and so on. Let us now consider two models of scientists as practical
decision makers: scientists as utility agents and scientists
as emotional agents.
The utility agent view is the familiar one from economics, with
an agent performing an action because of a calculation that the
action has more expected utility than alternative actions, where
expected utility is a function of the utilities and probabilities
of different outcomes. This view is consonant with the epistemic
view of scientists as probabilistic agents, and has many of the
same difficulties. When scientists are choosing between different
research topics, do they have any idea of the relevant probabilities
and utilities? Suppose I am a molecular biologist doing genome
research who has to decide whether to work with yeast or with
worms. I may have hunches about which research program may
yield the more interesting results, but it is hard to see how
these hunches could be translated into anything as precise as
probabilities and utilities.
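For comparison, here is a minimal sketch of what the utility-agent model asks of the yeast-versus-worms decision. The outcome probabilities and utilities are wholly invented; the difficulty just noted is that a real scientist has no principled way of supplying them.

```python
# Sketch of an expected-utility comparison between two research programs.
# Probabilities and utilities are invented for illustration only.

def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

yeast = [(0.3, 100), (0.7, 10)]  # chance of a major result vs. a routine one
worms = [(0.1, 300), (0.9, 5)]

print(expected_utility(yeast))   # 37.0
print(expected_utility(worms))   # 34.5 -> a utility agent would choose yeast
```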
A more realistic view of the decision making of scientists and
people in general is that we choose the actions that receive the
most positive emotional evaluation based on their coherence with
our goals (Thagard, 2000, ch. 6; Thagard, 2001). On this view,
decision making is based on intuition rather than on numerical
calculation: unconsciously we balance different actions and different
goals, arriving at a somewhat coherent set of accepted ones.
The importance of goals is affected by how they fit with other
goals as well as with the different actions that are available
to us. We may have little conscious awareness of this balancing
process, but the results of the process come to consciousness
via emotions. For example, scientists may feel excited by a
particular research program and bored or even disgusted by an
alternative program. Psychologists use the term valence
to refer to positive or negative emotional evaluations. For
discussions of the role of emotions in scientific thinking, see
Thagard (forthcoming-a, forthcoming-b). Like Nussbaum (2001),
I view emotions as intelligent reactions to perceptions of value,
including epistemic value.
Just as there is a concordance between the probabilistic view
of epistemic agents and the utility view of practical agents,
there is a concordance between the explanatory coherence view
of epistemic agents and the emotional coherence view of practical
agents. In fact, emotions play a significant role in inference
to hypotheses as well as in inference to actions, because the
inputs to and outputs from both kinds of inference are emotional
as well as cognitive. The similarity of outputs is evident
when scientists appreciate the great explanatory power of a theory
and characterize it as elegant, exciting, or even beautiful.
As with judgments of emotional coherence in practical
decision making, we have no direct conscious access to the cognitive
processes by which we judge some hypotheses to be more coherent
than others. What emerges to consciousness from a judgment
of explanatory coherence is often emotional, in the form of liking
or even joy with respect to one hypothesis, and dislike or even
contempt for rejected competing hypotheses. For example, when
Walter and Luis Alvarez came up with the hypothesis that dinosaurs
had become extinct because of an asteroid collision, they found
the hypothesis not only plausible but exciting (Alvarez, 1998).
In contrast, some skeptical paleontologists thought the hypothesis
was not only dubious but ridiculous. Emotional inputs to hypothesis
evaluation include the varying attitudes that scientists have toward
different experimental results and even toward different experiments:
any good scientist knows that some experiments are better than
others. Another kind of emotional input is analogical: a theory
analogous to a positively-viewed theory such as evolution will
have greater positive valence than one that is analogous to a
scorned theory such as cold fusion.
Thus my view of scientists as explanatory-emotional agents is
very different from the view of them as probabilistic-utility
agents. My emphasis on emotions will probably have readers wondering
whether scientists are rational at all. Perhaps they are just
swayed by their various intellectual prejudices and personal desires
to plan research programs and accept hypotheses in ways that disregard
the epistemic aims of truth and explanation. There are, unfortunately,
cases where scientists are deviant in these ways, with disastrous
results such as fraud and other kinds of unethical behavior.
But the temperaments and training of most scientists are such
that they have an emotional attachment to the crucial epistemic
aims. Many scientists become scientists because they enjoy finding
out how things work, so that the aims of truth and explanation
are with them from the beginnings of their scientific training.
These attachments can be fostered by working with advisors who
not only value these aims but transmit their emotional evaluations
of them to the students and postdoctoral fellows with whom they
work. So, for most scientists, a commitment to fostering explanation
and truth is an emotional input into their practical decision
making.
Models of Group Rationality
As Kuhn (1970) and many other historians, philosophers, and
sociologists of science have noted, science is not merely a matter
of individual rationality. Scientists do their work in the
context of groups of various sizes, from the research teams in
their own laboratories to communities of scientists working on similar
projects to the overall scientific community. As I have documented
elsewhere (Thagard 1999, ch. 11), most scientific articles have
multiple authors, and the trend is toward increasing collaboration.
In addition, all scientists operate within the context of a
wider community with shared societies, journals, and conferences.
Therefore the question of the rationality of science can be
raised for groups as well as individuals: What is it for a group
of scientists to be collectively rational, and are such groups
generally rational? I will assume that groups of scientists
have the same primary aims that I attributed to science in general:
truth, explanation, and human welfare via technological applications.
It might seem that the rationality of scientific groups is just
the sum of the rationality of the individuals that make them up.
Then a group is rational if and only if the individual scientists
in it are rational. But it is possible to have individual rationality
without group rationality, if the pursuit of scientific aims by
each scientist does not add up to optimal group performance.
For example, suppose that each scientist rationally chooses
to pursue exactly the same research strategy as the others, with
the result that there is little diversity in the resulting investigations,
and paths that would be more fertile with respect to truth and
explanation are not taken. Philosophers such as Kitcher (1993)
have emphasized the need for cognitive diversity in science.
On the other hand, it might be possible to have group rationality
despite lack of individual rationality. Hull (1989) has suggested
that individual scientists who seek fame and power rather than
truth and explanation may in fact contribute to the overall aims
of science, because their individualistic pursuit of non-epistemic
motives in fact leads the scientific group as a whole to prosper.
This is analogous to Adam Smith's economic model in which individual
greed leads to overall economic growth and efficiency.
It is important to recognize also that group rationality in science
is both epistemic and practical. Of a particular scientific
community, we can ask two kinds of question:
(1) Epistemic: Given the evidence, what should be the distribution
of beliefs in the community?
(2) Practical: What should be the distribution of research initiatives
in the community?
For the epistemic question, it might be argued that if all scientists
have access to the same evidence and hypotheses, then they should
all acquire the same beliefs. Such unanimity would, however,
be detrimental to the long-term success of science, since it would
reduce cognitive diversity. For example, if Copernicus had been
enmeshed within the Ptolemaic theory of the universe, he might
never have generated his alternative heliocentric theory that
turned out to be superior with respect to both truth and explanation.
Similarly, in the dinosaur case Walter Alvarez would never have
formulated his theory of why dinosaurs became extinct if he had
been a conventional paleontologist.
Moreover, epistemic uniformity would contribute to practical uniformity,
which would clearly be disastrous. It would be folly to have
all scientists within a scientific community following just a
few promising leads, since this would reduce the total accomplishment
of explanations as well as retard the development of novel explanations.
Garrett Hardin (1968) coined the term "tragedy of the commons"
to describe a situation in which individual rationality could
promote group irrationality. Consider sheep herders who share
a common grazing area. Each herder separately may reason that
adding one more sheep to his or her herd would not have any serious
effect on the common area. But such individual decisions might
collectively produce over-grazing, so that there is not enough
food for any of the sheep, with the result that all sheep herders
are worse off. Analogously, we can imagine in science and other
organizations a kind of "tragedy of consensus", in which
the individuals all reach similar conclusions about what to believe,
stifling creative growth.
So, what should be our model of group rationality in science?
Kitcher (1993) and Goldman (1999) develop models of group rationality
that assume that individual scientists are probabilistic agents.
Although these analyses are interesting with respect to cognitive
diversity and truth attainment, I do not find them plausible because
of the problems with the probabilistic view discussed in the last
section. As an alternative, I have developed a model of scientific
consensus based on explanatory coherence.
This model is called CCC, for consensus = coherence + communication
(Thagard 2000, ch. 10). It assumes that each scientist is an
explanation agent, accepting and rejecting hypotheses on the basis
of their explanatory coherence with evidence and alternative hypotheses.
Communication takes place as the result of meetings between
scientists in which they exchange information about available
evidence and hypotheses. If all scientists acquire exactly
the same information, then they will agree about what hypotheses
to accept and reject. However, in any scientific community, exchange
of information is not perfect, so that some scientists may not
hear about some of the evidence and hypotheses. Moreover, different
scientists have different antecedent belief systems, so the overall
coherence of a new hypothesis may be different for different scientists.
Ideally, however, if communication continues there will eventually
be community consensus as scientists accumulate the same sets
of evidence and hypotheses and therefore reach the same coherence
judgments. The CCC model has been implemented as a computational
extension of the explanatory coherence program ECHO in which individual
scientists evaluate hypotheses on the basis of their explanatory
coherence but also exchange hypotheses and evidence with other
scientists. These simulated meetings can either be pairwise exchanges
between randomly selected pairs of scientists, or "lectures"
of the sort that take place at scientific conferences in which
one scientist can broadcast sets of hypotheses and evidence to
a group of scientists. Of course, communication is never perfect,
so it can take many meetings before all scientists acquire approximately
the same hypotheses and evidence. I have performed computational
experiments in which different numbers of simulated scientists
with varying communication rates achieve consensus in two interesting
historical cases: theories of the causes of ulcers, and theories
of the origins of the moon.
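The flavor of such simulations can be conveyed by the following highly simplified sketch of the communication side of the model: agents hold sets of items standing in for evidence and hypotheses and exchange them in random pairwise meetings until all agents hold the same items. The actual CCC implementation evaluates explanatory coherence with ECHO rather than simply pooling items, so this is an illustration of the communication mechanism only, with invented parameters.

```python
# Highly simplified sketch of the communication component of the CCC model:
# agents meet in random pairs and pass items (stand-ins for evidence and
# hypotheses) to each other with some probability. All parameters are invented.
import random

random.seed(0)
ITEMS = [f"item{i}" for i in range(10)]
EXCHANGE_PROB = 0.5                        # chance an item is passed on per meeting

# Each agent starts with a different partial view of the available information.
agents = [set(random.sample(ITEMS, 4)) for _ in range(8)]

def meet(a, b):
    """A pairwise meeting: each agent may pass each of its items to the other."""
    for item in list(a):
        if random.random() < EXCHANGE_PROB:
            b.add(item)
    for item in list(b):
        if random.random() < EXCHANGE_PROB:
            a.add(item)

meetings = 0
while len(set(map(frozenset, agents))) > 1:  # until everyone holds the same items
    i, j = random.sample(range(len(agents)), 2)
    meet(agents[i], agents[j])
    meetings += 1

print(f"Consensus on {len(agents[0])} items after {meetings} meetings")
```

Lowering EXCHANGE_PROB or adding more agents increases the number of meetings needed before the simulated community converges, which is the kind of variation explored in the computational experiments mentioned above.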
The CCC model shows how epistemic group rationality can arise
in explanation agents who communicate with each other, but it
tells us nothing about practical group rationality in science.
One possibility would be to attempt to extend the probabilistic-utility
model of individual practical reason. On this model, each scientist
makes practical decisions about research strategy based on calculations
concerning the expected utility of different courses of action.
Research diversity arises because different scientists attach
different utilities to various experimental and theoretical projects.
For reasons already given, I would prefer to extend the explanatory-emotional
model described in the previous section.
The extension arises naturally from the CCC model just described,
except that in large, diverse communities we should not expect
the same degree of practical consensus as there is of epistemic
consensus, for reasons given below. For the moment, let us
focus on particular research groups rather than on whole scientific
communities. At this level, we can find a kind of local consensus
that arises because of emotional coherence and communication.
The characteristics of the group include the following:
1. Each scientist is an explanation agent with evidence, hypotheses,
and the ability to accept and reject them on the basis of explanatory
coherence.
2. In addition, each scientist is an emotional agent with actions,
goals, valences and the ability to make decisions on the basis
of emotional coherence.
3. Each scientist can communicate evidence and hypotheses with
other scientists.
4. Each scientist can, at least sometimes, communicate actions,
goals, and valences to other scientists.
5. As the result of cognitive and emotional communication, consensus
is sometimes reached about what to believe and also about what
to do.
The hard part to implement is the component of (4) that involves
valences. It is easy to extend the CCC model of consensus to
include emotional coherence simply by allowing actions, goals,
and valences to be exchanged just like evidence, hypotheses,
and explanations.
In real life, valences are not so easily exchanged as verbal information
about actions, goals, and what actions accomplish which goals.
Just hearing someone say that they really care about something
does not suffice to make you care about it too, nor should it,
because their goals and valences may be orthogonal or even antagonistic
to yours. So in a computational model of emotional consensus
the likelihood of exchange of goals and valences in any meeting
should be much lower than the likelihood of exchange of hypotheses,
evidence, and actions.
Still, in real-life decision making involving scientists and other
groups such as corporate executives, emotional consensus is sometimes
reached. What are the mechanisms of valence exchange, that is,
how do people pass their emotional values on to other people?
Two relevant social mechanisms are emotional contagion and
attachment-based learning. Emotional contagion occurs when
person A expresses an emotion, person B unconsciously mimics A's
facial and bodily expressions, and then begins to acquire the
same emotion (Hatfield, Cacioppo, and Rapson, 1994). For example,
if a group member enthusiastically presents a research strategy,
then the enthusiasm may be conveyed through both cognitive and
emotional means to other members of the group. The cognitive
part is that the other group members become aware of possible
actions and their potential good consequences, and the emotional
part is conveyed by the facial expressions and gestures of the
enthusiast, so that the positive valence felt by one person spreads
to the whole group. Negative valence can also spread, not only
through a critic's pointing out drawbacks of a proposed action and
more promising alternatives, but also by contagion of the critic's
negative facial and bodily expressions.
Another social mechanism for valence exchange is what Minsky (2001)
calls attachment-based learning. Minsky points out that cognitive
science has developed good theories of how people use goals to
generate sub-goals, but has had little to say about how people
acquire their basic goals. Similarly, economists employing the
expected utility model of decision making take preferences as
given, just as many philosophers who hold a belief-desire model
of rationality take desires as given.
Minsky suggests that basic goals arise in children as the result of praise from people to whom the children are emotionally attached. For example, when young children share their toys with their playmates, they often receive praise from their parents or other caregivers. The parents have positive valence for the act of sharing, and the children may also acquire a positive emotional attitude toward sharing as the result of seeing that it is something cared about by people whom they care about and who care about them. It is not just that sharing becomes a sub-goal to accomplish the goal of getting praised by parents; rather, being kind to playmates becomes an internalized goal that has intrinsic emotional value to the children.
I conjecture that attachment-based learning also occurs in science
and other contexts of group decision making. If your supervisor
is not just a boss but a mentor, then you may form an emotional
attachment that makes you particularly responsive to what the
supervisor praises and criticizes. This makes possible the attachment-based
transmission of positive values such as zeal for truth and understanding,
or, more locally, for integrity in dealing with data and explanations.
Notice that both emotional contagion and attachment-based learning
require quite intense interpersonal contacts that will not be
achieved in a large lecture hall or video conference room, let
alone through reading a published article. The distinguished
social psychologist, Richard Nisbett, told me that he learned
how to do good experiments through discussions with his supervisor,
Stanley Schacter. Nisbett said (personal communication, Feb. 23,
2001) "He let me know how good my idea was by grunts: non-committal
(hmmm...), clearly disapproving (ahnn...) or (very rarely) approving
(ah!)." These grunts and their attendant facial expressions
conveyed emotional information that shaped the valences of the
budding researcher.
Accordingly, when I extend my CCC model of consensus as coherence
plus communication to include group decisions, I will include
two new variables to determine the degree of valence transmission
between agents: degree of personal contact, and degree of attachment.
If personal contact and attachment are high, then the likelihood
of valence transmission will be much greater than in the ordinary
case of scientific communication, in which the rate of successful
verbal transmission of hypotheses, evidence, and actions is much
higher than the rate of transmission of valences.
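As a placeholder for that extension, the two variables might be represented as follows. The functional form and the base rates are assumptions of mine, intended only to show how degree of personal contact and degree of attachment could modulate valence transmission relative to ordinary verbal transmission.

```python
# Hypothetical sketch of how degree of personal contact and degree of
# attachment might scale the probability of valence transmission. The
# functional form and base rates are assumptions, not part of the CCC model.

VERBAL_BASE = 0.6    # assumed chance of passing on a hypothesis, datum, or action
VALENCE_BASE = 0.05  # assumed much lower chance of passing on a valence

def valence_transmission_prob(contact, attachment):
    """contact and attachment range over [0, 1]; high values raise the
    probability toward, but never above, the verbal base rate."""
    boost = contact * attachment
    return VALENCE_BASE + boost * (VERBAL_BASE - VALENCE_BASE)

print(valence_transmission_prob(contact=0.1, attachment=0.1))  # ~0.06: distant colleagues
print(valence_transmission_prob(contact=0.9, attachment=0.9))  # ~0.50: mentor and student
```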
There may, however, be quasi-verbal mechanisms for valence transfer.
Thagard and Shelley (2001) discuss emotional analogies whose
purpose is to transfer valences as well as verbal information.
For example, if a scientist presents a research project as
analogous to a scientific triumph such as the asteroid theory
of dinosaur extinction, then listeners may transfer the positive
value they feel for the asteroid theory to the proposed research
project. Alternatively, if a project is analogous to the cold
fusion debacle, then the negative valence attached to that case
may be projected onto the proposed project. Thus emotional analogies
are a third mechanism, in addition to emotional contagion and
attachment-based learning, for transfer of valences. All three
mechanisms may interact with each other, for example when a mentor
uses an emotional analogy and facial expressions to convey values
to a protégée. Alternatively, the mentor may function
as a role model, providing a different kind of emotional analogy:
students who see themselves as analogous to their role models
may tend to transfer to themselves some of the motivational and
emotional characteristics of their models.
I hope it is obvious from my discussion of practical group rationality
in science why science need not succumb to the tragedy of consensus,
especially with respect to practical rationality. Communication
between scientists is imperfect, both with respect to cognitive
information such as hypotheses and evidence and especially with
respect to emotional valences for particular approaches. Scientists
may get together for consensus conferences such as the ones sponsored
by the National Institutes of Health that regularly deal with
controversial issues in medical treatment (see Thagard, 1999,
ch. 12 for a discussion). But not all scientists in a community
attend such conferences or read the publications that emanate
from them. Moreover, the kinds of close interpersonal contact
needed for communication of values by emotional contagion and
attachment-based learning occur only in small subsets of the whole
scientific community. Hence a scientific community need not suffer
the dearth of practical diversity that would hinder accomplishment
of the general scientific aims of truth, explanation, and technological
application. Solomon (2001) provides a rich discussion of consensus
and dissent in science.
Is Science Rational?
A person or group is rational to the extent that its practices
enable it to accomplish its legitimate goals. At the beginning
of this paper, I argued that the legitimate goals of science are
truth, explanation, and technologies that promote human welfare.
Do scientific individuals and groups function in ways that further
these goals, or do they actually pursue other personal and social
aims that are orthogonal or even antagonistic to the legitimate
goals? I will now consider several psychological and sociological
challenges to the rationality of science.
Psychological challenges can be based on either cold cognition,
which involves processes such as problem solving and reasoning,
or hot cognition, which includes emotional factors such as motivation.
The cold-cognition challenge to scientific rationality would
be that people's cognitive processes are such that it is difficult
or impossible for them to reason in ways that promote the aims
of science. If scientific rationality required people to be
falsification agents or probabilistic agents, then the cold-cognition
challenge would be a serious threat: I cited earlier some of
the experimental and historical data that suggest that probabilistic
reasoning and falsification are not natural aspects of human thinking.
In contrast, there is evidence that people can use explanatory
coherence successfully in social judgments (Read and Marcus-Newhall,
1993).
One might argue that there is evidence that people are confirmation
agents, and not very good ones, in that they show a confirmation
bias, looking excessively to confirm their hypotheses rather
than to falsify them (Klayman and Ha, 1987). However, the psychological
experiments that find confirmation biases involve reasoning tasks
that are much simpler than those performed by actual scientists.
Typically, non-scientist subjects are asked to form generalizations
from observable data, for example in seeing patterns in numerical
sequences. The generalization tasks of real scientists are
more complex, in that data interpretation requires determining
whether apparent patterns in the data are real or just artifacts
of the experimental design. If scientists did not try hard
to get their experiments to confirm their hypotheses, the experiments
would rarely turn out to be interesting. Notably, trying hard
to confirm is not always sufficient to produce confirming results,
so scientists sometimes have falsification thrust upon them.
But their bias toward finding confirmations is not inherently
destructive to scientific rationality.
A more serious challenge to the rationality of science comes from
hot cognition. Like all people, scientists are emotional beings,
and their emotions may lead to distortions in their scientific
works if they are attached to values that are inimical to the
legitimate aims of science. Here are some kinds of cases where
emotions have distorted scientific practice:
1. Scientists sometimes advance their own careers by fabricating
or distorting data in order to support their own hypotheses.
In such cases, they have greater motivation to enhance their
own careers than to pursue truth, explanation, or welfare.
2. Scientists sometimes block the publication of theories that
challenge their own by fabricating problems with submitted articles
or grant proposals that they have been asked to review.
3. Without being fraudulent or intentionally evil, scientists
sometimes unintentionally deceive themselves into thinking that
their hypotheses and data are better than those of their rivals.
4. Scientists sometimes further their careers by going along
with politically mandated views, for example the Nazi rejection
of Einsteinian physics and the Soviet advocacy of Lysenko's genetic
theories.
Cases like these show indubitably that science is not always rational.
Some sociologists such as Latour (1987) have depicted scientists
as largely concerned with gaining power through the mobilization
of allies and resources.
It is important to recognize, however, that the natural emotionality
of scientists is not in itself a cause of irrationality. As
I have documented elsewhere, scientists are often motivated by emotions
that further the goals of science, such as curiosity, the joy
of discovery, and appreciation of the beauty of highly coherent
theories (Thagard, forthcoming-b). Given the modest incentive
structure of science, a passion for finding things out is a much
more powerful motivator of the intense work required for scientific
success than are extrinsic rewards such as money and fame. Thus
hot cognition can promote scientific rationality, not just deviations
from it. The mobilization of resources and allies can be in the
direct or indirect service of the aims of science, not just the
personal aims of individual scientists.
A useful response to the question "Is science rational?"
is: "Compared to what?" Are scientists as individuals
more adept than non-scientists at fostering truth, explanation,
and human welfare? The history of science and technology over
the past two hundred years strongly suggests that the answer is
yes. We have acquired very broadly explanatory theories such
as electromagnetism, relativity, quantum theory, evolution, germ
theory, and genetics. Thousands of scientific journals constitute
an astonishing accumulation of truths that ordinary life would
never have allowed. Moreover, technologies such as electronics
and pharmaceuticals have enriched and lengthened human lives.
So the occasional irrationality of individual scientists and
groups is compatible with an overall judgment that science is
in general a highly rational enterprise.
In recent decades, the most aggressive challenge to the ideal
of scientists as rational agents has come from sociologists
and historians who claim that scientific knowledge is "socially
constructed." Obviously, the development of scientific
knowledge is a social as well as an individual process, but the
social construction thesis is usually intended to make the much
stronger claim that truth and rationality have nothing to do with
the development of science. My own view is that an integrated
psychological/sociological view of the development of scientific
knowledge is perfectly compatible with scientific rationality
involving the frequently successful pursuit of truth, explanation,
and human welfare (Thagard, 1999).
Crucially, however, the assessment of scientific rationality
needs to employ models of individual reasoning and group practices
that reflect the thought processes and methodologies of real scientists.
Models based on formal logic and probability theory have tended
to be so remote from scientific practice that they encourage the
inference that scientists are irrational. In contrast, psychologically
realistic models based on explanatory and emotional coherence,
along with socially realistic models of consensus, can help to
illuminate the often impressive rationality of the enterprise
of science.
References
Alvarez, W. (1998). T. rex and the crater of doom.
New York: Vintage.
Carnap, R. (1950). Logical foundations of probability.
Chicago: University of Chicago Press.
Earman, J. (1992). Bayes or bust? Cambridge, MA: MIT Press.
Glymour, C. (1980). Theory and evidence. Princeton: Princeton
University Press.
Goldman, A. (1999). Knowledge in a social world. Oxford:
Oxford University Press.
Hardin, G. (1968). The tragedy of the commons. Science, 162,
1243-1248.
Harman, G. (1986). Change in view: Principles of reasoning.
Cambridge, MA: MIT Press/Bradford Books.
Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional
contagion. Cambridge: Cambridge University Press.
Hempel, C. G. (1965). Aspects of scientific explanation.
New York: The Free Press.
Howson, C., & Urbach, P. (1989). Scientific reasoning:
The Bayesian approach. La Salle, IL: Open Court.
Hull, D. (1989). Science as a process. Chicago: University
of Chicago Press.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment
under uncertainty: Heuristics and biases. New York: Cambridge
University Press.
Kitcher, P. (1993). The advancement of science. Oxford:
Oxford University Press.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation,
and information in hypothesis testing. Psychological
Review, 94, 211-228.
Kuhn, T. (1970). The structure of scientific revolutions (2nd ed.).
Chicago: University of Chicago Press.
Lakatos, I. (1970). Falsification and the methodology of scientific
research programs. In I. Lakatos & A. Musgrave (Eds.), Criticism
and the growth of knowledge (pp. 91-195). Cambridge: Cambridge
University Press.
Latour, B. (1987). Science in action: How to follow scientists
and engineers through society. Cambridge, MA: Harvard University
Press.
Lipton, P. (1991). Inference to the best explanation. London:
Routledge.
Maher, P. (1993). Betting on theories. Cambridge: Cambridge
University Press.
Mayo, D. (1996). Error and the growth of experimental knowledge.
Chicago: University of Chicago Press.
Minsky, M. (2001). The emotion machine. http://www.media.mit.edu/people/minsky/
Nussbaum, M. (2001). Upheavals of thought. Cambridge: Cambridge
University Press.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems.
San Mateo, CA: Morgan Kaufmann.
Popper, K. (1959). The logic of scientific discovery. London:
Hutchinson.
Psillos, S. (1999). Scientific realism: How science tracks
the truth. London: Routledge.
Read, S., & Marcus-Newhall, A. (1993). The role of explanatory
coherence in the construction of social explanations. Journal
of Personality and Social Psychology, 65, 429-447.
Salmon, W. (1984). Scientific explanation and the causal structure
of the world. Princeton: Princeton University Press.
Solomon, M. (2001). Social empiricism. Cambridge, MA: MIT
Press.
Thagard, P. (1988). Computational philosophy of science.
Cambridge, MA: MIT Press/Bradford Books.
Thagard, P. (1991). The dinosaur debate: Explanatory coherence
and the problem of competing hypotheses. In J. Pollock & R.
Cummins (Eds.), Philosophy and AI: Essays at the Interface.
(pp. 279-300). Cambridge, MA: MIT Press/Bradford Books.
Thagard, P. (1992). Conceptual revolutions. Princeton:
Princeton University Press.
Thagard, P. (1999). How scientists explain disease. Princeton:
Princeton University Press.
Thagard, P. (2000). Coherence in thought and action. Cambridge,
MA: MIT Press.
Thagard, P. (2001). How to make decisions: Coherence, emotion,
and practical inference. In E. Millgram (Ed.), Varieties of
practical inference (pp. 355-371). Cambridge, MA: MIT Press.
Thagard, P. (forthcoming-a). Curing cancer? Patrick Lee's path
to the reovirus treatment. International Studies in the Philosophy
of Science.
Thagard, P. (forthcoming-b). The passionate scientist: Emotion
in scientific cognition. In P. Carruthers & S. Stich &
M. Siegal (Eds.), The cognitive basis of science. Cambridge:
Cambridge University Press.
Thagard, P., & Shelley, C. P. (2001). Emotional analogies
and analogical inference. In D. Gentner & K. H. Holyoak &
B. K. Kokinov (Eds.), The analogical mind: Perspectives from
cognitive science (pp. 335-362). Cambridge, MA: MIT Press.
Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional
representation of subjective probability. Psychological Review,
101, 547-567.
van Fraassen, B. (1980). The scientific image. Oxford:
Clarendon Press.