Computation and the Philosophy of Science
Paul Thagard
Philosophy Department
University of Waterloo
Waterloo, Ontario, N2L 3G1
pthagard@watarts.uwaterloo.ca
What do philosophers do? Twenty years ago, one might have
heard such answers to this question as "analyze concepts"
or "evaluate arguments". The answer "write computer
programs" would have inspired a blank stare, and even a decade
ago I wrote that computational philosophy of science might sound
like the most self-contradictory enterprise in philosophy since
business ethics (Thagard 1988). But computer use has since
become much more common in philosophy, and computational modeling
can be seen as a useful addition to philosophical method, not
as the abandonment of it. I will try in this paper to summarize
how computational models are making substantial contributions
to the philosophy of science.
If philosophy consisted primarily of conceptual analysis, or mental
self-examination, or generation of a priori truths, then computer
modeling would indeed be alien to the enterprise. But I prefer
a different picture of philosophy, as primarily concerned with
producing and evaluating theories, for example theories
of knowledge (epistemology), reality (metaphysics), and right
and wrong (ethics). The primary function of a theory of knowledge
is to explain how knowledge grows, which requires both describing
the structure of knowledge and the inferential procedures by which
knowledge can be increased. Although epistemologists often focus
on mundane knowledge, the most impressive knowledge gained by
human beings comes through the operation of science: experimentation,
systematic observation, and theorizing concerning the experimental
and observational results. Hence at the core of epistemology
is the need to understand the structure and growth of scientific
knowledge, a project for which computational models can be very
useful.
In attempting to understand the structure and development of scientific
knowledge, philosophers of science have traditionally employed
a number of methods such as logical analysis and historical case
studies. Computational modeling provides an additional method
that has already advanced understanding of such traditional problems
in the philosophy of science as theory evaluation and scientific
discovery. This paper will review the progress made on such issues
by three distinct computational approaches: cognitive modeling,
engineering artificial intelligence, and theory of computation.
The aim of cognitive modeling is to simulate aspects of human
thinking; for philosophy of science, this becomes the aim to simulate
the thinking that scientists use in the construction and evaluation
of hypotheses. Much artificial intelligence research, however,
is not concerned with modeling human thinking, but with constructing
algorithms that perform well on difficult tasks independently
of whether the algorithms correspond to human thinking. Similarly,
the engineering AI approach to philosophy of science seeks to
develop computational models of discovery and evaluation independently
of questions of human psychology. Computational philosophy of
science has thus developed two streams that mirror those in
artificial intelligence research, one concerned with modeling
human performance and the other with machine intelligence. A
third stream of research uses abstract mathematical analysis and
applies the theory of computation to problems in the philosophy
of science.
1. Cognitive Modeling
Cognitive science is the interdisciplinary study of mind,
embracing philosophy, psychology, artificial intelligence, neuroscience,
linguistics, and anthropology. From its modern origins in the
1950s, cognitive science has primarily worked with the computational-representational
understanding of mind: we can understand human thinking by postulating
mental representations akin to computational data structures and
mental procedures akin to algorithms (Thagard 1996). The cognitive-modeling
stream of computational philosophy of science views topics such
as discovery and evaluation as open to investigation using the
same techniques employed in cognitive science. To understand
how scientists discover and evaluate hypotheses, we can develop
computer models that employ data structures and algorithms intended
to be analogous to human mental representations and procedures.
The cognitive modeling stream of computational philosophy of
science can be viewed as part of naturalistic epistemology, which
sees the study of knowledge as closely tied to human psychology,
not as an abstract logical exercise.
Discovery
In the 1960s and 1970s, philosophers of science discussed
whether there is a "logic of discovery" and whether
discovery (as opposed to evaluation) is a legitimate topic of
philosophical (as opposed to psychological) investigation. In
the 1980s, these debates were superseded by computational research
on discovery that showed how actual cases of scientific discovery
can be modeled algorithmically. Although the models that have
been produced to date clearly fall well short of simulating all
the thought processes of creative scientists, they provide substantial
insights into how scientific thinking can be viewed computationally.
Because of the enormous number of possible solutions involved
in any scientific problem, the algorithms involved in scientific
discovery cannot guarantee that optimal discoveries will be made
from the input provided. Instead, computer models of discovery employ
heuristics, approximate methods for attempting to cut through
data complexity and find patterns. The pioneering step in this
direction was the BACON project of Pat Langley, Herbert Simon
and their colleagues (Langley et al. 1987). BACON is a program
that uses heuristics to discover mathematical laws from quantitative
data, for example discovering Kepler's third law of planetary
motion. Although BACON has been criticized for assuming an over-simple
account of human thinking, Qin and Simon (1990) found that human
subjects could generate laws from numerical data in ways quite
similar to BACON.
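The flavor of BACON-style heuristic search can be conveyed by a toy sketch. The data below are the standard figures for the four inner planets; the exhaustive exponent search and the tolerance are my simplifying assumptions, not Langley et al.'s algorithm:

```python
# A toy sketch in the spirit of BACON (not Langley et al.'s program): search
# small integer exponents for a term r^a / T^b that is invariant across the
# observations, i.e. a candidate quantitative law.

def is_constant(vals, tol=0.01):
    """Treat a term as lawlike if it varies by less than tol around its mean."""
    mean = sum(vals) / len(vals)
    return all(abs(v - mean) <= tol * abs(mean) for v in vals)

def find_invariant(r, T, max_exp=3):
    for a in range(1, max_exp + 1):
        for b in range(1, max_exp + 1):
            term = [ri**a / Ti**b for ri, Ti in zip(r, T)]
            if is_constant(term):
                return a, b          # r^a / T^b is (nearly) constant
    return None

# Orbital radius (AU) and period (years) for Mercury, Venus, Earth, Mars.
r = [0.387, 0.723, 1.000, 1.524]
T = [0.241, 0.615, 1.000, 1.881]
print(find_invariant(r, T))          # (3, 2): r^3/T^2 constant, Kepler's third law
```

BACON itself proceeds by data-driven heuristics, noticing monotonic trends and introducing ratio and product terms, rather than by exhaustive exponent search, but on this example the result is the same.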
Scientific discovery produces qualitative as well as quantitative
laws. Kulkarni and Simon (1988) produced a computational model
of Krebs' discovery of the urea cycle. Their program, KEKADA,
reacts to anomalies, formulates explanations, and carries out
simulated experiments in much the way described in Hans Krebs's
laboratory notebooks.
Not all scientific discoveries are as data-driven as the ones
so far discussed. They often involve the generation of new concepts
and hypotheses that are intended to refer to non-observable entities.
Thagard (1988) developed computational models of conceptual combination,
in which new theoretical concepts such as sound wave
are generated, and of abduction, in which new hypotheses
are generated to explain puzzling phenomena. Darden (1990, this
volume) has investigated computationally how theories that have
empirical problems can be repaired.
One of the most important cognitive mechanisms for discovery is
analogy, since scientists often make discoveries by adapting previous
knowledge to a new problem. Analogy played a role in some of
the most important discoveries ever made, such as Darwin's theory
of evolution and Maxwell's theory of electromagnetism. During
the 1980s, the study of analogy went well beyond previous philosophical
accounts through the development of powerful computational models
of how analogs are retrieved from memory and mapped to current
problems to provide solutions. Falkenhainer, Forbus, and Gentner
(1989) produced SME, the Structure Mapping Engine, and this program
was used to model analogical explanations of evaporation and osmosis
(Falkenhainer 1990). Holyoak and Thagard (1989) used different
computational methods to produce ACME, the Analogical Constraint
Mapping Engine, which was generalized into a theory of analogical
thinking that applies to scientific as well as everyday thinking
(Holyoak and Thagard 1995).
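The core idea of mapping by structural correspondence can be sketched with the classic solar-system/atom analogy. The relation names and the exhaustive search over correspondences are illustrative assumptions; SME and ACME use far more sophisticated algorithms:

```python
from itertools import permutations

# A bare-bones structure-mapping sketch: choose the object correspondence
# that preserves the most relations between source and target analogs.

def score_mapping(mapping, source_rels, target_rels):
    # A source relation (pred, x, y) is preserved when (pred, m[x], m[y])
    # holds in the target.
    return sum(1 for p, x, y in source_rels
               if (p, mapping[x], mapping[y]) in target_rels)

def best_mapping(source_objs, target_objs, source_rels, target_rels):
    target_rels = set(target_rels)
    best, best_score = None, -1
    for perm in permutations(target_objs, len(source_objs)):
        m = dict(zip(source_objs, perm))
        s = score_mapping(m, source_rels, target_rels)
        if s > best_score:
            best, best_score = m, s
    return best, best_score

src_rels = [("attracts", "sun", "planet"),
            ("revolves_around", "planet", "sun"),
            ("more_massive", "sun", "planet")]
tgt_rels = [("attracts", "nucleus", "electron"),
            ("revolves_around", "electron", "nucleus")]
m, s = best_mapping(["sun", "planet"], ["nucleus", "electron"], src_rels, tgt_rels)
print(m, s)   # sun -> nucleus, planet -> electron, preserving 2 relations
```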
Space does not permit further discussion of computational models
of human discovery, but the above research projects illustrate
how thought processes such as those involved in numerical law
generation, theoretical concept formation, and analogy can be
understood computationally. Examples of non-psychological investigations
of scientific discovery are described in sections 2 and 3.
Evaluation
How scientific hypotheses are evaluated has been a central
problem in philosophy of science since the nineteenth century
debates between John Stuart Mill and William Whewell. Work in
the logical positivist tradition has centered on the concept of
confirmation, asking what it is for hypotheses to be confirmed
by observations. More recently, various philosophers of science
have taken a Bayesian approach to hypothesis evaluation, using
probability theory to analyze scientific reasoning. In contrast,
I have developed an approach to hypothesis evaluation that combines
philosophical ideas about explanatory coherence with a connectionist
(neural network) computational model.
Coherence theories of knowledge, ethics, and even truth have been
popular among philosophers, but the notion of coherence is usually
left rather vague. Hence coherence theories look unrigorous compared
to theories couched more formally using deductive logic or probability
theory. But connectionist models show how coherence ideas can
be precisely and efficiently implemented. Since the mid-1980s,
connectionist (neural network, PDP) models have been very influential
in cognitive science. Loosely analogous to the operation of the
brain, such models have numerous units that are roughly like neurons,
connected to each other by excitatory and inhibitory links of
varying strengths. Each unit has an activation value that is
affected by the activations of the units to which it is linked,
and learning algorithms are available for adjusting the strengths
on links in response to experience.
My connectionist computational model of explanatory coherence,
ECHO, uses units to represent propositions that can be hypotheses
or descriptions of evidence, and links between units to represent
coherence relations. For example, if a hypothesis explains a
piece of evidence, then ECHO places an excitatory link between
the unit representing the hypothesis and the unit representing
the evidence. If two hypotheses are contradictory or competing,
then ECHO places an inhibitory link between the units representing
the two hypotheses. Repeatedly adjusting the activations of the
units based on their links with other units results in a resting
state in which some units are on (hypotheses accepted) and other
units are off (hypotheses rejected). ECHO has been used to model
many important cases in the history of science (Nowak and Thagard
1992a, 1992b; Thagard 1991, 1992, in press). Eliasmith and Thagard
(in press) argue that ECHO provides a better account of hypothesis
evaluation than available Bayesian accounts.
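A miniature ECHO-style network can illustrate the settling process just described. The update rule and parameter values below are simplified assumptions for illustration, not Thagard's published settings:

```python
# Units stand for propositions; excitatory links join hypotheses to what
# they explain, inhibitory links join rival hypotheses. A clamped "SPECIAL"
# unit feeds activation to the evidence units.

def settle(units, links, evidence, iters=200, decay=0.05, excit=0.04, inhib=-0.06):
    act = {u: 0.01 for u in units}        # small initial activations
    act["SPECIAL"] = 1.0                  # clamped source of support for evidence
    weights = {}
    for p, q, kind in links:
        w = excit if kind == "explains" else inhib
        weights[(p, q)] = weights[(q, p)] = w
    for e in evidence:
        weights[("SPECIAL", e)] = weights[(e, "SPECIAL")] = excit
    for _ in range(iters):
        new = {}
        for u in units:
            net = sum(w * act[v] for (x, v), w in weights.items() if x == u)
            if net > 0:                   # push toward +1, with decay toward 0
                a = act[u] * (1 - decay) + net * (1 - act[u])
            else:                         # push toward -1
                a = act[u] * (1 - decay) + net * (act[u] + 1)
            new[u] = max(-1.0, min(1.0, a))
        act.update(new)
    return act   # accepted propositions settle positive, rejected ones negative

# H1 explains both pieces of evidence; its rival H2 explains only E1.
units = ["H1", "H2", "E1", "E2"]
links = [("H1", "E1", "explains"), ("H1", "E2", "explains"),
         ("H2", "E1", "explains"), ("H1", "H2", "compete")]
act = settle(units, links, evidence=["E1", "E2"])
print(sorted(act.items()))   # H1 settles positive, H2 negative
```

Running the network, the hypothesis that explains more of the evidence drives its rival's activation below zero, which is the connectionist realization of accepting one explanation and rejecting the other.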
A different connectionist account of inference to best explanation
is given by Churchland (1989). He conjectures that abductive
discovery and inference to the best explanation can both be understood
in terms of prototype activation in distributed connectionist
models, i.e., ones in which concepts and hypotheses are not represented
by individual units but by patterns of activation across multiple
units. There is considerable psychological evidence that distributed
representations and prototypes are important in human cognition,
but no one has yet produced a running computational model of hypothesis
evaluation using these ideas. Non-connectionist models of hypothesis
evaluation, including probabilistic ones, are discussed in the
next section.
2. Engineering AI
As the references to my own work in the last section indicate,
I pursue the cognitive modeling approach to computational philosophy
of science, allying philosophy of science with cognitive science
and naturalistic epistemology. But much valuable work in AI and
philosophy has been done that makes no claims to psychological
plausibility. One can set out to build a scientist without trying
to reverse engineer a human scientist. The engineering AI approach
to computational philosophy of science is allied, not with naturalistic,
psychologistic epistemology, but with what has been called "android
epistemology", the epistemology of machines that may or may
not be built like humans (Ford, Glymour, and Hayes 1995). This
approach is particularly useful when it exploits such differences
between digital computers and humans as computers' capacity for
very fast searches to perform tasks that human scientists cannot
do very well.
Discovery
One goal of engineering AI is to produce programs that can
make discoveries that have eluded humans. Bruce Buchanan, who
was originally trained as a philosopher before moving into AI
research, reviewed over a dozen AI programs that formulate hypotheses
to explain empirical data (Buchanan 1983). One of the earliest
and most impressive programs was DENDRAL which performed chemical
analysis. Given spectroscopic data from an unknown organic chemical
sample, it determined the molecular structure of the sample (Lindsay
et al. 1980). The program META-DENDRAL pushed the discovery
task one step farther back: given a collection of analytic data
from a mass spectrometer, it discovered rules explaining the fragmentation
behavior of chemical samples. A more recent program for chemical
discovery is MECHEM, which automates the task of finding mechanisms
for chemical reactions: given experimental evidence about a
reaction, the program searches for the simplest mechanism
consistent with theory and experiment (Valdes-Perez 1994).
Discovery programs have also been written for problems in biology,
physics, and other scientific domains. In order to model biologists'
discoveries concerning gene regulation in bacteria, Karp (1990)
wrote a pair of programs, GENSIM and HYPGENE. GENSIM was used
to represent a theory of bacterial gene regulation, and HYPGENE
formulates hypotheses that improve the predictive power of GENSIM
theories given experimental data. More recently, he has shifted
from modeling historical discoveries to the attempt to write programs
that make original discoveries from large scientific databases
such as ones containing information about enzymes, proteins,
and metabolic pathways (Karp and Mavrovouniotis 1994). Cheeseman
(1990) used a program that applied Bayesian probability theory
to discover previously unsuspected fine structure in the infrared
spectra of stars. Machine learning techniques are also relevant
to social science research, particularly the problem of inferring
causal models from social data. The TETRAD program looks at statistical
data in fields such as industrial development and voting behavior
and builds causal models in the form of a directed graph of hypothetical
causal relationships (Glymour et al., 1987).
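The statistical engine behind such causal-search programs can be glimpsed in a toy conditional-independence test. The numbers and the bare partial-correlation check are illustrative assumptions; TETRAD's actual procedures are far more elaborate:

```python
import math

# Causal search deletes the direct x-y edge from its graph when x and y are
# independent conditional on some other variable z; with linear data this can
# be checked by the partial correlation of x and y controlling for z.

def partial_corr(r_xy, r_xz, r_yz):
    """Correlation between x and y after controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If x and y covary only because of a common cause z, then r_xy = r_xz * r_yz,
# so the partial correlation vanishes and the x-y edge is removed.
r_xz, r_yz = 0.8, 0.7
r_xy = r_xz * r_yz
print(round(partial_corr(r_xy, r_xz, r_yz), 6))   # 0.0
```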
One of the fastest growing areas of artificial intelligence is
"data mining", in which machine learning techniques
are used to discover regularities in large computer data bases
such as the terabytes of image data collected by astronomical
surveys (Fayyad, Piatetsky-Shapiro, and Smyth 1996). Data mining
is being applied with commercial success by companies that wish
to learn more about their operations, and similar machine learning
techniques may have applications to large scientific data bases
such as those being produced by the human genome project.
Evaluation
The topic of how scientific theories can be evaluated can
also be discussed from a computational perspective. Many philosophers
of science (e.g. Howson and Urbach 1989) adopt a Bayesian approach
to questions of hypothesis evaluation, attempting to use probability
theory to describe and prescribe how scientific theories are assessed.
But computational investigations of probabilistic reasoning must
deal with important problems involving tractability that are usually
ignored by philosophers. A full-blown probabilistic approach
to a problem of scientific inference would need to establish a
full joint distribution of probabilities for all propositions
representing hypotheses and evidence, which would require 2^n probabilities
for n hypotheses, quickly exhausting the storage and processing
capacities of any computer. Ingenious methods have been developed
by computer scientists to avoid this problem by using causal networks
to restrict the number of probabilities required and to simplify
the processing involved (Pearl 1988, Neapolitan 1990). Surprisingly,
such methods have not been explored by probabilistic philosophers
of science who have tended to ignore the substantial problem of
the intractability of Bayesian algorithms.
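The arithmetic of the tractability problem is easy to exhibit. The counts below assume binary propositions; the parent bound is the standard device that makes causal networks tractable:

```python
# A full joint distribution over n binary propositions needs 2**n - 1
# independent probabilities, while a causal (Bayesian) network whose nodes
# each have at most k parents needs at most n * 2**k conditional
# probabilities, thanks to the network's factorization of the joint.

def full_joint_size(n):
    return 2**n - 1

def network_size(n, max_parents):
    return n * 2**max_parents

for n in (10, 20, 30):
    print(n, full_joint_size(n), network_size(n, max_parents=3))
# Even at n = 30 the network needs only 240 numbers, while the full joint
# needs over a billion.
```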
Theory evaluation in the context of medical reasoning has been
investigated by a group of artificial intelligence researchers
at Ohio State University (Josephson and Josephson 1994). They
developed a knowledge-based system called RED that uses data concerning
a patient's blood sample to infer what red-cell antibodies are
present in the patient. RED performs an automated version of
inference to the best explanation, using heuristics to form a
composite hypothesis concerning what antibodies are present in
a sample. Interestingly, Johnson and Chen (1996) compared the
performance of RED with the performance of my explanatory coherence
program ECHO on a set of 48 cases interpreted by clinical experts.
Whereas RED matched the experts' judgments in 58% of the cases,
ECHO matched them in 73% of the cases. Hence although the
engineering AI approach to scientific discovery has some evident
advantages over the cognitive modeling approach for some problems,
such as mining hypotheses from large databases, the cognitive
modeling approach exemplified by ECHO has not yet been surpassed
by a probabilistic or other engineering program.
3. Theory of Computation
Both the cognitive modeling and engineering AI approaches
to philosophy of science involve writing and experimenting with
running computer programs. But it is also possible to take a
more theoretical approach to computational issues in the philosophy
of science, exploiting results in the theory of computation to
reach conclusions about processes of discovery and evaluation.
Discovery
Scientific discovery can be viewed as a problem in formal
learning theory, in which the goal is to identify a language given
a string of inputs (Gold 1967). Analogously, a scientist can
be thought of as a function that takes as input a sequence of
formulas representing observations of the environment and produces
as output a set of formulas that represent the structure of the
world (Kelly 1996, Kelly and Glymour 1989, Osherson and Weinstein
1989). Although formal learning theory has produced some interesting
theorems, they are limited in their relevance to the philosophy
of science in several respects. Formal learning theory assumes
a fixed language and therefore ignores the conceptual and terminological
creativity that is important to scientific development. In addition,
formal learning theory tends to view hypotheses produced as a
function of input data, rather than as a much more complex function
of the data and the background concepts and theories possessed
by a scientist. Formal learning theory also overemphasizes the
goal of science to produce true descriptions, neglecting the important
role of explanatory theories and hypothetical entities in scientific
progress.
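Gold's paradigm can be conveyed in miniature; the language and input stream below are invented examples. A learner sees a growing stream of strings and conjectures a language after each one; it identifies the language in the limit if its conjectures are eventually always correct. For the class of finite languages, guessing exactly the set of strings seen so far succeeds:

```python
# A minimal identification-in-the-limit learner for finite languages.

def learner(stream):
    seen = set()
    for s in stream:
        seen.add(s)
        yield frozenset(seen)        # the learner's current conjecture

language = {"ab", "ba", "aabb"}
stream = ["ab", "ba", "ab", "aabb", "ba", "aabb"]  # each member appears eventually
guesses = list(learner(stream))
# After the fourth input every member has appeared, so the conjecture is
# locked on the true language from then on.
print(guesses[-1] == frozenset(language))   # True
```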
Evaluation
The theory of computational complexity has provided some interesting
results concerning hypothesis evaluation. If you have n
hypotheses and want to evaluate all the ways in which combinations
of them can be accepted and rejected, you have to consider 2^n
possibilities, an impossibly large number for large n.
Bylander et al. (1991) gave a formal definition of an abduction
problem consisting of a set of data to be explained and a set
of hypotheses to explain them. They then showed that the problem
of picking the best explanation is NP-hard, i.e., it belongs
to a class of problems that computational theorists generally
agree to be intractable, in that the time required to compute
them increases exponentially as the problems grow in size.
Similarly, Thagard and Verbeurgt (1996) generalized explanatory
coherence into a mathematical coherence problem that is NP-hard.
What these results show is that theory evaluation, whether it
is conceived in terms of Bayesian probabilities, heuristic assembly
of hypotheses, or explanatory coherence, must be handled by computational
approximation, not by an exhaustive algorithm. So far, the theoretical
results concerning scientific evaluation have been largely negative,
but they serve to outline the limits within which computational
modeling must work.
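The coherence problem and its exponential cost can be made concrete in a few lines. The particular constraints and the "data priority" bonus for accepting evidence are illustrative assumptions based on Thagard and Verbeurgt's formulation:

```python
from itertools import product

# Brute-force solution of a tiny coherence problem: partition elements into
# accepted and rejected so as to maximize the weight of satisfied
# constraints. The 2**n enumeration below is exactly what the NP-hardness
# result says cannot scale.

def coherence(accept, positive, negative, evidence, data_weight=0.5):
    score = sum(1 for a, b in positive if accept[a] == accept[b])   # same side
    score += sum(1 for a, b in negative if accept[a] != accept[b])  # opposite sides
    # "Data priority": accepting evidence earns a small bonus, breaking the
    # symmetry between a partition and its complement.
    score += sum(data_weight for e in evidence if accept[e])
    return score

def best_partition(elements, positive, negative, evidence):
    best, best_score = None, float("-inf")
    for bits in product([True, False], repeat=len(elements)):  # 2**n candidates
        accept = dict(zip(elements, bits))
        s = coherence(accept, positive, negative, evidence)
        if s > best_score:
            best, best_score = accept, s
    return best, best_score

# H1 explains both pieces of evidence; its rival H2 explains only E1 and is
# contradicted by E2.
elements = ["H1", "H2", "E1", "E2"]
positive = [("H1", "E1"), ("H1", "E2"), ("H2", "E1")]   # explanatory relations
negative = [("H1", "H2"), ("H2", "E2")]                 # incompatibilities
best, score = best_partition(elements, positive, negative, ["E1", "E2"])
print(best, score)   # H1, E1, E2 accepted; H2 rejected
```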
4. What Computation Adds to Philosophy of Science
Almost twenty years ago, Aaron Sloman (1978) published an
audacious book, The Computer Revolution in Philosophy,
which predicted that within a few years any philosopher not familiar
with the main developments of artificial intelligence could fairly
be accused of professional incompetence. Since then, computational
ideas have had a substantial impact on the philosophy of mind,
but a much smaller impact on epistemology and philosophy of science.
Why? One reason, I conjecture, is the kind of training that
most philosophers have, which includes little preparation for
actually doing computational work. Philosophers of mind have
often been able to learn enough about artificial intelligence
to discuss it, but for epistemology and philosophy of science
it is much more useful to perform computations rather than just
to talk about them. To conclude this review, I shall attempt
to summarize what is gained by adding computational modeling
to the philosophical tool kit.
Bringing artificial intelligence into philosophy of science introduces
new conceptual resources for dealing with the structure and growth
of scientific knowledge. Instead of being restricted to the usual
representational schemes based on formal logic and ordinary language,
computational approaches to the structure of scientific knowledge
can include many useful representations such as prototypical concepts,
concept hierarchies, production rules, causal networks, mental
images, and so on. Philosophers concerned with the growth of
scientific knowledge from a computational perspective can go beyond
the narrow resources of inductive logic to consider algorithms
for generating numerical laws, discovering causal networks, forming
concepts and hypotheses, and evaluating competing explanatory
theories.
In addition to the new conceptual resources that AI brings to
philosophy of science, it also brings a new methodology involving
the construction and testing of computational models. This methodology
typically has numerous advantages over pencil-and-paper constructions.
First, it requires considerable precision, in that to produce
a running program the structures and algorithms postulated as
part of scientific cognition need to be specified. Second, getting
a program to run provides a test of the feasibility of its assumptions
about the structure and processes of scientific development.
Contrary to the popular view that clever programmers can get a
program to do whatever they want, producing a program that mimics
aspects of scientific cognition is often very challenging, and
production of a program provides a minimal test of computational
feasibility. Moreover, the program can then be used for testing
the underlying theoretical ideas by examining how well the program
works on numerous examples of different kinds. Comparative evaluation
becomes possible when different programs accomplish a task in
different ways: running the programs on the same data allows
evaluation of their computational models and background theoretical
ideas. Third, if the program is intended as part of a cognitive
model, it can be assessed concerning how well it models human
thinking.
The assessment of cognitive models can address questions such
as the following:
1. Genuineness. Is the model a genuine instantiation
of the theoretical ideas about the structure and growth of scientific
knowledge, and is the program a genuine implementation of the
model?
2. Breadth of application. Does the model apply to lots
of different examples, not just a few that have been cooked up
to make the program work?
3. Scaling. Does the model scale up to examples that
are considerably larger and more complex than the ones to which
it has been applied?
4. Qualitative fit. Does the computational model perform
the same kinds of tasks that people do in approximately the same
way?
5. Quantitative fit. Can the computational model simulate
quantitative aspects of psychological experiments, e.g. ease
of recall and mapping in analogy problems?
6. Compatibility. Does the computational model simulate
representations and processes that are compatible with those found
in theoretical accounts and computational models of other kinds
of cognition?
Computational models of the thought processes of scientists that
satisfy these criteria have the potential to greatly increase
our understanding of the scientific mind. Engineering AI need
not address questions of qualitative and quantitative fit with
the results of psychological experiments, but should employ the
other four standards of assessment.
There are numerous issues connecting computation and the philosophy
of science that I have not touched on in this review. Computer
science can itself be a subject of philosophical investigation,
and some work has been done discussing epistemological issues
that arise in computer research (see, e.g., Fetzer, this volume;
Thagard, 1993). In particular, the philosophy of artificial intelligence
and cognitive science are fertile areas of philosophy of science.
My concern has been narrower: how computational models
can contribute to philosophy of science. I conclude with a list
of open problems that seem amenable to computational/philosophical
investigation:
1. In scientific discovery, how are new questions generated?
Formulating a useful question such as "How might species
evolve?" or "Why do the planets revolve around the sun?"
is often a prerequisite to more data-driven and focused processes
of scientific discovery, but no computational account of scientific
question generation has yet been given.
2. What role does visual imagery play in the structure and growth
of scientific knowledge? Although various philosophers, historians,
and psychologists have documented the importance of visual representations
in scientific thought, existing computational techniques have
not been well suited for providing detailed models of the cognitive
role of pictorial mental images (see e.g. Shelley 1996).
3. How is consensus formed in science? All the computational
models discussed in this paper have concerned the thinking of
individual scientists, but it might also be possible to develop
models of social processes such as consensus formation along the
lines of the field known as distributed artificial intelligence,
which considers the potential interactions of multiple intelligent
agents (Thagard 1993).
Perhaps problems such as these will, like other issues concerning
discovery and evaluation, yield to computational approaches that
involve cognitive modeling, engineering AI, and the theory of
computation.
References
Buchanan, B. (1983). "Mechanizing the Search for Explanatory
Hypotheses." In PSA 1982. East Lansing: Philosophy of Science
Association.
Bylander, T., Allemang, D., Tanner, M., & Josephson, J. (1991).
"The Computational Complexity of Abduction." Artificial
Intelligence, 49, 25-60.
Cheeseman, P. (1990). "On Finding the Most Probable Model."
In J. Shrager & P. Langley (Eds.), Computational Models
of Scientific Discovery and Theory Formation (pp. 73-96).
San Mateo: Morgan Kaufmann.
Churchland, P. (1989). A Neurocomputational Perspective.
Cambridge, MA: MIT Press.
Darden, L. (1990). "Diagnosing and Fixing Faults in Theories."
In J. Shrager & P. Langley (Eds.), Computational Models
of Scientific Discovery and Theory Formation (pp. 319-346). San Mateo,
CA: Morgan Kaufmann.
Eliasmith, C., & Thagard, P. (in press). "Waves, Particles,
and Explanatory Coherence." British Journal for the Philosophy
of Science.
Falkenhainer, B. (1990). "A Unified Approach to Explanation
and Theory Formation." In J. Shrager & P. Langley (Eds.), Computational
Models of Scientific Discovery and Theory Formation (pp. 157-196). San
Mateo, CA: Morgan Kaufmann.
Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). "The
Structure-mapping Engine: Algorithms and Examples." Artificial
Intelligence, 41, 1-63.
Fayyad, U., Piatetsky-Shapiro, G., & Smyth, P. (1996). "From
Data Mining to Knowledge Discovery in Databases." AI Magazine,
17(3), 37-54.
Ford, K. M., Glymour, C., & Hayes, P. J. (Eds.). (1995). Android
Epistemology. Menlo Park: AAAI Press.
Glymour, C., Scheines, R., Spirtes, P., & Kelly, K. (1987).
Discovering Causal Structure. Orlando: Academic Press.
Gold, E. (1967). "Language Identification in the Limit."
Information and Control, 10, 447-474.
Holyoak, K. J., & Thagard, P. (1989). "Analogical Mapping
by Constraint Satisfaction." Cognitive Science, 13,
295-355.
Holyoak, K. J., & Thagard, P. (1995). Mental Leaps: Analogy
in Creative Thought. Cambridge, MA: MIT Press/Bradford Books.
Howson, C., & Urbach, P. (1989). Scientific Reasoning:
The Bayesian Tradition. Lasalle, IL: Open Court.
Johnson, T. R., & Chen, M. (1996). "Comparison of Symbolic
and Connectionist Approaches for Multiple Disorder Diagnosis:
Heuristic Search vs. Explanatory Coherence." Unpublished
manuscript, Ohio State University.
Josephson, J. R., & Josephson, S. G. (Eds.). (1994). Abductive
Inference: Computation, Philosophy, Technology. Cambridge:
Cambridge University Press.
Karp, P. (1990). "Hypothesis Formation as Design." In
J. Shrager & P. Langley (Eds.), Computational Models of
Scientific Discovery and Theory Formation (pp. 276-317). San Mateo,
CA: Morgan Kaufmann.
Karp, P., & Mavrovouniotis, M. (1994). "Representing,
Analyzing, and Synthesizing Biochemical Pathways." IEEE
Expert, 9(2), 11-21.
Kelly, K. (1996). The Logic of Reliable Inquiry. New York:
Oxford University Press.
Kelly, K., & Glymour, C. (1989). "Convergence to the
Truth and Nothing but the Truth." Philosophy of Science,
56, 185-220.
Kulkarni, D., & Simon, H. (1988). "The Processes of Scientific
Discovery: The Strategy of Experimentation." Cognitive
Science, 12, 139-175.
Langley, P., Simon, H., Bradshaw, G., & Zytkow, J. (1987).
Scientific Discovery. Cambridge, MA: MIT Press/Bradford
Books.
Lindsay, R., Buchanan, B., Feigenbaum, E., & Lederberg, J.
(1980). Applications of Artificial Intelligence for Organic
Chemistry: The DENDRAL project. New York: McGraw Hill.
Neapolitan, R. (1990). Probabilistic Reasoning in Expert Systems.
New York: John Wiley.
Nowak, G., & Thagard, P. (1992a). "Copernicus, Ptolemy,
and Explanatory Coherence." In R. Giere (Ed.), Cognitive
Models of Science (pp. 274-309). Minneapolis: University
of Minnesota Press.
Nowak, G., & Thagard, P. (1992b). "Newton, Descartes,
and Explanatory Coherence." In R. Duschl & R. Hamilton (Eds.),
Philosophy of Science, Cognitive Psychology and Educational
Theory and Practice (pp. 69-115). Albany: SUNY Press.
Osherson, D., & Weinstein, S. (1989). "Identifiable Collections
of Countable Structures." Philosophy of Science,
56, 94-105.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems.
San Mateo: Morgan Kaufmann.
Qin, Y., & Simon, H. (1990). "Laboratory Replication
of Scientific Discovery Processes." Cognitive Science,
14, 281-312.
Shelley, C. P. (1996). "Visual Abductive Reasoning in Archaeology."
Philosophy of Science, 63, 278-301.
Sloman, A. (1978). The Computer Revolution in Philosophy.
Atlantic Highlands: Humanities Press.
Thagard, P. (1988). Computational Philosophy of Science.
Cambridge, MA: MIT Press/Bradford Books.
Thagard, P. (1991). "The Dinosaur Debate: Explanatory Coherence
and the Problem of Competing Hypotheses." In J. Pollock &
R. Cummins (Eds.), Philosophy and AI: Essays at the Interface.
(pp. 279-300). Cambridge, Mass.: MIT Press/Bradford Books.
Thagard, P. (1992). Conceptual Revolutions. Princeton:
Princeton University Press.
Thagard, P. (1993). "Societies of Minds: Science as Distributed
Computing." Studies in History and Philosophy of Science,
24, 49-67.
Thagard, P. (1996). Mind: Introduction to Cognitive Science.
Cambridge, MA: MIT Press.
Thagard, P., & Verbeurgt, K. (1996). "Coherence."
Unpublished manuscript, University of Waterloo.
Valdes-Perez, R. E. (1994). "Conjecturing Hidden Entities
via Simplicity and Conservation Laws: Machine Discovery in Chemistry."
Artificial Intelligence, 65, 247-280.