In analogical inference, a problem is solved by comparison to a similar problem, not by means of general rules or schemas.
Example: taking a course in a subject because another course in that subject was interesting.
In AI, the use of analogy is often called case-based reasoning (CBR): one case is used to provide a solution to another case.
In terms of frames, analogical inference is not by inheritance from a class frame, but by transfer from one instance frame to another instance frame.
Analogical reasoning is common in legal, medical, scientific, and everyday reasoning, e.g. in decision making.
For a general review, see Holyoak and Thagard, The Analogical Mind.
How should cases be represented?
An instance frame can represent aspects of a case.
E.g. Phil/Psych 256
Subject: philosophy, psychology
Instructor: Paul Thagard
Interestingness: high (I hope)
But frames are limited in their ability to represent complex relations, e.g. Fred gave flowers to Jill.
This is very important for analogy, because of the difference between that example and one where Jill gave flowers to Fred.
A problem frame should have slots describing the current situation, as well as slots describing the desired solution.
Formal logic is useful for expressing relations, e.g.
(give Fred flowers Jill).
An analog (case) is then a set of assertions, e.g.
(is-a Phil/Psych256 course)
(has-instructor Phil/Psych256 PaulThagard)
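In this spirit, an analog can be sketched in code as a set of predicate/argument tuples (a minimal, illustrative representation; the tuple encoding is an assumption, not any particular system's format):

```python
# A minimal sketch: an analog (case) as a set of relational assertions,
# each a tuple of (predicate, argument, ...). The course assertions are
# from the example above; the "give" tuples show why argument order matters.
phil_psych_256 = {
    ("is-a", "Phil/Psych256", "course"),
    ("has-instructor", "Phil/Psych256", "PaulThagard"),
}

# Argument order captures relational structure:
fred_gives = ("give", "Fred", "flowers", "Jill")
jill_gives = ("give", "Jill", "flowers", "Fred")
assert fred_gives != jill_gives  # the two situations are distinct
```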
Not all cases are best represented verbally. For example, the route you took last time to the Philosophy Department could be represented by words, but a mental picture might do a lot better.
Some cases are not only visual but dynamic, e.g. your memory of a plane being shot out of the sky.
A full theory of analogical reasoning will have to include representations for visual and dynamic cases, as well as auditory, olfactory, tactile, and emotional information.
How can analogs, with their relational complexity, be represented neurally? This is currently an active area of research. See Week 9 or 10 of these notes, and C. Eliasmith and P. Thagard, Integrating Structure and Meaning.
The standard model for analogical reasoning (CBR)
1. You are presented with a problem to solve, the target.
2. You remember a similar problem, the source.
3. You map the source to the target, figuring out how the two problems correspond.
4. You adapt the solution to the source problem to produce a solution to the target problem.
5. You learn from the successful solution, by storing the new problem or solution or possibly by abstracting a schema that describes what the source and target have in common.
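The five steps can be sketched as a toy retrieve, reuse, and learn loop (all names and the Jaccard retrieval measure are illustrative assumptions, not taken from any particular CBR system):

```python
# A toy sketch of the CBR cycle. Cases are feature sets with an attached
# solution; retrieval, mapping, and adaptation are all deliberately trivial.

def jaccard(a, b):
    # Similarity as overlap of feature sets.
    return len(a & b) / len(a | b)

def solve_by_analogy(target_features, memory):
    # Step 2: retrieve the most similar stored case (the source).
    source = max(memory, key=lambda case: jaccard(case["features"], target_features))
    # Steps 3-4: mapping and adaptation are trivial here; reuse the solution as-is.
    solution = source["solution"]
    # Step 5: learn by storing the solved target as a new case.
    memory.append({"features": target_features, "solution": solution})
    return solution

memory = [
    {"features": {"course", "philosophy", "interesting"},
     "solution": "take-another-philosophy-course"},
    {"features": {"route", "campus"},
     "solution": "walk-via-arts-building"},
]
result = solve_by_analogy({"course", "psychology", "interesting"}, memory)
print(result)  # prints "take-another-philosophy-course"
```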
Two variations on the standard model:
1. Serendipity: you forget about the target problem until you encounter a source problem that reminds you of it, as Laennec did when he invented the stethoscope.
2. Construction: you do not have a source problem to use, but you generate one, as Maxwell did in developing his model of electromagnetism.
For medical illustrations of these kinds of analogical problem solving, see P. Thagard, Medical analogies.
Finding a similar problem in memory is a very difficult matching problem if you have many cases stored in your mind.
Doing an exhaustive search for the "nearest neighbour" is computationally infeasible with large numbers of complex cases.
There are currently three competing theories of analogical remembering:
1. Indexing. This is popular in CBR. Cases are indexed by solution types so that a new case relevant to a target can be found quickly. This seems efficient, but psychological experiments suggest that human memory does not work so directly.
2. Gentner and Forbus's MAC/FAC ("many are called but few are chosen"): do a quick first search comparing the predicates used in the analogs, and then do a more careful mapping that checks for structural correspondences.
3. Thagard and Holyoak's ARCS (analogical retrieval by constraint satisfaction). Based on the multiconstraint theory, analogs are retrieved by a parallel process that considers similarity, structure, and purpose.
Comparing two analogs seems simpler than retrieval, but is still tricky: subgraph isomorphism is computationally intractable.
Gentner and Forbus (SME) claim that analogical mapping is essentially syntactic.
Holyoak and Thagard (ACME) argue that analogical mapping uses all three constraints: similarity, structure, purpose.
Transferring a solution from one analog to another is similar to how answers are inherited in frame systems.
Sometimes transfer is more complicated, and requires "tweaking" the source problem to provide a more flexible solution to the target problem.
Transfer can also involve emotions: emotional analogies.
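Simple transfer can be sketched as copying the source solution while substituting mapped constants (an illustrative "copy with substitution" toy; the function and example names are assumptions, and real transfer may also need the "tweaking" mentioned above):

```python
# Transfer a source solution to the target by substituting mapped constants.
def transfer(source_solution, mapping):
    # Substitute mapped terms; unmapped terms carry over unchanged.
    return {tuple(mapping.get(term, term) for term in assertion)
            for assertion in source_solution}

source_solution = {("take", "Fred", "Phil256"), ("recommend", "Fred", "Phil256")}
mapping = {"Fred": "Jill", "Phil256": "Psych256"}
result = transfer(source_solution, mapping)
# result contains ("take", "Jill", "Psych256") and ("recommend", "Jill", "Psych256")
```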
Abstracting a problem schema is like learning a class frame from 2 instance frames (week 5).
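Schema abstraction can be sketched as keeping the terms two aligned assertions share and replacing the terms where they differ with variables (a toy version; a fuller one would reuse the same variable for repeated constants):

```python
# Generalize two aligned assertions into a schema.
def generalize(a1, a2):
    # Keep shared terms; replace differing terms with fresh variables.
    fresh = iter(f"?x{i}" for i in range(100))
    return tuple(t1 if t1 == t2 else next(fresh) for t1, t2 in zip(a1, a2))

schema = generalize(("take", "student1", "Phil256"),
                    ("take", "student2", "Psych256"))
print(schema)  # ('take', '?x0', '?x1')
```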
SME, MAC/FAC, ACME, and ARCS all represent analogs using a variant of logical notation that specifies relations and their arguments.
SME (structure mapping engine):
Input a target and base (source) description.
Compute a set of global interpretations of the comparison between the base and the target, including correspondences between their elements and candidate inferences about the target.
Pick the best global interpretation.
ACME (analogical constraint mapping engine):
Input a target and source description.
Create a constraint (neural) network with units representing hypotheses about what matches to what.
Create positive and negative links based on semantic, structural, and pragmatic constraints.
Spread activation through the network to activate and deactivate the units. The most active units represent the best matches.
ACME matches non-identical predicates.
ACME allows mappings that are not 1-1.
ACME uses pragmatic relevance as a factor.
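The constraint-network idea can be sketched as a tiny settling network (an illustrative toy in the spirit of ACME, not the actual implementation; the weights, decay rate, and function names are invented for the sketch, and only the structural constraint is modeled):

```python
from itertools import product

# Toy constraint-satisfaction mapper: units are mapping hypotheses,
# consistent hypotheses excite each other, rivals inhibit each other.
def acme_style_map(source, target, steps=100):
    units, excite = set(), []
    for s, t in product(source, target):
        if len(s) == len(t):
            pairs = list(zip(s, t))  # predicate pair + argument pairs
            units.update(pairs)
            # Hypotheses from the same assertion pairing support each other.
            for p in pairs:
                for q in pairs:
                    if p != q:
                        excite.append((p, q))
    # Hypotheses mapping the same source element to rival targets compete.
    inhibit = [(p, q) for p in units for q in units
               if p != q and p[0] == q[0]]
    act = {u: 0.01 for u in units}
    for _ in range(steps):
        new = {}
        for u in units:
            net = (0.1 * sum(act[q] for p, q in excite if p == u)
                   - 0.2 * sum(act[q] for p, q in inhibit if p == u))
            new[u] = min(1.0, max(-1.0, 0.9 * act[u] + net))
        act = new
    # The most active hypothesis for each source element wins.
    best = {}
    for (s_el, t_el) in units:
        if s_el not in best or act[(s_el, t_el)] > act[(s_el, best[s_el])]:
            best[s_el] = t_el
    return best

source = {("give", "Fred", "flowers", "Jill")}
target = {("give", "Bill", "candy", "Sue")}
best = acme_style_map(source, target)
# best maps Fred -> Bill, flowers -> candy, Jill -> Sue
```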
MAC/FAC:
Input a target.
Use a computationally cheap, non-structural filter to select candidates in long term memory.
Use SME to compute structural matches for the small number of candidates.
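The cheap MAC stage can be sketched as scoring stored cases by overlap of the predicates they use, keeping only the top few for the expensive structural stage (an illustrative toy; the function names and the predicate-overlap score are assumptions):

```python
# MAC-style filter: a non-structural, content-based first pass over memory.
def mac_filter(probe, memory, k=2):
    def predicates(case):
        return {assertion[0] for assertion in case}
    p = predicates(probe)
    # Rank stored cases by how many predicates they share with the probe.
    scored = sorted(memory, key=lambda case: len(predicates(case) & p),
                    reverse=True)
    return scored[:k]  # only these survivors go to the structural stage

probe = {("give", "Fred", "flowers", "Jill"), ("love", "Fred", "Jill")}
memory = [
    {("give", "Bill", "candy", "Sue"), ("love", "Bill", "Sue")},  # 2 shared
    {("orbit", "planet", "sun")},                                 # 0 shared
]
top = mac_filter(probe, memory, k=1)
# top contains only the "give"/"love" case
```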
ARCS:
Input a target (probe).
Use a semantic network based on WordNet to identify potentially relevant source analogs.
Construct a constraint network with nodes representing hypotheses about correspondences between the probe and the stored analogs.
Use links in the network to represent positive and negative constraints, including similarity, structure and purpose.
Settle the network to pick the best analog.
The best computational work on this is done in CBR. How people do it in complex cases is not well understood.
The development and comparison of computational models of analogy has produced a much deeper understanding of how analogies work, and encouraged convergence of ideas. Besides the analogy models mentioned above, many others have been developed. For a survey, see K. Holyoak and P. Thagard, Mental Leaps.
Computational Epistemology Laboratory.
This page updated Feb. 14, 2005