1. Theoretical development. Develop theoretical ideas about the mental representations and processes that might explain a kind of thinking.
2. Precision. Specify data structures and algorithms that produce a running program.
3. Qualitative testing. Write a program that simulates a task performed by humans. Failures can be as informative as successes.
4. Quantitative testing. Model results of psychological experiments.
5. Comparative evaluation. Run your program on someone else's input.
1. Genuineness. Is the model a genuine instantiation of the theoretical ideas, and is the program a genuine implementation of the model?
2. Breadth of application. Does the model apply to lots of different examples, not just a few that have been cooked up to make the program work?
3. Scaling. Does the model scale up to examples that are considerably larger and more complex than the ones to which it has been applied?
4. Qualitative fit. Does the computational model perform the same kinds of tasks that people do in approximately the same way?
5. Quantitative fit. Can the computational model simulate quantitative aspects of psychological experiments, e.g. ease of recall and mapping in analogy problems?
6. Compatibility. Does the computational model simulate representations and processes that are compatible with those found in theoretical accounts and computational models of other kinds of cognition?
Other considerations: neurological plausibility, computational efficiency, comprehensibility
Books:
Newell, A. (1990). Unified theories of cognition. Cambridge, MA: Harvard University Press.
Rosenbloom, P. S., Laird, J. E., & Newell, A. (Eds.). (1993). The SOAR papers: Research on integrated intelligence. Cambridge, MA: MIT Press.
Web sites:
A SOAR home page, including tutorials and downloadable software.
SOAR = State, Operator, And Result
Cognitive architecture: description of a fixed set of computational mechanisms claimed to underlie human cognition.
Problem space: representation of a task in terms of initial state, desired state, and current state.
Production: rule.
Chunking: learning from experience by converting goal-based problem solving into long-term memory (productions)
Working-memory elements (i.e. what Winston calls assertions) consist of attributes and values, e.g. (block b70 ^on table ^on-top b61).
Productions consist of conditions (antecedents) and actions (consequents). The conditions can include descriptions of goal states.
English version of a production: IF the problem space is the base-level space, and the state has a box with nothing on top, and the state has input that has not been examined, THEN make the comprehend operator acceptable, and note that the input has been examined. (A minimal matching sketch appears at the end of this SOAR section.)
Decision cycle: repeated elaboration (all productions whose conditions are satisfied fire in parallel until nothing new is added) followed by a decision phase that selects a problem space, state, or operator on the basis of the preferences asserted during elaboration.
When SOAR runs into an impasse, e.g. when no production can generate anything new or no unique choice can be made, it creates a subgoal and works on it in a new problem space; chunking turns the results of such subgoals into new productions.
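To make the rule format concrete, here is a minimal Python sketch of production matching and parallel firing over attribute-value working-memory elements. The triples, the comprehend functions, and the elaborate loop are illustrative assumptions, not the actual SOAR matcher.

    # Illustrative sketch only: working-memory elements as
    # (identifier, attribute, value) triples, productions as
    # condition/action pairs. Not the actual SOAR implementation.

    wm = {
        ("space", "name", "base-level"),
        ("box", "on-top", "nothing"),
        ("input", "examined", "no"),
    }

    def comprehend_condition(wm):
        # IF the problem space is the base-level space, and a box has
        # nothing on top, and the input has not been examined ...
        return (("space", "name", "base-level") in wm
                and ("box", "on-top", "nothing") in wm
                and ("input", "examined", "no") in wm)

    def comprehend_action(wm):
        # THEN make the comprehend operator acceptable and note that
        # the input has been examined.
        wm.add(("operator", "comprehend", "acceptable"))
        wm.discard(("input", "examined", "no"))
        wm.add(("input", "examined", "yes"))

    productions = [(comprehend_condition, comprehend_action)]

    def elaborate(wm):
        # Fire every matching production, and repeat until nothing new
        # is added -- roughly the elaboration phase of the decision cycle.
        while True:
            before = set(wm)
            for condition, action in productions:
                if condition(wm):
                    action(wm)
            if wm == before:
                break

    elaborate(wm)
    print(("operator", "comprehend", "acceptable") in wm)  # True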
Books:
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard University Press.
Anderson, J. R. (1993). Rules of the mind. Hillsdale, NJ: Erlbaum.
Web site:
The ACT home page
ACT = Adaptive Control of Thought. Early version: ACT*. Later version: ACT-R.
Declarative memory: long-term store of propositional information, consisting of cognitive units (assertions). Contrast with SOAR, whose only long-term memory consists of rules (productions).
Production: rule, consisting of conditions (antecedents) and actions (consequents)
Cognitive units include temporal strings, spatial images, and abstract propositions (Anderson, 1983).
Cognitive units are all represented by attribute-value pairs, e.g. a unit fox1 whose attributes and values describe a particular fox, much as in the SOAR working-memory elements above.
Sample production: IF the goal is to do an addition problem, THEN the subgoal is to iterate through the columns of the problem.
Spreading activation: only some cognitive units are active and available for matching in working memory. Activation spreads in accord with psychological experiments on priming. Goals can be a source of activation.
Matching: unlike SOAR, ACT allows partial matching of assertions to the antecedents of rules. Degree of activation affects degree of match.
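A hedged sketch of how spreading activation might be computed follows; the network, the decay factor, and the update rule are invented for illustration and are much simpler than Anderson's actual equations.

    # Toy spreading activation over a network of cognitive units.
    # The links, decay factor, and update rule are illustrative only.

    network = {
        "fox1":   ["animal", "red"],
        "animal": ["fox1", "dog1"],
        "red":    ["fox1"],
        "dog1":   ["animal"],
    }

    def spread(sources, steps=2, decay=0.5):
        # Activation starts at the sources (e.g. the current goal) and
        # attenuates by `decay` at each link it crosses.
        activation = {unit: 1.0 for unit in sources}
        frontier = dict(activation)
        for _ in range(steps):
            nxt = {}
            for unit, act in frontier.items():
                for neighbour in network.get(unit, []):
                    passed = act * decay
                    if passed > activation.get(neighbour, 0.0):
                        nxt[neighbour] = passed
            activation.update(nxt)
            frontier = nxt
        return activation

    # fox1 is most active; units two links away get less activation.
    print(spread(["fox1"]))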
Selection of a production rule to fire is governed by the rule's degree of match (which depends on activation) and its strength (Anderson, 1993, p. 63).
If a number of productions fired in sequence to achieve a goal, form a new production that summarizes the computation.
E.g. if a problem is solved using IF A THEN B and IF B THEN C, then create a shortcut rule IF A THEN C. Then if you want to accomplish C, you can go straight from A to C.
In SOAR, this is called chunking. In ACT*, it is called composition.
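Whether called chunking or composition, the operation can be sketched as collapsing a chain of rules that fired in sequence into one shortcut rule; the condition/action pair representation here is an assumption for illustration.

    # Compose a chain of rules that fired in sequence into a shortcut.
    from functools import reduce

    def compose(rule1, rule2):
        # Given (A, B) and (B, C), return the shortcut (A, C).
        (a, b1), (b2, c) = rule1, rule2
        assert b1 == b2, "rules must chain"
        return (a, c)

    fired = [("A", "B"), ("B", "C")]
    print(reduce(compose, fired))  # ('A', 'C')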
All known A's are B's, so create the rule IF A THEN B.
How many A's are required depends on background knowledge about variability. See the PI (Processes of Induction) system described in P. Thagard, Computational Philosophy of Science, 1988.
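A minimal sketch of this kind of generalization follows; the feature-set representation of instances and the fixed threshold standing in for background knowledge about variability are assumptions, and PI's actual treatment is more sophisticated.

    # Propose IF category THEN property when all known instances agree
    # and there are enough of them. The threshold crudely stands in for
    # background knowledge about variability.

    def generalize(instances, category, prop, threshold=5):
        members = [i for i in instances if category in i]
        if (len(members) >= threshold
                and all(prop in i for i in members)):
            return f"IF {category} THEN {prop}"
        return None

    instances = [{"raven", "black"} for _ in range(6)]
    print(generalize(instances, "raven", "black"))  # IF raven THEN black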
Create mathematical formulas to describe data, e.g. the BACON system for scientific discovery modelled the discovery of Kepler's and Ohm's laws.
Langley, P., Simon, H. A., Bradshaw, G. L., & Zytkow, J. M. (1987). Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: MIT Press/Bradford Books.
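The flavour of this kind of discovery can be suggested with a toy search over simple power-law terms; the orbital data below are real, but the search strategy is a drastic simplification of the heuristics Langley et al. describe. For Kepler's third law, it notices that D^3/P^2 is invariant.

    # Toy BACON-style search: try terms D**i / P**j over observed data
    # and report any that are (nearly) constant across all observations.

    # Orbital distance D (astronomical units) and period P (years).
    data = [("Mercury", 0.387, 0.241), ("Venus", 0.723, 0.615),
            ("Earth", 1.000, 1.000), ("Mars", 1.524, 1.881)]

    def nearly_constant(values, tol=0.05):
        return max(values) - min(values) <= tol * max(values)

    for i in range(1, 4):
        for j in range(1, 4):
            term = [d ** i / p ** j for (_, d, p) in data]
            if nearly_constant(term):
                # Finds D^3/P^2, i.e. Kepler's third law.
                print(f"D^{i}/P^{j} is constant, about {term[0]:.3f}")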
Suppose the rule IF A THEN B is too general to be useful. Create a more specialized rule IF A AND C THEN B.
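A hedged sketch: when IF A THEN B has counterexamples, look for an extra feature C that holds in all the successes and none of the failures. The feature-set representation is an assumption for illustration; real systems weigh counterexamples more carefully.

    # Find a feature C so that IF a AND C THEN b fits the known cases.

    def specialize(instances, a, b):
        positives = [i for i in instances if a in i and b in i]
        negatives = [i for i in instances if a in i and b not in i]
        if not positives:
            return None
        shared = set.intersection(*map(set, positives)) - {a, b}
        for c in sorted(shared):
            if all(c not in i for i in negatives):
                return f"IF {a} AND {c} THEN {b}"
        return None

    instances = [{"bird", "small", "flies"}, {"bird", "small", "flies"},
                 {"bird", "large"}]  # a large bird that does not fly
    print(specialize(instances, "bird", "flies"))
    # IF bird AND small THEN flies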
Attach a parameter (strength) to a rule. Rules that contribute to problem solutions have their strength increased and will be more likely to be used in the future.
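A minimal sketch of strength adjustment, assuming an arbitrary additive reward and strength-proportional choice among matching rules; Anderson's actual strength equations differ.

    import random

    strengths = {"rule-A": 1.0, "rule-B": 1.0}

    def reward(rule, amount=0.1):
        # Called when the rule contributed to a problem solution.
        strengths[rule] += amount

    def choose(matching):
        # Pick among matching rules with probability proportional
        # to their strengths.
        weights = [strengths[r] for r in matching]
        return random.choices(matching, weights)[0]

    reward("rule-A"); reward("rule-A")
    print(choose(["rule-A", "rule-B"]))  # rule-A wins about 55% of runs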
Represent a rule by a bitstring, i.e. a string of 0's and 1's representing the presence or absence of features, e.g. IF 1010000 THEN 10101100.
Create new rules by genetic operators such as crossover (splicing together parts of two rules) and mutation (randomly flipping bits), as sketched below.
Use natural selection to produce a successful set of rules.
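A sketch of the two standard operators on bitstring rules follows; selection by fitness is omitted, and the bit patterns are the arbitrary ones from the example above.

    import random

    def crossover(rule1, rule2):
        # Splice the front of one bitstring onto the back of another.
        point = random.randrange(1, len(rule1))
        return rule1[:point] + rule2[point:]

    def mutate(rule, rate=0.05):
        # Flip each bit with a small probability.
        return "".join(bit if random.random() > rate else str(1 - int(bit))
                       for bit in rule)

    parent1 = "1010000"   # condition bits, as in IF 1010000 THEN ...
    parent2 = "0110110"
    print(mutate(crossover(parent1, parent2)))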
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press/Bradford Books.
Some neuroscientists (e.g. Gerald Edelman, under the label "neural Darwinism") think that natural selection operates at the neuronal level.
Which of these procedures for learning rules are used in human learning?