Connectionism


Lecture notes


Phil/Psych 256
Feb. 25, 1997

Connectionism preview:

1.	Representation
		- networks
		- constraints (excitatory, inhibitory)
		- local representation

	Computation
		- learning, classification, planning

2.	Representation
		- neural networks
		- constraints (fitness)
		- distributed, recurrent

	Computation
		- learning, classification

	Systems
		- NETtalk

Q: What is a connectionist network?

	1. A set of nodes, or processing units

	2. A set of links between nodes

	3. An activation level a for each node

	4. A weight w on each link

	Each node can be thought of as a "neuron," with links or 
	connections to certain other neurons in a brain-like 
	structure

Q: What are these components good for?

	A1. Each node may represent a discrete piece of knowledge,
	e.g., a proposition (local representation)

	A2. Each link may represent a constraint between nodes.  
	A positive constraint is excitatory, a negative constraint 
	is inhibitory, e.g.,
		- if P and Q are consistent, then 
		  link(P,Q) is excitatory (+'ve)
		- if P and Q are inconsistent, then 
		  link(P,Q) is inhibitory (-'ve)

	A3. Each activation level a determines how much its node 
	affects the current state of the network.  A node's output 
	may be governed by an output function (o), e.g., a threshold.

	A4. The spread of activation is determined by an 
	"activation function," typically a weighted sum: each 
	neighbour's output (o) is multiplied by the link weight 
	(w) and the results are added, e.g.,

	a_i(t+1) = SUM_j (w_ij * o_j(t))

	In simple, local networks, activations are sometimes 
	0 or 1, or weights are +1 or -1.
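
	These components can be sketched in a few lines of Python.
	The nodes, links, and activations below are invented for
	illustration; a real model would be set up from the problem
	at hand.

	nodes = ["P", "Q", "R"]

	# Symmetric links: +1 = excitatory (P and Q consistent),
	# -1 = inhibitory (P and R inconsistent), absent = no link.
	links = {("P", "Q"): 1.0, ("Q", "P"): 1.0,
	         ("P", "R"): -1.0, ("R", "P"): -1.0}

	# Current activation level a for each node.
	a = {"P": 1.0, "Q": 0.0, "R": 0.0}

	def o(x, threshold=0.0):
	    # Threshold output function: a node contributes output
	    # only when its activation exceeds the threshold.
	    return 1.0 if x > threshold else 0.0

	def new_activation(i):
	    # a_i(t+1) = SUM_j (w_ij * o_j(t))
	    return sum(w * o(a[j])
	               for (i2, j), w in links.items() if i2 == i)

	print({i: new_activation(i) for i in nodes})
	# {'P': 0.0, 'Q': 1.0, 'R': -1.0} - Q is excited by the
	# active P, while R is inhibited by it.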

Q: How are such networks used?

	1. All relevant information is represented by nodes

	2. Constraints between nodes are represented by links, 
	+'ve and -'ve

	3. The input nodes (representing the problem) are kept 
	active

	4.  Activation spreads throughout the network via updates 
	(relaxation)

	5. The network (hopefully) reaches a stable state 
	(settling), i.e., one in which further updates change 
	nothing:

		a_i(t) = a_i(t+1) for every node i

 	The problem solution can then be read off the active output
 	nodes

	This whole process may be called "parallel constraint 
	satisfaction."

 
Figure 7.6.  A constraint network for decision making.  Boxes 
represent units; thin lines represent positive constraints based
on facilitation (excitatory links); the thin line marked with a 
minus represents a negative constraint (inhibitory link).  The 
GOAL PRIORITY unit pumps activation to the other nodes, which 
have to compete for it.
 
Figure 7.7.  Network for picking the best explanation of why Fred 
did not show up.  The thin lines are symmetric excitatory links 
and the thick line marked with a minus is a symmetric inhibitory 
link.

Examples:

	1. visual cognition, e.g., the Necker cube 
	(Hopfield network)

	2. planning, e.g., Little Red Riding Hood (LRRH; 
	Jones & Hoskins)

	3. decision, e.g., grad school (Thagard & Millgram)

	4. explanation, e.g., Fred (Thagard)

Phil/Psych 256
Feb. 27, 1997

Q: What is a "neural" network (NN)?

	A1. Representation is not local, but distributed - we don't 
	assume each node corresponds to some concept or proposition

	A2. A NN is organized into layers of nodes (see the 
	sketch after this list): 
		1. an input layer
		2. hidden layer(s) (optional)
		3. an output layer

	A3. A NN usually acquires representations by learning from 
	examples

	A4. Activations (a) and weights (w) and output functions 
	(o) are often more general with NNs than with local 
	networks 
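
	The layer organization can be sketched as a forward pass in
	Python.  The layer sizes, weights, and input vector below
	are all invented for illustration.

	import math, random

	random.seed(0)
	n_in, n_hid, n_out = 3, 2, 1    # layer sizes (invented)

	# One weight per link between successive layers, set randomly.
	w1 = [[random.uniform(-1, 1) for _ in range(n_in)]
	      for _ in range(n_hid)]
	w2 = [[random.uniform(-1, 1) for _ in range(n_hid)]
	      for _ in range(n_out)]

	def sigmoid(x):
	    # A smooth output function, more general than a threshold.
	    return 1.0 / (1.0 + math.exp(-x))

	def layer(weights, inputs):
	    # Each unit takes a weighted sum of the layer below.
	    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
	            for row in weights]

	hidden = layer(w1, [1.0, 0.0, 1.0])   # input -> hidden
	output = layer(w2, hidden)            # hidden -> output
	print(output)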

Q: What do NNs do?

	They learn to associate features with "concepts."
	The most popular method is "supervised learning" with 
	backpropagation:

	1. Select a set of examples (e.g., faces)
	2. Select a network design, i.e., nodes, links, 
	organization, activation & output functions
	3. Initialize, e.g., assign weights (w) randomly
	4. Backpropagation: for each example
		i. Activate input nodes appropriately
		ii. Allow network to settle
		iii. Compute activation errors (from output 
		backwards)
		iv. Use errors to adjust weights (w)
	5. Repeat 4 until enough examples are classified 
	correctly (each pass through the examples is one 
	training "epoch")
	6. "Freeze" the network weights (w)

	If a proper classification scheme exists, backpropagation 
	can often find it, though it is not guaranteed to do so 
	(training may settle into a poor local minimum).
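
	Here is the whole recipe as a sketch in Python, with XOR
	standing in for a real example set such as faces.  The
	2-2-1 design, learning rate, and epoch count are invented,
	and, as just noted, convergence is likely but not guaranteed.

	import math, random

	random.seed(1)

	def sigmoid(x):
	    return 1.0 / (1.0 + math.exp(-x))

	# 1. A set of examples (XOR inputs and target outputs).
	examples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

	# 2.-3. A 2-2-1 feedforward design with one bias input per
	#       unit; weights (w) assigned randomly.
	w_hid = [[random.uniform(-1, 1) for _ in range(3)]
	         for _ in range(2)]
	w_out = [random.uniform(-1, 1) for _ in range(3)]
	rate = 0.5

	for epoch in range(5000):          # 5. repeat over many epochs
	    for x, target in examples:     # 4. for each example:
	        xb = x + [1]               # i. activate input nodes
	        hid = [sigmoid(sum(w * v for w, v in zip(row, xb)))
	               for row in w_hid]   # ii. feed activation forward
	        hb = hid + [1]
	        out = sigmoid(sum(w * v for w, v in zip(w_out, hb)))

	        # iii. compute errors, from the output backwards
	        d_out = (target - out) * out * (1 - out)
	        d_hid = [d_out * w_out[i] * hid[i] * (1 - hid[i])
	                 for i in range(2)]

	        # iv. use the errors to adjust the weights
	        for i in range(3):
	            w_out[i] += rate * d_out * hb[i]
	        for i in range(2):
	            for j in range(3):
	                w_hid[i][j] += rate * d_hid[i] * xb[j]

	# 6. "Freeze" the weights and test the trained network.
	for x, target in examples:
	    xb = x + [1]
	    hid = [sigmoid(sum(w * v for w, v in zip(row, xb)))
	           for row in w_hid]
	    out = sigmoid(sum(w * v for w, v in zip(w_out, hid + [1])))
	    print(x, target, round(out, 2))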

Q: What kinds of NNs are there?

	A1. Feedforward - all links point towards the output layer 
	(required for standard backpropagation)

	A2. Recurrent - links may point back towards the input 
	layer, e.g., for sentence understanding (Elman; see the 
	sketch after this list)
		- "Dog bites man" vs. "Man bites dog"
		- "Chris beats his wife...at Scrabble"
		- "The man who came to dinner ate and left"

	A3. And many more...
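
	The recurrent idea can be sketched in the Elman style: the
	hidden layer's previous state is fed back as extra input, so
	the same words in a different order leave the network in a
	different state.  Everything below (words, vectors, sizes,
	untrained weights) is invented for illustration.

	import math, random

	random.seed(2)

	words = {"dog": [1, 0, 0], "bites": [0, 1, 0],
	         "man": [0, 0, 1]}
	n_in, n_hid = 3, 4

	# Hidden units see the current word AND the previous hidden
	# state (the links pointing back towards the input).
	w = [[random.uniform(-1, 1) for _ in range(n_in + n_hid)]
	     for _ in range(n_hid)]

	def step(word, hidden):
	    x = words[word] + hidden       # current input + context
	    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
	            for row in w]

	def read(sentence):
	    hidden = [0.0] * n_hid         # start with a blank context
	    for word in sentence.split():
	        hidden = step(word, hidden)
	    return [round(h, 2) for h in hidden]

	print(read("dog bites man"))   # same words, different order,
	print(read("man bites dog"))   # different final state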

Example: NETtalk (Sejnowski & Rosenberg)

	- learned to pronounce English words, i.e., match 
	letters to phonemes

	- used a "sliding-window" of 7 letters

	- feedforward network, with backpropagation

	- 5000 word training set

	- training:
		 100 epochs: words separate
		 500 epochs: consonants and vowels separate
		1000 epochs: pronunciation distinct
		1500 epochs: training set 95% correct
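
	The sliding window can be sketched in a few lines of
	Python.  The padding character and the word are invented,
	and NETtalk's real input encoding was more elaborate.

	def windows(word, size=7):
	    # Pad so every letter can sit at the centre of a window.
	    half = size // 2
	    padded = "_" * half + word + "_" * half
	    # Each 7-letter window is the input for pronouncing the
	    # letter at its centre.
	    return [padded[i:i + size] for i in range(len(word))]

	for w in windows("phoneme"):
	    print(w, "->", "phoneme for", w[3])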

Q: What are the advantages of NNs?

	- represent typical conditions
	- learn representations effectively
	- generalize from examples
	- do parallel constraint satisfaction
	- content-addressable memory
	- graceful degradation

Q: What are the disadvantages of NNs?

	- representation is opaque
	- training sets are often very large
	- backpropagation can be very slow
	- network design is difficult 
	- graceful degradation

Review of connectionism: 

	1. Database - local and neural networks

	2. Knowledgebase - learning, parallel constraint 
	satisfaction, relaxation

	3. Goals - classification, decision, planning, 
	language

	4. Learning strategies - backpropagation

	5. Good psychological basis - Thagard, Rumelhart, 
	Elman

Don't forget:
	1. Friday, Feb. 28: Essay 1 due, PAS 3289 by 4:00pm
	2. Tuesday, Mar. 4: review class
	3. Thursday, Mar. 6: midterm, in class
	    and Essay 2 outline due
