The definition of "agent" they selected is a functional one: an intelligent agent is one that can perceive, reason and act. The specific model they chose was Genesereth and Nilsson's "knowledge level agent" (Genesereth & Nilsson, 1987). Using this model, they identify several "resources": the set S of external states (i.e., the environment), the set D of statements about S that the agent "knows" to be true, the set T of external states that the agent is capable of sensing, and the set A of actions that the agent can perform.
They then define the functions that map these sets to each other. Perception reduces the complete set of external states S to the set T that the agent can distinguish. Inference modifies the agent's "knowledge" D by applying what the agent already knows, D, and what it perceives, T. Selection modifies the set of appropriate actions A using what the agent knows, D, and what it perceives, T. Finally, action applies the selected actions A back to the environment S.
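The perceive-infer-select-act cycle described above can be sketched in code. The following is a minimal illustration only; Fulbright and Stephens give no implementation, and all class, method and element names here are invented for the sake of the example.

```python
# A hedged sketch of a "knowledge level agent" cycle. The sets S, D, T
# and A follow the paper's definitions; everything else is invented.

class KnowledgeLevelAgent:
    def __init__(self, knowledge):
        self.D = set(knowledge)   # statements about S the agent "knows" to be true

    def can_sense(self, s):
        # Placeholder sensing predicate; a real agent would test s
        # against its sensor capabilities.
        return True

    def perceive(self, S):
        """Perception: reduce the external states S to the subset T
        the agent can distinguish."""
        return {s for s in S if self.can_sense(s)}

    def infer(self, T):
        """Inference: modify knowledge D using D itself and percepts T."""
        self.D |= {("perceived", t) for t in T}

    def select(self, T):
        """Selection: derive appropriate actions A from D and T."""
        return {("react_to", t) for t in T if ("perceived", t) in self.D}

    def step(self, S):
        T = self.perceive(S)
        self.infer(T)
        A = self.select(T)
        return A   # 'action' would then apply A to the environment S
```

A single call to `step` runs one full cycle: for instance, `KnowledgeLevelAgent(set()).step({"light_on"})` perceives the state, records it in D, and selects a reaction to it.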
Using a simple graphical notation, they then present the ways in which agents can share these resources. "Sharing" is defined as a relationship in which two or more agents have direct access to a single set; that is, the information contained in the set does not need to be communicated between the agents. Two agents sharing the same environment are shown as two groups, each containing a set T, a set D and a set A, with a single set S external to both. This is the fundamental "coupling": for two or more agents to be in the same multi-agent system, they must by definition share the same environment. Fulbright and Stephens refer to this as a Type 1 or "autonomous" agent, and state that all natural (i.e., biological) agents are Type 1 agents. The remaining combinations are named after the resource(s) they share: "perceptually coupled", "distributed cognition" and so on.
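The essential point of this coupling can be made concrete: sharing means both agents hold the same set, not copies of it. The sketch below is illustrative only, with invented names; the agents are reduced to bare records.

```python
# A minimal illustration of environmental coupling: both agents hold a
# reference to the same set S, so a change to the environment is
# visible to both without any communication between them.

S = {"light_on"}                       # the single shared environment

agent_a = {"T": set(), "D": set(), "A": set(), "S": S}
agent_b = {"T": set(), "D": set(), "A": set(), "S": S}

S.add("door_open")                     # a change to the environment...
assert "door_open" in agent_a["S"]     # ...is seen by agent_a
assert "door_open" in agent_b["S"]     # ...and by agent_b
assert agent_a["S"] is agent_b["S"]    # one set, not two copies
```

The same construction applies to the other couplings in the taxonomy: replacing the shared `S` with a shared `T`, `D` or `A` gives perceptual coupling, distributed cognition, and so on.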
Fulbright and Stephens also discuss what they call cohesion, stating that the sets D, A and T must be "generated" by (that is, contained in or provided by) some computational entity ("some agent"), and that this provides a measure of interdependence in every multi-agent system.
However, Fulbright and Stephens discuss only intelligent agents - those that fit the "knowledge level agent" definition. Does their taxonomy extend to encompass reactive agents, or indeed non-agents?
The discussion of cohesion excludes the possibility that the function of perception can be shared without an agent existing in the system to provide that function. While this simplification certainly reduces the number of possible couplings, it is not clear that cohesion remains a useful part of the taxonomy, since it is easy to imagine a scenario in which the simplification does not apply. That is, an apparently useful and obvious relationship is excluded by the simplification, which indicates that the simplification is not justified.
For example, a reflexive agent (or rather, a non-deliberative one) could be defined in Fulbright and Stephens' terms as one in which a direct mapping from the set T to the set A occurs - that is, the set T of perceived environmental characteristics directly generates the set A of agent actions. It would seem that this set T could then be shared by deliberative agents of the type being classified.
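Such a reflexive agent is simple enough to sketch: the percept set T maps directly to the action set A, with no knowledge set D in between. The rule names below are invented for illustration and do not come from Fulbright and Stephens.

```python
# Hedged sketch of a reflexive (non-deliberative) agent: a direct
# T -> A mapping, with no set D and no inference step.

REFLEX_RULES = {
    "obstacle_ahead": "turn",
    "clear_path": "advance",
}

def reflexive_step(T):
    """Generate the action set A directly from the percept set T."""
    return {REFLEX_RULES[t] for t in T if t in REFLEX_RULES}
```

Here `reflexive_step({"obstacle_ahead"})` yields `{"turn"}` with no intervening knowledge or inference; sharing this agent's T with a deliberative agent would be the relationship the cohesion assumption excludes.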
Like Goodwin's formalism, this taxonomy could also be used to describe an inanimate object such as a brick, at least in functional terms. A brick has a degenerate set D, a degenerate set T, and a static set A.
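The brick example can be stated in the same terms. The sketch below is again purely illustrative: the sets D and T are empty, and the set A never varies with the input.

```python
# The brick in the taxonomy's terms: degenerate (empty) D and T,
# and a static A that is independent of anything perceived.

class Brick:
    D = frozenset()                          # degenerate set D: no knowledge

    def perceive(self, S):
        return frozenset()                   # degenerate set T: nothing distinguished

    def select(self, T):
        return frozenset({"exert_weight"})   # static set A, whatever T is
```

Formally the brick satisfies the model, which is precisely the objection: nothing in the functional definition alone rules it out.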
Taken together, the work of Goodwin and of Fulbright and Stephens brings home the point that agents cannot reasonably be investigated using a purely behaviouralist approach. If we take a behaviouralist approach, we are forced to admit unchanging, unchoosing objects as legitimate examples of agents. While formally correct, such an outcome is unsatisfying, counterintuitive and not useful. Any useful definition of agent must include a capacity for directed action and a capacity for choice.
Interestingly, Fulbright and Stephens specifically allow humans as agents within their taxonomy - they note that all biological agents are Type 1 agents. Given this note, we infer that the perceive function in their agent model also permits the agent to perceive itself, to perceive itself perceiving itself, and so on, and to apply the results of these perceptions to D and A. However, their model does not appear to extend to mapping D across D itself to generate further "facts about facts" upon which the agent can act. In short, I feel that their statement that "all biological agents are Type 1 agents" is defensible only in a very limited way, especially when applied to humans.