Karl's University Project



This paper began with a discussion of Goodwin's work. Goodwin presents a very clear description of some of the technical aspects of agency. The very clarity and precision of the description are in themselves a flaw - in particular, the environment in which agents operate is very weakly treated, omitting any discussion of the emergent properties of an environment containing other agents. Ancillary issues such as responsibility and obligation are not treated at all, and the matter of learning by agents is glossed over. In summary, Goodwin's formalism covers some of the technical issues, but does not really get to the core of agency.

The taxonomy presented by Fulbright and Stephens likewise offers only a very technical suggestion as to how agents might be classified, grouping them according to the ways in which they share certain functions. While interesting in its own right, their taxonomy offers no insight into the nature of agency; in fact, it rather begs the question.

Goodwin, Fulbright and Stephens all begin from a common premise - that they know what an agent is! In fact, most of the authors cited in this paper begin from a very similar premise, typically basing their definitions of an "agent" on very restricted qualities. Goodwin's is quite all-encompassing - "[entities] created to perform some task or set of tasks". Genesereth and Ketchpel's is far narrower - "software 'components' that communicate with their peers ... in an expressive agent communication language".

However, the definitions used tend to focus inwards, on some property of the mechanisms used by the agents to accomplish their tasks. The attributes that distinguish agents from other kinds of software are given as implied properties of those mechanisms.

This paper has looked at some of the key attributes of agency that others have presented as qualities of agents. Specifically, it has looked at independence, intelligence, communication, learning, mobility and representation of the user.

It was not the intention of this paper to provide rigorous definitions of any of these attributes; rather, the intention was to relate these attributes to agency in such a way as to arrive at a concept of agency that would be useful.

Most of these attributes are aspects of each other - we have seen how independence requires intelligence, how mobility implies independence, how learning and communication are related.

We have also seen that for any of these attributes there is no hard and fast measure - each attribute presents a continuum upon which any given entity may be positioned.
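The idea that each attribute is a continuum rather than a yes/no property can be sketched in code. The following Python fragment is purely illustrative - the attribute names come from this paper, but the scoring scheme and every function name here are invented for the sketch.

```python
# Illustrative sketch only: the paper's attributes treated as continua,
# scoring an entity between 0.0 and 1.0 on each, rather than as yes/no
# flags. The scoring scheme and all names below are hypothetical.

ATTRIBUTES = ("independence", "intelligence", "communication",
              "learning", "mobility", "representation")

def profile(**scores):
    """Position an entity on each attribute continuum (0.0 to 1.0)."""
    return {attr: min(1.0, max(0.0, scores.get(attr, 0.0)))
            for attr in ATTRIBUTES}

# A conventional program sits near the origin on every continuum;
# a mail-filtering assistant might sit well along several of them.
compiler = profile(intelligence=0.2)
mail_filter = profile(independence=0.7, intelligence=0.5,
                      communication=0.6, learning=0.4,
                      representation=0.9)

print(compiler["representation"])     # 0.0
print(mail_filter["representation"])  # 0.9
```

The point of the sketch is only that positioning replaces classification: no attribute is simply present or absent.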

One central theme which emerges when considering these attributes is choice. However large the set of environments may be in which an agent can operate, it is only independent if it has choices to make within those environments. Intelligence is the ability to deal with ambiguity, and thus is fundamentally the ability to make effective choices between alternatives. Learning is the ability to decide what is important, to abstract from experience and retain information which may be of use in later situations. Responsibility and obligation are meaningless terms in the absence of choice.

At first glance, choice does not seem to be a factor in communication or mobility. However, communication is largely a matter of translation. Except at the lowest, technical levels, communication is a matter of making sense of ambiguous "sensory" input and deciding on an appropriate meaning. Similarly, mobility is no more than transportability unless the entity concerned has choices about where to move and when.

Given that these entities, agents, have choices to make and the power to make them, the question arises "why?" What purpose informs the decisions made by such entities? The answer seems simple enough: "to carry out the tasks set for them by their creators".

This simple answer is deceptive. One agent may serve several masters; agents may be faulty; agents may have been given conflicting or mutually exclusive tasks; tasks may be transformed by circumstance. For all these reasons and precisely because they can make choices, it is inappropriate to expect agents to behave in the same simpleminded, mechanistic way that we expect of other forms of software.

If these choice-making, independent entities were moving about in a closed environment, the above issues would be of academic interest only. No matter what choices they made, their effect on the real world would be negligible. If, however, we allow such entities to do some of our work for us and to make some of our decisions for us, we are then delegating to these entities powers that hitherto only we ourselves have exercised. These entities are now representing us in the real world. The choices that they make are now our choices, informed by our needs.

Thus we end up needing words like "responsibility" and "obligation". Where no choice exists, responsibility and obligation are meaningless. We have looked at how these concepts apply in the context of agency.

Many of the above attributes would be - and are - useful in contexts other than agency. However, if we take independence and representation together, recognising that independence in particular implies attributes such as intelligence, we arrive at a measure of agency that does not conflict with most conceptions of agency found in the literature.

Of the two, representation is the chief distinguishing feature: an entity that does not represent any other entity cannot reasonably be said to be an agent. Useful, perhaps; sophisticated, perhaps - but there is nothing to be gained from calling it an agent.

In summary, independence and the extent to which a software entity represents its user are together the measure of an agent. By applying this combined criterion, we can draw a useful distinction between agents and other software entities which may share some agent-like properties.
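As a closing illustration, the combined criterion - independence plus representation, with representation as the gatekeeper - might be sketched as follows. The threshold, the multiplicative combination, and all names are invented here for illustration; they are not drawn from the paper.

```python
# Hypothetical sketch of the combined criterion: an entity counts as an
# agent only to the degree that it is both independent and representative
# of some user. The combination rule and threshold below are invented.

def degree_of_agency(independence: float, representation: float) -> float:
    """Representation gates agency: without it, independence counts for nothing."""
    if representation <= 0.0:
        return 0.0
    return independence * representation

def is_agent(independence: float, representation: float,
             threshold: float = 0.25) -> bool:
    return degree_of_agency(independence, representation) >= threshold

print(is_agent(0.9, 0.0))  # False: sophisticated, but represents nobody
print(is_agent(0.6, 0.8))  # True: moderately independent and representative
```

Note how the sketch mirrors the argument above: however high the independence score, a zero on representation yields no agency at all.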

Last modified 23 December 1995, 20:30
© Copyright 1995 Karl Auer