Agents

Representation of the User

Like "intelligence", representation is a slippery term.

At one level, something as simple as an email message "represents" the writer.

In the context of this discussion of agents, however, the term is restricted to mean representation of the user in terms of action rather than mere existence. This is the sense in which a travel consultant represents the holiday maker, a stockbroker represents buyers and sellers of stocks, or an ambassador represents his or her country. An email message is not an agent in terms of this discussion because it fulfils its sender's purpose by existing and being read in an entirely passive way.

Representation in these senses is part of what Wooldridge and Jennings (1995, p.5) call the "strong notion of agency". Wooldridge and Jennings do not themselves make this connection, but representation, if it is to be active rather than passive, must be seen as the extension of desires, beliefs and purposes from their originator into the representing entity. Otherwise, we would have to see a hammer as "representing" the person driving a nail.

For many researchers, representation is the key to agency (e.g. Maes, 1995a, 1995b). Others, such as Foner (1993), do not seem to require any representation at all in their view of agency. Foner describes the behaviour of a particular MUD [1] robot called Julia with a sort of paternal pride in "her" adventures in the artificial world of MUDs.

As noted above, for representation to be meaningful, the purposes of the entity being represented must be extended into the agent. The question "what is Julia's purpose?" neatly illustrates the distinction between the purposes of the agent itself and the purposes of the entity it represents.

For its creator, and for observers such as Foner who run Julia, Julia is an experiment. It was created to elicit interesting and amusing responses from real people in the MUD "she" inhabits, and also as an attempt to build an entity that could in some sense pass the Turing test. Julia itself has no purpose; that is, the specific actions it performs are irrelevant - only the kind of action is relevant, namely human-like communication sufficiently complex to fool human correspondents into believing that Julia is also human.

Except for some debugging interactions, Julia's communications with its author are exactly the same as its interactions with any other entity in the MUD. It is fair to say that Julia represents only by its existence, even though that existence is expressed in complex activity.

A simple way to distinguish active representation from passive representation such as Julia's is to perform the thought experiment of removing the author or interested observer from the environment inhabited by the agent. Is the behaviour of the agent modified at all by this removal? If not, the author or observer is not relevant to the agent, and vice versa. This has some interesting implications for the discussion - which will not be entered into here - of whether or not an agent's purpose must necessarily be a human purpose.

Passive representation is not particularly interesting from an agency point of view, however interesting the behaviour of such software may be in itself. As noted above, an agent capable of only passive representation is no more than a (possibly very complex) hammer. For this reason, it seems that some aspect of active representation is crucial to the notion of agency [2].

If we take the idea of representation as crucial to agency, we have a useful criterion to apply to several other possible agents. For example, is a computer virus an agent? The usual variety of computer virus is no more representative of its author than a falling brick is representative of the person who drops it. What about a beneficial virus, or a virus that reports back to its author?

Given the discussion of "depth" in a previous chapter, we must be cautious when drawing lines using this single criterion. Like the other attributes mentioned here, representation is a continuum.

The idea of representation, and the degree to which any particular agent represents a human user, changes the ways in which matters such as responsibility, obligation and trust are dealt with. This area will be explored in the next chapter.


[1] MUD is an acronym for "multi-user dungeon", a text-based virtual reality. The virtual reality is maintained in one or more computers; people then interact within that environment by sending and receiving text messages. MUD robots are software entities that send and receive these messages, thus interacting within the MUD entirely on equal terms with the human participants.

[2] Perhaps this distinction is also useful for drawing a line between agents (machine intelligence applied to human purposes) and pure artificial intelligence (machine intelligence with its own purposes). However, it may be premature to regard any extant machine intelligences as capable of having their own purposes.

