Agents

The issue of "Depth"

Any system can be seen as a set of nested or peer subsystems, each subsystem presenting an interface "outwards" to its peers or the system containing it and "inwards" to the subsystems it contains. Each layer of software abstracts the layer below to some extent.

Looking at agents, we see the same thing - layers of activity showing an ever-decreasing level of abstraction until we strike the "metal" of the task at hand: print a file, open a document, whatever.

At one extreme, everything is an "agent". A subroutine designed to add two numbers together could be seen as, in some sense, existing in an environment (the code and data space of an executing program), reacting to context (the different numbers passed as parameters), behaving with intelligence (solving a problem), representing the user (who wanted the two numbers added and delegated the task to the subroutine) and so on. Some researchers are happy to have agents defined more or less thus - Genesereth and Ketchpel (1994) define agents as "software 'components' that communicate with their peers by exchanging messages".
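To make the point concrete, here is a sketch of just such a subroutine (in Python, and purely illustrative - the names are invented for this example), with each agent-like property noted in the comments:

    # A deliberately trivial "agent": a subroutine that adds two numbers.
    def add(a, b):
        # Exists in an environment: the code and data space of the program.
        # Reacts to context: whatever values a and b happen to be.
        # "Behaves with intelligence": solves the problem of addition.
        return a + b

    # Represents the user: the caller wanted 2 and 3 added,
    # and delegated the task to the subroutine.
    result = add(2, 3)

The point, of course, is not that add() is intelligent, but that the agent vocabulary can be stretched to fit even code this trivial.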

At the component level, agents are little more than a programming technique or design paradigm. The parallels with object-oriented techniques and paradigms are unavoidable (ibid.).
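A minimal sketch of that component-level view (again in Python, purely illustrative) shows how close it sits to ordinary object-oriented message passing:

    # Two "agents" that communicate with their peers by exchanging
    # messages - structurally indistinguishable from objects invoking
    # one another's methods.
    class Agent:
        def __init__(self, name):
            self.name = name

        def receive(self, sender, message):
            print(f"{self.name} received {message!r} from {sender.name}")

        def send(self, peer, message):
            # "Sending a message" is just a method call on the peer.
            peer.receive(self, message)

    a = Agent("a")
    b = Agent("b")
    a.send(b, "hello")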

At the other extreme, we have fully functional artificial intelligence, coping with arbitrarily complex situations with flair, learning the needs and wants and peccadillos of its user and generally being an electronic Jeeves. This is the kind of agent that researchers such as Maes (1995a, 1995b) are working towards.

As mentioned in the introduction, as soon as we understand the principles of a particular process completely we tend to dismiss it as uninteresting. When we understand a mechanism we tend to describe it - reasonably enough - as mechanistic, and "therefore" undeserving of labels such as "intelligent". The more complex a system is, the more likely it is that the mechanism will be hidden - not by design necessarily, but in a range of behaviours too wide for the principles to be easily expressed in mechanistic terms.

The modern automobile, for example, is a fantastically complex machine by most people's standards, but its principles and behaviours remain fairly straightforward - the conversion of the force of a reciprocating piston into the rotation of wheels, for the purpose of pushing a large lump of metal along the ground. Whatever more complicated systems the car contains are subordinate to this basic operation. The mechanical nature of a car is further emphasised by the simple ways in which it responds to its driver and its environment.

If we could get into a car and ask it to drive us somewhere, leave it to choose its own route, and rely upon it to obey the road rules, deal with unexpected situations posed by other drivers and so forth, we would be far more disposed to use words like "intelligent" when describing the vehicle. We would without hesitation say that it "obeys the road rules", allow that it "chose" such-and-such a route and so on.

To take another example, a word often found in the literature on agents is "delegate", that is, to transfer responsibility for an activity or goal to someone else (Maes (1995b) calls agents "digital proxies"). This seems an appropriate term for software that is set in motion to perform some sophisticated task automatically on our behalf, but would we apply the term to, say, a hammer?

In discussing this kind of language as applied to agents, Wooldridge and Jennings (1995) sum up by calling an agent an example of "a system that is most conveniently described by the intentional stance; one whose simplest consistent description requires the intentional stance". In other words, intentional notions serve as useful abstraction tools.

In the literature, agents are often characterised as operating within complex environments [1]. I believe this is due to the above phenomenon - if agents operated in a simple environment or acted in simple ways, they would not be "agents", they would be "just software".

The various properties that make an agent an agent will fall, for any given entity, somewhere on a continuum. The question "is this an agent?" does not really have an answer, but it is still valid to talk about the kind of attributes that entities must have in some measure before they can be considered agents at all.

However, it seems that complexity itself is a fundamental attribute of agents; that is, when the term "agent" is used, it is used to mean software that has a complex reaction to its environment and carries out complex tasks. The degree of complexity must remain a moot point, but Wooldridge and Jennings seem to present a practical measure.


[1] For example, Pattie Maes describes autonomous agents as "...computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realise a set of goals or tasks that they are designed for." (Maes, 1995a)

