Karl's University Project

Agents

Intelligence

"Intelligence", at least in purely functional terms, is that set of facilities, attributes or features which enables an agent to decide what actions to take. Thus a measure of intelligence is a kind of modified Turing test - how well, objectively speaking, does the agent fulfil the tasks it is set (cf. Goodwin's "utility").

Obviously this rather holistic approach has its limitations, given that, as Goodwin points out (1993, pp. 44-47), success must also relate to whether the task that has been set is reasonably achievable with the tools at the agent's disposal.

By and large, the methods used to endow agents with intelligence have thus far fallen into two categories, reflexive and deliberative (Goodwin's terms). Deliberative agents have also been termed "symbolic", and non-symbolic agents "reactive" (note that this use of "reactive" should not be confused with Goodwin's entirely different use of the term).

A deliberative agent contains some kind of model of the world, possibly including itself. This model is to some extent "built in", but its state is then modified by the agent in response to new information about the world that is received via the agent's sensors.

By interpreting this model, the agent predicts what actions will be needed to achieve some goal, then carries out the actions it expects will actually achieve it.
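To make the idea concrete, here is a minimal sketch in Python of the cycle such an agent might follow. The names (update, plan and so on) and the trivial planning step are invented purely for illustration; nothing here is drawn from Goodwin or from any particular system.

    class SimpleDeliberativeAgent:
        # A toy deliberative agent: it holds a model of the world, modifies
        # the model's state from sensor input, and interprets the model to
        # decide which actions should achieve a goal.

        def __init__(self, initial_model):
            # The "built in" model of the world (and possibly of the agent itself).
            self.model = dict(initial_model)

        def update(self, percept):
            # New information received via the sensors modifies the state
            # of the model, but not its structure.
            self.model.update(percept)

        def plan(self, goal):
            # Interpret the model: propose one action for each part of the
            # goal that the model says is not yet satisfied.
            return [("achieve", key, value)
                    for key, value in goal.items()
                    if self.model.get(key) != value]

        def act(self, percept, goal, execute):
            self.update(percept)
            for action in self.plan(goal):
                execute(action)   # carry out the chosen actions in the world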

Goodwin makes a further distinction between simple deliberative agents and complex deliberative agents - while the former modify the state of their world model according to inputs, the latter are able to modify the model itself, thus effectively learning.
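Continuing the purely illustrative sketch above, the distinction might be caricatured as follows: a complex deliberative agent can revise the rules that make up its model, not merely the model's state.

    class ComplexDeliberativeAgent(SimpleDeliberativeAgent):
        # A toy "complex" deliberative agent: besides updating the state of
        # its model, it can modify the model itself by recording new rules
        # when its predictions turn out to be wrong.

        def __init__(self, initial_model):
            super().__init__(initial_model)
            self.rules = []   # the modifiable part of the model's structure

        def learn(self, action, predicted, observed):
            # If the world did not behave as the model predicted, record a
            # rule so that future planning reflects what was observed.
            if predicted != observed:
                self.rules.append((action, observed))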

Reflexive agents do not model the world in order to determine their actions. This kind of agent can be thought of as containing a table mapping situations to responses - when a particular environmental state occurs, the agent carries out the corresponding action. Goodwin calls this "stimulus/response" behaviour.
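Such a table can be sketched in a few lines; the stimuli and actions below are invented purely for illustration.

    # A reflexive agent reduced to its essentials: a table mapping each
    # recognised environmental state directly to an action, with no world
    # model and no deliberation in between.
    RESPONSES = {
        "obstacle_ahead": "turn_left",
        "path_clear": "move_forward",
        "at_goal": "stop",
    }

    def reflexive_act(stimulus):
        # A stimulus outside the table simply gets a default response.
        return RESPONSES.get(stimulus, "do_nothing")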

The process by which a deliberative agent moves from model to action, and in particular the process by which the agent determines which of a range of actions is appropriate and selects from among them, is known in the artificial intelligence community by the general term "planning". Planning is an entire area of research in itself, so I do not propose to discuss the various approaches to it here.

However, as Wooldridge and Jennings (1995) point out, there are several problems with the deliberative model of agent intelligence. The kinds of problems posed by the symbolic approach tend to be very difficult to solve in theory, let alone in practice, with some of the underlying logics being formally undecidable, or at least not decidable within predictable or reasonable time-frames.

This has led to more attention being paid to the reflexive variety of agent, and to hybrid approaches. Brooks contends that intelligence is an emergent property of complex systems. In a very engaging paper (Brooks, 1986) he argues that any attempt to directly model a real, dynamic and infinitely complex world using a static package of abstracted facts is doomed to failure. He also points out that the act of abstraction is really the difficult part, yet it is precisely this part of the job that human researchers perform on behalf of their creations! Brooks has demonstrated several systems in which many very simple, very fast and above all non-symbolic agents operate within a larger entity to produce sophisticated behaviour in that entity.
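The flavour of such systems can be loosely suggested by a set of simple behaviours arbitrated by priority, with higher layers overriding (subsuming) lower ones. The sketch below is an invented illustration of that flavour only, not a reconstruction of Brooks' subsumption architecture.

    # Each behaviour is simple, fast and non-symbolic: given raw sensor
    # readings it either proposes an action or stays silent (None).
    def avoid(sensors):
        return "turn_away" if sensors.get("bumper_pressed") else None

    def wander(sensors):
        return "move_randomly"

    # Higher-priority behaviours subsume lower ones; the apparently
    # sophisticated behaviour of the whole emerges from their interaction.
    LAYERS = [avoid, wander]   # highest priority first

    def choose_action(sensors):
        for behaviour in LAYERS:
            action = behaviour(sensors)
            if action is not None:
                return action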

The topic of ambiguity arises time and time again in the literature, directly and indirectly. Etzioni and Weld's softbot (Etzioni & Weld, 1994), for example, was specifically designed to allow a human user to specify only a minimum of information, with the softbot resolving ambiguities using other information available to it. Cohen and Cheyer (1994) call this a characteristic of delegation.
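As a wholly invented illustration of the idea, an agent handed an under-specified request might fill in the missing details from other information already available to it, in the spirit of Etzioni and Weld's softbot (the field names and values below are not theirs).

    # Disambiguation by default: fields the user left unspecified are
    # filled in from knowledge the agent already holds.
    KNOWN_CONTEXT = {"printer": "lab_printer", "format": "postscript"}

    def disambiguate(request):
        resolved = dict(request)
        for field, value in KNOWN_CONTEXT.items():
            resolved.setdefault(field, value)
        return resolved

    # disambiguate({"task": "print", "file": "draft.ps"}) fills in the
    # printer and format that the user did not bother to specify.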

Ambiguity occurs at many levels through the process of moving from the expression of a task towards action to carry out the task. Etzioni and Weld concentrate on "disambiguation" of the request from the user, but ambiguity persists more deeply. Any deliberative agent is at some level resolving ambiguity at any point where it chooses one action over another. Even Brooks' reflexive robots are effectively resolving ambiguity as their conflicting subsystems arrive at one action rather than another.

Given the techniques used in reflexive agents, considering intelligence purely as a property of decision-making seems inappropriate. A more comprehensive and ultimately more useful statement would be as follows:


Last modified 16 December 1995, 23:45
© Copyright 1995 Karl Auer